
Deploying a Kubernetes 1.13.1 Cluster on CentOS from Binaries

Component Versions && Cluster Environment

Component versions:

  • Kubernetes 1.13.1
  • Etcd 3.3.10
  • Flanneld 0.10

Deployment nodes:

IP              Hostname
192.168.20.203  master
192.168.20.202  host2
192.168.20.201  host1

Cluster environment variables:

# Use otherwise-unused network ranges for the service and Pod networks
# Service network (Service CIDR): unroutable before deployment; reachable inside the cluster via IP:Port afterwards
SERVICE_CIDR="10.254.0.0/16"

# Pod network (Cluster CIDR): unroutable before deployment; routable afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.18.0.0/16"

# IP of the kubernetes service (pre-allocated; usually the first IP in SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="10.254.0.2"

# flanneld network configuration prefix in etcd
FLANNEL_ETCD_PREFIX="/kubernetes/network"

Initialize the Environment

1. Stop and disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
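
The SELinux change can also be scripted instead of edited by hand; a one-liner equivalent to the manual edit above (assuming the stock key=value format of /etc/selinux/config):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config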

2. Disable swap

swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap                    swap    defaults        0 0
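
Commenting out the swap entry can likewise be scripted; this sed is a sketch that comments any uncommented /etc/fstab line containing a swap field:

sed -ri 's/^([^#].*\sswap\s.*)/#\1/' /etc/fstab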

3. Set kernel parameters: enable IP forwarding and have iptables process bridged traffic

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
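
On a fresh CentOS 7 install, the bridge-nf-call-* keys only exist once the br_netfilter module is loaded, so the sysctl -p step may fail with "No such file or directory". If that happens, load the module, make it persistent, and rerun:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf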

4. Create the installation directories

mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

5. Install Docker

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
systemctl start docker && systemctl enable docker

6. Set up SSH key authentication

ssh-keygen 
ssh-copy-id 192.168.20.201
ssh-copy-id 192.168.20.202

Create the CA Certificate and Key

The Kubernetes components use TLS certificates to encrypt their communication. Here we use CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) certificate and key files. The CA certificate is self-signed and is used to sign all other TLS certificates created later.

1. Install cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2. Create the CA

# cat ca-config.json
{
	"signing": {
		"default": {
			"expiry": "87600h"
		},
		"profiles": {
			"kubernetes": {
				"expiry": "87600h",
				"usages": [
					"signing",
					"key encipherment",
					"server auth",
					"client auth"
				]
			}
		}
	}
}	
  • ca-config.json: multiple profiles can be defined, each with its own expiry time, usage scenarios, and other parameters; a specific profile is selected later when signing a certificate;
  • signing: indicates the certificate can be used to sign other certificates; CA=TRUE is set in the generated ca.pem;
  • server auth: indicates a client may use this CA to verify certificates presented by servers;
  • client auth: indicates a server may use this CA to verify certificates presented by clients.

Create the CA certificate signing request:

# cat ca-csr.json
{
	"CN": "kubernetes",
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [
		{
			"C": "CN",
			"L": "BeiJing",
			"ST": "BeiJing",
			"O": "k8s",
			"OU": "System"
		}
	]
}	

  • CN: Common Name. kube-apiserver extracts this field from the certificate as the requesting user name (User Name); browsers use this field to check whether a website is legitimate;
  • O: Organization. kube-apiserver extracts this field from the certificate as the group (Group) the requesting user belongs to;

Generate the CA certificate and private key:

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# ls ca*
# ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem	
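
You can inspect the generated CA with the cfssl-certinfo tool installed earlier, to confirm it is self-signed and carries the expected subject fields:

# cfssl-certinfo -cert ca.pem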

3. Distribute the certificates
Copy the generated CA certificate, key file, and config file to the /k8s/kubernetes/ssl/ directory on all machines:

cp ca* /k8s/kubernetes/ssl/
scp /k8s/kubernetes/ssl/* 192.168.20.202:/k8s/kubernetes/ssl/
scp /k8s/kubernetes/ssl/* 192.168.20.201:/k8s/kubernetes/ssl/
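
If the scp above fails because the target directory does not exist yet (step 4 only created the directories where you ran it), create it on the other machines first and rerun:

ssh 192.168.20.202 "mkdir -p /k8s/kubernetes/ssl /k8s/etcd/ssl"
ssh 192.168.20.201 "mkdir -p /k8s/kubernetes/ssl /k8s/etcd/ssl"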

Deploy a Highly Available etcd Cluster

Kubernetes stores all of its data in etcd. Here we deploy a 3-node etcd cluster, reusing the three Kubernetes nodes, named etcd01, etcd02, and etcd03:

  • 192.168.20.203 etcd01
  • 192.168.20.202 etcd02
  • 192.168.20.201 etcd03

1. Unpack the installation files
Download: https://github.com/etcd-io/etcd/releases

tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/

Create the etcd configuration file (the master node, etcd01, is shown):

vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.203:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.203:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.203:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.203:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.203:2380,etcd02=https://192.168.20.202:2380,etcd03=https://192.168.20.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

2. Create the TLS key and certificate
To secure communication, traffic between clients (such as etcdctl) and the etcd cluster, and between etcd members, is encrypted with TLS.
Create the etcd certificate signing request:

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
	"192.168.20.203",
	"192.168.20.202",
	"192.168.20.201"
  ],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF
  • The hosts field lists the etcd node IPs authorized to use this certificate

Generate the etcd certificate and private key:

# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
  -ca-key=/k8s/kubernetes/ssl/ca-key.pem \
  -config=/k8s/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

# ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

# cp etcd* /k8s/etcd/ssl/

3. Create the etcd systemd unit file

vim /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/etcd.pem \
--key-file=/k8s/etcd/ssl/etcd-key.pem \
--peer-cert-file=/k8s/etcd/ssl/etcd.pem \
--peer-key-file=/k8s/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/k8s/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • To secure communication, we specify etcd's own certificate and key (cert-file and key-file), the certificate, key, and CA used for member-to-member traffic (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file);

4. Copy the unit file and configuration to node 1 and node 2:

cd /k8s/ 
scp -r etcd/ 192.168.20.201:/k8s/
scp -r etcd/ 192.168.20.202:/k8s/

scp /lib/systemd/system/etcd.service 192.168.20.201:/lib/systemd/system/etcd.service
scp /lib/systemd/system/etcd.service 192.168.20.202:/lib/systemd/system/etcd.service

Edit the cfg/etcd file on each node accordingly:

[root@host1 ~]# cat /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.201:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.201:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.203:2380,etcd02=https://192.168.20.202:2380,etcd03=https://192.168.20.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@host2 ~]# cat /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.20.202:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.202:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.202:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.202:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.20.203:2380,etcd02=https://192.168.20.202:2380,etcd03=https://192.168.20.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
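
Instead of editing the two files by hand, the per-node fields can be rewritten with sed; a sketch (run on each node with its own name and IP, etcd03/192.168.20.201 shown) that leaves ETCD_INITIAL_CLUSTER untouched:

NODE_NAME=etcd03
NODE_IP=192.168.20.201
sed -i \
  -e "s|^ETCD_NAME=.*|ETCD_NAME=\"${NODE_NAME}\"|" \
  -e "s|^ETCD_LISTEN_PEER_URLS=.*|ETCD_LISTEN_PEER_URLS=\"https://${NODE_IP}:2380\"|" \
  -e "s|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS=\"https://${NODE_IP}:2379\"|" \
  -e "s|^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*|ETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://${NODE_IP}:2380\"|" \
  -e "s|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS=\"https://${NODE_IP}:2379\"|" \
  /k8s/etcd/cfg/etcd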

5. Start the etcd service
Run the following on all three nodes (the first member started will wait until a quorum of peers joins):

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

6. Verify the service
After the etcd cluster is deployed, run the following command on any etcd node:

# /k8s/etcd/bin/etcdctl \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/etcd/ssl/etcd.pem \
	--key-file=/k8s/etcd/ssl/etcd-key.pem \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	cluster-health

The output should look like:

member 2e4d105025f61a1b is healthy: got healthy result from https://192.168.20.202:2379
member 8ad9da8a203d86d8 is healthy: got healthy result from https://192.168.20.203:2379
member c1b34b5ace31a23f is healthy: got healthy result from https://192.168.20.201:2379
cluster is healthy

As shown above, etcd on all three nodes reports healthy, which means the cluster is working properly.
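
You can also list the cluster membership with the same etcdctl flags:

# /k8s/etcd/bin/etcdctl \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/etcd/ssl/etcd.pem \
	--key-file=/k8s/etcd/ssl/etcd-key.pem \
	--endpoints="https://192.168.20.203:2379" \
	member list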

Deploy the Flannel Network

Kubernetes requires that Pods on different nodes in the cluster can reach one another over the Pod network. Below we use Flannel to build that interconnected Pod network on all nodes.
1. Create the TLS key and certificate
The etcd cluster has mutual TLS authentication enabled, so flanneld must be given the CA and a CA-signed key pair for communicating with the etcd cluster.
Create the flanneld certificate signing request:

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF

Generate the flanneld certificate and private key:

# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
	  -ca-key=/k8s/kubernetes/ssl/ca-key.pem \
	  -config=/k8s/kubernetes/ssl/ca-config.json \
	  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld	

# ls flanneld*
flanneld.csr  flanneld-csr.json  flanneld-key.pem  flanneld.pem	

# mkdir -p /k8s/flanneld/ssl
# cp flanneld*.pem /k8s/flanneld/ssl/

2. Write the cluster Pod network configuration into etcd
This step is only required the first time the Flannel network is deployed; it does not need to be repeated when deploying flanneld on other nodes.

# /k8s/etcd/bin/etcdctl \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/flanneld/ssl/flanneld.pem \
	--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
	set /kubernetes/network/config  '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'

Output:

{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}

The Pod network written here (${CLUSTER_CIDR}, 172.18.0.0/16) must match the value of the kube-controller-manager --cluster-cidr flag;

3. Install and configure flanneld
Download: https://github.com/coreos/flannel/releases

tar xf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/

Create the flanneld systemd unit file:

# cat /lib/systemd/system/flanneld.service    
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
#EnvironmentFile=/k8s/kubernetes/cfg/flanneld
#ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStart=/k8s/kubernetes/bin/flanneld --etcd-cafile=/k8s/kubernetes/ssl/ca.pem --etcd-certfile=/k8s/flanneld/ssl/flanneld.pem --etcd-keyfile=/k8s/flanneld/ssl/flanneld-key.pem --etcd-endpoints=https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379 --etcd-prefix=/kubernetes/network
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
  • The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/docker; when Docker starts later, it reads the parameters in this file to configure the docker0 bridge
  • flanneld communicates with other nodes over the interface of the system default route; on machines with multiple interfaces (e.g. internal and public), the --iface flag selects the interface to use (the systemd unit file above does not set it)

Configure Docker to start with the Flannel-assigned subnet

# cat /lib/systemd/system/docker.service    
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

4. Start flanneld

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl restart docker
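
Once flanneld is up, mk-docker-opts.sh should have written the docker0 bridge options. The contents of /run/flannel/docker look roughly like the following (the subnet value depends on what your node was assigned; shown for illustration only):

# cat /run/flannel/docker
DOCKER_NETWORK_OPTIONS=" --bip=172.18.100.1/24 --ip-masq=false --mtu=1450"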

5. Check the flanneld service and the Pod subnet information assigned to each flanneld instance

ifconfig flannel.1

# View the cluster Pod network (/16)
# /k8s/etcd/bin/etcdctl \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/flanneld/ssl/flanneld.pem \
	--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
	get /kubernetes/network/config
{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}

# List the allocated Pod subnets (/24)
# /k8s/etcd/bin/etcdctl \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/flanneld/ssl/flanneld.pem \
	--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
	ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.18.100.0-24

# View the IP and network parameters of the flanneld process serving a given Pod subnet
# /k8s/etcd/bin/etcdctl \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/flanneld/ssl/flanneld.pem \
	--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
	get /kubernetes/network/subnets/172.18.100.0-24
{"PublicIP":"192.168.20.203","BackendType":"vxlan","BackendData":{"VtepMAC":"4e:9b:aa:9a:ce:ac"}}

6. Copy the files to the other nodes

scp -r /k8s/kubernetes/bin/* 192.168.20.201:/k8s/kubernetes/bin/
scp -r /k8s/kubernetes/bin/* 192.168.20.202:/k8s/kubernetes/bin/
ssh 192.168.20.201 "mkdir -p /k8s/flanneld/ssl"
ssh 192.168.20.202 "mkdir -p /k8s/flanneld/ssl"
scp -r /k8s/flanneld/ssl/* 192.168.20.201:/k8s/flanneld/ssl/
scp -r /k8s/flanneld/ssl/* 192.168.20.202:/k8s/flanneld/ssl/
scp /lib/systemd/system/flanneld.service 192.168.20.201:/lib/systemd/system/flanneld.service
scp /lib/systemd/system/flanneld.service 192.168.20.202:/lib/systemd/system/flanneld.service
scp /lib/systemd/system/docker.service 192.168.20.201:/lib/systemd/system/docker.service 
scp /lib/systemd/system/docker.service 192.168.20.202:/lib/systemd/system/docker.service 

7. Make sure the Pod subnets on all nodes can reach each other
After flanneld has been deployed on every node, list the allocated Pod subnets (a cross-node connectivity check follows below):

# /k8s/etcd/bin/etcdctl \
	--endpoints="https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379" \
	--ca-file=/k8s/kubernetes/ssl/ca.pem \
	--cert-file=/k8s/flanneld/ssl/flanneld.pem \
	--key-file=/k8s/flanneld/ssl/flanneld-key.pem \
	ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.18.88.0-24
/kubernetes/network/subnets/172.18.85.0-24
/kubernetes/network/subnets/172.18.100.0-24
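
To confirm cross-node connectivity, ping another node's flannel.1 address from each node. This assumes the default vxlan addressing, where flannel.1 holds the .0 address of the node's /24 and docker0 holds the .1:

# e.g. from the node that owns 172.18.100.0/24:
ping -c 2 172.18.88.0
ping -c 2 172.18.85.0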

Deploy the Master Node

The Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
    kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one active process while the others stay blocked.

Download and unpack the binaries

# wget https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
# tar xf kubernetes-server-linux-amd64.tar.gz

# cd kubernetes/server/bin/
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /k8s/kubernetes/bin/

Create the kubernetes certificate

Create the kubernetes certificate signing request:

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
	"127.0.0.1",
	"192.168.20.203",
	"k8s-api.virtual.local",
	"10.254.0.1",
	"kubernetes",
	"kubernetes.default",
	"kubernetes.default.svc",
	"kubernetes.default.svc.cluster",
	"kubernetes.default.svc.cluster.local"
  ],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF	
  • If the hosts field is non-empty, it must list the IPs and domain names authorized to use the certificate, so above we include the IP of the master node being deployed and the internal domain name fronting the apiserver
  • It must also include the cluster IP of the kubernetes service registered by kube-apiserver (the Service Cluster IP), normally the first IP of the range given by the kube-apiserver --service-cluster-ip-range flag, e.g. 10.254.0.1

Generate the kubernetes certificate and private key:

# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
  -ca-key=/k8s/kubernetes/ssl/ca-key.pem \
  -config=/k8s/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

# ls kub*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

# cp kubernetes*.pem /k8s/kubernetes/ssl/

Configure and start kube-apiserver

1. Create the client token file used by kube-apiserver:
When a kubelet first starts, it sends a TLS bootstrapping request to kube-apiserver, which checks whether the token in the request matches the one in its configured token.csv; if it matches, kube-apiserver automatically issues a certificate and key for that kubelet.
The TLS bootstrapping token can be generated with head -c 16 /dev/urandom | od -An -t x | tr -d ' ', as shown below.

# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9d3d0413211c8d92ed1b33a913154ce5

# cat /k8s/kubernetes/cfg/token.csv
9d3d0413211c8d92ed1b33a913154ce5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"	
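
Token generation and file creation can be combined into a short script; a sketch (the token value will of course differ from the one shown above):

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /k8s/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF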

2. Create the apiserver configuration file

# cat /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.20.203:2379,https://192.168.20.202:2379,https://192.168.20.201:2379 \
--bind-address=192.168.20.203 \
--secure-port=6443 \
--advertise-address=192.168.20.203 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/kubernetes.pem  \
--tls-private-key-file=/k8s/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/kubernetes/ssl/ca.pem \
--etcd-certfile=/k8s/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/k8s/kubernetes/ssl/kubernetes-key.pem"	

3. Create the kube-apiserver systemd unit file

# cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	

4. Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
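
To confirm the apiserver is actually serving, check that the secure port is listening and scan the recent logs (ss is part of the iproute package on CentOS 7):

ss -tlnp | grep 6443
journalctl -u kube-apiserver --no-pager | tail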

Configure and start kube-controller-manager

1. Create the kube-controller-manager configuration file

# cat /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-cidr=172.18.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"	

2. Create the kube-controller-manager systemd unit file

# cat /lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	

3. Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager

Configure and start kube-scheduler

1. Create the kube-scheduler configuration file

# cat /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true"

2. Create the kube-scheduler systemd unit file

# cat /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	

3. Start the service

systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl restart kube-scheduler.service
systemctl status kube-scheduler.service

Verify the Master Node

Add the binaries to the PATH variable:

# echo "export PATH=$PATH:/k8s/kubernetes/bin/" >>/etc/profile
# source /etc/profile

Verify:

# kubectl get cs 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"} 	

Deploy the Worker Nodes

A Kubernetes worker (Node) runs the following components:

  • kubelet
  • kube-proxy

Install and configure kubelet

When kubelet starts, it automatically registers the node with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage.
kubelet runs on the worker nodes: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs. This step is therefore performed on all worker nodes; if you want the master to double as a worker node, install kubelet there as well.

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be granted the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests (certificatesigningrequests):

# kubectl create clusterrolebinding kubelet-bootstrap \
   --clusterrole=system:node-bootstrapper \
   --user=kubelet-bootstrap	

Copy the kubelet and kube-proxy binaries to the worker nodes:

cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.20.201:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.20.202:/k8s/kubernetes/bin/

Create the kubelet bootstrap kubeconfig file

# cat environment.sh 
BOOTSTRAP_TOKEN=9d3d0413211c8d92ed1b33a913154ce5
KUBE_APISERVER="https://192.168.20.203:6443"	

# source environment.sh

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig	
  • --embed-certs=true embeds the certificate-authority certificate into the generated bootstrap.kubeconfig file;
  • No key or certificate is specified when setting the kubelet client credentials; they are generated later by kube-apiserver;

Copy the bootstrap kubeconfig file to all nodes:

cp bootstrap.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig 192.168.20.201:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig 192.168.20.202:/k8s/kubernetes/cfg/	

Create the kubelet parameter configuration file and copy it to all nodes
Create the kubelet configuration template file:

vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.20.203
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
	enabled: true

Create the kubelet options file:

vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.20.203 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Create the kubelet systemd unit file:

vim /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target	

Copy the files:

scp /k8s/kubernetes/cfg/kubelet* 192.168.20.201:/k8s/kubernetes/cfg/
scp /k8s/kubernetes/cfg/kubelet* 192.168.20.202:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kubelet.service 192.168.20.201:/lib/systemd/system/kubelet.service 
scp /lib/systemd/system/kubelet.service 192.168.20.202:/lib/systemd/system/kubelet.service 

On the other nodes, change the address and hostname-override values to each node's own IP.
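
A sed sketch for those edits (run on 192.168.20.201 here; substitute each node's own IP):

NODE_IP=192.168.20.201
sed -i "s|address: 192.168.20.203|address: ${NODE_IP}|" /k8s/kubernetes/cfg/kubelet.config
sed -i "s|--hostname-override=192.168.20.203|--hostname-override=${NODE_IP}|" /k8s/kubernetes/cfg/kubelet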

Start kubelet:

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet

Approve kubelet's TLS certificate requests
When kubelet first starts, it sends a certificate signing request to kube-apiserver; Kubernetes adds the Node to the cluster only after the request has been approved.
List the pending (unapproved) CSR requests:

# kubectl get csr  
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   2m37s   kubelet-bootstrap   Pending

# kubectl get nodes
No resources found.	

Approve the CSR request:

# kubectl certificate approve node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I
certificatesigningrequest.certificates.k8s.io/node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I approved
 
# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   4m9s   kubelet-bootstrap   Approved,Issued
# kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.20.203   Ready    <none>   13s   v1.13.1	

After the remaining two nodes start, approve their CSR requests as well:

# kubectl get csr  
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo   3m22s   kubelet-bootstrap   Pending
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   12m     kubelet-bootstrap   Approved,Issued
node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U   3m35s   kubelet-bootstrap   Pending
 
# kubectl certificate approve node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo
certificatesigningrequest.certificates.k8s.io/node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo approved

# kubectl certificate approve node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U
certificatesigningrequest.certificates.k8s.io/node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U approved

# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-dSfuVez-9sY1oN2sxtzzCpv3x_cIHx5OpKbsKyxEqPo   4m40s   kubelet-bootstrap   Approved,Issued
node-csr-qvmCIp_hLDYhzVqHc3ZCOiN0VmE7wpeIR96ERSLEp6I   14m     kubelet-bootstrap   Approved,Issued
node-csr-s8N57qF1-1kzZi8ECVYBOvbX1Hdc7CA5oW0oVsbJa_U   4m53s   kubelet-bootstrap   Approved,Issued

# kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.20.201   Ready    <none>   9s    v1.13.1
192.168.20.202   Ready    <none>   22s   v1.13.1
192.168.20.203   Ready    <none>   10m   v1.13.1	
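
When several CSRs are pending, they can be approved in one pass; use this only during a trusted bootstrap window, since it approves everything indiscriminately:

kubectl get csr -o name | xargs -n 1 kubectl certificate approve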

Configure kube-proxy

kube-proxy runs on all worker nodes. It watches the apiserver for changes to services and endpoints and creates routing rules to load-balance service traffic.
Create the kube-proxy certificate signing request:

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
	"algo": "rsa",
	"size": 2048
  },
  "names": [
	{
	  "C": "CN",
	  "ST": "BeiJing",
	  "L": "BeiJing",
	  "O": "k8s",
	  "OU": "System"
	}
  ]
}
EOF	
  • The RoleBinding system:node-proxier predefined by kube-apiserver binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's proxy-related APIs

Generate the kube-proxy client certificate and private key:

# cfssl gencert -ca=/k8s/kubernetes/ssl/ca.pem \
	  -ca-key=/k8s/kubernetes/ssl/ca-key.pem \
	  -config=/k8s/kubernetes/ssl/ca-config.json \
	  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy	

# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

# cp kube-proxy*.pem /k8s/kubernetes/ssl/
# scp kube-proxy*.pem 192.168.20.201:/k8s/kubernetes/ssl/
# scp kube-proxy*.pem 192.168.20.202:/k8s/kubernetes/ssl/

Create the kube-proxy kubeconfig file

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kube-proxy \
  --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig	

# Copy the kube-proxy kubeconfig file to all nodes
cp kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.20.201:/k8s/kubernetes/cfg/
scp kube-proxy.kubeconfig 192.168.20.202:/k8s/kubernetes/cfg/

Create the kube-proxy configuration file

# cat /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.20.203 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"	
  • --cluster-cidr must match the value of the kube-apiserver --service-cluster-ip-range flag

Create the kube-proxy systemd unit file

# cat /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target	

Start the service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy

Start the service on the remaining two nodes:

scp /k8s/kubernetes/cfg/kube-proxy 192.168.20.201:/k8s/kubernetes/cfg/
scp /k8s/kubernetes/cfg/kube-proxy 192.168.20.202:/k8s/kubernetes/cfg/
scp /lib/systemd/system/kube-proxy.service 192.168.20.201:/lib/systemd/system/kube-proxy.service
scp /lib/systemd/system/kube-proxy.service 192.168.20.202:/lib/systemd/system/kube-proxy.service

Change the hostname-override value on each node to its own IP.

Cluster Status

Label the master and worker nodes:

kubectl label node 192.168.20.203  node-role.kubernetes.io/master='master'
kubectl label node 192.168.20.202  node-role.kubernetes.io/node='node'
kubectl label node 192.168.20.201  node-role.kubernetes.io/node='node'

Check the cluster status:

# kubectl get node,cs
NAME                  STATUS   ROLES    AGE   VERSION
node/192.168.20.201   Ready    node     42m   v1.13.1
node/192.168.20.202   Ready    node     42m   v1.13.1
node/192.168.20.203   Ready    master   52m   v1.13.1

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok                  
componentstatus/controller-manager   Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}
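
As a final smoke test, you can schedule a workload and confirm that Pods land on the worker nodes (the image and names here are illustrative; in 1.13, kubectl run still creates a Deployment):

kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx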

References

https://www.qikqiak.com/post/manual-install-high-available-kubernetes-cluster/
https://www.kubernetes.org.cn/4963.html