
Creating a highly available Kubernetes v1.12.0 cluster with kubeadm


Node plan
Hostname IP Role
k8s-master01 10.3.1.20 etcd, Master, Node, keepalived
k8s-master02 10.3.1.21 etcd, Master, Node, keepalived
k8s-master03 10.3.1.25 etcd, Master, Node, keepalived
VIP 10.3.1.29 None

Version information:

  • OS: Ubuntu 16.04
  • Docker: 17.03.2-ce
  • k8s: v1.12

The high-availability architecture diagram from the official documentation:
[image: HA architecture diagram]

The two most important components for high availability:

  1. etcd: the distributed key-value store and the data center of the Kubernetes cluster.
  2. kube-apiserver: the single entry point to the cluster and the hub through which all components communicate. The apiserver itself is stateless, so running multiple instances is easy.

Other core components:

  • controller-manager and scheduler can also be deployed in multiple copies, but only one instance is active at a time to guarantee consistency, because they change cluster state.
    Since the cluster components are loosely coupled, there are many possible ways to achieve high availability.
  • With multiple kube-apiservers, clients need to know which one to connect to, so a traditional haproxy + keepalived style setup is placed in front of the apiservers to float a VIP; apiserver clients such as kubelet and kube-proxy connect to this VIP.

Pre-installation preparation

1. Set up passwordless SSH login between all Kubernetes nodes.
2. Synchronize time on all nodes.
3. Disable swap on every node (swapoff -a), otherwise kubelet fails to start.
4. Add every node's hostname and IP to /etc/hosts (steps 3 and 4 are sketched below).
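A minimal sketch of steps 3 and 4, run on every node (an illustration using the hostnames and IPs from the node plan above):

swapoff -a                              # disable swap now
sed -i '/ swap / s/^/#/' /etc/fstab     # keep swap disabled after reboot

cat >> /etc/hosts <<EOF
10.3.1.20 k8s-master01
10.3.1.21 k8s-master02
10.3.1.25 k8s-master03
EOF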

kubeadm can create an HA cluster in two ways:

  1. The etcd cluster is configured by kubeadm and runs as pods on the master nodes.
  2. The etcd cluster is deployed separately.
    Deploying etcd separately seems simpler, so that is the approach used here.

Deploying the etcd cluster

A working etcd cluster is a prerequisite for running the Kubernetes cluster, so etcd is deployed first.

Install the CA certificate

Install the CFSSL certificate management tools

Download the binaries directly:

mkdir -p /opt/bin

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /opt/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /opt/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /opt/bin/cfssl-certinfo

echo "export PATH=/opt/bin:$PATH" > /etc/profile.d/k8s.sh

All Kubernetes binaries are placed in the /opt/bin/ directory.

Create the CA configuration file

root@k8s-master01:~# mkdir ssl
root@k8s-master01:~# cd ssl/
root@k8s-master01:~/ssl# cfssl print-defaults config > config.json
root@k8s-master01:~/ssl# cfssl print-defaults csr > csr.json
# Following the format of config.json, create the ca-config.json file shown below
# The expiry has been set to 87600h (10 years)

root@k8s-master01:~/ssl# cat ca-config.json 
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

Create the CA certificate signing request

root@k8s-master01:~/ssl# cat ca-csr.json 
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GD",
      "L": "SZ",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the CA certificate and private key

root@k8s-master01:~/ssl# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
root@k8s-master01:~/ssl# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Copy the CA certificates to the corresponding directory on every node

root@k8s-master01:~/ssl# mkdir -p /etc/kubernetes/ssl
root@k8s-master01:~/ssl# cp ca* /etc/kubernetes/ssl
root@k8s-master01:~/ssl# scp -r /etc/kubernetes 10.3.1.21:/etc/
root@k8s-master01:~/ssl# scp -r /etc/kubernetes 10.3.1.25:/etc/

Download etcd:

With the CA certificate in place, etcd can now be configured.

root@k8s-master01:$ wget https://github.com/coreos/etcd/releases/download/v3.2.22/etcd-v3.2.22-linux-amd64.tar.gz
root@k8s-master01:$ tar -xzf etcd-v3.2.22-linux-amd64.tar.gz
root@k8s-master01:$ cp etcd-v3.2.22-linux-amd64/etcd etcd-v3.2.22-linux-amd64/etcdctl /opt/bin/

For Kubernetes v1.12, the etcd version must be no lower than 3.2.18.
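As a quick sanity check (not part of the original steps), the installed version can be verified:

/opt/bin/etcd --version
/opt/bin/etcdctl --version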

Create the etcd certificates

Create the etcd certificate signing request file

root@k8s-master01:~/ssl# cat etcd-csr.json 
{
  "CN": "etcd",
  "hosts": [
     "127.0.0.1",
     "10.3.1.20",
     "10.3.1.21",
     "10.3.1.25"
  ],
   "key": {
     "algo": "rsa",
     "size": 2048
   },
   "names": [
     {
       "C": "CN",
       "ST": "GD",
       "L": "SZ",
       "O": "k8s",
       "OU": "System"
     }
   ]
}
#Note: the hosts field above must list the IPs of every etcd node, otherwise etcd will fail to start.

Generate the etcd certificate and private key

root@k8s-master01:~/ssl# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2018/10/01 10:01:14 [INFO] generate received request
2018/10/01 10:01:14 [INFO] received CSR
2018/10/01 10:01:14 [INFO] generating key: rsa-2048
2018/10/01 10:01:15 [INFO] encoded CSR
2018/10/01 10:01:15 [INFO] signed certificate with serial number 379903753757286569276081473959703411651822370300
2018/02/06 10:01:15 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

root@k8s-master:~/ssl# ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

# The value of -profile=kubernetes corresponds to the profiles entry in the -config=/etc/kubernetes/ssl/ca-config.json file.
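Optionally, the SANs and expiry of the generated certificate can be confirmed with the cfssl-certinfo tool installed earlier:

cfssl-certinfo -cert etcd.pem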

Copy the certificates to the corresponding directory on all nodes:

root@k8s-master01:~/ssl# mkdir -p /etc/etcd/ssl
root@k8s-master01:~/ssl# cp etcd*.pem /etc/etcd/ssl
root@k8s-master01:~/ssl# scp -r /etc/etcd 10.3.1.21:/etc/
etcd-key.pem                                                       100% 1675     1.5KB/s   00:00                                    
etcd.pem                                                              100% 1407     1.4KB/s   00:00                           
root@k8s-master01:~/ssl# scp -r /etc/etcd 10.3.1.25:/etc/
etcd-key.pem                                                       100% 1675     1.6KB/s   00:00    
etcd.pem                                                              100% 1407     1.4KB/s   00:00

Create the etcd systemd unit file

With the certificates ready, the startup unit can now be configured.

root@k8s-master01:~# mkdir -p /var/lib/etcd   #the etcd working directory must be created first

root@k8s-master:~# cat /etc/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/opt/bin/etcd --name=etcd-host0 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://10.3.1.20:2380 \
  --listen-peer-urls=https://10.3.1.20:2380 \
  --listen-client-urls=https://10.3.1.20:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.3.1.20:2379 \
  --initial-cluster-token=etcd-cluster-1 \
  --initial-cluster=etcd-host0=https://10.3.1.20:2380,etcd-host1=https://10.3.1.21:2380,etcd-host2=https://10.3.1.25:2380 \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start etcd

root@k8s-master01:~/ssl# systemctl daemon-reload 
root@k8s-master01:~/ssl# systemctl enable etcd 
root@k8s-master01:~/ssl# systemctl start etcd

Copy the etcd unit file to the other two nodes, adjust the node-specific settings, and start etcd there (a sketch of the changes follows).
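A minimal sketch of the per-node changes, assuming the unit file has already been copied to /etc/systemd/system/etcd.service on k8s-master02, which becomes etcd-host1 at 10.3.1.21 (k8s-master03 / etcd-host2 at 10.3.1.25 is adjusted the same way):

# on k8s-master02
mkdir -p /var/lib/etcd
sed -i -e 's/--name=etcd-host0/--name=etcd-host1/' \
       -e 's#--initial-advertise-peer-urls=https://10.3.1.20#--initial-advertise-peer-urls=https://10.3.1.21#' \
       -e 's#--listen-peer-urls=https://10.3.1.20#--listen-peer-urls=https://10.3.1.21#' \
       -e 's#--listen-client-urls=https://10.3.1.20#--listen-client-urls=https://10.3.1.21#' \
       -e 's#--advertise-client-urls=https://10.3.1.20#--advertise-client-urls=https://10.3.1.21#' \
       /etc/systemd/system/etcd.service
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd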
Check the cluster status:
Because etcd uses TLS certificates, etcdctl commands must be given the certificates:

#List the etcd members
root@k8s-master01:~# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem member list
702819a30dfa37b8: name=etcd-host2 peerURLs=https://10.3.1.20:2380 clientURLs=https://10.3.1.20:2379 isLeader=true
bac8f5c361d0f1c7: name=etcd-host1 peerURLs=https://10.3.1.21:2380 clientURLs=https://10.3.1.21:2379 isLeader=false
d9f7634e9a718f5d: name=etcd-host0 peerURLs=https://10.3.1.25:2380 clientURLs=https://10.3.1.25:2379 isLeader=false

#Or check whether the cluster is healthy
root@k8s-maste01:~/ssl# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem cluster-health
member 1af3976d9329e8ca is healthy: got healthy result from https://10.3.1.20:2379
member 34b6c7df0ad76116 is healthy: got healthy result from https://10.3.1.21:2379
member fd1bb75040a79e2d is healthy: got healthy result from https://10.3.1.25:2379
cluster is healthy

Install Docker

apt-get update
apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
apt-key fingerprint 0EBFCD88
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce=17.03.2~ce-0~ubuntu-xenial

After installing Docker, set the FORWARD chain policy to ACCEPT

#The default is DROP
iptables -P FORWARD ACCEPT
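Docker resets the FORWARD policy when its service restarts, so it is worth keeping the rule persistent. One possible approach (an assumption, not from the original article) is a systemd drop-in that re-applies the policy after Docker starts:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/forward-accept.conf <<EOF
[Service]
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload && systemctl restart docker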

Install kubeadm

  • kubeadm must be installed on all nodes
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >/etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y  kubeadm

#This automatically installs kubeadm, kubectl, kubelet, kubernetes-cni and socat

After installation, enable the kubelet service to start on boot:

systemctl enable kubelet

kubelet must be enabled at boot so that the cluster components come back up automatically after a system restart.

Cluster initialization

Next, cluster initialization is performed on the three masters.
The difference from a single-master setup is that for an HA cluster kubeadm is given a configuration file, and kubeadm init is run on multiple nodes using that same file.

Write the kubeadm configuration file

root@k8s-master01:~/kubeadm-config# cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
networking:
  podSubnet: 192.168.0.0/16
apiServerCertSANs:
- k8s-master01
- k8s-master02
- k8s-master03
- 10.3.1.20
- 10.3.1.21
- 10.3.1.25
- 10.3.1.29
- 127.0.0.1
etcd:
  external:
    endpoints:
    - https://10.3.1.20:2379
    - https://10.3.1.21:2379
    - https://10.3.1.25:2379
    caFile: /etc/kubernetes/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
    dataDir: /var/lib/etcd
token: 547df0.182e9215291ff27f
tokenTTL: "0"
root@k8s-master01:~/kubeadm-config# 

Configuration notes:
In v1.12 the API version has been bumped to kubeadm.k8s.io/v1alpha3 and the kind is now ClusterConfiguration.
podSubnet: the custom pod CIDR.
apiServerCertSANs: list the hostnames, IPs and the VIP of all kube-apiserver nodes.
etcd: external means an external etcd cluster is used; the etcd endpoints and certificate locations follow.
If the etcd cluster were managed by kubeadm, this section would be local instead, with custom startup arguments.
token: can be omitted; generate one with kubeadm token generate.

Run init on the first master

#Make sure swap is disabled
root@k8s-master01:~/kubeadm-config# kubeadm init --config kubeadm-config.yaml

The output looks like this:

#Start initializing Kubernetes v1.12.0
[init] using Kubernetes version: v1.12.0
#Pre-flight checks before initialization
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
#Images can be pulled before init with kubeadm config images pull
[preflight/images] You can also perform this action in beforehand using ‘kubeadm config images pull‘
#Write the kubelet service configuration
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
#Generate certificates
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-master01 k8s-master02 k8s-master03] and IPs [10.96.0.1 10.3.1.20 10.3.1.20 10.3.1.21 10.3.1.25 10.3.1.29 127.0.0.1]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
#Write the kubeconfig files
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
#Write the static pod manifests to be started
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
#Start the kubelet service, which reads the pod manifests from /etc/kubernetes/manifests
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
#Pull the images referenced by the manifests
[init] this might take a minute or longer if the control plane images have to be pulled
#All control plane components are up
[apiclient] All control plane components are healthy after 27.014452 seconds
#Upload the configuration to the "kubeadm-config" ConfigMap in the "kube-system" namespace
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
#Add the master label and taint to the node
[markmaster] Marking the node k8s-master01 as master by adding the label "node-role.kubernetes.io/master=‘‘"
[markmaster] Marking the node k8s-master01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master01" as an annotation
#The bootstrap token in use
[bootstraptoken] using token: w79yp6.erls1tlc4olfikli
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
#Finally install the essential addons: CoreDNS and the kube-proxy DaemonSet
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:
#Record the following command; it is needed when other nodes join.
  kubeadm join 10.3.1.20:6443 --token w79yp6.erls1tlc4olfikli --discovery-token-ca-cert-hash sha256:7aac9eb45a5e7485af93030c3f413598d8053e1beb60fb3edf4b7e4fdb6a9db2
  • Run the commands from the prompt:
    root@k8s-master01:~# mkdir -p $HOME/.kube
    root@k8s-master01:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    root@k8s-master01:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

There is now one node in the cluster, with status "NotReady":

root@k8s-master01:~# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   3m50s   v1.12.0
root@k8s-master01:~# 

Check the core components of the first master, now running as pods:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP          NODE           NOMINATED NODE
coredns-576cbf47c7-2dqsj               0/1     Pending   0          4m29s   <none>      <none>         <none>
coredns-576cbf47c7-7sqqz               0/1     Pending   0          4m29s   <none>      <none>         <none>
kube-apiserver-k8s-master01            1/1     Running   0          3m46s   10.3.1.20   k8s-master01   <none>
kube-controller-manager-k8s-master01   1/1     Running   0          3m40s   10.3.1.20   k8s-master01   <none>
kube-proxy-dpvkk                       1/1     Running   0          4m30s   10.3.1.20   k8s-master01   <none>
kube-scheduler-k8s-master01            1/1     Running   0          3m37s   10.3.1.20   k8s-master01   <none>
root@k8s-master01:~# 
# Because of the master taint, the coredns pods are stuck in Pending.
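The taint can be confirmed on the node object itself (an optional check, not in the original steps):

kubectl describe node k8s-master01 | grep -i taint
# expected: node-role.kubernetes.io/master:NoSchedule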

Copy the generated pki directory to the other master nodes

root@k8s-master01:~# scp -r /etc/kubernetes/pki root@10.3.1.21:/etc/kubernetes/
root@k8s-master01:~# scp -r /etc/kubernetes/pki root@10.3.1.25:/etc/kubernetes/

Copy the kubeadm configuration file over as well

root@k8s-master01:~/# scp kubeadm-config.yaml root@10.3.1.21:~/
root@k8s-master01:~/# scp kubeadm-config.yaml root@10.3.1.25:~/

The first master is now deployed. The second and third masters (and any further ones) are initialized with the same kubeadm-config.yaml.


Run kubeadm init on the second master

root@k8s-master02:~# kubeadm init --config kubeadm-config.yaml
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection

Run kubeadm init on the third master

root@k8s-master03:~# kubeadm init --config kubeadm-config.yaml 
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster

Finally, check the nodes:

root@k8s-master01:~# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   31m     v1.12.0
k8s-master02   NotReady   master   15m     v1.12.0
k8s-master03   NotReady   master   6m52s   v1.12.0
root@k8s-master01:~# 

Check the status of all components:

# The core components are all running normally
root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                   READY   STATUS              RESTARTS   AGE     IP          NODE           NOMINATED NODE
coredns-576cbf47c7-2dqsj               0/1     ContainerCreating   0          31m     <none>      k8s-master02   <none>
coredns-576cbf47c7-7sqqz               0/1     ContainerCreating   0          31m     <none>      k8s-master02   <none>
kube-apiserver-k8s-master01            1/1     Running             0          30m     10.3.1.20   k8s-master01   <none>
kube-apiserver-k8s-master02            1/1     Running             0          15m     10.3.1.21   k8s-master02   <none>
kube-apiserver-k8s-master03            1/1     Running             0          6m24s   10.3.1.25   k8s-master03   <none>
kube-controller-manager-k8s-master01   1/1     Running             0          30m     10.3.1.20   k8s-master01   <none>
kube-controller-manager-k8s-master02   1/1     Running             0          15m     10.3.1.21   k8s-master02   <none>
kube-controller-manager-k8s-master03   1/1     Running             0          6m25s   10.3.1.25   k8s-master03   <none>
kube-proxy-6tfdg                       1/1     Running             0          16m     10.3.1.21   k8s-master02   <none>
kube-proxy-dpvkk                       1/1     Running             0          31m     10.3.1.20   k8s-master01   <none>
kube-proxy-msqgn                       1/1     Running             0          7m44s   10.3.1.25   k8s-master03   <none>
kube-scheduler-k8s-master01            1/1     Running             0          30m     10.3.1.20   k8s-master01   <none>
kube-scheduler-k8s-master02            1/1     Running             0          15m     10.3.1.21   k8s-master02   <none>
kube-scheduler-k8s-master03            1/1     Running             0          6m26s   10.3.1.25   k8s-master03   <none>

Remove the taint from all masters so that they can also be scheduled:

root@k8s-master01:~# kubectl taint nodes --all  node-role.kubernetes.io/master-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted

All nodes are still "NotReady" because a CNI plugin has not been installed yet.
Install the Calico network plugin:

root@k8s-master01:~# kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
configmap/calico-config created
daemonset.extensions/calico-etcd created
service/calico-etcd created
daemonset.extensions/calico-node created
deployment.extensions/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
serviceaccount/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Check the node status again:

root@k8s-master01:~# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   39m   v1.12.0
k8s-master02   Ready    master   24m   v1.12.0
k8s-master03   Ready    master   15m   v1.12.0

All components on every master are now healthy:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE    IP               NODE           NOMINATED NODE
calico-etcd-dcbtp                          1/1     Running   0          102s   10.3.1.25        k8s-master03   <none>
calico-etcd-hmd2h                          1/1     Running   0          101s   10.3.1.20        k8s-master01   <none>
calico-etcd-pnksz                          1/1     Running   0          99s    10.3.1.21        k8s-master02   <none>
calico-kube-controllers-75fb4f8996-dxvml   1/1     Running   0          117s   10.3.1.25        k8s-master03   <none>
calico-node-6kvg5                          2/2     Running   1          117s   10.3.1.21        k8s-master02   <none>
calico-node-82wjt                          2/2     Running   1          117s   10.3.1.25        k8s-master03   <none>
calico-node-zrtj4                          2/2     Running   1          117s   10.3.1.20        k8s-master01   <none>
coredns-576cbf47c7-2dqsj                   1/1     Running   0          38m    192.168.85.194   k8s-master02   <none>
coredns-576cbf47c7-7sqqz                   1/1     Running   0          38m    192.168.85.193   k8s-master02   <none>
kube-apiserver-k8s-master01                1/1     Running   0          37m    10.3.1.20        k8s-master01   <none>
kube-apiserver-k8s-master02                1/1     Running   0          22m    10.3.1.21        k8s-master02   <none>
kube-apiserver-k8s-master03                1/1     Running   0          12m    10.3.1.25        k8s-master03   <none>
kube-controller-manager-k8s-master01       1/1     Running   0          37m    10.3.1.20        k8s-master01   <none>
kube-controller-manager-k8s-master02       1/1     Running   0          21m    10.3.1.21        k8s-master02   <none>
kube-controller-manager-k8s-master03       1/1     Running   0          12m    10.3.1.25        k8s-master03   <none>
kube-proxy-6tfdg                           1/1     Running   0          23m    10.3.1.21        k8s-master02   <none>
kube-proxy-dpvkk                           1/1     Running   0          38m    10.3.1.20        k8s-master01   <none>
kube-proxy-msqgn                           1/1     Running   0          14m    10.3.1.25        k8s-master03   <none>
kube-scheduler-k8s-master01                1/1     Running   0          37m    10.3.1.20        k8s-master01   <none>
kube-scheduler-k8s-master02                1/1     Running   0          22m    10.3.1.21        k8s-master02   <none>
kube-scheduler-k8s-master03                1/1     Running   0          12m    10.3.1.25        k8s-master03   <none>
root@k8s-master01:~# 

Deploying worker nodes

On every worker node, run kubeadm join to add it to the Kubernetes cluster; here the apiserver address of k8s-master01 is used for all joins (see the note below if the join command was not recorded).
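If the bootstrap token from the init output has expired or was not saved, a fresh join command can be printed on any master (a convenience, not part of the original steps):

kubeadm token create --print-join-command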

Join the cluster on k8s-node01:

root@k8s-node01:~# kubeadm join 10.3.1.20:6443 --token w79yp6.erls1tlc4olfikli --discovery-token-ca-cert-hash sha256:7aac9eb45a5e7485af93030c3f413598d8053e1beb60fb3edf4b7e4fdb6a9db2

The output looks like this:

[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run ‘modprobe -- ‘ to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

    [WARNING Service-Kubelet]: kubelet service is not enabled, please run ‘systemctl enable kubelet.service‘
[discovery] Trying to connect to API Server "10.3.1.20:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.3.1.20:6443"
[discovery] Requesting info from "https://10.3.1.20:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.3.1.20:6443"
[discovery] Successfully established connection with API Server "10.3.1.20:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run ‘kubectl get nodes‘ on the master to see this node join the cluster.

Check the components running on the new node:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide |grep node01
calico-node-hsg4w                          2/2     Running            2          47m    10.3.1.63        k8s-node01     <none>
kube-proxy-xn795                           1/1     Running            0          47m    10.3.1.63        k8s-node01     <none>

Check the current node status.

#There are now four nodes, all Ready
root@k8s-master01:~# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   132m   v1.12.0
k8s-master02   Ready    master   117m   v1.12.0
k8s-master03   Ready    master   108m   v1.12.0
k8s-node01     Ready    <none>   52m    v1.12.0

Deploying keepalived

Deploy keepalived on the three master nodes, i.e. apiserver + keepalived floats a VIP; clients such as kubectl, kubelet and kube-proxy use this VIP when connecting to the apiserver. A load balancer is not used for now.

  • Install keepalived
apt-get install keepalived
  • Write the keepalived configuration file
#MASTER node
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
root@localhost
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id KEP
}

vrrp_script chk_k8s {
    script "killall -0 kube-apiserver"
    interval 1
    weight -5
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.3.1.29
    }
 track_script {
    chk_k8s
 }
 notify_master "/data/service/keepalived/notify.sh master"
 notify_backup "/data/service/keepalived/notify.sh backup"
 notify_fault "/data/service/keepalived/notify.sh fault"
}

Copy this configuration file to the remaining masters, lower the priority and set the state to BACKUP; the result is a floating VIP of 10.3.1.29, which was already included in the certificates created earlier. A sketch of the changes follows.
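A minimal sketch of those changes, assuming the same file has been copied to /etc/keepalived/keepalived.conf on the other two masters (the priority values are only illustrative):

# on k8s-master02 (use e.g. priority 80 on k8s-master03)
sed -i 's/state MASTER/state BACKUP/; s/priority 100/priority 90/' /etc/keepalived/keepalived.conf
systemctl restart keepalived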

Modify the client configuration

When kubeadm init was run, the kubelet and kube-proxy on each node were configured to connect to a single kube-apiserver, so this step changes the kube-apiserver address in the configuration of these two components to the VIP, roughly as sketched below.
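A hedged sketch of what this can look like, assuming the clients currently point at https://10.3.1.20:6443 and using the VIP 10.3.1.29 defined above (run the kubelet change on every node):

# kubelet: point its kubeconfig at the VIP and restart
sed -i 's#https://10.3.1.20:6443#https://10.3.1.29:6443#' /etc/kubernetes/kubelet.conf
systemctl restart kubelet

# kube-proxy: update the apiserver address in its ConfigMap and recreate the pods
kubectl -n kube-system get configmap kube-proxy -o yaml \
  | sed 's#server: https://10.3.1.20:6443#server: https://10.3.1.29:6443#' \
  | kubectl apply -f -
kubectl -n kube-system delete pod -l k8s-app=kube-proxy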

Verify the cluster

Create an nginx deployment

root@k8s-master01:~# kubectl run nginx --image=nginx:1.10 --port=80 --replicas=1
deployment.apps/nginx created

Check that the nginx pod was created

root@k8s-master:~# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE
nginx-787b58fd95-p9jwl   1/1     Running   0          70s   192.168.45.23   k8s-node02   <none>

Create a NodePort service for nginx

$ kubectl expose deployment nginx --type=NodePort --port=80
service "nginx" exposed

Check that the nginx service was created

$ kubectl get svc -l=run=nginx -o wide
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE       SELECTOR
nginx     NodePort   10.101.144.192   <none>        80:30847/TCP   10m       run=nginx

Verify that the nginx NodePort service is serving traffic

$ curl 10.3.1.21:30847
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
     .........

This shows the HA cluster is working. kubeadm's HA support is still at the v1alpha stage, so use it in production with caution; see the official documentation for a more detailed deployment guide.
