
CentOS 7.5: Installing and Configuring Kubernetes 1.12 with kubeadm (Part 4)

In earlier articles we demonstrated installation via yum and via binaries; this time we will deploy with kubeadm, the officially recommended installer.

kubeadm is the official Kubernetes tool for quickly standing up a Kubernetes cluster. It is updated in step with every Kubernetes release, and it adjusts its cluster-configuration practices over time, so experimenting with kubeadm is a good way to learn the official best practices for cluster configuration.

I. Preparing the environment on all nodes

1. Software versions

Software     Version
Kubernetes   v1.12.2
CentOS 7.5   CentOS Linux release 7.5.1804
Docker       v18.06
flannel      0.10.0

2. Node plan

IP             Role         Hostname
172.18.8.200   k8s master   master.wzlinux.com
172.18.8.201   k8s node01   node01.wzlinux.com
172.18.8.202   k8s node02   node02.wzlinux.com

(Node and network topology diagram omitted.)

3. System configuration

Disable the firewall.

systemctl stop firewalld
systemctl disable firewalld

Add the following entries to /etc/hosts.

172.18.8.200 master.wzlinux.com master
172.18.8.201 node01.wzlinux.com node01
172.18.8.202 node02.wzlinux.com node02

Disable SELinux.

sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0

Disable swap.

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

Configure the kernel forwarding parameters.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
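
On a minimal CentOS install the two bridge sysctls above only exist once the br_netfilter kernel module is loaded, so if sysctl --system complains about unknown keys, load the module first (a small sketch; the modules-load file name is my own choice):

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl --system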

Configure the Alibaba Cloud mirror of the Kubernetes yum repository (faster inside China).

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4. Installing Docker

Both the master and the worker nodes need a container engine, so we install Docker up front.
Set up the Docker CE repository (via the Alibaba Cloud mirror).

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -P /etc/yum.repos.d/

Check which Docker versions the repository currently offers.

[root@master ~]# yum list docker-ce.x86_64  --showduplicates |sort -r
Loaded plugins: fastestmirror
Available Packages
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
 * extras: mirrors.aliyun.com
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
 * base: mirrors.aliyun.com

Per the official recommendation for this Kubernetes release, we install v18.06.

yum install docker-ce-18.06.1.ce -y

Configure a registry mirror to accelerate image pulls inside China.

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hdi5v8p1.mirror.aliyuncs.com"]
}
EOF

Start Docker.

systemctl daemon-reload
systemctl enable docker
systemctl start docker
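
kubeadm expects the kubelet's cgroup driver to match Docker's (kubeadm init detects Docker's setting when it writes the kubelet flags), so it is worth checking what Docker reports before moving on; a quick sanity check, not part of the original steps:

docker --version
docker info 2>/dev/null | grep -i 'cgroup driver'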

5. Installing the Kubernetes components

yum install kubelet kubeadm kubectl -y
systemctl enable kubelet && systemctl start kubelet
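
The repository always installs the latest packages, while the rest of this walkthrough assumes v1.12.2, so you may prefer to pin the versions explicitly; a hedged variant of the command above (adjust the version to your target release):

yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2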

6. Loading the IPVS kernel modules

Load the IPVS kernel modules so that kube-proxy on the nodes can use IPVS proxy rules.

modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh

Also add them to the boot-time file /etc/rc.local so they are loaded on every startup.

cat <<EOF >> /etc/rc.local
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
EOF
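
Note that on CentOS 7 /etc/rc.d/rc.local is not executable by default, so the lines above will only run at boot after you mark the file executable:

chmod +x /etc/rc.d/rc.local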

II. Installing the master node

1. Initializing the master node

Google's image registry is unreachable from inside China, so the workaround is to download the images from another registry and change their tags. Keep the downloaded versions consistent with your kubeadm version; here we use v1.12.2. Just run the shell script below.

#!/bin/bash
# Pull the control-plane images from an Alibaba Cloud mirror, re-tag them
# as k8s.gcr.io/* (the names kubeadm expects), then drop the mirror tags.
kube_version=:v1.12.2
kube_images=(kube-proxy kube-scheduler kube-controller-manager kube-apiserver)
addon_images=(etcd-amd64:3.2.24 coredns:1.2.2 pause-amd64:3.1)

for imageName in ${kube_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version k8s.gcr.io/$imageName$kube_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
done

for imageName in ${addon_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

# kubeadm looks for etcd and pause without the -amd64 suffix, so re-tag those two.
docker tag k8s.gcr.io/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker image rm k8s.gcr.io/etcd-amd64:3.2.24
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker image rm k8s.gcr.io/pause-amd64:3.1

If you are not sure which image versions the script should fetch, you can run kubeadm init first, read the required versions from its error messages, and then pull them accordingly; alternatively, ask kubeadm directly, as shown below.
When kubeadm itself is upgraded, pick the new release and download the matching image versions.
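
kubeadm can print the exact image list it needs, which saves the trial-and-error:

kubeadm config images list --kubernetes-version=v1.12.2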

After running the script, all the required images are available locally. Here we rely on a mirror repository somebody else maintains; you could of course set up your own private registry instead.

[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.12.2             15e9da1ca195        4 weeks ago         96.5MB
k8s.gcr.io/kube-apiserver            v1.12.2             51a9c329b7c5        4 weeks ago         194MB
k8s.gcr.io/kube-controller-manager   v1.12.2             15548c720a70        4 weeks ago         164MB
k8s.gcr.io/kube-scheduler            v1.12.2             d6d57c76136c        4 weeks ago         58.3MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        2 months ago        220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        3 months ago        39.2MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        11 months ago       742kB

Use kubeadm init to set up the master node automatically; the version must be specified.

kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [172.18.8.200 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master.wzlinux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.8.200]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 20.005448 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master.wzlinux.com as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master.wzlinux.com as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master.wzlinux.com" as an annotation
[bootstraptoken] using token: 3mfpdm.atgk908eq1imgwqp
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d

Once the control plane is up, configure kubectl access as the output instructs:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
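
kubectl should now be able to reach the API server; a quick sanity check (output not shown in the original):

kubectl cluster-info
kubectl get componentstatuses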

2. Configuring the pod network

A pod network add-on is mandatory so that pods can communicate with one another. The network must be deployed before applications are deployed and before kube-dns starts; kubeadm only supports CNI-based networks.

Many pod network plugins are available, such as Calico, Canal, Flannel, Romana, and Weave Net. Because we initialized with --pod-network-cidr=10.244.0.0/16, we use flannel.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Check that everything comes up properly; the flannel image has to be downloaded, so this can take a little while.

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-ptzmh                     1/1     Running   0          22m
kube-system   coredns-576cbf47c7-q78r9                     1/1     Running   0          22m
kube-system   etcd-master.wzlinux.com                      1/1     Running   0          21m
kube-system   kube-apiserver-master.wzlinux.com            1/1     Running   0          22m
kube-system   kube-controller-manager-master.wzlinux.com   1/1     Running   0          22m
kube-system   kube-flannel-ds-amd64-vqtzq                  1/1     Running   0          5m54s
kube-system   kube-proxy-ld262                             1/1     Running   0          22m
kube-system   kube-scheduler-master.wzlinux.com            1/1     Running   0          22m

Troubleshooting checklist:

  • Confirm that the ports and containers started properly, and check /var/log/messages for clues.
  • Use docker logs ID to read a container's startup log, especially for containers that are being recreated over and over.
  • Use kubectl --namespace=kube-system describe pod POD-NAME to inspect pods stuck in an error state.
  • Use kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME} to see the concrete error.
  • Calico, Canal, and Flannel have been validated upstream; other network plugins may have pitfalls, and whether you can climb out of them depends on your own skill.
  • The most common errors are a wrong image name or version, or an image that cannot be downloaded.

III. Installing the worker nodes

1. Downloading the required images

The worker nodes likewise need to download images, mainly kube-proxy and pause (the script below also pulls coredns); the list is shorter than on the master.

#!/bin/bash
# Pull the images a worker node needs (kube-proxy, pause, coredns) from the
# Alibaba Cloud mirror and re-tag them as k8s.gcr.io/*.

kube_version=:v1.12.2
coredns_version=1.2.2
pause_version=3.1

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version k8s.gcr.io/kube-proxy$kube_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version k8s.gcr.io/pause:$pause_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version k8s.gcr.io/coredns:$coredns_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version

Check the downloaded images.

[root@node01 ~]# docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy   v1.12.2             15e9da1ca195        4 weeks ago         96.5MB
k8s.gcr.io/pause        3.1                 da86e6ba6ca1        11 months ago       742kB

2. Adding nodes (node01 as the example)

When initialization succeeded on the master, the output ended with a kubeadm join command; that is exactly what we use to add worker nodes.

kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "172.18.8.200:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.18.8.200:6443"
[discovery] Requesting info from "https://172.18.8.200:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.18.8.200:6443"
[discovery] Successfully established connection with API Server "172.18.8.200:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01.wzlinux.com" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Tip: if the join command reports that the token has expired, run kubeadm token create on the master to generate a new one, as prompted.
If you have forgotten the token, kubeadm token list shows the existing ones; see also the shortcut below.
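
If the discovery hash has been lost as well, kubeadm can print a complete, ready-to-paste join command (supported by the kubeadm in this release line):

kubeadm token create --print-join-command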

After running the join command on each node, check the node list on the master.

[root@master ~]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
master.wzlinux.com   Ready    master   64m   v1.12.2
node01.wzlinux.com   Ready    <none>   32m   v1.12.2
node02.wzlinux.com   Ready    <none>   15m   v1.12.2

You can copy the master's kubeconfig to a worker node so that kubectl can be used there as well (the target directory must exist on the node first).

ssh 172.18.8.201 "mkdir -p /root/.kube"
scp /etc/kubernetes/admin.conf  172.18.8.201:/root/.kube/config

Let's create a few pods and see.

[root@master ~]# kubectl run nginx --image=nginx --replicas=3
[root@master ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
nginx-dbddb74b8-7qnsl   1/1     Running   0          27s   10.244.2.2   node02.wzlinux.com   <none>
nginx-dbddb74b8-ck4l9   1/1     Running   0          27s   10.244.1.2   node01.wzlinux.com   <none>
nginx-dbddb74b8-rpc2r   1/1     Running   0          27s   10.244.1.3   node01.wzlinux.com   <none>

(Complete architecture diagram omitted.)

IV. A worked example

To better understand the Kubernetes architecture, let's deploy an application and watch how the components cooperate.

kubectl run httpd-app --image=httpd --replicas=2

Check the deployed application.

[root@master ~]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
httpd-app-66cb7d499b-gskrg   1/1     Running   0          59s   10.244.1.2   node01.wzlinux.com   <none>
httpd-app-66cb7d499b-km5t8   1/1     Running   0          59s   10.244.2.2   node02.wzlinux.com   <none>

Kubernetes created the deployment httpd-app with two replica pods, running on node01 and node02 respectively.

The overall deployment flow is as follows:

  1. kubectl sends the deployment request to the API Server.
  2. The API Server notifies the Controller Manager to create a deployment resource.
  3. The Scheduler performs scheduling and assigns the two replica pods to node01 and node02.
  4. The kubelets on node01 and node02 create and run the pods on their respective nodes.

The application's configuration and current state are stored in etcd; when you run kubectl get pod, the API Server reads this data from etcd.
flannel assigns an IP address to every pod. Because no service has been created yet, kube-proxy is not involved at this point; the quick experiment below brings it into play.
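
As a small, hedged experiment (the service name and port are my own choice, not from the original walkthrough), expose the deployment as a ClusterIP service and kube-proxy will start programming the service VIP:

kubectl expose deployment httpd-app --port=80 --target-port=80
kubectl get svc httpd-app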

Everything is OK. At this point the cluster deployment is complete and ready to be used.

V. Enabling IPVS in kube-proxy

kube-proxy gained IPVS support in Kubernetes 1.8, and the feature graduated to GA in Kubernetes 1.11.

Problems in iptables mode are hard to pin down, performance drops noticeably once the rule count grows, and rules can even go missing; IPVS is much more stable by comparison.

The default installation uses iptables, so we need to change the configuration to turn on IPVS.

1. Load the kernel modules.

modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
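
Verify that the modules are actually loaded:

lsmod | grep ip_vs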

2. Edit the kube-proxy configuration

kubectl edit configmap kube-proxy -n kube-system

Find the following section.

    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999

The mode field was originally empty, which defaults to iptables; change it to ipvs. The scheduler field is empty by default as well, which means the round-robin (rr) load-balancing algorithm.

3. Delete all kube-proxy pods (the DaemonSet recreates them with the new configuration)

kubectl delete pod kube-proxy-xxx -n kube-system
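
Rather than deleting the pods one by one, you can match them all by label (kubeadm deploys kube-proxy with the k8s-app=kube-proxy label):

kubectl -n kube-system delete pod -l k8s-app=kube-proxy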

4. Check the kube-proxy pod logs

[root@master ~]# kubectl logs kube-proxy-t4t8j -n kube-system
I1211 03:43:01.297068       1 server_others.go:189] Using ipvs Proxier.
W1211 03:43:01.297549       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1211 03:43:01.297698       1 server_others.go:216] Tearing down inactive rules.
I1211 03:43:01.355516       1 server.go:464] Version: v1.13.0
I1211 03:43:01.366922       1 conntrack.go:52] Setting nf_conntrack_max to 196608
I1211 03:43:01.367294       1 config.go:102] Starting endpoints config controller
I1211 03:43:01.367304       1 config.go:202] Starting service config controller
I1211 03:43:01.367327       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1211 03:43:01.367343       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1211 03:43:01.467475       1 controller_utils.go:1034] Caches are synced for service config controller
I1211 03:43:01.467485       1 controller_utils.go:1034] Caches are synced for endpoints config controller

5. Installing ipvsadm

Use ipvsadm to inspect the IPVS rules; if the command is not available, simply install it with yum.

yum install -y ipvsadm
[root@master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.18.8.200:6443           Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0         
  -> 10.244.0.5:53                Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0         
  -> 10.244.0.5:53                Masq    1      0          0         

Appendix: generated configuration files for each component

Printing all the keys in plain text would take far too much space, so below I replace their contents with the placeholder <key content>.

admin.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key content>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <key content>
    client-key-data: <key content>

controller-manager.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key content>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: <key content>
    client-key-data: <key content>

kubelet.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key content>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:master.wzlinux.com
  name: system:node:master.wzlinux.com@kubernetes
current-context: system:node:master.wzlinux.com@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:master.wzlinux.com
  user:
    client-certificate-data: <key content>
    client-key-data: <key content>

scheduler.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key content>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: <key content>
    client-key-data: <key content>

Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/