Installing Kubernetes v1.10 with kubeadm, and Troubleshooting Common Problems — Kubernetes中文社群
About Kubernetes:
Kubernetes is Google's open-source container cluster management system. Built on top of Docker, it provides a complete feature set for containerized applications: resource scheduling, deployment and execution, service discovery, and scaling up and down. Essentially, it can be seen as a mini-PaaS platform built on container technology.
Readers of my blog may remember that back in 2014 I published a post titled "Docker容器管理之Kubernetes" (Managing Docker Containers with Kubernetes). Docker was just taking off in China at the time and seemed to become popular overnight; I learned about Kubernetes by chance back then and wrote a brief introduction to it and to deploying it from source. Things are different today: Kubernetes is now the leader of the container world. Rereading that old post inspired me to look into installing Kubernetes with kubeadm and to preview the Dashboard.
Environment:
CentOS 7.4 minimal, Docker 1.13, kubeadm 1.10.0, etcd 3.0, Kubernetes 1.10.0
We will use three nodes to build a test environment:
10.0.100.202 k8smaster
10.0.100.203 k8snode1
10.0.100.204 k8snode2
Preparation:
1. Configure the hosts file on each node
2. Disable the system firewall
3. Disable SELinux
4. Disable swap
5. Configure kernel parameters so that traffic crossing the bridge also passes through the iptables/netfilter framework, by adding the following to /etc/sysctl.conf:

```shell
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

Then apply the settings:

```shell
sysctl -p
```
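Preparation steps 2–4 can be scripted. A minimal sketch for CentOS 7, assuming firewalld is the active firewall; every command is best-effort so the script is safe to re-run:

```shell
#!/bin/bash
# Preparation steps 2-4, run as root on every node (best-effort; safe to re-run).
systemctl disable --now firewalld 2>/dev/null || true          # step 2: stop and disable the firewall
setenforce 0 2>/dev/null || true                               # step 3: SELinux permissive immediately...
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config 2>/dev/null || true  # ...and across reboots
swapoff -a 2>/dev/null || true                                 # step 4: disable swap immediately...
sed -i.bak '/ swap / s/^/#/' /etc/fstab 2>/dev/null || true    # ...and across reboots
PREP_DONE=1
echo "node prepared"
```

The sed edits keep the settings across reboots; kubeadm's preflight checks will complain if swap is still enabled when you initialize the cluster.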
Installing with kubeadm:
1. Configure the Alibaba Cloud Kubernetes YUM repository
```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
yum -y install epel-release
yum clean all
yum makecache
```
2. Install kubeadm and related tools

```shell
yum -y install docker kubelet kubeadm kubectl kubernetes-cni
```
3. Start the Docker and kubelet services

```shell
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
```
Tip: at this point the kubelet service is in a failed state, because its main configuration file, kubelet.conf, is missing. This can safely be ignored for now: the file is only generated once the master node is initialized.
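If you want to see that failure for yourself before the master is initialized, a quick guarded check (it degrades gracefully on a machine without systemd):

```shell
# Inspect kubelet's current state; at this stage it crash-loops because
# /etc/kubernetes/kubelet.conf does not exist yet.
systemctl status kubelet --no-pager 2>/dev/null || true
KUBELET_CONF_STATE=$(test -f /etc/kubernetes/kubelet.conf && echo present || echo "not generated yet")
echo "kubelet.conf: $KUBELET_CONF_STATE"
```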
4.下載K8S相關映象
因為無法直接訪問gcr.io下載映象,所以需要配置一個國內的容器映象加速器
配置一個阿里雲的加速器:
在頁面中找到並點選映象加速按鈕,即可看到屬於自己的專屬加速連結,選擇Centos版本後即可看到配置方法。
提示:在阿里雲上使用 Docker 並配置阿里雲映象加速器,可能會遇到 daemon.json 導致 docker daemon 無法啟動的問題,可以通過以下方法解決。
```shell
# Edit the Docker sysconfig file
vim /etc/sysconfig/docker
# Add your accelerator URL to the --registry-mirror option:
OPTIONS='--selinux-enabled --log-driver=journald --registry-mirror=http://xxxx.mirror.aliyuncs.com'
# Restart the daemon
service docker restart
# Confirm the daemon now runs with the mirror option
ps aux | grep docker
```
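Alternatively, on Docker versions that read /etc/docker/daemon.json, the mirror can be configured there instead of in /etc/sysconfig/docker. A sketch that only prints the file content (the xxxx placeholder stands for your personal accelerator URL); as root, redirect the output into /etc/docker/daemon.json and restart Docker:

```shell
# Generate the daemon.json content for a registry mirror.
MIRROR="https://xxxx.mirror.aliyuncs.com"   # placeholder: your accelerator URL
JSON=$(cat <<EOF
{
  "registry-mirrors": ["$MIRROR"]
}
EOF
)
echo "$JSON"
```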
5. Download the Kubernetes images
OK, with the accelerator problem solved, download the Kubernetes images and retag them with names starting with k8s.gcr.io/ so that kubeadm can recognize and use them.
```shell
#!/bin/bash
images=(kube-proxy-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 kube-controller-manager-amd64:v1.10.0 kube-apiserver-amd64:v1.10.0
etcd-amd64:3.1.12 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.8 k8s-dns-kube-dns-amd64:1.14.8
k8s-dns-dnsmasq-nanny-amd64:1.14.8)
for imageName in ${images[@]} ; do
  docker pull keveon/$imageName
  docker tag keveon/$imageName k8s.gcr.io/$imageName
  docker rmi keveon/$imageName
done
```
The shell script above does three things: pulls the required container images, retags them to the names Kubernetes expects, and removes the original tags.
Tip: the image versions must exactly match the version installed by kubeadm, or initialization will fail with a time out.
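A quick way to act on that tip is to list which of the images kubeadm 1.10 needs are still missing locally. This sketch only inspects `docker images`, so it is safe to run anywhere; on a machine without Docker it simply reports everything as missing:

```shell
# Compare locally available images against the control-plane set kubeadm expects.
need="kube-apiserver-amd64:v1.10.0 kube-controller-manager-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 kube-proxy-amd64:v1.10.0 etcd-amd64:3.1.12 pause-amd64:3.1"
have=$(docker images --format '{{.Repository}}:{{.Tag}}' 2>/dev/null || true)
MISSING=""
for img in $need; do
  case "$have" in
    *"k8s.gcr.io/$img"*) ;;           # present: nothing to do
    *) MISSING="$MISSING $img" ;;     # absent: remember it
  esac
done
echo "missing images:${MISSING:- none}"
```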
6. Initialize the Kubernetes master
Run the shell script above, wait for the downloads to finish, then run kubeadm init:

```
[root@k8smaster ~]# kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.100.202]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8smaster] and IPs [10.0.100.202]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 21.001790 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8smaster as master by adding a label and a taint
[markmaster] Master k8smaster tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: thczis.64adx0imeuhu23xv
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.100.202:6443 --token thczis.64adx0imeuhu23xv --discovery-token-ca-cert-hash sha256:fa7b11bb569493fd44554aab0afe55a4c051cccc492dbdfafae6efeb6ffa80e6
```
Tip: the --kubernetes-version=v1.10.0 option is required; without it the command fails because it tries to reach a blocked Google site to determine the version. We use v1.10.0 here, and as mentioned earlier, the downloaded container image versions must match the Kubernetes version or the command will time out.
The command above takes about a minute; while it runs, you can follow the output of tail -f /var/log/messages to watch the configuration process and its progress. Save a copy of the last section of the output: it is needed later when adding the worker nodes.
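If that saved output goes missing, the join command can be regenerated on the master at any time; the --print-join-command flag exists in kubeadm 1.9 and later. A guarded sketch that runs harmlessly on a machine without kubeadm:

```shell
# Re-print a valid "kubeadm join ..." line, creating a fresh bootstrap token.
if command -v kubeadm >/dev/null 2>&1; then
  JOIN_CMD=$(kubeadm token create --print-join-command 2>/dev/null || echo "")
else
  JOIN_CMD=""
fi
echo "${JOIN_CMD:-kubeadm not available here; run this on the master}"
```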
7. Configure kubectl credentials

```shell
# For non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# For root, either export the variable directly
export KUBECONFIG=/etc/kubernetes/admin.conf
# or persist it in ~/.bash_profile
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
```
8. Install the flannel network

```shell
mkdir -p /etc/cni/net.d/
cat <<EOF> /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
EOF
mkdir -p /usr/share/oci-umount/oci-umount.d
mkdir -p /run/flannel/
cat <<EOF> /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
```
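After applying the manifest, flannel runs as a DaemonSet and should eventually show one Running pod per node. A guarded check, safe to run even where kubectl is unavailable:

```shell
# List the flannel pods; on a healthy cluster each node shows one Running pod.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n kube-system -o wide 2>/dev/null | grep flannel || true
  FLANNEL_CHECK=checked
else
  FLANNEL_CHECK=skipped   # kubectl not on PATH
fi
echo "$FLANNEL_CHECK"
```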
9. Join node1 and node2 to the cluster
On node1 and node2, run the kubeadm join command to join the cluster:
```
[root@k8snode1 ~]# kubeadm join 10.0.100.202:6443 --token thczis.64adx0imeuhu23xv --discovery-token-ca-cert-hash sha256:fa7b11bb569493fd44554aab0afe55a4c051cccc492dbdfafae6efeb6ffa80e6
[preflight] Running pre-flight checks.
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.0.100.202:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.100.202:6443"
[discovery] Requesting info from "https://10.0.100.202:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.100.202:6443"
[discovery] Successfully established connection with API Server "10.0.100.202:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
Tip: observant readers will notice that this is exactly the command I asked you to save after the master finished initializing.
By default, the master node does not run workloads. If you want an all-in-one Kubernetes environment, you can run the following command to let the master also act as a worker node:
```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
```
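To confirm the taint is gone (or to see it while it is still in place), a guarded sketch; after the command above the master's Taints line should read `<none>`:

```shell
# Show node names alongside their remaining taints.
if command -v kubectl >/dev/null 2>&1; then
  kubectl describe nodes 2>/dev/null | grep -E '^(Name|Taints):' || true
  TAINT_CHECK=checked
else
  TAINT_CHECK=skipped   # kubectl not on PATH
fi
echo "$TAINT_CHECK"
```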
10. Verify that the cluster was built successfully

```shell
# Check node status
kubectl get nodes
# Check pod status
kubectl get pods --all-namespaces
# Check cluster component status
kubectl get cs
```
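Freshly joined nodes take a short while to reach the Ready state. A bounded polling sketch (NODES_EXPECTED reflects the three-node layout described at the top; the guard makes it exit immediately where kubectl is unavailable):

```shell
# Poll for up to ~2 minutes until all expected nodes report Ready.
NODES_EXPECTED=3
READY=0
if command -v kubectl >/dev/null 2>&1; then
  for i in $(seq 1 24); do
    READY=$(kubectl get nodes --no-headers 2>/dev/null | awk '$2=="Ready"' | wc -l)
    [ "$READY" -ge "$NODES_EXPECTED" ] && break
    sleep 5
  done
fi
echo "ready nodes: $READY"
```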
Common errors explained
The most common installation failure is a time out. The Kubernetes images are hosted abroad, which is why we downloaded them in advance; you can also use a machine outside China to set up a private registry with Harbor and download all the images into it.
```
[root@k8smaster ~]# kubeadm init
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.100.202]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8smaster] and IPs [10.0.100.202]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	- Either there is no internet connection, or imagePullPolicy is set to "Never",
	  so the kubelet cannot pull or find the following control plane images:
		- k8s.gcr.io/kube-apiserver-amd64:v1.10.0
		- k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
		- k8s.gcr.io/kube-scheduler-amd64:v1.10.0
		- k8s.gcr.io/etcd-amd64:3.1.12 (only if no external etcd endpoints are configured)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
```
This problem is most often caused by a mismatch between the installed kubeadm version and the versions of the Kubernetes images it depends on. To troubleshoot, check /var/log/messages; as mentioned at the beginning of this article, keep an eye on the logs during installation.
Some readers may ask: if my installation failed, how do I clean up the environment and reinstall? One command does it:
```shell
kubeadm reset
```
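Note that kubeadm reset does not remove everything this walkthrough created by hand. A slightly fuller cleanup sketch, assuming it is run as root on each node being wiped (the guard lets it run harmlessly elsewhere):

```shell
#!/bin/bash
# Reset kubeadm state, then remove the leftovers this article created manually.
command -v kubeadm >/dev/null 2>&1 && kubeadm reset || echo "kubeadm not on PATH"
rm -f /etc/cni/net.d/10-flannel.conf 2>/dev/null || true   # the CNI config we wrote by hand
# rm -f $HOME/.kube/config                                 # optionally drop the stale kubectl config
CLEANED=yes
echo "cleanup: $CLEANED"
```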
And that completes the installation and deployment of a three-node Kubernetes cluster.