Deploying a Kubernetes Cluster with kubeadm
I. Environment Architecture and Deployment Preparation
1. The cluster node architecture and the services to be installed on each node are shown in the figure below:
2. Installation environment and software versions:
Master:
Required software: docker-ce 17.03, kubelet 1.11.1, kubeadm 1.11.1, kubectl 1.11.1
Required images:
mirrorgooglecontainers/kube-proxy-amd64:v1.11.1
mirrorgooglecontainers/kube-scheduler-amd64:v1.11.1
mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.1
mirrorgooglecontainers/kube-apiserver-amd64:v1.11.1
coredns/coredns:1.1.3
mirrorgooglecontainers/etcd-amd64:3.2.18
mirrorgooglecontainers/pause:3.1
registry.cn-hangzhou.aliyuncs.com/readygood/flannel:v0.10.0-amd64
Node:
Required software: docker-ce 17.03, kubelet 1.11.1, kubeadm 1.11.1
Required images:
mirrorgooglecontainers/kube-proxy-amd64:v1.11.1
mirrorgooglecontainers/pause:3.1
registry.cn-hangzhou.aliyuncs.com/readygood/flannel:v0.10.0-amd64
II. Deploying the Master
1. Disable the firewall and SELinux
kubeadm automatically generates IPv4 iptables rules during initialization, so disable the firewall before deploying.
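A minimal sketch of disabling both on CentOS 7 (assuming firewalld is in use; the SELinux config change only takes full effect after a reboot):
systemctl stop firewalld && systemctl disable firewalld                # stop and disable the firewall
setenforce 0                                                           # switch SELinux to permissive for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # disable SELinux permanently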
2. Configure the Aliyun Kubernetes and docker-ce repositories and install the packages
Kubernetes yum repository configuration:
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
docker-ce repository configuration:
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Change the default registry mirror that Docker pulls images from:
mkdir -p /etc/docker
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
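For the mirror setting to take effect, Docker has to be restarted; a minimal sketch:
systemctl daemon-reload
systemctl restart docker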
When installing, be sure to specify the versions:
yum install -y --setopt=obsoletes=0 docker-ce-17.03.2.ce-1.el7.centos.x86_64 docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch
yum install kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1 -y
3. Set the kernel bridge forwarding rule, i.e. require iptables to process bridged traffic (the default is 0):
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
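This setting does not survive a reboot; a sketch of persisting it through sysctl (the file name k8s.conf is just an example):
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system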
4. Disable or ignore swap. Avoid using a swap partition when deploying a Kubernetes cluster; Kubernetes will ask you to either disable or ignore it. To ignore it:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
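If you would rather disable swap outright, a minimal sketch:
swapoff -a                            # turn swap off for the current boot
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap entry so it stays off after reboot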
5. Initialize Kubernetes
kubeadm init --kubernetes-version=1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
If an error occurs during initialization, run kubeadm reset to reset and try again. If you see the error "[kubelet-check] It seems like the kubelet isn't running or healthy.", check the swap settings.
When initialization completes, the following message appears. Follow its instructions to finish the setup, and be sure to save the final line; it contains the authentication information required to join the cluster:
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.29.111:6443 --token 4qswp9.rxgwhn0vqp4c9npl --discovery-token-ca-cert-hash sha256:2d9bc0bd6b1eb12dcb8695f17191b243ecf3ed169d4aafaacc5c5c1272a85f07
6. Downloading the images required to run Kubernetes:
The official images cannot be downloaded directly here, yet kubeadm only accepts the official image tags, so the images must be pulled from Docker Hub or the Aliyun registry and then re-tagged before they can be used (a pull-and-retag sketch follows the list):
mirrorgooglecontainers/kube-proxy-amd64
mirrorgooglecontainers/kube-apiserver-amd64
mirrorgooglecontainers/kube-scheduler-amd64
mirrorgooglecontainers/kube-controller-manager-amd64
coredns/coredns
mirrorgooglecontainers/etcd-amd64
mirrorgooglecontainers/pause
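A minimal pull-and-retag sketch, using the mirror repositories above and the versions listed in Part I (the loop itself is illustrative):
for img in kube-proxy-amd64:v1.11.1 kube-apiserver-amd64:v1.11.1 kube-scheduler-amd64:v1.11.1 kube-controller-manager-amd64:v1.11.1 etcd-amd64:3.2.18 pause:3.1; do
    docker pull mirrorgooglecontainers/$img                    # pull from the Docker Hub mirror
    docker tag  mirrorgooglecontainers/$img k8s.gcr.io/$img    # re-tag with the name kubeadm expects
done
docker pull coredns/coredns:1.1.3                              # coredns is published under its own organization
docker tag  coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
The flannel image from the Aliyun registry is re-tagged to quay.io/coreos/flannel:v0.10.0-amd64 in the same way. After re-tagging, docker images should show something like: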
REPOSITORY                                  TAG             IMAGE ID        CREATED         SIZE
k8s.gcr.io/kube-proxy-amd64                 v1.11.1         d5c25579d0ff    8 weeks ago     97.8MB
k8s.gcr.io/kube-scheduler-amd64             v1.11.1         272b3a60cd68    8 weeks ago     56.8MB
k8s.gcr.io/kube-controller-manager-amd64    v1.11.1         52096ee87d0e    8 weeks ago     155MB
k8s.gcr.io/kube-apiserver-amd64             v1.11.1         816332bd9d11    8 weeks ago     187MB
k8s.gcr.io/coredns                          1.1.3           b3b94275d97c    3 months ago    45.6MB
k8s.gcr.io/etcd-amd64                       3.2.18          b8df3b177be2    5 months ago    219MB
quay.io/coreos/flannel                      v0.10.0-amd64   f0fad859c909    7 months ago    44.6MB
k8s.gcr.io/pause                            3.1             da86e6ba6ca1    8 months ago    742kB
7. Installing flannel
After confirming that the quay.io/coreos/flannel image has been downloaded (skip this check if you can reach Google directly), run the following command.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
III. Deploying the Nodes
1. Install the components required on the nodes
As with the master, install docker, kubeadm, and kubelet and enable them to start at boot; the steps are the same as for the master and are not repeated here. Download the images the nodes need and re-tag them with the k8s.gcr.io/ prefix (a re-tag sketch follows the list):
REPOSITORY                                             TAG             IMAGE ID        CREATED          SIZE
mirrorgooglecontainers/kube-proxy-amd64                v1.11.1         d5c25579d0ff    6 months ago     97.8 MB
registry.cn-hangzhou.aliyuncs.com/readygood/flannel    v0.10.0-amd64   50e7aa4dbbf8    9 months ago     44.6 MB
registry.cn-hangzhou.aliyuncs.com/readygood/pause      3.1             da86e6ba6ca1    13 months ago    742 kB
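A sketch of re-tagging these images to the names kubeadm and flannel expect (target names follow the master section):
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag registry.cn-hangzhou.aliyuncs.com/readygood/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/readygood/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64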
After the docker images have been downloaded, initialize the flannel service:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
2. Initialize the node
kubeadm join 192.168.29.111:6443 --token 4qswp9.rxgwhn0vqp4c9npl --discovery-token-ca-cert-hash sha256:2d9bc0bd6b1eb12dcb8695f17191b243ecf3ed169d4aafaacc5c5c1272a85f07 --ignore-preflight-errors=Swap
If swap is not disabled, the node needs the same swap setting and the same iptables bridge setting as the master:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
The join is complete when the following message appears:
This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Check the cluster status:
]# kubectl get pods -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE
coredns-78fcdf6894-kpt2k          1/1     Running   1          18h   10.244.0.5       master
coredns-78fcdf6894-nzdkz          1/1     Running   1          18h   10.244.0.4       master
etcd-master                       1/1     Running   3          16h   192.168.29.111   master
kube-apiserver-master             1/1     Running   3          16h   192.168.29.111   master
kube-controller-manager-master    1/1     Running   3          16h   192.168.29.111   master
kube-flannel-ds-amd64-5gnd8       1/1     Running   1          16h   192.168.29.111   master
kube-flannel-ds-amd64-7rtb8       1/1     Running   0          2h    192.168.29.112   node1
kube-flannel-ds-amd64-qqjdv       1/1     Running   0          2h    192.168.29.113   node2
kube-proxy-kfsfj                  1/1     Running   0          2h    192.168.29.113   node2
kube-proxy-lnk67                  1/1     Running   0          2h    192.168.29.112   node1
kube-proxy-v8d2q                  1/1     Running   2          18h   192.168.29.111   master
kube-scheduler-master             1/1     Running   2          16h   192.168.29.111   master
]# kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
master   Ready    master   18h   v1.11.1   192.168.29.111   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://17.3.2
node1    Ready    <none>   2h    v1.11.1   192.168.29.112   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://17.3.2
node2    Ready    <none>   2h    v1.11.1   192.168.29.113   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://17.3.2
One more important note: the hostnames of the nodes must not be identical. Be sure to change them, otherwise kubeadm will treat them as the same node and they will not be able to join.
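A one-line sketch of renaming a node (the hostname node1 here is just an example):
hostnamectl set-hostname node1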
Node2 and Node3 are deployed in exactly the same way as Node1; simply repeat Node1's procedure.