Deploying a Kubernetes Cluster on CentOS 7.2 with kubeadm

This article follows the official Kubernetes guide Installing Kubernetes on Linux with kubeadm to deploy a Kubernetes cluster on CentOS 7.2 using kubeadm, and records solutions to some problems encountered while following that document.

Operating system version

# cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core)

Kernel version

# uname -r
3.10.0-327.el7.x86_64

Cluster nodes

192.168.120.122  kube-master
192.168.120.123  kube-agent1
192.168.120.124  kube-agent2
192.168.120.125  kube-agent3

That is, the cluster consists of one master node and three worker nodes.
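These hostnames and addresses are assumed throughout. If they are not already resolvable in your environment, one option (an assumption, not part of the original guide) is to add them to /etc/hosts on every node:

# cat <<EOF >> /etc/hosts
192.168.120.122  kube-master
192.168.120.123  kube-agent1
192.168.120.124  kube-agent2
192.168.120.125  kube-agent3
EOF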

Pre-deployment preparation

  • Configure access to Google-hosted repositories
    The packages used by this deployment come from Google-hosted repositories, so every cluster node must be able to reach them over the public internet; how to arrange that access is left to the reader.
  • Disable the firewall
# systemctl stop firewalld.service && systemctl disable firewalld.service
  • Set SELinux to permissive mode
# setenforce 0
# sed -i.bak 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
  • Configure the yum repository (a quick reachability check follows this list)
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
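With the repository in place, it may be worth confirming that it is actually reachable before installing anything. A minimal check, using the repository id kubernetes defined above:

# yum -q makecache --disablerepo='*' --enablerepo=kubernetes
# yum repolist kubernetes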

Installing kubelet and kubeadm

Install the following packages on all nodes, then enable and start docker and kubelet:

# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet
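Optionally, verify that the tools were installed and that the services are enabled; for example:

# kubeadm version
# kubectl version --client
# docker version
# systemctl is-enabled docker kubelet

Note that in this setup kubelet typically keeps restarting until kubeadm init or kubeadm join has been run on the node; that is expected.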

Then set the following kernel parameters:

# sysctl net.bridge.bridge-nf-call-iptables=1
# sysctl net.bridge.bridge-nf-call-ip6tables=1
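These sysctl commands only change the running kernel. To keep the settings across reboots, one option (not part of the original article) is to write them to a drop-in file and reload:

# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system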

Initializing the master node

# kubeadm init --pod-network-cidr=10.244.0.0/16

Because flannel will be used as the pod network in this cluster, the --pod-network-cidr flag must be passed.
Note: initialization can take a while, because the process pulls a number of Docker images.
The output of the command looks like this:

Initializing your master...
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.120.122]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 1377.560339 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 6.039626 seconds
[token] Using token: 60bc68.e94800f3c5c4c2d5
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token <token> 192.168.120.122:6443

The Docker images pulled onto the master node:

# docker images
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/kube-apiserver-amd64            v1.6.4              4e3810a19a64        2 days ago          150.6 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.6.4              0ea16a85ac34        2 days ago          132.8 MB
gcr.io/google_containers/kube-proxy-amd64                v1.6.4              e073a55c288b        2 days ago          109.2 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.6.4              1fab9be555e1        2 days ago          76.75 MB
gcr.io/google_containers/etcd-amd64                      3.0.17              243830dae7dd        12 weeks ago        168.9 MB
gcr.io/google_containers/pause-amd64                     3.0                 99e59f495ffa        12 months ago       746.9 kB

Follow the instructions printed by the init command:

# cp /etc/kubernetes/admin.conf $HOME/
# chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf
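kubectl should now be able to reach the new control plane; a quick sanity check, for example:

# kubectl cluster-info
# kubectl get componentstatuses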

Master isolation

By default, kubeadm taints the master so that ordinary pods are not scheduled on it. The command below removes that taint, allowing workloads to run on the master node as well:

# kubectl taint nodes --all node-role.kubernetes.io/master-
node "kube-master" tainted

Installing the pod network

The flannel manifests come from the flannel git repository, so clone it first:

# git clone https://github.com/coreos/flannel.git

Then apply the RBAC rules and the flannel DaemonSet:

# kubectl apply -f flannel/Documentation/kube-flannel-rbac.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created

# kubectl apply -f flannel/Documentation/kube-flannel.yml
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
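To watch flannel roll out across all nodes, one option is:

# kubectl -n kube-system get daemonset kube-flannel-ds
# kubectl -n kube-system get pods -o wide | grep flannel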

Adding worker nodes

On each worker node, run as root the join command printed earlier by kubeadm init, substituting the actual token:

# kubeadm join --token <token> 192.168.120.122:6443

The output of this command is as follows:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.120.122:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.120.122:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.120.122:6443"
[discovery] Successfully established connection with API Server "192.168.120.122:6443"
[bootstrap] Detected server version: v1.6.4
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
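Back on the worker node itself, you can optionally confirm that kubelet is running and that its pod containers (kube-proxy, flannel) have started; for example:

# systemctl status kubelet
# docker ps --format '{{.Names}}' | grep -E 'kube-proxy|flannel'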

Checking the cluster status from the master node

# kubectl get nodes
NAME          STATUS    AGE       VERSION
kube-agent1   Ready     16m       v1.6.3
kube-agent2   Ready     16m       v1.6.3
kube-agent3   Ready     16m       v1.6.3
kube-master   Ready     37m       v1.6.3

# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE       IP                NODE
kube-system   etcd-kube-master                      1/1       Running   0          32m       192.168.120.122   kube-master
kube-system   kube-apiserver-kube-master            1/1       Running   7          32m       192.168.120.122   kube-master
kube-system   kube-controller-manager-kube-master   1/1       Running   0          32m       192.168.120.122   kube-master
kube-system   kube-dns-3913472980-3x9wh             3/3       Running   0          37m       10.244.0.2        kube-master
kube-system   kube-flannel-ds-1m4wz                 2/2       Running   0          18m       192.168.120.122   kube-master
kube-system   kube-flannel-ds-3jwf5                 2/2       Running   0          17m       192.168.120.123   kube-agent1
kube-system   kube-flannel-ds-41qbs                 2/2       Running   4          17m       192.168.120.125   kube-agent3
kube-system   kube-flannel-ds-ssjct                 2/2       Running   4          17m       192.168.120.124   kube-agent2
kube-system   kube-proxy-0mmfc                      1/1       Running   0          17m       192.168.120.124   kube-agent2
kube-system   kube-proxy-23vwr                      1/1       Running   0          17m       192.168.120.125   kube-agent3
kube-system   kube-proxy-5q8vq                      1/1       Running   0          17m       192.168.120.123   kube-agent1
kube-system   kube-proxy-8srwn                      1/1       Running   0          37m       192.168.120.122   kube-master
kube-system   kube-scheduler-kube-master            1/1       Running   0          32m       192.168.120.122   kube-master
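As an optional smoke test (not part of the original walkthrough), a simple deployment can confirm that scheduling and the pod network work across the nodes; a sketch:

# kubectl run nginx --image=nginx --replicas=3
# kubectl get pods -o wide
# kubectl delete deployment nginx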

At this point, the deployment of the Kubernetes cluster is complete.