
Deploying a Kubernetes Cluster with kubeadm

With Docker we can quickly deploy applications on a single host, but a real production environment never consists of just one machine, which is where cluster management tools come in. This article walks through a basic cluster deployment with Kubernetes, the container cluster manager.


1 Environment Planning and Preparation

This setup uses three hosts, as follows:

| Role   | Hostname | IP           |
| ------ | -------- | ------------ |
| master | master   | 192.168.1.11 |
| slave1 | slave1   | 192.168.1.12 |
| slave2 | slave2   | 192.168.1.13 |

Add the following entries to /etc/hosts on all three hosts:

vim /etc/hosts
#add the following entries
192.168.1.11 master
192.168.1.12 slave1
192.168.1.13 slave2

Disable swap

swapoff -a

Then comment out the swap entries in /etc/fstab so that swap stays disabled after a reboot.
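Commenting out the swap entries can be scripted. A minimal sketch, using a hypothetical helper that takes the fstab path as an argument (a `.bak` backup is kept, so it can be tried on a copy first):

```shell
# Comment out any line in an fstab-style file that references swap,
# so the swap device is no longer mounted at boot.
disable_swap_mounts() {
  # Prefix uncommented lines containing "swap" with '#'; keep a backup.
  sed -i.bak '/swap/ s/^[^#]/#&/' "$1"
}

# Usage on the real file:
# disable_swap_mounts /etc/fstab
```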


Disable SELinux

setenforce 0
vim /etc/sysconfig/selinux
#set the SELINUX attribute
SELINUX=disabled

Configure iptables bridge settings

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Install socat and related tools

yum install -y ebtables socat

2 Installing Kubernetes

2.1 Installing Docker

The official documentation recommends Docker version 1.12.

#install docker via yum
yum install -y docker
#enable docker at boot
systemctl enable docker
#start docker
systemctl start docker

Verify the Docker version:

docker --version
#version output:
Docker version 1.12.6, build 85d7426/1.12.6

2.2 Installing kubectl, kubelet, and kubeadm

2.2.1 Adding a yum repository

The standard yum repositories do not carry these packages, so a dedicated repository must be added:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=
enabled=1
gpgcheck=0
EOF
#the repository in the official docs is hosted by Google and is unreachable from mainland China
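The baseurl above was lost in the original post. For reference, one mirror that was commonly used from mainland China at the time is Aliyun's Kubernetes yum repository; the line below is an assumption based on that mirror, so verify it before relying on it:

```
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
```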

2.2.2 Installing kubectl, kubelet, and kubeadm

yum install -y kubelet kubeadm kubectl

2.2.3 Starting kubelet

#enable kubelet at boot
systemctl enable kubelet
#start kubelet
systemctl start kubelet

Check the kubelet status:

systemctl status kubelet

On a fresh install kubelet will not have started successfully at this point; that is expected, as it starts automatically once the cluster is initialized in the steps below.


3 Kubernetes Cluster Configuration

3.1 Master node configuration

3.1.1 Initializing the master

Initialize the cluster as described in the official documentation:

kubeadm init --apiserver-advertise-address 192.168.1.11 --pod-network-cidr 10.244.0.0/16
#--apiserver-advertise-address 192.168.1.11 is the master node IP; some guides use 0.0.0.0 instead
#--pod-network-cidr 10.244.0.0/16 is the pod network CIDR

This produces the following error:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
unable to get URL "https://storage.googleapis.com/kubernetes-release/release/stable-1.7.txt": 
Get https://storage.googleapis.com/kubernetes-release/release/stable-1.7.txt: net/http: TLS handshake timeout

The fix is to specify --kubernetes-version explicitly.

First query the installed version:

kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5",…………

The version is 1.7.5, so add it to the init command:

kubeadm init --apiserver-advertise-address 192.168.1.11 --pod-network-cidr 10.244.0.0/16 --kubernetes-version=v1.7.5

3.1.2 Images required by the master node

The init run then hangs at the following step:

[apiclient] Created API client, waiting for the control plane to become ready

This happens because kubeadm init depends on several images that are pulled from Google's registry by default; they must be downloaded manually before re-running the initialization.

Press CTRL+C to stop the current run, then inspect the yaml files under /etc/kubernetes/manifests/ to find the required images. Together with the other images needed, the full list is:

gcr.io/google_containers/etcd-amd64:3.0.17
gcr.io/google_containers/kube-apiserver-amd64:v1.7.5
gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5
gcr.io/google_containers/kube-scheduler-amd64:v1.7.5
gcr.io/google_containers/pause-amd64:3.0
gcr.io/google_containers/kube-proxy-amd64:v1.7.5
quay.io/coreos/flannel
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4

Since these Google images cannot be pulled directly, pull equivalent images from Docker Hub / Aliyun and re-tag them:

#etcd-amd64:3.0.17
docker pull sylzd/etcd-amd64-3.0.17
docker tag docker.io/sylzd/etcd-amd64-3.0.17:latest gcr.io/google_containers/etcd-amd64:3.0.17
#kube-apiserver-amd64:v1.7.5
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/kube-apiserver-amd64:v1.7.5
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/kube-apiserver-amd64:v1.7.5 gcr.io/google_containers/kube-apiserver-amd64:v1.7.5
#kube-controller-manager-amd64:v1.7.5
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/kube-controller-manager-amd64:v1.7.5
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/kube-controller-manager-amd64:v1.7.5 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5
#kube-scheduler-amd64:v1.7.5
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/kube-scheduler-amd64:v1.7.5
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/kube-scheduler-amd64:v1.7.5 gcr.io/google_containers/kube-scheduler-amd64:v1.7.5
#pause-amd64:3.0
docker pull visenzek8s/pause-amd64:3.0
docker tag visenzek8s/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
#kube-proxy-amd64:v1.7.5
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.7.5
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.7.5 gcr.io/google_containers/kube-proxy-amd64:v1.7.5
#quay.io/coreos/flannel
docker pull quay.io/coreos/flannel
#k8s-dns-kube-dns-amd64:1.14.4
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/k8s-dns-kube-dns-amd64:1.14.4
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/k8s-dns-kube-dns-amd64:1.14.4 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
#k8s-dns-sidecar-amd64:1.14.4
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/k8s-dns-sidecar-amd64:1.14.4
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/k8s-dns-sidecar-amd64:1.14.4 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
#k8s-dns-dnsmasq-nanny-amd64:1.14.4
docker pull mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
docker tag mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
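The pull-and-retag pairs above all follow one pattern, so they can be scripted. A minimal sketch (the helper function and the `DOCKER` override are assumptions for illustration; only a few of the pairs are shown):

```shell
# Pull each mirror image and re-tag it with the gcr.io name kubeadm expects.
# DOCKER can be overridden (e.g. DOCKER=echo) to dry-run without a Docker daemon.
DOCKER="${DOCKER:-docker}"

retag_all() {
  # Each line of the heredoc: <mirror image> <gcr.io target name>
  while read -r mirror target; do
    "$DOCKER" pull "$mirror"
    "$DOCKER" tag "$mirror" "$target"
  done <<'EOF'
registry.cn-hangzhou.aliyuncs.com/google-containers/kube-apiserver-amd64:v1.7.5 gcr.io/google_containers/kube-apiserver-amd64:v1.7.5
registry.cn-hangzhou.aliyuncs.com/google-containers/kube-controller-manager-amd64:v1.7.5 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5
registry.cn-hangzhou.aliyuncs.com/google-containers/kube-scheduler-amd64:v1.7.5 gcr.io/google_containers/kube-scheduler-amd64:v1.7.5
mirrorgooglecontainers/kube-proxy-amd64:v1.7.5 gcr.io/google_containers/kube-proxy-amd64:v1.7.5
EOF
}
```

Running `retag_all` performs the same pulls and tags as the explicit commands above for the listed images; extend the heredoc with the remaining pairs as needed.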

The master node then initializes successfully:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 3f1db4.9f7ba7d52de40996 192.168.1.11:6443

Save the kubeadm join --token line; it is needed later when joining the slave nodes.


3.1.3 Configuring kubectl

#for non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#for the root user
export KUBECONFIG=/etc/kubernetes/admin.conf

3.1.4 Configuring the pod network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

After applying the network manifest, run kubectl get pods --all-namespaces and check whether kube-dns reaches the Running state; if it does, the network is working.

kubectl get pods --all-namespaces
#expected output when everything is healthy:
NAMESPACE     NAME                                            READY     STATUS    RESTARTS   AGE
kube-system   etcd-localhost.localdomain                      1/1       Running   0          1h
kube-system   kube-apiserver-localhost.localdomain            1/1       Running   0          1h
kube-system   kube-controller-manager-localhost.localdomain   1/1       Running   3          1h
kube-system   kube-dns-2425271678-27g6v                       3/3       Running   0          1h
kube-system   kube-flannel-ds-1mjq3                           1/1       Running   1          1h
kube-system   kube-proxy-mtjwb                                1/1       Running   0          1h
kube-system   kube-scheduler-localhost.localdomain            1/1       Running   0          1h

If any pod's STATUS is not Running, that must be resolved before proceeding.
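That check is easy to script. A small sketch (the `not_running` helper is hypothetical): it reads a `kubectl get pods --all-namespaces` listing on stdin and counts pods whose STATUS column is not Running.

```shell
# Count pods not yet in the Running state from a pod listing on stdin.
# Expects the standard column order: NAMESPACE NAME READY STATUS RESTARTS AGE.
not_running() {
  awk 'NR > 1 && $4 != "Running" { n++ } END { print n + 0 }'
}

# Usage: kubectl get pods --all-namespaces | not_running
# A result of 0 means every pod is Running.
```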

For security reasons, pods are not scheduled onto the master node by default. This restriction can be lifted with:

kubectl taint nodes --all node-role.kubernetes.io/master-

3.2 Slave node configuration

3.2.1 Images required by the slave nodes

The slave nodes need the following images:

gcr.io/google_containers/kube-proxy-amd64:v1.7.5
quay.io/coreos/flannel
gcr.io/google_containers/pause-amd64:3.0

Export the images on the master node:

docker save -o /opt/kube-pause.tar gcr.io/google_containers/pause-amd64:3.0
docker save -o /opt/kube-proxy.tar gcr.io/google_containers/kube-proxy-amd64:v1.7.5
docker save -o /opt/kube-flannel.tar quay.io/coreos/flannel

Copy them to the /opt directory on each slave host, then load them:

docker load -i /opt/kube-flannel.tar
docker load -i /opt/kube-proxy.tar
docker load -i /opt/kube-pause.tar
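Copying the three tarballs to both slaves can also be scripted. A sketch assuming the hostnames from /etc/hosts and root SSH access (the helper function and the `SCP` override are assumptions for illustration):

```shell
# Copy the exported image tarballs to /opt on each slave host.
# SCP can be overridden (e.g. SCP=echo) to dry-run without a network.
SCP="${SCP:-scp}"

distribute_images() {
  for host in slave1 slave2; do
    for tar in kube-pause kube-proxy kube-flannel; do
      "$SCP" "/opt/$tar.tar" "root@$host:/opt/"
    done
  done
}
```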

3.2.2 Joining the slave nodes to the cluster

Run the following on both slave nodes:

kubeadm join --token 3f1db4.9f7ba7d52de40996 192.168.1.11:6443

A successful join looks like this:

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Run kubectl get nodes on the master node to verify:

kubectl get nodes
NAME      STATUS    AGE       VERSION
master    Ready     56m       v1.7.5
slave1    Ready     1m        v1.7.5
slave2    Ready     1m        v1.7.5

The Kubernetes cluster is now deployed and ready to use.


References

  1. Installing kubeadm,https://kubernetes.io/docs/setup/independent/install-kubeadm/

  2. Quickly deploying a Kubernetes 1.7 cluster with kubeadm on Red Hat 7/CentOS 7, http://dockone.io/article/2514

  3. Complete guide to installing Kubernetes 1.7.3 with kubeadm on CentOS 7.3 (filling the gaps in the official docs), https://www.cnblogs.com/liangDream/p/7358847.html

  4. How to execute “kubeadm init” v1.6.4 behind firewall,https://stackoverflow.com/questions/44432328/how-to-execute-kubeadm-init-v1-6-4-behind-firewall

  5. Creating a Kubernetes 1.9 cluster with kubeadm, https://www.kubernetes.org.cn/3357.html


Questions and comments are welcome.

