Installing Kubernetes 1.12 with kubeadm
kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in lockstep with each Kubernetes release, and with every release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the official project's latest best practices for cluster configuration.
The Kubernetes document "Creating a single master cluster with kubeadm" states that kubeadm's main features are already in beta and will reach GA during 2018, which means kubeadm is getting ever closer to being usable in production.
Of course, the Kubernetes clusters we run stably in production are highly available clusters deployed in binary form with ansible. The point of trying out kubeadm in Kubernetes 1.12 here is to follow the official best practices for cluster initialization and configuration, and to use them to further improve our ansible deployment scripts.
1. Preparation
1.1 System Configuration
Before installing, make the following preparations. The two CentOS 7.4 hosts are:
```
cat /etc/hosts
192.168.61.11 node1
192.168.61.12 node2
```
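The hostnames of the two machines are assumed to match these entries. If they are not set yet, a quick sketch (hostnamectl is standard on CentOS 7):

```bash
# Set hostnames to match /etc/hosts (run the matching command on each machine):
hostnamectl set-hostname node1   # on 192.168.61.11
hostnamectl set-hostname node2   # on 192.168.61.12
```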
If a firewall is enabled on any of the hosts, the ports required by the various Kubernetes components must be opened; see the "Check required ports" section of Installing kubeadm. For simplicity, disable the firewall on each node:
```
systemctl stop firewalld
systemctl disable firewalld
```
Disable SELinux:
```
setenforce 0
```

```
vi /etc/selinux/config
SELINUX=disabled
```
Create the file /etc/sysctl.d/k8s.conf with the following content:
```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
```
Run the following commands to make the changes take effect:
```
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
```
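A quick sanity check that the module is loaded and the settings took effect:

```bash
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```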
1.2 Installing Docker
Kubernetes has used the CRI (Container Runtime Interface) since 1.6. The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.
Install the Docker yum repository:
```
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
```
List the available Docker versions, newest first:
```
yum list docker-ce.x86_64 --showduplicates | sort -r
docker-ce.x86_64  18.06.1.ce-3.el7          docker-ce-stable
docker-ce.x86_64  18.06.0.ce-3.el7          docker-ce-stable
docker-ce.x86_64  18.03.1.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  18.03.0.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  17.12.1.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  17.12.0.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  17.09.1.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  17.09.0.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  17.06.2.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  17.06.1.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  17.06.0.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  17.03.3.ce-1.el7          docker-ce-stable
docker-ce.x86_64  17.03.2.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  17.03.1.ce-1.el7.centos   docker-ce-stable
docker-ce.x86_64  17.03.0.ce-1.el7.centos   docker-ce-stable
```
Kubernetes 1.12 has been validated against Docker versions 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, and 18.06. Note that the minimum Docker version supported by Kubernetes 1.12 is 1.11.1. Here we install Docker 18.06.1 on each node:
```
yum makecache fast

yum install -y --setopt=obsoletes=0 \
  docker-ce-18.06.1.ce-3.el7

systemctl start docker
systemctl enable docker
```
Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
```
iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
```
Starting with version 1.13, Docker changed its default firewall rules and set the FORWARD chain of the iptables filter table to DROP, which breaks communication between Pods on different Nodes in a Kubernetes cluster. After installing Docker 18.06 here, however, the default policy turns out to be ACCEPT again; it is unclear in which version this was changed back, and the 17.06 release we run in production still needs this policy adjusted manually.
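For the older Docker versions that still default FORWARD to DROP (such as our 17.06), a minimal runtime fix looks like the following; note that it does not survive a reboot, so it would also need to be persisted, e.g. in a systemd drop-in:

```bash
# Reset the default policy of the FORWARD chain to ACCEPT (runtime only):
iptables -P FORWARD ACCEPT
```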
2. Deploying Kubernetes with kubeadm
2.1 Installing kubeadm and kubelet
Install kubeadm and kubelet on each node:
```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```
Test whether the address https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a workaround such as a proxy:
```
curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
```
```
yum makecache fast
yum install -y kubelet kubeadm kubectl

...
Installed:
  kubeadm.x86_64 0:1.12.0-0    kubectl.x86_64 0:1.12.0-0    kubelet.x86_64 0:1.12.0-0

Dependency Installed:
  cri-tools.x86_64 0:1.11.1-0    kubernetes-cni.x86_64 0:0.6.0-0    socat.x86_64 0:1.7.3.2-2.el7
```
The install output shows that three dependencies were pulled in as well: cri-tools, kubernetes-cni, and socat:

- Kubernetes upgraded the cni dependency to version 0.6.0 back in 1.9, and 1.12 still uses that version
- socat is a dependency of the kubelet
- cri-tools is the command-line tool for the CRI (Container Runtime Interface)
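A quick check that matching versions landed on each node:

```bash
kubeadm version -o short
kubelet --version
kubectl version --client --short
```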
Running kubelet --help shows that most of kubelet's original command-line flags are now DEPRECATED, for example:
```
......
--address 0.0.0.0   The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......
```
Instead, the official recommendation is to use --config to specify a configuration file and put what those flags used to configure into that file; see Set Kubelet parameters via a config file for details. Kubernetes did this to support Dynamic Kubelet Configuration; see Reconfigure a Node's Kubelet in a Live Cluster.
The kubelet configuration file must be in JSON or YAML format; see the documentation linked above for specifics.
Starting with Kubernetes 1.8, swap must be disabled on the system; with the default configuration, the kubelet will not start otherwise.
To disable swap:

```
swapoff -a
```

Edit /etc/fstab and comment out the automatic swap mount, then confirm with free -m that swap is off. Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:

```
vm.swappiness=0
```

Run sysctl -p /etc/sysctl.d/k8s.conf to make the change take effect.
Because the two hosts used for this test also run other services, and disabling swap might affect them, we instead modify the kubelet configuration to remove the swap restriction. In previous Kubernetes versions we did this with the kubelet startup flag --fail-swap-on=false. As discussed above, Kubernetes no longer recommends startup flags and recommends configuration files instead, so we first look at the config-file approach.
Looking at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, we see the following:
```
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```
The drop-in above shows that the kubelet deployed by kubeadm uses the configuration file --config=/var/lib/kubelet/config.yaml, but on inspection neither /var/lib/kubelet nor that config.yaml has been created yet. Presumably this configuration file is generated automatically when kubeadm initializes the cluster, which means that if we do not disable swap, the first cluster initialization is bound to fail.
So we fall back to the kubelet startup flag --fail-swap-on=false to remove the mandatory swap restriction after all. Edit /etc/sysconfig/kubelet and add:
```
KUBELET_EXTRA_ARGS=--fail-swap-on=false
```
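For reference, once kubeadm init has generated /var/lib/kubelet/config.yaml, the same effect could also be achieved in the config file itself via its failSwapOn field; a hedged sketch (the grep/append handles the case where the key is absent from the generated file):

```bash
# Set failSwapOn: false in the kubelet config file, then restart the kubelet:
grep -q '^failSwapOn:' /var/lib/kubelet/config.yaml \
  && sed -i 's/^failSwapOn:.*/failSwapOn: false/' /var/lib/kubelet/config.yaml \
  || echo 'failSwapOn: false' >> /var/lib/kubelet/config.yaml
systemctl restart kubelet
```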
2.2 Initializing the cluster with kubeadm init
Enable the kubelet service at boot on each node:
```
systemctl enable kubelet.service
```
Next, initialize the cluster with kubeadm. We pick node1 as the Master Node and run the following command on it:
```
kubeadm init \
  --kubernetes-version=v1.12.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.61.11
```
Because we chose flannel as the Pod network add-on, the command above specifies --pod-network-cidr=10.244.0.0/16.
The run failed with the following error:
```
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
```
The error is "running with swap on is not supported. Please disable swap". Since we have decided to run with failSwapOn: false, we add the --ignore-preflight-errors=Swap flag to ignore this error and run the command again:
```
kubeadm init \
  --kubernetes-version=v1.12.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.61.11 \
  --ignore-preflight-errors=Swap

[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.61.11 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.61.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 26.503672 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
[bootstraptoken] using token: zalj3i.q831ehufqb98d1ic
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa
```
The above is the full output of a successful initialization; from it you can pretty much see the key steps that installing a Kubernetes cluster by hand would require.
The key items are:
- [kubelet] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
- [certificates] generates the various certificates
- [kubeconfig] generates the kubeconfig files
- [bootstraptoken] generates the bootstrap token; record it, as it is needed later when adding nodes with kubeadm join (a note on token expiry follows this list)
- the following commands set up kubectl access to the cluster for a regular user:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

- finally, it prints the command for joining more machines to the cluster: kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa
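On token expiry: bootstrap tokens are only valid for 24 hours by default, so if the recorded token has expired by the time you add another node, generate a fresh one on the master:

```bash
# Create a new bootstrap token and print the complete kubeadm join command:
kubeadm token create --print-join-command
```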
Check the cluster status:
```
kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
```
Confirm that every component is in the Healthy state.
If cluster initialization runs into problems, you can clean up with the following commands and start over:
```
kubeadm reset

ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
```
2.3 Installing the Pod Network
Next, install the flannel network add-on:
```
mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
```
Note that the flannel image referenced in kube-flannel.yml is 0.10.0: quay.io/coreos/flannel:v0.10.0-amd64.
If a Node has more than one network interface then, per flannel issue 39701, you currently need to use the --iface parameter in kube-flannel.yml to specify the name of the host's internal NIC; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
```
......
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.10.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
......
```
This time, deploying flannel with the steps above had no effect. Check the DaemonSets in the cluster:
```
kubectl get ds -l app=flannel -n kube-system
NAME                      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-flannel-ds-amd64     0         0         0         0            0           beta.kubernetes.io/arch=amd64     17s
kube-flannel-ds-arm       0         0         0         0            0           beta.kubernetes.io/arch=arm       17s
kube-flannel-ds-arm64     0         0         0         0            0           beta.kubernetes.io/arch=arm64     17s
kube-flannel-ds-ppc64le   0         0         0         0            0           beta.kubernetes.io/arch=ppc64le   17s
kube-flannel-ds-s390x     0         0         0         0            0           beta.kubernetes.io/arch=s390x     17s
```
Looking at kube-flannel.yml, flannel's official deployment YAML creates five DaemonSets in the cluster, one per platform, and uses the Node label beta.kubernetes.io/arch to start flannel containers on Nodes of the matching platform. The current node1 has beta.kubernetes.io/arch=amd64, so the DESIRED count of the kube-flannel-ds-amd64 DaemonSet should be 1. Check the part of kube-flannel.yml that defines kube-flannel-ds-amd64:
```
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
```
kube-flannel.yml already sets the scheduling-related nodeSelector and tolerations for kube-flannel-ds-amd64 correctly: the DaemonSet's Pods are scheduled onto nodes labeled beta.kubernetes.io/arch: amd64 that also tolerate the node-role.kubernetes.io/master:NoSchedule taint. Based on past deployment experience, the current master node node1 should satisfy both conditions. But does it? Look at node1's basic information:
```
kubectl describe node node1
Name:               node1
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=node1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 03 Oct 2018 09:03:04 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
```
You can see that the 1.12 kubeadm has put an extra taint on node1: node.kubernetes.io/not-ready:NoSchedule. This is easy to understand: a node that is not yet ready should not accept scheduling. But a node will not become ready until the Kubernetes network add-on has been deployed. So we modify kube-flannel.yml to add a toleration for the node.kubernetes.io/not-ready:NoSchedule taint:
```
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoSchedule
```
Re-apply with kubectl apply -f kube-flannel.yml, and this time the flannel deployment completes successfully.
Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state:
```
kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE
kube-system   coredns-576cbf47c7-njt7l        1/1     Running   0          12m   10.244.0.3      node1   <none>
kube-system   coredns-576cbf47c7-vg2gd        1/1     Running   0          12m   10.244.0.2      node1   <none>
kube-system   etcd-node1                      1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-apiserver-node1            1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-controller-manager-node1   1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-flannel-ds-amd64-bxtqh     1/1     Running   0          2m    192.168.61.11   node1   <none>
kube-system   kube-proxy-fb542                1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-scheduler-node1            1/1     Running   0          12m   192.168.61.11   node1   <none>
```
There is also a discussion of this node.kubernetes.io/not-ready:NoSchedule issue on flannel's GitHub; the relevant configuration will presumably be corrected soon, see https://github.com/coreos/flannel/issues/1044.
2.4 Letting the master node run workloads
In a kubeadm-initialized cluster, Pods are not scheduled onto the Master Node for security reasons; that is, the Master Node does not run workloads. This is because the current master node node1 carries the node-role.kubernetes.io/master:NoSchedule taint:
```
kubectl describe node node1 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
```
Since this is a test environment, remove the taint so node1 can run workloads:
```
kubectl taint nodes node1 node-role.kubernetes.io/master-
node "node1" untainted
```
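If you later want to restore the default behavior, the taint can be put back (a sketch; the empty-value form is accepted by kubectl):

```bash
# Re-taint node1 so ordinary Pods are no longer scheduled onto the master:
kubectl taint nodes node1 node-role.kubernetes.io/master=:NoSchedule
```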
2.5 Testing DNS
```
kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-xxxxx:/ ]$
```
Inside the container, run nslookup kubernetes.default to confirm that resolution works:
```
nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
```
2.6 Adding Nodes to the Kubernetes cluster
Now add the host node2 to the Kubernetes cluster. Because we also removed the mandatory swap restriction in node2's kubelet startup arguments, the --ignore-preflight-errors=Swap flag is needed here as well. Run on node2:
```
kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa \
  --ignore-preflight-errors=Swap

[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support

        [WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.61.11:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.61.11:6443"
[discovery] Requesting info from "https://192.168.61.11:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.61.11:6443"
[discovery] Successfully established connection with API Server "192.168.61.11:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
node2 joined the cluster without a hitch. Now list the cluster's nodes from the master node:
```
kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   26m   v1.12.0
node2   Ready    <none>   2m    v1.12.0
```
Removing a Node from the cluster
If you need to remove node2 from the cluster, run the following commands.
On the master node:
```
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2
```
On node2:
```
kubeadm reset

ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
```
On node1:
```
kubectl delete node node2
```
3. Deploying Common Kubernetes Components
More and more companies and teams are adopting Helm, the package manager for Kubernetes, so we will use Helm to install the common Kubernetes components as well.
3.1 Installing Helm
Helm consists of the helm command-line client and the server-side tiller, and installing it is straightforward. Download the helm command-line tool to /usr/local/bin on the master node node1; here we download version v2.11.0:
```
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
```
To install the server-side tiller, this machine also needs kubectl and a kubeconfig file configured, so that kubectl can reach the apiserver from it and works normally. node1 here already has kubectl configured.
Because the Kubernetes APIServer has RBAC access control enabled, we need to create the service account tiller for tiller's use and assign it an appropriate role; see Role-based Access Control in the Helm docs for details. For simplicity we directly bind it to the built-in cluster-admin ClusterRole. Create the file rbac-config.yaml:
```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```
```
kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
```
Next, deploy tiller with helm:
```
helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
```
tiller is deployed by default into the kube-system namespace of the Kubernetes cluster:
```
kubectl get pod -n kube-system -l app=helm
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-6f6fd74b68-kk2z9   1/1     Running   0          3m17s
```
```
helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
```
Note that this requires network access to gcr.io and kubernetes-charts.storage.googleapis.com. If gcr.io is unreachable, helm init can pull the tiller image from a private registry instead, as shown below.
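Spelled out as a command (with <your-docker-registry> replaced by your own registry address):

```bash
helm init --service-account tiller \
  --tiller-image <your-docker-registry>/tiller:v2.11.0 \
  --skip-refresh
```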
3.2 Deploying Nginx Ingress with Helm
To make it easy to expose services in the cluster to the outside and reach them from outside the cluster, next we use Helm to deploy Nginx Ingress onto Kubernetes. The Nginx Ingress Controller is deployed on a Kubernetes edge node; for high availability of Kubernetes edge nodes, see my earlier write-up on highly available Kubernetes Ingress edge nodes in a bare-metal environment. For simplicity, there is only one edge node here.
We make node1 (192.168.61.11) double as the edge node by labeling it:
```
kubectl label node node1 node-role.kubernetes.io/edge=
node/node1 labeled

kubectl get node
NAME    STATUS   ROLES         AGE   VERSION
node1   Ready    edge,master   46m   v1.12.0
node2   Ready    <none>        22m   v1.12.0
```
The values file ingress-nginx.yaml for the stable/nginx-ingress chart:
```
controller:
  service:
    externalIPs:
    - 192.168.61.11
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```
```
helm repo update

helm install stable/nginx-ingress \
  -n nginx-ingress \
  --namespace ingress-nginx \
  -f ingress-nginx.yaml
```
```
kubectl get pod -n ingress-nginx -o wide
NAME                                             READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE
nginx-ingress-controller-7577b57874-m4zkv        1/1     Running   0          9m13s   10.244.0.10   node1   <none>
nginx-ingress-default-backend-684f76869d-9jgtl   1/1     Running   0          9m13s   10.244.0.9    node1   <none>
```
If visiting http://192.168.61.11 returns default backend, the deployment is complete:
```
curl http://192.168.61.11/
default backend - 404
```
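Once Ingress rules exist (for example the dashboard Ingress created below), they can be tested from outside without DNS by pinning the hostname to the edge node with curl's --resolve; a hedged example:

```bash
# Map k8s.frognew.com to the edge node for this request; -k skips cert validation:
curl -k --resolve k8s.frognew.com:443:192.168.61.11 https://k8s.frognew.com/
```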
3.3 Loading a TLS certificate into Kubernetes
When using Ingress to expose HTTPS services outside the cluster, an HTTPS certificate is needed; here we load the certificate and key for *.frognew.com into Kubernetes.
The dashboard deployed into the kube-system namespace later will use this certificate, so create the certificate secret in kube-system first:
```
kubectl create secret tls frognew-com-tls-secret --cert=fullchain.pem --key=privkey.pem -n kube-system
secret/frognew-com-tls-secret created
```
3.4 Deploying the dashboard with Helm
The values file kubernetes-dashboard.yaml:
```
ingress:
  enabled: true
  hosts:
  - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
  tls:
  - secretName: frognew-com-tls-secret
    hosts:
    - k8s.frognew.com

rbac:
  clusterAdminRole: true
```
```
helm install stable/kubernetes-dashboard \
  -n kubernetes-dashboard \
  --namespace kube-system \
  -f kubernetes-dashboard.yaml
```
```
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-tjj25   kubernetes.io/service-account-token   3     37s

kubectl describe -n kube-system secret/kubernetes-dashboard-token-tjj25
Name:         kubernetes-dashboard-token-tjj25
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=kubernetes-dashboard
              kubernetes.io/service-account.uid=d19029f0-9cac-11e8-8d94-080027db403a

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi10amoyNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImQxOTAyOWYwLTljYWMtMTFlOC04ZDk0LTA4MDAyN2RiNDAzYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.w1HZrtBOhANdqSRLNs22z8dQWd5IOCpEl9VyWQ6DUwhHfgpAlgdhEjTqH8TT0f4ftu_eSPnnUXWbsqTNDobnlxet6zVvZv1K-YmIO-o87yn2PGIrcRYWkb-ADWD6xUWzb0xOxu2834BFVC6T5p5_cKlyo5dwerdXGEMoz9OW0kYvRpKnx7E61lQmmacEeizq7hlIk9edP-ot5tCuIO_gxpf3ZaEHnspulceIRO_ltjxb8SvqnMglLfq6Bt54RpkUOFD1EKkgWuhlXJ8c9wJt_biHdglJWpu57tvOasXtNWaIzTfBaTiJ3AJdMB_n0bQt5CKAUnKBhK09NP3R0Qtqog
```
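Equivalently, the decoded token alone can be extracted with jsonpath (using the secret name shown above):

```bash
kubectl -n kube-system get secret kubernetes-dashboard-token-tjj25 \
  -o jsonpath='{.data.token}' | base64 -d; echo
```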
Use the token above to log in on the dashboard's login screen.
3.5 Deploying metrics-server with Helm
As you can see on Heapster's GitHub page, https://github.com/kubernetes/heapster, heapster is DEPRECATED. According to heapster's deprecation timeline, it is removed from the various Kubernetes setup scripts starting with Kubernetes 1.12.
Kubernetes now recommends metrics-server (https://github.com/kubernetes-incubator/metrics-server). We deploy metrics-server with helm as well.
The values file metrics-server.yaml:
```
args:
- --logtostderr
- --kubelet-insecure-tls
```
```
helm install stable/metrics-server \
  -n metrics-server \
  --namespace kube-system \
  -f metrics-server.yaml
```
After deployment, the metrics-server log reports the following error:
```
E1003 05:46:13.757009       1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:node1: unable to fetch metrics from Kubelet node1 (node1): Get https://node1:10250/stats/summary/: dial tcp: lookup node1 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:node2: unable to fetch metrics from Kubelet node2 (node2): Get https://node2:10250/stats/summary/: dial tcp: lookup node2 on 10.96.0.10:53: read udp 10.244.1.6:45288->10.96.0.10:53: i/o timeout]
```
You can see that when metrics-server scrapes the kubelet's port 10250, it uses hostnames. Because node1 and node2 form a stand-alone demo environment where only the two nodes' /etc/hosts files were edited and there is no internal DNS server, metrics-server cannot resolve the names node1 and node2. We can fix this by directly editing the coredns ConfigMap in the Kubernetes cluster: add the hosts plugin to the Corefile and list each Kubernetes node's hostname there, so that every Pod in the cluster can resolve the node names through CoreDNS.
```
kubectl edit configmap coredns -n kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        hosts {
           192.168.61.11 node1
           192.168.61.12 node2
           fallthrough
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
```
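After saving the ConfigMap, coredns and metrics-server need to be restarted to pick up the change; one way is to delete their Pods and let the Deployments recreate them (a sketch assuming the default k8s-app=kube-dns label kubeadm puts on CoreDNS and the app=metrics-server label from the chart):

```bash
kubectl delete pod -n kube-system -l k8s-app=kube-dns
kubectl delete pod -n kube-system -l app=metrics-server
```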
After the restart, confirm that metrics-server no longer logs errors. Basic metrics for the cluster nodes can then be retrieved with:
```
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```
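With metrics-server serving data, kubectl top should return results as well:

```bash
kubectl top nodes
kubectl top pods --all-namespaces
```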
Unfortunately, the Kubernetes Dashboard does not support metrics-server yet, so after replacing heapster with metrics-server the dashboard can no longer graph Pod memory and CPU usage. (This is not a big deal for us: we monitor the Pods in our Kubernetes clusters with custom Prometheus and Grafana dashboards anyway, so viewing Pod memory and CPU in the dashboard matters little.) There is plenty of discussion about this on the Dashboard's GitHub, e.g. https://github.com/kubernetes/dashboard/issues/3217 and https://github.com/kubernetes/dashboard/issues/3270; the Dashboard plans to support metrics-server at some point in the future. Since metrics-server and the metrics pipeline are clearly the future direction of Kubernetes monitoring, we decisively switched all our environments over to metrics-server.
4. Summary
Docker images involved in this installation:
```
# kubernetes
k8s.gcr.io/kube-apiserver:v1.12.0
k8s.gcr.io/kube-controller-manager:v1.12.0
k8s.gcr.io/kube-scheduler:v1.12.0
k8s.gcr.io/kube-proxy:v1.12.0
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/pause:3.1

# network and dns
quay.io/coreos/flannel:v0.10.0-amd64
k8s.gcr.io/coredns:1.2.2

# helm and tiller
gcr.io/kubernetes-helm/tiller:v2.11.0

# nginx ingress
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
k8s.gcr.io/defaultbackend:1.4

# dashboard and metrics-server
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
gcr.io/google_containers/metrics-server-amd64:v0.3.0
```