
Installing a Kubernetes (1.10.0) Cluster with kubeadm

k8s deployment

Background

Kubernetes has become a must-learn framework among today's Docker container management tools. Compared with Swarm, its architecture is heavier and its components and configuration are more complex, but the functionality it provides is also far more powerful. The basic concepts and architecture of k8s are not covered here; there is plenty of reference material online.

Driven by these technology trends, our company has inevitably started looking into k8s as well, so I had to get hands-on with this powerful container management framework. The first step in learning k8s is to set up a cluster environment. The simplest approach is probably to use the official binary packages directly, but here I followed the official k8s installation guide and chose to install with kubeadm.


Environment

Master:192.168.232.130

Node1:192.168.232.131

Node2:192.168.232.129


Installation Steps

1. Initialize the system and install the packages Kubernetes needs (on the master and all node machines)

Add the Kubernetes yum repository; within China you can use the Aliyun mirror:

# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0

Disable SELinux:

# setenforce 0
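Note that setenforce 0 only lasts until the next reboot. To keep SELinux permissive permanently, and to satisfy kubeadm's bridge-netfilter preflight check on CentOS 7, something like the following can be applied as well (a minimal sketch; adjust to your distribution):

# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system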

Install Docker and the Kubernetes components:

# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet
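Note that the repository installs the newest available packages by default. Since all of the images used below are v1.10.0, it may be safer to pin the kubelet/kubeadm/kubectl versions to match (the exact version strings available in the mirror may differ):

# yum install -y docker kubelet-1.10.0 kubeadm-1.10.0 kubectl-1.10.0 kubernetes-cni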


2. Pull the Docker images Kubernetes needs from domestic mirror registries ahead of time. This saves download time when initializing the master and nodes, and avoids image pull failures caused by network problems.

Master:

# Pull the master images onto the master node
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-apiserver-amd64:v1.10.0
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-scheduler-amd64:v1.10.0
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-controller-manager-amd64:v1.10.0
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-proxy-amd64:v1.10.0
# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-kube-dns-amd64:1.14.8
# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-dnsmasq-nanny-amd64:1.14.8
# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-sidecar-amd64:1.14.8
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/etcd-amd64:3.1.12
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64
# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1
Re-tag the downloaded images, because kubeadm defaults to pulling from k8s.gcr.io when initializing the master or a node:
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
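Typing ten pull/tag pairs by hand is tedious; the same work can be done with a small shell loop. This is a sketch using two of the source/target pairs from the list above — extend the list with the remaining images:

# for pair in \
  "registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-apiserver-amd64:v1.10.0=k8s.gcr.io/kube-apiserver-amd64:v1.10.0" \
  "registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1=k8s.gcr.io/pause-amd64:3.1"
do
  src=${pair%%=*}; dst=${pair#*=}              # split "source=target"
  docker pull "$src" && docker tag "$src" "$dst"
done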

Nodes:

Pull the images the nodes need onto each node:
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-proxy-amd64:v1.10.0
# docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64
# docker pull registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1
Re-tag the images:
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
# docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
# docker tag registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1


3. Initialize the master

Swap must be turned off before initializing Kubernetes, otherwise kubeadm fails with: [ERROR Swap]: running with swap on is not supported. Please disable swap.

# swapoff -a
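Like setenforce, swapoff -a does not survive a reboot; commenting out the swap entry in /etc/fstab keeps swap disabled permanently (a sketch, assuming a standard fstab layout):

# sed -i '/ swap / s/^/#/' /etc/fstab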

Initialize the Kubernetes master (192.168.232.130). The --pod-network-cidr 10.244.0.0/16 value matches flannel's default network, and --kubernetes-version pins the control-plane version so kubeadm does not have to look up the latest release online:

# kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version 1.10.0

[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING Hostname]: hostname "k8s-node1" could not be reached
        [WARNING Hostname]: hostname "k8s-node1" lookup k8s-node1 on 192.168.232.2:53: no such host
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.232.130]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-node1] and IPs [192.168.232.130]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 70.502031 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-node1 as master by adding a label and a taint
[markmaster] Master k8s-node1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: z3njv7.6vndmsyesgp9bozf
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.232.130:6443 --token z3njv7.6vndmsyesgp9bozf --discovery-token-ca-cert-hash sha256:2959ed1b5e23b5576709c26d14f2c32a15323971f3ade2c3fc3c85c80047350f

The output above shows that the master initialized successfully. You can see that the components the master needs have been created: kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. All of these components created by kubeadm run as Docker containers and provide their services from there.

The nodes can then join the Kubernetes cluster by running the kubeadm join command printed at the end of the master's output.
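The bootstrap token printed by kubeadm init is only valid for 24 hours by default. If it expires before a node joins, a fresh join command can be generated on the master (the --print-join-command flag should be available in this kubeadm version):

# kubeadm token create --print-join-command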


4. Install the flannel network add-on (use the manifest version that matches the flannel image pulled earlier, v0.10.0):

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
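Once the manifest is applied, the flannel DaemonSet starts a pod on the master (and later on each joined node). Its state can be checked with:

# kubectl get daemonset kube-flannel-ds -n kube-system
# kubectl get pods -n kube-system -o wide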


5. Set up the nodes (192.168.232.131 / 192.168.232.129)

Swap must be turned off on the nodes as well:

# swapoff -a

Join the k8s cluster using the kubeadm join command provided by the master:

# kubeadm join 192.168.232.130:6443 --token z3njv7.6vndmsyesgp9bozf --discovery-token-ca-cert-hash sha256:2959ed1b5e23b5576709c26d14f2c32a15323971f3ade2c3fc3c85c80047350f
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.232.130:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.232.130:6443"
[discovery] Requesting info from "https://192.168.232.130:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.232.130:6443"
[discovery] Successfully established connection with API Server "192.168.232.130:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Likewise, the nodes use the previously pulled images to create the service components they need, such as kube-proxy.
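A quick way to confirm this on a node is to list the running containers; the exact output varies, but kube-proxy should appear among them:

# docker ps --format '{{.Image}}\t{{.Names}}' | grep kube-proxy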

6. Check the cluster status

Once the relevant services on the master and nodes have started, you can check the cluster state by running 'kubectl get nodes' on the master.

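The output should look roughly like the following (illustrative only: the master in this walkthrough is named k8s-node1, and the two worker hostnames are assumed):

# kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
k8s-node1   Ready     master    10m       v1.10.0
k8s-node2   Ready     <none>    5m        v1.10.0
k8s-node3   Ready     <none>    5m        v1.10.0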

If the master and all nodes report a status of Ready, the k8s cluster is working properly.
