Kubernetes 1.11.2 Setup Notes (Part II)
Configure kubelet authentication
Authorize kube-apiserver to call the kubelet API (exec, run, logs, and so on).
# The RBAC binding only needs to be created once
kubectl create clusterrolebinding kube-apiserver:kubelet-apis \
  --clusterrole=system:kubelet-api-admin \
  --user kubernetes
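To confirm the binding grants what kube-apiserver needs, a quick check with impersonation can be used (a minimal sketch; it assumes the kubeconfig user running it is allowed to impersonate, e.g. a cluster-admin):

# Both should print "yes" once the clusterrolebinding above exists
kubectl auth can-i create nodes/proxy --as kubernetes
kubectl auth can-i get nodes/log --as kubernetes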
Create the bootstrap kubeconfig files
Note: a token is valid for 1 day. If it is not used to bootstrap a node within that time it expires and has to be recreated. Create a token for every kubelet in the cluster.
Remember to adjust the hostname in each command.
[root@master1 kubernetes]# kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:master1 --kubeconfig ~/.kube/config
of2phx.v39lq3ofeh0w6f3m
[root@master1 kubernetes]# kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:master2 --kubeconfig ~/.kube/config
b3stk9.edz2iylppqjo5qbc
[root@master1 kubernetes]# kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:master3 --kubeconfig ~/.kube/config
ck2uqr.upeu75jzjj1ko901
[root@master1 kubernetes]# kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:node1 --kubeconfig ~/.kube/config
1ocjm9.7qa3rd5byuft9gwr
[root@master1 kubernetes]# kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:node2 --kubeconfig ~/.kube/config
htsqn3.z9z6579gxw5jdfzd
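The five commands differ only in the group suffix; a minimal loop sketch (assuming the same hostnames as above) produces one token per host:

for host in master1 master2 master3 node1 node2; do
  kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${host} \
    --kubeconfig ~/.kube/config
done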
View the generated tokens
[root@master1 kubernetes]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
1ocjm9.7qa3rd5byuft9gwr   23h   2018-09-02T16:06:32+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:node1
b3stk9.edz2iylppqjo5qbc   23h   2018-09-02T16:03:46+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master2
ck2uqr.upeu75jzjj1ko901   23h   2018-09-02T16:05:16+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master3
htsqn3.z9z6579gxw5jdfzd   23h   2018-09-02T16:06:34+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:node2
of2phx.v39lq3ofeh0w6f3m   23h   2018-09-02T16:03:40+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:master1
To tell the files apart, each kubeconfig below is first generated as <hostname>-bootstrap.kubeconfig.
Generate the bootstrap.kubeconfig for master1
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=master1-bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=of2phx.v39lq3ofeh0w6f3m \
  --kubeconfig=master1-bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=master1-bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=master1-bootstrap.kubeconfig

# Move the generated master1-bootstrap.kubeconfig into place
mv master1-bootstrap.kubeconfig /etc/kubernetes/bootstrap.kubeconfig
Generate the bootstrap.kubeconfig for master2
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=master2-bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=b3stk9.edz2iylppqjo5qbc \
  --kubeconfig=master2-bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=master2-bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=master2-bootstrap.kubeconfig

# Copy the generated master2-bootstrap.kubeconfig to master2
scp master2-bootstrap.kubeconfig 192.168.161.162:/etc/kubernetes/bootstrap.kubeconfig
Generate the bootstrap.kubeconfig for master3
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=master3-bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=ck2uqr.upeu75jzjj1ko901 \
  --kubeconfig=master3-bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=master3-bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=master3-bootstrap.kubeconfig

# Copy the generated master3-bootstrap.kubeconfig to master3
scp master3-bootstrap.kubeconfig 192.168.161.163:/etc/kubernetes/bootstrap.kubeconfig
Generate the bootstrap.kubeconfig for node1
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=node1-bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=1ocjm9.7qa3rd5byuft9gwr \
  --kubeconfig=node1-bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=node1-bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=node1-bootstrap.kubeconfig

# Copy the generated node1-bootstrap.kubeconfig to node1
scp node1-bootstrap.kubeconfig 192.168.161.77:/etc/kubernetes/bootstrap.kubeconfig
Generate the bootstrap.kubeconfig for node2
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=node2-bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=htsqn3.z9z6579gxw5jdfzd \
  --kubeconfig=node2-bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=node2-bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=node2-bootstrap.kubeconfig

# Copy the generated node2-bootstrap.kubeconfig to node2
scp node2-bootstrap.kubeconfig 192.168.161.78:/etc/kubernetes/bootstrap.kubeconfig
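The five blocks above repeat the same four kubectl config commands with only the hostname, token and destination changing; a hedged sketch that generates them in one pass (the host/token/IP triples simply restate the values already used above; 192.168.161.161 for master1 is assumed from the nginx upstream list later in this post, and for the local machine a plain mv works instead of scp):

while read host token ip; do
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=${host}-bootstrap.kubeconfig
  kubectl config set-credentials kubelet-bootstrap \
    --token=${token} \
    --kubeconfig=${host}-bootstrap.kubeconfig
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=${host}-bootstrap.kubeconfig
  kubectl config use-context default --kubeconfig=${host}-bootstrap.kubeconfig
  scp ${host}-bootstrap.kubeconfig ${ip}:/etc/kubernetes/bootstrap.kubeconfig
done << 'EOF'
master1 of2phx.v39lq3ofeh0w6f3m 192.168.161.161
master2 b3stk9.edz2iylppqjo5qbc 192.168.161.162
master3 ck2uqr.upeu75jzjj1ko901 192.168.161.163
node1 1ocjm9.7qa3rd5byuft9gwr 192.168.161.77
node2 htsqn3.z9z6579gxw5jdfzd 192.168.161.78
EOF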
Configure bootstrap RBAC permissions
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --group=system:bootstrappers

# Without this binding, kubelet fails with an error like:
failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:1jezb7" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
Create the ClusterRole used to auto-approve the relevant CSR requests
vi /etc/kubernetes/tls-instructs-csr.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
rules:
  - apiGroups: ["certificates.k8s.io"]
    resources: ["certificatesigningrequests/selfnodeserver"]
    verbs: ["create"]

# Apply the yaml file
[root@master1 kubernetes]# kubectl apply -f /etc/kubernetes/tls-instructs-csr.yaml
clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeserver created

[root@master1 kubernetes]# kubectl describe ClusterRole/system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
Name:         system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"system:certificates.k8s.io:certificatesigningreq...
PolicyRule:
  Resources                                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                                       -----------------  --------------  -----
  certificatesigningrequests.certificates.k8s.io/selfnodeserver   []                 []              [create]
# Bind the ClusterRoles to the appropriate groups

# Auto-approve the first CSR from the system:bootstrappers group (TLS bootstrapping)
kubectl create clusterrolebinding node-client-auto-approve-csr \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --group=system:bootstrappers

# Auto-approve CSRs from the system:nodes group renewing the kubelet client certificate used to talk to the apiserver
kubectl create clusterrolebinding node-client-auto-renew-crt \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes

# Auto-approve CSRs from the system:nodes group renewing the kubelet serving certificate for the 10250 API port
kubectl create clusterrolebinding node-server-auto-renew-crt \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver \
  --group=system:nodes
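If a CSR was created before these bindings existed (or is otherwise not picked up by the auto-approvers), it can still be inspected and approved by hand; a minimal sketch:

# List certificate signing requests and their condition (Pending / Approved,Issued)
kubectl get csr
# Approve a pending request by the name shown in the first column (placeholder name)
kubectl certificate approve <csr-name>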
Node side
The components to deploy on each node are:
docker, calico, kubelet, kube-proxy
On the node side, master HA is achieved by load-balancing the API servers through Nginx.
# Among the master components, everything except the apiserver elects a leader through etcd, so nothing extra is needed for them; the apiserver itself gets no special handling by default.
# An nginx instance runs on every node, each one reverse-proxying all of the apiservers.
# kubelet and kube-proxy on a node connect to the local nginx proxy port.
# When nginx detects that a backend is unreachable it automatically drops the failing apiserver, which provides apiserver HA.
Create the Nginx proxy
An Nginx proxy must be created on every node. Note, however, that when a master also acts as a node, nginx-proxy is not needed on it.
# Create the config directory
mkdir -p /etc/nginx

# Write the proxy configuration
cat << EOF >> /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        least_conn;
        server 192.168.161.161:6443;
        server 192.168.161.162:6443;
    }

    server {
        listen                0.0.0.0:6443;
        proxy_pass            kube_apiserver;
        proxy_timeout         10m;
        proxy_connect_timeout 1s;
    }
}
EOF

# Fix permissions
chmod +r /etc/nginx/nginx.conf
# Nginx runs as a docker container, managed through a systemd unit
cat << EOF >> /etc/systemd/system/nginx-proxy.service
[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
                              -v /etc/nginx:/etc/nginx \\
                              --name nginx-proxy \\
                              --net=host \\
                              --restart=on-failure:5 \\
                              --memory=512M \\
                              nginx:1.13.7-alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s

[Install]
WantedBy=multi-user.target
EOF
Start Nginx
systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy
systemctl status nginx-proxy

journalctl -u nginx-proxy -f     ## follow the live log

Sep 01 17:34:55 node1 docker[4032]: 1.13.7-alpine: Pulling from library/nginx
Sep 01 17:34:57 node1 docker[4032]: 128191993b8a: Pulling fs layer
Sep 01 17:34:57 node1 docker[4032]: 655cae3ea06e: Pulling fs layer
Sep 01 17:34:57 node1 docker[4032]: dbc72c3fd216: Pulling fs layer
Sep 01 17:34:57 node1 docker[4032]: f391a4589e37: Pulling fs layer
Sep 01 17:34:57 node1 docker[4032]: f391a4589e37: Waiting
Sep 01 17:35:03 node1 docker[4032]: dbc72c3fd216: Verifying Checksum
Sep 01 17:35:03 node1 docker[4032]: dbc72c3fd216: Download complete
Sep 01 17:35:07 node1 docker[4032]: f391a4589e37: Verifying Checksum
Sep 01 17:35:07 node1 docker[4032]: f391a4589e37: Download complete
Sep 01 17:35:15 node1 docker[4032]: 128191993b8a: Verifying Checksum
Sep 01 17:35:15 node1 docker[4032]: 128191993b8a: Download complete
Sep 01 17:35:17 node1 docker[4032]: 128191993b8a: Pull complete
Sep 01 17:35:50 node1 docker[4032]: 655cae3ea06e: Verifying Checksum
Sep 01 17:35:50 node1 docker[4032]: 655cae3ea06e: Download complete
Sep 01 17:35:51 node1 docker[4032]: 655cae3ea06e: Pull complete
Sep 01 17:35:51 node1 docker[4032]: dbc72c3fd216: Pull complete
Sep 01 17:35:51 node1 docker[4032]: f391a4589e37: Pull complete
Sep 01 17:35:51 node1 docker[4032]: Digest: sha256:34aa80bb22c79235d466ccbbfa3659ff815100ed21eddb1543c6847292010c4d
Sep 01 17:35:51 node1 docker[4032]: Status: Downloaded newer image for nginx:1.13.7-alpine
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: using the "epoll" event method
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: nginx/1.13.7
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: built by gcc 6.2.1 20160822 (Alpine 6.2.1)
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: OS: Linux 3.10.0-514.el7.x86_64
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: start worker processes
Sep 01 17:35:54 node1 docker[4032]: 2018/09/01 09:35:54 [notice] 1#1: start worker process 5
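A quick way to confirm the local proxy really forwards to the apiservers (a sketch; with anonymous access disabled the apiserver typically answers 401 Unauthorized, which is still enough to prove the nginx → apiserver path works):

# Any HTTP response from the apiserver (even 401) means nginx reached a backend
curl -k https://127.0.0.1:6443/healthz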
Create the kubelet.service file
Note: change the hostname for each node in the unit file below.
# Create the kubelet working directory
mkdir -p /var/lib/kubelet

vi /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
    --hostname-override=node1 \
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:pause-amd64_3.1 \
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --config=/etc/kubernetes/kubelet.config.json \
    --cert-dir=/etc/kubernetes/ssl \
    --logtostderr=true \
    --v=2

[Install]
WantedBy=multi-user.target
Create the kubelet config file
vi /etc/kubernetes/kubelet.config.json

{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.161.77",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "rotateCertificates": true,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "maxPods": 512,
  "failSwapOn": false,
  "containerLogMaxSize": "10Mi",
  "containerLogMaxFiles": 5,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.254.0.2"]
}

## Remember to change the address above on the other node(s)
# About the configuration above:
#   node1 is the local hostname.
#   10.254.0.2 is the pre-allocated cluster DNS address.
#   cluster.local. is the domain of the kubernetes cluster.
#   registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:pause-amd64_3.1 is the pod infra ("pause") image, i.e. gcr.io/google_containers/pause-amd64:3.1; pulling it once and pushing it to your own registry is faster.
#   "clusterDNS": ["10.254.0.2"] may list several DNS addresses, separated by commas, and may include the host's DNS.
# Adjust the other node(s) in the same way.
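If you want the pause image in your own registry as suggested above, the usual pull/tag/push round trip is enough (a sketch; registry.example.com is a placeholder for your registry, and the pull has to go through a mirror reachable from your network):

# Pull the upstream pause image (or a reachable mirror of it), retag it and push it to your registry
docker pull gcr.io/google_containers/pause-amd64:3.1
docker tag gcr.io/google_containers/pause-amd64:3.1 registry.example.com/library/pause-amd64:3.1
docker push registry.example.com/library/pause-amd64:3.1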
Start kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

journalctl -u kubelet -f
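After kubelet starts, its bootstrap CSR should be auto-approved within seconds and the node should register; a quick check from a master (the node may stay NotReady until the CNI plugin is configured further down):

kubectl get csr      # the bootstrap CSR should show Approved,Issued
kubectl get nodes    # the new node should be listed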
Create the kube-proxy certificate
# cfssl is not installed on the nodes,
# so go back to a master machine to generate the certificates and copy them over.
cd /opt/ssl

vi kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# Generate the kube-proxy certificate and private key
/opt/local/cfssl/cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/opt/ssl/config.json \
  -profile=kubernetes kube-proxy-csr.json | /opt/local/cfssl/cfssljson -bare kube-proxy

# Check what was generated
ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

# Copy into place and out to the nodes
cp kube-proxy* /etc/kubernetes/ssl/
scp ca.pem kube-proxy* 192.168.161.77:/etc/kubernetes/ssl/
scp ca.pem kube-proxy* 192.168.161.78:/etc/kubernetes/ssl/
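The apiserver maps this certificate's CN/O to RBAC identities, so it is worth confirming the subject before copying it around (a small openssl check):

# The subject should contain O=k8s and CN=system:kube-proxy
openssl x509 -noout -subject -in kube-proxy.pem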
Create the kube-proxy kubeconfig file
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-proxy.kubeconfig

# Set client credentials
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Copy to the nodes that need it
scp kube-proxy.kubeconfig 192.168.161.77:/etc/kubernetes/
scp kube-proxy.kubeconfig 192.168.161.78:/etc/kubernetes/
Create the kube-proxy.service file
Since 1.10, ipvs mode has been officially supported (and is used here). --masquerade-all must be enabled, otherwise no ipvs rules are added when a Service is created.
Enabling ipvs requires the ipvsadm, ipset and conntrack tools, installed on each node:
yum install ipset ipvsadm conntrack-tools.x86_64 -y
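Besides the userspace tools, the ip_vs kernel modules need to be loaded; a minimal sketch for CentOS 7 (the module names are the standard ones ipvs mode depends on; on this 3.10 kernel the conntrack module is nf_conntrack_ipv4):

# Load the ipvs-related kernel modules now and on every boot
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4   # confirm the modules are loaded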
The parameters of the YAML config file are defined in:
https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/kubeproxyconfig/types.go
cd /etc/kubernetes/

vi kube-proxy.config.yaml

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.161.77
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.254.64.0/18
healthzBindAddress: 192.168.161.77:10256
hostnameOverride: node1          ## remember to change the hostname here
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.161.77:10249
mode: "ipvs"
# Create the kube-proxy working directory
mkdir -p /var/lib/kube-proxy

vi /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
    --config=/etc/kubernetes/kube-proxy.config.yaml \
    --logtostderr=true \
    --v=1

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
Check the ipvs rules
[root@node1 kubernetes]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.161.161:6443         Masq    1      0          0
  -> 192.168.161.162:6443         Masq    1      0          0
Configure the Calico network
Official documentation: https://docs.projectcalico.org/v3.1/introduction
Download the Calico YAML
# Download the yaml files
wget http://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml
wget http://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml
Download the images
# Download the images

# Upstream images (blocked in some regions)
quay.io/calico/node:v3.1.3
quay.io/calico/cni:v3.1.3
quay.io/calico/kube-controllers:v3.1.3

# Docker Hub mirror
jicki/node:v3.1.3
jicki/cni:v3.1.3
jicki/kube-controllers:v3.1.3

# Aliyun mirror
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:node_v3.1.3
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:cni_v3.1.3
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:kube-controllers_v3.1.3

# Replace the image references in the yaml
sed -i 's/quay\.io\/calico/jicki/g' calico.yaml
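After the sed replacement it is worth confirming that every image reference in calico.yaml now points at the mirror:

# All image: lines should reference jicki/... instead of quay.io/calico/...
grep 'image:' calico.yaml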
Modify the configuration
vi calico.yaml

# Pay attention to the following settings:

# etcd endpoints
etcd_endpoints: "https://192.168.161.161:2379,https://192.168.161.162:2379,https://192.168.161.163:2379"

# etcd certificate paths
# If you're using TLS enabled etcd uncomment the following.
# You must also populate the Secret below with these files.
etcd_ca: "/calico-secrets/etcd-ca"
etcd_cert: "/calico-secrets/etcd-cert"
etcd_key: "/calico-secrets/etcd-key"

# base64-encoded etcd certificates (run the commands in parentheses and paste the resulting base64 output)
data:
  etcd-key: (cat /etc/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n')
  etcd-cert: (cat /etc/kubernetes/ssl/etcd.pem | base64 | tr -d '\n')
  etcd-ca: (cat /etc/kubernetes/ssl/ca.pem | base64 | tr -d '\n')
## Drop the parentheses above; only the generated base64 strings go into the file.

# The IP range allocated to pods
- name: CALICO_IPV4POOL_CIDR
  value: "10.254.64.0/18"
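The three base64 strings can be generated in one go and then pasted into the data: section of the Secret (a minimal sketch using the same certificate paths as above):

echo "etcd-key:  $(cat /etc/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n')"
echo "etcd-cert: $(cat /etc/kubernetes/ssl/etcd.pem | base64 | tr -d '\n')"
echo "etcd-ca:   $(cat /etc/kubernetes/ssl/ca.pem | base64 | tr -d '\n')"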
Check the services
[root@master1 kubernetes]# kubectl get po -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE
calico-kube-controllers-79cfd7887-xbsd4   1/1     Running   5          11d   192.168.161.77   node1   <none>
calico-node-2545t                         2/2     Running   0          29m   192.168.161.78   node2   <none>
calico-node-tbptz                         2/2     Running   7          11d   192.168.161.77   node1   <none>

[root@master1 kubernetes]# kubectl get nodes -o wide
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
node1   Ready    <none>   11d   v1.11.2   192.168.161.77   <none>        CentOS Linux 7 (Core)   3.10.0-514.el7.x86_64   docker://17.3.2
node2   Ready    <none>   29m   v1.11.2   192.168.161.78   <none>        CentOS Linux 7 (Core)   3.10.0-514.el7.x86_64   docker://17.3.2
Modify the kubelet configuration
Both node machines need this change.
# kubelet needs the CNI plugin enabled: add --network-plugin=cni
vim /etc/systemd/system/kubelet.service

    --network-plugin=cni \

# Reload and restart
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet.service
Check cross-node connectivity:
[root@node1 ~]# ifconfig
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 10.254.102.128  netmask 255.255.255.255
        tunnel   txqueuelen 1  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@node2 ~]# ifconfig
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 10.254.75.0  netmask 255.255.255.255
        tunnel   txqueuelen 1  (IPIP Tunnel)
        RX packets 2  bytes 168 (168.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 168 (168.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Ping node1's tunnel address directly from node2:

[root@node2 ~]# ping 10.254.102.128
PING 10.254.102.128 (10.254.102.128) 56(84) bytes of data.
64 bytes from 10.254.102.128: icmp_seq=1 ttl=64 time=72.3 ms
64 bytes from 10.254.102.128: icmp_seq=2 ttl=64 time=0.272 ms
Install calicoctl
calicoctl is the management client for the Calico network; it only needs to be set up on one node.

# Download the binary
curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.1.3/calicoctl
mv calicoctl /usr/local/bin/
chmod +x /usr/local/bin/calicoctl

# Create the calicoctl.cfg configuration file
mkdir /etc/calico

vim /etc/calico/calicoctl.cfg

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"

# Check the calico status
[root@node1 src]# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.161.78 | node-to-node mesh | up    | 06:54:19 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@node1 src]# calicoctl get node   ## this is run on a node; nodes do not have /root/.kube/config, so simply copy it over from a master
NAME
node1
node2