
CentOS 7.5 Kubernetes/k8s 1.10 offline installation



This article describes a quick offline installation of Kubernetes 1.10 on CentOS 7.5 using kubeadm.
It uses a single master and a single node (more nodes can be added) and is not suitable for production.
The required files are available from Baidu Netdisk:
Link: https://pan.baidu.com/s/1iQJpKZ9PdFjhz9yTgl0Wjg Password: gwmh

  1. Environment preparation

    1. Server planning

Hostname    IP            Spec

master1     10.10.0.216   4C 8G

node1       10.10.0.215   4C 8G

node2       10.10.0.214   4C 8G

node3       10.10.0.212   4C 8G

node4       10.10.0.210   4C 8G

Prepare at least 2 virtual machines: 1 master, the rest as nodes.
OS: CentOS 7.5 minimal install. Configure the IP address of each node.
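
If the nodes were installed without static IPs, one common way on CentOS 7 is to edit the ifcfg file of the interface. This is only a minimal sketch: the interface name ens33, the gateway and the DNS server are assumptions and must be replaced with your own values.

# /etc/sysconfig/network-scripts/ifcfg-ens33   (interface name is an assumption)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=10.10.0.216    # the IP planned for this node
PREFIX=24
GATEWAY=10.10.0.1     # assumed gateway
DNS1=10.10.0.1        # assumed DNS server

systemctl restart network   # apply the new configuration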

  1. Time zone configuration

Set the time zone on each node (master1, node1, ...):

timedatectl set-timezone Asia/Shanghai # run on all nodes
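
To confirm the change, the current settings can be checked on each node:

timedatectl   # Time zone should show Asia/Shanghai
date          # prints the local time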

  1. Hostname configuration

hostnamectl set-hostname master1 # run on master1

hostnamectl set-hostname node1 # run on node1

  1. hosts resolution

Add the following entries to /etc/hosts on all nodes (master1, node1, ...):

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.10.0.217 reg.foresee.cn

10.10.0.216 master1

10.10.0.215 node1

10.10.0.214 node2

10.10.0.212 node3

10.10.0.210 node4
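
A quick way to confirm the entries resolve on every node (a simple sanity check using the hostnames above):

getent hosts master1 node1   # should print the IPs from /etc/hosts
ping -c 1 master1            # should reach 10.10.0.216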

  1. Disable SELinux on all nodes

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

setenforce 0

  1. Disable firewalld on all nodes

systemctl disable firewalld

systemctl stop firewalld
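
A few quick checks for both changes (getenforce still reports Permissive until the next reboot, because the config file change only takes effect at boot):

getenforce                       # Permissive now, Disabled after a reboot
systemctl is-enabled firewalld   # should print disabled
firewall-cmd --state             # should print "not running"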

  1. Process and file-handle limits

Edit /etc/security/limits.conf to raise the open file handle limit (needed by WebLogic applications):

vi /etc/security/limits.conf

* hard nofile 102400

* soft nofile 102400

* hard nproc 2067531

* soft nproc 2067531

After making the change, exit and log back in, then use ulimit -a to check the open files and max user processes values.

  1. User process limit tuning

Edit /etc/security/limits.d/20-nproc.conf to raise the maximum number of user processes (max user processes):

vi /etc/security/limits.d/20-nproc.conf

* soft nproc 2067531

* hard nproc 2067531
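
After logging out and back in, the new limits can be verified; the values should match what was configured above:

ulimit -n    # open files, expect 102400
ulimit -u    # max user processes, expect 2067531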

  1. Install Docker

    1. Install Docker

Use the file docker-packages.tar; install it on every node.

tar -xvf docker-packages.tar

cd docker-packages

rm -rf audit-* libsemanage*

rm -rf policycoreutils-*

yum install audit-libs-python

yum install libsemanage-python

rpm -Uvh *   # or install with: yum localinstall *.rpm

docker version # check the version after installation

  1. Start Docker and enable it at boot

systemctl start docker && systemctl enable docker

Run docker info and note the Cgroup Driver, for example:
Cgroup Driver: cgroupfs
The cgroup drivers of Docker and kubelet must match; if Docker is not using cgroupfs, run:

cat << EOF > /etc/docker/daemon.json

{

"exec-opts": ["native.cgroupdriver=cgroupfs"]

}

EOF

systemctl daemon-reload && systemctl restart docker
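
After the restart, Docker should report cgroupfs; a quick check:

systemctl is-active docker                          # should print active
docker info 2>/dev/null | grep -i "cgroup driver"   # expect: Cgroup Driver: cgroupfs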

  1. Install kubeadm, kubectl and kubelet

Use the file kube-packages-1.10.1.tar; install it on every node.
kubeadm is the cluster deployment tool.
kubectl is the cluster management CLI used to administer the cluster.
kubelet is the agent that runs on every node and manages its containers through Docker.

tar -xvf kube-packages-1.10.1.tar

cd kube-packages-1.10.1

rpm -Uvh *   # or install with: yum localinstall *.rpm
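
To confirm the packages were installed, the versions can be checked on each node; all should report 1.10.1:

kubeadm version
kubelet --version
kubectl version --client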

On all Kubernetes nodes, configure kubelet to use cgroupfs so it matches dockerd; otherwise kubelet will fail to start.

By default kubelet uses cgroup-driver=systemd; change it to cgroup-driver=cgroupfs:

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Reload systemd and restart the kubelet service:

systemctl daemon-reload && systemctl restart kubelet

Disable swap and adjust the iptables bridge settings, otherwise kubeadm will report errors later:

swapoff -a

vi /etc/fstab # comment out the swap line

cat <<EOF > /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sysctl --system
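
A quick verification that swap is off and the bridge settings are active (if the sysctl keys are not found, loading the module first with modprobe br_netfilter usually helps):

free -m | grep -i swap                        # Swap total/used should be 0
sysctl net.bridge.bridge-nf-call-iptables     # expect = 1
sysctl net.bridge.bridge-nf-call-ip6tables    # expect = 1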

  1. Import the images

Use the file k8s-images-1.10.tar.gz; run this on every node.
With only a few nodes, no image registry is set up; any application images needed later must be imported on every node.

docker load -i k8s-images-1.10.tar.gz

There are 11 images in total:

k8s.gcr.io/etcd-amd64:3.1.12

k8s.gcr.io/kube-apiserver-amd64:v1.10.1

k8s.gcr.io/kube-controller-manager-amd64:v1.10.1

k8s.gcr.io/kube-proxy-amd64:v1.10.1

k8s.gcr.io/kube-scheduler-amd64:v1.10.1

k8s.gcr.io/pause-amd64:3.1

k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8

k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8

k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8

k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

quay.io/coreos/flannel:v0.9.1-amd64
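
As a quick sanity check, the imported images can be counted and listed on each node (they should match the 11 images above):

docker images | grep -cE "k8s.gcr.io|quay.io"   # expect 11
docker images | grep -E "k8s.gcr.io|quay.io"    # list the imported images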

  1. Deploy the master node with kubeadm init

Run this on the master only. The simplest and quickest deployment option is used here: etcd, the API server, controller-manager and scheduler all run as containers on the master. etcd is a single instance without certificates, and its data is mounted at /var/lib/etcd on the master node.
kubeadm init also supports an etcd cluster with certificates; see my 1.9 document for that configuration, which is skipped here.

  1. Run kubeadm init on the master

Note that the init command must specify the Kubernetes version and the pod network CIDR:

kubeadm init --kubernetes-version=v1.10.1 --pod-network-cidr=10.244.0.0/16

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node

as root:

kubeadm join 10.10.0.216:6443 --token xzwmtu.q2jpdv4jbbf1rz4m --discovery-token-ca-cert-hash sha256:72a619e304818c75dcd6cabd250bf967bec739668a61082af437ce1df1cb4fe1

Write down the join command; it will be needed later when the nodes join the cluster.

  1. Save the kubeconfig

Run the commands shown in the output to save the kubeconfig:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
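
With the kubeconfig saved, kubectl should now reach the API server; a quick check:

kubectl cluster-info              # should show the master at https://10.10.0.216:6443
kubectl get componentstatuses     # scheduler, controller-manager and etcd should report Healthy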

  1. Check the nodes

At this point kubectl get node already shows the master node; it is NotReady because the network plugin has not been deployed yet.

[root@master1 kubernetes1.10]# kubectl get node

NAME STATUS ROLES AGE VERSION

master1 NotReady master 3m v1.10.1

  1. Check all pods

List all pods with kubectl get pod --all-namespaces.
kube-dns also depends on the container network, so Pending is normal at this stage.

[root@master1 kubernetes1.10]# kubectl get pod --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system etcd-master1 1/1 Running 0 3m

kube-system kube-apiserver-master1 1/1 Running 0 3m

kube-system kube-controller-manager-master1 1/1 Running 0 3m

kube-system kube-dns-86f4d74b45-5nrb5 0/3 Pending 0 4m

kube-system kube-proxy-ktxmb 1/1 Running 0 4m

kube-system kube-scheduler-master1 1/1 Running 0 3m

  1. Configure the KUBECONFIG variable

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile

source /etc/profile

echo $KUBECONFIG # should print /etc/kubernetes/admin.conf

  1. Deploy the flannel network

Kubernetes supports several network solutions, such as flannel, Calico and Open vSwitch.
flannel is chosen here. Once you are familiar with deploying Kubernetes you can try other network solutions; my other article on the 1.9 deployment covers both flannel and Calico, and the steps needed to switch between them.

kubectl apply -f kube-flannel.yml

Once the network is ready, the node status changes to Ready:

[root@master1 kubernetes1.10]# kubectl get node

NAME STATUS ROLES AGE VERSION

master1 Ready master 18m v1.10.1
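
The flannel and kube-dns pods in kube-system should also reach Running once the network is up; a quick check:

kubectl get pod -n kube-system -o wide | grep -E "flannel|dns"   # all listed pods should be Running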

  1. Add nodes with kubeadm join

    1. Join nodes to the cluster

Use the join command generated earlier by kubeadm init. After joining, go back to the master node and check whether it succeeded:

kubeadm join 10.10.0.216:6443 --token wct45y.tq23fogetd7rp3ck --discovery-token-ca-cert-hash sha256:c267e2423dba21fdf6fc9c07e3b3fa17884c4f24f0c03f2283a230c70b07772f

[root@master1 kubernetes1.10]# kubectl get node

NAME STATUS ROLES AGE VERSION

master1 Ready master 31m v1.10.1

node1 Ready <none> 44s v1.10.1

At this point the cluster deployment is complete.

  1. If an x509 error occurs

This step is only needed if the error below appears; otherwise skip it.
It happens because the KUBECONFIG variable is missing on the master node.

[discovery] Failed to request cluster info, will try again: [Get https://10.10.0.216:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]

Run on the master node:

export KUBECONFIG=$HOME/.kube/config

On the node, run kubeadm reset and then join again:

kubeadm reset

kubeadm join xxx ...

  1. Joining a node if the join command was lost

Skip this step if the node has already joined successfully.
Use case: the join command generated by kubeadm init above was not saved; nodes can still be added as follows.
First, get the token on the master node. If kubeadm token list returns nothing, create a new one with kubeadm token create and record the token.

[root@master1 kubernetes1.10]# kubeadm token list

TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS

wct45y.tq23fogetd7rp3ck 22h 2018-04-26T21:38:57+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token

Run the following on the node, replacing the token with yours:

kubeadm join --token wct45y.tq23fogetd7rp3ck 10.10.0.216:6443 --discovery-token-unsafe-skip-ca-verification
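
If you prefer not to skip CA verification, the discovery-token-ca-cert-hash can be recomputed on the master from the CA certificate with a standard openssl pipeline, then used in a full join command (replace the token and hash with your own values):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

kubeadm join 10.10.0.216:6443 --token wct45y.tq23fogetd7rp3ck --discovery-token-ca-cert-hash sha256:<hash-from-the-command-above>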

  1. Deploy the Kubernetes UI (dashboard)

The dashboard is the official Kubernetes management UI; it can display application information and deploy applications. Its display language is detected automatically from the browser.
The official dashboard defaults to HTTPS, which Chrome refuses to open; this deployment was modified to use HTTP for convenience, and Chrome can then access it normally.
Three yaml files need to be applied:

kubectl apply -f kubernetes-dashboard-http.yaml

kubectl apply -f admin-role.yaml

kubectl apply -f kubernetes-dashboard-admin.rbac.yaml

[root@master1 kubernetes1.10]# kubectl apply -f kubernetes-dashboard-http.yaml

serviceaccount "kubernetes-dashboard" created

role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created

rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created

deployment.apps "kubernetes-dashboard" created

service "kubernetes-dashboard" created

[root@master1 kubernetes1.10]# kubectl apply -f admin-role.yaml

clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created

[root@master1 kubernetes1.10]# kubectl apply -f kubernetes-dashboard-admin.rbac.yaml

clusterrolebinding.rbac.authorization.k8s.io "dashboard-admin" created

Once everything is created, the UI is reachable at http://<any-node-IP>:31000.
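
To confirm the dashboard is running and exposed on port 31000, the service and pod can be checked (assuming the yaml places them in the kube-system namespace, as is typical for these manifests):

kubectl -n kube-system get svc kubernetes-dashboard          # NodePort should show 31000
kubectl -n kube-system get pod | grep kubernetes-dashboard   # should be Running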

  1. FAQ

    1. kubectl command completion

root@master1:/# vim /etc/profile # add the line below, then source the file

source <(kubectl completion bash)

root@master1:/# source /etc/profile

  1. The master node does not schedule pods by default

Run the following; the node-role.kubernetes.io/master taint can be found under the taints section of kubectl edit node master1.

root@master1:/var/lib/kubelet# kubectl taint node master1 node-role.kubernetes.io/master-

node "master1" untainted

  1. Pods fail to start on a node / resetting a node's network after removal

    1. node1 had been added and removed repeatedly; clean up its network before adding it again

root@master1:/var/lib/kubelet# kubectl get po -o wide

NAME READY STATUS RESTARTS AGE IP NODE

nginx-8586cf59-6zw9k 1/1 Running 0 9m 10.244.3.3 node2

nginx-8586cf59-jk5pc 0/1 ContainerCreating 0 9m <none> node1

nginx-8586cf59-vm9h4 0/1 ContainerCreating 0 9m <none> node1

nginx-8586cf59-zjb84 1/1 Running 0 9m 10.244.3.2 node2

root@node1:~# journalctl -u kubelet

failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-8586cf59-rm4sh_default" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24

12252 cni.go:227] Error while adding to cni network: failed to set bridge addr: "cni0" already

  1. Reset the Kubernetes service and network; delete the network configuration and links

kubeadm reset

systemctl stop kubelet

systemctl stop docker

rm -rf /var/lib/cni/

rm -rf /var/lib/kubelet/*

rm -rf /etc/cni/

ifconfig cni0 down

ifconfig flannel.1 down

ifconfig docker0 down

ip link delete cni0

ip link delete flannel.1

systemctl start docker

  1. Rejoin the node

systemctl start docker

kubeadm join --token 55c2c6.2a4bde1bc73a6562 192.168.1.144:6443 --discovery-token-ca-cert-hash sha256:0fdf8cfc6fecc18fded38649a4d9a81d043bf0e4bf57341239250dcc62d2c832
