Installing Kubernetes
Background
I have been learning Kubernetes (k8s) on my own, but being broke, I have no budget for a VPN service, and installing k8s was taking far too long. So that others can get a cluster up quickly, I spent some time putting this post together; it describes a way to install a k8s cluster smoothly without having to get around the Great Firewall.
Host Environment
Host, IP, and Network Planning
HOSTNAME | IP
master   | 10.4.3.91
node1    | 10.4.3.81
node2    | 10.4.3.82
The k8s pod network uses 10.244.0.0/16, with flannel as the network add-on.
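A note on why 10.244.0.0/16 specifically: the stock kube-flannel.yml applied later in this post ships a net-conf.json whose Network field defaults to that subnet, so the --pod-network-cidr passed to kubeadm init must match it. A quick way to check (a sketch; the grep pattern and the exact output depend on the flannel manifest version):
# Sketch: confirm the Network value in flannel's manifest matches the planned pod CIDR
curl -s https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml | grep -A 2 '"Network"'
# expected to contain something like: "Network": "10.244.0.0/16"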
Hostname Setup
Here we use hostnamectl on CentOS 7 to set the hostnames. For other CentOS versions, see: https://www.cnblogs.com/zhaojiedi1992/p/zhaojiedi_linux_043_hostname.html
# on the master node
hostnamectl set-hostname master && exec bash
# on node1
hostnamectl set-hostname node1 && exec bash
# on node2
hostnamectl set-hostname node2 && exec bash
hosts File Setup
[root@master ~]# vim /etc/hosts
# add the following 3 lines
10.4.3.91 master
10.4.3.81 node1
10.4.3.82 node2
# repeat the same on the other 2 node machines
Firewall and SELinux Setup
[root@master ~]# sed -i "s/^SELINUX\=enforcing/SELINUX\=disabled/g" /etc/selinux/config
[root@master ~]# setenforce 0
setenforce: SELinux is disabled
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
# repeat the same on the other 2 node machines
Enabling Kernel Parameters
[root@master k8s_images]# echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
[root@master k8s_images]# echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
[root@master k8s_images]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
[root@master k8s_images]# sysctl -p
# repeat the same on the other 2 node machines
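If sysctl -p complains that the net.bridge.* keys do not exist, the br_netfilter module is probably not loaded yet. A hedged sketch of loading it first (this step is an addition, not part of the original procedure):
# Only needed if the bridge-nf keys are missing (assumes br_netfilter is built as a module)
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load it again on boot
sysctl -p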
Repository Preparation
# back up the old repo files
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo
[root@master yum.repos.d]# mkdir bak
[root@master yum.repos.d]# mv *.repo bak
[root@master yum.repos.d]# ls bak
# download the base and epel repos
[root@master yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@master yum.repos.d]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# add the k8s repo
[root@master yum.repos.d]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# repeat the same on the other 2 node machines
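Optionally, rebuild the yum cache so the freshly added repos are picked up immediately; a small optional sketch:
# Optional: refresh the yum metadata for the new repos
[root@master yum.repos.d]# yum clean all
[root@master yum.repos.d]# yum makecache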
Installing k8s
Installing Docker and the k8s Packages
[root@master yum.repos.d]# yum install docker kubelet kubeadm kubectl
[root@master yum.repos.d]# systemctl enable kubelet && systemctl start kubelet
[root@master yum.repos.d]# systemctl enable docker && systemctl restart docker
# repeat the same on the other 2 node machines
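Note that a plain yum install takes whatever kubelet/kubeadm/kubectl version is newest in the repo, which may be newer than the v1.11.3 images prepared below. If you want the packages to match the image version exactly, one possible variant is the sketch below (the exact version strings available in the mirror may differ):
# Sketch: pin the packages to the version whose images will be pre-pulled (v1.11.3 in this post)
yum install -y kubelet-1.11.3 kubeadm-1.11.3 kubectl-1.11.3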
Docker Registry Mirror Configuration
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://mew8i5li.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
# repeat the same on the other 2 node machines
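After restarting Docker, you can confirm the mirror was picked up (the exact docker info output format varies with the Docker version):
# Verify the registry mirror is active
docker info | grep -A 1 "Registry Mirrors"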
Downloading the Required k8s Images via a Workaround
These images come from automated builds under my Docker Hub account. The idea is to pull the pre-built images and re-tag them with the k8s.gcr.io (Google) names, so that when we initialize the cluster, kubeadm no longer needs to pull any images from Google.
My Docker Hub: https://hub.docker.com/r/zhaojiedi1992
My GitHub repository: https://github.com/zhaojiedi1992/k8s_images
[root@master ~]# cd /root
[root@master ~]# mkdir git
[root@master ~]# cd git/
[root@master git]# git clone https://github.com/zhaojiedi1992/k8s_images.git
[root@master git]# cd k8s_images/
[root@master k8s_images]# ls
create_script.sh                      pull_image_from_dockerhub_v1.10.6.sh  README.md  v1.10.6
pull_image_from_dockerhub.template    pull_image_from_dockerhub_v1.10.7.sh  tmp.txt    v1.10.7
pull_image_from_dockerhub_v1.10.0.sh  pull_image_from_dockerhub_v1.10.8.sh  v1.10.0    v1.10.8
pull_image_from_dockerhub_v1.10.1.sh  pull_image_from_dockerhub_v1.11.0.sh  v1.10.1    v1.11
pull_image_from_dockerhub_v1.10.2.sh  pull_image_from_dockerhub_v1.11.1.sh  v1.10.2    v1.11.0
pull_image_from_dockerhub_v1.10.3.sh  pull_image_from_dockerhub_v1.11.2.sh  v1.10.3    v1.11.1
pull_image_from_dockerhub_v1.10.4.sh  pull_image_from_dockerhub_v1.11.3.sh  v1.10.4    v1.11.2
pull_image_from_dockerhub_v1.10.5.sh  pull_image_from_dockerhub_v1.11.sh    v1.10.5    v1.11.3
[root@master k8s_images]# chmod a+x *.sh
# list the images required by the k8s version being installed
[root@master k8s_images]# kubeadm config images list --kubernetes-version=v1.11.3
k8s.gcr.io/kube-apiserver-amd64:v1.11.3
k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
k8s.gcr.io/kube-scheduler-amd64:v1.11.3
k8s.gcr.io/kube-proxy-amd64:v1.11.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3
# check that the images in the script match the ones that need to be pulled
[root@master k8s_images]# cat ./pull_image_from_dockerhub_v1.11.3.sh
#!/bin/bash
gcr_name=k8s.gcr.io
myhub_name=zhaojiedi1992

# define images
images=(
kube-apiserver-amd64:v1.11.3
kube-controller-manager-amd64:v1.11.3
kube-scheduler-amd64:v1.11.3
kube-proxy-amd64:v1.11.3
pause:3.1
etcd-amd64:3.2.18
coredns:1.1.3
)

for image in ${images[@]}; do
    docker pull $myhub_name/$image
    docker tag $myhub_name/$image $gcr_name/$image
    docker rmi $myhub_name/$image
done
# once the above looks correct, start the download
[root@master k8s_images]# ./pull_image_from_dockerhub_v1.11.3.sh
[root@master k8s_images]# docker image ls
REPOSITORY                                 TAG       IMAGE ID       CREATED             SIZE
k8s.gcr.io/pause                           3.1       24440bb35d05   About an hour ago   742 kB
k8s.gcr.io/kube-proxy-amd64                v1.11.3   763b3c45ccd2   4 hours ago         97.8 MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.3   8434ffab1549   5 hours ago         56.8 MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.3   3b0d0349c534   5 hours ago         155 MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.3   306b76250de9   6 hours ago         187 MB
k8s.gcr.io/coredns                         1.1.3     6b777875393d   6 hours ago         45.6 MB
k8s.gcr.io/etcd-amd64                      3.2.18    7dc1bb5c1af1   6 hours ago         219 MB
# repeat the same on the other 2 node machines
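Before running kubeadm init, an optional sanity check is to loop over the list kubeadm expects and confirm that each image is now present locally; a small sketch:
# Sketch: confirm every image kubeadm expects for v1.11.3 is already available locally
for image in $(kubeadm config images list --kubernetes-version=v1.11.3); do
    if docker image inspect "$image" >/dev/null 2>&1; then
        echo "OK      $image"
    else
        echo "MISSING $image"
    fi
done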
Initializing k8s
[root@master k8s_images]# kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.11.3
... (a large amount of output omitted) ...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.4.3.91:6443 --token 1ccx3e.jwbm8pbaq1awiz2z --discovery-token-ca-cert-hash sha256:838517f2d09d04d8ab1d736466311e32db26d2c5a9286fec37204b2de7923a67
Client Setup
Here we set up the kubectl client configuration directly on the master node.
[root@master k8s_images]# mkdir -p $HOME/.kube
[root@master k8s_images]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite ‘/root/.kube/config’? y
[root@master k8s_images]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master k8s_images]# echo "kubeadm join 10.4.3.91:6443 --token 1ccx3e.jwbm8pbaq1awiz2z --discovery-token-ca-cert-hash sha256:838517f2d09d04d8ab1d736466311e32db26d2c5a9286fec37204b2de7923a67" > /root/k8s.json
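At this point kubectl should be able to reach the API server; a quick optional sanity check:
# Optional: confirm kubectl can talk to the cluster
[root@master k8s_images]# kubectl cluster-info
[root@master k8s_images]# kubectl get componentstatuses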
Installing the flannel Network Add-on
[root@master k8s_images]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
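You can watch the flannel DaemonSet pods roll out before joining the worker nodes (the DaemonSet and pod names differ slightly between flannel versions):
# Check that the flannel DaemonSet pods reach the Running state
kubectl -n kube-system get daemonset
kubectl -n kube-system get pods -o wide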
node1 Joins the Cluster
[root@node1 k8s_images]# kubeadm join 10.4.3.91:6443 --token 1ccx3e.jwbm8pbaq1awiz2z --discovery-token-ca-cert-hash sha256:838517f2d09d04d8ab1d736466311e32db26d2c5a9286fec37204b2de7923a67
This command comes from the output of kubeadm init on the master node; it was saved above to /root/k8s.json on the master.
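The token in that command eventually expires (24 hours by default). If it has expired by the time another node needs to join, a fresh join command can be generated on the master; a hedged sketch for kubeadm of this era:
# Run on the master: prints a new join command with a freshly created token
kubeadm token create --print-join-command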
Checking Cluster Status
[root@master k8s_images]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   17m   v1.11.3
node1    Ready    <none>   8m    v1.11.3
[root@master k8s_images]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-5zr25         1/1     Running   0          17m
coredns-78fcdf6894-82v6w         1/1     Running   0          17m
etcd-master                      1/1     Running   0          7m
kube-apiserver-master            1/1     Running   0          7m
kube-controller-manager-master   1/1     Running   0          7m
kube-flannel-ds-amd64-5s962      1/1     Running   0          4m
kube-flannel-ds-amd64-s2t5b      1/1     Running   0          4m
kube-proxy-ccvdd                 1/1     Running   0          17m
kube-proxy-p2fbl                 1/1     Running   0          8m
kube-scheduler-master            1/1     Running   0          7m
It takes a little while before everything shows Running. And with that, the k8s cluster installation is complete.
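As a final smoke test, you could schedule a throwaway workload and confirm its pods get placed and run; a minimal sketch (the deployment name nginx-test and the nginx image are just examples):
# Sketch: create a test deployment, check scheduling, then clean up
kubectl run nginx-test --image=nginx --replicas=2   # on k8s 1.11, kubectl run creates a Deployment
kubectl get pods -o wide
kubectl delete deployment nginx-test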