Installing Kubernetes 1.7/1.8/1.9 with kubeadm
This article uses v1.7.2 as the example.
0 Environment
Hostname | IP
---|---
k8s-master | 172.16.120.151
k8s-node01 | 172.16.120.152
k8s-node02 | 172.16.120.153
To pin the VMware VM IPs on macOS, edit the DHCP configuration:
sudo vi /Library/Preferences/VMware\ Fusion/vmnet8/dhcpd.conf
and append at the end of the file:
host CentOS01{
hardware ethernet 00:0C:29:15:5C:F1;
fixed-address 172.16.120.151;
}
host CentOS02{
hardware ethernet 00:0C:29:D1:C4:9A;
fixed-address 172.16.120.152;
}
host CentOS03{
hardware ethernet 00:0C:29:C2:A6:93;
fixed-address 172.16.120.153;
}
- CentOS01 is the name of the VM whose IP is being pinned
- hardware ethernet: the VM's MAC address
- fixed-address: the fixed IP to assign
The IP addresses must fall within the range defined in dhcpd.conf. Restart VMware after the change.
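The MAC address needed for the `hardware ethernet` field can be read from the VM's `.vmx` file, where VMware Fusion stores it under the `ethernet0.generatedAddress` key. A small sketch (the helper name and example path are illustrative):

```shell
# mac_from_vmx: print the auto-generated MAC recorded in a VMware .vmx file.
mac_from_vmx() {
  sed -n 's/^ethernet0\.generatedAddress = "\(.*\)"/\1/p' "$1"
}

# Example (path is illustrative; adjust to your VM's location):
# mac_from_vmx ~/Virtual\ Machines.localized/CentOS01.vmwarevm/CentOS01.vmx
```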
Set the hostnames (one command per machine, matching the table above):
hostnamectl --static set-hostname k8s-master
hostnamectl --static set-hostname k8s-node01
hostnamectl --static set-hostname k8s-node02
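So the machines can also reach each other by name (an assumption for convenience; kubeadm itself only strictly needs IP connectivity), the static IPs from the table above can be added to /etc/hosts on every machine:

```
# /etc/hosts entries matching the environment table
172.16.120.151 k8s-master
172.16.120.152 k8s-node01
172.16.120.153 k8s-node02
```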
Disable the firewall and SELinux:
systemctl disable firewalld
systemctl stop firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
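The two echo commands above do not survive a reboot. To make the bridge settings persistent, they can also be written to a sysctl drop-in (the file name is illustrative) and loaded with `sysctl --system`:

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```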
1 Downloading and installing Kubernetes
1.1 Option 1: use the Aliyun yum mirror
Configure the yum repos. Since Google is blocked, the yum mirror hosted on Aliyun can be used instead:
# Docker yum repo
cat >> /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF
# Kubernetes yum repo
cat >> /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
Install Docker:
Kubernetes 1.6 was not tested and validated against Docker 1.13 or the newer Docker 17.03, so install Docker 1.12, the version recommended by the Kubernetes project.
# List available Docker versions
yum list docker-engine --showduplicates
# Install Docker
yum install -y docker-engine-1.12.6-1.el7.centos.x86_64
Install Kubernetes:
# List available versions
yum list kubeadm --showduplicates
yum list kubernetes-cni --showduplicates
yum list kubelet --showduplicates
yum list kubectl --showduplicates
# Install the packages
yum install -y kubernetes-cni-0.5.1-0.x86_64 kubelet-1.7.2-0.x86_64 kubectl-1.7.2-0.x86_64 kubeadm-1.7.2-0.x86_64
1.2 Option 2: download the packages from a server outside China
On an Aliyun US-West server, configure the yum repo and download the rpm packages with yumdownloader.
Configure the Kubernetes yum repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
Download the four rpm packages: kubelet, kubeadm, kubectl and kubernetes-cni:
yumdownloader kubelet kubeadm kubectl kubernetes-cni
Upload the downloaded rpm packages to the target servers and install them:
yum install -y socat
rpm -ivh *.rpm
1.3 Download the Kubernetes images
Because Google is blocked, the official Google images have been mirrored to Aliyun and can be pulled directly from inside China:
registry.cn-hangzhou.aliyuncs.com/szss_k8s/etcd-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/kube-apiserver-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/kube-controller-manager-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/kube-proxy-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/kube-scheduler-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/pause-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/k8s-dns-sidecar-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/k8s-dns-kube-dns-amd64
registry.cn-hangzhou.aliyuncs.com/szss_k8s/k8s-dns-dnsmasq-nanny-amd64
The following script pulls the images and pushes them to the Aliyun mirror:
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail
KUBE_VERSION=v1.7.2
KUBE_PAUSE_VERSION=3.0
ETCD_VERSION=3.0.17
DNS_VERSION=1.14.4
GCR_URL=gcr.io/google_containers
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/szss_k8s
images=(kube-proxy-amd64:${KUBE_VERSION}
kube-scheduler-amd64:${KUBE_VERSION}
kube-controller-manager-amd64:${KUBE_VERSION}
kube-apiserver-amd64:${KUBE_VERSION}
pause-amd64:${KUBE_PAUSE_VERSION}
etcd-amd64:${ETCD_VERSION}
k8s-dns-sidecar-amd64:${DNS_VERSION}
k8s-dns-kube-dns-amd64:${DNS_VERSION}
k8s-dns-dnsmasq-nanny-amd64:${DNS_VERSION})
for imageName in ${images[@]} ; do
docker pull $GCR_URL/$imageName
docker tag $GCR_URL/$imageName $ALIYUN_URL/$imageName
docker push $ALIYUN_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done
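On machines where the KUBE_REPO_PREFIX environment variable (used in the cluster-creation step below) is not honored, the mirrored images can instead be pulled from Aliyun and retagged back to the gcr.io names kubeadm expects. A sketch that only prints the docker commands, so they can be reviewed before piping to `sh`:

```shell
#!/bin/bash
# Sketch: emit the docker pull/tag commands that fetch each image from the
# Aliyun mirror and retag it to its original gcr.io/google_containers name.
retag_cmds() {
  local KUBE_VERSION=v1.7.2
  local ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/szss_k8s
  local GCR_URL=gcr.io/google_containers
  local images=(kube-proxy-amd64:${KUBE_VERSION}
    kube-scheduler-amd64:${KUBE_VERSION}
    kube-controller-manager-amd64:${KUBE_VERSION}
    kube-apiserver-amd64:${KUBE_VERSION}
    pause-amd64:3.0
    etcd-amd64:3.0.17)
  for imageName in "${images[@]}"; do
    echo "docker pull ${ALIYUN_URL}/${imageName}"
    echo "docker tag ${ALIYUN_URL}/${imageName} ${GCR_URL}/${imageName}"
  done
}

retag_cmds          # review the output, then execute with: retag_cmds | sh
```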
2 Configure the kubelet
Configure the pod infrastructure (pause) image:
cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/szss_k8s/pause-amd64:3.0"
EOF
Docker 1.12.6 uses the cgroupfs cgroup driver, so the kubelet must be configured to match, and systemd must reload the changed unit drop-ins:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
3 Start the components
systemctl enable docker
systemctl enable kubelet
systemctl start docker
systemctl start kubelet
4 Create the cluster
First run init on the master. --apiserver-advertise-address is the master's IP, and the network given by --pod-network-cidr must match the one configured in kube-flannel.yml (used below when installing flannel).
export KUBE_REPO_PREFIX="registry.cn-hangzhou.aliyuncs.com/szss_k8s"
export KUBE_ETCD_IMAGE="registry.cn-hangzhou.aliyuncs.com/szss_k8s/etcd-amd64:3.0.17"
kubeadm init --apiserver-advertise-address=172.16.120.151 --kubernetes-version=v1.7.2 --pod-network-cidr=10.244.0.0/16
Output:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [k8s-node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.120.151]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 140.504534 seconds
[token] Using token: 242b80.86d585ebd6358b08
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 242b80.86d585ebd6358b08 172.16.120.151:6443
5 Configure kubectl's kubeconfig
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
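As an alternative to copying admin.conf (convenient for a one-off root session), kubectl can also be pointed at the admin kubeconfig through the standard KUBECONFIG environment variable:

```shell
# Point kubectl at the cluster-admin kubeconfig for this shell session only
export KUBECONFIG=/etc/kubernetes/admin.conf
```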
6 Install flannel
Install flannel on the master node:
kubectl --namespace kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
rm -rf kube-flannel.yml
wget https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
sed -i 's/quay.io\/coreos\/flannel:v0.8.0-amd64/registry.cn-hangzhou.aliyuncs.com\/szss_k8s\/flannel:v0.8.0-amd64/g' ./kube-flannel.yml
kubectl --namespace kube-system apply -f ./kube-flannel.yml
7 Verify the master installation
Verify with:
$kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
8 Install the nodes and join them to the cluster
Each node needs installation steps 1 through 3 above. Once those are done, run the following on the node to join the cluster:
export KUBE_REPO_PREFIX="registry.cn-hangzhou.aliyuncs.com/szss_k8s"
export KUBE_ETCD_IMAGE="registry.cn-hangzhou.aliyuncs.com/szss_k8s/etcd-amd64:3.0.17"
kubeadm join --token 242b80.86d585ebd6358b08 172.16.120.151:6443 --skip-preflight-checks
9 Verify the node installation
Verify with:
$kubectl get nodes
NAME STATUS AGE VERSION
k8s-node01 Ready 9h v1.7.2
k8s-node02 Ready 9h v1.7.2
10 References
[Quickly deploying a Kubernetes 1.7 cluster on Red Hat 7/CentOS 7 with kubeadm] http://dockone.io/article/2514