Manual Kubernetes Cluster Deployment (Part 2)
Original source: https://www.93bok.com
Preface
I am using three machines here: one master and two nodes. Let's start small; once you are comfortable building a cluster step by step, we can move on to a highly available Kubernetes architecture. Docker must be installed on both the master and the nodes.
Architecture diagram
Lab environment:
CentOS 7.2 minimal, 64-bit
Server IPs:
Master: 192.168.1.89
Node: 192.168.1.90
Node1: 192.168.1.91
Hostnames:
Master: master
Node: node
Node1: node1
Software versions:
k8s components: v1.11.3; Docker: ce-18.03.1; etcd: v3.3.9; flannel: 0.7.1-4
Components to install on the master:
1. docker
2. etcd
3. kube-apiserver
4. kubectl
5. kube-scheduler
6. kube-controller-manager
7. kubelet
8. kube-proxy
Components to install on the nodes:
1. A network solution such as Calico/Flannel/Open vSwitch/Romana/Weave (Kubernetes supports several networking options)
Official documentation: https://kubernetes.io/docs/setup/scratch/#network
2. docker
3. kubelet
4. kube-proxy
Some guides say you must disable the swap partition or errors will occur. My hosts have no swap partition configured, so I cannot confirm whether leaving swap enabled causes problems, and I have not included steps for disabling it here.
A. Master Deployment
I. Environment Preparation
These preparation steps must be performed on both the master and the nodes.
1. Set the hostnames
On master:
hostnamectl set-hostname master
bash
On node:
hostnamectl set-hostname node
bash
On node1:
hostnamectl set-hostname node1
bash
2. Configure /etc/hosts
Run the same commands on the master and both nodes:
echo "192.168.1.89 master" >> /etc/hosts
echo "192.168.1.90 node" >> /etc/hosts
echo "192.168.1.91 node1" >> /etc/hosts
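The echo commands above append blindly, so running them twice leaves duplicate entries. A small idempotent variant (an illustrative helper, not part of the original steps; the file path is a parameter so you can try it against any file):

```shell
# Append "IP NAME" to a hosts-style file only if NAME is not already present.
add_host() {
    local file="$1" ip="$2" name="$3"
    grep -qw "$name" "$file" 2>/dev/null || echo "$ip $name" >> "$file"
}

# On each machine you would run:
# add_host /etc/hosts 192.168.1.89 master
# add_host /etc/hosts 192.168.1.90 node
# add_host /etc/hosts 192.168.1.91 node1
```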
3. Set up time synchronization
Run the same commands on the master and both nodes:
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpdate -u cn.pool.ntp.org
hwclock
timedatectl set-timezone Asia/Shanghai
II. Generate Certificates
(1) Security models
Kubernetes offers two main security models; official documentation:
https://kubernetes.io/docs/setup/scratch/#security-models
1. Access the apiserver over HTTP
1) Use a firewall for security
2) This is easier to set up
2. Access the apiserver over HTTPS
1) Use HTTPS with certificates and user credentials
2) This is the recommended approach
3) Configuring certificates can be tricky
To ensure trust and data security in a production cluster, the nodes communicate with each other over TLS. If you want the HTTPS approach, you need to prepare certificates and user credentials.
(2) Certificates
There are several ways to generate certificates:
1. easyrsa
2. openssl
3. cfssl
Official documentation:
https://kubernetes.io/docs/concepts/cluster-administration/certificates/
The components that use certificates are:
1. etcd                    #uses ca.pem, etcd-key.pem, etcd.pem
2. kube-apiserver          #uses ca.pem, api-key.pem, api.pem
3. kubelet                 #uses ca.pem
4. kube-proxy              #uses ca.pem, proxy-key.pem, proxy.pem
5. kubectl                 #uses ca.pem, admin-key.pem, admin.pem
6. kube-controller-manager #uses ca.pem, ca-key.pem
(3) Install the certificate generation tool cfssl
The official page above describes the alternatives; you can pick another tool, but here I choose cfssl.
Note:
All certificates are generated on the master, and they only need to be created once. When you later add a new node to the cluster, just copy the certificates under /etc/kubernetes/ to the new node.
Download, extract, and prepare the command-line tools:
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o cfssl-certinfo
chmod +x cfssl
chmod +x cfssljson
chmod +x cfssl-certinfo
mv cfssl* /usr/bin/
(4) Create the CA root certificate and key
1. Create a directory to hold the certificate files
mkdir -p /root/ssl
2. Generate the default cfssl configuration templates
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
[root@master ssl]# ll
total 8
-rw-r--r--. 1 root root 567 Sep 17 09:58 config.json
-rw-r--r--. 1 root root 287 Sep 17 09:58 csr.json
3. Following the format of the config.json generated above, create the ca-config.json file used to issue CA-signed certificates
(I set the validity to 87600h, i.e. 10 years)
vim ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
Parameter notes:
1) ca-config.json: you can define multiple profiles with different expiry times, usage scenarios, and other parameters; a specific profile is selected later when signing a certificate
2) signing: the certificate can be used to sign other certificates
3) server auth: clients may use this CA to verify certificates presented by servers
4) client auth: servers may use this CA to verify certificates presented by clients
4. Create the CA certificate signing request file ca-csr.json
vim ca-csr.json
{
"CN": "ca",
"key": {
"algo": "rsa",
"size": 2048
},
"names":[{
"C": "CN",
"ST": "GuangXi",
"L": "NanNing",
"O": "K8S",
"OU": "K8S-Security"
}]
}
5. Generate the CA key (ca-key.pem) and certificate (ca.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@master ssl]# ll ca*
-rw-r--r--. 1 root root 285 Sep 17 10:41 ca-config.json
-rw-r--r--. 1 root root 1009 Sep 17 10:41 ca.csr
-rw-r--r--. 1 root root 194 Sep 17 10:41 ca-csr.json
-rw-------. 1 root root 1679 Sep 17 10:41 ca-key.pem
-rw-r--r--. 1 root root 1375 Sep 17 10:41 ca.pem
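Before moving on, it is worth confirming that the CA really got the 10-year validity window. A quick check with openssl (assuming openssl is installed; it works for any PEM certificate):

```shell
# Print a certificate's subject and validity window.
show_cert_summary() {
    openssl x509 -noout -subject -dates -in "$1"
}

# On the master:
# show_cert_summary /root/ssl/ca.pem
# The subject and notBefore/notAfter should reflect the CSR and the 87600h expiry.
```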
(5) Create the kube-apiserver certificate and key
1. Create the CSR file for the kube-apiserver certificate request
cd /root/ssl/
vim apiserver-csr.json
{
"CN": "apiserver",
"hosts": [
"127.0.0.1",
"localhost",
"192.168.1.89",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"ST": "GuangXi",
"L": "NanNing",
"O": "K8S",
"OU": "K8S-Security"
}]
}
In the hosts list above:
localhost represents the master's local address
192.168.1.89 is the master's IP
If you have multiple masters, list every master's IP here
2. Generate the kube-apiserver certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver
[root@master ssl]# ll apiserver*
-rw-r--r--. 1 root root 1257 Sep 17 11:32 apiserver.csr
-rw-r--r--. 1 root root 422 Sep 17 11:28 apiserver-csr.json
-rw-------. 1 root root 1675 Sep 17 11:32 apiserver-key.pem
-rw-r--r--. 1 root root 1635 Sep 17 11:32 apiserver.pem
(6) Create the etcd certificate and key
1. Create the CSR file for the etcd certificate request
cd /root/ssl/
vim etcd-csr.json
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"localhost",
"192.168.1.89",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"ST": "GuangXi",
"L": "NanNing",
"O": "K8S",
"OU": "K8S-Security"
}]
}
2. Generate the etcd certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@master ssl]# ll etcd*
-rw-r--r--. 1 root root 1257 Sep 17 10:55 etcd.csr
-rw-r--r--. 1 root root 421 Sep 17 10:55 etcd-csr.json
-rw-------. 1 root root 1679 Sep 17 10:55 etcd-key.pem
-rw-r--r--. 1 root root 1635 Sep 17 10:55 etcd.pem
(7) Create the kubectl certificate and key
1. Create the CSR file for the kubectl certificate request
cd /root/ssl/
vim admin-csr.json
{
"CN": "kubectl",
"hosts": [
"127.0.0.1",
"localhost",
"192.168.1.89",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"ST": "GuangXi",
"L": "NanNing",
"O": "system:masters",
"OU": "K8S-Security"
}]
}
2. Generate the kubectl certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@master ssl]# ll admin*
-rw-r--r--. 1 root root 1017 Sep 17 11:41 admin.csr
-rw-r--r--. 1 root root 217 Sep 17 11:40 admin-csr.json
-rw-------. 1 root root 1675 Sep 17 11:41 admin-key.pem
-rw-r--r--. 1 root root 1419 Sep 17 11:41 admin.pem
(8) Create the kube-proxy certificate and key
1. Create the CSR file for the kube-proxy certificate request
cd /root/ssl/
vim proxy-csr.json
{
"CN": "proxy",
"hosts": [
"127.0.0.1",
"localhost",
"192.168.1.89",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"ST": "GuangXi",
"L": "NanNing",
"O": "K8S",
"OU": "K8S-Security"
}]
}
2. Generate the kube-proxy certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes proxy-csr.json | cfssljson -bare proxy
[root@master ssl]# ll proxy*
-rw-r--r--. 1 root root 1005 Sep 17 11:46 proxy.csr
-rw-r--r--. 1 root root 206 Sep 17 11:45 proxy-csr.json
-rw-------. 1 root root 1679 Sep 17 11:46 proxy-key.pem
-rw-r--r--. 1 root root 1403 Sep 17 11:46 proxy.pem
(9) Collect the certificates and keys for later use
Copy the generated certificates and keys (the .pem files) to the /etc/kubernetes/ssl directory for later use
mkdir -p /etc/kubernetes/ssl
cp /root/ssl/*.pem /etc/kubernetes/ssl/
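With this many .pem files it is easy to pair a certificate with the wrong key. A quick sanity check before distributing them (an illustrative helper, assuming openssl is available): a certificate and its private key must have identical public-key moduli.

```shell
# Return success when the certificate and private key belong together.
cert_matches_key() {
    local cert="$1" key="$2"
    local c k
    c=$(openssl x509 -noout -modulus -in "$cert" | openssl md5)
    k=$(openssl rsa  -noout -modulus -in "$key"  | openssl md5)
    [ "$c" = "$k" ]
}

# On the master, for example:
# cert_matches_key /etc/kubernetes/ssl/etcd.pem /etc/kubernetes/ssl/etcd-key.pem \
#     && echo "etcd cert/key pair OK"
```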
III. Install the kubectl Command-Line Tool
Official installation guide:
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-using-curl
You can follow the official method; here I simply download the binary directly, which amounts to the same thing:
1. Download the kubectl binary package
Download page:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1113
Command-line download:
wget https://dl.k8s.io/v1.11.3/kubernetes-client-linux-amd64.tar.gz
2. Unpack and install
tar -zxvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/kubectl /usr/bin/
IV. Create the kubeconfig File for kubectl
For kubectl to find and access a Kubernetes cluster, it needs a kubeconfig file. A kubeconfig file organizes information about clusters, users, namespaces, and authentication mechanisms; kubectl reads it to determine which cluster to use and how to talk to that cluster's API server.
Official guide:
https://k8smeetup.github.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
The official article creates a dedicated directory and writes out the cluster, user, and context entries by hand. Below I generate everything with commands instead; the result is saved to ~/.kube/config by default. Both approaches produce the same outcome, so pick whichever you prefer.
1. Set the cluster parameters
(--server is the kube-apiserver address and port, even though we have not deployed the apiserver yet)
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.89:6443
2. Set the client authentication parameters
kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem
3. Set the context parameters
kubectl config set-context kubernetes --cluster=kubernetes --user=admin
4. Set the default context
kubectl config use-context kubernetes
5. View the result
[root@master ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://192.168.1.89:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: admin
name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
V. Create the TLS Bootstrapping Token
Official documentation:
https://kubernetes.io/cn/docs/admin/kubelet-tls-bootstrapping/
1. The token can be any string containing 128 bits of entropy; you can generate one with a secure random number source:
cd /root/
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
2. Write the generated token into the following file
vim token.csv
76b154e3c04b6a0f926579338b88d177,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
3. Copy token.csv to /etc/kubernetes/ for later use
cp token.csv /etc/kubernetes/
If you regenerate the token later, you must:
1. Update token.csv and distribute it to /etc/kubernetes/ on every machine (master and nodes; distributing it to the nodes is not strictly required);
2. Regenerate bootstrap.kubeconfig and distribute it to /etc/kubernetes/ on every node;
3. Restart the kube-apiserver and kubelet processes;
4. Re-approve the kubelets' CSR requests.
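The generation steps above can be scripted in one go. A minimal sketch (file name and fields match the article; run it from /root/ and redistribute afterwards):

```shell
# Generate a fresh 128-bit bootstrap token and rewrite token.csv.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo "new token: ${BOOTSTRAP_TOKEN}"

# Afterwards: cp token.csv /etc/kubernetes/, regenerate bootstrap.kubeconfig
# with the new --token, restart kube-apiserver and kubelet, re-approve CSRs.
```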
VI. Create the kubelet bootstrapping kubeconfig file
cd /etc/kubernetes/
1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.89:6443 --kubeconfig=bootstrap.kubeconfig
2. Set the client authentication parameters (--token is the string generated above)
kubectl config set-credentials kubelet-bootstrap --token=76b154e3c04b6a0f926579338b88d177 --kubeconfig=bootstrap.kubeconfig
3. Set the context parameters
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
4. Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Notes:
1. When --embed-certs is true, the certificate-authority certificate is embedded in the generated bootstrap.kubeconfig file;
2. No key or certificate is specified in the client authentication parameters; they are issued automatically by the kube-apiserver later.
VII. Create the kube-proxy kubeconfig file
cd /etc/kubernetes/
1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.89:6443 --kubeconfig=kube-proxy.kubeconfig
2. Set the client authentication parameters
kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/ssl/proxy.pem --client-key=/etc/kubernetes/ssl/proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
3. Set the context parameters
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
4. Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Note:
--embed-certs is set to true for both the cluster and the client authentication parameters, which embeds the contents of the files referenced by certificate-authority, client-certificate, and client-key into the generated kube-proxy.kubeconfig file.
VIII. Install etcd
You can install via yum or from a binary package; here I use the binary package. With yum it is simply yum -y install etcd; you can also build from source, as described in this article:
https://blog.csdn.net/varyall/article/details/79128181
etcd official download page:
https://github.com/etcd-io/etcd/releases
1. Upload the downloaded binary package to the server
2. Extract it
tar -zxvf etcd-v3.3.9-linux-amd64.tar.gz
3. Move it to /usr/local/
mv etcd-v3.3.9-linux-amd64 /usr/local/etcd
4. Create symlinks
ln -s /usr/local/etcd/etcd* /usr/local/bin/
5. Make them executable
chmod +x /usr/local/bin/etcd*
6. Check the version
[root@master ~]# etcd --version
etcd Version: 3.3.9
Git SHA: fca8add78
Go Version: go1.10.3
Go OS/Arch: linux/amd64
7. Register etcd as a systemd service with auto-start on boot
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd --name ${ETCD_NAME} --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS} --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster ${ETCD_INITIAL_CLUSTER} --initial-cluster-state new --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
8. Create the etcd configuration file
mkdir -p /etc/etcd
vim /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_WAL_DIR="/var/lib/etcd/wal"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.1.89:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.89:2379,https://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
#ETCD_CORS=""
# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.89:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd=https://192.168.1.89:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.89:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
# [proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
# [security]
ETCD_CERT_FILE="/etc/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/kubernetes/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/kubernetes/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
9. Start etcd
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
10. Open ports 2379 and 2380 in the firewall
firewall-cmd --add-port=2379/tcp --permanent
firewall-cmd --add-port=2380/tcp --permanent
systemctl restart firewalld
11. Check the etcd ports
[root@master ~]# netstat -luntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 12869/etcd
tcp 0 0 192.168.1.89:2379 0.0.0.0:* LISTEN 12869/etcd
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 12869/etcd
tcp 0 0 192.168.1.89:2380 0.0.0.0:* LISTEN 12869/etcd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1299/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2456/master
tcp6 0 0 :::22 :::* LISTEN 1299/sshd
tcp6 0 0 ::1:25 :::* LISTEN 2456/master
udp 0 0 192.168.1.89:123 0.0.0.0:* 2668/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 2668/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 2668/ntpd
udp6 0 0 fe80::20c:29ff:fec7:123 :::* 2668/ntpd
udp6 0 0 ::1:123 :::* 2668/ntpd
udp6 0 0 :::123 :::* 2668/ntpd
12. Verify that etcd is usable
[root@master ~]# export ETCDCTL_API=3
[root@master ~]# etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem --endpoints=https://192.168.1.89:2379 endpoint health
https://192.168.1.89:2379 is healthy: successfully committed proposal: took = 650.006μs
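If you script the bring-up, note that etcd can take a moment after starting before the health endpoint responds, so a retry loop helps. A generic sketch (an illustrative helper, not part of the original article's steps):

```shell
# Run a command up to $1 times, one second apart, until it succeeds.
wait_for() {
    local tries="$1"; shift
    local i
    for i in $(seq 1 "$tries"); do
        "$@" && return 0
        sleep 1
    done
    return 1
}

# Example on the master:
# wait_for 10 etcdctl --cacert=/etc/kubernetes/ssl/ca.pem \
#     --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem \
#     --endpoints=https://192.168.1.89:2379 endpoint health
```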
IX. Install Docker
This is probably the simplest installation step in the whole article. I have written about installing Docker before and did not plan to repeat it, but making you dig back through my earlier posts would be a hassle, so here it is again. I install with yum.
1. Install the dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add Docker's yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. List the available docker-ce versions
yum list docker-ce --showduplicates | sort -r
* updates: mirrors.163.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
* extras: mirrors.163.com
* epel: mirror01.idc.hinet.net
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
* base: mirrors.163.com
Available Packages
4. Install the desired version
yum -y install docker-ce-18.03.1.ce-1.el7.centos
5. Add the root user to the docker group
gpasswd -a root docker
6. Start Docker
systemctl start docker
7. Enable Docker on boot
systemctl enable docker
X. Install Kubernetes
1. Download page
https://github.com/kubernetes/kubernetes/releases/tag/v1.11.3
2. Open CHANGELOG-1.11.md
3. Download
On that page, kubernetes.tar.gz contains the Kubernetes service binaries, documentation, and examples, while kubernetes-src.tar.gz contains the full source code. Here I chose the Server Binaries archive listed below them, which contains all the service binaries Kubernetes needs to run.
You can also follow the official instructions:
https://kubernetes.io/docs/setup/scratch/#downloading-and-extracting-kubernetes-binaries
Command-line download:
wget https://dl.k8s.io/v1.11.3/kubernetes-server-linux-amd64.tar.gz
4. Install the Kubernetes binaries
tar -zxvf kubernetes-server-linux-amd64.tar.gz
mv kubernetes /usr/local/
ln -s /usr/local/kubernetes/server/bin/hyperkube /usr/bin/
ln -s /usr/local/kubernetes/server/bin/kube-apiserver /usr/bin/
ln -s /usr/local/kubernetes/server/bin/kube-scheduler /usr/bin/
ln -s /usr/local/kubernetes/server/bin/kubelet /usr/bin/
ln -s /usr/local/kubernetes/server/bin/kube-controller-manager /usr/bin/
ln -s /usr/local/kubernetes/server/bin/kube-proxy /usr/bin/
XI. Register kube-apiserver as a systemd service with auto-start
1. Create the service file
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Service
After=network.target
After=etcd.service
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
2. Create the shared configuration file
vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
KUBE_MASTER="--master=http://192.168.1.89:8080"
Change the IP address to your own. This configuration file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
3. Create the kube-apiserver configuration file
vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=0.0.0.0 --insecure-bind-address=0.0.0.0 --bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.1.89:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/24"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction"
# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC,Node --endpoint-reconciler-type=lease --runtime-config=batch/v2alpha1=true --anonymous-auth=false --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-quorum-read=true --storage-backend=etcd3 --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/etcd.pem --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/kube-audit/audit.log --event-ttl=1h "
Change the IP address and the certificate and key paths to match your setup.
4. Start kube-apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
5. Open ports 8080 and 6443 in the firewall
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --add-port=6443/tcp --permanent
systemctl restart firewalld
XII. Register kube-controller-manager as a systemd service with auto-start
1. Create the service file
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_MASTER $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
2. Create the kube-controller-manager configuration file
vim /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true --node-monitor-grace-period=40s --node-monitor-period=5s --pod-eviction-timeout=60s"
3. Start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
XIII. Register kube-scheduler as a systemd service with auto-start
1. Create the service file
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_MASTER $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
2. Create the kube-scheduler configuration file
vim /etc/kubernetes/scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"
3. Start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
Tip: kube-apiserver is the core service; if the apiserver fails to start, the other components will fail as well.
XIV. Verify the Master Node
[root@master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
No errors here means the master node was installed successfully!
That completes the master node: everything is installed and running. Next, on to the node installation.
B. Node Deployment
I. Copy the certificates, keys, and configuration files from the master
1. First, on the master, copy the previously generated /root/.kube/config file to /etc/kubernetes/ and rename it kubelet.kubeconfig for later use
cp /root/.kube/config /etc/kubernetes/kubelet.kubeconfig
2. Copy the master's /etc/kubernetes/ directory to the node, keeping the same path /etc/kubernetes/
II. Install Docker on the Node
1. Install the dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add Docker's yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. List the available docker-ce versions
yum list docker-ce --showduplicates | sort -r
* updates: mirrors.163.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
* extras: mirrors.163.com
* epel: mirror01.idc.hinet.net
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
* base: mirrors.163.com
Available Packages
4. Install the desired version
yum -y install docker-ce-18.03.1.ce-1.el7.centos
5. Add the root user to the docker group
gpasswd -a root docker
6. Start Docker
systemctl start docker
7. Enable Docker on boot
systemctl enable docker
8. To use the flannel network, Docker must be started with the --bip parameter; modify Docker's systemd unit file
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service flanneld.service
Wants=network-online.target
Requires=flanneld.service
[Service]
Type=notify
EnvironmentFile=/run/flannel/docker
EnvironmentFile=/run/flannel/subnet.env
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_OPT_BIP $DOCKER_OPT_IPMASQ $DOCKER_OPT_MTU $DOCKER_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
III. Install the flannel network plugin on the Node
Every node needs a network plugin so that all pods can join the same flat network. Here I chose flannel; you can of course pick another plugin (see the official docs linked earlier).
I recommend installing flannel with yum; the default version is 0.7.1. If you have specific version requirements, you can install from a binary instead; see this article: http://blog.51cto.com/tryingstuff/2121707
[root@node ~]# yum list flannel | sort -r
* updates: mirrors.cn99.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
flannel.x86_64 0.7.1-4.el7 extras
* extras: mirrors.aliyun.com
* epel: mirrors.aliyun.com
* base: mirrors.shu.edu.cn
Available Packages
1. Install flannel
yum -y install flannel
2. Modify the service unit file
vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} -etcd-prefix=${FLANNEL_ETCD_PREFIX} $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
WantedBy=docker.service
Parameter notes:
1) The flannel network is only meaningful if the host's own network can already reach the other nodes, so we define After=network.target.
2) Only once the flannel network is up can it allocate a subnet that does not conflict with the other nodes, and Docker's network must use the flannel-assigned subnet for cross-host communication to work, so Docker must start after flannel; hence Before=docker.service.
3) The /etc/sysconfig/flanneld file holds flannel's startup parameters. Because flannel must point at the etcd cluster, some of its parameters are deployment-specific, so they are defined separately there.
4) After startup, flannel's bundled script /usr/libexec/flannel/mk-docker-opts.sh generates a file of startup options for Docker, which includes the subnet configuration for the docker0 bridge.
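For reference, after flanneld starts successfully the two generated files look roughly like this (a hypothetical illustration; the subnet, MTU, and masquerade values will differ on your hosts):

```shell
# /run/flannel/subnet.env -- written by flanneld
FLANNEL_NETWORK=192.168.0.0/16
FLANNEL_SUBNET=192.168.23.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false

# /run/flannel/docker -- written by mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS
DOCKER_NETWORK_OPTIONS=" --bip=192.168.23.1/24 --ip-masq=true --mtu=1500"
```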
3. Modify the configuration file
vim /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.1.89:2379" # etcd cluster addresses; separate multiple endpoints with commas
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/etcd.pem -etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem"
4. Create the network configuration in etcd (our etcd runs on the master node, so run these commands on the master)
etcdctl --endpoints=https://192.168.1.89:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem mkdir /kube-centos/network
etcdctl --endpoints=https://192.168.1.89:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem mk /kube-centos/network/config '{"Network":"192.168.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
5. Start flannel on the node
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
6. Restart Docker on the node
systemctl restart docker
systemctl status docker
IV. Inspect the flannel entries in etcd
[root@master ~]# etcdctl --endpoints=https://192.168.1.89:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem ls /kube-centos/network/subnets
/kube-centos/network/subnets/192.168.23.0-24
[root@master ~]# etcdctl --endpoints=https://192.168.1.89:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem get /kube-centos/network/config
{"Network":"192.168.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}
[root@master ~]# etcdctl --endpoints=https://192.168.1.89:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem get /kube-centos/network/subnets/192.168.23.0-24
{"PublicIP":"192.168.1.90","BackendType":"host-gw"}
Note: flannel must be installed on every node; it is not required on the master.
V. Install kubelet on the Node
1. Obtain the Kubernetes binary package
We already downloaded kubernetes-server-linux-amd64.tar.gz, which contains every component Kubernetes needs to run. Upload it to the node and install just the components we need.
2. Unpack and install
tar -zxvf kubernetes-server-linux-amd64.tar.gz
mv kubernetes /usr/local/
ln -s /usr/local/kubernetes/server/bin/kubelet /usr/bin/
3. Create the kubelet systemd service file
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KUBELET_POD_INFRA_CONTAINER $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
4. Create the kubelet configuration file
vim /etc/kubernetes/kubelet
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
#
## The port for the info server to serve on
KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.1.90"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
#KUBELET_API_SERVER=""
#
## Add your own!
KUBELET_ARGS="--cgroup-driver=cgroupfs --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
5. Start kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
6. Open port 10250 in the firewall
firewall-cmd --add-port=10250/tcp --permanent
systemctl restart firewalld
VI. Install kube-proxy on the Node
1. Create the symlink
The previous step already unpacked and installed all the binaries, so only the symlink is needed
ln -s /usr/local/kubernetes/server/bin/kube-proxy /usr/bin/
2. Create the kube-proxy systemd service file
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_MASTER $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
3. Create the kube-proxy configuration file
vim /etc/kubernetes/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=0.0.0.0 --hostname-override=192.168.1.90 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --masquerade-all"
4. Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
C. Node1 Deployment
Omitted here: to add more nodes, simply repeat the Node deployment steps above for each one
D. Verification
On the master, verify the two freshly installed nodes
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.1.90 Ready <none> 1h v1.11.3
192.168.1.91 Ready <none> 2m v1.11.3
A Ready status means the node joined the cluster successfully!