
[Repost] CentOS 7.5: Deploying a Kubernetes v1.13 (Latest) Cluster from Binaries


http://blog.51cto.com/10880347/2326146  

I. Overview

Kubernetes 1.13 has been released; it is the fourth and final release of 2018. Kubernetes 1.13 is one of the shortest release cycles to date (ten weeks after the previous version). It focuses on the stability and extensibility of Kubernetes, and three major features around storage and the cluster lifecycle have reached general availability.

The core features of Kubernetes 1.13 are: simplified cluster management with kubeadm, the Container Storage Interface (CSI), and CoreDNS as the default DNS server.

Simplified cluster management with kubeadm

Most people who work with Kubernetes regularly have used kubeadm at some point. It is the key tool for managing the cluster lifecycle, covering everything from creation through configuration to upgrades. With the 1.13 release, kubeadm has reached GA and is now generally available. kubeadm bootstraps production clusters on existing hardware and configures the core Kubernetes components following best practices, providing a secure yet simple join flow for new nodes and making upgrades easy.

The most notable part of this GA release is the set of graduated advanced features, in particular pluggability and configurability. kubeadm aims to be a toolbox for administrators and higher-level automation systems alike, and this release is a major step in that direction.

Container Storage Interface (CSI)

The Container Storage Interface was first introduced as an alpha feature in 1.9, moved to beta in 1.10, and is now GA and generally available. With CSI, the Kubernetes volume layer becomes truly extensible: third-party storage vendors can write code that interoperates with Kubernetes without touching any Kubernetes core code. The specification itself has also reached 1.0.

With CSI now stable, plugin authors can develop core storage plugins at their own pace; see the CSI documentation for details.

CoreDNS becomes the default DNS server for Kubernetes

In 1.11 the team announced that CoreDNS had reached general availability for DNS-based service discovery. In the latest 1.13 release, CoreDNS officially replaces kube-dns as the default DNS server in Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides a backward-compatible and extensible integration with Kubernetes. Because CoreDNS is a single executable running as a single process, it has fewer moving parts than the previous DNS server, and it supports flexible use cases through custom DNS entries. In addition, since CoreDNS is written in Go, it benefits from strong memory safety.

CoreDNS is now the recommended DNS solution for Kubernetes 1.13 and later. The project has switched its common test infrastructure to use CoreDNS by default, and the team recommends that users make the switch as soon as possible. kube-dns will remain supported for at least one more release, but now is the time to start planning the migration. Many OSS installation tools, including kubeadm since 1.11, have already switched.

1. Prepare the installation environment

Deployment nodes

IP address       Hostname           CPU   Memory   Disk
172.16.8.100 qas-k8s-master01 4C 4G 50G
172.16.8.101 qas-k8s-node01 4C 4G 50G
172.16.8.102 qas-k8s-node02 4C 4G 50G
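
For convenience, the three hosts can resolve each other by hostname. A minimal sketch (not part of the original steps; run on every node) that appends the entries to /etc/hosts:

cat << EOF | tee -a /etc/hosts
172.16.8.100 qas-k8s-master01
172.16.8.101 qas-k8s-node01
172.16.8.102 qas-k8s-node02
EOF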

Kubernetes installation package download

Link: https://pan.baidu.com/s/1wO6T7byhaJYBuu2JlhZvkQ
Extraction code: pm9u

Deployment network notes

2. Architecture diagrams

Kubernetes architecture diagram


Flannel network architecture diagram


  • After data leaves the source container, it is forwarded by the host's docker0 virtual NIC to the flannel0 virtual NIC. This is a P2P virtual device, with the flanneld service listening on its other end.
  • Flannel maintains a routing table between nodes in etcd; its contents are covered in the configuration section later (an example of inspecting it follows this list).
  • The flanneld service on the source host encapsulates the original payload in UDP and, based on its routing table, delivers it to the flanneld service on the destination node. There the data is unpacked and enters the destination node's flannel0 virtual NIC,
    is then forwarded to the destination host's docker0 virtual NIC, and finally is routed by docker0 to the target container, just as in local container-to-container communication.
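
Once etcd and flanneld are running (they are set up later in this guide), the per-node subnet records behind this route table can be listed directly. A sketch using the etcd v2 API and the same key prefix used in the Flannel section below:

/k8s/etcd/bin/etcdctl \
--ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--endpoints="https://172.16.8.100:2379" \
ls /coreos.com/network/subnets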

3. Kubernetes workflow

Description of the function of each cluster component:

Master node:
The master node consists mainly of four components: APIServer, scheduler, controller-manager, and etcd.

APIServer: the APIServer exposes the RESTful Kubernetes API and is the unified entry point for management commands. Any create, delete, update, or read of a resource goes through the APIServer before being persisted to etcd. As shown in the diagram, kubectl (the client tool shipped with Kubernetes, which internally calls the Kubernetes API) talks directly to the APIServer.
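
To see that kubectl is only an API client, you can raise its verbosity and watch the REST calls it issues against the APIServer (illustrative only; works once the cluster built below is running):

kubectl get pods --all-namespaces -v=8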

scheduler: the scheduler is responsible for placing Pods onto suitable Nodes. Treating the scheduler as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of the Pod to one Node. Kubernetes ships with default scheduling algorithms and also exposes an interface, so users can implement their own scheduling algorithms to fit their needs.

controller-manager: if the APIServer does the front-office work, the controller-manager handles the back office. Every resource has a corresponding controller, and the controller-manager is responsible for running these controllers. For example, when we create a Pod through the APIServer, the APIServer's job is done once the Pod object has been created; from then on it is the matching controller that keeps the actual state in line with the desired state (for instance, maintaining the desired number of replicas).

etcd: etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what makes the RESTful API possible.

Node:
Each node runs two main components: kubelet and kube-proxy.

kube-proxy: this component implements service discovery and reverse proxying in Kubernetes. kube-proxy supports TCP and UDP forwarding and by default uses a Round Robin algorithm to forward client traffic to the set of backend Pods behind a Service. For service discovery, kube-proxy uses etcd's watch mechanism to monitor changes to Service and Endpoint objects in the cluster and maintains a Service-to-Endpoint mapping, so that backend Pod IP changes are invisible to callers. kube-proxy also supports session affinity (see the sketch below).
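
As a sketch of the session affinity mentioned above (not a step in this deployment; the Service name and Pod label are hypothetical), a Service can pin each client IP to one backend Pod like this, once the cluster is up:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo
  sessionAffinity: ClientIP
  ports:
  - port: 80
    targetPort: 8080
EOF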

kubelet: the kubelet is the master's agent on each node and the most important component on a node. It maintains and manages all containers on that node, except containers that were not created through Kubernetes. In essence, it is responsible for making a Pod's actual running state match the desired state.

II. Kubernetes installation and configuration

1. Initialize the environment

1.1 Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
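
If you prefer a non-interactive edit, the same SELinux change can be made with sed (equivalent to the vi step above):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config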

1.2 Disable swap

swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap                    swap    defaults        0 0
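
The swap entry can also be commented out non-interactively; this sketch assumes the swap line in /etc/fstab is not already commented:

sed -i '/^[^#].*swap/ s/^/#/' /etc/fstab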

1.3 Set the kernel parameters required by Docker

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
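
On a minimal CentOS 7.5 install the bridge sysctls above may not exist until the br_netfilter module is loaded; if sysctl -p reports "No such file or directory", loading the module first should help (an extra step, not in the original text):

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf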

1.4 Install Docker

yum install -y yum-utils      # provides the yum-config-manager command
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker

1.5 Create the installation directories

mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

1.6 Install and configure CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

1.7 Create the certificates

Create the etcd CA signing configuration (ca-config.json)

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": { "www": { "expiry": "87600h", "usages": [ "signing", "key encipherment", "server auth", "client auth" ] } } } } EOF

Create the etcd CA CSR file (ca-csr.json)

cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Shenzhen", "ST": "Shenzhen" } ] } EOF

Create the etcd server CSR (server-csr.json)

cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "172.16.8.100",
    "172.16.8.101", "172.16.8.102" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Shenzhen", "ST": "Shenzhen" } ] } EOF

Generate the etcd CA certificate, private key, and server certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

Create the Kubernetes CA certificate

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": { "kubernetes": { "expiry": "87600h", "usages": [ "signing", "key encipherment", "server auth", "client auth" ] } } } } EOF
cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Shenzhen", "ST": "Shenzhen", "O": "k8s", "OU": "System" } ] } EOF cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the API server certificate

cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1", "172.16.8.100", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Shenzhen", "ST": "Shenzhen", "O": "k8s", "OU": "System" } ] } EOF cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Create the Kubernetes kube-proxy certificate

cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Shenzhen", "ST": "Shenzhen", "O": "k8s", "OU": "System" } ] } EOF cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

1.8 SSH key authentication

# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FQjjiRDp8IKGT+UDM+GbQLBzF3DqDJ+pKnMIcHGyO/o root@qas-k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
(randomart omitted)
+----[SHA256]-----+

# ssh-copy-id 172.16.8.101
# ssh-copy-id 172.16.8.102

2. Deploy etcd

Extract the installation files

tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
vim /k8s/etcd/cfg/etcd   
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.8.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.8.100:2379" #[Clustering] ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.8.100:2380" ETCD_ADVERTISE_CLIENT_URLS="https://172.16.8.100:2379" ETCD_INITIAL_CLUSTER="etcd01=https://172.16.8.100:2380,etcd02=https://172.16.8.101:2380,etcd03=https://172.16.8.102:2380" ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" ETCD_INITIAL_CLUSTER_STATE="new"

Create the etcd systemd unit file

vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/k8s/etcd/ssl/server.pem \
--peer-key-file=/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the certificate files

cp ca*pem server*pem /k8s/etcd/ssl

Start the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Copy the unit file and configuration to node 1 and node 2

cd /k8s/ 
scp -r etcd 172.16.8.101:/k8s/
scp -r etcd 172.16.8.102:/k8s/
scp /usr/lib/systemd/system/etcd.service  172.16.8.101:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service  172.16.8.102:/usr/lib/systemd/system/etcd.service 

On 172.16.8.101 (etcd02):

vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.8.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.8.101:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.8.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.8.101:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.8.100:2380,etcd02=https://172.16.8.101:2380,etcd03=https://172.16.8.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

On 172.16.8.102 (etcd03):

vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.8.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.8.102:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.8.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.8.102:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.8.100:2380,etcd02=https://172.16.8.101:2380,etcd03=https://172.16.8.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Verify that the cluster is healthy

./etcdctl \
--ca-file=/k8s/etcd/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--endpoints="https://172.16.8.100:2379,\
https://172.16.8.101:2379,\
https://172.16.8.102:2379" cluster-health

member 5db3ea816863435 is healthy: got healthy result from https://172.16.8.102:2379
member 991b5845cecb31b is healthy: got healthy result from https://172.16.8.101:2379
member c67ee2780d64a0d4 is healthy: got healthy result from https://172.16.8.100:2379
cluster is healthy

Note: start etcd on at least two members at the same time; a single member cannot form the cluster on its own.

3. Deploy the Flannel network

Write the cluster Pod network segment into etcd

cd /k8s/etcd/ssl/
/k8s/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://172.16.8.100:2379,\
https://172.16.8.101:2379,https://172.16.8.102:2379" \
set /coreos.com/network/config  '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
  • The current flanneld version (v0.10.0) does not support etcd v3, so the configuration key and network segment are written with the etcd v2 API (the value can be read back as shown below);
  • The Pod network segment ${CLUSTER_CIDR} written here must be a /16 range and must match the --cluster-cidr parameter of kube-controller-manager;
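
The stored value can be read back with the same etcd v2 API (run from /k8s/etcd/ssl/ like the set command above) to confirm the network configuration was written correctly:

/k8s/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://172.16.8.100:2379" \
get /coreos.com/network/config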

Extract and install

tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/

Configure Flannel

vim /k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.8.100:2379,https://172.16.8.101:2379,https://172.16.8.102:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Create the flanneld systemd unit file

vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env; when Docker starts later, it uses the environment variables in that file to configure the docker0 bridge;
  • flanneld communicates with other nodes over the interface that carries the system default route; on nodes with multiple interfaces (e.g., internal and public networks), the -iface parameter can be used to pick the interface, such as eth0 (see the sketch below);
  • flanneld needs root privileges to run;
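
For a multi-homed node, the interface can be pinned in the flanneld options file. A sketch, assuming the internal NIC is named eth0 (adjust to your environment):

FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.8.100:2379,https://172.16.8.101:2379,https://172.16.8.102:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -iface=eth0"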

Configure Docker to start on the Flannel subnet

vim /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target


Copy the flanneld systemd unit file and configuration to all nodes

cd /k8s/
scp -r kubernetes 172.16.8.101:/k8s/
scp -r kubernetes 172.16.8.102:/k8s/
scp /k8s/kubernetes/cfg/flanneld 172.16.8.101:/k8s/kubernetes/cfg/flanneld
scp /k8s/kubernetes/cfg/flanneld 172.16.8.102:/k8s/kubernetes/cfg/flanneld
scp /usr/lib/systemd/system/docker.service 172.16.8.101:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service 172.16.8.102:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/flanneld.service 172.16.8.101:/usr/lib/systemd/system/flanneld.service
scp /usr/lib/systemd/system/flanneld.service 172.16.8.102:/usr/lib/systemd/system/flanneld.service

Start the services (on every node)

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

Check that it took effect

ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:e3:57:a4 brd ff:ff:ff:ff:ff:ff
    inet 172.16.8.101/24 brd 172.16.8.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fee3:57a4/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:cf:5d:a7:af brd ff:ff:ff:ff:ff:ff
    inet 172.18.25.1/24 brd 172.18.25.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 0e:bf:c5:3b:4d:59 brd ff:ff:ff:ff:ff:ff
    inet 172.18.25.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::cbf:c5ff:fe3b:4d59/64 scope link
       valid_lft forever preferred_lft forever
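
A quick cross-node check is to ping the docker0 address of another node from this host; if the overlay works, the flannel.1 device carries the packet over VXLAN. The remote address depends on the subnet that node was assigned, so adjust it to what `ip add` shows there:

ping -c 3 172.18.25.1      # local docker0 gateway, should always answer
ping -c 3 172.18.X.1       # replace X with the docker0 subnet shown on the other node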

4. Deploy the master node

The Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
    kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one working process while the other processes block.

Extract the binaries and copy them to the master node

tar -xvf kubernetes-server-linux-amd64.tar.gz 
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/

Copy the certificates

cp *pem /k8s/kubernetes/ssl/

Deploy the kube-apiserver component

Create the TLS bootstrapping token

# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
2366a641f656a0a025abb4aabda4511b

vim /k8s/kubernetes/cfg/token.csv
2366a641f656a0a025abb4aabda4511b,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
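
The two steps above can be combined so the random token never has to be pasted by hand; a small sketch (the token value must later match the one used in environment.sh when creating the bootstrap kubeconfig):

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /k8s/kubernetes/cfg/token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo ${BOOTSTRAP_TOKEN}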

Create the apiserver configuration file

vim /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.16.8.100:2379,https://172.16.8.101:2379,https://172.16.8.102:2379 \
--bind-address=172.16.8.100 \
--secure-port=6443 \
--advertise-address=172.16.8.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Create the kube-apiserver systemd unit file

vim /usr/lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Check that the apiserver is running

ps -ef |grep kube-apiserver
root      76300      1 45 08:57 ?        00:00:14 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://172.16.8.100:2379,https://172.16.8.101:2379,https://172.16.8.102:2379 --bind-address=172.16.8.100 --secure-port=6443 --advertise-address=172.16.9.51 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem
root      76357   4370  0 08:58 pts/1    00:00:00 grep --color=auto kube-apiserver

Deploy kube-scheduler

Create the kube-scheduler configuration file

vim  /k8s/kubernetes/cfg/kube-scheduler 

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
  • --address: serve http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support serving https (a quick probe of this port is shown after this list);
  • --kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate with kube-apiserver;
  • --leader-elect=true: clustered run mode with leader election enabled; the node elected leader does the work while the other nodes block;
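
Once the scheduler is running (started below), its insecure local port can be probed as a quick health check:

curl http://127.0.0.1:10251/healthz      # expected output: ok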

Create the kube-scheduler systemd unit file

vim /usr/lib/systemd/system/kube-scheduler.service 

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service

Check that kube-scheduler is running

# ps -ef |grep kube-scheduler 
root      77854      1  8 09:17 ?        00:00:02 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      77901   1305  0 09:18 pts/0    00:00:00 grep --color=auto kube-scheduler

# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; disabled; vendor preset: disabled)
   Active: active (running) since 三 2018-12-05 09:17:43 CST; 29s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 77854 (kube-scheduler)
    Tasks: 13
   Memory: 10.9M
   CGroup: /system.slice/kube-scheduler.service
           └─77854 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

12月 05 09:17:45 qas-k8s-master01 kube-scheduler[77854]: I1205 09:17:45.642632   77854 shared_informer.go:123] caches populated
12月 05 09:17:45 qas-k8s-master01 kube-scheduler[77854]: I1205 09:17:45.743297   77854 shared_informer.go:123] caches populated
12月 05 09:17:45 qas-k8s-master01 kube-scheduler[77854]: I1205 09:17:45.844554   77854 shared_informer.go:123] caches populated
12月 05 09:17:45 qas-k8s-master01 kube-scheduler[77854]: I1205 09:17:45.945332   77854 shared_informer.go:123] caches populated
12月 05 09:17:45 qas-k8s-master01 kube-scheduler[77854]: I1205 09:17:45.945434   77854 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
12月 05 09:17:46 qas-k8s-master01 kube-scheduler[77854]: I1205 09:17:46.046385   77854 shared_informer.go:123] caches populated
12月 05 09:17:46 qas-k8s-master01 kube-scheduler[77854]: I1205 09:17:46.046427   77854 controller_utils.go:1034] Caches are synced for scheduler controller
12月 05 09:17:46 qas-k8s-master01 kube-scheduler[77854]: I1205 09:17:46.046574   77854 leaderelection.go:205] attempting to acquire leader lease kube-system/kube-scheduler...
12月 05 09:17:46 qas-k8s-master01 kube-scheduler[77854]: I1205 09:17:46.063185   77854 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
12月 05 09:17:46 qas-k8s-master01 kube-scheduler[77854]: I1205 09:17:46.164498   77854 shared_informer.go:123] caches populated

Deploy kube-controller-manager

Create the kube-controller-manager configuration file

vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"

Create the kube-controller-manager systemd unit file

vim /usr/lib/systemd/system/kube-controller-manager.service 

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

Check that kube-controller-manager is running

# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2018-12-05 09:35:00 CST; 3s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 79191 (kube-controller)
    Tasks: 8
   Memory: 15.2M
   CGroup: /system.slice/kube-controller-manager.service
           └─79191 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0....

12月 05 09:35:01 qas-k8s-master01 kube-controller-manager[79191]: I1205 09:35:01.350599   79191 serving.go:318] Generated self-signed cert in-memory
12月 05 09:35:01 qas-k8s-master01 kube-controller-manager[79191]: W1205 09:35:01.762710   79191 authentication.go:235] No authentication-kubeconfig provided in order to lookup...on't work.
12月 05 09:35:01 qas-k8s-master01 kube-controller-manager[79191]: W1205 09:35:01.762767   79191 authentication.go:238] No authentication-kubeconfig provided in order to lookup...on't work.
12月 05 09:35:01 qas-k8s-master01 kube-controller-manager[79191]: W1205 09:35:01.762792   79191 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessRev...on't work.
12月 05 09:35:01 qas-k8s-master01 kube-controller-manager[79191]: I1205 09:35:01.762827   79191 controllermanager.go:151] Version: v1.13.0
12月 05 09:35:01 qas-k8s-master01 kube-controller-manager[79191]: I1205 09:35:01.763446   79191 secure_serving.go:116] Serving securely on [::]:10257
12月 05 09:35:01 qas-k8s-master01 kube-controller-manager[79191]: I1205 09:35:01.763925   79191 deprecated_insecure_serving.go:51] Serving insecurely on 127.0.0.1:10252
12月 05 09:35:01 qas-k8s-master01 kube-controller-manager[79191]: I1205 09:35:01.764443   79191 leaderelection.go:205] attempting to acquire leader lease kube-system/kube-con...manager...
12月 05 09:35:01 qas-k8s-master01 kube-controller-manager[79191]: I1205 09:35:01.770798   79191 leaderelection.go:289] lock is held by qas-k8s-master01_fab3fbe9-f82d-11e8-9140...et expired
12月 05 09:35:01 qas-k8s-master01 kube-controller-manager[79191]: I1205 09:35:01.770817   79191 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
Hint: Some lines were ellipsized, use -l to show in full.

# ps -ef |grep kube-controller-manager
root      79191      1 10 09:35 ?        00:00:01 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem --root-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem
root      79220   1305  0 09:35 pts/0    00:00:00 grep --color=auto kube-controller-manager

Add the executable path /k8s/kubernetes/bin to the PATH variable

vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH:$HOME/bin
source /etc/profile

Check the master cluster status

# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/controller-manager   Healthy   ok

5. Deploy the nodes

Kubernetes worker nodes run the following components:

  • docker (already deployed above)
  • kubelet
  • kube-proxy

Deploy the kubelet component

  • The kubelet runs on every worker node. It receives requests from kube-apiserver to manage Pod containers and executes interactive commands such as exec, run, and logs;
  • On startup the kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage;
  • For security, this document only opens the secure (https) port, authenticates and authorizes requests, and rejects unauthorized access (e.g., from apiserver or heapster).

Copy the kubelet and kube-proxy binaries to the nodes

cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 172.16.8.101:/k8s/kubernetes/bin/
scp kubelet kube-proxy 172.16.8.102:/k8s/kubernetes/bin/

Create the kubelet bootstrap kubeconfig file

vim environment.sh
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=2366a641f656a0a025abb4aabda4511b
KUBE_APISERVER="https://172.16.8.100:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
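
The script expects ca.pem, kube-proxy.pem, and kube-proxy-key.pem (generated in step 1.7) in the current directory, so a reasonable way to run it is from the directory holding those files. This sketch assumes they were all copied to /k8s/kubernetes/ssl/ in the earlier "Copy the certificates" step and that the script was saved there as environment.sh:

cd /k8s/kubernetes/ssl/
sh environment.sh
ls bootstrap.kubeconfig kube-proxy.kubeconfig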

Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files to all nodes

cp bootstrap.kubeconfig kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 172.16.8.101:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 172.16.8.102:/k8s/kubernetes/cfg/

Create the kubelet parameter configuration file and copy it to all nodes

Create the kubelet parameter configuration template:

vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.16.8.100
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
au