
High Availability with kubeadm


Installing and deploying k8s v1.11

Introduction to K8s
1. Background
  Cloud computing has been developing rapidly
    - IaaS
    - PaaS
    - SaaS
  Docker technology has advanced by leaps and bounds
    - Build once, run anywhere
    - Fast, lightweight containers
    - A complete ecosystem
2. What is Kubernetes
  First of all, it is a new, leading distributed-architecture solution built on container technology. Kubernetes (k8s) is Google's open-source container cluster management system (known inside Google as Borg). Building on Docker, it gives containerized applications deployment and operation, resource scheduling, service discovery, dynamic scaling and a complete set of related features, making large container clusters far easier to manage.
  Kubernetes is a complete distributed system platform with full cluster management capabilities: multi-layer security and admission control, multi-tenancy support, transparent service registration and discovery, a built-in intelligent load balancer, strong fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduler, and fine-grained resource quota management. Kubernetes also ships with complete management tooling covering development, deployment, testing, and operations monitoring.

In Kubernetes, the Service is the core of the distributed cluster architecture. A Service object has the following key characteristics:
- It has a unique, assigned name
- It has a virtual IP (Cluster IP, Service IP, or VIP) and a port number
- It provides some kind of remote service capability
- It is mapped to a group of container applications that provide that capability
  A Service's backing processes currently provide their service over sockets, for example Redis, Memcache, MySQL, a web server, or a TCP server process implementing some specific business logic. Although a Service is usually backed by multiple related service processes, each with its own endpoint (IP + port), Kubernetes lets us connect to the specified Service itself. Thanks to Kubernetes' built-in transparent load balancing and failure recovery, no matter how many backend processes there are, or whether a process is redeployed to another machine after a failure, our calls to the Service are unaffected. More importantly, the Service itself does not change once created, which means that inside a Kubernetes cluster we no longer have to worry about service IP addresses changing.
  Containers provide strong isolation, so it makes sense to put the group of processes backing a Service into containers. For this, Kubernetes designed the Pod object: each service process is wrapped into a corresponding Pod and becomes a container running inside that Pod. To establish the association between Services and Pods, Kubernetes attaches a Label to each Pod, for example name=mysql on a Pod running MySQL and name=php on a Pod running PHP, and then defines a Label Selector on the corresponding Service. This neatly solves the problem of associating Services with Pods.
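To make the Label/Label Selector relationship above concrete, here is a minimal sketch of a Pod carrying a name=mysql Label and a Service that selects it (the object names, image and password below are made up purely for illustration):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
  labels:
    name: mysql              # the Label that the Service selects on
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "123456"        # required for the mysql image to start
    ports:
    - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    name: mysql              # Label Selector: traffic goes to Pods labelled name=mysql
  ports:
  - port: 3306
    targetPort: 3306
EOF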
  For cluster management, Kubernetes divides the machines in a cluster into a Master node and a group of worker nodes (Nodes). The Master runs a set of cluster-management processes: kube-apiserver, kube-controller-manager and kube-scheduler. These processes provide resource management, Pod scheduling, elastic scaling, security control, monitoring and error correction for the whole cluster, all fully automated. Nodes are the worker machines that run the actual applications; the smallest unit of execution Kubernetes manages on a Node is the Pod. Nodes run the kubelet and kube-proxy service processes, which are responsible for creating, starting, monitoring, restarting and destroying Pods, as well as implementing software-mode load balancing.
  Kubernetes solves the two classic problems of traditional IT systems: service scale-out and upgrades. You only need to create a Replication Controller (RC) for the Pods associated with the Service you want to scale, and scaling and subsequent upgrades of that Service are taken care of. An RC definition file includes the following three key pieces of information.
- The definition of the target Pod
- The number of replicas the target Pod should run (Replicas)
- The Label of the target Pods to monitor
  After the RC is created, Kubernetes uses the Label defined in the RC to select the matching Pod instances and continuously monitors their state and count. If the number of instances is lower than the defined replica count, a new Pod is created from the Pod template defined in the RC and scheduled to a suitable Node, until the number of Pod instances reaches the target. The whole process is fully automated.
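A minimal RC definition containing the three pieces of information above might look like the following sketch (the nginx image and the names are only for illustration):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3                # number of replicas the target Pod should run
  selector:
    app: nginx               # Label of the target Pods to monitor
  template:                  # definition of the target Pod
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - containerPort: 80
EOF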
  
 Advantages of Kubernetes:
    - Container orchestration
    - Lightweight
    - Open source
    - Elastic scaling
    - Load balancing
Kubernetes core concepts
1.Master
  The management node of a k8s cluster. It is responsible for managing the cluster and provides the entry point for accessing the cluster's resource data. It holds the Etcd storage service (optional), runs the API Server, Controller Manager and Scheduler processes, and is associated with the worker Nodes. The Kubernetes API Server is the key process exposing the HTTP REST interface; it is the single entry point for all create, delete, update and query operations on Kubernetes resources, and also the entry process for cluster control. The Kubernetes Controller Manager is the automation control center for all Kubernetes resource objects. The Kubernetes Scheduler is the process responsible for resource scheduling (Pod scheduling).
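On a running cluster these Master processes show up as static Pods in the kube-system namespace; a quick way to look at them once kubectl is configured (shown only as an illustration, since the cluster is built later in this article):
kubectl get pods -n kube-system -o wide
# expect one kube-apiserver-*, kube-controller-manager-*, kube-scheduler-* and etcd-* Pod per master node,
# plus kube-proxy and the DNS add-on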

2.Node
  A Node is a service node in the Kubernetes cluster architecture that runs Pods (also called an agent or minion). The Node is the unit of operation of a Kubernetes cluster: it hosts the Pods assigned to it and is the host machine on which Pods run. It is associated with the Master management node and has a name, an IP address and system resource information. It runs the Docker Engine service, the kubelet daemon and the kube-proxy load balancer.
  Every Node runs the following set of key processes:
- kubelet: responsible for creating, starting and stopping the containers that belong to Pods
- kube-proxy: an important component implementing the communication and load-balancing mechanism of Kubernetes Services
- Docker Engine (Docker): the Docker engine, responsible for creating and managing containers on the local machine
  Nodes can be added to a Kubernetes cluster dynamically at runtime. By default the kubelet registers itself with the Master; this is the Node management approach recommended by Kubernetes. The kubelet process periodically reports its own information to the Master, such as the operating system, Docker version, CPU and memory, and which Pods are running, so the Master knows the resource usage of every Node and can implement an efficient, balanced resource-scheduling strategy.

3.Pod
  A Pod is a group of related containers running on a Node. The containers inside a Pod run on the same host and share the same network namespace, IP address and ports, so they can communicate with each other through localhost. The Pod is the smallest unit that Kubernetes creates, schedules and manages; it provides a higher level of abstraction than containers, making deployment and management more flexible. A Pod can contain a single container or several related containers.
  There are actually two types of Pods: normal Pods and static Pods. The latter are special: they are not stored in Kubernetes' etcd store, but in a specific file on a specific Node, and they only start on that Node. A normal Pod, once created, is stored in etcd, then scheduled by the Kubernetes Master and bound to a specific Node, after which the kubelet process on that Node instantiates the Pod as a group of related Docker containers and starts them. By default, when a container inside a Pod stops, Kubernetes automatically detects the problem and restarts the Pod (restarting all containers in the Pod); if the Node hosting the Pod goes down, all Pods on that Node are rescheduled to other nodes.

4.Replication Controller
  The Replication Controller manages Pod replicas and ensures that the specified number of Pod replicas exist in the cluster. If the number of replicas in the cluster is greater than the specified number, the excess containers are stopped; if it is lower, new containers are started until the specified number is reached. The Replication Controller is the core mechanism behind elastic scaling, dynamic expansion and rolling upgrades.

5.Service
  A Service defines a logical set of Pods and a policy for accessing that set; it is an abstraction of a real service. A Service provides a single service access entry point together with service proxying and discovery, and it is associated with multiple Pods carrying the same Label; users do not need to know how the backend Pods run.
The problem of external systems accessing a Service
  First you need to understand the three kinds of IPs in Kubernetes
    Node IP: the IP address of a Node
    Pod IP: the IP address of a Pod
    Cluster IP: the IP address of a Service
  First, the Node IP is the IP address of the physical network interface of a node in the Kubernetes cluster. All servers on this network can communicate with each other directly through it. This also means that when a node outside the Kubernetes cluster needs to reach a node or a TCP/IP service inside the cluster, it must communicate via a Node IP.
  Second, the Pod IP is the IP address of each Pod. It is allocated by the Docker Engine from the address range of the docker0 bridge and is usually a virtual layer-2 network.
  Finally, the Cluster IP is a virtual IP, and behaves more like a fake IP network, for the following reasons:
- The Cluster IP only applies to the Kubernetes Service object, and the IP address is managed and allocated by Kubernetes
- The Cluster IP cannot be pinged, because there is no "physical network object" to respond
- A Cluster IP can only form a concrete communication endpoint in combination with a Service Port; a Cluster IP on its own has no basis for communication, and Cluster IPs belong to the closed space of the Kubernetes cluster
Inside a Kubernetes cluster, traffic between the Node IP network, the Pod IP network and the Cluster IP network is handled by special, programmatically configured routing rules designed by Kubernetes itself.
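A quick way to observe the three kinds of IPs on a working cluster (a sketch; it assumes kubectl is already configured):
kubectl get nodes -o wide    # the INTERNAL-IP column is the Node IP
kubectl get pods -o wide     # the IP column is the Pod IP
kubectl get svc              # the CLUSTER-IP column is the Cluster IP (Service IP)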

6.Label
 Any API object in Kubernetes can be identified by Labels. A Label is essentially a set of key/value pairs, where both key and value are chosen by the user. Labels can be attached to all kinds of resource objects, such as Nodes, Pods, Services and RCs; a resource object can define any number of Labels, and the same Label can be attached to any number of resource objects. Labels are the foundation on which Replication Controllers and Services operate: both use Labels to associate themselves with the Pods running on Nodes.
We can implement multi-dimensional resource grouping by binding one or more Labels to a given resource object, which makes resource allocation, scheduling and configuration flexible and convenient.
Some commonly used Labels are:
- Version labels: "release":"stable", "release":"canary" ...
- Environment labels: "environment":"dev", "environment":"qa", "environment":"production"
- Tier labels: "tier":"frontend", "tier":"backend", "tier":"middleware"
- Partition labels: "partition":"customerA", "partition":"customerB"
- Quality-control labels: "track":"daily", "track":"weekly"
  A Label works like the tags we are all familiar with: defining a Label on a resource object is like sticking a tag on it. We can then use a Label Selector to query and filter resource objects carrying certain Labels. In this way Kubernetes implements a simple, generic, SQL-like object query mechanism.
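For example, this query mechanism is exposed through the -l/--selector flag of kubectl (the first two commands use the sample labels listed above; disktype=ssd is a made-up Node label):
kubectl get pods -l "environment=production,tier=frontend"    # equality-based selector
kubectl get pods -l "environment in (production, qa)"         # set-based selector
kubectl get nodes -l "disktype=ssd"                           # Labels and selectors work on Nodes, Services, RCs, etc. as well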

  The important usage scenarios of Label Selectors in Kubernetes are as follows:
- The kube-controller-manager process uses the Label Selector defined on an RC to filter the Pod replicas it needs to monitor, so that the number of replicas always matches the desired value, fully automatically
- The kube-proxy process uses a Service's Label Selector to select the corresponding Pods and automatically builds the request-forwarding routing table from each Service to its Pods, implementing the Service's intelligent load balancing
- By defining specific Labels on certain Nodes and using the nodeSelector scheduling policy in a Pod definition, the kube-scheduler process can implement "directed scheduling" of Pods (a sketch follows below)
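A minimal sketch of the "directed scheduling" mentioned in the last point: attach a Label to a Node, then reference it from a Pod's nodeSelector (the disktype=ssd key/value is made up for illustration):
kubectl label nodes node-1 disktype=ssd      # attach the Label to the Node
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd
spec:
  nodeSelector:
    disktype: ssd            # kube-scheduler will only place this Pod on Nodes carrying this Label
  containers:
  - name: nginx
    image: nginx:1.14-alpine
EOF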

Kubernetes architecture and components

Kubernetes components:
  The Kubernetes Master control components schedule and manage the whole system (cluster) and include the following:
  1.Kubernetes API Server
    The entry point of the Kubernetes system. It wraps the create/delete/update/query operations on the core objects and exposes them as a RESTful API for external clients and internal components. The REST objects it maintains are persisted to Etcd.
  2.Kubernetes Scheduler
    Selects a node for newly created Pods (i.e. assigns them a machine) and is responsible for resource scheduling in the cluster. The component is decoupled, so it can easily be replaced by another scheduler.
  3.Kubernetes Controller
    Responsible for running the various controllers; many controllers are already provided to keep Kubernetes running correctly.
  4. Replication Controller
    Manages and maintains Replication Controllers, associates Replication Controllers with Pods, and ensures that the number of replicas defined by a Replication Controller matches the number of Pods actually running.
  5. Node Controller
    Manages and maintains Nodes, periodically checks Node health, and marks Nodes as (failed | healthy).
  6. Namespace Controller
    Manages and maintains Namespaces and periodically cleans up invalid Namespaces, including the API objects under them such as Pods and Services.
  7. Service Controller
    Manages and maintains Services and provides load balancing and service proxying.
  8.EndPoints Controller
    Manages and maintains Endpoints, associates Services with Pods, and creates Endpoints as the backends of Services; when Pods change, the Endpoints are updated in real time.
  9. Service Account Controller
    Manages and maintains Service Accounts, creates a default Service Account for each Namespace, and creates the Service Account Secrets for Service Accounts.
  10. Persistent Volume Controller
    Manages and maintains Persistent Volumes and Persistent Volume Claims, binds Persistent Volumes to new Persistent Volume Claims, and performs cleanup and reclamation of released Persistent Volumes.
  11. Daemon Set Controller
    Manages and maintains Daemon Sets, is responsible for creating Daemon Pods, and ensures the Daemon Pods run properly on the specified Nodes.
  12. Deployment Controller
    Manages and maintains Deployments, associates Deployments with Replication Controllers, and ensures the specified number of Pods is running. When a Deployment is updated, it drives the update of the Replication Controller and the Pods.
  13.Job Controller
    Manages and maintains Jobs, creates one-off task Pods for Jobs, and ensures the number of completions specified by the Job is reached.
  14. Pod Autoscaler Controller
    Implements automatic Pod scaling: it periodically fetches monitoring data, evaluates it against the scaling policies, and performs scaling actions on Pods when the conditions are met.

The Kubernetes Node worker nodes run and manage the business containers and include the following components:
  1.Kubelet
    Responsible for managing containers. The Kubelet receives Pod creation requests from the Kubernetes API Server, starts and stops containers, monitors container state and reports it back to the Kubernetes API Server.
  2.Kubernetes Proxy
    Responsible for creating proxy services for Pods. The Kubernetes Proxy fetches all Service information from the Kubernetes API Server and creates proxies accordingly, routing and forwarding requests from Services to Pods, thereby implementing the Kubernetes-level virtual forwarding network.
  3.Docker
    The container runtime service that must be running on the Node.
Deploying k8s

Environment description:

OS                 IP address        Hostname   Packages
CentOS7.3-x86_64   192.168.200.200   Master-1   Docker, kubeadm
CentOS7.3-x86_64   192.168.200.201   Master-2   Docker, kubeadm
CentOS7.3-x86_64   192.168.200.202   Master-3   Docker, kubeadm
CentOS7.3-x86_64   192.168.200.203   Node-1     Docker

Environment architecture diagram: (image not reproduced here)


Deploying the base environment

1.1 Install Docker-CE
1. Check the master system information:
[root@master ~]# hostname
master
[root@master ~]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)
[root@master ~]# uname -r
3.10.0-514.el7.x86_64
[root@master-1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.200 master-1
192.168.200.201 master-2
192.168.200.202 master-3
192.168.200.203 node-1
2. Check the minion (node) system information:
[root@master ~]# hostname
master
[root@master ~]# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)
[root@master ~]# uname -r
3.10.0-862.el7.x86_64
3. Install the dependency packages:
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
4. Configure the Aliyun mirror repository
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
5. Install Docker-CE
[root@master ~]# yum install docker-ce -y
6. Start Docker-CE
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl start docker
1.2 Install Kubeadm

  1. To install kubeadm, first configure the Aliyun domestic mirror by running the following command:
    [root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    EOF
  2. Run the following commands to rebuild the Yum cache
    [root@master ~]# yum -y install epel-release
    [root@master ~]# yum clean all
    [root@master ~]# yum makecache
  3. Install Kubeadm
    [root@master ~]# yum -y install kubelet kubeadm kubectl kubernetes-cni
  4. Enable and start the kubelet service
    [root@master ~]# systemctl enable kubelet && systemctl start kubelet
    1.3 Configure the images used by kubeadm
    [root@master ~]# vim k8s.sh
    #!/bin/bash
    images=(kube-proxy-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-apiserver-amd64:v1.11.0
    etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9
    k8s-dns-dnsmasq-nanny-amd64:1.14.9 )
    for imageName in ${images[@]} ; do
    docker pull keveon/$imageName
    docker tag keveon/$imageName k8s.gcr.io/$imageName
    docker rmi keveon/$imageName
    done
    # A line I added myself; required for v1.11.0
    docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1
    [root@master ~]# sh k8s.sh
    1.4 Turn off swap
    [root@master ~]# swapoff -a
    [root@master ~]# vi /etc/fstab
    #
    #/etc/fstab
    #Created by anaconda on Sun May 27 06:47:13 2018
    #
    #Accessible filesystems, by reference, are maintained under '/dev/disk'
    #See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/cl-root / xfs defaults 0 0
    UUID=07d1e156-eba8-452f-9340-49540b1c2bbb /boot xfs defaults 0 0
    #/dev/mapper/cl-swap swap swap defaults 0 0
    It is also possible to leave swap on; in that case the swap error has to be skipped during initialization, and the configuration file is modified as follows:
    [root@master manifors]# vim /etc/sysconfig/kubelet
    KUBELET_EXTRA_ARGS="--fail-swap-on=false" # do not fail when swap is on
    KUBE_PROXY_MODE=ipvs # enable ipvs; if not defined, kube-proxy falls back to iptables
    To enable ipvs, the kernel modules must be installed and loaded in advance, for example as sketched below.
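A minimal sketch of preparing the ipvs kernel modules (module names as used on the CentOS 7.3 kernel 3.10, where the conntrack module is still called nf_conntrack_ipv4; this step is only needed if you intend to use the ipvs proxy mode):
[root@master ~]# yum install -y ipset ipvsadm
[root@master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4   # verify the modules are loaded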

1.5 Turn off SELinux and the firewall
[root@master ~]# setenforce 0
[root@master-1 ~]# systemctl stop firewalld
1.6 Configure the forwarding parameters
[root@master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
[root@master ~]# sysctl --system
Note: all of the base-environment steps above must be performed on every node, including the worker node
Deploying the three-master high-availability setup
Deploying keepalived on the master nodes
2.1. keepalived is deployed on all masters and the node machines connect to the VIP, which is what provides the high availability
[root@master-1 ~]# yum install -y keepalived
2.2. The keepalived configuration is as follows (the configuration below is for reference only):
[root@master-1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
    }
}
2.3. Start keepalived and test it:
[root@master-1 ~]# systemctl start keepalived
[root@master-1 ~]# ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:29:85:5b brd ff:ff:ff:ff:ff:ff
inet 192.168.200.200/24 brd 192.168.200.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.200.16/32 scope global ens33 #the VIP has appeared
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe29:855b/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:48:16:c2:20 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
Do the same on the other two masters
2.4. keepalived configuration on Master-2
[root@master-2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90 # lower priority than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
    }
}
2.5. keepalived configuration on Master-3
[root@master-3 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
    }
}
2.6. Testing keepalived
[root@master-2 ~]# systemctl start keepalived
[root@master-2 ~]# ip a show
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:0f:6b:3a brd ff:ff:ff:ff:ff:ff
inet 192.168.200.201/24 brd 192.168.200.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe0f:6b3a/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e8:3a:d6:b1 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever

[root@master-3 ~]# systemctl start keepalived
[root@master-3 ~]# ip a show
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:1a:fd:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.202/24 brd 192.168.200.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe1a:fd85/64 scope link
valid_lft forever preferred_lft forever
You will see that neither master-2 nor master-3 has the VIP; now stop keepalived on master-1
[root@master-1 ~]# systemctl stop keepalived
[root@master-1 ~]# ip a show
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:29:85:5b brd ff:ff:ff:ff:ff:ff
inet 192.168.200.200/24 brd 192.168.200.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe29:855b/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:48:16:c2:20 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
There is no VIP on master-1 any more
[root@master-2 ~]# ip a show
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:0f:6b:3a brd ff:ff:ff:ff:ff:ff
inet 192.168.200.201/24 brd 192.168.200.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.200.16/32 scope global ens33 #the VIP has moved to master-2
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe0f:6b3a/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e8:3a:d6:b1 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever

[root@master-3 ~]# ip a show #master-3 still does not have it
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:1a:fd:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.202/24 brd 192.168.200.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe1a:fd85/64 scope link
valid_lft forever preferred_lft forever
Next, stop keepalived on master-2
[root@master-2 ~]# systemctl stop keepalived
[root@master-2 ~]# ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:0f:6b:3a brd ff:ff:ff:ff:ff:ff
inet 192.168.200.201/24 brd 192.168.200.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe0f:6b3a/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e8:3a:d6:b1 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever

[root@master-3 ~]# ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:1a:fd:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.202/24 brd 192.168.200.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.200.16/32 scope global ens33 #you will find the VIP has floated to master-3
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe1a:fd85/64 scope link
valid_lft forever preferred_lft forever

The above proves that keepalived is working correctly
Note: since this experiment uses the default configuration throughout, keepalived runs in its default preemptive mode, so when keepalived is started again on each node in turn, the VIP ends up back on master-1.
If you see the VIP on all three machines right after starting keepalived, don't panic; most likely your firewall or SELinux has not been turned off.
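If you prefer the VIP not to fail back automatically when master-1 comes up again, keepalived's preemption can be disabled. A sketch of such a vrrp_instance (this variant is not used in this article; with nopreempt, every node is configured with state BACKUP and keeps its own priority):
vrrp_instance VI_1 {
    state BACKUP            # with nopreempt, use BACKUP on every node
    nopreempt               # do not take the VIP back even if this node has the higher priority
    interface ens33
    virtual_router_id 51
    priority 100            # keep the per-node priorities (100 / 90 / 80) as before
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
    }
}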
2.7. Configuring kubelet
# Configure kubelet to use the Aliyun pause image; the official image is blocked in China and kubelet cannot start without it
[root@master-1 ~]# cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF
# Reload the systemd configuration for kubelet
[root@master-1 ~]# systemctl daemon-reload
# Enable kubelet at boot, but do not start it yet
[root@master-1 ~]# systemctl enable kubelet
Note: the steps above must be performed on all nodes
Configuring the first master node:
3.1 Write a configuration script that generates the configuration file in one go
[root@master-1 ~]# vim kube.sh

#!/bin/bash
# Set node environment variables; the ip/hostname information below is referenced through these variables
# Adjust the IPs below to match your own environment
CP0_IP="192.168.200.200"
CP0_HOSTNAME="master-1"
CP1_IP="192.168.200.201"
CP1_HOSTNAME="master-2"
CP2_IP="192.168.200.202"
CP2_HOSTNAME="master-3"
ADVERTISE_VIP="192.168.200.16" # this is the VIP

# Generate the kubeadm configuration file
cat > kubeadm-master.config <<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
# kubernetes version
kubernetesVersion: v1.11.1
# use the Aliyun mirror registry
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "$CP0_HOSTNAME"
- "$CP0_IP"
- "$ADVERTISE_VIP"
- "127.0.0.1"

api:
  advertiseAddress: $CP0_IP
  controlPlaneEndpoint: $ADVERTISE_VIP:6443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP0_IP:2379"
      advertise-client-urls: "https://$CP0_IP:2379"
      listen-peer-urls: "https://$CP0_IP:2380"
      initial-advertise-peer-urls: "https://$CP0_IP:2380"
      initial-cluster: "$CP0_HOSTNAME=https://$CP0_IP:2380"
    serverCertSANs:
    - $CP0_HOSTNAME
    - $CP0_IP

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16

kubeProxy:
  config:
    # mode: ipvs  # proxy mode; k8s 1.11+ uses ipvs by default when it is enabled on the system,
    #             # and falls back to iptables otherwise. ipvs is not enabled here, so iptables is used.
    mode: iptables
EOF
3.2 Run the script to generate the configuration file
[root@master-1 ~]# sh kube.sh
[root@master-1 ~]# ls
anaconda-ks.cfg k8s.sh kubeadm-master.config kube.sh
3.3 Pull the images required by the configuration file
[root@master-1 ~]# kubeadm config images pull --config kubeadm-master.config
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.12.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.12.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.12.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.12.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.18
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3
3.4 Initialize
[root@master-1 ~]# kubeadm init --config kubeadm-master.config --ignore-preflight-errors='SystemVerification'
.................... # earlier output omitted
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.200.16:6443 --token g8hzyf.8185orcoq845489s --discovery-token-ca-cert-hash sha256:759e92bee1931f6e89deefd30c0d62ad4ec810bd40688e43ee7e1749ccd6c370
#The kubeadm join line above is the command needed later when joining node machines to the cluster

Note: the --ignore-preflight-errors part does not have to be added; I added it here because my Docker version caused a preflight error, and this option lets kubeadm ignore it.
3.5 Check the node status
[root@master-1 ~]# mkdir -p $HOME/.kube
[root@master-1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-1 NotReady master 5m42s v1.11.1
#You will see there is only one master and its status is NotReady
3.6 Install the network plugin
[root@master-1 ~]# kubectl apply -f https://git.io/weave-kube-1.6
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
3.7 Check the node status again
[root@master-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-1 Ready master 18m v1.11.1
#The status is now Ready
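Optionally, before moving on you can also confirm that the weave network Pod itself is running (a quick check; the Pod name suffix will differ in your environment):
[root@master-1 ~]# kubectl get pods -n kube-system | grep weave
# the weave-net Pod on this node should show READY 2/2 and STATUS Running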
3.8 Copy the certificates generated on this master to the other master nodes
[root@master-1 ~]# cd /etc/kubernetes
[root@master-1 kubernetes]# tar cvzf k8s-key.tgz pki/ca.* pki/sa.* pki/front-proxy-ca.* pki/etcd/ca.*
[root@master-1 kubernetes]# ls
admin.conf controller-manager.conf k8s-key.tgz kubelet.conf manifests pki scheduler.conf
[root@master-1 kubernetes]# scp /etc/kubernetes/k8s-key.tgz 192.168.200.201:/etc/kubernetes
[root@master-1 kubernetes]# scp /etc/kubernetes/k8s-key.tgz 192.168.200.202:/etc/kubernetes
Configuring the second master node
4.1 First, unpack the certificate archive uploaded from master-1
[root@master-2 ~]# cd /etc/kubernetes/
[root@master-2 kubernetes]# ls
k8s-key.tgz manifests
[root@master-2 kubernetes]# tar xf k8s-key.tgz
[root@master-2 kubernetes]# ls
k8s-key.tgz manifests pki
4.2 Write the script that generates the configuration file
[root@master-2 kubernetes]# vim kube.sh

#!/bin/bash
# Set node environment variables; the ip/hostname information below is referenced through these variables
CP0_IP="192.168.200.200"
CP0_HOSTNAME="master-1"
CP1_IP="192.168.200.201"
CP1_HOSTNAME="master-2"
CP2_IP="192.168.200.202"
CP2_HOSTNAME="master-3"
ADVERTISE_VIP="192.168.200.16"

# Generate the kubeadm configuration file
cat > kubeadm-master.config <<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
# kubernetes version
kubernetesVersion: v1.11.1
# use the Aliyun mirror registry
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "$CP1_HOSTNAME"
- "$CP1_IP"
- "$ADVERTISE_VIP"
- "127.0.0.1"

api:
  advertiseAddress: $CP1_IP
  controlPlaneEndpoint: $ADVERTISE_VIP:6443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP1_IP:2379"
      advertise-client-urls: "https://$CP1_IP:2379"
      listen-peer-urls: "https://$CP1_IP:2380"
      initial-advertise-peer-urls: "https://$CP1_IP:2380"
      initial-cluster: "$CP0_HOSTNAME=https://$CP0_IP:2380,$CP1_HOSTNAME=https://$CP1_IP:2380"
      initial-cluster-state: existing
    serverCertSANs:
    - $CP1_HOSTNAME
    - $CP1_IP
    peerCertSANs:
    - $CP1_HOSTNAME
    - $CP1_IP

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16

kubeProxy:
  config:
    # mode: ipvs
    mode: iptables
EOF
Note: this configuration file differs from the one on master-1, so do not simply copy the master-1 file over
4.3 Run the script to generate the file
[root@master-2 kubernetes]# sh kube.sh
[root@master-2 kubernetes]# ls
k8s-key.tgz kubeadm-master.config kube.sh manifests pki
4.4 Pull the images required by the configuration file
[root@master-2 kubernetes]# kubeadm config images pull --config kubeadm-master.config
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.11.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.11.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.11.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.11.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.18
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3
4.5 Generate the certificates
[root@master-2 kubernetes]# kubeadm alpha phase certs all --config kubeadm-master.config
4.6 Generate the kubelet configuration files
[root@master-2 kubernetes]# kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config
[root@master-2 kubernetes]# kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config
[root@master-2 kubernetes]# kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config
4.7 Start kubelet
[root@master-2 kubernetes]# systemctl restart kubelet
4.8 Generate the kubeconfig files for the control-plane components (kube-apiserver, kube-controller-manager, kube-scheduler, etc.)
[root@master-2 kubernetes]# kubeadm alpha phase kubeconfig all --config kubeadm-master.config
4.9 Set up the default kubectl config file
[root@master-2 kubernetes]# mkdir ~/.kube
[root@master-2 kubernetes]# cp /etc/kubernetes/admin.conf ~/.kube/config
4.10 Check node status
[root@master-2 kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-1 Ready master 58m v1.11.1
master-2 Ready <none> 2m37s v1.11.1
4.11 Add master-2 to the etcd cluster
[root@master-2 kubernetes]# kubectl exec -n kube-system etcd-master-1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://192.168.200.200:2379 member add master-2 https://192.168.200.201:2380
Added member named master-2 with ID 25974124f9b03316 to cluster

ETCD_NAME="master-2"
ETCD_INITIAL_CLUSTER="master-2=https://192.168.200.201:2380,master-1=https://192.168.200.200:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
4.12 Deploy the etcd static pod
[root@master-2 kubernetes]# kubeadm alpha phase etcd local --config kubeadm-master.config
4.13 Check the etcd members
[root@master-2 kubernetes]# kubectl exec -n kube-system etcd-master-1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://192.168.200.200:2379 member list

25974124f9b03316: name=master-2 peerURLs=https://192.168.200.201:2380 clientURLs=https://192.168.200.201:2379 isLeader=false
f1751f15f702dfc9: name=master-1 peerURLs=https://192.168.200.200:2380 clientURLs=https://192.168.200.200:2379 isLeader=true
4.14 Deploy the control-plane static pod manifests; kubelet will start the components automatically
[root@master-2 kubernetes]# kubeadm alpha phase controlplane all --config kubeadm-master.config
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
4.15 Now check the nodes and the running pods
[root@master-2 kubernetes]# kubectl get pods --all-namespaces -o wide |grep master-2
kube-system etcd-master-2 1/1 Running 0 3m20s 192.168.200.201 master-2 <none>
kube-system kube-apiserver-master-2 1/1 Running 0 91s 192.168.200.201 master-2 <none>
kube-system kube-controller-manager-master-2 1/1 Running 0 91s 192.168.200.201 master-2 <none>
kube-system kube-proxy-dn8rr 1/1 Running 0 9m57s 192.168.200.201 master-2 <none>
kube-system kube-scheduler-master-2 1/1 Running 0 91s 192.168.200.201 master-2 <none>
kube-system weave-net-tvr6n 2/2 Running 2 9m57s 192.168.200.201 master-2 <none>
4.16 Mark it as a master node
[root@master-2 kubernetes]# kubeadm alpha phase mark-master --config kubeadm-master.config
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[markmaster] Marking the node master-2 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master-2 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[root@master-2 kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-1 Ready master 69m v1.11.1
master-2 Ready master 13m v1.11.1
#master-2 is now running normally as well
Deploying the third master node
5.1 First, unpack the certificate archive sent over from master-1
[root@master-3 ~]# cd /etc/kubernetes/
[root@master-3 kubernetes]# ls
k8s-key.tgz manifests
[root@master-3 kubernetes]# tar xf k8s-key.tgz
[root@master-3 kubernetes]# ls
k8s-key.tgz manifests pki
5.2 Write the script that generates the configuration file, and generate it
[root@master-3 ~]# vim kube.sh

#!/bin/bash
# Set node environment variables; the ip/hostname information below is referenced through these variables
CP0_IP="192.168.200.200"
CP0_HOSTNAME="master-1"
CP1_IP="192.168.200.201"
CP1_HOSTNAME="master-2"
CP2_IP="192.168.200.202"
CP2_HOSTNAME="master-3"
ADVERTISE_VIP="192.168.200.16"

# Generate the kubeadm configuration file; besides the changed IPs, the main difference from the first master is the etcd member-addition configuration
cat > kubeadm-master.config <<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
# kubernetes version
kubernetesVersion: v1.11.1
# use the Aliyun mirror registry
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "$CP2_HOSTNAME"
- "$CP2_IP"
- "$ADVERTISE_VIP"
- "127.0.0.1"

api:
  advertiseAddress: $CP2_IP
  controlPlaneEndpoint: $ADVERTISE_VIP:6443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP2_IP:2379"
      advertise-client-urls: "https://$CP2_IP:2379"
      listen-peer-urls: "https://$CP2_IP:2380"
      initial-advertise-peer-urls: "https://$CP2_IP:2380"
      initial-cluster: "$CP0_HOSTNAME=https://$CP0_IP:2380,$CP1_HOSTNAME=https://$CP1_IP:2380,$CP2_HOSTNAME=https://$CP2_IP:2380"
      initial-cluster-state: existing
    serverCertSANs:
    - $CP2_HOSTNAME
    - $CP2_IP
    peerCertSANs:
    - $CP2_HOSTNAME
    - $CP2_IP

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16

kubeProxy:
  config:
    # mode: ipvs
    mode: iptables
EOF
[root@master-3 kubernetes]# cd
[root@master-3 ~]# vim kube.sh
[root@master-3 ~]# sh kube.sh
[root@master-3 ~]# ls
anaconda-ks.cfg k8s.sh kubeadm-master.config kube.sh
5.3 Pull the images
[root@master-3 ~]# kubeadm config images pull --config kubeadm-master.config
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.12.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.12.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.12.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.12.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.18
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3
5.4 Generate the certificates and related configuration files
[root@master-3 ~]# kubeadm alpha phase certs all --config kubeadm-master.config
[root@master-3 ~]# kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config
[root@master-3 ~]# kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config
[root@master-3 ~]# kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config
5.5 Start kubelet
[root@master-3 ~]# systemctl restart kubelet
5.6 Generate the kubeconfig files for the control-plane components (kube-apiserver, kube-controller-manager, kube-scheduler, etc.)
[root@master-3 ~]# kubeadm alpha phase kubeconfig all --config kubeadm-master.config
5.7 Set up the default kubectl config file
[root@master-3 ~]# mkdir ~/.kube
[root@master-3 ~]# cp /etc/kubernetes/admin.conf ~/.kube/config
5.8 Add master-3 to the etcd cluster
[root@master-3 ~]# kubectl exec -n kube-system etcd-master-1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://192.168.200.200:2379 member add master-3 https://192.168.200.202:2380
Added member named master-3 with ID b757e394754001e8 to cluster

ETCD_NAME="master-3"
ETCD_INITIAL_CLUSTER="master-2=https://192.168.200.201:2380,master-3=https://192.168.200.202:2380,master-1=https://192.168.200.200:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
5.9 Deploy the etcd static pod
[root@master-3 ~]# kubeadm alpha phase etcd local --config kubeadm-master.config
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
5.10 Check the etcd members
[root@master-3 ~]# kubectl exec -n kube-system etcd-master-1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://192.168.200.200:2379 member list
4e28e0057914c780: name=master-2 peerURLs=https://192.168.200.201:2380 clientURLs=https://192.168.200.201:2379 isLeader=false
b757e394754001e8[unstarted]: peerURLs=https://192.168.200.202:2380
f1751f15f702dfc9: name=master-1 peerURLs=https://192.168.200.200:2380 clientURLs=https://192.168.200.200:2379 isLeader=true
5.11 Deploy the control-plane static pod manifests; kubelet will start the components automatically
[root@master-3 ~]# kubeadm alpha phase controlplane all --config kubeadm-master.config
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
5.12 Mark it as a master node
[root@master-3 ~]# kubeadm alpha phase mark-master --config kubeadm-master.config
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[markmaster] Marking the node master-3 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master-3 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
5.13 Check how all the nodes are running
[root@master-3 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-1 Ready master 81m v1.11.1
master-2 Ready master 19m v1.11.1
master-3 Ready master 6m58s v1.11.1
Deploying the node
6.1 Join node-1 to the k8s cluster
[root@node-1 ~]# kubeadm join 192.168.200.16:6443 --token ub9tas.mien6pn08jm1nl1z --discovery-token-ca-cert-hash sha256:0606874e30ca04dc4e9d8c72e0ab1f3e9016e498742bf47500fb31fa702c2c3c --ignore-preflight-errors='SystemVerification'
6.2 Check node status
[root@master-1 kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-1 Ready master 1h v1.11.1
master-2 Ready master 1h v1.11.1
master-3 Ready master 32m v1.11.1
node-1 Ready <none> 3m v1.11.1
6.3 Create a test nginx Pod
[root@master-1 ~]# kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1
deployment.apps/nginx-deploy created
[root@master-1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deploy-5b595999-kqnsc 1/1 Running 0 1m
[root@master-1 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deploy-5b595999-kqnsc 1/1 Running 0 1m 10.42.0.1 node-1
[root@master-1 ~]# curl 10.42.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
The test shows that both the master and the node are working correctly
Testing high availability
7.1 Test master high availability by shutting down one of the master nodes
[root@master-2 ~]# ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:0f:6b:3a brd ff:ff:ff:ff:ff:ff
inet 192.168.200.201/24 brd 192.168.200.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.200.16/32 scope global ens33 #after master-1 is shut down, the VIP floats to master-2
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe0f:6b3a/64 scope link
[root@master-2 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-1 NotReady master 3h v1.11.1 #master-1 has stopped working
master-2 Ready master 3h v1.11.1
master-3 Ready master 39m v1.11.1
node-1 NotReady <none> 16m v1.11.1
[root@master-2 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deploy-5b595999-svgp7 1/1 Running 0 42s
[root@master-2 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deploy-5b595999-svgp7 1/1 Running 0 50s 10.36.0.1 node-1
[root@master-2 ~]# curl 10.36.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
The test shows that even without master-1 the whole cluster keeps working correctly
7.2 Test exposing the nginx port externally
[root@master-2 ~]# kubectl expose deployment nginx-deploy --name nginx --port=80 --target-port=80 --protocol=TCP
service/nginx exposed
[root@master-2 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h
nginx ClusterIP 10.106.146.49 <none> 80/TCP 10s
[root@master-2 ~]# curl 10.106.146.49
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
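Note that the Service created above is of type ClusterIP, so it is only reachable from inside the cluster. To make nginx truly reachable from outside, a NodePort Service could be created instead; a sketch (the name nginx-np is made up for this example):
[root@master-2 ~]# kubectl expose deployment nginx-deploy --name nginx-np --port=80 --target-port=80 --protocol=TCP --type=NodePort
[root@master-2 ~]# kubectl get svc nginx-np
# the PORT(S) column shows the mapping, e.g. 80:3xxxx/TCP; the service is then reachable at http://<any Node IP>:<that NodePort>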
At this point, the highly available k8s cluster deployed with kubeadm has been set up successfully
Note: an etcd cluster needs at least three masters; with only two there is no real high availability. Three members tolerate the loss of one, and five tolerate the loss of two; keep this in mind.
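To verify that the three-member etcd cluster really has quorum, the same etcdctl approach used earlier can be reused (a sketch; run from any master where kubectl is configured):
[root@master-1 ~]# kubectl exec -n kube-system etcd-master-1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://192.168.200.200:2379 cluster-health
# all three members should be reported healthy; with one master down, the remaining two (a majority) keep the cluster writable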
