
Kubernetes in Practice (1): K8s High-Availability Installation

1. Deployment Architecture

  [architecture diagram]

  Detailed architecture:

  [detailed architecture diagram]

2. Basic Configuration

Hostname            IP address           Role               Components
k8s-master01 ~ 03   192.168.20.20 ~ 22   master node * 3    keepalived, nginx, etcd, kubelet, kube-apiserver
k8s-master-lb       192.168.20.10        keepalived VIP     -
k8s-node01 ~ 08     192.168.20.30 ~ 37   worker node * 8    kubelet

  Component notes:

  kube-apiserver: the cluster's core. It is the cluster's API endpoint and the hub through which all components communicate, and it enforces cluster security.

  etcd: the cluster's data store, holding the cluster's configuration and state. It is critical: if its data is lost, the cluster cannot be recovered, so a high-availability deployment starts by making etcd itself a highly available cluster.

  kube-scheduler: the scheduling center for the cluster's Pods. In a default kubeadm installation, --leader-elect is already set to true, ensuring that only one kube-scheduler in the master cluster is active at a time.

  kube-controller-manager: the cluster state manager. When the cluster's state diverges from the desired state, kcm works to bring it back; for example, when a pod dies, kcm creates a new one to restore the replica count its ReplicaSet expects. In a default kubeadm installation, --leader-elect is already set to true, ensuring that only one kube-controller-manager in the master cluster is active at a time.

  kubelet: the Kubernetes node agent, responsible for talking to the Docker engine on its node.

  kube-proxy: one instance per node; it forwards traffic from service VIPs to endpoint pods, currently mainly by programming iptables rules, which you can inspect as shown below.
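  A minimal way to look at those rules on a node once the cluster is up, assuming kube-proxy's default iptables mode (KUBE-SERVICES is the standard entry chain kube-proxy creates in the nat table):

# Show the entry chain kube-proxy maintains for Service VIP traffic.
iptables -t nat -L KUBE-SERVICES -n | head

# Count all kube-proxy-managed rules in the nat table.
iptables -t nat -S | grep -c KUBE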

  keepalived provides a virtual IP address that points at k8s-master01, k8s-master02, and k8s-master03.

  nginx load-balances across the apiservers of k8s-master01, k8s-master02, and k8s-master03.

  External kubectl clients and the nodes can then reach the master cluster's apiserver through the keepalived virtual IP (192.168.20.10) and the nginx port (16443).
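  For reference, the nginx-lb configuration generated later by create-config.sh amounts to a TCP stream proxy in front of the three apiservers. A minimal sketch of such an nginx.conf (illustrative only; the generated file may differ in detail):

stream {
    upstream apiserver {
        # The three apiservers listen on 6443 on each master.
        server 192.168.20.20:6443 max_fails=3 fail_timeout=30s;
        server 192.168.20.21:6443 max_fails=3 fail_timeout=30s;
        server 192.168.20.22:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        # 16443 is the port kubectl and the nodes use via the VIP.
        listen 16443;
        proxy_pass apiserver;
    }
}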

  Run the following on all k8s nodes.

  Synchronize the time zone and time

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time.windows.com

  Configure limits

ulimit -SHn 65535
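  Note that ulimit -SHn only raises the limit for the current shell. To make it survive new logins and reboots, you would typically also persist it; a sketch, assuming the usual PAM handling of /etc/security/limits.conf:

cat >> /etc/security/limits.conf <<EOF
# Raise the open-file limit for all users; etcd, docker and kubelet open many files.
* soft nofile 65535
* hard nofile 65535
EOF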

  Configure the hosts file

[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# k8s
192.168.20.20 k8s-master01
192.168.20.21 k8s-master02
192.168.20.22 k8s-master03
192.168.20.10 k8s-master-lb
192.168.20.30 k8s-node01
192.168.20.31 k8s-node02

  Configure the yum repositories

cd /etc/yum.repos.d
mkdir bak
mv *.repo bak/
# vim CentOS-Base.repo

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the 
# remarked out baseurl= line instead.
#
#
 
[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#released updates 
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
# vim epel-7.repo

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
 
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.aliyun.com/epel/7/$basearch/debug
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
 
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.aliyun.com/epel/7/SRPMS
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
# vim docker-ce.repo

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
# vim kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

  Load the ipvs modules on all nodes

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
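  modprobe alone does not survive a reboot. On CentOS 7 you can persist the module list via systemd-modules-load; a sketch (the file name ipvs.conf is arbitrary):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF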

  Update the system

yum update -y

  Install docker-ce and kubernetes

yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

yum install -y kubelet-1.11.1-0.x86_64 kubeadm-1.11.1-0.x86_64 kubectl-1.11.1-0.x86_64 docker-ce-17.03.2.ce-1.el7.centos

  Disable selinux and firewalld on all k8s nodes

$ vi /etc/selinux/config
SELINUX=permissive

$ setenforce 0
$ systemctl disable firewalld
$ systemctl stop firewalld

  Set iptables-related kernel parameters on all k8s nodes

$ cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

$ sysctl --system
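  To confirm the parameters took effect (the net.bridge keys only exist once br_netfilter is loaded):

$ modprobe br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward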

  Disable swap on all k8s nodes

$ swapoff -a

# Disable the swap entry in fstab
$ vi /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

# Confirm that swap is disabled
$ cat /proc/swaps
Filename                Type        Size    Used    Priority
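  Editing /etc/fstab by hand works; an equivalent non-interactive command you could run on every node instead is sketched below (it comments out any uncommented entry mounted as swap):

$ sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab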

  Reboot all k8s nodes

$ reboot

3. Firewall Configuration (optional)

   If you need firewalld enabled instead, configure it as follows.

  Enable the firewall on all nodes

$ systemctl enable firewalld
$ systemctl restart firewalld
$ systemctl status firewalld

  

  Firewall rules for master nodes

$ firewall-cmd --zone=public --add-port=16443/tcp --permanent
$ firewall-cmd --zone=public --add-port=6443/tcp --permanent
$ firewall-cmd --zone=public --add-port=4001/tcp --permanent
$ firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
$ firewall-cmd --zone=public --add-port=10250/tcp --permanent
$ firewall-cmd --zone=public --add-port=10251/tcp --permanent
$ firewall-cmd --zone=public --add-port=10252/tcp --permanent
$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
$ firewall-cmd --reload

$ firewall-cmd --list-all --zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens2f1 ens1f0 nm-bond
  sources:
  services: ssh dhcpv6-client
  ports: 4001/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 30000-32767/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

  

  Firewall rules for node (worker) nodes

$ firewall-cmd --zone=public --add-port=10250/tcp --permanent
$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent

$ firewall-cmd --reload

$ firewall-cmd --list-all --zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens2f1 ens1f0 nm-bond
  sources:
  services: ssh dhcpv6-client
  ports: 10250/tcp 30000-32767/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

  Allow kube-proxy forwarding on all k8s nodes

$ firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment "kube-proxy redirects"
$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment "docker subnet"
$ firewall-cmd --reload

$ firewall-cmd --direct --get-all-rules
ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment 'kube-proxy redirects'
ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment 'docker subnet'

# Restart the firewall
$ systemctl restart firewalld

  firewalld inserts a REJECT rule that blocks kube-proxy's NodePorts, and the rule must be deleted again after every firewalld restart; set up the following cron job on all nodes:

$ crontab -e 
*/5 * * * * /usr/sbin/iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited

4. Service Startup

  Start docker-ce and kubelet on all kubernetes nodes.

  Start docker-ce

systemctl enable docker && systemctl start docker

  Check the status and version

[root@k8s-master01 ~]# docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64
 Experimental: false

  Install and start k8s on all kubernetes nodes

  Modify the kubelet configuration

# Use a domestic mirror for kubelet's pause image and match
# kubelet's cgroup driver to docker's.
# Read docker's cgroup driver:
DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
echo $DOCKER_CGROUPS
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF

# Start kubelet
systemctl daemon-reload

systemctl enable kubelet && systemctl start kubelet
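  At this point kubelet will not stay up: until kubeadm init (or join) writes /var/lib/kubelet/config.yaml, it exits and systemd keeps restarting it. That is expected; you can watch it, for example:

systemctl status kubelet
# Errors about the missing config.yaml are normal before kubeadm init runs.
journalctl -u kubelet -f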

  Install and start keepalived and docker-compose on all master nodes

$ yum install -y keepalived
$ systemctl enable keepalived && systemctl restart keepalived
# Install docker-compose
$ yum install -y docker-compose

  Set up SSH trust between all master nodes

[root@k8s-master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:XZ1oFmFToS19l0jURW2xUyLSkysK/SjaZVgAG1/EDCs root@k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
|    o..=o ..B*=+B|
|     +.oo  o=X.+*|
|    E oo    B+==o|
|     .. o..+.. .o|
|       +S+..     |
|      o = .      |
|     o +         |
|    . .          |
|                 |
+----[SHA256]-----+
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-master03;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host 'k8s-master01 (192.168.20.20)' can't be established.
ECDSA key fingerprint is SHA256:bUoUE9+VGU3wBWGsjP/qvIGxXG9KQMmKK7fVeihVp2s.
ECDSA key fingerprint is MD5:39:2c:69:f9:24:49:b2:b8:27:8a:14:93:16:bb:3b:14.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master01's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master01'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host 'k8s-master02 (192.168.20.21)' can't be established.
ECDSA key fingerprint is SHA256:+4rgkToh5TyEM2NtWeyqpyKZ+l8fLhW5jmWhrfSDUDQ.
ECDSA key fingerprint is MD5:e9:35:b8:22:b1:c1:35:0b:93:9b:a6:c2:28:e0:28:e1.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master02's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master02'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host 'k8s-master03 (192.168.20.22)' can't be established.
ECDSA key fingerprint is SHA256:R8FRR8fvBTmFZSlKThEZ6+1br+aAkFgFPuNS0qq96aQ.
ECDSA key fingerprint is MD5:08:45:48:0c:7e:7c:00:2c:82:42:19:75:89:44:c6:1f.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master03's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master03'"
and check to make sure that only the key(s) you wanted were added.

  Basic configuration for the high-availability installation

# Run the following on master01.
# Clone the repository.
[root@k8s-master01 ~]# yum install git -y
[root@k8s-master01 ~]# git clone https://github.com/cookeem/kubeadm-ha
Cloning into 'kubeadm-ha'...
remote: Enumerating objects: 10, done.
remote: Counting objects: 100% (10/10), done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 731 (delta 3), reused 6 (delta 2), pack-reused 721
Receiving objects: 100% (731/731), 2.32 MiB | 655.00 KiB/s, done.
Resolving deltas: 100% (417/417), done.

# Edit the configuration values below.
[root@k8s-master01 kubeadm-ha]# pwd
/root/kubeadm-ha
[root@k8s-master01 kubeadm-ha]# cat create-config.sh 
#!/bin/bash

#######################################
# set variables below to create the config files, all files will create at ./config directory
#######################################

# master keepalived virtual ip address
export K8SHA_VIP=192.168.20.10

# master01 ip address
export K8SHA_IP1=192.168.20.20

# master02 ip address
export K8SHA_IP2=192.168.20.21

# master03 ip address
export K8SHA_IP3=192.168.20.22

# master keepalived virtual ip hostname
export K8SHA_VHOST=k8s-master-lb

# master01 hostname
export K8SHA_HOST1=k8s-master01

# master02 hostname
export K8SHA_HOST2=k8s-master02

# master03 hostname
export K8SHA_HOST3=k8s-master03

# master01 network interface name
export K8SHA_NETINF1=nm-bond

# master02 network interface name
export K8SHA_NETINF2=nm-bond

# master03 network interface name
export K8SHA_NETINF3=nm-bond

# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d

# calico reachable ip address
export K8SHA_CALICO_REACHABLE_IP=192.168.20.1

# kubernetes CIDR pod subnet, if CIDR pod subnet is "172.168.0.0/16" please set to "172.168.0.0"
export K8SHA_CIDR=172.168.0.0

# Generate the kubeadm config, keepalived config and nginx load-balancer config for each of the 3 masters, plus the calico config.
[root@k8s-master01 kubeadm-ha]# ./create-config.sh 
create kubeadm-config.yaml files success. config/k8s-master01/kubeadm-config.yaml
create kubeadm-config.yaml files success. config/k8s-master02/kubeadm-config.yaml
create kubeadm-config.yaml files success. config/k8s-master03/kubeadm-config.yaml
create keepalived files success. config/k8s-master01/keepalived/
create keepalived files success. config/k8s-master02/keepalived/
create keepalived files success. config/k8s-master03/keepalived/
create nginx-lb files success. config/k8s-master01/nginx-lb/
create nginx-lb files success. config/k8s-master02/nginx-lb/
create nginx-lb files success. config/k8s-master03/nginx-lb/
create calico.yaml file success. calico/calico.yaml

# Set hostname variables.
export HOST1=k8s-master01
export HOST2=k8s-master02
export HOST3=k8s-master03

# Copy each master's kubeadm config to its /root/ directory.
scp -r config/$HOST1/kubeadm-config.yaml $HOST1:/root/
scp -r config/$HOST2/kubeadm-config.yaml $HOST2:/root/
scp -r config/$HOST3/kubeadm-config.yaml $HOST3:/root/

# Copy the keepalived config to each master's /etc/keepalived/ directory.
scp -r config/$HOST1/keepalived/* $HOST1:/etc/keepalived/
scp -r config/$HOST2/keepalived/* $HOST2:/etc/keepalived/
scp -r config/$HOST3/keepalived/* $HOST3:/etc/keepalived/

# Copy the nginx load-balancer config to each master's /root/ directory.
scp -r config/$HOST1/nginx-lb $HOST1:/root/
scp -r config/$HOST2/nginx-lb $HOST2:/root/
scp -r config/$HOST3/nginx-lb $HOST3:/root/

  Run the following on all masters.

  Configure keepalived and nginx-lb

# Start nginx-lb
docker-compose --file=/root/nginx-lb/docker-compose.yaml up -d
docker-compose --file=/root/nginx-lb/docker-compose.yaml ps
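  Before relying on the VIP, it is worth confirming that the load balancer actually listens on 16443 on each master; a quick check (the curl will only get an answer once an apiserver is running):

ss -lntp | grep 16443
curl -k https://192.168.20.10:16443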

  Modify the keepalived configuration as follows:

# The apiserver health check is commented out for now (nothing answers on the apiserver yet).
[root@k8s-master01 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
#vrrp_script chk_apiserver {
#    script "/etc/keepalived/check_apiserver.sh"
#    interval 2
#    weight -5
#    fall 3  
#    rise 2
#}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    mcast_src_ip 192.168.20.20
    virtual_router_id 51
    priority 102
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 412f7dc3bfed32194d1600c483e10ad1d
    }
    virtual_ipaddress {
        192.168.20.10
    }
#    track_script {
#       chk_apiserver
#    }
}

  Restart keepalived
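  For example, to restart it and confirm the VIP landed on the MASTER node (ens160 is the interface configured above):

systemctl restart keepalived
ip addr show ens160 | grep 192.168.20.10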

  Modify kubeadm-config.yaml. Note: compared with the generated file, this adds imageRepository, controllerManagerExtraArgs, and the api section; advertiseAddress is each master's own IP.

[root@k8s-master01 ~]# cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- k8s-master01
- k8s-master02
- k8s-master03
- k8s-master-lb
- 192.168.20.20
- 192.168.20.21
- 192.168.20.22
- 192.168.20.10

api:
  advertiseAddress: 192.168.20.20
  controlPlaneEndpoint: 192.168.20.10:16443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://192.168.20.20:2379"
      advertise-client-urls: "https://192.168.20.20:2379"
      listen-peer-urls: "https://192.168.20.20:2380"
      initial-advertise-peer-urls: "https://192.168.20.20:2380"
      initial-cluster: "k8s-master01=https://192.168.20.20:2380"
    serverCertSANs:
      - k8s-master01
      - 192.168.20.20
    peerCertSANs:
      - k8s-master01
      - 192.168.20.20

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s


networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "172.168.0.0/16"
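  The files for k8s-master02 and k8s-master03 have the same shape; only the host-specific fields change. Purely as an illustration (not the literal generated file), k8s-master02's config would differ roughly as follows, with its etcd member joining the existing cluster:

api:
  advertiseAddress: 192.168.20.21
  controlPlaneEndpoint: 192.168.20.10:16443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://192.168.20.21:2379"
      advertise-client-urls: "https://192.168.20.21:2379"
      listen-peer-urls: "https://192.168.20.21:2380"
      initial-advertise-peer-urls: "https://192.168.20.21:2380"
      initial-cluster: "k8s-master01=https://192.168.20.20:2380,k8s-master02=https://192.168.20.21:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - k8s-master02
      - 192.168.20.21
    peerCertSANs:
      - k8s-master02
      - 192.168.20.21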

  Run the following on master01.

  Pull the images in advance

kubeadm config images pull --config kubeadm-config.yaml
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1

# Retag the mirror images with the k8s.gcr.io names kubeadm expects.
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1

# Initialize the cluster
kubeadm init --config /root/kubeadm-config.yaml

  Record the join token from the end of the initialization output (the last line).

[root@k8s-master01 ~]# kubeadm init --config kubeadm-config.yaml 
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
I1031 13:44:25.751524    8345 kernel_validator.go:81] Validating kernel version
I1031 13:44:25.751634    8345 kernel_validator.go:96] Validating kernel config
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-master01 k8s-master02 k8s-master03 k8s-master-lb] and IPs [10.96.0.1 192.168.20.20 192.168.20.10 192.168.20.20 192.168.20.21 192.168.20.22 192.168.20.10]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master01 localhost k8s-master01] and IPs [127.0.0.1 ::1 192.168.20.20]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost k8s-master01] and IPs [192.168.20.20 127.0.0.1 ::1 192.168.20.20]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in 
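  If the join command scrolls away or the token expires (tokens are valid for 24 hours by default), you can regenerate it later on master01, for example:

kubeadm token create --print-join-command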
            
           
