
K8s - Deploying High-Availability Components - 06-1

Content reproduced from: https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/06-1.ha.md

06-1. Deploying High-Availability Components

This document explains how to use keepalived and haproxy to make kube-apiserver highly available:

  • keepalived provides the VIP through which the kube-apiserver service is exposed;
  • haproxy listens on the VIP and proxies to all kube-apiserver instances behind it, providing health checking and load balancing;

The nodes running keepalived and haproxy are called LB nodes. Because keepalived runs in a one-master, multi-backup mode, at least two LB nodes are required.

This document reuses the three master machines. The port haproxy listens on (8443) must differ from kube-apiserver's port (6443) to avoid a conflict.

While running, keepalived periodically checks the state of the local haproxy process. If it detects that haproxy has failed, it triggers a new master election, and the VIP floats to the newly elected master node, keeping the VIP highly available.

All components (e.g. kubectl, apiserver, controller-manager, scheduler) access the kube-apiserver service through the VIP and haproxy's listening port 8443.
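The file /opt/k8s/bin/environment.sh sourced by the commands below is not reproduced in this document. As a sketch only, the variables this section relies on might look like the following; the node IPs match this document's plan, but MASTER_VIP is an example value, not taken from the source:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of /opt/k8s/bin/environment.sh -- variable names come
# from the commands in this document; MASTER_VIP is an example value only.
export MASTER_IP=(172.27.129.101 172.27.129.102 172.27.129.103)  # kube-apiserver nodes
export MASTER_VIP=172.27.129.253   # VIP managed by keepalived (example value)
export VIP_IF=eth0                 # interface the VIP is bound to
# Clients reach kube-apiserver through haproxy on 8443, not directly on 6443:
export KUBE_APISERVER="https://${MASTER_VIP}:8443"
echo "${KUBE_APISERVER}"
```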

Install the packages

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "yum install -y keepalived haproxy"
  done

Create and distribute the haproxy configuration file

cat > haproxy.cfg <<EOF
global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log     global
    timeout connect 5000
    timeout client  10m
    timeout server  10m

listen  admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance source
    server 172.27.129.101 172.27.129.101:6443 check inter 2000 fall 2 rise 2 weight 1
    server 172.27.129.102 172.27.129.102:6443 check inter 2000 fall 2 rise 2 weight 1
    server 172.27.129.103 172.27.129.103:6443 check inter 2000 fall 2 rise 2 weight 1
EOF
  • haproxy exposes its status page on port 10080;
  • haproxy listens on port 8443 on all interfaces; this port must match the one specified in the ${KUBE_APISERVER} environment variable;
  • the server lines list the IP and port of every kube-apiserver instance;
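The three server lines in the configuration above are hardcoded. As a sketch (not part of the source document), they could instead be generated from the same MASTER_IP array used throughout this section, so the config stays in sync with the node list:

```shell
#!/usr/bin/env bash
# Generate the haproxy backend "server" lines from the MASTER_IP array
# (the IPs below match this document's plan).
MASTER_IP=(172.27.129.101 172.27.129.102 172.27.129.103)
servers=""
for ip in "${MASTER_IP[@]}"; do
  servers+="    server ${ip} ${ip}:6443 check inter 2000 fall 2 rise 2 weight 1"$'\n'
done
printf '%s' "${servers}"
```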

Distribute haproxy.cfg to all master nodes:

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    scp haproxy.cfg root@${master_ip}:/etc/haproxy
  done

Start the haproxy service

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl restart haproxy && systemctl enable haproxy"
  done

Check the haproxy service status

source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status haproxy|grep Active"
  done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

journalctl -u haproxy

Check that haproxy is listening on port 8443:

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "netstat -lnpt|grep haproxy"
  done

The output should look similar to:

tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      120583/haproxy

Create and distribute the keepalived configuration files

keepalived runs in a one-master (master), multi-backup (backup) mode, so there are two kinds of configuration files. There is a single master configuration file; the number of backup configuration files depends on the number of nodes. For this document, the plan is:

  • master: 172.27.129.101
  • backup: 172.27.129.102, 172.27.129.103

The master configuration file:

source /opt/k8s/bin/environment.sh
cat  > keepalived-master.conf <<EOF
global_defs {
    router_id lb-master-105
}

vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -30
}

vrrp_instance VI-kube-master {
    state MASTER
    priority 120
    dont_track_primary
    interface ${VIP_IF}
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        ${MASTER_VIP}
    }
}
EOF
  • the interface holding the VIP (interface ${VIP_IF}) is eth0;
  • the command killall -0 haproxy checks whether the haproxy process on this node is healthy; if it is not, the node's priority is reduced by the weight (-30), triggering a new master election;
  • router_id and virtual_router_id identify the keepalived instances belonging to this HA group; if there are multiple keepalived HA groups, these values must differ between groups;
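killall -0 works because signal 0 delivers nothing: the kernel only checks that a matching process exists and can be signaled, and it is this exit status that keepalived's vrrp_script evaluates. A minimal demonstration (not part of the original document) using kill -0 against a throwaway background process:

```shell
#!/usr/bin/env bash
# Signal 0 performs only an existence/permission check; no signal is delivered.
sleep 30 &                       # throwaway process standing in for haproxy
pid=$!
kill -0 "${pid}" && alive=yes || alive=no     # exit 0 while the process runs
kill "${pid}"                    # terminate it
wait "${pid}" 2>/dev/null        # reap it so the pid is gone
kill -0 "${pid}" 2>/dev/null && gone=no || gone=yes   # now the check fails
echo "alive=${alive} gone=${gone}"
```

killall -0 haproxy behaves the same way, except that it matches by process name rather than pid.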

The backup configuration file:

source /opt/k8s/bin/environment.sh
cat  > keepalived-backup.conf <<EOF
global_defs {
    router_id lb-backup-105
}

vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -30
}

vrrp_instance VI-kube-master {
    state BACKUP
    priority 110
    dont_track_primary
    interface ${VIP_IF}
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        ${MASTER_VIP}
    }
}
EOF
  • the interface holding the VIP (interface ${VIP_IF}) is eth0;
  • the command killall -0 haproxy checks whether the haproxy process on this node is healthy; if it is not, the node's priority is reduced by the weight (-30), triggering a new master election;
  • router_id and virtual_router_id identify the keepalived instances belonging to this HA group; if there are multiple keepalived HA groups, these values must differ between groups;
  • the priority value must be lower than the master's;
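The interaction between priority and the script weight is worth making explicit: when check-haproxy fails on the master, keepalived subtracts 30 from its priority, and the result must fall below every backup's priority for the VIP to move. A quick check of the numbers used in these configs:

```shell
#!/usr/bin/env bash
# Failover arithmetic for the priorities used in this document's configs.
MASTER_PRIORITY=120
BACKUP_PRIORITY=110
WEIGHT=-30            # applied when the check-haproxy script fails
effective=$((MASTER_PRIORITY + WEIGHT))   # 120 - 30 = 90
if [ "${effective}" -lt "${BACKUP_PRIORITY}" ]; then
  echo "failed master drops to ${effective} < ${BACKUP_PRIORITY}: VIP fails over"
fi
```

Had the weight been only -5, the degraded master (115) would still outrank the backups (110) and the VIP would never move.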

Distribute the keepalived configuration files

Distribute the master configuration file:

scp keepalived-master.conf root@172.27.129.101:/etc/keepalived/keepalived.conf

Distribute the backup configuration file:

scp keepalived-backup.conf root@172.27.129.102:/etc/keepalived/keepalived.conf
scp keepalived-backup.conf root@172.27.129.103:/etc/keepalived/keepalived.conf

Start the keepalived service

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl restart keepalived && systemctl enable keepalived"
  done

Check the keepalived service

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status keepalived|grep Active"
  done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

journalctl -u keepalived

Find the node that currently holds the VIP, and make sure the VIP can be pinged:

Note: if the cluster is built on a public cloud, you can use the cloud provider's SLB (load balancer) service for high availability instead; haproxy + keepalived may not work there, because the cloud's underlying network blocks the traffic it relies on.

Note: if you use cloud servers, the virtual IP must be attached to the instances explicitly.

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IP[@]}
  do
    echo ">>> ${master_ip}"
    ssh ${master_ip} "/usr/sbin/ip addr show ${VIP_IF}"
    ssh ${master_ip} "ping -c 1 ${MASTER_VIP}"
  done

View the haproxy status page

Open ${MASTER_VIP}:10080/status in a browser to view the haproxy status page:
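The same page can also be fetched from the command line, using the credentials set by stats auth in the haproxy config above; this assumes the cluster is up and the VIP is reachable:

```shell
# Fetch the haproxy status page over the VIP (requires a running cluster).
source /opt/k8s/bin/environment.sh
curl -s -u admin:123456 "http://${MASTER_VIP}:10080/status" | head -n 5
```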

(screenshot: haproxy status page)