K8s - Deploying the etcd Cluster (04)

Content reposted from: https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/04.%E9%83%A8%E7%BD%B2etcd%E9%9B%86%E7%BE%A4.md

04. Deploying the etcd Cluster

etcd is a distributed key-value store based on Raft, developed by CoreOS. It is commonly used for service discovery, shared configuration, and concurrency control (such as leader election and distributed locks). Kubernetes uses etcd to store all of its runtime data.

This document describes the steps to deploy a three-node, highly available etcd cluster:

  • Download and distribute the etcd binaries;
  • Create x509 certificates for each etcd cluster node, used to encrypt traffic between clients (such as etcdctl) and the etcd cluster, and between etcd cluster members;
  • Create the etcd systemd unit files and configure the service parameters;
  • Check the cluster status;

The names and IPs of the etcd cluster nodes are:

  • k8s-etcd-0001: 172.27.129.104
  • k8s-etcd-0002: 172.27.129.105
  • k8s-etcd-0003: 172.27.129.106
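
The scripts below all begin with source /opt/k8s/bin/environment.sh; that file is created in an earlier part of the series and is not reproduced here. A minimal sketch of the variables these scripts rely on, filled in from the node list above (the contents of the real environment.sh may differ), would be:

#!/usr/bin/env bash
# Sketch of the variables the following scripts expect environment.sh to provide.
# ETCD_NAMES / ETCD_IP are parallel arrays: node names and their IPs (from the list above).
ETCD_NAMES=(k8s-etcd-0001 k8s-etcd-0002 k8s-etcd-0003)
ETCD_IP=(172.27.129.104 172.27.129.105 172.27.129.106)
# ETCD_NODES is the value passed to --initial-cluster: "name=https://ip:2380" pairs, comma separated.
ETCD_NODES="k8s-etcd-0001=https://172.27.129.104:2380,k8s-etcd-0002=https://172.27.129.105:2380,k8s-etcd-0003=https://172.27.129.106:2380"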

Download and distribute the etcd binaries

Download the latest release package from the https://github.com/coreos/etcd/releases page:

wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz
tar -xvf etcd-v3.3.7-linux-amd64.tar.gz

Distribute the binaries to the etcd cluster nodes:

source /opt/k8s/bin/environment.sh
for etcd_ip in ${ETCD_IP[@]}
  do
    echo ">>> ${netcd_ip}"
    scp etcd-v3.3.7-linux-amd64/etcd* [email protected]
${etcd_ip}:/opt/k8s/bin ssh [email protected]${etcd_ip} "chmod +x /opt/k8s/bin/*" done
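
To confirm the copy succeeded, a quick check is to print the version of the installed binary on each node (a sketch, assuming the same k8s account used above):

source /opt/k8s/bin/environment.sh
for etcd_ip in ${ETCD_IP[@]}
  do
    echo ">>> ${etcd_ip}"
    # Expect "etcd Version: 3.3.7" from every node.
    ssh k8s@${etcd_ip} "/opt/k8s/bin/etcd --version"
  done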

Create the etcd certificate and private key

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.27.129.104",
    "172.27.129.105",
    "172.27.129.106"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
  • The hosts field specifies the list of etcd node IPs or domain names authorized to use this certificate; here the IPs of all three etcd cluster nodes are listed;

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
    -ca-key=/etc/kubernetes/cert/ca-key.pem \
    -config=/etc/kubernetes/cert/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*
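
Optionally, before distributing the files, confirm that the IPs from the hosts field above ended up in the certificate's Subject Alternative Name list (a quick check with openssl; cfssl-certinfo -cert etcd.pem shows similar information):

openssl x509 -noout -text -in etcd.pem | grep -A1 "Subject Alternative Name"
# Expected to list 127.0.0.1 and the three etcd node IPs.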

Distribute the generated certificate and private key to each etcd node:

source /opt/k8s/bin/environment.sh
for etcd_ip in ${ETCD_IP[@]}
  do
    echo ">>> ${etcd_ip}"
    ssh root@${etcd_ip} "mkdir -p /etc/etcd/cert && chown -R k8s /etc/etcd/cert"
    scp etcd*.pem k8s@${etcd_ip}:/etc/etcd/cert/
  done
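
If TLS problems show up later, a simple sanity check is to confirm that the files landed on each node and are readable by the k8s account that etcd runs as (a sketch):

source /opt/k8s/bin/environment.sh
for etcd_ip in ${ETCD_IP[@]}
  do
    echo ">>> ${etcd_ip}"
    # etcd.pem and etcd-key.pem should be present under /etc/etcd/cert.
    ssh k8s@${etcd_ip} "ls -l /etc/etcd/cert/"
  done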

Create the etcd systemd unit template file

source /opt/k8s/bin/environment.sh
cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
User=k8s
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=/var/lib/etcd \\
  --name=##ETCD_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##ETCD_IP##:2380 \\
  --initial-advertise-peer-urls=https://##ETCD_IP##:2380 \\
  --listen-client-urls=https://##ETCD_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##ETCD_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • User: run the service as the k8s account;
  • WorkingDirectory, --data-dir: set the working directory and data directory to /var/lib/etcd; this directory must be created before the service is started;
  • --name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
  • --cert-file, --key-file: certificate and private key used by the etcd server when communicating with clients;
  • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify client certificates;
  • --peer-cert-file, --peer-key-file: certificate and private key used by etcd for peer communication;
  • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify peer certificates;

Create and distribute the etcd systemd unit files for each node

Replace the variables in the template file to create a systemd unit file for each node:

source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##ETCD_NAME##/${ETCD_NAMES[i]}/" -e "s/##ETCD_IP##/${ETCD_IP[i]}/" etcd.service.template > etcd-${ETCD_IPS[i]}.service 
  done
ls *.service
  • ETCD_NAMES and ETCD_IP are bash arrays of the same length, holding the node names and their corresponding IPs; a quick check of the result follows below.
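
As a sanity check on the substitution, the placeholder lines in the unit file generated for the first node should now carry that node's name and IP. For example (output sketched from the node list above):

grep -E "name=|peer-urls|client-urls" etcd-172.27.129.104.service
# Expected (sketch):
#   --name=k8s-etcd-0001 \
#   --listen-peer-urls=https://172.27.129.104:2380 \
#   --initial-advertise-peer-urls=https://172.27.129.104:2380 \
#   --listen-client-urls=https://172.27.129.104:2379,http://127.0.0.1:2379 \
#   --advertise-client-urls=https://172.27.129.104:2379 \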

Distribute the generated systemd unit files:

source /opt/k8s/bin/environment.sh
for etcd_ip in ${ETCD_IP[@]}
  do
    echo ">>> ${etcd_ip}"
    ssh root@${etcd_ip} "mkdir -p /var/lib/etcd && chown -R k8s /var/lib/etcd"
    scp etcd-${etcd_ip}.service root@${etcd_ip}:/etc/systemd/system/etcd.service
  done
  • The etcd data and working directory must be created before the service is started;
  • The file is renamed to etcd.service when copied;

Start the etcd service

source /opt/k8s/bin/environment.sh
for etcd_ip in ${ETCD_IP[@]}
  do
    echo ">>> ${etcd_ip}"
    ssh root@${etcd_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd"
  done
  • When the etcd process starts for the first time, it waits for the other nodes' etcd processes to join the cluster, so systemctl start etcd may appear to hang for a while; this is normal.

Check the startup result

source /opt/k8s/bin/environment.sh
for etcd_ip in ${ETCD_IP[@]}
  do
    echo ">>> ${etcd_ip}"
    ssh k8s@${etcd_ip} "systemctl status etcd|grep Active"
  done

Make sure the status is active (running); otherwise, check the logs to find out why:

$ journalctl -u etcd

Verify the service status

After the etcd cluster has been deployed, run the following command on any etcd node:

source /opt/k8s/bin/environment.sh
for etcd_ip in ${ETCD_IP[@]}
  do
    echo ">>> ${etcd_ip}"
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=https://${etcd_ip}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
  done

Expected output:

https://172.27.129.104:2379 is healthy: successfully committed proposal: took = 2.192932ms
https://172.27.129.105:2379 is healthy: successfully committed proposal: took = 3.546896ms
https://172.27.129.106:2379 is healthy: successfully committed proposal: took = 3.013667ms

When all endpoints report healthy, the cluster is working properly.
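
Besides endpoint health, etcdctl can also report each member's version, DB size, and which node is currently the Raft leader. A sketch using the same certificates as above (the -w table output format is available in etcdctl v3.3):

source /opt/k8s/bin/environment.sh
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=https://${ETCD_IP[0]}:2379,https://${ETCD_IP[1]}:2379,https://${ETCD_IP[2]}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem \
    endpoint status -w table
# The IS LEADER column identifies the current leader; exactly one member should show true.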