
Installing and Configuring a Kubernetes Cluster

Note: the environment in this article is CentOS 7, with the master at 10.1.1.1 and node1 at 10.1.1.2.

Installing and configuring etcd

Install etcd with yum install etcd, or download it from the official site, and copy the etcd and etcdctl binaries to /usr/bin.
Create the systemd unit file /usr/lib/systemd/system/etcd.service:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

The unit file points at the configuration file /etc/etcd/etcd.conf; ExecStart is the startup command, and its variables are read from that configuration file. Configure it as follows:

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.1.1.1:2379"

Once configured, start the etcd service with systemctl start.
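The start-and-verify sequence might look like the following sketch; etcdctl here talks to the client URLs configured above:

```shell
systemctl daemon-reload        # pick up the new unit file
systemctl enable etcd          # start etcd at boot
systemctl start etcd
etcdctl cluster-health         # should report the member as healthy
etcdctl --endpoints http://10.1.1.1:2379 ls /   # verify remote access from other nodes
```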

Configuring and starting flanneld

flannel uses an overlay network model to connect the container networks across hosts; it must be installed on every node.
Install:

Install it with yum install flannel
Configure: edit /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

Configure the startup parameters: edit /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://10.1.1.1:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Before starting flanneld, add a network configuration record to etcd; flanneld uses it to allocate a virtual IP subnet to the docker daemon on each node.
etcdctl set /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
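A quick check that the key landed under the prefix flanneld will query (it must match FLANNEL_ETCD_PREFIX above):

```shell
etcdctl get /coreos.com/network/config     # the JSON document written above
etcdctl ls /coreos.com/network/subnets     # per-node subnet leases, once flanneld is running
```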

Because flannel takes over the docker0 bridge, stop the docker service first.
Start flanneld with systemctl.
Then set the docker0 bridge's IP address by running:

/usr/libexec/flannel/mk-docker-opts.sh -i
source /run/flannel/subnet.env
ifconfig docker0 ${FLANNEL_SUBNET}

Use ip addr to confirm that docker0's address falls within the flannel0 subnet.
Restart the docker service.
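For reference, the /run/flannel/subnet.env file sourced in the step above is written by flannel; its contents look roughly like this (the subnet value is illustrative, since flannel assigns a different one per host):

```shell
# Illustrative contents; on a real node this file is produced by flanneld
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.53.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF

# Sourcing the file is what makes ${FLANNEL_SUBNET} available to ifconfig
. /tmp/subnet.env
echo "docker0 should be assigned ${FLANNEL_SUBNET}"
```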

Installing Kubernetes

The yum install kubernetes command installs the full set of Kubernetes services: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
The master node runs the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services;
the node runs the kubelet and kube-proxy services.

Configuring the master node

kube-apiserver service configuration

Copy the installed kube-apiserver binary to /usr/bin, then edit the systemd unit file /usr/lib/systemd/system/kube-apiserver.service:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

The variables referenced above are defined in /etc/kubernetes/config and /etc/kubernetes/apiserver.
vim /etc/kubernetes/config:

# Setting this to false writes logs to files instead of stderr
KUBE_LOGTOSTDERR="--logtostderr=true"

# Log level; 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Whether this cluster may run privileged containers
KUBE_ALLOW_PRIV="--allow_privileged=false"

# controller-manager, scheduler, and proxy locate the apiserver via this variable
KUBE_MASTER="--master=http://127.0.0.1:8080"

vim /etc/kubernetes/apiserver:

KUBE_API_ADDRESS="--address=0.0.0.0"

# Port the apiserver listens on
# KUBE_API_PORT="--port=8080"

# kubelet port
# KUBELET_PORT="--kubelet_port=10250"

# etcd endpoints; separate multiple servers with commas
KUBE_ETCD_SERVERS="--etcd_servers=http://10.1.1.1:2379"

# Virtual IP address range for cluster Services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Admission control plugins, applied in order
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

kube-controller-manager service configuration

Configure kube-controller-manager.service:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=kube
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

vim /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --pod-eviction-timeout=10s"

kube-scheduler configuration

Configure kube-scheduler.service:

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

vim /etc/kubernetes/scheduler

KUBE_SCHEDULER_ARGS=""
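With all three master components configured, the startup order matters: etcd must be up first, then the apiserver, then the components that talk to it. A sketch:

```shell
# On the master (10.1.1.1); the loop preserves the dependency order
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl enable "$svc"
    systemctl start  "$svc"
done
systemctl status kube-apiserver   # confirm the apiserver came up before moving on
```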

Configuring the node

Installing and configuring kubelet

Install: yum install kubernetes has already installed everything needed.
Configure the service file: edit /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Configure the kubelet configuration file: edit /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=10.1.1.2"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://10.1.1.1:8080"

# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS="--cluster_dns=10.254.159.10 --cluster_domain=cluster.local"

kube-proxy

Configure the service file: edit /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
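Once kubelet and kube-proxy are configured, start them on the node and verify from the master that the node registered. A sketch, assuming the addresses used in this article:

```shell
# On node1 (10.1.1.2); docker must already be running for kubelet
systemctl enable kubelet kube-proxy
systemctl start  kubelet kube-proxy

# On the master: the node should appear and eventually report Ready
kubectl -s http://10.1.1.1:8080 get nodes
```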

Starting the services

All of the services above are managed with systemctl, in the following way:
1. systemctl daemon-reload
2. systemctl enable <service>: add the service to the boot startup list
3. systemctl start <service>: start the service
4. systemctl status <service>: check the service's status