
Installing Kubernetes on CentOS 7 from binary files (offline installation)

Chapter 1: Install and configure Docker (deployed on both master and node)
1.1 Download the latest Docker binaries

https://download.docker.com/linux/static/stable/x86_64/docker-18.03.1-ce.tgz
tar -xvf docker-18.03.1-ce.tgz
cp docker/docker* /usr/bin

1.2 Create the Docker systemd unit file

cat /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Environment="PATH=/root/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

1.3 Start dockerd

systemctl daemon-reload
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
systemctl enable docker
systemctl start docker

1.4 Check the Docker service by running docker version:

Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.2
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:12:25 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm


Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:23:03 2018
  OS/Arch:      linux/amd64
  Experimental: false

Chapter 2: K8s deployment preparation
2.1 Download the Kubernetes (K8s) binaries

https://github.com/kubernetes/kubernetes/releases

2.2 From the URL above, pick a release; this article uses v1.11.2 as an example. Download the binaries from the CHANGELOG page.
2.3 Component selection: choose kubernetes-server-linux-amd64.tar.gz under Server Binaries.
This archive already contains all the components K8s needs; there is no need to download the Client packages separately.
Chapter 3: Installation plan
3.1 Download and unpack K8s, copy each component into /usr/bin, create its systemd service file, and then start the component.
3.1.1 This example uses two machines; the components installed on each are as follows:
Node IP         Role                    Components
192.168.1.180   Master (control node)   etcd, kube-apiserver, kube-controller-manager, kube-scheduler
192.168.1.181   Node1 (worker node)     docker, kubelet, kube-proxy

Chapter 4: Master node deployment
4.1 The Kubernetes master node runs the following components:

kube-apiserver
kube-scheduler
kube-controller-manager
etcd
flannel
docker

4.2 Deployment notes

Note: when deploying from binaries on CentOS 7, every component requires the same 4 steps:
1) Copy the component's binary into /usr/bin
2) Create a systemd service unit file for it
3) Create the parameter file referenced by the unit
4) Enable the service at boot
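
The four steps can be sketched as a script. This is a minimal illustration using a made-up component name (kube-foo) and a staging directory, not a real installer; on a live host the files would go directly under /usr/bin, /usr/lib/systemd/system, and /etc/kubernetes, and the last step would use systemctl:

```shell
# Sketch of the 4 generic install steps for one hypothetical component.
# ROOT is a staging prefix so the sketch can be tried without touching the
# live system; on a real host ROOT would be empty.
ROOT="${ROOT:-/tmp/k8s-staging}"
component=kube-foo

# 1) Copy the binary to /usr/bin
mkdir -p "$ROOT/usr/bin"
touch "$ROOT/usr/bin/$component"   # stand-in for: cp server/bin/$component /usr/bin
chmod +x "$ROOT/usr/bin/$component"

# 2) Create the systemd service unit file
mkdir -p "$ROOT/usr/lib/systemd/system"
cat > "$ROOT/usr/lib/systemd/system/$component.service" <<EOF
[Unit]
Description=Kubernetes $component

[Service]
EnvironmentFile=-/etc/kubernetes/$component
ExecStart=/usr/bin/$component \$KUBE_FOO_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# 3) Create the parameter file referenced by the unit
mkdir -p "$ROOT/etc/kubernetes"
echo 'KUBE_FOO_ARGS=""' > "$ROOT/etc/kubernetes/$component"

# 4) Enable at boot (on a real host):
#    systemctl daemon-reload && systemctl enable --now $component.service
```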

4.3 Install the etcd database

Download: K8s needs etcd as its datastore. This article uses v3.2.9; download it as follows:
wget https://github.com/coreos/etcd/releases/download/v3.2.9/etcd-v3.2.9-linux-amd64.tar.gz
https://github.com/coreos/etcd/releases/
tar xf etcd-v3.2.9-linux-amd64.tar.gz
cd etcd-v3.2.9-linux-amd64/
cp etcd etcdctl /usr/bin/

4.4 Set up the etcd.service unit file
4.4.1 Create etcd.service in /etc/systemd/system/ with the following content:

mkdir -p /var/lib/etcd    # etcd working directory
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
# Path to the etcd configuration file
EnvironmentFile=-/etc/etcd/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note: WorkingDirectory is the etcd data directory; it must be created before etcd is started (the mkdir above).

4.4.2 Create the configuration file /etc/etcd/etcd.conf

[root@k8s-master ~]# cat /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"

4.4.3 Enable etcd at boot

#systemctl daemon-reload
#systemctl enable etcd.service
#systemctl start etcd.service

4.4.4 Verify that etcd installed successfully

# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy

4.5 The kube-apiserver service
4.5.1 Download and copy the binaries into /usr/bin

wget https://dl.k8s.io/v1.11.1/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf  kubernetes-src.tar.gz
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/bin/

4.5.2 Create and edit the /usr/lib/systemd/system/kube-apiserver.service file

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4.5.3 Create the parameter file /etc/kubernetes/apiserver

[root@k8s-master ~]# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.180:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

4.5.4 Create the parameter file /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.0.0.180:8080"
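
systemd assembles the final kube-apiserver command line by substituting the variables from the two EnvironmentFile entries into ExecStart. The same expansion can be reproduced in a shell by sourcing the files; the sketch below writes copies of the two files into a scratch directory (the CFG path is illustrative) so it is side-effect free:

```shell
# Sketch: how the kube-apiserver command line is assembled from the two
# environment files. CFG points at scratch copies of the files.
CFG="${CFG:-/tmp/k8s-cfg}"
mkdir -p "$CFG"

cat > "$CFG/config" <<'EOF'
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://10.0.0.180:8080"
EOF

cat > "$CFG/apiserver" <<'EOF'
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.180:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
EOF

# Source both files (what systemd's EnvironmentFile does), then build the
# command exactly as the ExecStart line in kube-apiserver.service does.
. "$CFG/config"
. "$CFG/apiserver"

CMD="kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS $KUBE_API_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL $KUBE_API_ARGS"
echo "$CMD"
```

Printing the expanded command this way is a quick sanity check that none of the variables is empty or misquoted before starting the service.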

4.6 The kube-controller-manager service
4.6.1 Create the kube-controller-manager systemd unit file

[root@k8s-master ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4.6.2 The configuration file /etc/kubernetes/controller-manager:

[root@k8s-master]#cat /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=" "

4.6.3 The parameter file /etc/kubernetes/config (the same file created in 4.5.4)

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.0.0.180:8080"

4.7 The kube-scheduler service
4.7.1 Create the kube-scheduler systemd unit file

[root@k8s-master ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4.7.2 Create the /etc/kubernetes/scheduler parameter file

[root@k8s-master ~]# cat /etc/kubernetes/scheduler
#KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/home/k8s-t/log/kubernetes --v=2"

4.7.3 The parameter file /etc/kubernetes/config (the same file created in 4.5.4)

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.0.0.180:8080"

4.8 Enable all master components at boot and start them

systemctl daemon-reload 
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

4.9 Verify the master node

[root@k8s-master]#kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

Chapter 5: Node deployment
5.1 For Docker deployment, see Chapter 1.
5.2 A Kubernetes node runs the following components:

flanneld
docker
kubelet
kube-proxy

5.3 Install and configure kubelet

wget https://dl.k8s.io/v1.11.1/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf kubernetes-src.tar.gz
sudo cp -r ./server/bin/{kube-proxy,kubelet} /usr/bin/

5.3.1 Create the kubelet configuration files

[root@k8s-node-1 ~]# cat /etc/kubernetes/config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://10.0.0.180:8080"
############################################################################################
[root@k8s-node-1 ~]# cat /etc/kubernetes/kubelet
# Log to standard error
KUBE_LOGTOSTDERR="--logtostderr=true"
# Log level
KUBE_LOG_LEVEL="--v=0"
# Kubelet listen address
NODE_ADDRESS="--address=10.0.0.181"
# Kubelet port
NODE_PORT="--port=10250"
# Override the node name
NODE_HOSTNAME="--hostname-override=10.0.0.181"
# Path to the kubeconfig used to reach the API server
KUBELET_KUBECONFIG="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
# Allow containers to request privileged mode (default false)
KUBE_ALLOW_PRIV="--allow-privileged=false"
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
# DNS settings
KUBELET_DNS_IP="--cluster-dns=10.254.0.2"
KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local"
# Do not fail when swap is enabled
KUBELET_SWAP="--fail-swap-on=false"

5.3.1.1 The kubelet.kubeconfig file content:

cat /etc/kubernetes/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: http://10.0.0.180:8080
    name: local
contexts:
  - context:
      cluster: local
    name: local
current-context: local
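
The kubeconfig above can be generated with a simple heredoc. A minimal sketch, writing to a staging path instead of /etc/kubernetes (the OUT path is illustrative; on the node it would be /etc/kubernetes/kubelet.kubeconfig):

```shell
# Sketch: generate kubelet.kubeconfig pointing at the insecure API server
# endpoint. OUT is a staging path so the sketch can be tried anywhere.
OUT="${OUT:-/tmp/kubelet.kubeconfig}"
APISERVER="http://10.0.0.180:8080"

cat > "$OUT" <<EOF
apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: ${APISERVER}
    name: local
contexts:
  - context:
      cluster: local
    name: local
current-context: local
EOF

# Quick sanity check: the kubeconfig must reference the API server address.
grep -q "server: ${APISERVER}" "$OUT" && echo "kubeconfig written to $OUT"
```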

5.3.2 Create the kubelet systemd unit file

cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
${KUBE_LOGTOSTDERR} \
${KUBE_LOG_LEVEL} \
${NODE_ADDRESS} \
${NODE_PORT} \
${NODE_HOSTNAME} \
${KUBELET_KUBECONFIG} \
${KUBE_ALLOW_PRIV} \
${KUBELET_DNS_IP} \
${KUBELET_DNS_DOMAIN} \
${KUBELET_SWAP}
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target

5.3.3 Start kubelet

[root@k8s-node-1 ~]# systemctl daemon-reload
[root@k8s-node-1 ~]# systemctl enable kubelet.service
[root@k8s-node-1 ~]# systemctl start kubelet.service
[root@k8s-node-1 ~]# systemctl status kubelet.service

5.4 Install and configure kube-proxy
5.4.1 Create the kube-proxy configuration file

[root@k8s-node-1 ~]# cat /etc/kubernetes/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""

5.4.2 The parameter file /etc/kubernetes/config (the same file created in 5.3.1)

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.0.0.180:8080"

5.4.3 Create the kube-proxy systemd unit file

cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5.4.4 Start kube-proxy

[root@k8s-node-1 ~]# systemctl daemon-reload
[root@k8s-node-1 ~]# systemctl enable kube-proxy
[root@k8s-node-1 ~]# systemctl start kube-proxy
[root@k8s-node-1 ~]# systemctl status kube-proxy

5.5 Check the node status

[root@k8s-master ~]# kubectl get nodes
NAME            STATUS    AGE
192.168.1.181   Ready     1m

Chapter 6: Deploy the Flannel network

Flannel can be deployed in two ways, from an RPM package or from binaries; this article demonstrates both:

6.1 RPM package deployment

rpm -ivh flannel-0.7.1-4.el7.x86_64.rpm
or
yum -y install flannel

6.1.1 Configure the flannel network

cat /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://10.0.0.180:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/k8s/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

6.1.2 Set the flannel key in etcd

[root@k8s-master ~]# etcdctl set /k8s/network/config '{"Network": "172.20.0.0/16"}'
{"Network": "172.20.0.0/16"}
[root@k8s-master ~]# etcdctl get /k8s/network/config
{"Network": "172.20.0.0/16"}
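
Before writing the key, it can help to validate the JSON locally, so a typo does not leave flanneld with an unparsable config in etcd. A minimal sketch, assuming python3 is available for the JSON check:

```shell
# Sketch: validate the flannel config JSON before writing it to etcd.
FLANNEL_JSON='{"Network": "172.20.0.0/16"}'

# Parse the JSON and extract the Network CIDR; NET stays empty on bad JSON.
NET=$(printf '%s' "$FLANNEL_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["Network"])')
echo "Pod network: $NET"

# Only once the check passes, write the key (run on the master):
#   etcdctl set /k8s/network/config "$FLANNEL_JSON"
```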

6.1.3 Start
6.1.3.1 After starting flannel, restart docker and the Kubernetes services in order.
6.1.3.2 Run on the master:

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable flanneld.service 
[root@k8s-master ~]# systemctl start flanneld.service 
[root@k8s-master ~]# service docker restart
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service

6.1.3.3 Run on the node:

[root@k8s-node-1 ~]# systemctl daemon-reload
[root@k8s-node-1 ~]# systemctl enable flanneld.service 
[root@k8s-node-1 ~]# systemctl start flanneld.service 
[root@k8s-node-1 ~]# service docker restart
[root@k8s-node-1 ~]# systemctl restart kubelet.service
[root@k8s-node-1 ~]# systemctl restart kube-proxy.service

6.2 Deploy flanneld from binaries
6.2.1 Download flanneld

mkdir flannel
wget https://github.com/coreos/flannel/releases/download/v0.7.1/flannel-v0.7.1-linux-amd64.tar.gz
tar -xzvf flannel-v0.7.1-linux-amd64.tar.gz -C flannel
sudo cp flannel/{flanneld,mk-docker-opts.sh} /usr/bin

6.2.2 Create the flanneld systemd unit file

[root@k8s-master ~]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service

6.2.3 Create the flanneld configuration file (required on both master and node)

[root@k8s-master ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.180:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/k8s/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

6.2.4 Set the flannel key in etcd

[root@k8s-master ~]# etcdctl set /k8s/network/config '{"Network": "172.20.0.0/16"}'
{"Network": "172.20.0.0/16"}
[root@k8s-master ~]# etcdctl get /k8s/network/config
{"Network": "172.20.0.0/16"}

6.2.5 Start
6.2.5.1 After starting flannel, restart docker and the Kubernetes services in order.
6.2.5.2 Run on the master:

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable flanneld.service 
[root@k8s-master ~]# systemctl start flanneld.service 
[root@k8s-master ~]# service docker restart
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service

6.2.5.3 Run on the node:

[root@k8s-node-1 ~]# systemctl daemon-reload
[root@k8s-node-1 ~]# systemctl enable flanneld.service 
[root@k8s-node-1 ~]# systemctl start flanneld.service 
[root@k8s-node-1 ~]# service docker restart
[root@k8s-node-1 ~]# systemctl restart kubelet.service
[root@k8s-node-1 ~]# systemctl restart kube-proxy.service

6.2.6 Verify the flanneld service

[root@k8s-master ~]# journalctl  -u flanneld |grep 'Lease acquired'
[root@k8s-master ~]# ifconfig flannel.1
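
The two checks above confirm that flanneld acquired a lease and created the flannel.1 interface. The Docker options come from the subnet.env file flanneld writes after acquiring its lease; the sketch below mimics that file with sample values (the subnet and MTU are illustrative) and shows, in simplified form, what mk-docker-opts.sh derives from it:

```shell
# Sketch: parse a sample /run/flannel/subnet.env the way mk-docker-opts.sh
# does. ENVFILE is a stand-in so this can be tried without a running flanneld.
ENVFILE="${ENVFILE:-/tmp/subnet.env}"
cat > "$ENVFILE" <<'EOF'
FLANNEL_NETWORK=172.20.0.0/16
FLANNEL_SUBNET=172.20.63.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
EOF

. "$ENVFILE"
# Docker should use flannel's per-host subnet as its bridge network:
DOCKER_OPTS="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
echo "$DOCKER_OPTS"
```

If dockerd does not pick up these options after a restart, comparing the bridge address (docker0) against FLANNEL_SUBNET is a quick way to spot the mismatch.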
