
Kubernetes Cluster Deployment (Part 1)


There are several ways to deploy a K8S cluster: kubeadm, minikube, and binary packages. The first two are automated deployments that simplify the process, but we strongly recommend that beginners deploy from binary packages, because automated tools hide many of the details and give you very little feel for the individual components, which is bad for learning. This article therefore deploys a Kubernetes cluster from binary packages.

1. Architecture Topology

(Diagram: architecture topology)

2. Environment Planning

Role      IP                Hostname   Components
master1   192.168.161.161   master1    etcd1, master components
master2   192.168.161.162   master2    etcd2, master components
node1     192.168.161.163   node1      kubelet, kube-proxy, docker, flannel
node2     192.168.161.164   node2      kubelet, kube-proxy, docker, flannel
  1. kube-apiserver: runs on the master nodes and accepts user requests.
  2. kube-scheduler: runs on the master nodes and handles resource scheduling, i.e. deciding which node a pod is placed on.
  3. kube-controller-manager: runs on the master nodes and contains the ReplicationManager, EndpointsController, NamespaceController, NodeController and so on.
  4. etcd: a distributed key-value store that shares the resource-object state of the whole cluster.
  5. kubelet: runs on the node machines and is responsible for maintaining the pods running on its host.
  6. kube-proxy: runs on the node machines and acts as a service proxy.

A quick refresher on how a deployment flows through these components:

(Diagram: deployment workflow)

① kubectl sends the deployment request to the API Server.

② The API Server notifies the Controller Manager to create a Deployment resource.

③ The Scheduler performs the scheduling and assigns the two replica Pods to k8s-node1 and k8s-node2.

④ The kubelet on k8s-node1 and k8s-node2 creates and runs the Pods on its own node.
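For reference, the request in step ① can be as simple as the following command pointed at one of the masters configured later in this article; on the yum-packaged kubectl it creates a Deployment with two replicas, which then flows through steps ② to ④ (the nginx name and image are only an example):

kubectl -s http://master1:8080 run nginx --image=nginx --replicas=2 --port=80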

3. Cluster Deployment

  • The systems run CentOS 7.3
  • Disable the firewall
systemctl disable firewalld  
systemctl stop firewalld 
  • Disable SELinux (see the commands after this list)
  • Install and start NTP
# yum -y install ntp  
# systemctl start ntpd  
# systemctl enable ntpd
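A common way to disable SELinux on CentOS 7 is shown below; the sed line assumes the stock /etc/selinux/config and the change only takes full effect after a reboot:

# setenforce 0                                                     # switch to permissive mode right away
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config     # disable permanently (after reboot)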

Configure /etc/hosts on all four machines:

vim /etc/hosts

192.168.161.161 master1
192.168.161.162 master2
192.168.161.163 node1
192.168.161.164 node2

192.168.161.161 etcd
192.168.161.162 etcd
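
A quick sanity check of name resolution on each machine (the etcd name deliberately points at the masters; it is worth confirming which address it actually resolves to):

getent hosts etcd    # show the address(es) the etcd name resolves to
ping -c 1 master2    # confirm the other hosts are reachable by name
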
3.1 Deploy the masters

Install etcd
[root@master1 ~]# yum -y install etcd
Configure etcd

The etcd installed via yum uses /etc/etcd/etcd.conf as its default configuration file. The configuration of both nodes is shown below; note where they differ.

2379 is etcd's default client port; port 4001 is added as a backup to guard against port-conflict problems.

master1:

[root@master1 ~]# vim /etc/etcd/etcd.conf 

# [member]  
ETCD_NAME=etcd1  
ETCD_DATA_DIR="/var/lib/etcd/test.etcd"  
#ETCD_WAL_DIR=""  
#ETCD_SNAPSHOT_COUNT="10000"  
#ETCD_HEARTBEAT_INTERVAL="100"  
#ETCD_ELECTION_TIMEOUT="1000"  
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"  
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"  
#ETCD_MAX_SNAPSHOTS="5"  
#ETCD_MAX_WALS="5"  
#ETCD_CORS=""  
#  
#[cluster]  
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://master1:2380"  
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."  
ETCD_INITIAL_CLUSTER="etcd1=http://master1:2380,etcd2=http://master2:2380"  
ETCD_INITIAL_CLUSTER_STATE="new"  
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-baby"  
ETCD_ADVERTISE_CLIENT_URLS="http://master1:2379,http://master1:4001"

master2:

[root@master2 ~]# vim /etc/etcd/etcd.conf

# [member]  
ETCD_NAME=etcd2  
ETCD_DATA_DIR="/var/lib/etcd/test.etcd"  
#ETCD_WAL_DIR=""  
#ETCD_SNAPSHOT_COUNT="10000"  
#ETCD_HEARTBEAT_INTERVAL="100"  
#ETCD_ELECTION_TIMEOUT="1000"  
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"  
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"  
#ETCD_MAX_SNAPSHOTS="5"  
#ETCD_MAX_WALS="5"  
#ETCD_CORS=""  
#  
#[cluster]  
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://master2:2380"  
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."  
ETCD_INITIAL_CLUSTER="etcd1=http://master1:2380,etcd2=http://master2:2380"  
ETCD_INITIAL_CLUSTER_STATE="new"  
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-baby"  
ETCD_ADVERTISE_CLIENT_URLS="http://master2:2379,http://master2:4001"
Parameter explanations:
name                          node name
data-dir                      the node's data storage directory
listen-peer-urls              URLs to listen on for communication with other cluster members
listen-client-urls            URLs that serve client requests, e.g. http://ip:2379,http://127.0.0.1:2379; clients connect to these to talk to etcd
initial-advertise-peer-urls   the peer URLs this member advertises to the rest of the cluster
initial-cluster               all cluster members, in the form node1=http://ip1:2380,node2=http://ip2:2380. Note: node1 here is the name set by --name, and ip1:2380 is the value given to --initial-advertise-peer-urls
initial-cluster-state         new when creating a new cluster; existing when joining a cluster that already exists
initial-cluster-token         the token used when creating the cluster; keep it unique per cluster. If you re-create a cluster, even with the same configuration, a new cluster and new member UUIDs are generated, which avoids conflicts between clusters and the unknown errors they would cause
advertise-client-urls         the client URLs this member advertises to the rest of the cluster

After making the changes above, start the etcd service on both master nodes and verify the cluster state:

Master1

[root@master1 etcd]# systemctl start etcd

[root@master1 etcd]# etcdctl -C http://etcd:2379 cluster-health 
member 22a9f7f65563bff5 is healthy: got healthy result from http://master2:2379
member d03b92adc5af7320 is healthy: got healthy result from http://master1:2379
cluster is healthy

[root@master1 etcd]# etcdctl -C http://etcd:4001 cluster-health 
member 22a9f7f65563bff5 is healthy: got healthy result from http://master2:2379
member d03b92adc5af7320 is healthy: got healthy result from http://master1:2379
cluster is healthy

[root@master1 etcd]# systemctl enable etcd 
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

Master2

[root@master2 etcd]# systemctl start etcd

[root@master2 etcd]# etcdctl -C http://etcd:2379 cluster-health 
member 22a9f7f65563bff5 is healthy: got healthy result from http://master2:2379
member d03b92adc5af7320 is healthy: got healthy result from http://master1:2379
cluster is healthy

[root@master2 etcd]# etcdctl -C http://etcd:4001 cluster-health
member 22a9f7f65563bff5 is healthy: got healthy result from http://master2:2379
member d03b92adc5af7320 is healthy: got healthy result from http://master1:2379
cluster is healthy

[root@master2 etcd]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
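
Besides cluster-health, you can also list the members and the URLs they advertise (same etcdctl v2 syntax as used above):

etcdctl -C http://etcd:2379 member list    # shows both members with their peer and client URLs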

Deploy the master components

Install docker, enable it at boot, and start the service

Install the docker service on both master1 and master2:

[root@master1 etcd]# yum install docker -y  
[root@master1 etcd]# chkconfig docker on  
[root@master1 etcd]# systemctl start docker.service  
Install kubernetes
yum install kubernetes -y

On the master machines, three components need to run: the Kubernetes API Server, the Kubernetes Controller Manager, and the Kubernetes Scheduler.

First, modify the /etc/kubernetes/apiserver file:

[root@master1 kubernetes]# vim apiserver

###  
# kubernetes system config  
#  
# The following values are used to configure the kube-apiserver  
#  

# The address on the local server to listen to.  
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.  
KUBE_API_PORT="--port=8080"

# Port minions listen on  
# KUBELET_PORT="--kubelet-port=10250"  

# Comma separated list of nodes in the etcd cluster  
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services  
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies  
# KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"  
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!  
KUBE_API_ARGS=""

Next, modify the /etc/kubernetes/config file (in the last line, masterX:8080 should match the machine itself, i.e. master1:8080 on master1 and master2:8080 on master2):

[root@master1 ~]# vim /etc/kubernetes/config

###
# kubernetes system config  
#  
# The following values are used to configure various aspects of all  
# kubernetes services, including  
#  
#   kube-apiserver.service  
#   kube-controller-manager.service  
#   kube-scheduler.service  
#   kubelet.service  
#   kube-proxy.service  
# logging to stderr means we get it in the systemd journal  
KUBE_LOGTOSTDERR="--logtostderr=true"  
  
# journal message level, 0 is debug  
KUBE_LOG_LEVEL="--v=0"  
  
# Should this cluster be allowed to run privileged docker containers  
KUBE_ALLOW_PRIV="--allow-privileged=false"  
  
# How the controller-manager, scheduler, and proxy find the apiserver  
KUBE_MASTER="--master=http://master1:8080" 

Once the changes are done, start the services and enable them at boot:

systemctl enable kube-apiserver  
systemctl start kube-apiserver  
systemctl enable kube-controller-manager  
systemctl start kube-controller-manager  
systemctl enable kube-scheduler  
systemctl start kube-scheduler  
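To check that the master components are up, you can query the API server on the insecure port configured above; the -s flag points kubectl at that address (master1 is used here as an example):

kubectl -s http://master1:8080 get componentstatuses    # scheduler, controller-manager and etcd should report Healthy
curl http://master1:8080/version                        # the API server should answer with its version as JSON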

Deploy the nodes

Install docker, enable it at boot, and start the service
yum install docker -y  
chkconfig docker on  
systemctl start docker.service
Install kubernetes
yum install kubernetes -y

On the node machines, two components need to run: kubelet and kube-proxy.

First, modify the /etc/kubernetes/config file (note: the address configured here is the etcd hostname, i.e. one of the master1/master2 addresses):

[root@node1 ~]# vim /etc/kubernetes/config

###
# kubernetes system config  
#  
# The following values are used to configure various aspects of all  
# kubernetes services, including  
#  
#   kube-apiserver.service  
#   kube-controller-manager.service  
#   kube-scheduler.service  
#   kubelet.service  
#   kube-proxy.service  
# logging to stderr means we get it in the systemd journal  
KUBE_LOGTOSTDERR="--logtostderr=true"  
  
# journal message level, 0 is debug  
KUBE_LOG_LEVEL="--v=0"  
  
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"  
  
# How the controller-manager, scheduler, and proxy find the apiserver  
KUBE_MASTER="--master=http://etcd:8080"

Next, modify the /etc/kubernetes/kubelet file (note: --hostname-override= should be the node machine itself):

[root@node1 ~]# vim /etc/kubernetes/kubelet

###  
# kubernetes kubelet (minion) config  
  
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)  
KUBELET_ADDRESS="--address=0.0.0.0"  
  
# The port for the info server to serve on  
# KUBELET_PORT="--port=10250"  
  
# You may leave this blank to use the actual hostname  
KUBELET_HOSTNAME="--hostname-override=node1"  
  
# location of the api-server  
KUBELET_API_SERVER="--api-servers=http://etcd:8080"  
  
# pod infrastructure container  
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"  
  
# Add your own!  
KUBELET_ARGS=""  

Once the changes are done, start the services and enable them at boot:

systemctl enable kubelet  
systemctl start kubelet  
systemctl enable kube-proxy  
systemctl start kube-proxy 
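Before checking from the master, you can confirm on the node itself that the services came up (plain systemd commands, nothing Kubernetes-specific):

systemctl is-active kubelet kube-proxy    # both should print "active"
journalctl -u kubelet --no-pager -n 20    # recent kubelet log lines, useful if the node does not register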

Check the cluster state

On either master, list the nodes in the cluster and their status:

[root@master1 kubernetes]# kubectl get node
NAME      STATUS    AGE
node1     Ready     1m
node2     Ready     1m

At this point a Kubernetes cluster has been set up, but it cannot yet work properly, because pod networking across the cluster still needs to be managed in a unified way.

Create the flannel overlay network

Run the following on the masters and nodes alike to install flannel:

yum install flannel -y  

Edit the /etc/sysconfig/flanneld file on the masters and nodes alike:

[root@master1 kubernetes]# vim /etc/sysconfig/flanneld

# Flanneld configuration options    
  
# etcd url location.  Point this to the server where etcd runs  
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"  
  
# etcd config key.  This is the configuration key that flannel queries  
# For address range assignment  
FLANNEL_ETCD_PREFIX="/atomic.io/network"  
  
# Any additional options that you want to pass  
#FLANNEL_OPTIONS=""  

flannel stores its configuration in etcd to keep the configuration of all flannel instances consistent, so the following needs to be written into etcd:

etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'

(The key /atomic.io/network/config corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail to start.)
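
You can read the key back to make sure it was written as intended (etcdctl v2 syntax, same as above):

etcdctl -C http://etcd:2379 get /atomic.io/network/config    # should print the JSON network definition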

Start flanneld with the new configuration, then restart docker and the Kubernetes services in turn.

On the master machines run:

systemctl enable flanneld  
systemctl start flanneld  
service docker restart  
systemctl restart kube-apiserver  
systemctl restart kube-controller-manager  
systemctl restart kube-scheduler 

On the node machines run:

systemctl enable flanneld  
systemctl start flanneld  
service docker restart  
systemctl restart kubelet  
systemctl restart kube-proxy 

With that, an etcd cluster + flannel + Kubernetes cluster is up and running on CentOS 7.
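
As a final sanity check you can re-run the nginx example from the workflow section and confirm that the replicas land on different nodes with addresses from the flannel 10.0.0.0/16 range:

kubectl -s http://master1:8080 get nodes           # node1 and node2 should be Ready
kubectl -s http://master1:8080 get pods -o wide    # pod IPs should come from the flannel subnet, spread across node1 and node2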

Note:

An overview of the flannel architecture

(Diagram: flannel architecture)

By default flannel uses port 8285 for its UDP-encapsulated packets; the VXLAN backend uses port 8472.
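
If you would rather use the VXLAN backend (port 8472) than the default UDP backend, the network configuration stored in etcd would look roughly like the sketch below; keep the key consistent with FLANNEL_ETCD_PREFIX, and use etcdctl set instead of mk if the key already exists:

etcdctl set /atomic.io/network/config '{ "Network": "10.0.0.0/16", "Backend": { "Type": "vxlan" } }'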

So how does a network packet travel from one container to another?

1. The container addresses the target container by its IP directly; by default the packet leaves through the container's eth0.

2. The packet travels through the veth pair to vethXXX.

3. vethXXX is attached to the virtual switch docker0, so the packet is forwarded out through the docker0 bridge.

4. A route lookup sends packets destined for containers on other hosts to the flannel0 virtual interface, a point-to-point virtual NIC, and the packet is handed to the flanneld process listening on its other end.

5. flanneld maintains the routing table between nodes via etcd, wraps the original packet in a UDP layer, and sends it out through the configured iface.

6. The packet crosses the host network to reach the target host.

7. The packet travels up to the transport layer and is handed to the flanneld process listening on port 8285.

8. The data is unwrapped and sent to the flannel0 virtual interface.

9. A route lookup shows the packet for the target container should go to docker0.

10. docker0 finds the container attached to it and delivers the packet.
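On a node you can inspect each hop of this path with standard tools (interface names assume the default UDP backend described above):

ip addr show flannel0          # the point-to-point interface holding this node's flannel subnet
ip route                       # routes that send other nodes' container subnets to flannel0
cat /run/flannel/subnet.env    # the subnet and MTU flanneld handed over to docker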
