
Setting up Kubernetes 1.5.2 + Dashboard on CentOS 7.2


I. Introduction

Containers are no longer an unfamiliar concept for enterprises, and Kubernetes, the container platform open-sourced by Google, has been enthusiastically adopted. Standing up a complete Kubernetes platform is the first hurdle anyone who wants to try it must clear. This tutorial installs Kubernetes 1.5.2, which is still relatively easy to set up through the yum repositories. From Kubernetes 1.6 onwards the installation becomes considerably more involved, requiring certificates and various kinds of authentication, which is not covered here. k8s (short for Kubernetes) consists of the following main components:

A Kubernetes cluster contains two types of nodes: master nodes and minion nodes.

Minion nodes are the nodes that actually run Docker containers; they interact with the Docker daemon running on the node and also provide proxy functionality.

The master node exposes a set of APIs for managing the cluster and operates the cluster by interacting with the minion nodes.

apiserver: the entry point for users to interact with the Kubernetes cluster. It wraps the create/read/update/delete operations on the core objects, exposes a RESTful API, and uses etcd for persistence and to keep objects consistent.

scheduler: responsible for scheduling and managing cluster resources. For example, when a pod exits abnormally and needs to be placed on another machine, the scheduler uses its scheduling algorithm to find the most suitable node.

controller-manager: mainly ensures that the number of running pods matches the replica count defined by a replicationController, and also keeps the mapping from services to pods up to date.

kubelet: runs on the minion nodes and interacts with Docker on the node, e.g. starting and stopping containers and monitoring their running state.

proxy: runs on the minion nodes and provides proxying for pods. It periodically fetches service information from etcd and rewrites iptables rules accordingly (the earliest versions forwarded traffic in the proxy process itself, which was less efficient), so that traffic is forwarded to the node where the target pod is running.

etcd: a key-value store used to hold Kubernetes' state.

flannel: Flannel is an overlay network tool designed by the CoreOS team for Kubernetes; it has to be downloaded and deployed separately. When Docker starts it creates an IP address used to talk to its containers; left unmanaged, this address may be identical on every machine and only works for communication on the local host, so containers on other machines cannot be reached. Flannel re-plans how IP addresses are allocated across all nodes in the cluster, so that containers on different nodes get non-overlapping addresses within a single internal network and can talk to each other directly over those internal IPs.

II. Deployment plan

centos71  192.168.223.155  master node

centos72  192.168.223.156  node

centos73  192.168.223.157  node

III. Preparation before installation

This installation uses three virtual machines created with VMware, running CentOS Linux release 7.2.1511 (Core).

All three hosts need the following steps:

Disable iptables:

service iptables stop

chkconfig iptables off

Disable firewalld:

systemctl disable firewalld.service

systemctl stop firewalld.service

Disable SELinux:

setenforce 0

Edit the configuration file vi /etc/selinux/config:

SELINUX=disabled

Set up the hosts file, vi /etc/hosts:

192.168.223.155 centos71

192.168.223.156 centos72

192.168.223.157 centos73
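With /etc/hosts populated, it is worth a quick sanity check that the machines can reach each other by hostname before continuing (run from every host; shown here from the master):

[root@centos71 ~]# ping -c 1 centos72
[root@centos71 ~]# ping -c 1 centos73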

IV. Deploy etcd

1. Install etcd via yum

[root@centos71 ~]# yum install -y etcd

2. Modify the configuration file

[root@centos71 ~]# vi /etc/etcd/etcd.conf

# Modify the entries below that are left uncommented (highlighted in red in the original post)

#[Member]

#ETCD_CORS=""

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

#ETCD_WAL_DIR=""

#ETCD_LISTEN_PEER_URLS="http://localhost:2380"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"

#ETCD_MAX_SNAPSHOTS="5"

#ETCD_MAX_WALS="5"

ETCD_NAME="master"

#ETCD_SNAPSHOT_COUNT="100000"

#ETCD_HEARTBEAT_INTERVAL="100"

#ETCD_ELECTION_TIMEOUT="1000"

#ETCD_QUOTA_BACKEND_BYTES="0"

#

#[Clustering]

#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://centos71:2379,http://centos71:4001"

#ETCD_DISCOVERY=""

#ETCD_DISCOVERY_FALLBACK="proxy"

#ETCD_DISCOVERY_PROXY=""

#ETCD_DISCOVERY_SRV=""

#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"

#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

#ETCD_INITIAL_CLUSTER_STATE="new"

#ETCD_STRICT_RECONFIG_CHECK="true"

#ETCD_ENABLE_V2="true"

#

#[Proxy]

#ETCD_PROXY="off"

#ETCD_PROXY_FAILURE_WAIT="5000"

#ETCD_PROXY_REFRESH_INTERVAL="30000"

#ETCD_PROXY_DIAL_TIMEOUT="1000"

#ETCD_PROXY_WRITE_TIMEOUT="5000"

#ETCD_PROXY_READ_TIMEOUT="0"

#

#[Security]

#ETCD_CERT_FILE=""

#ETCD_KEY_FILE=""

#ETCD_CLIENT_CERT_AUTH="false"

#ETCD_TRUSTED_CA_FILE=""

#ETCD_AUTO_TLS="false"

#ETCD_PEER_CERT_FILE=""

#ETCD_PEER_KEY_FILE=""

#ETCD_PEER_CLIENT_CERT_AUTH="false"

#ETCD_PEER_TRUSTED_CA_FILE=""

#ETCD_PEER_AUTO_TLS="false"

#

#[Logging]

#ETCD_DEBUG="false"

#ETCD_LOG_PACKAGE_LEVELS=""

#ETCD_LOG_OUTPUT="default"

#

#[Unsafe]

#ETCD_FORCE_NEW_CLUSTER="false"

#

#[Version]

#ETCD_VERSION="false"

#ETCD_AUTO_COMPACTION_RETENTION="0"

#

#[Profiling]

#ETCD_ENABLE_PPROF="false"

#ETCD_METRICS="basic"

#

#[Auth]

#ETCD_AUTH_TOKEN="simple"

3. Start etcd and verify

[root@centos71 ~]# systemctl enable etcd

[root@centos71 ~]# systemctl start etcd

[root@centos71 ~]# etcdctl set testdir/testkey0 10

10

[root@centos71 ~]# etcdctl get testdir/testkey0

10

[root@centos71 ~]# etcdctl -C http://centos71:4001 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://centos71:2379

cluster is healthy

[root@centos71 ~]# etcdctl -C http://centos71:2379 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://centos71:2379

cluster is healthy

# Verification succeeded. This tutorial runs etcd in single-node mode; to build an etcd cluster, please refer to other tutorials.
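As an additional check, etcdctl can list the cluster members; for this single-node setup the output should look roughly like the following (the exact format depends on the etcd version):

[root@centos71 ~]# etcdctl member list
8e9e05c52164694d: name=master peerURLs=http://localhost:2380 clientURLs=http://centos71:2379,http://centos71:4001 isLeader=true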

V. Deploy the master

1. Install Docker

[root@centos71 ~]# yum install docker -y

Edit the Docker configuration file (/etc/sysconfig/docker) so that images can be pulled from the registry:

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'

if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

OPTIONS='--insecure-registry registry:5000'

[root@centos71 ~]# systemctl enable docker # enable Docker at boot

[root@centos71 ~]# systemctl start docker # start Docker
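Optionally verify that Docker is running and that the insecure-registry option was picked up; the exact docker info layout varies with the Docker version shipped in the yum repositories:

[root@centos71 ~]# docker version
[root@centos71 ~]# docker info | grep -i -A1 registry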

2. Install Kubernetes

[root@centos71 ~]# yum install kubernetes -y

3. Modify the Kubernetes configuration files

Configure the following components, which need to run on the master node:

Kubernetes API Server

Kubernetes Controller Manager

Kubernetes Scheduler

[root@centos71 ~]# vim /etc/kubernetes/apiserver

###

# kubernetes system config

#

# The following values are used to configure the kube-apiserver

#

# The address on the local server to listen to.

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.

KUBE_API_PORT="--port=8080"

# Port minions listen on

# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=http://centos71:2379"

# Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/16"

# flannel network range setting (keep this consistent with the flannel network configured later)

# default admission control policies

#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# ServiceAccount must be removed, otherwise authentication errors will occur

# Add your own!

KUBE_API_ARGS=""

[root@centos71 ~]# vim /etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

# kube-apiserver.service

# kube-controller-manager.service

# kube-scheduler.service

# kubelet.service

# kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://centos71:8080"

4. Start the services

[root@centos71 ~]# systemctl enable kube-apiserver.service

[root@centos71 ~]# systemctl start kube-apiserver.service

[root@centos71 ~]# systemctl enable kube-controller-manager.service

[root@centos71 ~]# systemctl start kube-controller-manager.service

[root@centos71 ~]# systemctl enable kube-scheduler.service

[root@centos71 ~]# systemctl start kube-scheduler.service
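Before moving on to the nodes, it is worth confirming that the apiserver answers on port 8080 and that the control-plane components report healthy; the output should look roughly like this:

[root@centos71 ~]# curl http://centos71:8080/version
[root@centos71 ~]# kubectl -s http://centos71:8080 get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}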

VI. Deploy the nodes

The following must be done on both nodes, centos72 and centos73; centos72 is used as the example here.

1. Install Docker

See Section V, step 1.

2. Install Kubernetes

[root@centos72 ~]# yum install kubernetes -y

Configure the following components, which need to run on the node:

Kubelet

Kubernetes Proxy

3. Modify the Kubernetes configuration files

[root@centos72 ~]# vim /etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

# kube-apiserver.service

# kube-controller-manager.service

# kube-scheduler.service

# kubelet.service

# kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://centos71:8080"

[root@centos72 ~]# vim /etc/kubernetes/kubelet

###

# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on

# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=centos72"

# location of the api-server

KUBELET_API_SERVER="--api-servers=http://centos71:8080"

# pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!

KUBELET_ARGS=""

4. Start the services

[root@centos72 ~]# systemctl enable kubelet.service

[root@centos72 ~]# systemctl start kubelet.service

[root@centos72 ~]# systemctl enable kube-proxy.service

[root@centos72 ~]# systemctl start kube-proxy.service

The cluster configuration is complete. On the master node centos71, check the cluster nodes and their status:

[root@centos71 ~]# kubectl -s http://centos71:8080 get node

NAME STATUS AGE

centos72 Ready 2d

centos73 Ready 2d

Or:

[root@centos71 ~]# kubectl get nodes

NAME STATUS AGE

centos72 Ready 2d

centos73 Ready 2d

At this point a Kubernetes cluster has been set up, but it does not yet work properly; please continue with the following steps.

VII. Create the overlay network: Flannel

These steps must be executed on the master node as well as on every node.

1. Install Flannel

[root@centos71 ~]# yum install flannel -y

2. Configure Flannel

[root@centos71 ~]# vim /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs

FLANNEL_ETCD_ENDPOINTS="http://centos71:2379"

# etcd config key. This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass

#FLANNEL_OPTIONS=""

3. Configure the flannel key in etcd

Flannel stores its configuration in etcd so that multiple flannel instances stay consistent, which is why the following key has to be created in etcd. (The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail to start.) The network range must also stay consistent with KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/16" configured in /etc/kubernetes/apiserver on the master.

[root@centos71 ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'

{ "Network": "10.0.0.0/16" }

4. Start Flannel

After starting Flannel, Docker and the Kubernetes services need to be restarted in turn.

Run on the master node:

systemctl enable flanneld.service

systemctl start flanneld.service

systemctl restart docker

systemctl restart kube-apiserver.service

systemctl restart kube-controller-manager.service

systemctl restart kube-scheduler.service

Run on each node:

systemctl enable flanneld.service

systemctl start flanneld.service

systemctl restart docker

systemctl restart kubelet.service

systemctl restart kube-proxy.service
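After the restarts, each machine should have been assigned a flannel subnet. A quick way to check (assuming flannel's default UDP backend, which creates a flannel0 interface, and the default subnet file location) is:

[root@centos72 ~]# cat /run/flannel/subnet.env
[root@centos72 ~]# ip addr show flannel0
[root@centos72 ~]# ip addr show docker0   # after the Docker restart, docker0 should sit inside the flannel subnet

The subnets registered by all nodes can also be listed from the master:

[root@centos71 ~]# etcdctl ls /atomic.io/network/subnets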

VIII. Install the dashboard

The dashboard provides a convenient web console that frees you, to some extent, from the command line: it can be used to manage nodes, deploy applications, monitor performance, and so on. Of course, people who prefer to build their own automation can also use the Kubernetes API to develop more advanced tools.
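As an illustration of that API, the node list the dashboard displays can also be fetched directly from the insecure apiserver port configured earlier; this returns the nodes as JSON:

[root@centos71 ~]# curl http://192.168.223.155:8080/api/v1/nodes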

1. Edit the dashboard YAML files

First, create a namespace named kube-system on the master (it may already exist by default).

[root@centos71 ~]# vim kube-namespace.yaml

{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-system"
  }
}

[root@centos71 ~]# kubectl create -f kube-namespace.yaml

[root@centos71 ~]# vim kube-dashboard.yaml

# Copyright 2015 Google Inc. All Rights Reserved.

#

# Licensed under the Apache License, Version 2.0 (the "License");

# you may not use this file except in compliance with the License.

# You may obtain a copy of the License at

#

# http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

# Configuration to deploy release version of the Dashboard UI.

#

# Example usage: kubectl create -f <this_file>

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.5.1
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: docker.io/rainf/kubernetes-dashboard-amd64
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=http://192.168.223.155:8080  # change this to your own kube-apiserver
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

2. Create the pod from the dashboard YAML file

[root@centos71 ~]# kubectl create -f kube-dashboard.yaml

deployment "kubernetes-dashboard" created

service "kubernetes-dashboard" created

3. Check and verify the pod

[root@centos71 ~]# kubectl get pods --namespace=kube-system

NAME READY STATUS RESTARTS AGE

kubernetes-dashboard-1472098125-942vp 0/1 ContainerCreating 0 5s

View the detailed events of this pod:

[root@centos71 ~]# kubectl describe pods kubernetes-dashboard-1472098125-942vp --namespace=kube-system

Name: kubernetes-dashboard-1472098125-xptk2

Namespace: kube-system

Node: centos73/192.168.223.157

Start Time: Mon, 12 Mar 2018 12:01:28 +0800

Labels: app=kubernetes-dashboard

pod-template-hash=1472098125

Status: Running

IP: 10.0.86.2

Controllers: ReplicaSet/kubernetes-dashboard-1472098125

Containers:

kubernetes-dashboard:

Container ID: docker://4938e3d24cb6524a47caad183724710e190a9ca907c1d10371c1279c95cb8a5a

Image: docker.io/rainf/kubernetes-dashboard-amd64

Image ID: docker-pullable://docker.io/rainf/kubernetes-dashboard-amd64@sha256:a7f45e6fe292abe69d92426aaca4ec62c0c62097c1aff6b8b12b8cc7a2225345

Port: 9090/TCP

Args:

--apiserver-host=http://192.168.223.155:8080

State: Running

Started: Mon, 12 Mar 2018 12:01:37 +0800

Ready: True

Restart Count: 0

Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3

Volume Mounts: <none>

Environment Variables: <none>

Conditions:

Type Status

Initialized True

Ready True

PodScheduled True

No volumes.

QoS Class: BestEffort

Tolerations: <none>

Events:

FirstSeen LastSeen Count From SubObjectPath Type Reason Message

--------- -------- ----- ---- ------------- -------- ------ -------

16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned kubernetes-dashboard-1472098125-xptk2 to centos73

15s 15s 1 {kubelet centos73} spec.containers{kubernetes-dashboard} Normal Pulling pulling image "docker.io/rainf/kubernetes-dashboard-amd64"

16s 7s 2 {kubelet centos73} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.

7s 7s 1 {kubelet centos73} spec.containers{kubernetes-dashboard} Normal Pulled Successfully pulled image "docker.io/rainf/kubernetes-dashboard-amd64"

7s 7s 1 {kubelet centos73} spec.containers{kubernetes-dashboard} Normal Created Created container with docker id 4938e3d24cb6; Security:[seccomp=unconfined]

7s 7s 1 {kubelet centos73} spec.containers{kubernetes-dashboard} Normal Started Started container with docker id 4938e3d24cb6

[root@centos71 ~]# kubectl get pods --namespace=kube-system

NAME READY STATUS RESTARTS AGE

kubernetes-dashboard-1472098125-xptk2 1/1 Running 0 18s

Note: you may hit the error Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\"". Running docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest by hand then reports redhat-ca.crt: no such file or directory. An ls shows that this file is a symlink pointing into /etc/rhsm; check whether the rhsm packages are installed with rpm -qa | grep rhsm, and if they are missing, run yum install *rhsm* on every node.
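Put together, the workaround described in the note amounts to the following commands, run on every node:

[root@centos72 ~]# rpm -qa | grep rhsm
[root@centos72 ~]# yum install -y *rhsm*
[root@centos72 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest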

4. Access the dashboard

http://192.168.223.155:8080/ui
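If the page does not come up in the browser, a quick check from the command line is to request the /ui endpoint directly; when everything is wired up it should answer with an HTTP redirect to the dashboard's proxy path:

[root@centos71 ~]# curl -sI http://192.168.223.155:8080/ui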

(Screenshot: the Kubernetes dashboard UI)
