
Offline Installation of Kubernetes from Binaries on CentOS 7


I. Download the Kubernetes (K8S) binaries

1) https://github.com/kubernetes/kubernetes/releases

Choose the appropriate version from the page above. This article uses v1.9.1 as an example; download the binaries from the CHANGELOG page into the /root directory.

2) Component selection: pick kubernetes-server-linux-amd64.tar.gz under Server Binaries.
This archive already contains every component K8S needs; there is no need to download the Client packages separately.

II. Installation approach

Unpack the kubernetes-server-linux-amd64.tar.gz binary package, copy the executable binaries under server/bin/ to /usr/bin/, then set up the corresponding systemd unit files and configuration files.

III. Node planning

Node IP          Role     Components installed
192.168.1.10     Master   etcd, kube-apiserver, kube-controller-manager, kube-scheduler
192.168.1.128    Node1    kubelet, kube-proxy, flannel

Here, etcd is the k8s datastore: every create, read, update, and delete that Kubernetes performs is persisted through etcd.

Set up the /etc/hosts bindings in advance:

sed -i '$a 192.168.1.10 master' /etc/hosts

sed -i '$a 192.168.1.128 node1' /etc/hosts
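
As an optional sanity check, confirm that both names now resolve (a minimal sketch; run on each machine):

grep -E 'master|node1' /etc/hosts
ping -c 1 master
ping -c 1 node1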

IV. Deploy the master node

1) Copy the required binaries to the /usr/bin directory.
2) Create a systemd service unit file for each component.
3) Create the parameter file each unit references.
4) Enable each service to start at boot (see the sketch after this list).
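
The walkthrough below starts each service with systemctl start only; to actually satisfy step 4, enable the services as well. A minimal sketch (run after the corresponding units exist; the node uses its own service list):

systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler docker
# on the node:
# systemctl enable docker flanneld kubelet kube-proxy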

0. Install the docker service offline

Unpack the docker.tar.gz archive, then force-install the RPMs with rpm while ignoring dependencies:

tar zxf docker.tar.gz

cd docker

rpm -ivh *.rpm --nodeps --force

Start docker:

systemctl daemon-reload

systemctl start docker
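
To confirm the offline packages installed correctly, a quick check (the exact version shown will depend on the RPMs bundled in docker.tar.gz):

docker version
systemctl is-active docker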

1. Install the etcd database

1) Download:

K8S requires etcd as its datastore. Using v3.2.11 as an example, the download address is:
https://github.com/coreos/etcd/releases/
Unpack the archive and copy the etcd and etcdctl binaries to the /usr/bin directory:

tar zxf etcd-v3.2.11-linux-amd64.tar.gz

cd etcd-v3.2.11-linux-amd64

cp etcd etcdctl /usr/bin/

2) Set up the etcd.service unit file
Create etcd.service in the /usr/lib/systemd/system/ directory:
vim /usr/lib/systemd/system/etcd.service, with the following content:

[Unit]

Description=etcd.service

[Service]

Type=notify

TimeoutStartSec=0

Restart=always

WorkingDirectory=/var/lib/etcd

EnvironmentFile=-/etc/etcd/etcd.conf

ExecStart=/usr/bin/etcd

[Install]

WantedBy=multi-user.target

3) Create the directories the unit file refers to (the working directory and the configuration directory):

mkdir -p /var/lib/etcd && mkdir -p /etc/etcd/

4) Create the etcd.conf file:

vim /etc/etcd/etcd.conf

and write the following content:

ETCD_NAME="ETCD Server"

ETCD_DATA_DIR="/var/lib/etcd/"

ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.10:2379"

# 192.168.1.10 above is the IP of the k8s etcd host. In this example etcd and the master share one server; fill in whatever IP matches your environment.

5) Start etcd:

systemctl daemon-reload

systemctl start etcd.service

6) Check whether etcd started successfully:

[root@server1 ~]# etcdctl cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://192.168.1.10:2379

cluster is healthy

This output indicates success.

7) By default etcd listens for clients on TCP port 2379 (and on 2380 for peers):

[root@server1 ~]# netstat -lntp | grep etcd

tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 11376/etcd

tcp6 0 0 :::2379 :::* LISTEN 11376/etcd
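
Optionally, a write/read round trip verifies the datastore end to end (a minimal sketch using the v2 API this etcdctl speaks by default; /test is an arbitrary scratch key):

etcdctl set /test "hello"
etcdctl get /test
etcdctl rm /test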

2. Install the kube-apiserver service

Note: the server or virtual machine NIC must have a default gateway configured, otherwise these services will fail to start!

1) Unpack the kubernetes-server-linux-amd64.tar.gz downloaded earlier, and copy kube-apiserver, kube-controller-manager, and kube-scheduler from its server/bin subdirectory to the /usr/bin/ directory:

tar zxf kubernetes-server-linux-amd64.tar.gz

cd kubernetes/server/bin/

cp kube-apiserver kube-controller-manager kube-scheduler /usr/bin/

2) Add the /usr/lib/systemd/system/kube-apiserver.service file

vim /usr/lib/systemd/system/kube-apiserver.service, with the following content:

[Unit]

Description=Kubernetes API Server

After=etcd.service

Wants=etcd.service

[Service]

EnvironmentFile=/etc/kubernetes/apiserver

ExecStart=/usr/bin/kube-apiserver \

$KUBE_ETCD_SERVERS \

$KUBE_API_ADDRESS \

$KUBE_API_PORT \

$KUBE_SERVICE_ADDRESSES \

$KUBE_ADMISSION_CONTROL \

$KUBE_API_LOG \

$KUBE_API_ARGS

Restart=on-failure

Type=notify

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

3) Create the directory kube-apiserver needs:

mkdir -p /etc/kubernetes/

4) Create the kube-apiserver configuration file /etc/kubernetes/apiserver

vim /etc/kubernetes/apiserver, with the following content:

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--port=8080"

KUBELET_PORT="--kubelet-port=10250"

KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.10:2379"

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.18.0.0/24"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

KUBE_API_ARGS=""

5) Start kube-apiserver:

systemctl daemon-reload

systemctl start kube-apiserver.service

6) Check whether it started successfully:

[root@server1 bin]# netstat -lntp | grep kube

tcp6 0 0 :::6443 :::* LISTEN 11471/kube-apiserve

tcp6 0 0 :::8080 :::* LISTEN 11471/kube-apiserve
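
Because 8080 is the insecure HTTP port configured above, the API server can also be probed directly (a sketch; /healthz should print ok):

curl http://192.168.1.10:8080/healthz
curl http://192.168.1.10:8080/version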

3. Deploy kube-controller-manager

1) Add the /usr/lib/systemd/system/kube-controller-manager.service file

vim /usr/lib/systemd/system/kube-controller-manager.service, with the following content:

[Unit]

Description=Kubernetes Controller Manager

After=kube-apiserver.service

Requires=kube-apiserver.service

[Service]

EnvironmentFile=-/etc/kubernetes/controller-manager

ExecStart=/usr/bin/kube-controller-manager \

$KUBE_MASTER \

$KUBE_CONTROLLER_MANAGER_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

2) Add the controller-manager configuration file

vim /etc/kubernetes/controller-manager, with the following content:

KUBE_MASTER="--master=http://192.168.1.10:8080"
KUBE_CONTROLLER_MANAGER_ARGS=" "

3) Start kube-controller-manager:

systemctl daemon-reload

systemctl start kube-controller-manager.service

4) Verify that kube-controller-manager started:

[root@server1 bin]# netstat -lntp | grep kube-controll

tcp6 0 0 :::10252 :::* LISTEN 11546/kube-controll

4. Deploy the kube-scheduler service

1) Edit /usr/lib/systemd/system/kube-scheduler.service

vim /usr/lib/systemd/system/kube-scheduler.service, with the following content:

[Unit]

Description=Kubernetes Scheduler

After=kube-apiserver.service

Requires=kube-apiserver.service

[Service]

User=root

EnvironmentFile=-/etc/kubernetes/scheduler

ExecStart=/usr/bin/kube-scheduler \

$KUBE_MASTER \

$KUBE_SCHEDULER_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

2) Edit the kube-scheduler configuration file

vim /etc/kubernetes/scheduler, with the following content:

KUBE_MASTER="--master=http://192.168.1.10:8080"

KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/home/k8s-t/log/kubernetes --v=2"

3) Start kube-scheduler:

systemctl daemon-reload

systemctl start kube-scheduler.service

4) Verify that it started:

[root@server1 bin]# netstat -lntp | grep kube-schedule

tcp6 0 0 :::10251 :::* LISTEN 11605/kube-schedule
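
Both kube-controller-manager and kube-scheduler also expose plain-HTTP health endpoints on the ports shown above, so they can be probed the same way as the apiserver (a sketch; each should print ok):

curl http://127.0.0.1:10252/healthz   # kube-controller-manager
curl http://127.0.0.1:10251/healthz   # kube-scheduler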

5. Add kubernetes/server/bin to the default search path (this makes kubectl available):

sed -i '$a export PATH=$PATH:/root/kubernetes/server/bin/' /etc/profile

source /etc/profile

6. Check the status of the master components:

[root@server1 bin]# kubectl get cs

NAME STATUS MESSAGE ERROR

scheduler Healthy ok

controller-manager Healthy ok

etcd-0 Healthy {"health": "true"}

At this point the k8s master node installation is complete.

One-liner to restart all master services:

for i in etcd kube-apiserver kube-controller-manager kube-scheduler docker;do systemctl restart $i;done
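
And a matching one-liner to eyeball the state of every master service (a sketch using systemctl is-active):

for i in etcd kube-apiserver kube-controller-manager kube-scheduler docker;do echo -n "$i: "; systemctl is-active $i;done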

====================================

Node installation:

The node installation requires copying kube-proxy and kubelet from kubernetes/server/bin to the /usr/bin/ directory, plus the flannel binary package.

1. Install the docker service offline

Unpack the docker.tar.gz archive, then force-install the RPMs with rpm while ignoring dependencies:

tar zxf docker.tar.gz

cd docker

rpm -ivh *.rpm --nodeps --force

2. Modify the docker unit file:

vi /usr/lib/systemd/system/docker.service

[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

After=network-online.target firewalld.service

Wants=network-online.target

[Service]

Type=notify

# the default is not to use systemd for cgroups because the delegate issues still

# exists and systemd currently does not support the cgroup feature set required

# for containers run by docker

ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead

# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.

# Only systemd 226 and above support this version.

#TasksMax=infinity

TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

# restart the docker process if it exits prematurely

Restart=on-failure

StartLimitBurst=3

StartLimitInterval=60s

[Install]

WantedBy=multi-user.target

3. Start docker:

systemctl daemon-reload

systemctl start docker

4. Unpack the k8s binary package

tar zxf kubernetes-server-linux-amd64.tar.gz

cd /root/kubernetes/server/bin/

cp kube-proxy kubelet /usr/bin/

5. Install the kube-proxy service

1) Add the /usr/lib/systemd/system/kube-proxy.service file, with the following content:

[Unit]

Description=Kubernetes Kube-Proxy Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/proxy

ExecStart=/usr/bin/kube-proxy \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_MASTER \

$KUBE_PROXY_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

2) Create the /etc/kubernetes directory:

mkdir -p /etc/kubernetes

3) Add the /etc/kubernetes/proxy configuration file

vim /etc/kubernetes/proxy, with the following content:

KUBE_PROXY_ARGS=""

4) Add the /etc/kubernetes/config file, with the following content:

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow_privileged=false"

KUBE_MASTER="--master=http://192.168.1.10:8080"

5) Start the kube-proxy service:

systemctl daemon-reload

systemctl start kube-proxy.service

6) Check the kube-proxy startup status:

[root@server2 bin]# netstat -lntp | grep kube-proxy

tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 11754/kube-proxy

tcp6 0 0 :::10256 :::* LISTEN 11754/kube-proxy
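
Port 10256 is kube-proxy's health check port, so it can also be probed over HTTP (a sketch; the reply is a small JSON document with a lastUpdated timestamp):

curl http://127.0.0.1:10256/healthz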

6. Install the kubelet service

(1) Create the /usr/lib/systemd/system/kubelet.service file

vim /usr/lib/systemd/system/kubelet.service, with the following content:

[Unit]

Description=Kubernetes Kubelet Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=docker.service

Requires=docker.service

[Service]

WorkingDirectory=/var/lib/kubelet

EnvironmentFile=-/etc/kubernetes/kubelet

ExecStart=/usr/bin/kubelet $KUBELET_ARGS

Restart=on-failure

KillMode=process

[Install]

WantedBy=multi-user.target

(2) Create the directory kubelet needs:

mkdir -p /var/lib/kubelet

(3) Create the kubelet configuration file

vim /etc/kubernetes/kubelet, with the following content:

KUBELET_HOSTNAME="--hostname-override=192.168.1.128"

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=reg.docker.tb/harbor/pod-infrastructure:latest"

KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig"

(4) Add the /var/lib/kubelet/kubeconfig file

One more configuration file is needed because, starting with 1.9.0, kubelet no longer uses KUBELET_API_SERVER to talk to the API server; the connection is described in a separate kubeconfig YAML file instead.

vim /var/lib/kubelet/kubeconfig, with the following content:

apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://192.168.1.10:8080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context

(5) Start kubelet

Disable the swap partition first: swapoff -a (otherwise kubelet fails to start).

systemctl daemon-reload

systemctl start kubelet.service

(6) Check the ports kubelet is listening on:

[root@server2 ~]# netstat -lntp | grep kubelet

tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 15410/kubelet

tcp6 0 0 :::10250 :::* LISTEN 15410/kubelet

tcp6 0 0 :::10255 :::* LISTEN 15410/kubelet

tcp6 0 0 :::4194 :::* LISTEN 15410/kubelet
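
Once kubelet is running it registers the node with the apiserver, which can be confirmed from the master (the node name depends on whether --hostname-override is actually applied; otherwise the host's own name is used):

kubectl get nodes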

7. Set up the flannel network

Flannel gives every docker container in the cluster a unique internal IP, and lets the docker0 bridges on different nodes reach one another.

(1) The flannel network only needs to be installed on the node machines, not on the etcd or master machines. The flannel download address is: https://github.com/coreos/flannel/releases


(2) After downloading, unpack it: tar zxf flannel-v0.10.0-linux-amd64.tar.gz

Copy the flanneld and mk-docker-opts.sh binaries to /usr/bin/, and flannel is installed:

cp flanneld mk-docker-opts.sh /usr/bin/

(3) Write the systemd unit file for flanneld so it can be started as a service

vi /usr/lib/systemd/system/flanneld.service, with the following content:
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
    
[Service]
Type=notify
EnvironmentFile=-/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

(4) Write the flannel configuration file, at the /etc/sysconfig/flanneld path referenced in the unit above

vim /etc/sysconfig/flanneld, with the following content:

# flanneld configuration options

# etcd url location. Point this to the server where etcd runs

FLANNEL_ETCD="http://192.168.1.10:2379"

# etcd config key. This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_KEY="/atomic.io/network"

(5) Create the wrapper script /usr/bin/flanneld-start, with the following content:

#!/bin/sh
exec /usr/bin/flanneld \
        -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}} \
        -etcd-prefix=${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}} \
        "$@"

Grant execute permission:

chmod +x /usr/bin/flanneld-start

(6) Define a flannel network on the etcd node:

etcdctl mk /atomic.io/network/config '{"Network":"172.18.0.0/24"}'
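
The key can be read back to confirm it stored correctly (a sketch):

etcdctl get /atomic.io/network/config
# {"Network":"172.18.0.0/24"}

As an aside, 172.18.0.0/24 here is the same range given to --service-cluster-ip-range on the master; the pod network and the service network should normally not overlap, so consider distinct ranges in a real deployment.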

(7) Stop docker and take down docker0

Because flannel will take over the docker0 network, it is best to shut down the docker0 interface and the docker service before starting flannel:

systemctl stop docker

After the docker service stops, kubelet stops with it and the master will show the node as unavailable; this is normal. Start kubelet and docker again once the flannel network has been configured.

(8) Start the flannel service:

systemctl daemon-reload

systemctl start flanneld

(9) Set the docker0 bridge's IP address:

mkdir -p /usr/lib/systemd/system/docker.service.d
cd /usr/lib/systemd/system/docker.service.d

mk-docker-opts.sh -i

source /run/flannel/subnet.env

vi /usr/lib/systemd/system/docker.service.d/flannel.conf, with the following content:

[Service]
EnvironmentFile=-/run/flannel/docker
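
For reference, flanneld writes the leased subnet to /run/flannel/subnet.env, and mk-docker-opts.sh derives the docker options from it; the generated files look roughly like this (illustrative values, the actual subnet will differ):

cat /run/flannel/subnet.env
# FLANNEL_NETWORK=172.18.0.0/24
# FLANNEL_SUBNET=172.18.0.1/26
# FLANNEL_MTU=1472
# FLANNEL_IPMASQ=false

cat /run/flannel/docker
# DOCKER_NETWORK_OPTIONS=" --bip=172.18.0.1/26 --mtu=1472"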

(10) Restart the docker and kubelet services:

systemctl restart docker

systemctl restart kubelet

(11) Confirm that docker0 and flannel0 are in the same network segment:

ifconfig

This completes the setup of the flannel overlay network. The docker0 bridges on the individual nodes can now reach one another.
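
A cross-node connectivity test makes this concrete (a sketch; it assumes a busybox image is already loaded locally, since this is an offline environment):

# on node1: start a container and print its IP
docker run -d --name nettest busybox sleep 3600
docker inspect -f '{{.NetworkSettings.IPAddress}}' nettest

# on another node: ping the address printed above
ping -c 3 <ip-from-previous-step>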

Etcd database operations

Deleting a key:

For example, if etcdctl mk /atomic.io/network/config '{"Network":"172.18.0.0/24"}' was entered incorrectly, the value can be deleted and assigned again:

etcdctl rm /atomic.io/network/config

Then assign the new value. Afterwards, delete the /run/flannel/subnet.env file on each node and restart flanneld to obtain a subnet from the new range.
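
Putting the recovery together as commands (a sketch; 172.20.0.0/16 stands in for whatever corrected range is intended):

# on the etcd node, after the rm above:
etcdctl mk /atomic.io/network/config '{"Network":"172.20.0.0/16"}'

# on each flannel node:
rm -f /run/flannel/subnet.env
systemctl restart flanneld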

