1. Introduction
RKE: Rancher Kubernetes Engine
An extremely simple, lightning-fast Kubernetes installer that works anywhere.
2. Preparation
I. Configure the system
OS: CentOS 7 / Ubuntu
After provisioning the system, install the required packages:
yum install lvm2 parted lrzsz -y
# Identify the disk to configure
fdisk -l
# e.g. /dev/sda
fdisk /dev/sda  # follow the prompts to create a partition
# Create the LVM volume
pvcreate /dev/sda1
vgcreate disk1 /dev/sda1
lvcreate -n data -l +100%FREE disk1
# Format the volume
mkfs.xfs /dev/disk1/data
# Add an fstab entry so the volume is mounted at boot
diskuuid=`blkid /dev/disk1/data | awk '{print $2}' | tr -d '"'`
echo "$diskuuid /data xfs defaults 0 0" >> /etc/fstab
# Create /data if it does not exist, then mount everything in fstab
[ -d /data ] || mkdir /data
mount -a
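The `blkid | awk | tr` pipeline above turns a blkid line into a `UUID=...` token suitable for fstab. A quick illustration with a simulated blkid line (the UUID value is made up):

```shell
# Simulated blkid output (sample UUID, not from a real disk)
line='/dev/disk1/data: UUID="0f1a2b3c-4d5e-6f70-8192-a3b4c5d6e7f8" TYPE="xfs"'
# Field 2 is the quoted UUID=... token; tr strips the quotes
diskuuid=$(echo "$line" | awk '{print $2}' | tr -d '"')
echo "$diskuuid /data xfs defaults 0 0"
# → UUID=0f1a2b3c-4d5e-6f70-8192-a3b4c5d6e7f8 /data xfs defaults 0 0
```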
II. Install Docker
You can install a Docker version matched to your Rancher release with the scripts Rancher provides:

DOCKER VERSION | INSTALL SCRIPT
---|---
18.09.2 | `curl https://releases.rancher.com/install-docker/18.09.2.sh \| sh`
18.06.2 | `curl https://releases.rancher.com/install-docker/18.06.2.sh \| sh`
17.03.2 | `curl https://releases.rancher.com/install-docker/17.03.2.sh \| sh`

Alternatively, install it with the following commands:
# Configure the yum repository
sudo yum remove docker docker-common docker-selinux docker-engine
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum install -y -q docker-ce-18.09.2  # specify whichever version you need here
Configure Docker's daemon.json (note: `echo '''...'''` is not valid shell quoting; a heredoc does the job):
systemctl enable docker
systemctl start docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/data/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "registry-mirrors": [
    "https://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://dockerhub.azk8s.cn"
  ]
}
EOF
[ -d /data/docker ] || mkdir /data/docker
systemctl restart docker
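A malformed daemon.json stops the Docker daemon from starting at all, so it is worth validating the JSON before the restart. A minimal check, assuming python3 is on the host (the sample file path is for illustration only):

```shell
# Write a sample daemon.json fragment and verify it parses as JSON
cat > /tmp/daemon-check.json <<'EOF'
{
  "data-root": "/data/docker",
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" }
}
EOF
python3 -m json.tool /tmp/daemon-check.json > /dev/null && echo "valid JSON"
```

Run the same check against /etc/docker/daemon.json on a real host before `systemctl restart docker`.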
With that, the base environment is ready.
3. Installation and Configuration
Assume the following initial environment:
IP | OS
---|---
192.168.0.1 | CentOS 7 |
192.168.0.2 | CentOS 7 |
192.168.0.3 | CentOS 7 |
First, create the user that will build the cluster and set up passwordless SSH login (ssh-keygen creates ~/.ssh itself, so there is no need to cd into it first):
useradd admin
usermod -aG docker admin
su - admin
ssh-keygen -t rsa # press Enter through the prompts
echo <PublicKeys> >> /home/admin/.ssh/authorized_keys
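sshd silently refuses key-based login when `~/.ssh` or `authorized_keys` are too permissive, so it is worth fixing the modes after appending the key. A sketch on a throwaway directory (on a real host the paths would be under /home/admin/.ssh):

```shell
# Demonstrate the permissions sshd expects, using a scratch directory
demo=/tmp/ssh-perms-demo
mkdir -p "$demo"
touch "$demo/authorized_keys"
chmod 700 "$demo"                  # ~/.ssh must be 700
chmod 600 "$demo/authorized_keys"  # authorized_keys must be 600
stat -c '%a %n' "$demo" "$demo/authorized_keys"
```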
With passwordless login configured, download the rke binary and set up the cluster configuration file (the release asset is not executable by default, and the symlink should point at an absolute path):
# Download rke
# GitHub: https://github.com/rancher/rke
wget https://github.com/rancher/rke/releases/download/v1.0.4/rke_linux-amd64
chmod +x rke_linux-amd64
ln -s $(pwd)/rke_linux-amd64 /usr/local/bin/rke
# Generate the rke configuration file
[ -d /data/k8s ] || mkdir /data/k8s ; cd /data/k8s
rke config --name cluster.yml # answer the prompts to build the config
# Once the configuration is complete:
rke up # wait for the installation to finish
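When `rke up` finishes it writes `kube_config_cluster.yml` next to `cluster.yml`; pointing `KUBECONFIG` at it is how `kubectl` (installed separately) reaches the new cluster. A sketch, assuming the /data/k8s working directory used above:

```shell
# Use the kubeconfig written by rke up (kubectl itself is a separate install)
export KUBECONFIG=/data/k8s/kube_config_cluster.yml
echo "kubectl will read: $KUBECONFIG"
# kubectl get nodes   # uncomment once kubectl is installed
```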
Sample cluster.yml files:
Official example 1:
nodes:
- address: 1.1.1.1
  user: ubuntu
  role:
  - controlplane
  - etcd
  ssh_key_path: /home/user/.ssh/id_rsa
  port: 2222
- address: 2.2.2.2
  user: ubuntu
  role:
  - worker
  ssh_key: |-
    -----BEGIN RSA PRIVATE KEY-----
    -----END RSA PRIVATE KEY-----
- address: example.com
  user: ubuntu
  role:
  - worker
  hostname_override: node3
  internal_address: 192.168.1.6
  labels:
    app: ingress
# If set to true, RKE will not fail when an unsupported Docker version is found
ignore_docker_version: false
# Cluster level SSH private key
# Used if no ssh information is set for the node
ssh_key_path: ~/.ssh/test
# Enable use of SSH agent to use SSH private keys with passphrase
# This requires the environment variable `SSH_AUTH_SOCK` to point to your
# SSH agent, which has the private key added
ssh_agent_auth: true
# List of registry credentials
# If you are using a Docker Hub registry, you can omit the `url`
# or set it to `docker.io`
# is_default set to `true` will override the system default
# registry set in the global settings
private_registries:
- url: registry.com
  user: Username
  password: password
  is_default: true
# Bastion/Jump host configuration
bastion_host:
  address: x.x.x.x
  user: ubuntu
  port: 22
  ssh_key_path: /home/user/.ssh/bastion_rsa
  # or
  # ssh_key: |-
  #   -----BEGIN RSA PRIVATE KEY-----
  #
  #   -----END RSA PRIVATE KEY-----
# Set the name of the Kubernetes cluster
cluster_name: mycluster
# The Kubernetes version used. The default versions of Kubernetes
# are tied to specific versions of the system images.
#
# For RKE v0.2.x and below, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go
#
# For RKE v0.3.0 and above, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go
#
# In case both kubernetes_version and the kubernetes image in
# system_images are defined, the system_images configuration
# will take precedence over kubernetes_version.
kubernetes_version: v1.10.3-rancher2
# System Images are defaulted to a tag that is mapped to a specific
# Kubernetes version and not required in a cluster.yml.
# Each individual system image can be specified if you want to use a different tag.
#
# For RKE v0.2.x and below, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go
#
# For RKE v0.3.0 and above, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go
#
system_images:
  kubernetes: rancher/hyperkube:v1.10.3-rancher2
  etcd: rancher/coreos-etcd:v3.1.12
  alpine: rancher/rke-tools:v0.1.9
  nginx_proxy: rancher/rke-tools:v0.1.9
  cert_downloader: rancher/rke-tools:v0.1.9
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.9
  kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.8
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.8
  kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.8
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
  pod_infra_container: rancher/pause-amd64:3.1
services:
  etcd:
    # if external etcd is used
    # path: /etcdcluster
    # external_urls:
    # - https://etcd-example.com:2379
    # ca_cert: |-
    #   -----BEGIN CERTIFICATE-----
    #   xxxxxxxxxx
    #   -----END CERTIFICATE-----
    # cert: |-
    #   -----BEGIN CERTIFICATE-----
    #   xxxxxxxxxx
    #   -----END CERTIFICATE-----
    # key: |-
    #   -----BEGIN PRIVATE KEY-----
    #   xxxxxxxxxx
    #   -----END PRIVATE KEY-----
  # Note for Rancher v2.0.5 and v2.0.6 users: If you are configuring
  # Cluster Options using a Config File when creating Rancher Launched
  # Kubernetes, the names of services should contain underscores
  # only: `kube_api`.
  kube-api:
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-controller
    service_cluster_ip_range: 10.43.0.0/16
    # Expose a different port range for NodePort services
    service_node_port_range: 30000-32767
    pod_security_policy: false
    # Add additional arguments to the Kubernetes API server
    # These WILL OVERRIDE any existing defaults
    extra_args:
      # Enable audit log to stdout
      audit-log-path: "-"
      # Increase number of delete workers
      delete-collection-workers: 3
      # Set the log output level to debug
      v: 4
  # Note for Rancher 2 users: If you are configuring Cluster Options
  # using a Config File when creating Rancher Launched Kubernetes,
  # the names of services should contain underscores only:
  # `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6.
  kube-controller:
    # CIDR pool used to assign IP addresses to pods in the cluster
    cluster_cidr: 10.42.0.0/16
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-api
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    # Base domain for the cluster
    cluster_domain: cluster.local
    # IP address for the DNS service endpoint
    cluster_dns_server: 10.43.0.10
    # Fail if swap is on
    fail_swap_on: false
    # Set max pods to 250 instead of the default 110
    extra_args:
      max-pods: 250
    # Optionally define additional volume binds to a service
    extra_binds:
    - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins"
# Currently, the only supported authentication strategy is x509.
# You can optionally create additional SANs (hostnames or IPs) to
# add to the API server PKI certificate.
# This is useful if you want to use a load balancer for the
# control plane servers.
authentication:
  strategy: x509
  sans:
  - "10.18.160.10"
  - "my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com"
# Kubernetes Authorization mode
# Use `mode: rbac` to enable RBAC
# Use `mode: none` to disable authorization
authorization:
  mode: rbac
# If you want to set a Kubernetes cloud provider, specify
# the name and configuration
cloud_provider:
  name: aws
# Add-ons are deployed using Kubernetes jobs. RKE will give
# up on trying to get the job status after this timeout, in seconds.
addon_job_timeout: 30
# Specify the network plug-in (canal, calico, flannel, weave, or none)
network:
  plugin: canal
# Specify the DNS provider (coredns or kube-dns)
dns:
  provider: coredns
# Currently only the nginx ingress provider is supported.
# To disable the ingress controller, set `provider: none`
# `node_selector` controls ingress placement and is optional
ingress:
  provider: nginx
  node_selector:
    app: ingress
# All add-on manifests MUST specify a namespace
addons: |-
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: my-nginx
    namespace: default
  spec:
    containers:
    - name: my-nginx
      image: nginx
      ports:
      - containerPort: 80
addons_include:
- https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yaml
- https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-cluster.yaml
- /path/to/manifest
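A working cluster needs at least one node carrying each of the etcd, controlplane, and worker roles. A crude grep-based pre-flight check along those lines, run here against a sample file (a sketch, not a real validator; point it at your actual cluster.yml):

```shell
# Sample config for illustration only
cat > /tmp/cluster-sample.yml <<'EOF'
nodes:
- address: 192.168.0.1
  role:
  - controlplane
  - etcd
  - worker
EOF
# Check that each required role appears at least once
for r in controlplane etcd worker; do
  grep -q -- "- $r" /tmp/cluster-sample.yml && echo "$r: present"
done
```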
Official example 2 (included mainly to highlight the extended arguments!):
nodes:
- address: 1.1.1.1
  internal_address:
  user: ubuntu
  role:
  - controlplane
  - etcd
  ssh_key_path: /home/user/.ssh/id_rsa
  port: 2222
- address: 2.2.2.2
  internal_address:
  user: ubuntu
  role:
  - worker
  ssh_key: |-
    -----BEGIN RSA PRIVATE KEY-----
    -----END RSA PRIVATE KEY-----
- address: example.com
  user: ubuntu
  role:
  - worker
  hostname_override: node3
  internal_address: 192.168.1.6
  labels:
    # A YAML mapping cannot repeat a key, so one node carries a single `app`
    # label; give a second node `app: dns` to match the DNS node_selector below
    app: ingress
# If set to true, unsupported Docker versions are allowed
ignore_docker_version: false
# Cluster-level SSH private key
## If a node has no SSH key configured, RKE connects to it with this key
ssh_key_path: ~/.ssh/test
# Provide SSH private keys through an SSH agent
## Requires the `SSH_AUTH_SOCK` environment variable to point at an SSH agent
## that has the private key added
ssh_agent_auth: false
# Configure the Docker root directory
docker_root_dir: "/var/lib/docker"
# Private registries
## With `is_default: true`, images are pulled from the configured private
## registry while the cluster is built
## If you use the Docker Hub registry, you can omit `url` or set it to `docker.io`
## For an internal public registry, the user name and password can be omitted
private_registries:
- url: registry.com
  user: Username
  password: password
  is_default: true
# Bastion host
## If the cluster nodes are reached through a bastion/jump host, configure it for RKE
bastion_host:
  address: x.x.x.x
  user: ubuntu
  port: 22
  ssh_key_path: /home/user/.ssh/bastion_rsa
  # or
  # ssh_key: |-
  #   -----BEGIN RSA PRIVATE KEY-----
  #
  #   -----END RSA PRIVATE KEY-----
# Set the name of the Kubernetes cluster
cluster_name: mycluster
# Define the Kubernetes version.
## Currently the version must match the rancher/types defaults map: https://github.com/rancher/types/blob/master/apis/management.cattle.io/v3/k8s_defaults.go#L14 (for later releases see: https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go)
## If both kubernetes_version and the kubernetes image in system_images are defined, the system_images configuration takes precedence over kubernetes_version
kubernetes_version: v1.14.3-rancher1
# `system_images` has higher priority: for any image not specified individually in `system_images`, the default image matching `kubernetes_version` is used.
## Default tags: https://github.com/rancher/types/blob/master/apis/management.cattle.io/v3/k8s_defaults.go (for Rancher v2.3+ or RKE v0.3+ see: https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go)
system_images:
  etcd: rancher/coreos-etcd:v3.3.10-rancher1
  alpine: rancher/rke-tools:v0.1.34
  nginx_proxy: rancher/rke-tools:v0.1.34
  cert_downloader: rancher/rke-tools:v0.1.34
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.34
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.3.0
  coredns: rancher/coredns-coredns:1.3.1
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.3.0
  kubernetes: rancher/hyperkube:v1.14.3-rancher1
  flannel: rancher/coreos-flannel:v0.10.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher1
  calico_node: rancher/calico-node:v3.4.0
  calico_cni: rancher/calico-cni:v3.4.0
  calico_controllers: ""
  calico_ctl: rancher/calico-ctl:v2.0.0
  canal_node: rancher/calico-node:v3.4.0
  canal_cni: rancher/calico-cni:v3.4.0
  canal_flannel: rancher/coreos-flannel:v0.10.0
  weave_node: weaveworks/weave-kube:2.5.0
  weave_cni: weaveworks/weave-npc:2.5.0
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:0.21.0-rancher3
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.1
services:
  etcd:
    # if external etcd is used
    # path: /etcdcluster
    # external_urls:
    # - https://etcd-example.com:2379
    # ca_cert: |-
    #   -----BEGIN CERTIFICATE-----
    #   xxxxxxxxxx
    #   -----END CERTIFICATE-----
    # cert: |-
    #   -----BEGIN CERTIFICATE-----
    #   xxxxxxxxxx
    #   -----END CERTIFICATE-----
    # key: |-
    #   -----BEGIN PRIVATE KEY-----
    #   xxxxxxxxxx
    #   -----END PRIVATE KEY-----
    # Note for Rancher 2 users: if you configure cluster options with a config
    # file when creating Rancher Launched Kubernetes, service names should
    # contain underscores only: `kube_api`. This applies only to Rancher
    # v2.0.5 and v2.0.6.
    # The following parameters are supported only for etcd clusters deployed by RKE
    # Enable automatic backups
    ## For RKE below v0.2.x or Rancher below v2.2.0, use:
    snapshot: true
    creation: 5m0s
    retention: 24h
    ## For RKE v0.2.x+ or Rancher v2.2.0+, use this instead (pick one of the two forms):
    backup_config:
      enabled: true        # true enables automatic etcd backups, false disables them
      interval_hours: 12   # interval between snapshots; without this parameter the default is 5 minutes
      retention: 6         # number of etcd backups to keep
      # S3 options
      s3backupconfig:
        access_key: "myaccesskey"
        secret_key: "myaccesssecret"
        bucket_name: "my-backup-bucket"
        folder: "folder-name"  # available since v2.3.0
        endpoint: "s3.eu-west-1.amazonaws.com"
        region: "eu-west-1"
    # Extra arguments
    extra_args:
      auto-compaction-retention: 240  # (hours)
      # Raise the space quota to $((6*1024*1024*1024)); default 2 GB, maximum 8 GB
      quota-backend-bytes: '6442450944'
  kube-api:
    # cluster_ip range
    ## Must match service_cluster_ip_range in kube-controller
    service_cluster_ip_range: 10.43.0.0/16
    # Port range for NodePort mappings
    service_node_port_range: 30000-32767
    # Pod security policy
    pod_security_policy: false
    # Extra arguments for the Kubernetes API server
    ## These replace the default values
    extra_args:
      watch-cache: true
      default-watch-cache-size: 1500
      # Event retention time; default 1 hour
      event-ttl: 1h0m0s
      # Default 400; 0 means unlimited. As a rule of thumb, about 15 in parallel per 25~30 pods
      max-requests-inflight: 800
      # Default 200; 0 means unlimited
      max-mutating-requests-inflight: 400
      # kubelet operation timeout; default 5s
      kubelet-timeout: 5s
      # Enable the audit log to stdout
      audit-log-path: "-"
      # Increase the number of delete workers
      delete-collection-workers: 3
      # Set the log output level to debug
      v: 4
  # Note for Rancher 2 users: if you configure cluster options with a config
  # file when creating Rancher Launched Kubernetes, service names should
  # contain underscores only: `kube_controller`. This applies only to Rancher
  # v2.0.5 and v2.0.6.
  kube-controller:
    # Pod IP range
    cluster_cidr: 10.42.0.0/16
    # cluster_ip range
    ## Must be the same as service_cluster_ip_range in kube-api
    service_cluster_ip_range: 10.43.0.0/16
    extra_args:
      # Subnet size per node (CIDR mask length); the default of 24 gives 254
      # usable IPs per node, 23 gives 510, and 22 gives 1022
      node-cidr-mask-size: '24'
      feature-gates: "TaintBasedEvictions=false"
      # How often the controller contacts nodes to check that they are reachable; default 5s
      node-monitor-period: '5s'
      ## After node communication fails, Kubernetes waits this long before
      ## marking the node NotReady. It must be an integer multiple of the
      ## kubelet's nodeStatusUpdateFrequency (default 10s), N being the number
      ## of retries the kubelet is allowed for syncing node status; default 40s.
      node-monitor-grace-period: '20s'
      ## If communication keeps failing beyond this, Kubernetes marks the node unhealthy; default 1m0s.
      node-startup-grace-period: '30s'
      ## If the node stays unreachable still longer, Kubernetes starts evicting its pods; default 5m0s.
      pod-eviction-timeout: '1m'
      # Default 5. Number of deployments synced concurrently.
      concurrent-deployment-syncs: 5
      # Default 5. Number of endpoints synced concurrently.
      concurrent-endpoint-syncs: 5
      # Default 20. Number of garbage collector workers running concurrently.
      concurrent-gc-syncs: 20
      # Default 10. Number of namespaces synced concurrently.
      concurrent-namespace-syncs: 10
      # Default 5. Number of replica sets synced concurrently.
      concurrent-replicaset-syncs: 5
      # Default 5m0s. Resource quota sync period. (Deprecated in newer versions)
      # concurrent-resource-quota-syncs: 5m0s
      # Default 1. Number of services synced concurrently.
      concurrent-service-syncs: 1
      # Default 5. Number of service account tokens synced concurrently.
      concurrent-serviceaccount-token-syncs: 5
      # Default 5. Number of replication controllers synced concurrently.
      concurrent-rc-syncs: 5
      # Default 30s. Deployment sync period.
      deployment-controller-sync-period: 30s
      # Default 15s. PV and PVC sync period.
      pvclaimbinder-sync-period: 15s
  kubelet:
    # Cluster search domain
    cluster_domain: cluster.local
    # Internal DNS server address
    cluster_dns_server: 10.43.0.10
    # Do not fail when swap is enabled
    fail_swap_on: false
    # Extra arguments
    extra_args:
      # Enable static pods: create a manifest directory under /etc/kubernetes/
      # on the host and place pod YAML files in /etc/kubernetes/manifest/
      pod-manifest-path: "/etc/kubernetes/manifest/"
      root-dir: "/var/lib/kubelet"
      docker-root: "/var/lib/docker"
      feature-gates: "TaintBasedEvictions=false"
      # Specify the pause image
      pod-infra-container-image: 'rancher/pause:3.1'
      # MTU passed to the network plug-in, overriding the default; set to 0 (zero) to use the default of 1460
      network-plugin-mtu: '1500'
      # Maximum number of pods per node
      max-pods: "250"
      # Secret and ConfigMap sync interval; default 1 minute
      sync-frequency: '3s'
      # Number of files the kubelet process may open (default 1000000); adjust to the node's capacity
      max-open-files: '2000000'
      # Burst when talking to the apiserver; default 10
      kube-api-burst: '30'
      # QPS when talking to the apiserver; default 5 (QPS = concurrency / average response time)
      kube-api-qps: '15'
      # By default the kubelet pulls one image at a time; set to false to pull
      # several images in parallel. This requires the overlay2 storage driver,
      # and Docker's download concurrency must be raised accordingly; see
      # [docker configuration](/rancher2x/install-prepare/best-practices/docker/)
      serialize-image-pulls: 'false'
      # Maximum concurrency of image pulls; registry-burst must not exceed
      # registry-qps and only takes effect when registry-qps is greater than
      # 0 (zero) (default 10). A registry-qps of 0 means no limit (default 5).
      registry-burst: '10'
      registry-qps: '0'
      cgroups-per-qos: 'true'
      cgroup-driver: 'cgroupfs'
      # Node resource reservation
      enforce-node-allocatable: 'pods'
      system-reserved: 'cpu=0.25,memory=200Mi'
      kube-reserved: 'cpu=0.25,memory=1500Mi'
      # Pod eviction; these parameters only support memory and disk.
      ## Hard eviction thresholds
      ### When available resources on the node drop below the reserved values,
      ### hard eviction is triggered: pods are killed immediately rather than
      ### being given time to exit on their own.
      eviction-hard: 'memory.available<300Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5%'
      ## Soft eviction thresholds
      ### The following four parameters work together. When available resources
      ### fall below this threshold but stay above the hard one, the kubelet
      ### waits for eviction-soft-grace-period, checking every 10s; if the last
      ### check still trips the soft threshold, eviction begins. Rather than
      ### killing the pod outright, it first sends a stop signal and waits
      ### eviction-max-pod-grace-period; after that, any pod that has not
      ### exited is force-killed.
      eviction-soft: 'memory.available<500Mi,nodefs.available<50%,imagefs.available<50%,nodefs.inodesFree<10%'
      eviction-soft-grace-period: 'memory.available=1m30s'
      eviction-max-pod-grace-period: '30'
      eviction-pressure-transition-period: '30s'
      # How often the kubelet posts node status to the master. Note: it must
      # work in concert with nodeMonitorGracePeriod in kube-controller. (Default 10s)
      node-status-update-frequency: 10s
      # Interval of cAdvisor's global housekeeping, which mainly discovers new
      # containers through kernel events. Default 1m0s
      global-housekeeping-interval: 1m0s
      # Data collection frequency for each discovered container. Default 10s
      housekeeping-interval: 10s
      # Timeout for all runtime requests except the long-running pull, logs,
      # exec and attach. On timeout the kubelet cancels the request, raises an
      # error, and retries. (Default 2m0s)
      runtime-request-timeout: 2m0s
      # Interval at which the kubelet computes and caches disk usage for all
      # pods and volumes. Default 1m0s
      volume-stats-agg-period: 1m0s
    # Optionally define additional volume binds for the service
    extra_binds:
    - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins"
    - "/etc/iscsi:/etc/iscsi"
    - "/sbin/iscsiadm:/sbin/iscsiadm"
  kubeproxy:
    extra_args:
      # iptables is used for forwarding by default; set this to `ipvs` to enable IPVS
      proxy-mode: ""
      # Burst when talking to the Kubernetes apiserver; default 10
      kube-api-burst: 20
      # QPS when talking to the Kubernetes apiserver; default 5 (QPS = concurrency / average response time)
      kube-api-qps: 10
    extra_binds: []
  scheduler:
    extra_args: {}
    extra_binds: []
    extra_env: []
# Currently only x509 authentication is supported
## You can optionally create additional SANs (hostnames or IPs) to add to the API server PKI certificate.
## This is useful if you want to put a load balancer in front of the control plane servers.
authentication:
  strategy: "x509|webhook"
  webhook:
    config_file: "...."
    cache_timeout: 5s
  sans:
  # Configure alternate domain names or IPs here; when the primary domain or
  # IP is unreachable, the cluster can still be accessed through these
  - "192.168.1.100"
  - "www.test.com"
# Kubernetes authorization mode
## Use `mode: rbac` to enable RBAC
## Use `mode: none` to disable authorization
authorization:
  mode: rbac
# To use a Kubernetes cloud provider, specify its name and configuration; leave this empty on non-cloud hosts
cloud_provider:
# Add-ons are deployed via Kubernetes jobs. RKE gives up retrieving the job status after this timeout, in seconds.
addon_job_timeout: 30
# Several network plug-ins are available: flannel, canal, calico; Rancher 2 defaults to canal
network:
  # Available in rke v1.0.4+; with the canal network plug-in, set the MTU to 1450
  mtu: 1450
  plugin: canal
  options:
    flannel_backend_type: "vxlan"
# Currently only the nginx ingress controller is supported
## Set `provider: none` to disable the ingress controller
ingress:
  provider: nginx
  node_selector:
    app: ingress
# Configure upstream DNS servers for the cluster DNS
## Available since RKE v0.2.0
dns:
  provider: coredns
  upstreamnameservers:
  - 114.114.114.114
  - 1.2.4.8
  node_selector:
    app: dns
# Install add-on applications
## Every add-on manifest must specify a namespace
addons: |-
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: my-nginx
    namespace: default
  spec:
    containers:
    - name: my-nginx
      image: nginx
      ports:
      - containerPort: 80
addons_include:
- https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yml
- https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-cluster.yml
- /path/to/manifest
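Two of the tunables in the example above are easy to sanity-check with shell arithmetic: `quota-backend-bytes` is 6 GiB expressed in bytes, and `node-cidr-mask-size` determines usable pod IPs per node (2^(32-mask) minus the network and broadcast addresses):

```shell
# 6 GiB in bytes, matching quota-backend-bytes above
echo $((6 * 1024 * 1024 * 1024))   # → 6442450944
# Usable IPs per node for mask lengths 24, 23 and 22
for mask in 24 23 22; do
  echo "/$mask -> $(( (1 << (32 - mask)) - 2 )) usable IPs"
done
# → /24 -> 254 usable IPs, /23 -> 510, /22 -> 1022
```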
A personal example:
nodes:
- address: 192.168.0.1
  port: "22"
  internal_address: 192.168.0.1
  role:
  - controlplane
  - etcd
  - worker
  hostname_override: 192.168.0.1
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.2
  port: "22"
  internal_address: 192.168.0.2
  role:
  - controlplane
  - etcd
  - worker
  hostname_override: 192.168.0.2
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.0.3
  port: "22"
  internal_address: 192.168.0.3
  role:
  - controlplane
  - etcd
  - worker
  hostname_override: 192.168.0.3
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 172.26.96.0/20
    service_node_port_range: "30000-40000"
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args:
      # Subnet size per node (CIDR mask length); the default of 24 gives 254
      # usable IPs per node, 23 gives 510, and 22 gives 1022
      node-cidr-mask-size: '25'
    extra_binds: []
    extra_env: []
    cluster_cidr: 172.26.112.0/20
    service_cluster_ip_range: 172.26.96.0/20
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args:
      # Maximum number of pods per node
      max-pods: "120"
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 172.26.96.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: flannel
  options: {}
  mtu: 0
  node_selector: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
# All add-on manifests MUST specify a namespace
addons: |-
  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      app: flannel
      tier: node
    name: kube-flannel-cfg
    namespace: kube-system
  data:
    cni-conf.json: |
      {
        "name": "cbr0",
        "cniVersion": "0.3.1",
        "plugins": [
          {
            "type": "flannel",
            "delegate": {
              "hairpinMode": true,
              "isDefaultGateway": true
            }
          },
          {
            "type": "portmap",
            "capabilities": {
              "portMappings": true
            }
          }
        ]
      }
    net-conf.json: |
      {
        "Network": "172.26.112.0/20",
        "Backend": {
          "Type": "vxlan",
          "VNI": 1,
          "Port": 8472
        }
      }
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.3-rancher1
  alpine: rancher/rke-tools:v0.1.52
  nginx_proxy: rancher/rke-tools:v0.1.52
  cert_downloader: rancher/rke-tools:v0.1.52
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.52
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.5
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  kubernetes: rancher/hyperkube:v1.17.2-rancher1
  flannel: rancher/coreos-flannel:v0.11.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher5
  calico_node: rancher/calico-node:v3.10.2
  calico_cni: rancher/calico-cni:v3.10.2
  calico_controllers: rancher/calico-kube-controllers:v3.10.2
  calico_ctl: rancher/calico-ctl:v2.0.0
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.10.2
  canal_node: rancher/calico-node:v3.10.2
  canal_cni: rancher/calico-cni:v3.10.2
  canal_flannel: rancher/coreos-flannel:v0.11.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.10.2
  weave_node: weaveworks/weave-kube:2.5.2
  weave_cni: weaveworks/weave-npc:2.5.2
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.25.1-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.3
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: true
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
restore:
  restore: false
  snapshot_name: ""
dns: null
At this point the cluster installation is complete.
Important notes
The following files are needed to maintain, troubleshoot, and upgrade the cluster.
Keep copies of them in a safe location:
- cluster.yml: the RKE cluster configuration file.
- kube_config_cluster.yml: the kubeconfig file for the cluster; it contains credentials with full access to the cluster.
- cluster.rkestate: the Kubernetes cluster state file; it also contains credentials with full access to the cluster.
The cluster state file is only created when RKE v0.2.0 or later is used.
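One low-tech way to act on this advice is to bundle the three files into a dated archive before every upgrade. A sketch using placeholder files (the real ones live in the directory where you ran rke, e.g. /data/k8s):

```shell
# Create placeholder state files in a scratch directory for illustration
work=/tmp/rke-backup-demo
mkdir -p "$work" && cd "$work"
touch cluster.yml kube_config_cluster.yml cluster.rkestate
# Bundle them into a dated tarball, then list its contents
tar czf "rke-state-$(date +%F).tar.gz" cluster.yml kube_config_cluster.yml cluster.rkestate
tar tzf rke-state-*.tar.gz
```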
Official RKE documentation