
Kubernetes K8S: Taints and Tolerations Explained with Examples

 


 

Host Configuration Plan

Hostname     OS version   Specs       Internal IP    External IP (simulated)
k8s-master   CentOS 7.7   2C/4G/20G   172.16.1.110   10.0.0.110
k8s-node01   CentOS 7.7   2C/4G/20G   172.16.1.111   10.0.0.111
k8s-node02   CentOS 7.7   2C/4G/20G   172.16.1.112   10.0.0.112

 

Overview of Taints and Tolerations

Node and pod affinity attracts pods to a set of nodes (by topology domain), either as a preference or a hard requirement. Taints do the opposite: they let a node repel a set of pods.

Tolerations are applied to pods. A toleration allows (but does not require) a pod to be scheduled onto a node with matching taints.

Taints and tolerations work together to keep pods off inappropriate nodes. One or more taints are applied to a node, marking it as unwilling to accept any pod that does not tolerate those taints.

Note: this is why, in everyday use, pods are never scheduled onto the Kubernetes master node — the master carries a taint by default.

 

Taints

Anatomy of a taint

The kubectl taint command sets a taint on a node. Once tainted, a node has a repelling relationship with pods: it can refuse to schedule pods onto itself, and with NoExecute it can even evict pods already running on it.

Each taint has the form:

key=value:effect

A taint's key and value act as its label, and the effect describes what the taint does. The currently supported effect values are:

  • NoSchedule: Kubernetes will not schedule pods onto a node carrying this taint
  • PreferNoSchedule: Kubernetes will try to avoid scheduling pods onto a node carrying this taint
  • NoExecute: Kubernetes will not schedule pods onto the node, and will also evict pods already running on it
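For instance, each effect would be set like this (a sketch only — the node name k8s-node01 and the key/value pair check=zhang are placeholders, not part of any real cluster):

```shell
# Hypothetical examples; node name and key=value are placeholders.
# Hard: refuse to schedule new pods here.
kubectl taint nodes k8s-node01 check=zhang:NoSchedule
# Soft: avoid scheduling new pods here if possible.
kubectl taint nodes k8s-node01 check=zhang:PreferNoSchedule
# Hard + eviction: refuse new pods AND evict running pods without a matching toleration.
kubectl taint nodes k8s-node01 check=zhang:NoExecute
```

A trailing minus sign on the same key:effect (for example check=zhang:NoSchedule-) removes the taint again, as shown in the removal section below.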

 

The NoExecute effect in detail

A taint with effect NoExecute affects pods that are already running on the node:

  • If a pod does not tolerate the NoExecute taint, it is evicted immediately.
  • If a pod tolerates the NoExecute taint and its toleration specifies no tolerationSeconds, it keeps running on the node indefinitely.
  • If a pod tolerates the NoExecute taint but the toleration specifies tolerationSeconds, that value is how long the pod may keep running on the node before it is evicted.

 

Setting Taints

Viewing taints

View on the k8s master node:

kubectl describe node k8s-master

 

View on a k8s worker node:

kubectl describe node k8s-node01

 

Adding a taint

[root@k8s-master taint]# kubectl taint nodes k8s-node01 check=zhang:NoSchedule
node/k8s-node01 tainted
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl describe node k8s-node01
Name:               k8s-node01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    cpu-num=12
                    disk-type=ssd
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node01
                    kubernetes.io/os=linux
                    mem-num=48
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:15:bb:f8:85:dc"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.0.111
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 12 May 2020 16:50:54 +0800
Taints:             check=zhang:NoSchedule   ### the taint is now present
Unschedulable:      false

A taint was added to k8s-node01 with key check, value zhang, and effect NoSchedule. This means no pod can be scheduled onto k8s-node01 unless it carries a matching toleration.

 

Removing a taint

[root@k8s-master taint]# kubectl taint nodes k8s-node01 check:NoSchedule-
##### or
[root@k8s-master taint]# kubectl taint nodes k8s-node01 check=zhang:NoSchedule-
node/k8s-node01 untainted
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl describe node k8s-node01
Name:               k8s-node01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    cpu-num=12
                    disk-type=ssd
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node01
                    kubernetes.io/os=linux
                    mem-num=48
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3e:15:bb:f8:85:dc"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.0.111
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 12 May 2020 16:50:54 +0800
Taints:             <none>   ### the taint has been removed
Unschedulable:      false

 

Tolerations

A tainted node repels pods according to the taint's effect (NoSchedule, PreferNoSchedule or NoExecute), so pods will, to varying degrees, not be scheduled onto it.

We can, however, set tolerations on a pod: a pod with a matching toleration tolerates the taint and can be scheduled onto the tainted node.

pod.spec.tolerations examples

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoExecute"
  tolerationSeconds: 3600

Important notes:

  • key, value and effect must match the taint set on the node
  • when operator is Exists, value is ignored; only key and effect need to match
  • tolerationSeconds applies when a pod tolerates a NoExecute taint: it is how long the pod may keep running on the node before it is evicted

 

When no key is specified

With operator Exists and neither key nor effect specified, the toleration matches every taint (all keys, all values, all effects):

tolerations:
- operator: "Exists"

 

When no effect is specified

When effect is omitted, the toleration matches all effects of taints with the given key:

tolerations:
- key: "key"
  operator: "Exists"

 

When there are multiple masters

With multiple master nodes, you can soften the default master taint so their resources are not wasted:

kubectl taint nodes Node-name node-role.kubernetes.io/master=:PreferNoSchedule
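Note that on a kubeadm cluster the master also carries the hard default taint node-role.kubernetes.io/master:NoSchedule, which keeps blocking scheduling until it is removed. A plausible full sequence (the node name is a placeholder) might be:

```shell
# Placeholder node name; run once per spare master.
# Remove the hard default taint (the trailing "-" deletes it):
kubectl taint nodes k8s-master02 node-role.kubernetes.io/master:NoSchedule-
# Re-add it in soft form, so the master is used only when nothing better is available:
kubectl taint nodes k8s-master02 node-role.kubernetes.io/master=:PreferNoSchedule
```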

 

How multiple taints and multiple tolerations combine

A node can carry several taints and a pod several tolerations. Kubernetes processes them like a filter: start with all of the node's taints, ignore every taint matched by one of the pod's tolerations, and let the remaining unignored taints act on the pod. Specifically:

  • If at least one unignored taint has effect NoSchedule, Kubernetes will not schedule the pod onto that node.
  • If no unignored taint has effect NoSchedule, but at least one has effect PreferNoSchedule, Kubernetes will try not to schedule the pod onto that node.
  • If at least one unignored taint has effect NoExecute, the pod is evicted from the node (if it is already running there) and is not scheduled onto it (if it is not).
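The filtering rules above can be sketched in a few lines of Python. This is a simplified illustration only, not the actual kube-scheduler code; plain dicts stand in for the real taint and toleration objects:

```python
# Simplified sketch of the taint-filtering rules described above (illustrative only).

def tolerates(toleration, taint):
    """Return True if a single toleration matches a single taint."""
    operator = toleration.get("operator", "Equal")  # Kubernetes defaults to Equal
    # An empty toleration key (with Exists) matches every taint key.
    if toleration.get("key") and toleration["key"] != taint["key"]:
        return False
    # An empty toleration effect matches every taint effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    # Exists ignores value; Equal requires the values to match.
    if operator == "Equal" and toleration.get("value") != taint.get("value"):
        return False
    return True

def check_node(taints, tolerations):
    """Apply the filter: drop tolerated taints, then act on the strictest remainder."""
    remaining = [t for t in taints
                 if not any(tolerates(tol, t) for tol in tolerations)]
    effects = {t["effect"] for t in remaining}
    if "NoExecute" in effects:
        return "evict"          # evict running pods; never schedule new ones
    if "NoSchedule" in effects:
        return "no-schedule"    # hard: do not schedule the pod here
    if "PreferNoSchedule" in effects:
        return "prefer-avoid"   # soft: schedule here only if nothing better exists
    return "schedulable"
```

For example, a pod tolerating check-nginx=web:NoSchedule is schedulable on a node whose only taint is check-nginx=web:NoSchedule, while a pod with no tolerations is not; a bare {"operator": "Exists"} toleration makes any node schedulable.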

 

Taint and Toleration Examples

Example: a node taint with NoExecute

Remember to clear any existing taints first so they don't skew the test.

Taints on the nodes

Set up the following taints:

k8s-master taint: node-role.kubernetes.io/master:NoSchedule   (Kubernetes default taint, already present; nothing to add)
k8s-node01 taint: (none)
k8s-node02 taint: (none)

 

Taint add operations:
(none — no taints are added in this test)

Taint view operations:

kubectl describe node k8s-master | grep 'Taints' -A 5
kubectl describe node k8s-node01 | grep 'Taints' -A 5
kubectl describe node k8s-node02 | grep 'Taints' -A 5

Apart from the default taint on k8s-master, there are no taints on k8s-node01 or k8s-node02.

 

NoExecute example

YAML file

[root@k8s-master taint]# pwd
/root/k8s_practice/scheduler/taint
[root@k8s-master taint]# 
[root@k8s-master taint]# cat noexecute_tolerations.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: noexec-tolerations-deploy
  labels:
    app: noexectolerations-deploy
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80
      # format of a toleration that also specifies tolerationSeconds
#      tolerations:
#      - key: "check-mem"
#        operator: "Equal"
#        value: "memdb"
#        effect: "NoExecute"
#        # how long the pod may keep running on the node once it is slated for eviction
#        tolerationSeconds: 30

 

Apply the YAML file

[root@k8s-master taint]# kubectl apply -f noexecute_tolerations.yaml 
deployment.apps/noexec-tolerations-deploy created
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get deploy -o wide
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                      SELECTOR
noexec-tolerations-deploy   6/6     6            6           10s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get pod -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
noexec-tolerations-deploy-85587896f9-2j848   1/1     Running   0          15s   10.244.4.101   k8s-node01   <none>           <none>
noexec-tolerations-deploy-85587896f9-jgqkn   1/1     Running   0          15s   10.244.2.141   k8s-node02   <none>           <none>
noexec-tolerations-deploy-85587896f9-jmw5w   1/1     Running   0          15s   10.244.2.142   k8s-node02   <none>           <none>
noexec-tolerations-deploy-85587896f9-s8x95   1/1     Running   0          15s   10.244.4.102   k8s-node01   <none>           <none>
noexec-tolerations-deploy-85587896f9-t82fj   1/1     Running   0          15s   10.244.4.103   k8s-node01   <none>           <none>
noexec-tolerations-deploy-85587896f9-wx9pz   1/1     Running   0          15s   10.244.2.143   k8s-node02   <none>           <none>

As shown above, the pods are spread evenly across k8s-node01 and k8s-node02.

 

Add a taint with effect NoExecute:
kubectl taint nodes k8s-node02 check-mem=memdb:NoExecute

 

The node taints are now:

k8s-master taint: node-role.kubernetes.io/master:NoSchedule   (Kubernetes default taint, already present; nothing to add)
k8s-node01 taint: (none)
k8s-node02 taint: check-mem=memdb:NoExecute

 

Then check the pod information again:

[root@k8s-master taint]# kubectl get pod -o wide
NAME                                         READY   STATUS    RESTARTS   AGE    IP             NODE         NOMINATED NODE   READINESS GATES
noexec-tolerations-deploy-85587896f9-2j848   1/1     Running   0          2m2s   10.244.4.101   k8s-node01   <none>           <none>
noexec-tolerations-deploy-85587896f9-ch96j   1/1     Running   0          8s     10.244.4.106   k8s-node01   <none>           <none>
noexec-tolerations-deploy-85587896f9-cjrkb   1/1     Running   0          8s     10.244.4.105   k8s-node01   <none>           <none>
noexec-tolerations-deploy-85587896f9-qbq6d   1/1     Running   0          7s     10.244.4.104   k8s-node01   <none>           <none>
noexec-tolerations-deploy-85587896f9-s8x95   1/1     Running   0          2m2s   10.244.4.102   k8s-node01   <none>           <none>
noexec-tolerations-deploy-85587896f9-t82fj   1/1     Running   0          2m2s   10.244.4.103   k8s-node01   <none>           <none>

As shown above, the pods on k8s-node02 were evicted, and their replacements were scheduled onto k8s-node01.

 

Example: a pod with no tolerations

Remember to clear any existing taints first so they don't skew the test.

Taints on the nodes

Set up the following taints:

k8s-master taint: node-role.kubernetes.io/master:NoSchedule   (Kubernetes default taint, already present; nothing to add)
k8s-node01 taint: check-nginx=web:PreferNoSchedule
k8s-node02 taint: check-nginx=web:NoSchedule

 

Taint add operations:

kubectl taint nodes k8s-node01 check-nginx=web:PreferNoSchedule
kubectl taint nodes k8s-node02 check-nginx=web:NoSchedule

 

Taint view operations:

kubectl describe node k8s-master | grep 'Taints' -A 5
kubectl describe node k8s-node01 | grep 'Taints' -A 5
kubectl describe node k8s-node02 | grep 'Taints' -A 5

 

No-toleration example

YAML file

[root@k8s-master taint]# pwd
/root/k8s_practice/scheduler/taint
[root@k8s-master taint]# 
[root@k8s-master taint]# cat no_tolerations.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: no-tolerations-deploy
  labels:
    app: notolerations-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80

 

Apply the YAML file

[root@k8s-master taint]# kubectl apply -f no_tolerations.yaml 
deployment.apps/no-tolerations-deploy created
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get deploy -o wide
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                      SELECTOR
no-tolerations-deploy   5/5     5            5           9s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get pod -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
no-tolerations-deploy-85587896f9-6bjv8   1/1     Running   0          16s   10.244.4.54   k8s-node01   <none>           <none>
no-tolerations-deploy-85587896f9-hbbjb   1/1     Running   0          16s   10.244.4.58   k8s-node01   <none>           <none>
no-tolerations-deploy-85587896f9-jlmzw   1/1     Running   0          16s   10.244.4.56   k8s-node01   <none>           <none>
no-tolerations-deploy-85587896f9-kfh2c   1/1     Running   0          16s   10.244.4.55   k8s-node01   <none>           <none>
no-tolerations-deploy-85587896f9-wmp8b   1/1     Running   0          16s   10.244.4.57   k8s-node01   <none>           <none>

As shown above: k8s-node02's check-nginx taint has effect NoSchedule, so pods cannot be scheduled there. k8s-node01's check-nginx taint has effect PreferNoSchedule (avoid scheduling there if possible), but since it is the only node that satisfies the scheduling constraints, all pods land on k8s-node01.

 

Example: a pod with a single toleration

Remember to clear any existing taints first so they don't skew the test.

Taints on the nodes

Set up the following taints:

k8s-master taint: node-role.kubernetes.io/master:NoSchedule   (Kubernetes default taint, already present; nothing to add)
k8s-node01 taint: check-nginx=web:PreferNoSchedule
k8s-node02 taint: check-nginx=web:NoSchedule

 

Taint add operations:

kubectl taint nodes k8s-node01 check-nginx=web:PreferNoSchedule
kubectl taint nodes k8s-node02 check-nginx=web:NoSchedule

 

Taint view operations:

kubectl describe node k8s-master | grep 'Taints' -A 5
kubectl describe node k8s-node01 | grep 'Taints' -A 5
kubectl describe node k8s-node02 | grep 'Taints' -A 5

 

Single-toleration example

YAML file

[root@k8s-master taint]# pwd
/root/k8s_practice/scheduler/taint
[root@k8s-master taint]# 
[root@k8s-master taint]# cat one_tolerations.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-tolerations-deploy
  labels:
    app: onetolerations-deploy
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80
      tolerations:
      - key: "check-nginx"
        operator: "Equal"
        value: "web"
        effect: "NoSchedule"

 

Apply the YAML file

[root@k8s-master taint]# kubectl apply -f one_tolerations.yaml 
deployment.apps/one-tolerations-deploy created
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get deploy -o wide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                      SELECTOR
one-tolerations-deploy   6/6     6            6           3s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
one-tolerations-deploy-5757d6b559-gbj49   1/1     Running   0          7s    10.244.2.73   k8s-node02   <none>           <none>
one-tolerations-deploy-5757d6b559-j9p6r   1/1     Running   0          7s    10.244.2.71   k8s-node02   <none>           <none>
one-tolerations-deploy-5757d6b559-kpk9q   1/1     Running   0          7s    10.244.2.72   k8s-node02   <none>           <none>
one-tolerations-deploy-5757d6b559-lsppn   1/1     Running   0          7s    10.244.4.65   k8s-node01   <none>           <none>
one-tolerations-deploy-5757d6b559-rx72g   1/1     Running   0          7s    10.244.4.66   k8s-node01   <none>           <none>
one-tolerations-deploy-5757d6b559-s8qr9   1/1     Running   0          7s    10.244.2.74   k8s-node02   <none>           <none>

As shown above, pods are preferentially scheduled onto k8s-node02 and kept off k8s-node01 where possible. A single-replica pod would consistently land on k8s-node02.

 

Example: a pod with multiple tolerations

Remember to clear any existing taints first so they don't skew the test.

Taints on the nodes

Set up the following taints:

k8s-master taint: node-role.kubernetes.io/master:NoSchedule   (Kubernetes default taint, already present; nothing to add)
k8s-node01 taints: check-nginx=web:PreferNoSchedule, check-redis=memdb:NoSchedule
k8s-node02 taints: check-nginx=web:NoSchedule, check-redis=database:NoSchedule

 

Taint add operations:

kubectl taint nodes k8s-node01 check-nginx=web:PreferNoSchedule
kubectl taint nodes k8s-node01 check-redis=memdb:NoSchedule
kubectl taint nodes k8s-node02 check-nginx=web:NoSchedule
kubectl taint nodes k8s-node02 check-redis=database:NoSchedule

 

Taint view operations:

kubectl describe node k8s-master | grep 'Taints' -A 5
kubectl describe node k8s-node01 | grep 'Taints' -A 5
kubectl describe node k8s-node02 | grep 'Taints' -A 5

 

Multiple-toleration example

YAML file

[root@k8s-master taint]# pwd
/root/k8s_practice/scheduler/taint
[root@k8s-master taint]# 
[root@k8s-master taint]# cat multi_tolerations.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-tolerations-deploy
  labels:
    app: multitolerations-deploy
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80
      tolerations:
      - key: "check-nginx"
        operator: "Equal"
        value: "web"
        effect: "NoSchedule"
      - key: "check-redis"
        operator: "Exists"
        effect: "NoSchedule"

 

Apply the YAML file

[root@k8s-master taint]# kubectl apply -f multi_tolerations.yaml 
deployment.apps/multi-tolerations-deploy created
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get deploy -o wide
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                      SELECTOR
multi-tolerations-deploy   6/6     6            6           5s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get pod -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
multi-tolerations-deploy-776ff4449c-2csnk   1/1     Running   0          10s   10.244.2.171   k8s-node02   <none>           <none>
multi-tolerations-deploy-776ff4449c-4d9fh   1/1     Running   0          10s   10.244.4.116   k8s-node01   <none>           <none>
multi-tolerations-deploy-776ff4449c-c8fz5   1/1     Running   0          10s   10.244.2.173   k8s-node02   <none>           <none>
multi-tolerations-deploy-776ff4449c-nj29f   1/1     Running   0          10s   10.244.4.115   k8s-node01   <none>           <none>
multi-tolerations-deploy-776ff4449c-r7gsm   1/1     Running   0          10s   10.244.2.172   k8s-node02   <none>           <none>
multi-tolerations-deploy-776ff4449c-s8t2n   1/1     Running   0          10s   10.244.2.174   k8s-node02   <none>           <none>

As shown above, the pod tolerates check-nginx=web:NoSchedule and check-redis (any value):NoSchedule. The check-nginx=web:PreferNoSchedule taint on k8s-node01 is not tolerated (the toleration's effect is NoSchedule, not PreferNoSchedule), so pods are preferentially scheduled onto k8s-node02 and kept off k8s-node01 where possible.

 

Example: tolerating all effects of a given taint key

Remember to clear any existing taints first so they don't skew the test.

Taints on the nodes

Set up the following taints:

k8s-master taint: node-role.kubernetes.io/master:NoSchedule   (Kubernetes default taint, already present; nothing to add)
k8s-node01 taint: check-redis=memdb:NoSchedule
k8s-node02 taint: check-redis=database:NoSchedule

 

Taint add operations:

kubectl taint nodes k8s-node01 check-redis=memdb:NoSchedule
kubectl taint nodes k8s-node02 check-redis=database:NoSchedule

 

Taint view operations:

kubectl describe node k8s-master | grep 'Taints' -A 5
kubectl describe node k8s-node01 | grep 'Taints' -A 5
kubectl describe node k8s-node02 | grep 'Taints' -A 5

 

Key-only toleration example

YAML file

[root@k8s-master taint]# pwd
/root/k8s_practice/scheduler/taint
[root@k8s-master taint]# 
[root@k8s-master taint]# cat key_tolerations.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: key-tolerations-deploy
  labels:
    app: keytolerations-deploy
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80
      tolerations:
      - key: "check-redis"
        operator: "Exists"

 

Apply the YAML file

[root@k8s-master taint]# kubectl apply -f key_tolerations.yaml 
deployment.apps/key-tolerations-deploy created
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get deploy -o wide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                      SELECTOR
key-tolerations-deploy   6/6     6            6           21s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get pod -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
key-tolerations-deploy-db5c4c4db-2zqr8   1/1     Running   0          26s   10.244.2.170   k8s-node02   <none>           <none>
key-tolerations-deploy-db5c4c4db-5qb5p   1/1     Running   0          26s   10.244.4.113   k8s-node01   <none>           <none>
key-tolerations-deploy-db5c4c4db-7xmt6   1/1     Running   0          26s   10.244.2.169   k8s-node02   <none>           <none>
key-tolerations-deploy-db5c4c4db-84rkj   1/1     Running   0          26s   10.244.4.114   k8s-node01   <none>           <none>
key-tolerations-deploy-db5c4c4db-gszxg   1/1     Running   0          26s   10.244.2.168   k8s-node02   <none>           <none>
key-tolerations-deploy-db5c4c4db-vlgh8   1/1     Running   0          26s   10.244.4.112   k8s-node01   <none>           <none>

As shown above, the pod's toleration is on key check-redis with operator Exists: only the taint's key needs to match, while its value and effect are irrelevant. It therefore tolerates the taints on both k8s-node01 and k8s-node02, and the pods are spread across both nodes.

 

Example: a pod that tolerates all taints

Remember to clear any existing taints first so they don't skew the test.

Taints on the nodes

Set up the following taints:

k8s-master taint: node-role.kubernetes.io/master:NoSchedule   (Kubernetes default taint, already present; nothing to add)
k8s-node01 taints: check-nginx=web:PreferNoSchedule, check-redis=memdb:NoSchedule
k8s-node02 taints: check-nginx=web:NoSchedule, check-redis=database:NoSchedule

 

Taint add operations:

kubectl taint nodes k8s-node01 check-nginx=web:PreferNoSchedule
kubectl taint nodes k8s-node01 check-redis=memdb:NoSchedule
kubectl taint nodes k8s-node02 check-nginx=web:NoSchedule
kubectl taint nodes k8s-node02 check-redis=database:NoSchedule

 

Taint view operations:

kubectl describe node k8s-master | grep 'Taints' -A 5
kubectl describe node k8s-node01 | grep 'Taints' -A 5
kubectl describe node k8s-node02 | grep 'Taints' -A 5

 

Tolerate-everything example

YAML file

[root@k8s-master taint]# pwd
/root/k8s_practice/scheduler/taint
[root@k8s-master taint]# 
[root@k8s-master taint]# cat all_tolerations.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: all-tolerations-deploy
  labels:
    app: alltolerations-deploy
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80
      tolerations:
      - operator: "Exists"

 

Apply the YAML file

[root@k8s-master taint]# kubectl apply -f all_tolerations.yaml 
deployment.apps/all-tolerations-deploy created
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get deploy -o wide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                      SELECTOR
all-tolerations-deploy   6/6     6            6           8s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master taint]# 
[root@k8s-master taint]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
all-tolerations-deploy-566cdccbcd-4klc2   1/1     Running   0          12s   10.244.0.116   k8s-master   <none>           <none>
all-tolerations-deploy-566cdccbcd-59vvc   1/1     Running   0          12s   10.244.0.115   k8s-master   <none>           <none>
all-tolerations-deploy-566cdccbcd-cvw4s   1/1     Running   0          12s   10.244.2.175   k8s-node02   <none>           <none>
all-tolerations-deploy-566cdccbcd-k8fzl   1/1     Running   0          12s   10.244.2.176   k8s-node02   <none>           <none>
all-tolerations-deploy-566cdccbcd-s2pw7   1/1     Running   0          12s   10.244.4.118   k8s-node01   <none>           <none>
all-tolerations-deploy-566cdccbcd-xzngt   1/1     Running   0          13s   10.244.4.117   k8s-node01   <none>           <none>

As shown above, the pod tolerates every taint, so its replicas can be scheduled onto all nodes in the cluster, including k8s-master.

 

Further reading

1. Official docs: Taints and Tolerations
2. Kubernetes K8S: the kube-scheduler explained
3. Kubernetes K8S: affinity and anti-affinity explained with examples

Done!

 


 
