Rolling Updates of a Service in a Kubernetes Cluster

In the mobile-internet era, consumers buy around the clock, so business systems have to provide service 24×7 without interruption to meet their needs. It is hard to imagine a service today whose updates and upgrades require taking the business offline. If WeChat announced that it would perform routine system maintenance every Saturday from 23:00 until 02:00 the next morning and be unavailable during that window, what would you, as a user, think and do? Every platform therefore has to consider service updates and upgrades from the very beginning of its design, and a Service deployed in a Kubernetes cluster is no exception.

I. Preliminaries

1. Rolling update

With a traditional upgrade, the service is taken offline completely, the version and configuration are updated while the business is stopped, and the service is then restarted. This model no longer meets today's needs. Now that highly concurrent, highly available systems are the norm, an upgrade has to keep the business running at the very least, and the rolling update is exactly the kind of update/upgrade scheme that satisfies this requirement.

Simply put, a rolling update is a way of upgrading a multi-instance service without interrupting it. In general, instead of updating all instances at the same moment, a rolling update upgrades the instances one by one. The strength of the approach lies in the notion of "rolling" itself, which carries at least two meanings:

a) "Rolling" evokes the image of a circle: continuous, uninterrupted. It is part of a broader trend; the familiar practices of "rolling releases" and "continuous delivery" are applications of the same idea. Compared with traditional, periodic big-version releases, rolling releases let users get new features sooner, shorten the market-feedback cycle, and at the same time minimize the impact of each release or update on the user experience.

b) "Rolling" works both forward and backward. If a problem is discovered during an update, we can "roll backward" and revert it, which greatly reduces the risk of each upgrade.

For a Service deployed in a Kubernetes cluster, a rolling update means updating the Pods one at a time rather than shutting down all Pods behind the Service at the same moment, thereby avoiding any interruption of the business.

2. How Service, Deployment, Replica Set, Replication Controller, and Pod relate to each other

An application to be deployed is generally composed of several abstract Services. In Kubernetes, a Service uses a label selector to match a set of Pods; those Pods are the Service's endpoints and are the entities that actually carry the workload. Deploying, scheduling, and maintaining the replica count of the Pods in the cluster is handled by higher-level abstractions such as Deployments or Replication Controllers. The following diagram illustrates the relationship:

[Diagram: a Service selecting Pods that are managed by a Deployment/Replica Set or a Replication Controller]

Newer versions of Kubernetes recommend Deployments over Replication Controllers; with a Deployment, the object that actually maintains the Pod replica count is the Replica Set hidden behind it.

A rolling update of a Kubernetes Service is therefore really a rolling update of the set of Pods the Service matches, and since it is the Deployment or the replication controller that controls how those Pods are deployed, scheduled, and replicated, these two objects are what a Kubernetes service rolling update actually operates on.
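To make the label plumbing concrete, here is a minimal, illustrative sketch (the names are invented for this illustration, not taken from the demos below) of a Service whose selector matches the Pod template labels of a Deployment:

# Illustrative only: the Service selects Pods by label; the Deployment
# (via its hidden Replica Set) keeps the desired number of such Pods running.
apiVersion: v1
kind: Service
metadata:
  name: web-demo-svc
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: web-demo              # matches the Pod template labels below
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web-demo          # these Pods become the Service's endpoints
    spec:
      containers:
        - name: web-demo
          image: nginx:1.10.1
          ports:
            - containerPort: 80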

II. The kubectl rolling-update subcommand

In the kubectl CLI, Kubernetes only provides rolling-update support for Replication Controllers. With kubectl -help we can see the following usage description:

# kubectl -help
... ...
Deploy Commands:
  rollout        Manage a deployment rollout
  rolling-update Perform a rolling update of the given ReplicationController
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController
... ...

# kubectl help rolling-update
... ...
Usage:
  kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f
NEW_CONTROLLER_SPEC) [options]
... ...

Let's walk through an example of how kubectl rolling-update rolls over the Pods behind a Service. Two versions of the nginx image are available in our Kubernetes cluster:

# docker images|grep nginx
nginx                                                    1.11.9                     cc1b61406712        2 weeks ago         181.8 MB
nginx                                                    1.10.1                     bf2b4c2d7bf5        4 months ago        180.7 MB

In this example we roll the Service's Pods from nginx 1.10.1 up to 1.11.9.

Our rc-demo-v0.1.yaml file looks like this:

apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-demo-nginx-v0.1
spec:
  replicas: 4
  selector:
    app: rc-demo-nginx
    ver: v0.1
  template:
    metadata:
      labels:
        app: rc-demo-nginx
        ver: v0.1
    spec:
      containers:
        - name: rc-demo-nginx
          image: nginx:1.10.1
          ports:
            - containerPort: 80
              protocol: TCP
          env:
            - name: RC_DEMO_VER
              value: v0.1

Create this replication controller:

# kubectl create -f rc-demo-v0.1.yaml
replicationcontroller "rc-demo-nginx-v0.1" created

# kubectl get pods -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP             NODE
rc-demo-nginx-v0.1-2p7v0   1/1       Running   0          1m        172.30.192.9   iz2ze39jeyizepdxhwqci6z
rc-demo-nginx-v0.1-9pk3t   1/1       Running   0          1m        172.30.192.8   iz2ze39jeyizepdxhwqci6z
rc-demo-nginx-v0.1-hm6b9   1/1       Running   0          1m        172.30.0.9     iz25beglnhtz
rc-demo-nginx-v0.1-vbxpl   1/1       Running   0          1m        172.30.0.10    iz25beglnhtz

The Service manifest, rc-demo-svc.yaml, looks like this:

apiVersion: v1
kind: Service
metadata:
  name: rc-demo-svc
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: rc-demo-nginx

Create the service:

# kubectl create -f rc-demo-svc.yaml
service "rc-demo-svc" created

# kubectl describe svc/rc-demo-svc
Name:            rc-demo-svc
Namespace:        default
Labels:            <none>
Selector:        app=rc-demo-nginx
Type:            ClusterIP
IP:            10.96.172.246
Port:            <unset>    80/TCP
Endpoints:        172.30.0.10:80,172.30.0.9:80,172.30.192.8:80 + 1 more...
Session Affinity:    None
No events.

We can see that the four Pods created by the replication controller have all been placed behind the rc-demo-svc service. Let's access the service:

# curl -I http://10.96.172.246:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
Date: Wed, 08 Feb 2017 08:45:19 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 31 May 2016 14:17:02 GMT
Connection: keep-alive
ETag: "574d9cde-264"
Accept-Ranges: bytes

# kubectl exec rc-demo-nginx-v0.1-2p7v0  env
... ...
RC_DEMO_VER=v0.1
... ...

The Server field of the response header shows that the nginx version behind the Service is currently 1.10.1, and printing the Pod's environment variables gives RC_DEMO_VER=v0.1.
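To observe what happens to the service during the rolling update below, it helps to keep polling it from a second terminal. A simple sketch, using the ClusterIP shown above:

# print the serving nginx version once per second; run in another terminal during the update
while true; do curl -sI http://10.96.172.246:80 | grep Server; sleep 1; done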

Next we rolling-update the rc rc-demo-nginx-v0.1. The manifest of the new rc, rc-demo-v0.2.yaml, is as follows:

apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-demo-nginx-v0.2
spec:
  replicas: 4
  selector:
    app: rc-demo-nginx
    ver: v0.2
  template:
    metadata:
      labels:
        app: rc-demo-nginx
        ver: v0.2
    spec:
      containers:
        - name: rc-demo-nginx
          image: nginx:1.11.9
          ports:
            - containerPort: 80
              protocol: TCP
          env:
            - name: RC_DEMO_VER
              value: v0.2

rc-demo-v0.2.yaml differs from rc-demo-v0.1.yaml in a few places: the rc name, the image version, and the value of the RC_DEMO_VER environment variable:

# diff rc-demo-v0.2.yaml rc-demo-v0.1.yaml
4c4
<   name: rc-demo-nginx-v0.2
---
>   name: rc-demo-nginx-v0.1
9c9
<     ver: v0.2
---
>     ver: v0.1
14c14
<         ver: v0.2
---
>         ver: v0.1
18c18
<           image: nginx:1.11.9
---
>           image: nginx:1.10.1
24c24
<               value: v0.2
---
>               value: v0.1

Now we start the rolling update. To make the process easier to follow, we set update-period to 10s, i.e. one Pod is updated every 10 seconds:

#  kubectl rolling-update rc-demo-nginx-v0.1 --update-period=10s -f rc-demo-v0.2.yaml
Created rc-demo-nginx-v0.2
Scaling up rc-demo-nginx-v0.2 from 0 to 4, scaling down rc-demo-nginx-v0.1 from 4 to 0 (keep 4 pods available, don't exceed 5 pods)
Scaling rc-demo-nginx-v0.2 up to 1
Scaling rc-demo-nginx-v0.1 down to 3
Scaling rc-demo-nginx-v0.2 up to 2
Scaling rc-demo-nginx-v0.1 down to 2
Scaling rc-demo-nginx-v0.2 up to 3
Scaling rc-demo-nginx-v0.1 down to 1
Scaling rc-demo-nginx-v0.2 up to 4
Scaling rc-demo-nginx-v0.1 down to 0
Update succeeded. Deleting rc-demo-nginx-v0.1
replicationcontroller "rc-demo-nginx-v0.1" rolling updated to "rc-demo-nginx-v0.2"

The log shows that kubectl rolling-update gradually scales rc-demo-nginx-v0.2 up while scaling rc-demo-nginx-v0.1 down until it reaches 0.

If we keep hitting rc-demo-svc during the upgrade, we can see the old and new Pod versions serving side by side; the service is never interrupted:

# curl -I http://10.96.172.246:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

# curl -I http://10.96.172.246:80
HTTP/1.1 200 OK
Server: nginx/1.11.9
... ...

# curl -I http://10.96.172.246:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

Some status information after the update:

# kubectl get rc
NAME                 DESIRED   CURRENT   READY     AGE
rc-demo-nginx-v0.2   4         4         4         5m

# kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
rc-demo-nginx-v0.2-25b15   1/1       Running   0          5m
rc-demo-nginx-v0.2-3jlpk   1/1       Running   0          5m
rc-demo-nginx-v0.2-lcnf9   1/1       Running   0          6m
rc-demo-nginx-v0.2-s7pkc   1/1       Running   0          5m

# kubectl exec rc-demo-nginx-v0.2-25b15  env
... ...
RC_DEMO_VER=v0.2
... ...

The official documentation notes that kubectl rolling-update is implemented on the client side: the rolling-update logic consists of kubectl issuing a series of requests to the APIServer, which we can see in the kubectl source:

//https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/rollingupdate.go
... ...
func RunRollingUpdate(f cmdutil.Factory, out io.Writer, cmd *cobra.Command, args []string, options *resource.FilenameOptions) error {
    ... ...
    err = updater.Update(config)
    if err != nil {
        return err
    }
    ... ...
}

//https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/rolling_updater.go
func (r *RollingUpdater) Update(config *RollingUpdaterConfig) error {
    ... ...
    // Scale newRc and oldRc until newRc has the desired number of replicas and
    // oldRc has 0 replicas.
    progressDeadline := time.Now().UnixNano() + config.Timeout.Nanoseconds()
    for newRc.Spec.Replicas != desired || oldRc.Spec.Replicas != 0 {
        // Store the existing replica counts for progress timeout tracking.
        newReplicas := newRc.Spec.Replicas
        oldReplicas := oldRc.Spec.Replicas

        // Scale up as much as possible.
        scaledRc, err := r.scaleUp(newRc, oldRc, desired, maxSurge, maxUnavailable, scaleRetryParams, config)
        if err != nil {
            return err
        }
        newRc = scaledRc
    ... ...
}

In rolling_updater.go, the Update method uses a for loop to step down the replicas of the old rc and step up the replicas of the new rc until the new rc reaches the desired count and the old rc's replicas drop to 0.

A rolling update performed with kubectl rolling-update has several shortcomings:
- it is driven by kubectl, so a network hiccup can easily interrupt the update;
- it requires creating a new rc whose name must differ from the rc being updated; not a big problem, but awkward in practice;
- rolling back also means running rolling-update again, just with the manifest of the old rc;
- the rolling update performed on a service leaves no record in the cluster, so its history cannot be traced afterwards.

Since the Replication Controller is gradually being replaced by the Deployment abstraction, let's look at how to do a rolling update with a Deployment and what advantages that brings.

III. Rolling update with a Deployment

A Kubernetes Deployment is a higher-level abstraction. As the diagram at the beginning of this article shows, a Deployment creates a Replica Set that maintains the replica count of the Deployment's Pods. Because kubectl rolling-update only supports replication controllers, to roll the Pods of a Deployment you instead modify the Deployment's own manifest and apply it. The change creates a new Replica Set; while the Pod count of that new Replica Set is scaled up, the Pod count of the old Replica Set is scaled down until it reaches zero. All of this happens on the server side and does not require kubectl's involvement.
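The pace of a Deployment's rolling update is governed by its update strategy. The demo manifest below relies on the defaults (reported later by kubectl describe as "1 max unavailable, 1 max surge"), but the strategy can also be set explicitly in the spec; an illustrative snippet, not part of the demo manifests:

# illustrative snippet for a Deployment spec
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod above the desired count during the update
      maxUnavailable: 1    # at most one Pod may be unavailable at any time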

Let's look at another example. We create the first version of the deployment manifest, deployment-demo-v0.1.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-demo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: deployment-demo-nginx
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: deployment-demo-nginx
        version: v0.1
    spec:
      containers:
        - name: deployment-demo
          image: nginx:1.10.1
          ports:
            - containerPort: 80
              protocol: TCP
          env:
            - name: DEPLOYMENT_DEMO_VER
              value: v0.1

Create the deployment:

# kubectl create -f deployment-demo-v0.1.yaml --record
deployment "deployment-demo" created

# kubectl get deployments
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment-demo   4         4         4            0           10s

# kubectl get rs
NAME                         DESIRED   CURRENT   READY     AGE
deployment-demo-1818355944   4         4         4         13s

# kubectl get pods -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP             NODE
deployment-demo-1818355944-78spp   1/1       Running   0          24s       172.30.0.10    iz25beglnhtz
deployment-demo-1818355944-7wvxk   1/1       Running   0          24s       172.30.0.9     iz25beglnhtz
deployment-demo-1818355944-hb8tt   1/1       Running   0          24s       172.30.192.9   iz2ze39jeyizepdxhwqci6z
deployment-demo-1818355944-jtxs2   1/1       Running   0          24s       172.30.192.8   iz2ze39jeyizepdxhwqci6z

# kubectl exec deployment-demo-1818355944-78spp env
... ...
DEPLOYMENT_DEMO_VER=v0.1
... ...

deployment-demo has created the ReplicaSet deployment-demo-1818355944, which maintains the Pod replica count.

Next we create a Service on top of the Pods of this deployment.
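The deployment-demo-svc.yaml manifest is not listed in this post; a minimal version consistent with the selector and port shown by kubectl describe below would look roughly like this:

apiVersion: v1
kind: Service
metadata:
  name: deployment-demo-svc
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: deployment-demo-nginx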

# kubectl create -f deployment-demo-svc.yaml
service "deployment-demo-svc" created

# kubectl get service
NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
deployment-demo-svc   10.109.173.225   <none>        80/TCP    5s
kubernetes            10.96.0.1        <none>        443/TCP   42d

# kubectl describe service/deployment-demo-svc
Name:            deployment-demo-svc
Namespace:        default
Labels:            <none>
Selector:        app=deployment-demo-nginx
Type:            ClusterIP
IP:            10.109.173.225
Port:            <unset>    80/TCP
Endpoints:        172.30.0.10:80,172.30.0.9:80,172.30.192.8:80 + 1 more...
Session Affinity:    None
No events.

# curl -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

Good: the service now has four Pods behind it, and the service it provides is working normally.

Next we update the Service. For clarity we create a new file, deployment-demo-v0.2.yaml, though you don't really need a separate file; editing deployment-demo-v0.1.yaml above in place works just as well:

# diff deployment-demo-v0.2.yaml deployment-demo-v0.1.yaml
15c15
<         version: v0.2
---
>         version: v0.1
19c19
<           image: nginx:1.11.9
---
>           image: nginx:1.10.1
25c25
<               value: v0.2
---
>               value: v0.1

We use deployment-demo-v0.2.yaml to update the Pods of the deployment created earlier:

# kubectl apply -f deployment-demo-v0.2.yaml --record
deployment "deployment-demo" configured

The apply command finishes as soon as it receives the apiserver's response, but the deployment's rolling update is still in progress:

# kubectl describe deployment deployment-demo
Name:            deployment-demo
... ...
Replicas:        2 updated | 4 total | 3 available | 2 unavailable
StrategyType:        RollingUpdate
MinReadySeconds:    10
RollingUpdateStrategy:    1 max unavailable, 1 max surge
Conditions:
  Type        Status    Reason
  ----        ------    ------
  Available     True    MinimumReplicasAvailable
OldReplicaSets:    deployment-demo-1818355944 (3/3 replicas created)
NewReplicaSet:    deployment-demo-2775967987 (2/2 replicas created)
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath    Type        Reason            Message
  ---------    --------    -----    ----                -------------    --------    ------            -------
  12m        12m        1    {deployment-controller }            Normal        ScalingReplicaSet    Scaled up replica set deployment-demo-1818355944 to 4
  11s        11s        1    {deployment-controller }            Normal        ScalingReplicaSet    Scaled up replica set deployment-demo-2775967987 to 1
  11s        11s        1    {deployment-controller }            Normal        ScalingReplicaSet    Scaled down replica set deployment-demo-1818355944 to 3
  11s        11s        1    {deployment-controller }            Normal        ScalingReplicaSet    Scaled up replica set deployment-demo-2775967987 to 2

# kubectl get pods
NAME                               READY     STATUS              RESTARTS   AGE
deployment-demo-1818355944-78spp   1/1       Terminating         0          12m
deployment-demo-1818355944-hb8tt   1/1       Terminating         0          12m
deployment-demo-1818355944-jtxs2   1/1       Running             0          12m
deployment-demo-2775967987-5s9qx   0/1       ContainerCreating   0          0s
deployment-demo-2775967987-lf5gw   1/1       Running             0          12s
deployment-demo-2775967987-lxbx8   1/1       Running             0          12s
deployment-demo-2775967987-pr0hl   0/1       ContainerCreating   0          0s

# kubectl get rs
NAME                         DESIRED   CURRENT   READY     AGE
deployment-demo-1818355944   1         1         1         12m
deployment-demo-2775967987   4         4         4         17s

We can see the ReplicaSets changing over the course of the update, and the service is never interrupted; the old and new versions merely overlap briefly while both are serving:

# curl -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.11.9
... ...

# curl -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

# curl -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

Eventually all Pods are replaced by the v0.2 version:

# kubectl exec deployment-demo-2775967987-5s9qx env
... ...
DEPLOYMENT_DEMO_VER=v0.2
... ...

# curl -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.11.9
... ...

Notice that both the create and the apply commands for the deployment carry a --record flag, which tells the apiserver to record the update history. The deployment's update history can then be viewed with kubectl rollout history:

#  kubectl rollout history deployment deployment-demo
deployments "deployment-demo"
REVISION    CHANGE-CAUSE
1        kubectl create -f deployment-demo-v0.1.yaml --record
2        kubectl apply -f deployment-demo-v0.2.yaml --record

Without --record, the history you get looks more like this:

#  kubectl rollout history deployment deployment-demo
deployments "deployment-demo"
REVISION    CHANGE-CAUSE
1        <none>

We can also see that the old ReplicaSet has not been deleted:

# kubectl get rs
NAME                         DESIRED   CURRENT   READY     AGE
deployment-demo-1818355944   0         0         0         25m
deployment-demo-2775967987   4         4         4         13m

All of this information is stored on the server side, which makes rolling back easy.

Rolling the Pods of a Deployment back is extremely simple: rollout undo does it. rollout undo rolls the Deployment back to the previous recorded revision (see the REVISION column in the rollout history output above):

# kubectl rollout undo deployment deployment-demo
deployment "deployment-demo" rolled back

The ReplicaSet counts flip back again:

# kubectl get rs
NAME                         DESIRED   CURRENT   READY     AGE
deployment-demo-1818355944   4         4         4         28m
deployment-demo-2775967987   0         0         0         15m

Check the update history again:

# kubectl rollout history deployment deployment-demo
deployments "deployment-demo"
REVISION    CHANGE-CAUSE
2        kubectl apply -f deployment-demo-v0.2.yaml --record
3        kubectl create -f deployment-demo-v0.1.yaml --record

The history keeps at most two revision records here (the number of revisions retained should be configurable).
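Indeed, the number of old ReplicaSets (and therefore revisions) kept around for rollback is controlled by the Deployment's .spec.revisionHistoryLimit field; an illustrative snippet:

# illustrative: keep the last 10 revisions available for rollback
spec:
  revisionHistoryLimit: 10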

IV. Rolling update of a Deployment through the API

Our ultimate goal is to drive a service's rolling update through the API. Kubernetes exposes a RESTful API for deployments, covering create, read, replace, delete, patch, rollback, and so on. Judging by their names, patch and rollback look most likely to meet our needs, so let's verify that.

We first put the deployment back at v0.1, i.e. image: nginx:1.10.1 and DEPLOYMENT_DEMO_VER=v0.1, and then try to upgrade it to v0.2 through the patch API. Since the patch API only accepts a JSON body, we convert deployment-demo-v0.2.yaml into JSON: deployment-demo-v0.2.json. A patch is a partial update, but to keep things simple we send the entire deployment manifest to the APIServer and let the server do the merge.
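One way to do the YAML-to-JSON conversion, assuming Python with PyYAML is available on the host:

# python -c 'import sys, yaml, json; json.dump(yaml.safe_load(sys.stdin), sys.stdout)' < deployment-demo-v0.2.yaml > deployment-demo-v0.2.json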

Run the following curl command:

# curl -H 'Content-Type:application/strategic-merge-patch+json' -X PATCH --data @deployment-demo-v0.2.json http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/deployment-demo

The command outputs the merged Deployment as JSON; it is too long to paste here, see patch-api-output.txt for the full content.

Watching the deployment while the command runs, we can see that it took effect: the scale of the old rs shrinks as the scale of the new one grows, and the two Pod versions take turns serving requests.

# kubectl get rs
NAME                         DESIRED   CURRENT   READY     AGE
deployment-demo-1818355944   3         3         3         12h
deployment-demo-2775967987   2         2         2         12h

# curl  -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

# curl  -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.11.9
... ...

# curl  -I http://10.109.173.225:80
HTTP/1.1 200 OK
Server: nginx/1.10.1
... ...

After updating this way, however, the history reported by rollout history becomes somewhat imprecise:

#kubectl rollout history deployment deployment-demo
deployments "deployment-demo"
REVISION    CHANGE-CAUSE
8       kubectl create -f deployment-demo-v0.1.yaml --record
9        kubectl create -f deployment-demo-v0.1.yaml --record

There is no good workaround for this at the moment, but the rolling update itself works fine.

The Patch API supports three Content-Types: json-patch+json, strategic-merge-patch+json, and merge-patch+json. In our tests the latter two behave identically, but json-patch+json kept returning an error:

# curl -H 'Content-Type:application/json-patch+json' -X PATCH --data @deployment-demo-v0.2.json http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/deployment-demo
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "json: cannot unmarshal object into Go value of type jsonpatch.Patch",
  "code": 500
}
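The error is unsurprising: application/json-patch+json expects an RFC 6902 patch, i.e. a JSON array of operations, not a whole (or partial) object to be merged. A body in that format would look roughly like the following (illustrative only; not tested against this cluster):

[
  { "op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "nginx:1.11.9" },
  { "op": "replace", "path": "/spec/template/spec/containers/0/env/0/value", "value": "v0.2" }
]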

The kubectl patch subcommand appears to use strategic-merge-patch+json. The source code does not say much about the differences between the three either:

//pkg/kubectl/cmd/patch.go
func getPatchedJSON(patchType api.PatchType, originalJS, patchJS []byte, obj runtime.Object) ([]byte, error) {
    switch patchType {
    case api.JSONPatchType:
        patchObj, err := jsonpatch.DecodePatch(patchJS)
        if err != nil {
            return nil, err
        }
        return patchObj.Apply(originalJS)

    case api.MergePatchType:
        return jsonpatch.MergePatch(originalJS, patchJS)

    case api.StrategicMergePatchType:
        return strategicpatch.StrategicMergePatchData(originalJS, patchJS, obj)

    default:
        // only here as a safety net - go-restful filters content-type
        return nil, fmt.Errorf("unknown Content-Type header for patch: %v", patchType)
    }
}

// DecodePatch decodes the passed JSON document as an RFC 6902 patch.

// MergePatch merges the patchData into the docData.

// StrategicMergePatch applies a strategic merge patch. The patch and the original document
// must be json encoded content. A patch can be created from an original and a modified document
// by calling CreateStrategicMergePatch.

Next we use the deployment rollback API to roll the deployment back. We create a file, deployment-demo-rollback.json, as the request body:

//deployment-demo-rollback.json
{
        "name" : "deployment-demo",
        "rollbackTo" : {
                "revision" : 0
        }
}

revision: 0 means roll back to the previous revision. Run the following command to perform the rollback:

# curl -H 'Content-Type:application/json' -X POST --data @deployment-demo-rollback.json http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/deployment-demo/rollback
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "rollback request for deployment \"deployment-demo\" succeeded",
  "code": 200
}

# kubectl describe deployment/deployment-demo
... ...
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath    Type        Reason            Message
  ---------    --------    -----    ----                -------------    --------    ------            -------
... ...
 27s        27s        1    {deployment-controller }            Normal        DeploymentRollback    Rolled back deployment "deployment-demo" to revision 1
... ...

The deployment status shows that the rollback succeeded. The response of this API looks a bit buggy, though: the message says the request succeeded (code 200), yet the status field reads "Failure".
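As an aside, the same targeted rollback is also available from the command line via the --to-revision flag of kubectl rollout undo, e.g. to return to revision 1 as reported in the event above:

# kubectl rollout undo deployment deployment-demo --to-revision=1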

If you run into other problems while patching or rolling back, run kubectl describe deployment/deployment-demo and check the Events output for hints about what went wrong.

V. Summary

The experiments above show that the rolling update of the Pods behind a Service can indeed be driven through the Kubernetes API, but this is better suited to stateless Services. Whether the same approach satisfies the requirements of stateful Services (implemented with PetSet, or Stateful Set after 1.5) is not yet clear; without an environment at hand, this has not been tested.

The source of the manifests used above can be downloaded here.

© 2017, bigwhite. All rights reserved.
