Welcome to my GitHub

https://github.com/zq2599/blog_demos

Contents: a categorized index of all my original articles with companion source code, covering Java, Docker, Kubernetes, DevOps, and more.

Links to the articles in this series

  1. kubebuilder in practice, part 1: preparation
  2. kubebuilder in practice, part 2: a first taste of kubebuilder
  3. kubebuilder in practice, part 3: a quick tour of the basics
  4. kubebuilder in practice, part 4: operator requirements and design
  5. kubebuilder in practice, part 5: coding the operator
  6. kubebuilder in practice, part 6: build, deploy, run
  7. kubebuilder in practice, part 7: webhook
  8. kubebuilder in practice, part 8: assorted notes

Overview of this article

  • As the sixth installment of the "kubebuilder in practice" series, this article moves on from the coding that the previous parts completed to verifying that everything works. Make sure your Docker and Kubernetes environments are healthy, then follow along through these steps:
  1. Deploy the CRD
  2. Run the Controller locally
  3. Create an elasticweb resource object from a YAML file
  4. Use the logs and kubectl commands to verify that elasticweb works correctly
  5. Access the web service from a browser to verify the business function
  6. Change singlePodQPS and watch whether elasticweb adjusts the pod count automatically
  7. Change totalQPS and watch whether elasticweb adjusts the pod count
  8. Delete the elasticweb and check that the associated service and deployment are deleted automatically
  9. Build the Controller image, run the Controller on Kubernetes, and verify that all of the above still works
  • What looks like a simple deploy-and-verify exercise adds up to quite a list... enough sighing, let's get started.

Deploying the CRD

  • In a terminal, change to the directory containing the Makefile and run make install to deploy the CRD to Kubernetes:

    zhaoqin@zhaoqindeMBP-2 elasticweb % make install
    /Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
    kustomize build config/crd | kubectl apply -f -
    Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
    customresourcedefinition.apiextensions.k8s.io/elasticwebs.elasticweb.com.bolingcavalry configured

  • As the output shows, what actually happens is that kustomize merges the YAML resources under config/crd and applies the result to Kubernetes;

  • You can verify that the CRD was deployed successfully with kubectl api-versions:

    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl api-versions|grep elasticweb
    elasticweb.com.bolingcavalry/v1
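  • If you prefer to check from code rather than the command line, the following is a minimal Go sketch that does the same lookup with client-go's discovery client. This is an optional extra, not part of the original workflow, and the kubeconfig location is an assumption:

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load kubeconfig from the default location (~/.kube/config).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(config)
        if err != nil {
            panic(err)
        }
        // Equivalent to `kubectl api-versions | grep elasticweb`: list every
        // group/version the API server serves and look for ours.
        groups, err := dc.ServerGroups()
        if err != nil {
            panic(err)
        }
        for _, g := range groups.Groups {
            for _, v := range g.Versions {
                if v.GroupVersion == "elasticweb.com.bolingcavalry/v1" {
                    fmt.Println("CRD group/version is served:", v.GroupVersion)
                }
            }
        }
    }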

Running the Controller locally

  • Let's first verify the Controller's behavior in the simplest way possible. As shown in the figure below, my MacBook serves as the development environment, and the Makefile in the elasticweb project lets us run the Controller code locally:

  • Change to the directory containing the Makefile and run make run to compile and start the controller:

    zhaoqin@zhaoqindeMBP-2 elasticweb % pwd
    /Users/zhaoqin/github/blog_demos/kubebuilder/elasticweb
    zhaoqin@zhaoqindeMBP-2 elasticweb % make run
    /Users/zhaoqin/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
    go fmt ./...
    go vet ./...
    /Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
    go run ./main.go
    2021-02-20T20:46:16.774+0800 INFO controller-runtime.metrics metrics server is starting to listen {"addr": ":8080"}
    2021-02-20T20:46:16.774+0800 INFO setup starting manager
    2021-02-20T20:46:16.775+0800 INFO controller-runtime.controller Starting EventSource {"controller": "elasticweb", "source": "kind source: /, Kind="}
    2021-02-20T20:46:16.776+0800 INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
    2021-02-20T20:46:16.881+0800 INFO controller-runtime.controller Starting Controller {"controller": "elasticweb"}
    2021-02-20T20:46:16.881+0800 INFO controller-runtime.controller Starting workers {"controller": "elasticweb", "worker count": 1}

Creating an elasticweb resource object

  • The Controller responsible for handling elasticweb objects is now running, so let's create an elasticweb resource object, using a YAML file;

  • Under config/samples, kubebuilder generated the demo file elasticweb_v1_elasticweb.yaml for us, but its spec does not contain the four fields we defined, so change it to the following:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: dev
      labels:
        name: dev
    ---
    apiVersion: elasticweb.com.bolingcavalry/v1
    kind: ElasticWeb
    metadata:
      namespace: dev
      name: elasticweb-sample
    spec:
      # Add fields here
      image: tomcat:8.0.18-jre8
      port: 30003
      singlePodQPS: 500
      totalQPS: 600
  • A few notes on the parameters above:
  1. the namespace is dev
  2. the application deployed for this test is tomcat
  3. the service exposes tomcat on host port 30003
  4. we assume a single pod can sustain 500 QPS while external demand is 600 QPS (the replica math behind this is sketched right after this list)
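  • To make the arithmetic concrete, here is a minimal Go sketch of the replica calculation the operator performs (the real implementation lives in the controller code from part 5; the function name here is illustrative only): the expected pod count is totalQPS divided by singlePodQPS, rounded up.

    package main

    import "fmt"

    // expectReplicas returns the smallest replica count such that
    // replicas * singlePodQPS >= totalQPS (i.e. ceiling division).
    func expectReplicas(singlePodQPS, totalQPS int32) int32 {
        replicas := totalQPS / singlePodQPS
        if totalQPS%singlePodQPS > 0 {
            replicas++
        }
        return replicas
    }

    func main() {
        fmt.Println(expectReplicas(500, 600))  // 2, as in the controller log below
        fmt.Println(expectReplicas(800, 600))  // 1, after singlePodQPS is raised later
        fmt.Println(expectReplicas(800, 2600)) // 4, after totalQPS is raised later
    }

  • These three values match the pod counts observed at each stage of this article.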
  • Run kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml to create the elasticweb instance on Kubernetes:

    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml
    namespace/dev created
    elasticweb.elasticweb.com.bolingcavalry/elasticweb-sample created
  • Switch to the controller's window and you'll find plenty of new log lines. Reading them shows that the Reconcile method ran twice: the first run created the deployment, service, and other resources, and the status update it wrote back appears to have triggered the second run, which found nothing left to do:

    2021-02-21T10:03:57.108+0800 INFO controllers.ElasticWeb 1. start reconcile logic {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T10:03:57.108+0800 INFO controllers.ElasticWeb 3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [nil] {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T10:03:57.210+0800 INFO controllers.ElasticWeb 4. deployment not exists {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T10:03:57.313+0800 INFO controllers.ElasticWeb set reference {"func": "createService"}
    2021-02-21T10:03:57.313+0800 INFO controllers.ElasticWeb start create service {"func": "createService"}
    2021-02-21T10:03:57.364+0800 INFO controllers.ElasticWeb create service success {"func": "createService"}
    2021-02-21T10:03:57.365+0800 INFO controllers.ElasticWeb expectReplicas [2] {"func": "createDeployment"}
    2021-02-21T10:03:57.365+0800 INFO controllers.ElasticWeb set reference {"func": "createDeployment"}
    2021-02-21T10:03:57.365+0800 INFO controllers.ElasticWeb start create deployment {"func": "createDeployment"}
    2021-02-21T10:03:57.382+0800 INFO controllers.ElasticWeb create deployment success {"func": "createDeployment"}
    2021-02-21T10:03:57.382+0800 INFO controllers.ElasticWeb singlePodQPS [500], replicas [2], realQPS[1000] {"func": "updateStatus"}
    2021-02-21T10:03:57.407+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
    2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 1. start reconcile logic {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [1000] {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 9. expectReplicas [2], realReplicas [2] {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 10. return now {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T10:03:57.407+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
  • Now inspect the resource objects in detail with kubectl get; everything matches expectations, and the elasticweb, service, deployment, and pods are all healthy:

    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get elasticweb -n dev
    NAME AGE
    elasticweb-sample 35s
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get service -n dev
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    elasticweb-sample NodePort 10.107.177.158 <none> 8080:30003/TCP 41s
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get deployment -n dev
    NAME READY UP-TO-DATE AVAILABLE AGE
    elasticweb-sample 2/2 2 2 46s
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
    NAME READY STATUS RESTARTS AGE
    elasticweb-sample-56fc5848b7-l5thk 1/1 Running 0 50s
    elasticweb-sample-56fc5848b7-lqjk5 1/1 Running 0 50s

Verifying the business function in a browser

  • The Docker image deployed here is tomcat, so verification is easy: if the default page loads and you can see the cat, tomcat started successfully. My Kubernetes host's IP address is 192.168.50.75, so I pointed a browser at http://192.168.50.75:30003. As the figure below shows, the business function works:

Changing the per-pod QPS

  • Optimizations in the service itself, or changes in its external dependencies (say a cache or database being scaled up), can raise the QPS a single pod can handle. Suppose the per-pod QPS rises from 500 to 800; let's see whether our Operator makes the adjustment automatically (total QPS is 600, so the pod count should drop from 2 to 1).

  • Create a file named update_single_pod_qps.yaml under config/samples/ with the following content:

    spec:
      singlePodQPS: 800

  • Run the following command to update the per-pod QPS from 500 to 800 (note: the type argument matters, don't leave it out):

    kubectl patch elasticweb elasticweb-sample \
    -n dev \
    --type merge \
    --patch "$(cat config/samples/update_single_pod_qps.yaml)"
  • Now check the controller log. As shown in the figure below, red box 1 shows that the spec has been updated, and red box 2 shows the pod count recomputed from the new parameters, exactly as expected:

  • Check the pods with kubectl get; the count has indeed dropped to 1:

    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
    NAME READY STATUS RESTARTS AGE
    elasticweb-sample-56fc5848b7-l5thk 1/1 Running 0 30m

  • Remember to check in the browser that tomcat is still serving;

Changing the total QPS

  • External QPS fluctuates constantly too, and our operator must adjust the pod count to follow the total QPS so that overall service quality holds. Next, let's change the total QPS and see whether the operator reacts:

  • Create a file named update_total_qps.yaml under config/samples/ with the following content:

    spec:
      totalQPS: 2600

  • Run the following command to update the total QPS from 600 to 2600 (again, don't leave out the type argument):

    kubectl patch elasticweb elasticweb-sample \
    -n dev \
    --type merge \
    --patch "$(cat config/samples/update_total_qps.yaml)"
  • Now check the controller log. As shown in the figure below, red box 1 shows that the spec has been updated, and red box 2 shows the pod count recomputed from the new parameters, as expected:

  • Check the pods with kubectl get; the count has grown to 4, and four pods can sustain 3200 QPS, which satisfies the current demand of 2600:

    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
    NAME READY STATUS RESTARTS AGE
    elasticweb-sample-56fc5848b7-8n7tq 1/1 Running 0 8m22s
    elasticweb-sample-56fc5848b7-f2lpb 1/1 Running 0 8m22s
    elasticweb-sample-56fc5848b7-l5thk 1/1 Running 0 48m
    elasticweb-sample-56fc5848b7-q8p5f 1/1 Running 0 8m22s

  • Remember to check in the browser that tomcat is still serving;
  • You're no doubt thinking that adjusting the pod count this way is rather crude. Well... you're right, it is. But you could write a small application that, on receiving the current QPS, calls client-go to update the elasticweb's totalQPS so that the operator adjusts the pod count in time; that more or less counts as autoscaling... right? A sketch of such a call follows.
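  • For illustration only, here is a minimal sketch of what that client-go call could look like, using the dynamic client so no generated clientset is required. The kubeconfig location and the hard-coded QPS value are assumptions for the demo; a real autoscaler would compute totalQPS from live metrics:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load kubeconfig from the default location (~/.kube/config).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := dynamic.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // The group/version/resource of the CRD deployed earlier in this article.
        gvr := schema.GroupVersionResource{
            Group:    "elasticweb.com.bolingcavalry",
            Version:  "v1",
            Resource: "elasticwebs",
        }
        // Same effect as the `kubectl patch ... --type merge` command above.
        patch := []byte(`{"spec":{"totalQPS":2600}}`)
        _, err = client.Resource(gvr).Namespace("dev").Patch(
            context.TODO(), "elasticweb-sample", types.MergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("totalQPS patched; the operator will reconcile the pod count")
    }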

Verifying deletion

  • Right now the dev namespace contains service, deployment, pod, and elasticweb resource objects. To remove them all, deleting the elasticweb alone is enough, because the service and deployment were associated with the elasticweb when they were created, as the red box in the figure highlights (a sketch of the idea follows):
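  • Judging by the "set reference" lines in the controller log, this association is the standard owner-reference mechanism from controller-runtime. The following is a minimal sketch of the idea (the function name is illustrative; the actual code is in part 5): once the ElasticWeb instance is recorded as the controller-owner, Kubernetes garbage collection removes the owned Deployment and Service when the owner is deleted:

    package controllers

    import (
        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

        elasticwebv1 "elasticweb/api/v1" // the module path used by this project
    )

    // setOwner records the ElasticWeb instance as the controller-owner of the
    // Deployment; the Service is handled the same way in createService.
    func setOwner(web *elasticwebv1.ElasticWeb, deployment *appsv1.Deployment, scheme *runtime.Scheme) error {
        return controllerutil.SetControllerReference(web, deployment, scheme)
    }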

  • Run the command that deletes the elasticweb:

    kubectl delete elasticweb elasticweb-sample -n dev

  • Then check the other resources; they have all been deleted automatically (only the namespace itself remains):

    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl delete elasticweb elasticweb-sample -n dev
    elasticweb.elasticweb.com.bolingcavalry "elasticweb-sample" deleted
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
    NAME READY STATUS RESTARTS AGE
    elasticweb-sample-56fc5848b7-9lcww 1/1 Terminating 0 45s
    elasticweb-sample-56fc5848b7-n7p7f 1/1 Terminating 0 45s
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
    NAME READY STATUS RESTARTS AGE
    elasticweb-sample-56fc5848b7-n7p7f 0/1 Terminating 0 73s
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
    No resources found in dev namespace.
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get deployment -n dev
    No resources found in dev namespace.
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get service -n dev
    No resources found in dev namespace.
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get namespace dev
    NAME STATUS AGE
    dev Active 97s

Building the image

  1. So far we've run the controller in the development environment and exercised every feature. In a real production environment, though, the controller doesn't live outside Kubernetes like this: it runs inside Kubernetes as a pod. So next we'll compile the controller code into a Docker image and run it on Kubernetes;
  2. The first thing to do is press Ctrl+C in the controller's terminal to stop the controller we started earlier;
  3. One prerequisite: you need an image registry that your Kubernetes cluster can reach, e.g. a Harbor instance on your LAN or the public hub.docker.com. For convenience I chose hub.docker.com, which requires a registered account;
  4. On the kubebuilder machine, open a terminal and run docker login, entering your hub.docker.com account and password at the prompts; docker push can then push images to hub.docker.com from that terminal (the site's connectivity is poor, so logging in may take several attempts);
  5. Run the following command to build the Docker image and push it to hub.docker.com; the image name is bolingcavalry/elasticweb:002:

    make docker-build docker-push IMG=bolingcavalry/elasticweb:002
  6. hub.docker.com's connectivity is exceptionally poor, so be sure to configure a registry mirror for Docker on the kubebuilder machine. If the command above fails with a timeout, retry a few times. The build also downloads many Go module dependencies, which takes patience and is equally prone to network failures and retries; all in all, a Harbor service on your LAN is the better choice;
  7. When the command finally succeeds, the output looks like this:

    zhaoqin@zhaoqindeMBP-2 elasticweb % make docker-build docker-push IMG=bolingcavalry/elasticweb:002
    /Users/zhaoqin/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
    go fmt ./...
    go vet ./...
    /Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
    go test ./... -coverprofile cover.out
    ? elasticweb [no test files]
    ? elasticweb/api/v1 [no test files]
    ok elasticweb/controllers 8.287s coverage: 0.0% of statements
    docker build . -t bolingcavalry/elasticweb:002
    [+] Building 146.8s (17/17) FINISHED
    => [internal] load build definition from Dockerfile 0.1s
    => => transferring dockerfile: 37B 0.0s
    => [internal] load .dockerignore 0.0s
    => => transferring context: 2B 0.0s
    => [internal] load metadata for gcr.io/distroless/static:nonroot 1.8s
    => [internal] load metadata for docker.io/library/golang:1.13 0.7s
    => [builder 1/9] FROM docker.io/library/golang:1.13@sha256:8ebb6d5a48deef738381b56b1d4cd33d99a5d608e0d03c5fe8dfa3f68d41a1f8 0.0s
    => [stage-1 1/3] FROM gcr.io/distroless/static:nonroot@sha256:b89b98ea1f5bc6e0b48c8be6803a155b2a3532ac6f1e9508a8bcbf99885a9152 0.0s
    => [internal] load build context 0.0s
    => => transferring context: 14.51kB 0.0s
    => CACHED [builder 2/9] WORKDIR /workspace 0.0s
    => CACHED [builder 3/9] COPY go.mod go.mod 0.0s
    => CACHED [builder 4/9] COPY go.sum go.sum 0.0s
    => CACHED [builder 5/9] RUN go mod download 0.0s
    => CACHED [builder 6/9] COPY main.go main.go 0.0s
    => CACHED [builder 7/9] COPY api/ api/ 0.0s
    => [builder 8/9] COPY controllers/ controllers/ 0.1s
    => [builder 9/9] RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go 144.5s
    => CACHED [stage-1 2/3] COPY --from=builder /workspace/manager . 0.0s
    => exporting to image 0.0s
    => => exporting layers 0.0s
    => => writing image sha256:622d30aa44c77d93db4093b005fce86b39d5ba5c6cd29f1fb2accb7e7f9b23b8 0.0s
    => => naming to docker.io/bolingcavalry/elasticweb:002 0.0s
    docker push bolingcavalry/elasticweb:002
    The push refers to repository [docker.io/bolingcavalry/elasticweb]
    eea77d209b68: Layer already exists
    8651333b21e7: Layer already exists
    002: digest: sha256:c09ab87f6fce3d85f1fda0ffe75ead9db302a47729aefd3ef07967f2b99273c5 size: 739
  8. Check hub.docker.com; as the figure below shows, the new image has been uploaded, so any machine with internet access can pull it for local use:

  9. With the image ready, run the following command to deploy the controller to the Kubernetes environment:

    make deploy IMG=bolingcavalry/elasticweb:002
  10. Next, create the elasticweb resource object just like before and verify that all the resources are created:

    zhaoqin@zhaoqindeMBP-2 elasticweb % make deploy IMG=bolingcavalry/elasticweb:002
    /Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
    cd config/manager && kustomize edit set image controller=bolingcavalry/elasticweb:002
    kustomize build config/default | kubectl apply -f -
    namespace/elasticweb-system created
    Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
    customresourcedefinition.apiextensions.k8s.io/elasticwebs.elasticweb.com.bolingcavalry configured
    role.rbac.authorization.k8s.io/elasticweb-leader-election-role created
    clusterrole.rbac.authorization.k8s.io/elasticweb-manager-role created
    clusterrole.rbac.authorization.k8s.io/elasticweb-proxy-role created
    Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
    clusterrole.rbac.authorization.k8s.io/elasticweb-metrics-reader created
    rolebinding.rbac.authorization.k8s.io/elasticweb-leader-election-rolebinding created
    clusterrolebinding.rbac.authorization.k8s.io/elasticweb-manager-rolebinding created
    clusterrolebinding.rbac.authorization.k8s.io/elasticweb-proxy-rolebinding created
    service/elasticweb-controller-manager-metrics-service created
    deployment.apps/elasticweb-controller-manager created
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml
    namespace/dev created
    elasticweb.elasticweb.com.bolingcavalry/elasticweb-sample created
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get service -n dev
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    elasticweb-sample NodePort 10.96.234.7 <none> 8080:30003/TCP 13s
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get deployment -n dev
    NAME READY UP-TO-DATE AVAILABLE AGE
    elasticweb-sample 2/2 2 2 18s
    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
    NAME READY STATUS RESTARTS AGE
    elasticweb-sample-56fc5848b7-559lw 1/1 Running 0 22s
    elasticweb-sample-56fc5848b7-hp4wv 1/1 Running 0 22s
  11. That's not all! There's one more important thing to check: the controller's log. First, see which pods exist:

    zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    dev elasticweb-sample-56fc5848b7-559lw 1/1 Running 0 68s
    dev elasticweb-sample-56fc5848b7-hp4wv 1/1 Running 0 68s
    elasticweb-system elasticweb-controller-manager-5795d4d98d-t6jvc 2/2 Running 0 98s
    kube-system coredns-7f89b7bc75-5pdwc 1/1 Running 15 20d
    kube-system coredns-7f89b7bc75-nvbvm 1/1 Running 15 20d
    kube-system etcd-hedy 1/1 Running 15 20d
    kube-system kube-apiserver-hedy 1/1 Running 15 20d
    kube-system kube-controller-manager-hedy 1/1 Running 16 20d
    kube-system kube-flannel-ds-v84vc 1/1 Running 22 20d
    kube-system kube-proxy-hlppx 1/1 Running 15 20d
    kube-system kube-scheduler-hedy 1/1 Running 16 20d
    test-clientset client-test-deployment-7677cc9669-kd7l7 1/1 Running 9 9d
    test-clientset client-test-deployment-7677cc9669-kt5rv 1/1 Running 9 9d
  12. The controller's pod is named elasticweb-controller-manager-5795d4d98d-t6jvc. View its log with the command below; the extra -c manager argument is needed because this pod holds two containers and you must pick the right one:

    kubectl logs -f \
    elasticweb-controller-manager-5795d4d98d-t6jvc \
    -c manager \
    -n elasticweb-system
  13. The familiar business log shows up again:

    2021-02-21T08:52:27.064Z INFO controllers.ElasticWeb 1. start reconcile logic {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T08:52:27.064Z INFO controllers.ElasticWeb 3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [nil] {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T08:52:27.064Z INFO controllers.ElasticWeb 4. deployment not exists {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T08:52:27.064Z INFO controllers.ElasticWeb set reference {"func": "createService"}
    2021-02-21T08:52:27.064Z INFO controllers.ElasticWeb start create service {"func": "createService"}
    2021-02-21T08:52:27.107Z INFO controllers.ElasticWeb create service success {"func": "createService"}
    2021-02-21T08:52:27.107Z INFO controllers.ElasticWeb expectReplicas [2] {"func": "createDeployment"}
    2021-02-21T08:52:27.107Z INFO controllers.ElasticWeb set reference {"func": "createDeployment"}
    2021-02-21T08:52:27.107Z INFO controllers.ElasticWeb start create deployment {"func": "createDeployment"}
    2021-02-21T08:52:27.119Z INFO controllers.ElasticWeb create deployment success {"func": "createDeployment"}
    2021-02-21T08:52:27.119Z INFO controllers.ElasticWeb singlePodQPS [500], replicas [2], realQPS[1000] {"func": "updateStatus"}
    2021-02-21T08:52:27.198Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
    2021-02-21T08:52:27.198Z INFO controllers.ElasticWeb 1. start reconcile logic {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T08:52:27.198Z INFO controllers.ElasticWeb 3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [1000] {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T08:52:27.198Z INFO controllers.ElasticWeb 9. expectReplicas [2], realReplicas [2] {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T08:52:27.198Z INFO controllers.ElasticWeb 10. return now {"elasticweb": "dev/elasticweb-sample"}
    2021-02-21T08:52:27.198Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
  14. Verify in the browser once more that tomcat started successfully;

Uninstalling and cleaning up

  • Once you're done experimenting, if you want to clean up everything created earlier (note: this removes the resources, not just the resource objects), run the following command:

    make uninstall

  • With that, the whole operator design, development, deployment, and verification flow is complete. I hope this series can serve as a reference during your own operator development;

You're not alone: Xinchen's original series keep you company

  1. Java series
  2. Spring series
  3. Docker series
  4. kubernetes series
  5. Database and middleware series
  6. DevOps series

Follow my WeChat official account: 程式設計師欣宸 (Programmer Xinchen)

Search WeChat for 「程式設計師欣宸」. I'm Xinchen, and I look forward to exploring the Java world with you...

https://github.com/zq2599/blog_demos