Kubernetes Study Notes, Part 12: The Resource Metrics API and Custom Metrics API
Chapter 1: Introduction
Resource metrics used to be collected with Heapster, but Heapster is now deprecated.
Since v1.8, Kubernetes has exposed resource metrics through dedicated APIs.
Resource metrics: metrics-server (core metrics)
Custom metrics: Prometheus plus k8s-prometheus-adapter (which converts the data Prometheus collects into the metrics API format)
Prometheus data in Kubernetes must pass through k8s-prometheus-adapter before Kubernetes can consume it.
The new-generation architecture:
Core metrics pipeline:
Composed of the kubelet, metrics-server, and the APIs they expose through the API server; provides cumulative CPU usage, real-time memory usage, pod resource usage, and container disk usage.
Monitoring pipeline:
Collects all kinds of metrics from the system and serves them to end users, storage systems, and the HPA. It carries core metrics plus many non-core metrics; the non-core metrics cannot be interpreted by Kubernetes itself.
Chapter 2: Installing and Deploying metrics-server
1. Download the YAML manifests and install
Project address: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server
[root@k8s-master_01 manifests]# mkdir metrics-server
[root@k8s-master_01 manifests]# cd metrics-server
[root@k8s-master_01 metrics-server]# for file in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml;do wget https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/cluster/addons/metrics-server/$file;done #remember to download the raw-format files
[root@k8s-master_01 metrics-server]# grep image: ./* #check which images are used; if you can reach k8s.gcr.io, ignore this step, otherwise pull the images in advance (search for them on Aliyun) and load them by either editing the manifests or retagging the images
./metrics-server-deployment.yaml: image: k8s.gcr.io/metrics-server-amd64:v0.2.1
./metrics-server-deployment.yaml: image: k8s.gcr.io/addon-resizer:1.8.1
[root@k8s-node01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/criss/addon-resizer:1.8.1 #pull the images manually on every node host; note the addon-resizer tag has no leading v
[root@k8s-node01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/k8s-kernelsky/metrics-server-amd64:v0.2.1
[root@k8s-master_01 metrics-server]# grep image: metrics-server-deployment.yaml
image: registry.cn-hangzhou.aliyuncs.com/k8s-kernelsky/metrics-server-amd64:v0.2.1
image: registry.cn-hangzhou.aliyuncs.com/criss/addon-resizer:1.8.1
[root@k8s-master_01 metrics-server]# kubectl apply -f .
[root@k8s-master_01 metrics-server]# kubectl get pod -n kube-system
2. Verify
[root@k8s-master01 ~]# kubectl api-versions |grep metrics
metrics.k8s.io/v1beta1
[root@k8s-master01 ~]# kubectl proxy --port=8080 #open a new terminal and start the proxy
[root@k8s-master_01 metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1 #see which resources this API group contains
[root@k8s-master_01 metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods #it may take a while before data shows up
[root@k8s-master_01 metrics-server]# curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
[root@k8s-master01 ~]# kubectl top node
NAME           CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
k8s-master01   176m         4%        3064Mi          39%
k8s-node01     62m          1%        4178Mi          54%
k8s-node02     65m          1%        2141Mi          27%
[root@k8s-node01 ~]# kubectl top pods
NAME                CPU(cores)   MEMORY(bytes)
node-affinity-pod   0m           1Mi
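For reference, the nodes endpoint above returns a NodeMetricsList object. A trimmed sketch of its shape (the field values here are illustrative, not captured from this cluster):

```json
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "items": [
    {
      "metadata": { "name": "k8s-node01" },
      "timestamp": "2018-12-16T12:00:00Z",
      "window": "30s",
      "usage": { "cpu": "62m", "memory": "4278272Ki" }
    }
  ]
}
```

kubectl top simply formats the usage fields of this response.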
3. Caveats
1. Newer versions (v1.11 and later) hit a problem: metrics-server reads its data from the kubelet Summary API, which it scrapes on port 10255 by default. Port 10255 speaks plain HTTP, which upstream evidently considered insecure, so it was disabled in favor of port 10250, which speaks HTTPS. The data source therefore has to be changed:
from - --source=kubernetes.summary_api:''
to - --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true #meaning: talk HTTPS on port 10250, but if the certificate cannot be verified, still allow falling back to insecure, unauthenticated communication
[root@k8s-master01 deploy]# grep source=kubernetes metrics-server-deployment.yaml
2. [root@k8s-master01 deploy]# grep nodes/stats resource-reader.yaml #in newer versions the ClusterRole is missing the nodes/stats permission, which has to be added by hand
[root@k8s-master01 deploy]# cat resource-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats #add this line
- namespaces
3. Testing on v1.12.3 showed that the following changes are also needed for a successful deployment (the permission fix above is still required; other versions untested)
[root@k8s-master01 metrics-server]# vim metrics-server-deployment.yaml
command: #change the metrics-server command arguments to the following
- /metrics-server
- --metric-resolution=30s
- --kubelet-port=10250
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
command: #change the metrics-server-nanny command arguments to the following
- /pod_nanny
- --config-dir=/etc/config
- --cpu=40m
- --extra-cpu=0.5m
- --memory=40Mi
- --extra-memory=4Mi
- --threshold=5
- --deployment=metrics-server-v0.3.1
- --container=metrics-server
- --poll-period=300000
- --estimator=exponential
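The metrics-server-nanny flags above size metrics-server proportionally to the cluster. As I understand the addon-resizer, each resource works out to roughly base + extra * node_count (the exponential estimator additionally rounds the node count up before applying this); a small sketch with hypothetical node counts:

```python
# Rough sketch of how the addon-resizer (pod_nanny) sizes metrics-server.
# With the linear estimator each resource is base + extra * node_count;
# --estimator=exponential rounds the node count up first. This is my reading
# of the nanny's behavior, not something stated in this note.

def nanny_resource(base, extra_per_node, nodes):
    """Resource value (in the base unit) the nanny would set."""
    return base + extra_per_node * nodes

# --memory=40Mi --extra-memory=4Mi on a hypothetical 3-node cluster:
print(nanny_resource(40, 4, 3))    # memory request, in Mi
# --cpu=40m --extra-cpu=0.5m on the same cluster:
print(nanny_resource(40, 0.5, 3))  # cpu request, in millicores
```

This is why the nanny needs --deployment and --container flags: it rewrites the metrics-server container's resources as nodes join or leave.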
Chapter 3: Installing and Deploying Prometheus
Project address: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus (the prometheus addon only exists in v1.11.0 and later, so I chose v1.11.0 to deploy)
1. Download the YAML manifests and pre-deployment steps
[root@k8s-node01 ~]# cd /mnt/
[root@k8s-node01 mnt]# git clone https://github.com/kubernetes/kubernetes.git #to save trouble I just cloned the whole kubernetes repo
[root@k8s-node01 mnt]# cd kubernetes/cluster/addons/prometheus/
[root@k8s-node01 prometheus]# git checkout v1.11.0
[root@k8s-node01 prometheus]# cd ..
[root@k8s-node01 addons]# cp -r prometheus /root/manifests/
[root@k8s-node01 manifests]# cd prometheus/
[root@k8s-node01 prometheus]# grep -w "namespace: kube-system" ./* #prometheus uses the kube-system namespace by default; we deploy it into a dedicated namespace to make later management easier
./alertmanager-configmap.yaml: namespace: kube-system
......
[root@k8s-node01 prometheus]# sed -i 's/namespace: kube-system/namespace: k8s-monitor/g' ./*
[root@k8s-node01 prometheus]# grep storage: ./* #the install needs two PVs, which we will create shortly
./alertmanager-pvc.yaml: storage: "2Gi"
./prometheus-statefulset.yaml: storage: "16Gi"
[root@k8s-node01 prometheus]# cat pv.yaml #note the storageClassName of the second PV
apiVersion: v1
kind: PersistentVolume
metadata:
name: alertmanager
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
nfs:
path: /data/volumes/v1
server: 172.16.150.158
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: standard
spec:
capacity:
storage: 25Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: standard #the storageClassName must match the one defined under volumeClaimTemplates in prometheus-statefulset.yaml
nfs:
path: /data/volumes/v2
server: 172.16.150.158
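The second PV above is consumed through the statefulset's volumeClaimTemplates rather than a standalone PVC (the bound claim k8s-monitor/prometheus-data-prometheus-0 shows up later). A sketch of the matching stanza; the layout is assumed, not copied from prometheus-statefulset.yaml:

```yaml
# prometheus-statefulset.yaml (excerpt; assumed layout)
volumeClaimTemplates:
- metadata:
    name: prometheus-data
  spec:
    storageClassName: standard   # must equal the PV's storageClassName
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: "16Gi"          # must fit within the PV's 25Gi capacity
```

If the class names do not match, the PVC stays Pending and prometheus-0 never schedules.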
[root@k8s-node01 prometheus]# kubectl create namespace k8s-monitor
[root@k8s-node01 prometheus]# mkdir node-exporter kube-state-metrics alertmanager prometheus #put each component in its own directory to make deployment and management easier
[root@k8s-node01 prometheus]# mv node-exporter-* node-exporter
[root@k8s-node01 prometheus]# mv alertmanager-* alertmanager
[root@k8s-node01 prometheus]# mv kube-state-metrics-* kube-state-metrics
[root@k8s-node01 prometheus]# mv prometheus-* prometheus
2. Install node-exporter (collects node-level metrics)
[root@k8s-node01 prometheus]# grep -r image: node-exporter/*
node-exporter/node-exporter-ds.yml: image: "prom/node-exporter:v0.15.2" #an unofficial image that can be pulled even without a proxy, so there is no need to download it in advance
[root@k8s-node01 prometheus]# kubectl apply -f node-exporter/
daemonset.extensions "node-exporter" created
service "node-exporter" created
[root@k8s-node01 prometheus]# kubectl get pod -n k8s-monitor
NAME                  READY     STATUS    RESTARTS   AGE
node-exporter-l5zdw   1/1       Running   0          1m
node-exporter-vwknx   1/1       Running   0          1m
3. Install prometheus
[root@k8s-master_01 prometheus]# kubectl apply -f pv.yaml
persistentvolume "alertmanager" configured
persistentvolume "standard" created
[root@k8s-master_01 prometheus]# kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
alertmanager   5Gi        RWO,RWX        Recycle          Available                                      9s
standard       25Gi       RWO            Recycle          Available                                      9s
[root@k8s-node01 prometheus]# grep -i image prometheus/* #check whether the images need to be downloaded
[root@k8s-node01 prometheus]# vim prometheus-service.yaml #the prometheus service defaults to type ClusterIP; change it to NodePort so it can be reached from outside the cluster
...
type: NodePort
ports:
- name: http
port: 9090
protocol: TCP
targetPort: 9090
nodePort: 30090
...
[root@k8s-node01 prometheus]# kubectl apply -f prometheus/
[root@k8s-node01 prometheus]# kubectl get pod -n k8s-monitor
NAME READY STATUS RESTARTS AGE
node-exporter-l5zdw 1/1 Running 0 24m
node-exporter-vwknx 1/1 Running 0 24m
prometheus-0 2/2 Running 0 1m
[root@k8s-node01 prometheus]# kubectl get svc -n k8s-monitor
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
node-exporter ClusterIP None <none> 9100/TCP 25m
prometheus NodePort 10.96.9.121 <none> 9090:30090/TCP 22m
[root@k8s-master_01 prometheus]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
alertmanager 5Gi RWO,RWX Recycle Available 1h
standard 25Gi RWO Recycle Bound k8s-monitor/prometheus-data-prometheus-0 standard 1h
Access prometheus in a browser at node-IP:NodePort
4. Deploy kube-state-metrics (exposes the state of Kubernetes objects as metrics)
[root@k8s-node01 kube-state-metrics]# grep image: ./*
./kube-state-metrics-deployment.yaml: image: quay.io/coreos/kube-state-metrics:v1.3.0
./kube-state-metrics-deployment.yaml: image: k8s.gcr.io/addon-resizer:1.7
[root@k8s-node01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/ccgg/addon-resizer:1.7
[root@k8s-node01 kube-state-metrics]# vim kube-state-metrics-deployment.yaml #change the image address
[root@k8s-node01 kube-state-metrics]# kubectl apply -f kube-state-metrics-deployment.yaml
deployment.extensions "kube-state-metrics" configured
[root@k8s-node01 kube-state-metrics]# kubectl get pod -n k8s-monitor
NAME READY STATUS RESTARTS AGE
kube-state-metrics-54849b96b4-dmqtk 2/2 Running 0 23s
node-exporter-l5zdw 1/1 Running 0 2h
node-exporter-vwknx 1/1 Running 0 2h
prometheus-0 2/2 Running 0 1h
5. Deploy k8s-prometheus-adapter (exposes the Prometheus data as an aggregated API service)
Project address: https://github.com/DirectXMan12/k8s-prometheus-adapter
[root@k8s-master01 ~]# cd /etc/kubernetes/pki/
[root@k8s-master01 pki]# (umask 077; openssl genrsa -out serving.key 2048)
[root@k8s-master01 pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving" #the CN must be serving
[root@k8s-master01 pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
[root@k8s-master01 pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n k8s-monitor #the secret must be named cm-adapter-serving-certs
[root@k8s-master01 pki]# kubectl get secret -n k8s-monitor
[root@k8s-master01 pki]# cd
[root@k8s-master01 ~]# git clone https://github.com/DirectXMan12/k8s-prometheus-adapter.git
[root@k8s-master01 ~]# cd k8s-prometheus-adapter/deploy/manifests/
[root@k8s-node01 manifests]# grep namespace: ./* #change the namespace to k8s-monitor everywhere except in the role bindings
[root@k8s-master01 manifests]# grep image: ./* #the images do not need to be pre-downloaded
[root@k8s-master01 manifests]# sed -i 's/namespace: custom-metrics/namespace: k8s-monitor/g' ./* #do not replace the ones in the role bindings
[root@k8s-master01 manifests]# kubectl apply -f ./
[root@k8s-master01 manifests]# kubectl get pod -n k8s-monitor
[root@k8s-master01 manifests]# kubectl get svc -n k8s-monitor
[root@k8s-master01 manifests]# kubectl api-versions |grep custom
Chapter 4: Deploying prometheus + grafana
[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes-retired/heapster/master/deploy/kube-config/influxdb/grafana.yaml #could not find a grafana manifest in the prometheus addon, so I borrowed one from the heapster project
[root@k8s-master01 ~]# egrep -i "influxdb|namespace|nodeport" grafana.yaml #comment out the influxdb environment variable and change the namespace and the service type
[root@k8s-master01 ~]# kubectl apply -f grafana.yaml
[root@k8s-master01 ~]# kubectl get svc -n k8s-monitor
[root@k8s-master01 ~]# kubectl get pod -n k8s-monitor
Log in to grafana and change the data source
Configure the data source
Click Dashboards on the right to import grafana's built-in prometheus templates
Back on Home, pick the corresponding template from the drop-down to view its data
For example:
However, grafana's built-in templates do not quite match the data, so download Kubernetes-oriented templates from the grafana site instead, at https://grafana.com/dashboards
Search the grafana site for k8s templates; if the search box does not react to clicks, append the search term directly to the URL
We will use kubernetes cluster (prometheus) as a test
Click the template you want and download its JSON file
After the download completes, import the file
Choose to upload the file
Select the data source after importing
The dashboard shown after importing
Chapter 5: Implementing HPA
1. Testing with the v1 API
[root@k8s-master01 alertmanager]# kubectl api-versions |grep autoscaling
autoscaling/v1
autoscaling/v2beta1
[root@k8s-master01 manifests]# cat deploy-demon.yaml
apiVersion: v1
kind: Service
metadata:
name: myapp
namespace: default
spec:
selector:
app: myapp
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
nodePort: 32222
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deploy
spec:
replicas: 2
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: ikubernetes/myapp:v2
ports:
- name: httpd
containerPort: 80
resources:
requests:
memory: "64Mi"
cpu: "100m"
limits:
memory: "128Mi"
cpu: "200m"
[root@k8s-master01 manifests]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP             47d
my-nginx     NodePort    10.104.13.148   <none>        80:32008/TCP        19d
myapp        NodePort    10.100.76.180   <none>        80:32222/TCP        16s
tomcat       ClusterIP   10.106.222.72   <none>        8080/TCP,8009/TCP   19d
[root@k8s-master01 manifests]# kubectl get pod
NAME                            READY     STATUS    RESTARTS   AGE
myapp-deploy-5db497dbfb-h7zcb   1/1       Running   0          16s
myapp-deploy-5db497dbfb-tvsf5   1/1       Running   0          16s
Test
[root@k8s-master01 manifests]# kubectl autoscale deployment myapp-deploy --min=1 --max=8 --cpu-percent=60
deployment.apps "myapp-deploy" autoscaled
[root@k8s-master01 manifests]# kubectl get hpa
NAME           REFERENCE                 TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp-deploy   Deployment/myapp-deploy   <unknown>/60%   1         8         0          22s
[root@k8s-master01 pod-dir]# yum install httpd-tools -y #provides the ab benchmarking tool
[root@k8s-master01 pod-dir]# ab -c 1000 -n 5000000 http://172.16.150.213:32222/index.html
[root@k8s-master01 ~]# kubectl describe hpa
Name: myapp-deploy
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Sun, 16 Dec 2018 20:34:41 +0800
Reference: Deployment/myapp-deploy
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 178% (178m) / 60%
Min replicas: 1
Max replicas: 8
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale False BackoffBoth the time since the previous scale is still within both the downscale and upscale forbidden windows
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited True ScaleUpLimit the desired replica count is increasing faster than the maximum scale rate
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 19m horizontal-pod-autoscaler New size: 1; reason: All metrics below target
Normal SuccessfulRescale 2m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-deploy-5db497dbfb-6kssf 1/1 Running 0 2m
myapp-deploy-5db497dbfb-h7zcb 1/1 Running 0 24m
[root@k8s-master01 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp-deploy Deployment/myapp-deploy 178%/60% 1 8 2 20m
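The replica counts above follow the HPA's standard calculation, desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped to the min/max bounds; the actual pace of scaling is further limited by the scale-up/scale-down windows seen in the conditions earlier. A sketch of the formula (my summary of the documented algorithm, not something this note defines):

```python
import math

def desired_replicas(current_replicas, current_value, target_value,
                     min_replicas=1, max_replicas=8):
    """Sketch of the HPA replica calculation, clamped to min/max bounds."""
    desired = math.ceil(current_replicas * current_value / target_value)
    return max(min_replicas, min(max_replicas, desired))

# With the 178% utilization observed above against the 60% target:
print(desired_replicas(1, 178, 60))  # scale out
print(desired_replicas(2, 10, 60))   # scale back in once load drops
```

Note the controller will not jump straight to the computed value when rate limits apply, which is why the events above show the deployment stepping from 1 to 2 rather than further.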
2. Using v2beta1
[root@k8s-master01 pod-dir]# cat hpa-demo.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: myapp-hpa-v2
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myapp-deploy
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 55
- type: Resource
resource:
name: memory
targetAverageValue: 100Mi
[root@k8s-master01 pod-dir]# kubectl delete hpa myapp-deploy
horizontalpodautoscaler.autoscaling "myapp-deploy" deleted
[root@k8s-master01 pod-dir]# kubectl apply -f hpa-demo.yaml
horizontalpodautoscaler.autoscaling "myapp-hpa-v2" created
[root@k8s-master01 pod-dir]# kubectl get hpa
NAME           REFERENCE                 TARGETS                          MINPODS   MAXPODS   REPLICAS   AGE
myapp-hpa-v2   Deployment/myapp-deploy   <unknown>/100Mi, <unknown>/55%   1         10        0          6s
Test
[root@k8s-master01 ~]# kubectl describe hpa
Name: myapp-hpa-v2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp: Sun, 16 Dec 2018 21:07:25 +0800
Reference: Deployment/myapp-deploy
Metrics: ( current / target )
resource memory on pods: 1765376 / 100Mi
resource cpu on pods (as a percentage of request): 200% (200m) / 55%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 4
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 18s horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-deploy-5db497dbfb-5n885 1/1 Running 0 26s
myapp-deploy-5db497dbfb-h7zcb 1/1 Running 0 40m
myapp-deploy-5db497dbfb-z2tqd 1/1 Running 0 26s
myapp-deploy-5db497dbfb-zkjhw 1/1 Running 0 26s
[root@k8s-master01 ~]# kubectl describe hpa
Name: myapp-hpa-v2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp: Sun, 16 Dec 2018 21:07:25 +0800
Reference: Deployment/myapp-deploy
Metrics: ( current / target )
resource memory on pods: 1765376 / 100Mi
resource cpu on pods (as a percentage of request): 0% (0) / 55%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale False BackoffBoth the time since the previous scale is still within both the downscale and upscale forbidden windows
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 6m horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-deploy-5db497dbfb-h7zcb 1/1 Running 0 46m
3. Testing custom metrics with v2beta1
[root@k8s-master01 pod-dir]# cat ../deploy-demon-metrics.yaml
apiVersion: v1
kind: Service
metadata:
name: myapp
namespace: default
spec:
selector:
app: myapp
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
nodePort: 32222
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deploy
spec:
replicas: 2
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: ikubernetes/metrics-app #a test image that exposes custom metrics
ports:
- name: httpd
containerPort: 80
[root@k8s-master01 pod-dir]# kubectl apply -f deploy-demon-metrics.yaml
[root@k8s-master01 pod-dir]# cat hpa-custom.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: myapp-hpa-v2
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myapp-deploy
minReplicas: 1
maxReplicas: 10
metrics:
- type: Pods #note the type
pods:
metricName: http_requests #the custom metric exposed by the container
targetAverageValue: 800m #a milli-quantity: 800m is 0.8 http_requests per second averaged across the pods
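The 800m target is a Kubernetes quantity, not a count of 800: the m suffix means milli-units, while memory targets such as the earlier 100Mi use binary suffixes. A small parser sketch for the common suffixes (my own helper, not part of any Kubernetes tooling):

```python
# Decode Kubernetes quantity strings such as "800m", "100Mi", "2Gi".
SUFFIXES = {
    "m": 1e-3,                 # milli (CPU and custom-metric values)
    "Ki": 2**10, "Mi": 2**20,  # binary suffixes (memory values)
    "Gi": 2**30,
}

def parse_quantity(q: str) -> float:
    """Convert a quantity string to a plain float in base units."""
    for suffix, factor in SUFFIXES.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)

print(parse_quantity("800m"))   # the HPA target above: 0.8 requests/second
print(parse_quantity("100Mi"))  # the memory target from the earlier HPA
```

So the HPA scales out once the average http_requests rate per pod exceeds 0.8 per second.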
[root@k8s-master01 pod-dir]# kubectl apply -f hpa-custom.yaml
[root@k8s-master01 pod-dir]# kubectl describe hpa myapp-hpa-v2
Name: myapp-hpa-v2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","ks":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp: Sun, 16 Dec 2018 22:09:32 +0800
Reference: Deployment/myapp-deploy
Metrics: ( current / target )
"http_requests" on pods: <unknown> / 800m
Min replicas: 1
Max replicas: 10
Events: <none>
[root@k8s-master01 pod-dir]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp-hpa-v2 Deployment/myapp-deploy <unknown>/800m 1 10 2 5m
Test:
#the image seems to have a problem; still to be resolved