
Kubernetes Cluster Monitoring Solution

Tags: k8s, monitoring, prometheus, grafana, node-exporter

This article describes how to monitor a Kubernetes cluster using node-exporter, Prometheus, and Grafana.
The overall idea is similar to the ELK/EFK stacks: the node-exporter component exposes node-level metrics, Prometheus scrapes (pulls) and stores that data, and Grafana presents it to users as graphs in a web UI.

Before we start, it is worth asking: what is Prometheus?
Prometheus is an open-source monitoring and alerting system and time-series database (TSDB) originally developed at SoundCloud. Since 2012 it has been adopted by many companies and organizations, has a very active developer and user community, and is now an independent open-source project. Prometheus joined the CNCF (Cloud Native Computing Foundation) in 2016 as the second project hosted by the foundation, after Kubernetes. Its design draws on Google's internal monitoring systems, so it pairs naturally with Kubernetes, which also originated at Google. Compared with an InfluxDB-based solution it offers better performance, and alerting is built in. It was designed for large cluster environments with a pull-based collection model: an application only needs to expose a metrics endpoint and register that endpoint with Prometheus for data collection to work. The diagram below shows the Prometheus architecture.

[Figure: Prometheus architecture diagram]
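
To make the pull model concrete: an application only has to serve its metrics over HTTP in Prometheus' plain-text exposition format. A minimal sketch of what such a /metrics endpoint returns (the metric name and labels below are illustrative, not taken from this cluster):

# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="400"} 3
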
Key features of Prometheus:
1. A multi-dimensional data model (a time series is identified by a metric name and a set of key/value labels)
2. A flexible query language (PromQL) over those dimensions
3. No reliance on distributed storage; single server nodes are autonomous
4. Time-series collection via a pull model over HTTP
5. Pushing time series is supported through an intermediary push gateway
6. Targets are discovered via service discovery or static configuration
7. Multiple modes of graphing and dashboarding support

Prometheus components. The Prometheus ecosystem consists of multiple components, many of which are optional:
1. The Prometheus server, which scrapes and stores time-series data
2. Client libraries for instrumenting application code or writing exporters (Go, Java, Python, Ruby)
3. A push gateway for supporting short-lived jobs
4. Visualization dashboards (two options, PromDash and Grafana; Grafana is the mainstream choice today)
5. Special-purpose exporters for services such as HAProxy, StatsD, and Graphite
6. An alert manager (Alertmanager), which separately handles alert aggregation, routing, and silencing

The Prometheus components are essentially all written in Go, which makes them easy to build and deploy; they have no special dependencies and generally run standalone.
(The text above is adapted from material available online.)

Now let's start the deployment work.
I. Environment
Operating system: CentOS Linux 7.2, 64-bit
Kubernetes version: 1.9.0 (deployed with kubeadm)

Master node IP: 192.168.115.5/24
Node (worker) IP: 192.168.115.6/24

II. Pull the required images on all nodes of the k8s cluster

# docker pull prom/node-exporter
# docker pull prom/prometheus:v2.0.0
# docker pull grafana/grafana:4.2.0
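
As a quick sanity check, you can confirm on each node that the images are now present locally (the grep pattern is just for convenience):

# docker images | grep -E 'node-exporter|prometheus|grafana'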

III. Deploy the node-exporter component as a DaemonSet

# cat node-exporter.yaml 
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter

Create the Pods and Service from the file above:

# kubectl create -f  node-exporter.yaml 
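
To confirm that the DaemonSet is running and that the metrics endpoint responds, something like the following can be used (192.168.115.5 is the Master IP from the environment above, and 31672 is the NodePort defined in the Service):

# kubectl get daemonset,svc -n kube-system | grep node-exporter
# curl -s http://192.168.115.5:31672/metrics | head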

IV. Deploy the Prometheus component
1. RBAC file

# cat rbac-setup.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
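
As an optional spot-check of the permissions granted above, kubectl's impersonation flag can ask the apiserver directly; both commands should answer "yes":

# kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:prometheus
# kubectl auth can-i get pods --as=system:serviceaccount:kube-system:prometheus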

2. Manage the Prometheus configuration file as a ConfigMap

# cat configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval:     15s
      evaluation_interval: 15s
    scrape_configs:

    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https

    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

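With this configuration, the kubernetes-service-endpoints and kubernetes-pods jobs only scrape objects that opt in through annotations. A hedged sketch of a Service that would be picked up (the name, port, and selector are hypothetical; only the annotations matter, and they map to the relabel rules above):

apiVersion: v1
kind: Service
metadata:
  name: my-app                      # hypothetical Service, for illustration only
  annotations:
    prometheus.io/scrape: "true"    # required: the keep rule matches this annotation
    prometheus.io/port: "8080"      # optional: overrides the scrape port
    prometheus.io/path: "/metrics"  # optional: overrides the metrics path
spec:
  ports:
  - port: 8080
  selector:
    app: my-app
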
3. Prometheus Deployment file

# cat prometheus.deploy.yml 
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus    
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config       

4. Prometheus Service file

# cat prometheus.svc.yml 
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus

5. Create the corresponding objects from the YAML files above

# kubectl create -f  rbac-setup.yaml
# kubectl create -f  configmap.yaml 
# kubectl create -f  prometheus.deploy.yml 
# kubectl create -f  prometheus.svc.yml 
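
After a short while, the Prometheus Pod should be Running and its Service exposed on NodePort 30003:

# kubectl get pods,svc -n kube-system | grep prometheus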

The NodePort for node-exporter is 31672; visiting http://192.168.115.5:31672/metrics shows the exported metrics.
The NodePort for Prometheus is 30003; visiting http://192.168.115.5:30003/targets shows that Prometheus has successfully connected to the Kubernetes apiserver.
The Prometheus web UI provides basic querying; for example, the CPU usage of each Pod in the Kubernetes cluster can be queried with:

sum by (pod_name)( rate(container_cpu_usage_seconds_total{image!="", pod_name!=""}[1m] ) )

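Node-level metrics exposed by node-exporter can be queried the same way, for example non-idle CPU usage per node (note that the metric is named node_cpu in older node-exporter releases and node_cpu_seconds_total from v0.16 onwards, so adjust to the image version actually pulled):

sum by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[1m]))
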
If these queries return data, Prometheus is collecting and storing cluster metrics correctly. Next, we can deploy the Grafana component to present the data through a friendlier web UI.

V. Deploy the Grafana component
1. Grafana Deployment file

# cat grafana-deploy.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:4.2.0
        name: grafana-core
        imagePullPolicy: IfNotPresent
        # env:
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var
      volumes:
      - name: grafana-persistent-storage
        emptyDir: {}
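
If you prefer not to keep the default admin/admin account (see below), the initial credentials can be overridden by adding two standard Grafana environment variables to the env section of the container above (the values here are placeholders):

          - name: GF_SECURITY_ADMIN_USER
            value: "admin"
          - name: GF_SECURITY_ADMIN_PASSWORD
            value: "change-me"      # placeholder, choose your own password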

2. Grafana Service file

# cat grafana-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
    - port: 3000
  selector:
    app: grafana
    component: core

3. Grafana Ingress file
# cat grafana-ing.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
   name: grafana
   namespace: kube-system
spec:
   rules:
   - host: k8s.grafana
     http:
       paths:
       - path: /
         backend:
          serviceName: grafana
          servicePort: 3000

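Create the Grafana objects from the three files above:

# kubectl create -f grafana-deploy.yaml
# kubectl create -f grafana-svc.yaml
# kubectl create -f grafana-ing.yaml
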
The Traefik web UI shows that the k8s.grafana service has been published successfully.
Add a hosts entry resolving k8s.grafana and test access.
You can also access Grafana directly through its NodePort.
The default username and password are both admin.
Configure Prometheus as the data source.
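When adding the data source, the URL can point either to the Prometheus NodePort (http://192.168.115.5:30003) or, since Grafana runs in the same kube-system namespace, to the in-cluster Service address http://prometheus:9090 (assuming the Service name and namespace defined earlier and the cluster's default DNS).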
Import a dashboard: you can enter template ID 315 to import it online, or download the corresponding JSON template file and import it locally. The dashboard template is available at https://grafana.com/dashboards/315
After importing the dashboard you can see the corresponding monitoring data.
One note from testing: importing dashboard template 162 (https://grafana.com/dashboards/162) showed only partial data, and the Pod names were not displayed in a friendly way.
VI. Afterword
A few open issues remain to be investigated later.
1. Prometheus stores its data in an emptyDir. If the Pod is deleted or rescheduled, the emptyDir is removed and the data is permanently lost. A follow-up option is to run another Prometheus server outside the Kubernetes cluster for long-term storage and configure a job on it that automatically pulls data from the in-cluster Prometheus (see the sketch after this list).
2. Grafana's configuration data is also stored in an emptyDir, so it is likewise lost if the Pod is deleted or rescheduled. Alternatively, Grafana could run outside the cluster and use the external Prometheus as its data source.
3. Alerting for the monitored metrics (Alertmanager) has not been configured yet.
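
As mentioned in item 1, a sketch of such a federation job on the external Prometheus might look like the following (the job name is arbitrary; 192.168.115.5:30003 is the NodePort of the in-cluster Prometheus, and the match[] selector here simply pulls everything scraped by the kubernetes-* jobs):

scrape_configs:
- job_name: 'k8s-federate'
  honor_labels: true
  metrics_path: /federate
  params:
    'match[]':
    - '{job=~"kubernetes-.*"}'
  static_configs:
  - targets:
    - '192.168.115.5:30003'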

References (thanks to the authors for sharing):
https://www.kubernetes.org.cn/3418.html
https://blog.qikqiak.com/post/kubernetes-monitor-prometheus-grafana/
https://github.com/giantswarm/kubernetes-prometheus/tree/master/manifests
https://segmentfault.com/a/1190000013245394
