
Kubernetes in Practice (2): Installing the prometheus and traefik components on k8s v1.11.1 and cluster testing

1. traefik

  traefik: HTTP-layer routing. Website: http://traefik.cn/, documentation: https://docs.traefik.io/user-guide/kubernetes/

  Its functionality is similar to that of nginx ingress.

  Compared with nginx ingress, traefik interacts with the Kubernetes API in real time, detects changes to backend Services and Pods, and automatically updates and hot-reloads its configuration. Traefik is faster and more convenient, supports more features, and makes reverse proxying and load balancing more direct and efficient.

  Deploy Traefik on the k8s cluster, building on the previous article.

  Create a certificate for k8s-master-lb:

[root@k8s-master01 ~]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=k8s-master-lb"
Generating a 2048 bit RSA private key
................................................................................................................+++
.........................................................................................................................................................+++
writing new private key to 'tls.key'

  Write the certificate into a k8s secret:

[root@k8s-master01 ~]# kubectl -n kube-system create secret generic traefik-cert --from-file=tls.key --from-file=tls.crt
secret/traefik-cert created
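Kubernetes stores Secret data base64-encoded; the encoding kubectl applies to each --from-file value can be reproduced with the base64 tool. A quick standalone illustration (dummy value, not part of the deployment):

```shell
# Kubernetes stores Secret values base64-encoded; reproduce the encoding:
printf '%s' 'dummy-key-material' | base64
# → ZHVtbXkta2V5LW1hdGVyaWFs
# and decode it back to verify the round trip:
printf '%s' 'dummy-key-material' | base64 | base64 -d
# → dummy-key-material
```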

  Install traefik:

[root@k8s-master01 kubeadm-ha]# kubectl apply -f traefik/
serviceaccount/traefik-ingress-controller created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
configmap/traefik-conf created
daemonset.extensions/traefik-ingress-controller created
service/traefik-web-ui created
ingress.extensions/traefik-jenkins created
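The traefik-conf ConfigMap created above typically carries the entrypoint configuration that wires in the mounted certificate. A sketch of what such a traefik.toml might look like for Traefik 1.x (the actual file in the kubeadm-ha repo may differ, and the /ssl mount path is an assumption):

```toml
# Sketch of a Traefik 1.x traefik.toml enabling HTTP and HTTPS entrypoints.
# The certFile/keyFile paths assume the traefik-cert secret is mounted at /ssl.
defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
  address = ":80"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/ssl/tls.crt"
      keyFile = "/ssl/tls.key"
```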

  View the pods. Because the resource type is a DaemonSet, a Traefik pod is created on every node:

[root@k8s-master01 kubeadm-ha]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   calico-node-kwz9t                       2/2       Running   0          20h
kube-system   calico-node-nfhrd                       2/2       Running   0          56m
kube-system   calico-node-nxtlf                       2/2       Running   0          57m
kube-system   calico-node-rj8p8                       2/2       Running   0          20h
kube-system   calico-node-xfsg5                       2/2       Running   0          20h
kube-system   coredns-777d78ff6f-4rcsb                1/1       Running   0          22h
kube-system   coredns-777d78ff6f-7xqzx                1/1       Running   0          22h
kube-system   etcd-k8s-master01                       1/1       Running   0          16h
kube-system   etcd-k8s-master02                       1/1       Running   0          21h
kube-system   etcd-k8s-master03                       1/1       Running   9          20h
kube-system   heapster-5874d498f5-ngk26               1/1       Running   0          16h
kube-system   kube-apiserver-k8s-master01             1/1       Running   0          16h
kube-system   kube-apiserver-k8s-master02             1/1       Running   0          20h
kube-system   kube-apiserver-k8s-master03             1/1       Running   1          20h
kube-system   kube-controller-manager-k8s-master01    1/1       Running   0          16h
kube-system   kube-controller-manager-k8s-master02    1/1       Running   1          20h
kube-system   kube-controller-manager-k8s-master03    1/1       Running   0          20h
kube-system   kube-proxy-4cjhm                        1/1       Running   0          22h
kube-system   kube-proxy-kpxhz                        1/1       Running   0          56m
kube-system   kube-proxy-lkvjk                        1/1       Running   2          21h
kube-system   kube-proxy-m7htq                        1/1       Running   0          22h
kube-system   kube-proxy-r4sjs                        1/1       Running   0          57m
kube-system   kube-scheduler-k8s-master01             1/1       Running   2          16h
kube-system   kube-scheduler-k8s-master02             1/1       Running   0          21h
kube-system   kube-scheduler-k8s-master03             1/1       Running   2          20h
kube-system   kubernetes-dashboard-7954d796d8-2k4hx   1/1       Running   0          17h
kube-system   metrics-server-55fcc5b88-bpmkm          1/1       Running   0          16h
kube-system   monitoring-grafana-9b6b75b49-4zm6d      1/1       Running   0          18h
kube-system   monitoring-influxdb-655cd78874-56gf8    1/1       Running   0          16h
kube-system   traefik-ingress-controller-cv2jg        1/1       Running   0          28s
kube-system   traefik-ingress-controller-d7lzw        1/1       Running   0          28s
kube-system   traefik-ingress-controller-r2z29        1/1       Running   0          28s
kube-system   traefik-ingress-controller-tm6vv        1/1       Running   0          28s
kube-system   traefik-ingress-controller-w4mj7        1/1       Running   0          28s

  Create a test web application:

[root@k8s-master01 ~]# cat traefix-test.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    name: nginx-svc
spec:
  selector:
    run: ngx-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ngx-pod
spec:
  replicas: 4
  template:
    metadata:
      labels:
        run: ngx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ngx-ing
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefix-test.com
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80
[root@k8s-master01 ~]# kubectl create -f traefix-test.yaml 
service/nginx-svc created
deployment.apps/ngx-pod created
ingress.extensions/ngx-ing created

  View in the Traefik UI

  View in k8s

  Access test: resolve the domain http://traefix-test.com/ to any node and the application becomes reachable.

  HTTPS certificate configuration

  Using the nginx deployment created above, create another ingress for HTTPS:

[root@k8s-master01 nginx-cert]# cat ../traefix-https.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-https-test
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefix-test.com
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80
  tls:
  - secretName: nginx-test-tls

  Create the certificate (in production this would be the certificate purchased by your company):

[root@k8s-master01 nginx-cert]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=traefix-test.com"
Generating a 2048 bit RSA private key
.................................+++
.........................................................+++
writing new private key to 'tls.key'
-----
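The CN baked into a self-signed pair can be confirmed with openssl x509. A standalone sketch using a throwaway temp directory (same command shape as above):

```shell
# Generate a throwaway self-signed pair and confirm the subject CN,
# all inside a temp dir so nothing in the working tree is touched:
DIR=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$DIR/tls.key" -out "$DIR/tls.crt" \
  -subj "/CN=traefix-test.com" 2>/dev/null
openssl x509 -in "$DIR/tls.crt" -noout -subject   # prints the CN
rm -rf "$DIR"
```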

  Import the certificate:

kubectl -n default create secret tls nginx-test-tls --key=tls.key --cert=tls.crt

  Create the ingress:

[root@k8s-master01 ~]# kubectl create -f traefix-https.yaml 
ingress.extensions/nginx-https-test created

  Access test:

   For other methods, see the official documentation: https://docs.traefik.io/user-guide/kubernetes/

 2. Installing prometheus

  Install prometheus:

[root@k8s-master01 kubeadm-ha]# kubectl apply -f prometheus/
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
configmap/prometheus-server-conf created
deployment.extensions/prometheus created
service/prometheus created

  View the pod:

[root@k8s-master01 kubeadm-ha]# kubectl get pods --all-namespaces | grep prome
kube-system   prometheus-56dff8579d-x2w62             1/1       Running   0          52s

  Install and use grafana (here a standalone grafana is installed):

yum install https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.4.3-1.x86_64.rpm -y

  Start grafana:

[root@k8s-master01 grafana-dashboard]# systemctl start grafana-server
[root@k8s-master01 grafana-dashboard]# systemctl enable grafana-server
Created symlink from /etc/systemd/system/multi-user.target.wants/grafana-server.service to /usr/lib/systemd/system/grafana-server.service.

  Visit http://192.168.20.20:3000 (username and password: admin) and configure the prometheus DataSource.
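Grafana stores a DataSource definition as JSON; the equivalent of the UI configuration looks roughly like the following sketch (the prometheus service URL is an assumption for this cluster and must match the actual service address):

```json
{
  "name": "prometheus",
  "type": "prometheus",
  "url": "http://prometheus.kube-system:9090",
  "access": "proxy",
  "isDefault": true
}
```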

  Import the dashboard templates from /root/kubeadm-ha/heapster/grafana-dashboard 

  After importing:

   View the data

  grafana documentation: http://docs.grafana.org/

3. Cluster verification

  Verify cluster high availability

  Create a deployment with 3 replicas:

[root@k8s-master01 ~]# kubectl run nginx --image=nginx --replicas=3 --port=80
deployment.apps/nginx created
[root@k8s-master01 ~]# kubectl get deployment --all-namespaces
NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default       nginx                  3         3         3            3           58s

  View the pods:

[root@k8s-master01 ~]# kubectl get pods -l=run=nginx -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-6f858d4d45-7lv6f   1/1       Running   0          1m        172.168.5.16   k8s-node01
nginx-6f858d4d45-g2njj   1/1       Running   0          1m        172.168.0.18   k8s-master01
nginx-6f858d4d45-rcz89   1/1       Running   0          1m        172.168.6.12   k8s-node02

  Create a service:

[root@k8s-master01 ~]# kubectl expose deployment nginx --type=NodePort --port=80
service/nginx exposed
[root@k8s-master01 ~]# kubectl get service -n default
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        1d
nginx        NodePort    10.97.112.176   <none>        80:32546/TCP   29s

  Access test

  Test HPA autoscaling

# Create the test service
kubectl run nginx-server --requests=cpu=10m --image=nginx --port=80
kubectl expose deployment nginx-server --port=80

# Create the hpa
kubectl autoscale deployment nginx-server --cpu-percent=10 --min=1 --max=10
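The scaling decision follows the HPA formula desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A standalone arithmetic sketch with hypothetical utilization numbers against the 10% target configured above:

```shell
# HPA formula: desired = ceil(current * currentUtilization / targetUtilization)
# Hypothetical numbers: 1 replica at 50% CPU utilization, 10% target.
CURRENT=1
UTIL=50
TARGET=10
DESIRED=$(( (CURRENT * UTIL + TARGET - 1) / TARGET ))   # integer ceiling
echo "$DESIRED"   # → 5
```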

  Check the current ClusterIP of nginx-server:

[root@k8s-master01 ~]# kubectl get service -n default
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP   1d
nginx-server   ClusterIP   10.108.160.23   <none>        80/TCP    5m

  Put load on the test service:

[root@k8s-master01 ~]# while true; do wget -q -O- http://10.108.160.23 > /dev/null; done
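The loop above runs until interrupted against the cluster IP. A bounded, standalone variant of the same pattern, using a throwaway local HTTP server in place of the nginx service (10.108.160.23 is specific to the cluster above):

```shell
# Bounded stand-in for the load loop: start a local test server,
# issue 20 requests, and count the successes.
python3 -m http.server 8913 >/dev/null 2>&1 &
SRV=$!
sleep 1
COUNT=0
for i in $(seq 1 20); do
  curl -s -o /dev/null http://127.0.0.1:8913/ && COUNT=$((COUNT+1))
done
kill "$SRV"
echo "$COUNT requests completed"
```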

  Watch the scale-up:

  Stop generating load; once the load ends, the pods scale back down automatically (scale-down takes roughly 10-15 minutes).

  Delete the test resources:

[root@k8s-master01 ~]# kubectl delete deploy,svc,hpa nginx-server
deployment.extensions "nginx-server" deleted
service "nginx-server" deleted
horizontalpodautoscaler.autoscaling "nginx-server" deleted