Container Orchestration: Kubernetes Network Isolation with NetworkPolicy
A key feature of Kubernetes is that it connects pods (containers) across different nodes, regardless of physical host boundaries. In some environments, however, such as public clouds, pods belonging to different tenants should not be able to reach each other, and network isolation becomes necessary. Fortunately, Kubernetes provides NetworkPolicy, which supports network isolation at the Namespace level. This article walks through how to use it.
Note that NetworkPolicy requires a network solution that actually enforces it; without such a plugin, a configured NetworkPolicy has no effect. Here we use Calico to provide the isolation.
Connectivity Test
Before applying any NetworkPolicy, let's first verify that pods can reach each other when no policy is in place. Our test environment looks like this:
Namespaces: ns-calico1, ns-calico2
Deployment: ns-calico1/calico1-nginx; Pod: ns-calico2/calico2-busybox
Service: ns-calico1/calico1-nginx
First, create the Namespaces:
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico2
# kubectl create -f namespace.yaml
namespace "ns-calico1" created
namespace "ns-calico2" created
# kubectl get ns
NAME          STATUS    AGE
default       Active    9d
kube-public   Active    9d
kube-system   Active    9d
ns-calico1    Active    12s
ns-calico2    Active    8s
Next, create ns-calico1/calico1-nginx:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico1-nginx
  namespace: ns-calico1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        user: calico1
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: calico1-nginx
  namespace: ns-calico1
  labels:
    user: calico1
spec:
  selector:
    app: nginx
  ports:
    - port: 80
# kubectl create -f calico1-nginx.yaml
deployment "calico1-nginx" created
service "calico1-nginx" created
# kubectl get svc -n ns-calico1
NAME            CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
calico1-nginx   192.168.3.141   <none>        80/TCP    26s
# kubectl get deploy -n ns-calico1
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico1-nginx   1         1         1            1           34s
Finally, create ns-calico2/calico2-busybox:
apiVersion: v1
kind: Pod
metadata:
  name: calico2-busybox
  namespace: ns-calico2
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - "3600"
# kubectl create -f calico2-busybox.yaml
pod "calico2-busybox" created
# kubectl get pod -n ns-calico2
NAME              READY     STATUS    RESTARTS   AGE
calico2-busybox   1/1       Running   0          40s
The test services are now in place. Let's exec into calico2-busybox and see whether it can reach calico1-nginx (addressed by its service DNS name, <service>.<namespace>):
# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.141:80)
This shows that without network isolation, pods in different Namespaces can reach each other. Next, we use Calico to isolate them.
Network Isolation
Prerequisites
To use Calico for network isolation in a Kubernetes cluster, the following conditions must be met (a sketch of the corresponding startup arguments follows the note below):
- kube-apiserver must enable the extensions/v1beta1/networkpolicies runtime config, i.e. set the startup flag --runtime-config=extensions/v1beta1/networkpolicies=true
- kubelet must enable the CNI network plugin, i.e. set the startup flag --network-plugin=cni
- kube-proxy must use the iptables proxy mode; this is the default, so no flag is required
- kube-proxy must not enable --masquerade-all, which conflicts with Calico
Note: after Calico is configured, any pods already running in the cluster must be restarted.
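As an illustration only, the relevant startup arguments might appear as follows. This is a minimal sketch: the binaries are usually launched from systemd units whose names and paths vary by installation, and the CNI directory flags shown are common defaults rather than values taken from this article.

kube-apiserver ... --runtime-config=extensions/v1beta1/networkpolicies=true
kubelet        ... --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin
kube-proxy     ... --proxy-mode=iptables    # the default; do not add --masquerade-all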
Install Calico
First, install the Calico network plugin. We install it directly into the Kubernetes cluster, which makes it easier to manage:
# Calico Version v2.1.4
# http://docs.projectcalico.org/v2.1/releases#v2.1.4
# This manifest includes the following component versions:
#   calico/node:v1.1.3
#   calico/cni:v1.7.0
#   calico/kube-policy-controller:v0.5.4

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://10.1.2.154:2379,https://10.1.2.147:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names. The values
  # should be base64 encoded strings of the entire contents of each file.
  etcd-key: <base64 key.pem>
  etcd-cert: <base64 cert.pem>
  etcd-ca: <base64 ca.pem>

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          # resources:
          #   requests:
          #     cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.7.0
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
       {"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.5.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # The location of the Kubernetes API. Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
# kubectl create -f calico.yaml
configmap "calico-config" created
secret "calico-etcd-secrets" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
# kubectl get ds -n kube-system
NAME          DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
calico-node   1         1         1         1            1           <none>          52s
# kubectl get deploy -n kube-system
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico-policy-controller   1         1         1            1           6m
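Beyond the DaemonSet and Deployment status, you can also confirm that the calico-node pods themselves are Running (command only, output omitted; the k8s-app=calico-node label comes from the manifest above):

# kubectl get pods -n kube-system -l k8s-app=calico-node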
The Calico network is now in place, and we can configure NetworkPolicy.
Configure NetworkPolicy
First, modify the configuration of ns-calico1:
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }
# kubectl apply -f ns-calico1.yaml
namespace "ns-calico1" configured
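To double-check that the annotation was applied, you can inspect the Namespace (command only; output omitted):

# kubectl get ns ns-calico1 -o yaml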
If we rerun the connectivity test between the two pods now, it is certain to fail:
# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)
wget: download timed out
This is exactly the effect we want: pods in different Namespaces can no longer reach each other. Of course, this is the simplest case. If a pod in ns-calico1 connects to a pod in ns-calico2 at this point, it still succeeds, because ns-calico2 carries no isolation annotation.
Moreover, ns-calico1 now rejects traffic from every pod: the Namespace annotation only declares that all ingress is denied by default, and nothing yet specifies which pods may connect. Here we specify that only pods carrying the label user=calico1 are allowed in:
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: calico1-network-policy
  namespace: ns-calico1
spec:
  podSelector:
    matchLabels:
      user: calico1
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              user: calico1
        - podSelector:
            matchLabels:
              user: calico1
---
apiVersion: v1
kind: Pod
metadata:
  name: calico1-busybox
  namespace: ns-calico1
  labels:
    user: calico1
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - "3600"
# kubectl create -f calico1-network-policy.yaml
networkpolicy "calico1-network-policy" created
# kubectl create -f calico1-busybox.yaml
pod "calico1-busybox" created
Now, connecting from calico1-busybox to calico1-nginx succeeds:
# kubectl exec -it calico1-busybox -n ns-calico1 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)
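As a cross-check, the same request from calico2-busybox should still time out, since neither that pod nor its Namespace carries the user=calico1 label required by the policy. The expected result mirrors the earlier timeout:

# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)
wget: download timed out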
With that, we have achieved network isolation in Kubernetes. On top of NetworkPolicy, you can build security-group-style policies for a public cloud.
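For reference, the extensions/v1beta1 NetworkPolicy API and the net.beta.kubernetes.io/network-policy annotation used above come from an early Kubernetes release; they have since been replaced by the stable networking.k8s.io/v1 API, in which any pod selected by a policy is isolated automatically and no Namespace annotation is needed. A roughly equivalent policy on a newer cluster might look like the following sketch (not tested against the setup above):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: calico1-network-policy
  namespace: ns-calico1
spec:
  podSelector:
    matchLabels:
      user: calico1
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              user: calico1
        - podSelector:
            matchLabels:
              user: calico1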
This article was reposted from the Chinese Kubernetes community (original title: 容器編排之Kubernetes網路隔離NetworkPolicy).