Installing and Deploying a Kubernetes 1.12.1 Cluster with Kubeadm
Building a Kubernetes cluster by hand is tedious, so a number of installation and configuration tools have emerged to simplify it, such as Kubeadm, Kubespray, and RKE. I ultimately chose the official Kubeadm, mainly because different Kubernetes versions differ in subtle ways and Kubeadm is updated and supported in step with each release. Kubeadm is the official tool for quickly installing and initializing a Kubernetes cluster. It is still in an incubating state and is updated alongside every new Kubernetes release. I strongly recommend reading the official documentation first to understand what each component and object does.
For other deployment methods, see the references below:
System Environment Configuration
Prepare three servers: one Master node and two Node machines (run all of the following steps on every node). For production, three Masters and N Nodes are recommended, which makes scaling, migration, and disaster recovery easier.
System Version
$ cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
Set Hostnames
# run the matching command on the corresponding machine
$ sudo hostnamectl set-hostname kubernetes-master
$ sudo hostnamectl set-hostname kubernetes-node-1
$ sudo hostnamectl set-hostname kubernetes-node-2
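To let the machines resolve each other by hostname, the names can also be mapped in /etc/hosts on every node (the IPs below are the ones that appear later in this walkthrough):

$ cat >> /etc/hosts <<EOF
172.23.216.48 kubernetes-master
172.23.216.49 kubernetes-node-1
172.23.216.50 kubernetes-node-2
EOF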
Disable the Firewall
$ systemctl stop firewalld && systemctl disable firewalld
Note: if you prefer to keep firewalld running, open the required ports instead (on the Master: 6443, 2379-2380, and 10250-10252; on the Nodes: 10250 and 30000-32767).
Disable SELinux
$ setenforce 0
$ sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Disable Swap
$ swapoff -a
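Note that swapoff -a only disables swap until the next reboot; to make the change permanent, also comment out the swap entry in /etc/fstab, for example:

$ sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab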
Fix Bridge Routing Issues
$ echo "net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 vm.swappiness=0" >> /etc/sysctl.d/k8s.conf $ sysctl -p /etc/sysctl.d/k8s.conf
or
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Problem: if these parameters are not set, kubeadm's preflight checks fail with:

[preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
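These kernel parameters only exist once the br_netfilter module is loaded, so if the preflight error above persists, load the module and verify (assuming a stock CentOS 7 kernel):

$ modprobe br_netfilter
$ cat /proc/sys/net/bridge/bridge-nf-call-iptables
1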
Install Docker (Aliyun mirror)
$ curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
$ systemctl enable docker && systemctl start docker
Install kubelet, kubeadm, and kubectl (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
Note:
You can run yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet' to list the available versions and install a specific one.
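For example, to pin the exact versions used in this article (a sketch; the -0 package release suffix is an assumption matching what the Aliyun repository typically publishes):

$ yum install -y kubelet-1.12.1-0 kubeadm-1.12.1-0 kubectl-1.12.1-0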
Check Versions
$ docker --version
Docker version 18.06.1-ce, build e68fc7a

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

$ kubelet --version
Kubernetes v1.12.1
Pull the Images
Because k8s.gcr.io is not reachable from mainland China, community members keep synced copies of the images on GitHub/Docker Hub. They can be pulled with the shell script below (the component image versions differ between Kubernetes releases; the ones below are already matched to v1.12.1).
$ touch pull_k8s_images.sh

#!/bin/bash
images=(
    kube-proxy:v1.12.1
    kube-scheduler:v1.12.1
    kube-controller-manager:v1.12.1
    kube-apiserver:v1.12.1
    kubernetes-dashboard-amd64:v1.10.0
    heapster-amd64:v1.5.4
    heapster-grafana-amd64:v5.0.4
    heapster-influxdb-amd64:v1.5.2
    etcd:3.2.24
    coredns:1.2.2
    pause:3.1
)
for imageName in ${images[@]} ; do
    # pull from the mirror, retag to the k8s.gcr.io name kubeadm expects, then drop the mirror tag
    docker pull anjia0532/google-containers.$imageName
    docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
    docker rmi anjia0532/google-containers.$imageName
done

$ sh pull_k8s_images.sh
Other synced image sources:
Check the Container Image Versions This Release Needs
$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.12.1
k8s.gcr.io/kube-controller-manager:v1.12.1
k8s.gcr.io/kube-scheduler:v1.12.1
k8s.gcr.io/kube-proxy:v1.12.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

# list the images already pulled
$ docker images
Note:
As the official documentation explains, different Kubernetes versions pull different images; since 1.12 the platform suffix (amd64, arm, arm64, ppc64le or s390x) is no longer required, and the CoreDNS service component (the replacement for kube-dns) is now included by default, so the --feature-gates=CoreDNS=true flag is no longer needed. Quoting the docs:
Here v1.10.x means the "latest patch release of the v1.10 branch". ${ARCH} can be one of: amd64, arm, arm64, ppc64le or s390x.

If you run Kubernetes version 1.10 or earlier, and if you set --feature-gates=CoreDNS=true, you must also use the coredns/coredns image, instead of the three k8s-dns-* images.

In Kubernetes 1.11 and later, you can list and pull the images using the kubeadm config images sub-command:

kubeadm config images list
kubeadm config images pull

Starting with Kubernetes 1.12, the k8s.gcr.io/kube-*, k8s.gcr.io/etcd and k8s.gcr.io/pause images don't require an -${ARCH} suffix.
Basic Kubeadm Commands
# create a Master node
$ kubeadm init

# join a Node to the current cluster
$ kubeadm join <Master IP and port>
kubeadm init and kubeadm join are the two key steps when deploying a Kubernetes cluster with Kubeadm. To customize the cluster components' parameters, create a kubeadm.yaml configuration file.
$ touch kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1alpha3
# note: in the v1alpha3 API these component settings belong on ClusterConfiguration
# (older v1alpha1 examples used kind: MasterConfiguration)
kind: ClusterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "v1.12.1"
Note:
If you see the following error:
your configuration file uses an old API spec: "kubeadm.k8s.io/v1alpha1". Please use kubeadm v1.11 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
In Kubernetes 1.11 and later, the default configuration can be printed out using the kubeadm config print-default command. It is recommended that you migrate your old v1alpha2 configuration to v1alpha3 using the kubeadm config migrate command, because v1alpha2 will be removed in Kubernetes 1.13. For more details on each field in the v1alpha3 configuration you can navigate to our API reference pages.
Create the Kubernetes Cluster
Create the Master Node
$ kubeadm init --config kubeadm.yaml
[init] using Kubernetes version: v1.12.1
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [021rjsh216048s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.23.216.48]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [021rjsh216048s localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [021rjsh216048s localhost] and IPs [172.23.216.48 127.0.0.1 ::1]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 24.503270 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node 021rjsh216048s as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node 021rjsh216048s as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "021rjsh216048s" as an annotation
[bootstraptoken] using token: zbnjyn.d5ntetgw5mpp9blv
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.23.216.48:6443 --token zbnjyn.d5ntetgw5mpp9blv --discovery-token-ca-cert-hash sha256:3dff1b750972001675fb8f5284722733f014f60d4371cdffb36522cbda6acb98
The kubeadm join command at the end is what you use to add more Nodes to this Master. Kubeadm also prints the commands needed to configure first-time access to the cluster:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Access to a Kubernetes cluster is encrypted by default. These commands copy the cluster's security configuration into the current user's .kube/config file, and kubectl uses the credentials in that file to access the cluster.
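Alternatively, if you are operating as root and only need temporary access, kubectl can be pointed at the admin config directly through the KUBECONFIG environment variable and then verified:

$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl get nodes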
Deploy a Network Plugin
Container Network Interface (CNI), originally a container networking specification initiated by CoreOS, is the foundation of Kubernetes network plugins. The basic idea: when the container runtime creates a container, it first creates the network namespace, then invokes CNI plugins to configure networking for that netns, and only afterwards starts the processes inside the container. CNI has since joined the CNCF and become the CNCF-endorsed network model.
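To make that contract concrete, below is a minimal sketch of the kind of network configuration file a CNI plugin consumes (a hypothetical example for the standard bridge plugin; kubelet looks for such files under /etc/cni/net.d/ — this particular file is not required for the Weave installation that follows):

$ cat /etc/cni/net.d/10-mynet.conf
{
    "cniVersion": "0.3.1",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
    }
}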
There are many common CNI network plugins to choose from:
- ACI provides integrated container networking and network security with Cisco ACI.
- Calico is a secure L3 networking and network policy provider.
- Canal unites Flannel and Calico, providing networking and network policy.
- Cilium is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported.
- CNI-Genie enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
- Contiv provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully open sourced. The installer provides both kubeadm and non-kubeadm based installation options.
- Flannel is an overlay network provider that can be used with Kubernetes.
- Knitter is a network solution supporting multiple networking in Kubernetes.
- Multus is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
- NSX-T Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and Openshift.
- Nuage is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
- Romana is a Layer 3 networking solution for pod networks that also supports the NetworkPolicy API. Kubeadm add-on installation details available here.
- Weave Net provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
Here I use Weave Net:

$ kubectl apply -f https://git.io/weave-kube-1.6

or

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

or

$ kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.5.0/weave-daemonset-k8s-1.8.yaml
Note: CNI-Genie is a CNI plugin from Huawei's PaaS team that lets several network plugins (calico, canal, romana, weave, and so on) run side by side.
Check Pod Status
$ kubectl get pods -n kube-system -l name=weave-net -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP              NODE                NOMINATED NODE
weave-net-j9s27   2/2     Running   0          24h   172.23.216.49   kubernetes-node-1   <none>
weave-net-p22s2   2/2     Running   0          24h   172.23.216.50   kubernetes-node-2   <none>
weave-net-vnq7p   2/2     Running   0          24h   172.23.216.48   kubernetes-master   <none>

$ kubectl logs -n kube-system weave-net-j9s27 weave
$ kubectl logs weave-net-j9s27 -n kube-system weave-npc
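Besides the pod logs, the weave router itself serves a plain-text status report on each node; assuming the default listen address (127.0.0.1:6784), it can be queried locally:

$ curl -s http://127.0.0.1:6784/status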
Add the Worker Nodes
$ kubeadm join 172.23.216.48:6443 --token zbnjyn.d5ntetgw5mpp9blv --discovery-token-ca-cert-hash sha256:3dff1b750972001675fb8f5284722733f014f60d4371cdffb36522cbda6acb98
If you need to control the cluster from any other node, copy the Master's security configuration to each server:
$ mkdir -p $HOME/.kube
$ scp root@172.23.216.48:/etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
View All Nodes
$ kubectl get nodes
NAME             STATUS   ROLES    AGE     VERSION
021rjsh216048s   Ready    master   2d23h   v1.12.1
021rjsh216049s   Ready    <none>   2d23h   v1.12.1
021rjsh216050s   Ready    <none>   2d23h   v1.12.1

$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-ps2s2                  1/1     Running   0          2d23h
coredns-576cbf47c7-qsxdx                  1/1     Running   0          2d23h
etcd-021rjsh216048s                       1/1     Running   0          2d23h
heapster-684777c4cb-qzz8f                 1/1     Running   0          2d16h
kube-apiserver-021rjsh216048s             1/1     Running   0          2d23h
kube-controller-manager-021rjsh216048s    1/1     Running   1          2d23h
kube-proxy-5fgf9                          1/1     Running   0          2d23h
kube-proxy-hknws                          1/1     Running   0          2d23h
kube-proxy-qc6xj                          1/1     Running   0          2d23h
kube-scheduler-021rjsh216048s             1/1     Running   1          2d23h
kubernetes-dashboard-77fd78f978-pqdvw     1/1     Running   0          2d18h
monitoring-grafana-56b668bccf-tm2cl       1/1     Running   0          2d16h
monitoring-influxdb-5c5bf4949d-85d5c      1/1     Running   0          2d16h
weave-net-5fq89                           2/2     Running   0          2d23h
weave-net-flxgg                           2/2     Running   0          2d23h
weave-net-vvdkq                           2/2     Running   0          2d23h

$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP              NODE             NOMINATED NODE
kube-system   coredns-576cbf47c7-ps2s2                  1/1     Running   0          2d23h   10.32.0.3       021rjsh216048s   <none>
kube-system   coredns-576cbf47c7-qsxdx                  1/1     Running   0          2d23h   10.32.0.2       021rjsh216048s   <none>
kube-system   etcd-021rjsh216048s                       1/1     Running   0          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   heapster-684777c4cb-qzz8f                 1/1     Running   0          2d16h   10.44.0.2       021rjsh216049s   <none>
kube-system   kube-apiserver-021rjsh216048s             1/1     Running   0          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   kube-controller-manager-021rjsh216048s    1/1     Running   1          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   kube-proxy-5fgf9                          1/1     Running   0          2d23h   172.23.216.49   021rjsh216049s   <none>
kube-system   kube-proxy-hknws                          1/1     Running   0          2d23h   172.23.216.50   021rjsh216050s   <none>
kube-system   kube-proxy-qc6xj                          1/1     Running   0          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   kube-scheduler-021rjsh216048s             1/1     Running   1          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   kubernetes-dashboard-77fd78f978-pqdvw     1/1     Running   0          2d18h   10.36.0.1       021rjsh216050s   <none>
kube-system   monitoring-grafana-56b668bccf-tm2cl       1/1     Running   0          2d16h   10.44.0.1       021rjsh216049s   <none>
kube-system   monitoring-influxdb-5c5bf4949d-85d5c      1/1     Running   0          2d16h   10.36.0.2       021rjsh216050s   <none>
kube-system   weave-net-5fq89                           2/2     Running   0          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   weave-net-flxgg                           2/2     Running   0          2d23h   172.23.216.50   021rjsh216050s   <none>
kube-system   weave-net-vvdkq                           2/2     Running   0          2d23h   172.23.216.49   021rjsh216049s   <none>
Check Component Health
$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
Other Commands
# view the Master node's token
$ kubeadm token list | grep authentication,signing | awk '{print $1}'

# view the discovery-token-ca-cert-hash
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
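If the initial bootstrap token has already expired (tokens are valid for 24 hours by default), a fresh token and complete join command can be generated on the Master in one step:

$ kubeadm token create --print-join-command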
Install the Dashboard
The latest official release is v1.10.0; the script above has already pulled that image.
Edit the kubernetes-dashboard.yaml file and add type: NodePort under the Dashboard Service to expose the Dashboard service.
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Note: there are several ways to expose a service, including NodePort, LoadBalancer, Ingress, and kubectl proxy.
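For example, instead of editing the YAML by hand, the Service type can also be switched in place with kubectl patch (a sketch, assuming the Dashboard Service already exists in kube-system):

$ kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'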
Install the Add-on
# install the Dashboard add-on
$ kubectl create -f kubernetes-dashboard.yaml

# replace the configuration
$ kubectl replace --force -f kubernetes-dashboard.yaml
Note: kubectl proxy only allows local access, and access from outside the cluster requires an Ingress; here we use the NodePort approach first.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
$ kubectl proxy
Now access Dashboard at: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Grant the Dashboard Account Cluster Admin Permissions
Create a ServiceAccount named kubernetes-dashboard-admin, grant it the cluster-admin role, and save the manifest as kubernetes-dashboard-admin.rbac.yaml.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard-admin
    namespace: kube-system
Run
$ kubectl create -f kubernetes-dashboard-admin.rbac.yaml
or
$ kubectl apply -f https://raw.githubusercontent.com/batizhao/dockerfile/master/k8s/kubernetes-dashboard/kubernetes-dashboard-admin.rbac.yaml
Check the Dashboard Service Port
$ kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10    <none>        53/UDP,53/TCP   6h40m
kubernetes-dashboard   NodePort    10.98.73.56   <none>        443:30828/TCP   63m
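With the NodePort shown above (443:30828 in this example), the Dashboard should be reachable from outside the cluster at https://<any-node-ip>:30828. A quick reachability check from a shell (-k because the Dashboard's certificate is self-signed):

$ curl -k https://172.23.216.48:30828/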
View the kubernetes-dashboard-admin Token
$ kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-4k82b   kubernetes.io/service-account-token   3      75m

$ kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-4k82b
Name:         kubernetes-dashboard-admin-token-4k82b
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: a904fbf5-d3aa-11e8-945d-0050569f4a19

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi00azgyYiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE5MDRmYmY1LWQzYWEtMTFlOC05NDVkLTAwNTA1NjlmNGExOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.DGajGHRfLmtFpCyoHKn4wS0ZHKALfwMgTUTjmGSzBM3u1rr4hF51KFWBVwBPCkFQ1e1A5v6ENdhCN
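As a convenience, the two lookups above can be combined into a single one-liner that prints just the token line (a sketch built from the same commands):

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}') | grep '^token'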