
Storage volumes in k8s - storing data on nodes and in Pods (Part 1)

Container storage volumes

A Pod has its own lifecycle.
When a Pod disappears, its data disappears with it,
so we need to keep the data outside the container.

Docker storage volumes offer only limited persistence on k8s, because k8s schedules Pods across nodes: a Pod that dies and is restarted does not, by default, come back to where its old data lives.

Only storage that is detached from the node can provide real durability.

On k8s, when a Pod is deleted its volume is deleted along with it; this is one point that distinguishes it from docker.

emptyDir - an empty scratch directory
hostPath - a directory on the host (node)

Distributed storage:
glusterfs, rbd, cephfs, cloud storage (EBS, etc.)

To see how many volume types k8s supports:
kubectl explain pods.spec.volumes
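
The same command can drill into any individual volume type; for example:

kubectl explain pods.spec.volumes.emptyDir
kubectl explain pods.spec.volumes.hostPath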


[root@master volumes]# cat pod-vol-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    node1/create-by: "cluster admin"   # an annotation (remark)
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
    volumeMounts:
    - name: html
      mountPath: /data/web/html/       # mount point inside myapp
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent      # Always / Never / IfNotPresent: always pull, never pull, pull only if not present locally
    volumeMounts:
    - name: html
      mountPath: /data/                # mount point inside busybox
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 7200"
  volumes:
  - name: html
    emptyDir: {}

kubectl create -f pod-vol-demo.yaml 

Check that it was created successfully:
[root@master volumes]# kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
client     1/1       Running   0          5d
pod-demo   2/2       Running   0          2m

Enter the busybox container:
[root@master volumes]# kubectl exec -it pod-demo -c busybox -- /bin/sh
/ # ls
bin   data  dev   etc   home  proc  root  sys   tmp   usr   var

Check the mounts:

/ # mount
rootfs on / type rootfs (rw)
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/4VOPP4JKAAV5FCYGWSVSBG55BW:/var/lib/docker/overlay2/l/D64M6ZROC774RMFNS4RKLUNME7,upperdir=/var/lib/docker/overlay2/fe23f482f33db7242b7f6acf54964c273c025db3568020fc12acfa8d60b331bf/diff,workdir=/var/lib/docker/overlay2/fe23f482f33db7242b7f6acf54964c273c025db3568020fc12acfa8d60b331bf/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (ro,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,hugetlb)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
/dev/mapper/centos-root on /data type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/centos-root on /dev/termination-log type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/centos-root on /etc/resolv.conf type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/centos-root on /etc/hostname type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/centos-root on /etc/hosts type xfs (rw,relatime,attr2,inode64,noquota)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
tmpfs on /var/run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime)
proc on /proc/asound type proc (ro,relatime)
proc on /proc/bus type proc (ro,relatime)
proc on /proc/fs type proc (ro,relatime)
proc on /proc/irq type proc (ro,relatime)
proc on /proc/sys type proc (ro,relatime)
proc on /proc/sysrq-trigger type proc (ro,relatime)
tmpfs on /proc/acpi type tmpfs (ro,relatime)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/keys type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/timer_list type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/timer_stats type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/sched_debug type tmpfs (rw,nosuid,size=65536k,mode=755)
tmpfs on /proc/scsi type tmpfs (ro,relatime)
tmpfs on /sys/firmware type tmpfs (ro,relatime)
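
The output is long; to pick out just the volume, filter it from inside the container (the matching line is the /data mount shown above):

/ # mount | grep /data
/dev/mapper/centos-root on /data type xfs (rw,relatime,attr2,inode64,noquota)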

Now write files into the directory from busybox. The writes succeed (note that the first echo below accidentally targets /date/ instead of /data/, which is why cat shows only the last two timestamps):

/ # date
Mon Sep 10 07:59:18 UTC 2018
/ # echo $(date) >> /date/index.html
/ # echo $(date) >> /data/index.html
/ # echo $(date) >> /data/index.html
/ # cat /data/index.html 
Mon Sep 10 07:59:56 UTC 2018
Mon Sep 10 08:00:02 UTC 2018

Enter the myapp container; as you can see, the data is shared:
[root@master volumes]# kubectl exec -it pod-demo -c myapp -- /bin/sh
/ # cat /data/web/html/index.html
Mon Sep 10 07:59:56 UTC 2018
Mon Sep 10 08:00:02 UTC 2018

################
This shows that /data in busybox and /data/web/html/ in myapp are the same shared volume.
################
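
On the node that runs the Pod, an emptyDir volume is backed by a directory under the kubelet's data directory. A quick way to find it, assuming the default kubelet root /var/lib/kubelet (<pod-uid> below is a placeholder for the UID returned by the first command):

[root@master volumes]# kubectl get pod pod-demo -o jsonpath='{.metadata.uid}'
# then, on the node where pod-demo is running:
ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/html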

Now delete the Pod and try a variation:
the first container, the main container, serves web content to the outside;
the second container populates the shared storage it serves from.

Re-edit the YAML manifest:
# Note: when a command in an image errors out, the problem is not necessarily the command itself; the image may simply not support it
[root@master volumes]# cat pod-vol-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    node1/create-by: "cluster admin"  # an annotation (remark)
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports: 
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent      # Always / Never / IfNotPresent: always pull, never pull, pull only if not present locally
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ["/bin/sh"]
    args: ["-c","while true; do echo $(date) >> /data/index.html; sleep 2; done"]

  volumes:
  - name: html
    emptyDir: {}
Create it:
kubectl apply -f pod-vol-demo.yaml
[root@master volumes]# kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
client     1/1       Running   0          5d
pod-demo   2/2       Running   0          1m

Check that it works:
[root@master volumes]# kubectl get pods -o wide
NAME       READY     STATUS    RESTARTS   AGE       IP            NODE      NOMINATED NODE
client     1/1       Running   0          5d        10.244.2.3    node2     <none>
pod-demo   2/2       Running   0          2m        10.244.2.60   node2     <none>
[root@master volumes]# curl 10.244.2.60
Mon Sep 10 08:46:41 UTC 2018
Mon Sep 10 08:46:43 UTC 2018
Mon Sep 10 08:46:45 UTC 2018
Mon Sep 10 08:46:47 UTC 2018
Mon Sep 10 08:46:49 UTC 2018
Mon Sep 10 08:46:51 UTC 2018
Mon Sep 10 08:46:53 UTC 2018
Mon Sep 10 08:46:55 UTC 2018
Mon Sep 10 08:46:57 UTC 2018

A new line is appended every two seconds.

We have verified that one storage volume can be used jointly by different containers in the same Pod,
and that its lifecycle ends when the Pod disappears.
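
A quick way to confirm that lifecycle claim with the manifest above: delete and recreate the Pod, then curl the new Pod IP (it will differ from before); only fresh timestamps come back, because the old emptyDir content died with the Pod:

[root@master volumes]# kubectl delete -f pod-vol-demo.yaml
[root@master volumes]# kubectl apply -f pod-vol-demo.yaml
[root@master volumes]# kubectl get pods -o wide   # look up the new Pod IP
[root@master volumes]# curl <new-pod-ip>          # only timestamps written after the re-creation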

Experiment: verify that the data is stored on the node (hostPath):


[root@master volumes]# cat pod-hostpath-vol.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1
      type: DirectoryOrCreate  # create the path automatically if it does not already exist

kubectl apply -f pod-hostpath-vol.yaml
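
DirectoryOrCreate is only one of the supported hostPath type values; Directory requires the path to already exist, and File, FileOrCreate, Socket, CharDevice, and BlockDevice are also available. A variant that fails instead of silently creating the path:

  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1
      type: Directory   # the Pod will not start unless this directory already exists on the node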

Run on node1:
[root@node1 ~]# mkdir -p /data/pod/volume1
[root@node1 ~]# vim /data/pod/volume1/index.html
node1
Run on node2:
[root@node2 ~]# mkdir -p /data/pod/volume1
[root@node2 ~]# vim /data/pod/volume1/index.html
node2

[root@master volumes]# kubectl get pods -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP            NODE      NOMINATED NODE
client             1/1       Running   0          5d        10.244.2.3    node2     <none>
pod-vol-hostpath   1/1       Running   0          9s        10.244.2.62   node2     <none>
[root@master volumes]# curl 10.244.2.62
node2
But if the node itself goes down, the data is still lost.
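
A partial mitigation, sketched below under the assumption that we always want this Pod on node2 from the example: pin it with spec.nodeName, so the Pod keeps finding the same /data/pod/volume1 across restarts. This still does not survive the node itself failing; for that you need the network storage listed earlier (glusterfs, cephfs, cloud disks):

apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  nodeName: node2                    # pin the Pod to node2, where our hostPath data lives
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1
      type: DirectoryOrCreate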