
Storage Volumes in Kubernetes: Getting Started with NFS, PV, and PVC (Part 2)

Below we build NFS-backed storage.
NFS supports read/write access from multiple clients at the same time.

Set up a new host as the NFS server:
node3: 192.168.68.30
Install the NFS utilities:
yum -y install nfs-utils
Create the shared directory:
mkdir /data/volumes -pv

Configure the export:
vim /etc/exports
/data/volumes 172.20.0.0/16(rw,no_root_squash)
(directory, then the network segment it is granted to, then the options)
Note: 172.20.0.0/16 is meant to be the network segment the client nodes are on.
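For reference, the fields of an /etc/exports line break down as follows (an annotated sketch; `ro` and `sync` are common alternatives not used in this article):

```
# /etc/exports — one export per line:
#   <directory>  <client-spec>(<options>)
#
# rw             — allow both read and write
# ro             — read-only
# no_root_squash — remote root keeps root privileges on the share
#                  (convenient for testing, risky in production)
# sync           — commit writes to disk before replying (safer, slower)
/data/volumes 172.20.0.0/16(rw,no_root_squash)
```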

Start NFS:
systemctl start nfs

Check that port 2049 is listening:
ss -tnl
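If you want to script that check instead of eyeballing the output, a minimal sketch (the helper name `nfs_port_open` is mine; it parses standard `ss -tnl`-style output from stdin):

```shell
#!/bin/sh
# nfs_port_open: return success if a TCP listener on port 2049 shows up
# in `ss -tnl`-style output read from stdin.
nfs_port_open() {
    grep -Eq ':2049($|[[:space:]])'
}

# Live usage (sketch): ss -tnl | nfs_port_open && echo "NFS is listening"
```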

Install the NFS utilities on node1 and node2 as well:
yum install nfs-utils -y
Make sure nfs-utils is installed on every node.

Test the mount on node2:

[root@node2 ~]# mount -t nfs node3:/data/volumes /mnt
mount.nfs: access denied by server while mounting node3:/data/volumes   # this means the mount failed
(the denial here happens when the client's address is not within the segment allowed in /etc/exports; adjust the export to the clients' real segment and re-export)

To verify a mount, run `mount` and check the end of its output for the new entry; if it is present, the mount succeeded:
node3:/data/volumes on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.68.20,local_lock=none,addr=192.168.68.40)

Unmount it again:
umount /mnt
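The same verification can be scripted by parsing `/proc/mounts`-format lines instead of reading the `mount` output by hand (a sketch; the helper name `is_nfs_mounted` is mine):

```shell
#!/bin/sh
# is_nfs_mounted MOUNTPOINT: read /proc/mounts-format lines on stdin and
# return success if MOUNTPOINT is mounted with an nfs filesystem type.
is_nfs_mounted() {
    awk -v mp="$1" '$2 == mp && $3 ~ /^nfs/ { found = 1 } END { exit !found }'
}

# Live usage (sketch): is_nfs_mounted /mnt < /proc/mounts && echo "NFS mount active"
```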
Now define a pod that mounts the NFS share:

[root@master volumes]# cat pod-vol-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: node3
Create it:
kubectl apply -f pod-vol-nfs.yaml

On node3, put a test page into the shared directory:

[root@node3 ~]# cd /data/volumes/
[root@node3 volumes]# vim index.html     # content: hello,word
Check the pod:
kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE      NOMINATED NODE
pod-vol-nfs   1/1       Running   0          12s       10.244.2.56   node2     <none>
[root@node3 volumes]# curl 10.244.2.56
hello,word

Takeaway: this is a way of mounting NFS storage directly into Kubernetes. The advantage is that the data survives after the pod dies, which makes it suitable for persistent storage.
The prerequisite is that the NFS utilities are installed on every node.

#####################################################

Now for the PV and PVC experiment.

On node3, create the directories v1 through v5 under /data/volumes (mkdir -pv /data/volumes/v{1..5}) and export them:
vim /etc/exports
/data/volumes/v1 192.168.0.0/16(rw,no_root_squash)
/data/volumes/v2 192.168.0.0/16(rw,no_root_squash)
/data/volumes/v3 192.168.0.0/16(rw,no_root_squash)
/data/volumes/v4 192.168.0.0/16(rw,no_root_squash)
/data/volumes/v5 192.168.0.0/16(rw,no_root_squash)
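The five directories and export lines are repetitive, so the setup can be scripted (a sketch; the helper name `make_exports` is mine, and the article's values are /data/volumes and 192.168.0.0/16):

```shell
#!/bin/sh
# make_exports BASE NET: create v1..v5 under BASE and print the matching
# /etc/exports lines granting NET read-write access.
make_exports() {
    base="$1"; net="$2"
    for i in 1 2 3 4 5; do
        mkdir -p "$base/v$i"
        printf '%s/v%s %s(rw,no_root_squash)\n' "$base" "$i" "$net"
    done
}

# On node3 (sketch): make_exports /data/volumes 192.168.0.0/16 >> /etc/exports
#                    exportfs -arv
```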

Reload the export configuration:
[root@node3 volumes]# exportfs -arv
exporting 192.168.0.0/16:/data/volumes/v5
exporting 192.168.0.0/16:/data/volumes/v4
exporting 192.168.0.0/16:/data/volumes/v3
exporting 192.168.0.0/16:/data/volumes/v2
exporting 192.168.0.0/16:/data/volumes/v1
[root@node3 volumes]# showmount -e
Export list for node3:
/data/volumes/v5 192.168.0.0/16
/data/volumes/v4 192.168.0.0/16
/data/volumes/v3 192.168.0.0/16
/data/volumes/v2 192.168.0.0/16
/data/volumes/v1 192.168.0.0/16

Consult the built-in help for PV:
kubectl explain pv
kubectl explain pv.spec.nfs

Create the YAML file:

[root@master volumes]# cat pvs-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: node3
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
--- 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: node3
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 7Gi
--- 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: node3
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 8Gi
--- 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: node3
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 10Gi
--- 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: node3
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 12Gi
--- 
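The five PV manifests differ only in name, path, and size, so the file can be generated rather than hand-written (a sketch; the helper name `gen_pv` is mine, and the size list mirrors the 5/7/8/10/12Gi used above):

```shell
#!/bin/sh
# gen_pv NAME PATH SIZE: print one PersistentVolume manifest document.
gen_pv() {
    cat <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: $1
  labels:
    name: $1
spec:
  nfs:
    path: $2
    server: node3
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: $3
EOF
}

# Emit pv001..pv005 backed by v1..v5 with the article's capacities.
i=0
for size in 5Gi 7Gi 8Gi 10Gi 12Gi; do
    i=$((i + 1))
    gen_pv "pv00$i" "/data/volumes/v$i" "$size"
done > pvs-demo.yaml
# Then: kubectl apply -f pvs-demo.yaml
```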

Now create them:

[root@master volumes]# kubectl get pv
No resources found.
[root@master volumes]# kubectl apply -f pvs-demo.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
pv001     5Gi        RWO,RWX        Retain           Available                                      7s
pv002     7Gi        RWO,RWX        Retain           Available                                      7s
pv003     8Gi        RWO,RWX        Retain           Available                                      7s
pv004     10Gi       RWO,RWX        Retain           Available                                      7s
pv005     12Gi       RWO,RWX        Retain           Available                                      7s

With the PVs defined, we now define a PVC:

[root@master volumes]# cat pod-vol-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi    # request 6Gi; a suitable PV will be selected automatically
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc

Create the resources:

[root@master volumes]# kubectl apply -f pod-vol-pvc.yaml
persistentvolumeclaim/mypvc created
pod/pod-vol-nfs created
[root@master volumes]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON    AGE
pv001     5Gi        RWO,RWX        Retain           Available                                            16m
pv002     7Gi        RWO,RWX        Retain           Bound       default/mypvc                            16m   # automatically bound to pv002
pv003     8Gi        RWO,RWX        Retain           Available                                            16m
pv004     10Gi       RWO,RWX        Retain           Available                                            16m
pv005     12Gi       RWO,RWX        Retain           Available                                            16m
Check the PVC's status:
[root@master volumes]# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc     Bound     pv002     7Gi        RWO,RWX                       3m

The access modes are single-node read-write (RWO) and multi-node read-write (RWX).
This confirms that mypvc is bound to pv002.
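Why pv002? The claim asked for ReadWriteMany and at least 6Gi; among the PVs that satisfy both, the smallest capacity wins, and pv002 (7Gi) is the smallest fit. A rough model of that best-fit selection (a sketch of the matching rule, not the real controller code; `pick_pv` is my name):

```shell
#!/bin/sh
# pick_pv REQUEST_GI: read "name capacity_Gi" pairs on stdin and print the
# smallest PV whose capacity still satisfies the request (best fit).
pick_pv() {
    awk -v req="$1" '
        $2 >= req { if (best == "" || $2 < bestcap) { best = $1; bestcap = $2 } }
        END { print best }'
}

# The five PVs above (all RWX-capable), capacities in Gi:
pvs='pv001 5
pv002 7
pv003 8
pv004 10
pv005 12'
printf '%s\n' "$pvs" | pick_pv 6    # best fit for a 6Gi claim -> pv002
```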
Note: if the reclaim policy is Retain, the data stays in the backing directory even after the PV and PVC are deleted.
In general, we only delete the PV and do not delete the PVC.
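The reclaim policy is a per-PV setting; a fragment showing where it lives in the spec (Retain is the default for manually created PVs, which is why the listings above show it without it being declared):

```yaml
spec:
  persistentVolumeReclaimPolicy: Retain   # keep the backing data after the claim is released
  # other values: Delete (remove the backing storage), Recycle (deprecated)
```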

Before v1.10 a bound PVC could be deleted outright.
From v1.10 on, storage object in use protection prevents it:
as long as a PVC is bound and in use by a pod, deleting the PVC is postponed until no pod uses it.