Kubernetes in Practice (9): Dynamic GlusterFS Storage Management in a k8s Cluster and Scaling Containerized GlusterFS
1. Preparation
Install the GFS client on all nodes:
yum install glusterfs glusterfs-fuse -y
If the GFS management service should not run on every node, label only the nodes where it is to be deployed:
[root@k8s-master01 ~]# kubectl label node k8s-node01 storagenode=glusterfs
node/k8s-node01 labeled
[root@k8s-master01 ~]# kubectl label node k8s-node02 storagenode=glusterfs
node/k8s-node02 labeled
[root@k8s-master01 ~]# kubectl label node k8s-master01 storagenode=glusterfs
node/k8s-master01 labeled
2. Create the Containerized GFS Management Service Cluster
This article deploys GFS in containers; if your company already runs a GFS cluster, it can be used directly.
GFS is deployed as a DaemonSet, which guarantees that every Node that should host the GFS management service runs exactly one instance.
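The node selection works because the DaemonSet's pod spec carries a nodeSelector matching the storagenode=glusterfs label applied above. A minimal sketch of the relevant fragment (abbreviated; based on the glusterfs-daemonset.json shipped with the heketi release):

kind: DaemonSet
spec:
  template:
    spec:
      nodeSelector:
        storagenode: glusterfs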
Download the related files:
wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
Create the cluster:
[root@k8s-master01 kubernetes]# kubectl create -f glusterfs-daemonset.json
daemonset.extensions/glusterfs created
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
Note 1: the default mount method is used here; a different disk can be used as the GFS working directory.
Note 2: everything is created in the default namespace; change it as needed.
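For example, to keep everything in a dedicated namespace instead of default (a sketch; the namespace name gluster is an assumption):

kubectl create namespace gluster
kubectl create -f glusterfs-daemonset.json -n gluster

The same -n gluster flag must then be passed to every later kubectl command in this walkthrough.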
Check the pods:
[root@k8s-master01 kubernetes]# kubectl get pods -l glusterfs-node=daemonset
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-5npwn   1/1       Running   0          1m
glusterfs-bd5dx   1/1       Running   0          1m
...
3. Create the Heketi Service
Heketi is a framework that exposes a RESTful API for managing GFS volumes. It enables dynamic storage provisioning on platforms such as Kubernetes, OpenShift, and OpenStack, supports managing multiple GFS clusters, and makes day-to-day GFS administration easier.
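Once the service is up, the REST interface can be exercised directly with curl; a quick sketch (the /hello and /clusters endpoints are part of Heketi's REST API; substitute the service address that is exported later in this article):

curl http://<heketi-svc-ip>:8080/hello      # liveness check, returns "Hello from Heketi"
curl http://<heketi-svc-ip>:8080/clusters   # JSON list of cluster IDs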
Create the ServiceAccount object for Heketi:
[root@k8s-master01 kubernetes]# cat heketi-service-account.json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "heketi-service-account"
  }
}
[root@k8s-master01 kubernetes]# kubectl create -f heketi-service-account.json
serviceaccount/heketi-service-account created
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
[root@k8s-master01 kubernetes]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         13d
heketi-service-account   1         <invalid>
Grant Heketi the required permissions and create its secret:
[root@k8s-master01 kubernetes]# kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
clusterrolebinding.rbac.authorization.k8s.io/heketi-gluster-admin created
[root@k8s-master01 kubernetes]# kubectl create secret generic heketi-config-secret --from-file=./heketi.json
secret/heketi-config-secret created
Deploy the bootstrap Heketi:
[root@k8s-master01 kubernetes]# kubectl create -f heketi-bootstrap.json
secret/heketi-db-backup created
service/heketi created
deployment.extensions/heketi created
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
4. Set Up the GFS Cluster
[root@k8s-master01 heketi-client]# cp bin/heketi-cli /usr/local/bin/
[root@k8s-master01 heketi-client]# pwd
/root/heketi-client
[root@k8s-master01 heketi-client]# heketi-cli -v
heketi-cli v7.0.0
Edit topology-sample: manage is the hostname of each Node running the GFS management service, storage is that Node's IP, and devices lists the raw block devices on the Node:
[root@k8s-master01 kubernetes]# cat topology-sample.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-master01"
              ],
              "storage": [
                "192.168.20.20"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdc",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node01"
              ],
              "storage": [
                "192.168.20.30"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node02"
              ],
              "storage": [
                "192.168.20.31"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}
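Heketi refuses devices that still carry partition tables or filesystem signatures. If "Adding device" fails later, a common preparation step is to wipe the disk first; a sketch (destructive, and not part of the original walkthrough, so verify the device name carefully):

wipefs -a /dev/sdc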
Check the ClusterIP of the bootstrap Heketi service:
[root@k8s-master01 kubernetes]# kubectl get svc | grep heketi
deploy-heketi   ClusterIP   10.110.217.153   <none>    8080/TCP   26m
[root@k8s-master01 kubernetes]# export HEKETI_CLI_SERVER=http://10.110.217.153:8080
Create the GFS cluster:
[root@k8s-master01 kubernetes]# heketi-cli topology load --json=topology-sample.json
Creating cluster ... ID: a058723afae149618337299c84a1eaed
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node k8s-master01 ... ID: 929909065ceedb59c1b9c235fc3298ec
        Adding device /dev/sdc ... OK
    Creating node k8s-node01 ... ID: 37409d82b9ef27f73ccc847853eec429
        Adding device /dev/sdb ... OK
    Creating node k8s-node02 ... ID: e3ab676be27945749bba90efb34f2eb9
        Adding device /dev/sdb ... OK
Create the Heketi persistent volume:
yum install device-mapper* -y
[root@k8s-master01 kubernetes]# heketi-cli setup-openshift-heketi-storage
Saving heketi-storage.json
[root@k8s-master01 kubernetes]# ls
glusterfs-daemonset.json   heketi.json                  heketi-storage.json
heketi-bootstrap.json      heketi-service-account.json  README.md
heketi-deployment.json     heketi-start.sh              topology-sample.json
[root@k8s-master01 kubernetes]# kubectl create -f heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
If the following error appears:
[root@k8s-master01 kubernetes]# heketi-cli setup-openshift-heketi-storage
Error: /usr/sbin/modprobe failed: 1
  thin: Required device-mapper target(s) not detected in your kernel.
  Run `lvcreate --help' for more information.
Fix: run modprobe dm_thin_pool on all nodes.
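To make the fix survive reboots, the module can also be listed under modules-load.d (standard systemd behavior; the file name is arbitrary):

modprobe dm_thin_pool
echo dm_thin_pool > /etc/modules-load.d/dm_thin_pool.conf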
Delete the bootstrap (intermediate) resources:
[root@k8s-master01 kubernetes]# kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"
pod "deploy-heketi-59f8dbc97f-5rf6s" deleted
service "deploy-heketi" deleted
service "heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-59f8dbc97f" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
Create the persistent Heketi deployment; other persistence methods can be used as well.
[root@k8s-master01 kubernetes]# kubectl create -f heketi-deployment.json
service/heketi created
deployment.extensions/heketi created
Once the pod is up, the deployment is complete:
[root@k8s-master01 kubernetes]# kubectl get po
NAME                      READY     STATUS    RESTARTS   AGE
glusterfs-5npwn           1/1       Running   0          3h
glusterfs-8zfzq           1/1       Running   0          3h
glusterfs-bd5dx           1/1       Running   0          3h
heketi-5cb5f55d9f-5mtqt   1/1       Running   0          2m
Look up the svc of the newly deployed persistent Heketi and update HEKETI_CLI_SERVER accordingly:
[root@k8s-master01 kubernetes]# kubectl get svc
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.111.95.240   <none>        8080/TCP   12h
heketi-storage-endpoints   ClusterIP   10.99.28.153    <none>        1/TCP      12h
kubernetes                 ClusterIP   10.96.0.1       <none>        443/TCP    14d
[root@k8s-master01 kubernetes]# export HEKETI_CLI_SERVER=http://10.111.95.240:8080
[root@k8s-master01 kubernetes]# curl http://10.111.95.240:8080/hello
Hello from Heketi
View the GFS topology:
[root@k8s-master01 kubernetes]# heketi-cli topology info

Cluster Id: 5dec5676c731498c2bdf996e110a3e5e

    File:  true
    Block: true

    Volumes:

    Name: heketidbstorage
    Size: 2
    Id: 828dc2dfaa00b7213e831b91c6213ae4
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Mount: 192.168.20.31:heketidbstorage
    Mount Options: backup-volfile-servers=192.168.20.30,192.168.20.20
    Durability Type: replicate
    Replica: 3
    Snapshot: Disabled

        Bricks:
            Id: 16b7270d7db1b3cfe9656b64c2a3916c
            Path: /var/lib/heketi/mounts/vg_04290ec786dc7752a469b66f5e94458f/brick_16b7270d7db1b3cfe9656b64c2a3916c/brick
            Size (GiB): 2
            Node: fb181b0cef571e9af7d84d2ecf534585
            Device: 04290ec786dc7752a469b66f5e94458f

            Id: 828da093d9d78a2b1c382b13cc4da4a1
            Path: /var/lib/heketi/mounts/vg_80b61df999fcac26ebca6e28c4da8e61/brick_828da093d9d78a2b1c382b13cc4da4a1/brick
            Size (GiB): 2
            Node: d38819746cab7d567ba5f5f4fea45d91
            Device: 80b61df999fcac26ebca6e28c4da8e61

            Id: e8ef0e68ccc3a0416f73bc111cffee61
            Path: /var/lib/heketi/mounts/vg_82af8e5f2fb2e1396f7c9e9f7698a178/brick_e8ef0e68ccc3a0416f73bc111cffee61/brick
            Size (GiB): 2
            Node: 0f00835397868d3591f45432e432ba38
            Device: 82af8e5f2fb2e1396f7c9e9f7698a178

    Nodes:

    Node Id: 0f00835397868d3591f45432e432ba38
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostnames: k8s-node02
    Storage Hostnames: 192.168.20.31
    Devices:
        Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb   State:online   Size (GiB):39   Used (GiB):22   Free (GiB):17
            Bricks:
                Id:e8ef0e68ccc3a0416f73bc111cffee61   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_82af8e5f2fb2e1396f7c9e9f7698a178/brick_e8ef0e68ccc3a0416f73bc111cffee61/brick

    Node Id: d38819746cab7d567ba5f5f4fea45d91
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostnames: k8s-node01
    Storage Hostnames: 192.168.20.30
    Devices:
        Id:80b61df999fcac26ebca6e28c4da8e61   Name:/dev/sdb   State:online   Size (GiB):39   Used (GiB):22   Free (GiB):17
            Bricks:
                Id:828da093d9d78a2b1c382b13cc4da4a1   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_80b61df999fcac26ebca6e28c4da8e61/brick_828da093d9d78a2b1c382b13cc4da4a1/brick

    Node Id: fb181b0cef571e9af7d84d2ecf534585
    State: online
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Zone: 1
    Management Hostnames: k8s-master01
    Storage Hostnames: 192.168.20.20
    Devices:
        Id:04290ec786dc7752a469b66f5e94458f   Name:/dev/sdc   State:online   Size (GiB):39   Used (GiB):22   Free (GiB):17
            Bricks:
                Id:16b7270d7db1b3cfe9656b64c2a3916c   Size (GiB):2   Path: /var/lib/heketi/mounts/vg_04290ec786dc7752a469b66f5e94458f/brick_16b7270d7db1b3cfe9656b64c2a3916c/brick
5. Define the StorageClass
[root@k8s-master01 gfs]# cat storageclass-gfs-heketi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.111.95.240:8080"
  restauthenabled: "false"
[root@k8s-master01 gfs]# kubectl create -f storageclass-gfs-heketi.yaml
storageclass.storage.k8s.io/gluster-heketi created
The provisioner parameter must be set to "kubernetes.io/glusterfs".
The resturl address must point to a Heketi service endpoint reachable from the host running the API Server.
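If Heketi authentication is enabled (restauthenabled: "true"), the StorageClass should reference a Secret rather than embed a key. A sketch under the assumption that an admin key has been configured in heketi.json (the secret name and key value are placeholders):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-auth
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.111.95.240:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-admin-secret"

The referenced secret must be of type kubernetes.io/glusterfs and store the admin key under the key field, e.g.:

kubectl create secret generic heketi-admin-secret --type=kubernetes.io/glusterfs --from-literal=key=<ADMIN_KEY> -n default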
6. Define the PVC and a Test Pod
[root@k8s-master01 gfs]# kubectl create -f pod-use-pvc.yaml
pod/pod-use-pvc created
persistentvolumeclaim/pvc-gluster-heketi created
[root@k8s-master01 gfs]# cat pod-use-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-use-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 1Gi
As soon as the PVC definition is created, the system triggers Heketi to perform the corresponding operations: it creates bricks on the GlusterFS cluster, then creates and starts a volume.
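Provisioning progress can be followed from the claim itself using standard kubectl commands (the claim name comes from the manifest above):

kubectl describe pvc pvc-gluster-heketi   # events show the provisioner at work
kubectl get pvc pvc-gluster-heketi -w     # watch the claim move from Pending to Bound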
The resulting PV and PVC look like this:
[root@k8s-master01 gfs]# kubectl get pv,pvc | grep gluster
persistentvolume/pvc-4a8033e8-e7f7-11e8-9a09-000c293bfe27   1Gi   RWO   Delete   Bound   default/pvc-gluster-heketi   gluster-heketi   5m
persistentvolumeclaim/pvc-gluster-heketi   Bound   pvc-4a8033e8-e7f7-11e8-9a09-000c293bfe27   1Gi   RWO   gluster-heketi   5m
7. Test the Data
Enter the pod and create a file:
[root@k8s-master01 /]# kubectl exec -ti pod-use-pvc -- /bin/sh
/ # cd /pv-data/
/pv-data # mkdir {1..10}
/pv-data # ls
{1..10}
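Note that busybox's ash does not perform bash-style brace expansion, which is why ls shows a single directory literally named {1..10}. To really create ten directories, a loop works instead (a sketch):

/pv-data # for i in $(seq 1 10); do mkdir $i; done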
Mount test from the host:
# View the volume
[root@k8s-master01 /]# heketi-cli topology info

Cluster Id: 5dec5676c731498c2bdf996e110a3e5e

    File:  true
    Block: true

    Volumes:

    Name: vol_56d636b452d31a9d4cb523d752ad0891
    Size: 1
    Id: 56d636b452d31a9d4cb523d752ad0891
    Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
    Mount: 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891
    Mount Options: backup-volfile-servers=192.168.20.30,192.168.20.20
    Durability Type: replicate
    Replica: 3
    Snapshot: Enabled
    ...

# Or view it with volume list
[root@k8s-master01 mnt]# heketi-cli volume list
Id:56d636b452d31a9d4cb523d752ad0891   Cluster:5dec5676c731498c2bdf996e110a3e5e   Name:vol_56d636b452d31a9d4cb523d752ad0891
Id:828dc2dfaa00b7213e831b91c6213ae4   Cluster:5dec5676c731498c2bdf996e110a3e5e   Name:heketidbstorage
vol_56d636b452d31a9d4cb523d752ad0891 is the volume Name, and the Mount field (192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891) gives the mount source to use.
Mount the volume and check the data:
[root@k8s-master01 /]# mount -t glusterfs 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891 /mnt/
[root@k8s-master01 /]# cd /mnt/
[root@k8s-master01 mnt]# ls
{1..10}
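Because a single storage IP is a single point of failure for a fuse mount, the backup-volfile-servers option reported by heketi-cli topology info can be passed along; a sketch based on the Mount Options shown above (note the colon-separated syntax on the mount command line):

mount -t glusterfs -o backup-volfile-servers=192.168.20.30:192.168.20.20 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891 /mnt/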
8. Test with a Deployment
[root@k8s-master01 gfs]# cat nginx-gluster.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-gfs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-gfs-html
          mountPath: "/usr/share/nginx/html"
        - name: nginx-gfs-conf
          mountPath: "/etc/nginx/conf.d"
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 0.5Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 0.1Gi
[root@k8s-master01 gfs]# kubectl get po,pvc,pv | grep nginx
pod/nginx-gfs-77c758ccc-2hwl6   1/1   Running             0   4m
pod/nginx-gfs-77c758ccc-kxzfz   0/1   ContainerCreating   0   3m
persistentvolumeclaim/glusterfs-nginx-conf   Bound   pvc-f40c5d4b-e800-11e8-8a89-000c293ad492   1Gi   RWX   gluster-heketi   2m
persistentvolumeclaim/glusterfs-nginx-html   Bound   pvc-f40914f8-e800-11e8-8a89-000c293ad492   1Gi   RWX   gluster-heketi   2m
persistentvolume/pvc-f40914f8-e800-11e8-8a89-000c293ad492   1Gi   RWX   Delete   Bound   default/glusterfs-nginx-html   gluster-heketi   4m
persistentvolume/pvc-f40c5d4b-e800-11e8-8a89-000c293ad492   1Gi   RWX   Delete   Bound   default/glusterfs-nginx-conf   gluster-heketi   4m
Note that although 0.5Gi and 0.1Gi were requested, both claims are bound at 1Gi: Heketi provisions GlusterFS volumes in whole GiB, so requests are rounded up.
Check the mounts:
[root@k8s-master01 gfs]# kubectl exec -ti nginx-gfs-77c758ccc-2hwl6 -- df -Th
Filesystem                                          Type            Size   Used  Avail  Use%  Mounted on
overlay                                             overlay          86G   6.6G    80G    8%  /
tmpfs                                               tmpfs           7.8G      0   7.8G    0%  /dev
tmpfs                                               tmpfs           7.8G      0   7.8G    0%  /sys/fs/cgroup
/dev/mapper/centos-root                             xfs              86G   6.6G    80G    8%  /etc/hosts
shm                                                 tmpfs            64M      0    64M    0%  /dev/shm
192.168.20.20:vol_b9c68075c6f20438b46db892d15ed45a  fuse.glusterfs 1014M    43M   972M    5%  /etc/nginx/conf.d
192.168.20.20:vol_32146a51be9f980c14bc86c34f67ebd5  fuse.glusterfs 1014M    43M   972M    5%  /usr/share/nginx/html
tmpfs                                               tmpfs           7.8G    12K   7.8G    1%  /run/secrets/kubernetes.io/serviceaccount
Mount the html volume and create index.html:
[root@k8s-master01 gfs]# mount -t glusterfs 192.168.20.20:vol_32146a51be9f980c14bc86c34f67ebd5 /mnt/
[root@k8s-master01 gfs]# cd /mnt/
[root@k8s-master01 mnt]# ls
[root@k8s-master01 mnt]# echo "test" > index.html
[root@k8s-master01 mnt]# kubectl exec -ti nginx-gfs-77c758ccc-2hwl6 -- cat /usr/share/nginx/html/index.html
test
Scale out nginx:
[root@k8s-master01 ~]# kubectl get deploy
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
heketi      1         1         1            1           14h
nginx-gfs   2         2         2            2           23m
[root@k8s-master01 ~]# kubectl scale deploy nginx-gfs --replicas 3
deployment.extensions/nginx-gfs scaled
[root@k8s-master01 ~]# kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
glusterfs-5npwn             1/1       Running   0          18h
glusterfs-8zfzq             1/1       Running   0          17h
glusterfs-bd5dx             1/1       Running   0          18h
heketi-5cb5f55d9f-5mtqt     1/1       Running   0          14h
nginx-gfs-77c758ccc-2hwl6   1/1       Running   0          11m
nginx-gfs-77c758ccc-6fphl   1/1       Running   0          8m
nginx-gfs-77c758ccc-kxzfz   1/1       Running   0          10m
Check the file content from the new replica:
[root@k8s-master01 ~]# kubectl exec -ti nginx-gfs-77c758ccc-6fphl -- cat /usr/share/nginx/html/index.html
test
9. Expand GlusterFS
9.1 Add a Disk to an Existing Node
Building on the nodes above, suppose a disk is added on k8s-node02.
Find the name and IP of the GFS pod deployed on k8s-node02:
[root@k8s-master01 ~]# kubectl get po -o wide -l glusterfs-node
NAME              READY     STATUS    RESTARTS   AGE   IP              NODE
glusterfs-5npwn   1/1       Running   0          20h   192.168.20.31   k8s-node02
glusterfs-8zfzq   1/1       Running   0          20h   192.168.20.20   k8s-master01
glusterfs-bd5dx   1/1       Running   0          20h   192.168.20.30   k8s-node01
Confirm the newly added device on node02 (e.g. in the output of fdisk -l):
Disk /dev/sdc: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Use heketi-cli to look up the cluster ID and all node IDs:
[root@k8s-master01 ~]# heketi-cli cluster info
Error: Cluster id missing
[root@k8s-master01 ~]# heketi-cli cluster list
Clusters:
Id:5dec5676c731498c2bdf996e110a3e5e [file][block]
[root@k8s-master01 ~]# heketi-cli cluster info 5dec5676c731498c2bdf996e110a3e5e
Cluster id: 5dec5676c731498c2bdf996e110a3e5e
Nodes:
0f00835397868d3591f45432e432ba38
d38819746cab7d567ba5f5f4fea45d91
fb181b0cef571e9af7d84d2ecf534585
Volumes:
32146a51be9f980c14bc86c34f67ebd5
56d636b452d31a9d4cb523d752ad0891
828dc2dfaa00b7213e831b91c6213ae4
b9c68075c6f20438b46db892d15ed45a
Block: true
File: true
Find the node ID that corresponds to k8s-node02:
[root@k8s-master01 ~]# heketi-cli node info 0f00835397868d3591f45432e432ba38
Node Id: 0f00835397868d3591f45432e432ba38
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-node02
Storage Hostname: 192.168.20.31
Devices:
Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb   State:online   Size (GiB):39   Used (GiB):25   Free (GiB):14   Bricks:4
Add the disk to node02 of the GFS cluster:
[root@k8s-master01 ~]# heketi-cli device add --name=/dev/sdc --node=0f00835397868d3591f45432e432ba38
Device added successfully
Check the result:
[root@k8s-master01 ~]# heketi-cli node info 0f00835397868d3591f45432e432ba38
Node Id: 0f00835397868d3591f45432e432ba38
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-node02
Storage Hostname: 192.168.20.31
Devices:
Id:5539e74bc2955e7c70b3a20e72c04615   Name:/dev/sdc   State:online   Size (GiB):39   Used (GiB):0    Free (GiB):39   Bricks:0
Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb   State:online   Size (GiB):39   Used (GiB):25   Free (GiB):14   Bricks:4
9.2 Add a New Node
Suppose k8s-master03 (IP 192.168.20.22) is to join the glusterfs cluster, and its /dev/sdc is to be added as well.
Apply the label; a pod is then created automatically:
[root@k8s-master01 kubernetes]# kubectl label node k8s-master03 storagenode=glusterfs
node/k8s-master03 labeled
[root@k8s-master01 kubernetes]# kubectl get pod -owide -l glusterfs-node
NAME              READY     STATUS              RESTARTS   AGE   IP              NODE
glusterfs-5npwn   1/1       Running             0          21h   192.168.20.31   k8s-node02
glusterfs-8zfzq   1/1       Running             0          21h   192.168.20.20   k8s-master01
glusterfs-96w74   0/1       ContainerCreating   0          2m    192.168.20.22   k8s-master03
glusterfs-bd5dx   1/1       Running             0          21h   192.168.20.30   k8s-node01
Run a peer probe from any existing node:
[root@k8s-master01 kubernetes]# kubectl exec -ti glusterfs-5npwn -- gluster peer probe 192.168.20.22
peer probe: success.
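The probe can be verified from inside any of the GFS pods (gluster peer status is the standard check):

kubectl exec -ti glusterfs-5npwn -- gluster peer status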
Register the new node with the glusterfs cluster in Heketi:
[root@k8s-master01 kubernetes]# heketi-cli cluster list
Clusters:
Id:5dec5676c731498c2bdf996e110a3e5e [file][block]
[root@k8s-master01 kubernetes]# heketi-cli node add --zone=1 --cluster=5dec5676c731498c2bdf996e110a3e5e --management-host-name=k8s-master03 --storage-host-name=192.168.20.22
Node information:
Id: 150bc8c458a70310c6137e840619758c
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname k8s-master03
Storage Hostname 192.168.20.22
Add the new node's disk to the cluster:
[root@k8s-master01 kubernetes]# heketi-cli device add --name=/dev/sdc --node=150bc8c458a70310c6137e840619758c
Device added successfully
Verify:
[root@k8s-master01 kubernetes]# heketi-cli node list
Id:0f00835397868d3591f45432e432ba38   Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:150bc8c458a70310c6137e840619758c   Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:d38819746cab7d567ba5f5f4fea45d91   Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:fb181b0cef571e9af7d84d2ecf534585   Cluster:5dec5676c731498c2bdf996e110a3e5e
[root@k8s-master01 kubernetes]# heketi-cli node info 150bc8c458a70310c6137e840619758c
Node Id: 150bc8c458a70310c6137e840619758c
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-master03
Storage Hostname: 192.168.20.22
Devices:
Id:2d5210c19858fb7ea3f805e6f582ecce   Name:/dev/sdc   State:online   Size (GiB):39   Used (GiB):0   Free (GiB):39   Bricks:0
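From here, newly created PVCs will automatically take the added node and disk into account. An existing volume can also be grown explicitly with heketi-cli; a sketch (the volume ID is the test volume from section 7, and the 10 GB increment is an arbitrary example):

heketi-cli volume expand --volume=56d636b452d31a9d4cb523d752ad0891 --expand-size=10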