Using one Ceph storage cluster from two Kubernetes clusters to migrate persistent data between them
[TOC]
The latest stable Kubernetes release at the time of writing is 1.14, and there is still no ready-made solution for migrating persistent storage between Kubernetes clusters. However, from the way Kubernetes binds PVs and PVCs, as long as the "storage" --> "PV" --> "PVC" binding chain stays the same, different Kubernetes clusters can mount the same storage and see the same data.
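Concretely, the whole chain can be read off with kubectl (the names below come from the example later in this post):

```sh
# PVC -> PV: the PVC records which PV it is bound to
kubectl get pvc rbd-pv-claim -o jsonpath='{.spec.volumeName}'
# PV -> storage: the PV records the actual Ceph RBD image backing it
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -o jsonpath='{.spec.rbd.image}'
```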
1. Environment
My original Kubernetes cluster was self-hosted on Alibaba Cloud ECS, and I now want to switch to a managed Kubernetes cluster purchased from Alibaba Cloud. Since many applications in the cluster use small volumes such as 1Gi or 2Gi, I still want to keep using the existing Ceph storage.
Kubernetes: v1.13.4
Ceph: 12.2.10 luminous (stable)
Both Kubernetes clusters manage storage through a StorageClass and connect to the same Ceph cluster. For the setup, see the earlier post: Kubernetes使用Ceph動態卷部署應用 (deploying applications with Ceph dynamic volumes).
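For reference, a minimal sketch of the StorageClass both clusters would share, assuming the external rbd-provisioner from that post; the monitors, pool and user secret match the PV shown later, while the admin ID and admin secret names are placeholders:

```sh
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 172.18.43.220:6789,172.18.138.121:6789,172.18.228.201:6789
  pool: kube
  adminId: admin                        # placeholder
  adminSecretName: ceph-admin-secret    # placeholder
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: kube-system
EOF
```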
2. Migration walkthrough
The data never leaves the Ceph storage; no real migration takes place. "Migration" here is only relative to the two Kubernetes clusters.
2.1 Extract the persistent storage from the old Kubernetes cluster
To make the effect visible, create an nginx Deployment backed by a Ceph RBD persistent volume, then write some data into it.
vim rbd-claim.yaml
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
```
vim rbd-nginx-dy.yaml
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd-dy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ceph-cephfs-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: ceph-cephfs-volume
        persistentVolumeClaim:
          claimName: rbd-pv-claim
```
```sh
# Create the PVC and the Deployment
kubectl create -f rbd-claim.yaml
kubectl create -f rbd-nginx-dy.yaml
```
Check the result and write some data into nginx's persistent directory:
```
[root@old-k8s tmp]# kubectl get pvc,pod
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/rbd-pv-claim   Bound    pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   1Gi        RWO            ceph-rbd       4m37s

NAME                                READY   STATUS    RESTARTS   AGE
pod/nginx-rbd-dy-7455884d49-rthzt   1/1     Running   0          4m36s
[root@old-k8s tmp]# kubectl exec -it nginx-rbd-dy-7455884d49-rthzt /bin/bash
root@nginx-rbd-dy-7455884d49-rthzt:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          40G   23G   15G  62% /
tmpfs            64M     0   64M   0% /dev
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda1        40G   23G   15G  62% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/rbd5       976M  2.6M  958M   1% /usr/share/nginx/html
tmpfs            16G   12K   16G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            16G     0   16G   0% /proc/acpi
tmpfs            16G     0   16G   0% /proc/scsi
tmpfs            16G     0   16G   0% /sys/firmware
root@nginx-rbd-dy-7455884d49-rthzt:/# echo ygqygq2 > /usr/share/nginx/html/ygqygq2.html
root@nginx-rbd-dy-7455884d49-rthzt:/# exit
exit
[root@old-k8s tmp]#
```
Extract the PV and PVC definitions:
```
[root@old-k8s tmp]# kubectl get pvc rbd-pv-claim -oyaml --export > rbd-pv-claim-export.yaml
[root@old-k8s tmp]# kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -oyaml --export > pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
[root@old-k8s tmp]# more rbd-pv-claim-export.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pvc-protection
  name: rbd-pv-claim
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/rbd-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  dataSource: null
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
  volumeMode: Filesystem
  volumeName: pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
status: {}
[root@old-k8s tmp]# more pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: ceph.com/rbd
    rbdProvisionerIdentity: ceph.com/rbd
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
  selfLink: /api/v1/persistentvolumes/pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: rbd-pv-claim
    namespace: default
    resourceVersion: "51998402"
    uid: d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
  persistentVolumeReclaimPolicy: Retain
  rbd:
    fsType: ext4
    image: kubernetes-dynamic-pvc-dac8284a-6a1c-11e9-b533-1604a9a8a944
    keyring: /etc/ceph/keyring
    monitors:
    - 172.18.43.220:6789
    - 172.18.138.121:6789
    - 172.18.228.201:6789
    pool: kube
    secretRef:
      name: ceph-secret
      namespace: kube-system
    user: kube
  storageClassName: ceph-rbd
  volumeMode: Filesystem
status: {}
[root@old-k8s tmp]#
```
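A note on `--export`: it was deprecated in kubectl 1.14 and removed in 1.18, so on newer clusters dump the objects as-is and strip the cluster-specific fields yourself (a sketch):

```sh
kubectl get pvc rbd-pv-claim -oyaml > rbd-pv-claim-export.yaml
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -oyaml \
  > pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
# then remove metadata.resourceVersion, metadata.uid, metadata.selfLink,
# metadata.creationTimestamp and the status section from both files
```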
2.2 Import the extracted PV and PVC into the new Kubernetes cluster
Transfer the PV and PVC extracted above to the new Kubernetes cluster:
```
[root@old-k8s tmp]# rsync -avz pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml rbd-pv-claim-export.yaml rbd-nginx-dy.yaml 172.18.97.95:/tmp/
sending incremental file list
pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
rbd-nginx-dy.yaml
rbd-pv-claim-export.yaml

sent 1,371 bytes  received 73 bytes  2,888.00 bytes/sec
total size is 2,191  speedup is 1.52
[root@old-k8s tmp]#
```
Import the PV and PVC on the new Kubernetes cluster:
```
[root@new-k8s tmp]# kubectl apply -f pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml -f rbd-pv-claim-export.yaml
persistentvolume/pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee created
persistentvolumeclaim/rbd-pv-claim created
[root@new-k8s tmp]# kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                  STORAGECLASS   REASON   AGE
pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   1Gi        RWO            Retain           Released   default/rbd-pv-claim   ceph-rbd                20s
[root@new-k8s tmp]# kubectl get pvc rbd-pv-claim
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-pv-claim   Lost     pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   0          ceph-rbd       28s
[root@new-k8s tmp]#
```
As you can see, the PVC status shows Lost. This happens because when the PV and PVC are imported into the new Kubernetes cluster, they are assigned a fresh resourceVersion and uid, while spec.claimRef in the imported PV still carries the old cluster's values, as the check below shows.
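A quick way to see the mismatch (both commands are read-only):

```sh
# The imported PV still references the old cluster's PVC uid ...
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -o jsonpath='{.spec.claimRef.uid}'
# ... while the re-created PVC got a freshly generated uid
kubectl get pvc rbd-pv-claim -o jsonpath='{.metadata.uid}'
```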
To get rid of the stale spec.claimRef, we delete it from the imported PV and let Kubernetes re-bind the PV and PVC automatically. Here is a small script to handle it:
vim unbound.sh
```sh
#!/bin/bash
# Usage: sh unbound.sh <pv-name>
pv=$*

function unbound() {
    # Blank out the stale claimRef fields carried over from the old cluster
    kubectl patch pv $pv -p '{"spec":{"claimRef":{"apiVersion":"","kind":"","name":"","namespace":"","resourceVersion":"","uid":""}}}'
    kubectl get pv $pv -oyaml > /tmp/.pv.yaml
    # Drop the now-empty claimRef line so the controller can re-bind the PV
    sed '/claimRef/d' -i /tmp/.pv.yaml
    #kubectl apply -f /tmp/.pv.yaml
    kubectl replace -f /tmp/.pv.yaml
}

unbound
```
```sh
sh unbound.sh pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
```
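As an aside, the same effect can be achieved in a single step: a strategic-merge patch that sets claimRef to null removes the whole field at once:

```sh
kubectl patch pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -p '{"spec":{"claimRef":null}}'
```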
About ten seconds after the script runs, check the result: the PV and PVC should have re-bound and both report a Bound status.
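A quick check:

```sh
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   # STATUS should be Bound
kubectl get pvc rbd-pv-claim                              # STATUS should be Bound
```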
Now verify with the rbd-nginx-dy.yaml transferred earlier. Before that, since this is a Ceph RBD volume, the pod on the old Kubernetes cluster that still holds the RBD image must be removed first:
Old Kubernetes cluster:
```
[root@old-k8s tmp]# kubectl delete -f rbd-nginx-dy.yaml
deployment.extensions "nginx-rbd-dy" deleted
```
New Kubernetes cluster:
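A minimal verification sketch (the new pod's name will differ; replace the placeholder accordingly):

```sh
kubectl create -f /tmp/rbd-nginx-dy.yaml
kubectl get pod -l name=nginx
# exec into the new pod and confirm the data written on the old cluster survived
kubectl exec -it <new-nginx-pod> -- cat /usr/share/nginx/html/ygqygq2.html
# expected output: ygqygq2
```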
3. Summary
The experiment above uses a ReadWriteOnce (RWO) PVC. Now imagine a ReadWriteMany (RWX) volume shared by several Kubernetes clusters; in that scenario this technique could be even more valuable.
When running Kubernetes, the definitions and binding relationships of PVs, PVCs and the underlying storage are critical, so it is worth backing them up routinely. With such backups, even if the Kubernetes etcd data is corrupted, you can still recover and migrate the cluster's persistent data.
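For example, a simple daily backup could be as small as this (the backup path is a placeholder):

```sh
# Dump every PV and PVC definition for disaster recovery
kubectl get pv -oyaml > /backup/pv-$(date +%F).yaml
kubectl get pvc --all-namespaces -oyaml > /backup/pvc-$(date +%F).yaml
```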