
Using Ceph-backed dynamic volumes in k8s


Dynamic volume provisioning in Kubernetes with Ceph RBD

1. Environment overview:

  This walkthrough shows how to use an existing Ceph cluster as the backend for dynamically provisioned persistent volumes (PVs) in k8s. It assumes you already have a working Ceph cluster.

2. Configuration steps:

1. Install the ceph-common package on all k8s nodes

yum install -y ceph-common
# Install the ceph-common package on every k8s node, masters and workers alike
# With many k8s nodes, ansible makes this easier:
ansible kube-master -m copy -a "src=ceph.repo backup=yes dest=/etc/yum.repos.d"
ansible kube-master -m yum -a "name=ceph-common state=present"
ansible kube-node -m copy -a "src=ceph.repo backup=yes dest=/etc/yum.repos.d"
ansible kube-node -m yum -a "name=ceph-common state=present"
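
To confirm the package actually landed everywhere, the install can be verified across both groups in one pass (a small optional check, assuming the same kube-master and kube-node inventory groups as above):

ansible kube-master:kube-node -m command -a "ceph --version"
# Every host should report the same ceph-common version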

2. Create a pool for dynamic volumes
  On the Ceph admin node, create a pool named kube:

ceph osd pool create kube 1024
[root@ceph-admin cluster]# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    3809G     3793G       15899M          0.41
POOLS:
    NAME        ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd         0           0         0         1196G           0
    k8sdemo     1           0         0         1196G           0
    kube        2      72016k         0         1196G          30
[root@ceph-admin cluster]# cd /cluster
# Create a key that k8s will use to authenticate with Ceph
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
[root@ceph-admin cluster]# ls ceph.client.kube.keyring
ceph.client.kube.keyring
[root@ceph-admin cluster]#
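
On Ceph Luminous and later, a newly created pool should also be tagged with the application that will use it, and the capabilities just granted can be double-checked (a short optional sketch; skip the first command on pre-Luminous clusters):

ceph osd pool application enable kube rbd
# Confirm that client.kube received the caps requested above
ceph auth get client.kube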

3. Create a secret for the Ceph admin user on the k8s cluster

[root@ceph-admin cluster]# ceph auth get-key client.admin | base64
QVFEdkJhZGN6ZW41SFJBQUQ5RzNJSVU0djlJVXRRQzZRZjBnNXc9PQ==
[root@ceph-admin cluster]#
# Run `ceph auth get-key client.admin | base64` on one of the Ceph MON nodes, then copy the output and paste it as the value of the secret's key

[root@k8s-master ceph]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
data:
  key: QVFEdkJhZGN6ZW41SFJBQUQ5RzNJSVU0djlJVXRRQzZRZjBnNXc9PQ==
type: kubernetes.io/rbd
[root@k8s-master ceph]#

kubectl apply -f ceph-secret.yaml

[root@k8s-master ceph]# kubectl describe secrets -n kube-system ceph-secret
Name:         ceph-secret
Namespace:    kube-system
Labels:       <none>
Annotations:
Type:         kubernetes.io/rbd

Data
====
key:  40 bytes
[root@k8s-master ceph]#
# Dynamic RBD provisioning in k8s requires this Ceph admin secret
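
Equivalently, the secret can be created in a single step without hand-editing base64, because kubectl encodes literal values itself (a minimal sketch, run where the admin keyring is available):

kubectl create secret generic ceph-secret --namespace=kube-system \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)"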

4. Create a secret for the Ceph user (client.kube) on the k8s cluster

[root@ceph-admin cluster]# ceph auth get-key client.kube | base64
QVFDTks2ZGNjcEZoQmhBQWs4anVvbmVXZnZUeitvMytPbGZ6OFE9PQ==
[root@ceph-admin cluster]#
# Run `ceph auth get-key client.kube | base64` on one of the Ceph MON nodes, then copy the output and paste it as the value of the secret's key
[root@k8s-master ceph]# cat ceph-user-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: kube-system
data:
  key: QVFDTks2ZGNjcEZoQmhBQWs4anVvbmVXZnZUeitvMytPbGZ6OFE9PQ==
type: kubernetes.io/rbd
[root@k8s-master ceph]#

kubectl apply -f ceph-user-secret.yaml

[root@k8s-master ceph]# kubectl get secrets -n kube-system ceph-user-secret
NAME               TYPE                DATA   AGE
ceph-user-secret   kubernetes.io/rbd   1      3h45m
[root@k8s-master ceph]#

# Dynamic RBD provisioning in k8s also requires this per-user Ceph secret
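
As a sanity check, the stored key can be decoded and compared against what Ceph issued (a quick verification sketch):

kubectl get secret ceph-user-secret -n kube-system -o jsonpath='{.data.key}' | base64 -d
# The output should match `ceph auth get-key client.kube` exactly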

5. Create a StorageClass for dynamic volumes on the k8s cluster

[root@k8s-master ceph]# cat ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dynamic
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.83.32.224:6789,10.83.32.225:6789,10.83.32.234:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system 
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
[root@k8s-master ceph]#
kubectl apply -f ceph-storageclass.yaml
# monitors: Ceph MON addresses and ports, comma-separated
# adminId: Ceph client ID capable of creating images in the pool
# adminSecretName: name of the secret for adminId; required, and the secret must have type kubernetes.io/rbd
# adminSecretNamespace: namespace of the admin secret; default is "default"
# pool: Ceph RBD pool; default is "rbd", but that value is not recommended
# userId: Ceph client ID used to map the RBD image; default is the same as adminId
# userSecretName: name of the Ceph secret for userId used to map the image; it must exist in the same namespace as the PVCs, and it is required
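
Because the class is annotated as the default, PVCs that omit storageClassName will fall through to it; kubectl marks the default class in its listing (a quick check, output abbreviated and illustrative):

kubectl get storageclass
# NAME                PROVISIONER         AGE
# dynamic (default)   kubernetes.io/rbd   1m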

6. Create a persistent volume claim (PVC) on the k8s cluster

  A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, a PVC is bound to a single PV based on these two attributes alone. Once a PV is bound to a PVC, it is effectively tied to the PVC's namespace and cannot be bound by another PVC: PVs and PVCs map one to one. Multiple pods in the same namespace can, however, use the same PVC.
  For a PV, accessModes do not enforce access rights; they act as labels that match PVs to PVCs. Because the StorageClass above is marked as the default, the PVC below does not need to specify storageClassName.

[root@k8s-master ceph]# cat ceph-class.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
[root@k8s-master ceph]#
kubectl apply -f ceph-class.yaml 
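
After the claim is applied, the provisioner should create an RBD image and a matching PV automatically; this can be verified from both sides (a sketch; the in-tree provisioner names images kubernetes-dynamic-pvc-<uuid>):

# On the k8s side: the claim should report Bound with a generated volume name
kubectl get pvc -n kube-system ceph-claim
# On the Ceph side: a kubernetes-dynamic-pvc-* image should now exist in the pool
rbd ls kube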

7. Create a pod on the k8s cluster that uses the RBD-backed PVC

  Note the volume name: it must be identical in the containers and volumes sections.

[root@k8s-master ceph]# cat ceph-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
  namespace: kube-system
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep","60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
[root@k8s-master ceph]#
kubectl apply -f  ceph-pod.yaml
[root@k8s-master ceph]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
ceph-pod1                               1/1     Running   0          3h21m

# Enter the container and check the mount
[root@k8s-master ceph]# kubectl exec -it -n kube-system ceph-pod1 -- /bin/sh
/ # df -h|grep busybox
/dev/rbd0                 1.9G      6.0M      1.9G   0% /usr/share/busybox
/ # 
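
A quick write/read round-trip inside the pod confirms the RBD device is writable (a minimal sketch, continuing in the same shell):

/ # echo "hello ceph" > /usr/share/busybox/test.txt
/ # cat /usr/share/busybox/test.txt
hello ceph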

