
Deploying GlusterFS and Heketi

I. Preface and Environment

  When working with Kubernetes StatefulSets and other components that need persistent storage, you usually rely on dynamic PV provisioning, which requires a storage system that supports it. Among such systems, GlusterFS is comparatively simple to set up and offers solid data safety when compared with Ceph and NFS.

Environment (the gluster-server nodes trust each other via SSH):

II. Deploying GlusterFS

1. Install the glusterfs-server package on each of the three nodes and start the service
[root@gluster-server01 ~]# yum clean all && yum makecache fast
[root@gluster-server01 ~]# yum install centos-release-gluster -y
[root@gluster-server01 ~]# yum --enablerepo=centos-gluster*-test install glusterfs-server -y
[root@gluster-server01 ~]# systemctl enable glusterd && systemctl start glusterd
[root@gluster-server01 ~]# systemctl status glusterd # make sure the service started successfully

# repeat the same steps on the other two nodes

2. From any one node, probe the other nodes to form the GlusterFS cluster
[root@gluster-server01 ~]# gluster peer probe gluster-server02
peer probe: success.
[root@gluster-server01 ~]# gluster peer probe gluster-server03
peer probe: success.
[root@gluster-server01 ~]# gluster peer status
Number of Peers: 2

Hostname: gluster-server02
Uuid: 82a98899-550b-466c-80f3-c56b85059e9a
State: Peer in Cluster (Connected)

Hostname: gluster-server03
Uuid: 9fdb08a3-b5ca-4e93-9701-d49e737f92e8
State: Peer in Cluster (Connected)

III. Deploying Heketi (on gluster-server01)

Heketi provides a RESTful management interface that can be used to manage the lifecycle of GlusterFS volumes. With Heketi, GlusterFS volumes can be dynamically provisioned for consumers such as OpenStack Manila, Kubernetes, and OpenShift. Heketi dynamically selects bricks across the cluster to build the requested volumes, ensuring that replicas of the data are spread across the cluster's failure domains. Heketi also supports any number of GlusterFS clusters, so connecting clients are not limited to a single GlusterFS cluster.
1. Install the Heketi service
[root@gluster-server01 ~]# yum install heketi heketi-client -y
2. Configure SSH key-based authentication so that Heketi can connect to every node of the GlusterFS cluster
[root@gluster-server01 ~]# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
[root@gluster-server01 ~]# chown heketi:heketi /etc/heketi/heketi*
[root@gluster-server01 ~]# for host in {01..03}; do ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster-server${host}; done
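Since the loop's brace expansion is easy to get wrong, a dry run that only prints the commands can confirm the host list first (a sketch; remove `echo` to actually copy the keys):

```shell
# Dry run: print the ssh-copy-id commands the loop would execute
# instead of running them (drop "echo" to perform the real copy).
for host in {01..03}; do
  echo ssh-copy-id -i /etc/heketi/heketi_key.pub "root@gluster-server${host}"
done
```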
3. Heketi's main configuration file, /etc/heketi/heketi.json, defines the listening port, authentication, and how to connect to the Gluster storage cluster
[root@gluster-server01 ~]# cp /etc/heketi/heketi.json /etc/heketi/heketi.json-bak
[root@gluster-server01 ~]# cat /etc/heketi/heketi.json
{
  "port": "8080",
  "use_auth": false,  # to require authentication when connecting to Heketi, set use_auth to true and set a password for each user in the jwt{} section; usernames and passwords are both customizable

  "jwt": {
    "admin": {
      "key": "My Secret"
    },
    "user": {
      "key": "My Secret"
    }
  },

  "glusterfs": {
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },
    "db": "/var/lib/heketi/heketi.db",
    "loglevel" : "debug"
  }
}
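Heketi will refuse to start on malformed JSON, so it is worth validating the file after editing. A minimal sketch (note that the `#` annotation in the listing above is for the article only and must not appear in the real file):

```shell
# Validate heketi.json syntax before (re)starting the service.
# Real JSON must not contain "#" comments like the annotated listing above.
cfg=/etc/heketi/heketi.json
if python3 -m json.tool "$cfg" >/dev/null 2>&1; then
  echo "$cfg: valid JSON"
else
  echo "$cfg: missing or invalid JSON"
fi
```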
4. Start the heketi service
[root@gluster-server01 ~]# systemctl enable heketi && systemctl start heketi
[root@gluster-server01 ~]# systemctl status heketi
5. Send a test request to heketi
[root@gluster-server01 ~]# curl http://gluster-server01:8080/hello
Hello from Heketi

IV. Setting up the Heketi Topology

The topology tells Heketi which nodes, disks, and clusters it may use. The administrator must determine the failure domain of each node. A failure domain is an integer value assigned to a group of nodes that share the same switch, power supply, or any other component whose failure would take them all down at once. The administrator must also decide which nodes make up a cluster; Heketi uses this information to ensure that replicas are created across failure domains, providing data redundancy. Heketi supports multiple Gluster storage clusters.
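For example (a hypothetical variation, not part of this deployment): if the three nodes sat in different racks, each node stanza in the topology file could carry a distinct zone value, so that Heketi never places two replicas in the same rack:

```json
"node": {
  "hostnames": {
    "manage": ["192.168.1.108"],
    "storage": ["192.168.1.108"]
  },
  "zone": 2
}
```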
1. Load the predefined cluster topology with the client (note: the client version must match the server version)

Example: based on the current Gluster environment, define gluster-server01, gluster-server02, and gluster-server03 in a single cluster, and specify the device each node contributes for storage

[root@gluster-server01 heketi]# cat topolgy_demo.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.1.107"],
              "storage": ["192.168.1.107"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.1.108"],
              "storage": ["192.168.1.108"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.1.109"],
              "storage": ["192.168.1.109"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        }
      ]
    }
  ]
}
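The three node stanzas differ only in IP address, so the file can also be generated with a small loop rather than copied by hand. A sketch, where the host list and device are assumptions matching this article's environment and the output path is arbitrary:

```shell
#!/bin/bash
# Generate a Heketi topology file for nodes that all share one zone
# and contribute the same device, then validate the resulting JSON.
out=/tmp/topology_gen.json
{
  printf '{ "clusters": [ { "nodes": [\n'
  sep=""
  for ip in 192.168.1.107 192.168.1.108 192.168.1.109; do
    printf '%s{ "node": { "hostnames": { "manage": ["%s"], "storage": ["%s"] }, "zone": 1 }, "devices": ["/dev/sdb"] }\n' "$sep" "$ip" "$ip"
    sep=","
  done
  printf '] } ] }\n'
} > "$out"
python3 -m json.tool "$out" >/dev/null && echo "valid JSON: $out"
```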

Load the topology; Heketi creates the cluster and nodes and assigns each a generated ID

[root@gluster-server01 heketi]# echo "export HEKETI_CLI_SERVER=http://gluster-server01:8080" > /etc/profile.d/heketi.sh
[root@gluster-server01 heketi]# source /etc/profile.d/heketi.sh
[root@gluster-server01 heketi]# echo $HEKETI_CLI_SERVER
http://gluster-server01:8080
[root@gluster-server01 heketi]# heketi-cli topology load --json=topolgy_demo.json
Creating cluster ... ID: 34be103e76c2254779d3c0dbd029acbd
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node 192.168.1.107 ... ID: 389b66793f41ed74ab30109e8d1faf85
        Adding device /dev/sdb ... OK
    Creating node 192.168.1.108 ... ID: d3bffc39419abfe1a04d2c235f9720f3
        Adding device /dev/sdb ... OK
    Creating node 192.168.1.109 ... ID: 45b7db6cd1cb0405f07ac634a82b9fc9
        Adding device /dev/sdb ... OK
2. Use the generated Cluster ID to check the cluster's status
[root@gluster-server01 heketi]# heketi-cli cluster info 34be103e76c2254779d3c0dbd029acbd  # the cluster ID from the first line of the load output
Cluster id: 34be103e76c2254779d3c0dbd029acbd
Nodes:
389b66793f41ed74ab30109e8d1faf85
45b7db6cd1cb0405f07ac634a82b9fc9
d3bffc39419abfe1a04d2c235f9720f3
Volumes:

Block: true

File: true
3. Create a test volume
[root@gluster-server01 heketi]# heketi-cli volume create --size=5
Name: vol_9f2dde345a9b7566f8134c3952251d7a
Size: 5
Volume Id: 9f2dde345a9b7566f8134c3952251d7a
Cluster Id: 34be103e76c2254779d3c0dbd029acbd
Mount: 192.168.1.107:vol_9f2dde345a9b7566f8134c3952251d7a   # the address clients mount
Mount Options: backup-volfile-servers=192.168.1.109,192.168.1.108
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 3
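Note that with Durability Type `replicate` and Distributed+Replica 3, every gigabyte of usable space consumes three gigabytes of raw brick capacity across the cluster; a quick check:

```shell
# Raw brick capacity consumed by a replicated volume = size * replica count.
size_gb=5     # usable size requested above
replica=3     # Distributed+Replica: 3
echo "raw capacity consumed: $((size_gb * replica)) GB"
```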
4. On a storage client, install the glusterfs client packages and mount the test volume (if you have a Kubernetes cluster, you can also test from it)
[root@gluster-client01 ~]# yum clean all && yum makecache fast
[root@gluster-client01 ~]# yum install centos-release-gluster -y
[root@gluster-client01 ~]# yum --enablerepo=centos-gluster*-test install glusterfs glusterfs-fuse -y
[root@gluster-client01 ~]# mount -t glusterfs 192.168.1.107:vol_9f2dde345a9b7566f8134c3952251d7a /mnt  # the Mount address from the volume info above
[root@gluster-client01 ~]# df -h
Filesystem                                          Size  Used Avail Use% Mounted on
/dev/mapper/cl-root                                  17G  2.0G   15G   12% /
devtmpfs                                            330M     0  330M    0% /dev
tmpfs                                               341M     0  341M    0% /dev/shm
tmpfs                                               341M  4.8M  336M    2% /run
tmpfs                                               341M     0  341M    0% /sys/fs/cgroup
/dev/sda1                                          1014M  139M  876M   14% /boot
tmpfs                                                69M     0   69M    0% /run/user/0
192.168.1.107:vol_9f2dde345a9b7566f8134c3952251d7a  5.0G   84M  5.0G    2% /mnt
5. After testing is complete, delete the volume
[root@gluster-server01 heketi]# heketi-cli volume delete 9f2dde345a9b7566f8134c3952251d7a  # the Volume Id from the creation output
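With Heketi running, the dynamic PV provisioning mentioned in the preface only needs a Kubernetes StorageClass pointing at this endpoint. A minimal sketch, assuming the in-tree kubernetes.io/glusterfs provisioner and the unauthenticated Heketi endpoint configured above (the StorageClass name and output path are arbitrary):

```shell
# Write a StorageClass manifest for the Heketi endpoint set up above;
# apply it against your cluster with: kubectl apply -f /tmp/glusterfs-sc.yaml
cat > /tmp/glusterfs-sc.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://gluster-server01:8080"
  restauthenabled: "false"
  volumetype: "replicate:3"
EOF
echo "wrote /tmp/glusterfs-sc.yaml"
```

PVCs that reference this StorageClass will then be served by Heketi-created GlusterFS volumes.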