
OpenStack Cinder Integration with Various Backend Storage Technologies: Notes and Practice

The Cinder project exists to manage block devices, and its most important job is adapting cleanly to all kinds of storage backends so that their features can be put to good use. This article is a summary of running Cinder against several backends (LVM, FC+SAN, iSCSI+SAN, NFS, VMware, GlusterFS), mostly so I don't forget it myself. Comments and exchanges are welcome.


1. LVM

The entry-level storage for starting an OpenStack Cinder journey: leave cinder.conf completely unconfigured and the default backend is LVM. How LVM works:

A partition is first turned into a physical volume with pvcreate, several physical volumes are combined into a volume group, and whenever a volume is created, lvcreate carves a logical volume out of that group. (A rough sketch of the commands involved follows.)
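For intuition, here is a rough sketch of what happens underneath when a 1 GB volume is created and deleted (the UUID is a placeholder; the real commands are built and run by Cinder's LVM driver):

# carve a 1 GB logical volume for the new Cinder volume out of the cinder-volumes VG
lvcreate -L 1G -n volume-<uuid> cinder-volumes
# list the logical volumes in the group
lvs cinder-volumes
# deleting the Cinder volume removes the logical volume again
lvremove -f cinder-volumes/volume-<uuid>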

For the test deployment, dd creates a file of the chosen size (10 GB in this example) called cinder-volumes in the current directory; losetup then maps it to a loop device (a virtual block device); that block device is initialized as a physical volume and a volume group is created on top of it. A volume group can include several PVs at creation time; this example uses just one.


dd if=/dev/zero of=/vol/cinder-volumes bs=1 count=0 seek=10G 
# Attach the file to a loop device. 
loopdev=`losetup -f` 
losetup $loopdev /vol/cinder-volumes 
# Initialize as a physical volume. 
pvcreate $loopdev 
# Create the volume group. 
vgcreate cinder-volumes $loopdev 
# Verify the physical volume has been created correctly. 
pvscan

Once the volume group exists, the default cinder.conf settings are enough. Note that in a multi-node setup, iscsi_ip_address must be set to the management IP of the node providing the storage service. (A minimal sketch of the relevant options follows.)
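For reference, a minimal sketch of the relevant cinder.conf options for the LVM backend (option names match the Havana/Icehouse-era releases this article was written against; the IP is a placeholder):

volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes
iscsi_helper = tgtadm
# only really matters with multiple nodes: the management IP of the cinder-volume node
iscsi_ip_address = <storage_node_mgmt_ip>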

Restart the cinder-volume service,

and normal volume operations (create, attach, detach, and so on) will just work.

Question 1: how does attach work with LVM?

Creation is easy: just lvcreate. Attaching is a bit more involved: the volume first has to be exported as a SCSI target device (a target with a LUN ID), and then the Linux SCSI initiator software connects to that target. Two pieces of software are involved: the SCSI target management software (there are several: Tgt, Lio, Iet, ISERTgt; Tgt is the default; all of them expose block-level SCSI storage to operating systems that have a SCSI initiator) and the Linux SCSI initiator, so the two steps map to the commands tgtadm and iscsiadm respectively. (A rough sketch follows.)
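As an illustration of those two steps, a heavily simplified sketch (the IQN, UUID and IP are placeholders; in reality the driver and Nova generate and run these for you):

# --- on the storage node: export the LV as an iSCSI target with tgtadm ---
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2010-10.org.openstack:volume-<uuid>
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/cinder-volumes/volume-<uuid>
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
# --- on the compute node: discover the target and log in with iscsiadm ---
iscsiadm -m discovery -t sendtargets -p <storage_node_ip>:3260
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-<uuid> -p <storage_node_ip>:3260 --login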

2. FC (Fibre Channel) + SAN devices

Requirements: a) The machine hosting the compute node must have an HBA (Fibre Channel adapter).
There are two ways to check whether a host has an HBA.

Method 1:

$ lspci
20:00.0 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)
20:00.1 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)
Method 2:

Check /sys/class/fc_host/.

When there are two FC adapters, there will be two directories, host1 and host2.
$ cat /sys/class/fc_host/host1/port_name
0x10000090fa1b825a    # the WWPN (serves the same purpose as a MAC address)
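A small convenience loop (assuming the sysfs layout shown above) prints the WWPN of every HBA at once:

for h in /sys/class/fc_host/host*; do
    echo "$h: $(cat $h/port_name)"
done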
b) The adapter must be connected to the backend storage with a fibre cable. Taking an IBM SVC as the example, the link must actually be up: you can log into the SVC GUI and check that the host is active, or ssh into the SVC and run:
ww_2145:SVC:superuser>svcinfo lsfabric -delim ! -wwpn "10000090fa1b825a"

10000090FA1B825A!0A0C00!3!node_165008!500507680130DBEA!2!0A0500!active!x3560m4-06MFZF1!!Host
Only then will attaching and detaching volumes work without problems.
Now the hands-on part, taking a Storwize device as the example (cinder.conf):
volume_driver = cinder.volume.drivers.storwize_svc.StorwizeSVCDriver
san_ip = 10.2.2.123
san_login = superuser
#san_password = passw0rd
san_private_key = /svc_rsa
storwize_svc_volpool_name = DS3524_DiskArray1
storwize_svc_connection_protocol = FC

san_password and san_private_key are alternatives; the san_private_key way is recommended. Generate the key pair with ssh-keygen, keep the private key, and put the public key on the SAN device. When another host later needs to connect to the same storage device, it can reuse this private key without generating a new one. (A sketch follows.)
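A sketch of the key setup (the file name matches san_private_key above; how the public key is installed on the SVC/Storwize side depends on the device, usually through its user settings in the GUI or CLI, so that step is only described here):

# generate an RSA key pair without a passphrase; the private key stays on the cinder node
ssh-keygen -t rsa -N "" -f /svc_rsa
# /svc_rsa      -> referenced by san_private_key in cinder.conf
# /svc_rsa.pub  -> upload this public key to the superuser account on the SAN device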

Test: create a volume.

[[email protected] ~]#  cinder create --display-name test55 1
[[email protected] ~]#  nova volume-list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 24f7e457-f71a-43ce-9ca6-4454fbcfa31f | available | test55       | 1    | None        |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
Use the following existing instance for the attach, which saves booting a new one:
[[email protected] ~]#  nova list
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks           |
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| 77d7293f-7a20-4f36-ac86-95f4c24b29ae | test2 | ACTIVE | -          | Running     | net_local=10.0.1.5 |
+--------------------------------------+-------+--------+------------+-------------+--------------------+
[[email protected] ~]# nova volume-attach 77d7293f-7a20-4f36-ac86-95f4c24b29ae 24f7e457-f71a-43ce-9ca6-4454fbcfa31f
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 24f7e457-f71a-43ce-9ca6-4454fbcfa31f |
| serverId | 77d7293f-7a20-4f36-ac86-95f4c24b29ae |
| volumeId | 24f7e457-f71a-43ce-9ca6-4454fbcfa31f |
+----------+--------------------------------------+
[[email protected] ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 24f7e457-f71a-43ce-9ca6-4454fbcfa31f |   in-use  |    test55    |  1   |     None    |  false   | 77d7293f-7a20-4f36-ac86-95f4c24b29ae |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+

3. iSCSI + SAN devices

Here the storage device is reached over TCP/IP: all that is needed is that the storage service node can ping the SAN management IP and that the compute node can ping the iSCSI node IP on the storage device. (A quick connectivity check is sketched below.)
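A quick way to verify both conditions (IPs other than san_ip are placeholders; the discovery command assumes iscsi-initiator-utils is installed on the compute node):

# from the cinder-volume node: can we reach the SAN management IP?
ping -c 3 10.2.2.123
# from the compute node: can we reach and discover the iSCSI portal on the storage device?
ping -c 3 <iscsi_node_ip>
iscsiadm -m discovery -t sendtargets -p <iscsi_node_ip>:3260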

Taking IBM SVC or V7000 as the example, the only difference from the FC configuration is:
storwize_svc_connection_protocol = FC  ==>  storwize_svc_connection_protocol = iSCSI
The test process is the same as above; everything works.

4. Using VMware

This one uses vCenter to manage block storage. Cinder is really just a wrapper layer here: in the end everything calls vCenter's own storage management functions, so Cinder acts as a relay. Modify the following options in cinder.conf:

volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = $VCENTER_HOST_IP
vmware_host_username = $VCENTER_HOST_USERNAME
vmware_host_password = $VCENTER_HOST_PASSWORD
vmware_wsdl_location = $WSDL_LOCATION
# VIM Service WSDL Location
# example: 'file:///home/SDK5.5/SDK/vsphere-ws/wsdl/vim25/vimService.wsdl'

The test process is the same as in section 2; everything works. Optionally, a volume type can control how the backing VMDK is provisioned, as sketched below.
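For example (only if the VMDK driver in your release supports the vmware:vmdk_type extra spec, so treat this as a sketch rather than a guarantee):

cinder type-create vmware_thin
cinder type-key vmware_thin set vmware:vmdk_type=thin
cinder create --display-name test-vmdk --volume-type vmware_thin 1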

5. NFS

A very common network file system; its internals are easy to google, so let's go straight to the Cinder practice.

Step 1: Plan the NFS server side: which nodes, which directories. Here two nodes serve as NFS servers, 10.11.0.16:/var/volume_share and 10.11.1.178:/var/volume_share. Create the directory /var/volume_share on both machines, export it over NFS, and start the NFS service on both nodes, roughly as sketched below.
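A minimal sketch of the server side, run on each of the two nodes (the export options are just an example; tighten them to your security needs):

mkdir -p /var/volume_share
# export the directory (example options; restrict the client range in real deployments)
echo "/var/volume_share *(rw,sync,no_root_squash)" >> /etc/exports
exportfs -a
service nfs start
chkconfig nfs on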

      
Step 2: Create /etc/cinder/shares.txt with the following content, telling Cinder which shares can be mounted:
10.11.0.16:/var/volume_share
10.11.1.178:/var/volume_share
Adjust permissions and group:
$ chmod 0640 /etc/cinder/shares.txt
$ chown root:cinder /etc/cinder/shares.txt
Step 3: Edit /etc/cinder/cinder.conf:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares.txt
nfs_mount_point_base=$state_path/mnt
Restart the cinder-volume service and that's it; the test process is the same as in section 2. (A quick sanity check is sketched below.)
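A quick sanity check after the restart (assuming the default state_path of /var/lib/cinder, so the mount point base above resolves to /var/lib/cinder/mnt):

cinder create --display-name test-nfs 1
# the shares should now be mounted under the mount point base,
# and the new volume shows up as a file inside one of them
mount | grep /var/lib/cinder/mnt
ls -lh /var/lib/cinder/mnt/*/volume-*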

Once, after an environment change, volume-attach reported an error:
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     connector)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 239, in attach
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     device_type=self['device_type'], encryption=encryption)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1263, in attach_volume
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     disk_dev)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1250, in attach_volume
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 179, in doit
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 139, in proxy_call
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     rv = execute(f,*args,**kwargs)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 77, in tworker
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     rv = meth(*args,**kwargs)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 419, in attachDeviceFlags
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher libvirtError: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be initialized

This error comes from libvirt; it is an SELinux restriction, and the following setting fixes it. First check whether virt_use_nfs is off or on:
$ /usr/sbin/getsebool virt_use_nfs
If it is off, set it:
$ /usr/sbin/setsebool -P virt_use_nfs on

6. GlusterFS

Having written this much, I think this one is the best of the bunch; no wonder Red Hat acquired it. GlusterFS is a distributed file system that scales to clusters in the petabyte range, aggregating storage bricks of different types over InfiniBand RDMA or TCP/IP into one large parallel network file system.

Two characteristics I have noticed, in brief:
    1. Strong horizontal scalability: brick servers on different nodes can be combined into one large parallel network file system.
    2. Software RAID: striping [stripe] and mirrored volumes [replica] improve concurrent read/write throughput and fault tolerance.

Below is a complete Cinder + GlusterFS walkthrough, with explanations of GlusterFS's useful features woven in along the way.

Step 1: Install and set up the GlusterFS server environment.

This example uses 10.11.0.16 and 10.11.1.178 as the two nodes; first install the packages on both.

Two ways: a yum repository or RPM packages.

            Option 1: yum -y install glusterfs glusterfs-fuse glusterfs-server
            Option 2: download the packages, e.g. from http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.0/RHEL/epel-6.5/x86_64/
                        glusterfs-3.5.0-2.el6.x86_64.rpm         glusterfs-fuse-3.5.0-2.el6.x86_64.rpm    glusterfs-server-3.5.0-2.el6.x86_64.rpm
                        glusterfs-cli-3.5.0-2.el6.x86_64.rpm     glusterfs-libs-3.5.0-2.el6.x86_64.rpm

I downloaded version 3.5 and installed it with rpm, roughly as sketched below.
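A sketch of the RPM install on each node (yum localinstall resolves the dependency order between the packages automatically):

# run in the directory containing the downloaded 3.5.0 RPMs
yum -y localinstall glusterfs-libs-3.5.0-2.el6.x86_64.rpm \
                    glusterfs-3.5.0-2.el6.x86_64.rpm \
                    glusterfs-cli-3.5.0-2.el6.x86_64.rpm \
                    glusterfs-fuse-3.5.0-2.el6.x86_64.rpm \
                    glusterfs-server-3.5.0-2.el6.x86_64.rpm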

Once the packages are installed, plan the brick servers across the nodes. In this example the directories /var/data_cinder and /var/data_cinder2 will be created on both 10.11.1.178 and 10.11.0.16, and the storage volume cfs will be created from 10.11.1.178.

1. Start the glusterd service on 10.11.1.178 and 10.11.0.16:

[[email protected] ~]# /etc/init.d/glusterd start

2. On 10.11.1.178, build the trusted storage pool (its status can be checked afterwards with gluster peer status):
[[email protected] ~]# gluster peer probe 10.11.0.16
[[email protected] ~]# gluster peer probe 10.11.1.178   # probing the local node itself is optional
     3. Create the storage volume.
      Usage: $ gluster volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>... [force]
      stripe: striping, similar to RAID0, improves read/write performance.
      replica: mirroring, as the name suggests, similar to RAID1; data is written as mirrored copies.

      stripe + replica together give something like RAID10; in that case stripe COUNT * replica COUNT = brick COUNT. But I digress.

[[email protected] var]# mkdir data_cinder
[[email protected] var]# mkdir data_cinder2
[[email protected] var]# mkdir data_cinder
[[email protected] var]# mkdir data_cinder2
[[email protected] var]# gluster volume create cfs stripe 2 replica 2 10.11.0.16:/var/data_cinder2 10.11.1.178:/var/data_cinder 10.11.0.16:/var/data_cinder 10.11.1.178:/var/data_cinder2 force
volume create: cfs: success: please start the volume to access data

Note: do NOT run gluster volume create cfs stripe 2 replica 2 10.11.0.16:/var/data_cinder2 10.11.0.16:/var/data_cinder 10.11.1.178:/var/data_cinder 10.11.1.178:/var/data_cinder2 force, because the first two bricks would form the RAID1-like pair, and putting both replicas on the same node provides no fault tolerance.

      4. Start the volume.
Usage: $ gluster volume start <NEW-VOLNAME>
[[email protected] var]# gluster volume start cfs
volume start: cfs: success
[[email protected] ~]# gluster volume info all
 Volume Name: cfs
Type: Striped-Replicate
Volume ID: ac614af9-11b8-4ff3-98e6-fe8c3a2568b6
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.11.0.16:/var/data_cinder2
Brick2: 10.11.1.178:/var/data_cinder
Brick3: 10.11.0.16:/var/data_cinder
Brick4: 10.11.1.178:/var/data_cinder2

Step 2: the client side, i.e. the node where the cinder-volume service runs. Install everything from step 1 except the glusterfs-server package. From here on it works just like the NFS case: the shares are mounted when the service starts. (A manual mount check is sketched below.)
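If you want to check the client side by hand before touching Cinder (a quick test, assuming the glusterfs-fuse package from step 1 is installed on this node):

mkdir -p /mnt/cfs_test
mount -t glusterfs 10.11.1.178:/cfs /mnt/cfs_test
df -h /mnt/cfs_test
umount /mnt/cfs_test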

Create /etc/cinder/shares.conf with the following content, telling Cinder which cluster share can be mounted:

10.11.1.178:/cfs
Adjust permissions and group:
$ chmod 0640 /etc/cinder/shares.conf
$ chown root:cinder /etc/cinder/shares.conf
cinder.conf settings:
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver

[[email protected] ~]# for i in api scheduler volume; do sudo service openstack-cinder-${i} restart; done

[[email protected] ~]# cinder create --display-name  chenxiao-glusterfs 1
[[email protected] ~]# cinder list 
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |     Status     |    Display Name    | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+ 
| 866f7084-c624-4c11-a592-8c00fcabfb23 |   available    | chenxiao-glusterfs |  1   |     None    |  false   |                                      |
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+
Every brick server holds a piece of this volume's data, 512 MB each: each piece is only half of the 1 GB because of the striping (RAID0-like, stripe 2), and the four pieces add up to 2 GB in total because of the mirroring (RAID1-like, replica 2). Taking one of the bricks as an example:
[[email protected] data_cinder]# ls -al
total 20
drwxrwxr-x    3 root cinder      4096 Jun 18 20:43 .
drwxr-xr-x.  27 root root        4096 Jun 18 10:24 ..
drw-------  240 root root        4096 Jun 18 20:39 .glusterfs
-rw-rw-rw-    2 root root   536870912 Jun 18 20:39 volume-866f7084-c624-4c11-a592-8c00fcabfb23
Boot an instance and attach the volume:
[[email protected] data_cinder]# nova volume-attach f5b7527e-2ab8-424c-9842-653bd73e8f26 866f7084-c624-4c11-a592-8c00fcabfb23
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdd                             |
| id       | 866f7084-c624-4c11-a592-8c00fcabfb23 |
| serverId | f5b7527e-2ab8-424c-9842-653bd73e8f26 |
| volumeId | 866f7084-c624-4c11-a592-8c00fcabfb23 |
+----------+--------------------------------------+

[[email protected] data_cinder]# cinder list
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |     Status     |    Display Name    | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+ 
| 866f7084-c624-4c11-a592-8c00fcabfb23 |     in-use     | chenxiao-glusterfs |  1   |     None    |  false   | f5b7527e-2ab8-424c-9842-653bd73e8f26 |
+--------------------------------------+----------------+--------------------+------+-------------+----------+--------------------------------------+