
GlusterFS Distributed Storage Cluster Deployment Notes - Supplementary Details


Following on from the previous post, "GlusterFS Distributed Storage Cluster Deployment on CentOS 7", this article adds some supplementary notes to deepen understanding of, and familiarity with, GlusterFS storage operations.

========================Cleaning up the GlusterFS storage environment=========================

As described in the previous post, this GlusterFS storage cluster consists of four nodes:
[root@GlusterFS-master ~]# cat /etc/hosts
.......
192.168.10.239  GlusterFS-master
192.168.10.212  GlusterFS-slave
192.168.10.204  GlusterFS-slave2
192.168.10.220  GlusterFS-slave3

First, delete the storage directory /opt/gluster/data on all four nodes.

Check the cluster peers. The output below shows three peers in the cluster.
Because the command is run on 192.168.10.239, that node does not list itself;
running the same command on any other node would show 192.168.10.239 as a peer.
[root@GlusterFS-master ~]# gluster peer status   
Number of Peers: 3

Hostname: 192.168.10.212
Uuid: f8e69297-4690-488e-b765-c1c404810d6a
State: Peer in Cluster (Connected)

Hostname: 192.168.10.204
Uuid: a989394c-f64a-40c3-8bc5-820f623952c4
State: Peer in Cluster (Connected)

Hostname: 192.168.10.220
Uuid: dd99743a-285b-4aed-b3d6-e860f9efd965
State: Peer in Cluster (Connected)

Then detach the nodes from the cluster one by one (a node cannot detach itself from its own gluster CLI).
[root@GlusterFS-master ~]# gluster                               //the detach operations can also be done from the interactive gluster shell
gluster> peer detach 192.168.10.220
peer detach: success
gluster> peer detach 192.168.10.204
peer detach: success
gluster> peer detach 192.168.10.212
peer detach: success
gluster> peer detach 192.168.10.239
peer detach: failed: 192.168.10.239 is localhost                //a node cannot detach itself; it must be detached from another node
gluster>

Log in to another node and detach the 192.168.10.239 node from the cluster:
[root@GlusterFS-slave ~]# gluster
gluster> peer detach 192.168.10.239
peer detach: success
gluster> 

Check the cluster again; no peers remain:
[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 0

Although the peers have now been detached from the cluster, the volume still exists!
[root@GlusterFS-master ~]# gluster volume info
Volume Name: models
Type: Distributed-Replicate
Volume ID: f1945b0b-67d6-4202-9198-639244ab0a6a
Status: Stopped
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/opt/gluster/data
Brick2: 192.168.10.212:/opt/gluster/data
Brick3: 192.168.10.204:/opt/gluster/data
Brick4: 192.168.10.220:/opt/gluster/data
Options Reconfigured:
performance.write-behind: on
performance.io-thread-count: 32
performance.flush-behind: on
performance.cache-size: 128MB
features.quota: on

Next, stop and delete the previously created models volume:
[root@GlusterFS-master ~]# gluster volume stop models
[root@GlusterFS-master ~]# gluster volume delete models

[root@GlusterFS-master ~]# gluster volume info
No volumes present

All of these commands can also be executed from the interactive gluster shell:
[root@GlusterFS-master ~]# gluster
gluster> volume info
No volumes present
gluster> peer status
Number of Peers: 0
gluster> 
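
For reference, when this cleanup is scripted rather than typed into the interactive shell, the gluster CLI can be told to skip the y/n confirmation prompts with --mode=script. A minimal sketch, using the models volume deleted above:
gluster --mode=script volume stop models
gluster --mode=script volume delete models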

=====================Creating a virtual partition with dd and preparing the storage directory=====================

First, use dd to create a virtual partition and prepare the storage directory. Check the current disk layout:
[root@GlusterFS-master ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   36G  1.8G   34G   5% /
devtmpfs                 2.9G     0  2.9G   0% /dev
tmpfs                    2.9G     0  2.9G   0% /dev/shm
tmpfs                    2.9G  8.5M  2.9G   1% /run
tmpfs                    2.9G     0  2.9G   0% /sys/fs/cgroup
/dev/vda1               1014M  143M  872M  15% /boot
/dev/mapper/centos-home   18G   33M   18G   1% /home
tmpfs                    581M     0  581M   0% /run/user/0

Use dd to create a virtual partition, then format it and mount it under /data. (Here dd copies /dev/vda1 into a regular file named /dev/vdb1; when that file is mounted it is attached through a loop device, which is why /dev/loop0 appears in the df and fdisk output below.)
[root@GlusterFS-master ~]# dd if=/dev/vda1 of=/dev/vdb1
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 2.0979 s, 512 MB/s

[root@GlusterFS-master ~]# du -sh /dev/vdb1
1.0G  /dev/vdb1

[root@GlusterFS-master ~]# mkfs.xfs -f /dev/vdb1                        //formatted as xfs here; ext4 would also work
meta-data=/dev/vdb1              isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


[root@GlusterFS-master ~]# mkdir /data

[root@GlusterFS-master ~]# mount /dev/vdb1 /data

[root@GlusterFS-master ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   36G  1.8G   34G   5% /
devtmpfs                 2.9G   34M  2.8G   2% /dev
tmpfs                    2.9G     0  2.9G   0% /dev/shm
tmpfs                    2.9G  8.5M  2.9G   1% /run
tmpfs                    2.9G     0  2.9G   0% /sys/fs/cgroup
/dev/vda1               1014M  143M  872M  15% /boot
/dev/mapper/centos-home   18G   33M   18G   1% /home
tmpfs                    581M     0  581M   0% /run/user/0
/dev/loop0               976M  2.6M  907M   1% /data

[root@GlusterFS-master ~]# fdisk -l
.......
Disk /dev/loop0: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Set up automatic mounting at boot:
[root@GlusterFS-master ~]# echo '/dev/loop0 /data xfs defaults 1 2' >> /etc/fstab

Then create the gluster storage directory:
[root@GlusterFS-master ~]# mkdir /data/gluster

The operations above must be performed on all four nodes to prepare the storage directory environment!
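
As a side note, the same 1GB test brick can be backed more cleanly by a plain file instead of a copy of /dev/vda1. A minimal sketch, assuming an arbitrary backing-file path /opt/gluster.img:
dd if=/dev/zero of=/opt/gluster.img bs=1M count=1024                //write a 1GB file of zeros
mkfs.xfs -f /opt/gluster.img                                        //format the file with xfs
mount -o loop /opt/gluster.img /data                                //mount attaches a loop device automatically
echo '/opt/gluster.img /data xfs defaults,loop 0 0' >> /etc/fstab   //referencing the backing file is more robust than /dev/loop0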

================Creating a distributed (hash) volume and related management operations================

Next, add the nodes back into the cluster. This is done on the GlusterFS-master node:
[root@GlusterFS-master ~]# gluster peer probe 192.168.10.212
peer probe: success. 
[root@GlusterFS-master ~]# gluster peer probe 192.168.10.204
peer probe: success. 
[root@GlusterFS-master ~]# gluster peer probe 192.168.10.220
peer probe: success. 

[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 3

Hostname: 192.168.10.212
Uuid: f8e69297-4690-488e-b765-c1c404810d6a
State: Peer in Cluster (Connected)

Hostname: 192.168.10.204
Uuid: a989394c-f64a-40c3-8bc5-820f623952c4
State: Peer in Cluster (Connected)

Hostname: 192.168.10.220
Uuid: dd99743a-285b-4aed-b3d6-e860f9efd965
State: Peer in Cluster (Connected)


Log in to any of the other nodes and check; the GlusterFS-master node (192.168.10.239) now shows up in the cluster as well:
[root@GlusterFS-slave ~]# gluster peer status
Number of Peers: 3

Hostname: GlusterFS-master
Uuid: 5dfd40e2-096b-40b5-bee3-003b57a39007
State: Peer in Cluster (Connected)

Hostname: 192.168.10.204
Uuid: a989394c-f64a-40c3-8bc5-820f623952c4
State: Peer in Cluster (Connected)

Hostname: 192.168.10.220
Uuid: dd99743a-285b-4aed-b3d6-e860f9efd965
State: Peer in Cluster (Connected)

----------------------------------------------------------
Now create the volume. This is done on the GlusterFS-master machine; with no type specified, a distributed (hash) volume is created by default.
As shown below, a gluster directory is created automatically under /data on the 192.168.10.212 node (it does not need to be created manually in advance):
[root@GlusterFS-master ~]# gluster volume create gluster_data 192.168.10.212:/data/gluster force
volume create: gluster_data: success: please start the volume to access data

Log in to the 192.168.10.212 node and check; the gluster directory has indeed been created automatically under /data:
[root@GlusterFS-slave ~]# ls /data/gluster
[root@GlusterFS-slave ~]# ls /data/                     //the /data partition is 1GB
gluster
......
/dev/loop0              1014M   33M  982M   4% /data

Start the volume and check its status:
[root@GlusterFS-master ~]# gluster volume start gluster_data
volume start: gluster_data: success

[root@GlusterFS-master ~]# gluster volume info 
 
Volume Name: gluster_data
Type: Distribute
Volume ID: 0f8b2268-9d2f-4b5c-85df-13408825d6b3
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.10.212:/data/gluster
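
Before mounting, it can also be worth confirming that the brick process is actually online; gluster volume status lists each brick together with its port and PID (output omitted here):
gluster volume status gluster_data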

Mounting the volume
Perform the glusterfs mount operation on the client machine.
Note: since the only brick added above is on the 192.168.10.212 node, the client mounts the volume through 192.168.10.212.
[root@Client ~]# mkdir /opt/gfsmount
[root@Client ~]# mount -t glusterfs 192.168.10.212:gluster_data /opt/gfsmount
[root@Client ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centos-root       38G  4.3G   33G  12% /
devtmpfs                     1.9G     0  1.9G   0% /dev
tmpfs                        1.9G     0  1.9G   0% /dev/shm
tmpfs                        1.9G  8.6M  1.9G   1% /run
tmpfs                        1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1                   1014M  143M  872M  15% /boot
/dev/mapper/centos-home       19G   33M   19G   1% /home
tmpfs                        380M     0  380M   0% /run/user/0
overlay                       38G  4.3G   33G  12% /var/lib/docker/overlay2/9904ac8cbcba967de3262dc0d5e230c64ad3c1c53b588048e263767d36df8c1a/merged
shm                           64M     0   64M   0% /var/lib/docker/containers/222ec7f21b2495591613e0d1061e4405cd57f99ffaf41dbba1a98c350cd70f60/mounts/shm
192.168.10.212:gluster_data 1014M   33M  982M   4% /opt/gfsmount

As shown above, the glusterfs storage is mounted with a size of 1GB (the size of the /data partition that holds the storage directory on 192.168.10.212).
Remember: whichever partition the storage directory (brick) lives on at a node, that is the partition space the client gets after mounting.
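
Note that 192.168.10.212 is only contacted at mount time to fetch the volume file; afterwards the fuse client talks to the bricks directly. To avoid a single point of failure at mount time, fallback volfile servers can be supplied. A sketch, assuming a reasonably recent glusterfs-fuse client (older clients use the backupvolfile-server option instead); the _netdev option in fstab delays the mount until the network is up:
mount -t glusterfs -o backup-volfile-servers=192.168.10.239:192.168.10.204:192.168.10.220 192.168.10.212:gluster_data /opt/gfsmount
echo '192.168.10.212:gluster_data /opt/gfsmount glusterfs defaults,_netdev 0 0' >> /etc/fstab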

Test writing data under the client mount point:
[root@Client gfsmount]# mkdir test
[root@Client gfsmount]# touch kevin
[root@Client gfsmount]# ls
kevin  test

Then check the storage directory on 192.168.10.212; the data has been synced over as expected:
[root@GlusterFS-slave ~]# cd /data/gluster/
[root@GlusterFS-slave gluster]# ls
kevin  test

----------------------------------------------------------
Add bricks (i.e. expand the volume):
[root@GlusterFS-master ~]# gluster volume add-brick gluster_data 192.168.10.239:/data/gluster force
volume add-brick: success
[root@GlusterFS-master ~]# gluster volume add-brick gluster_data 192.168.10.204:/data/gluster force
volume add-brick: success
[root@GlusterFS-master ~]# gluster volume add-brick gluster_data 192.168.10.220:/data/gluster force
volume add-brick: success

As before, a gluster directory is created automatically under /data on each of these three nodes!

Check the volume status:
[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_data
Type: Distribute
Volume ID: 0f8b2268-9d2f-4b5c-85df-13408825d6b3
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.212:/data/gluster
Brick2: 192.168.10.239:/data/gluster
Brick3: 192.168.10.204:/data/gluster
Brick4: 192.168.10.220:/data/gluster

Back on the client, the capacity of the mount point has grown from 1GB to 4GB!! (the sum of the partitions backing the storage directories on all four nodes)
[root@Client ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centos-root       38G  4.3G   33G  12% /
devtmpfs                     1.9G     0  1.9G   0% /dev
tmpfs                        1.9G     0  1.9G   0% /dev/shm
tmpfs                        1.9G  8.6M  1.9G   1% /run
tmpfs                        1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1                   1014M  143M  872M  15% /boot
/dev/mapper/centos-home       19G   33M   19G   1% /home
tmpfs                        380M     0  380M   0% /run/user/0
overlay                       38G  4.3G   33G  12% /var/lib/docker/overlay2/9904ac8cbcba967de3262dc0d5e230c64ad3c1c53b588048e263767d36df8c1a/merged
shm                           64M     0   64M   0% /var/lib/docker/containers/222ec7f21b2495591613e0d1061e4405cd57f99ffaf41dbba1a98c350cd70f60/mounts/shm
192.168.10.212:gluster_data  4.0G  130M  3.9G   4% /opt/gfsmount

Summary of the operations above:
1) The capacity of the client mount point is the sum of the partitions backing the storage directories of the four cluster nodes.
2) A directory created under the client mount point is created in the storage directory (brick) of every node.
3) Files created inside a directory under the client mount point are distributed by hash across the nodes' storage directories.
4) Files created directly under the client mount point itself only end up on the originally mounted node (here, under /data/gluster on 192.168.10.212); the other nodes get nothing. This is because the root directory's hash layout was assigned when the volume had a single brick and is not extended to the new bricks until a rebalance (or fix-layout) is run.
5) Removing a brick from the volume can cause data loss, because the removed node still holds part of the data.

For example:
a) The client creates a directory kevin under the mount point
[root@Client ~]# cd /opt/gfsmount/
[root@Client gfsmount]# mkdir kevin

Check on the nodes:
[root@GlusterFS-master ~]# ls /data/gluster/
kevin
[root@GlusterFS-slave ~]# ls /data/gluster/
kevin
[root@GlusterFS-slave2 ~]# ls /data/gluster/
kevin
[root@GlusterFS-slave3 ~]# ls /data/gluster/
kevin

b) The client creates files inside the directory under the mount point
[root@Client gfsmount]# for i in `seq -w 1 100`; do cp -rp /var/log/messages /opt/gfsmount/kevin/copy-test-$i; done
[root@Client gfsmount]# ls kevin/
copy-test-001  copy-test-014  copy-test-027  copy-test-040  copy-test-053  copy-test-066  copy-test-079  copy-test-092
copy-test-002  copy-test-015  copy-test-028  copy-test-041  copy-test-054  copy-test-067  copy-test-080  copy-test-093
copy-test-003  copy-test-016  copy-test-029  copy-test-042  copy-test-055  copy-test-068  copy-test-081  copy-test-094
copy-test-004  copy-test-017  copy-test-030  copy-test-043  copy-test-056  copy-test-069  copy-test-082  copy-test-095
copy-test-005  copy-test-018  copy-test-031  copy-test-044  copy-test-057  copy-test-070  copy-test-083  copy-test-096
copy-test-006  copy-test-019  copy-test-032  copy-test-045  copy-test-058  copy-test-071  copy-test-084  copy-test-097
copy-test-007  copy-test-020  copy-test-033  copy-test-046  copy-test-059  copy-test-072  copy-test-085  copy-test-098
copy-test-008  copy-test-021  copy-test-034  copy-test-047  copy-test-060  copy-test-073  copy-test-086  copy-test-099
copy-test-009  copy-test-022  copy-test-035  copy-test-048  copy-test-061  copy-test-074  copy-test-087  copy-test-100
copy-test-010  copy-test-023  copy-test-036  copy-test-049  copy-test-062  copy-test-075  copy-test-088
copy-test-011  copy-test-024  copy-test-037  copy-test-050  copy-test-063  copy-test-076  copy-test-089
copy-test-012  copy-test-025  copy-test-038  copy-test-051  copy-test-064  copy-test-077  copy-test-090
copy-test-013  copy-test-026  copy-test-039  copy-test-052  copy-test-065  copy-test-078  copy-test-091

Check on the nodes: the files are distributed by hash across the bricks of all four nodes.
[root@GlusterFS-master ~]# ls /data/gluster/kevin/
copy-test-002  copy-test-014  copy-test-036  copy-test-045  copy-test-056  copy-test-070  copy-test-075  copy-test-097
copy-test-009  copy-test-020  copy-test-042  copy-test-047  copy-test-062  copy-test-071  copy-test-080
copy-test-010  copy-test-027  copy-test-043  copy-test-053  copy-test-064  copy-test-072  copy-test-084
copy-test-013  copy-test-035  copy-test-044  copy-test-055  copy-test-068  copy-test-074  copy-test-092
[root@GlusterFS-master ~]# ll /data/gluster/kevin/|wc -l
30

[root@GlusterFS-slave ~]# ls /data/gluster/kevin/
copy-test-003  copy-test-018  copy-test-037  copy-test-050  copy-test-061  copy-test-069  copy-test-089
copy-test-005  copy-test-025  copy-test-040  copy-test-058  copy-test-066  copy-test-076  copy-test-091
copy-test-007  copy-test-026  copy-test-049  copy-test-059  copy-test-067  copy-test-085  copy-test-096
[root@GlusterFS-slave ~]# ll /data/gluster/kevin/|wc -l
22

[root@GlusterFS-slave2 gluster]# ls /data/gluster/kevin/
copy-test-004  copy-test-016  copy-test-024  copy-test-046  copy-test-065  copy-test-082  copy-test-088  copy-test-099
copy-test-006  copy-test-017  copy-test-029  copy-test-048  copy-test-078  copy-test-086  copy-test-093
copy-test-015  copy-test-023  copy-test-033  copy-test-052  copy-test-079  copy-test-087  copy-test-095
[root@GlusterFS-slave2 gluster]# ll /data/gluster/kevin/|wc -l
23

[root@GlusterFS-slave3 ~]# ls /data/gluster/kevin/
copy-test-001  copy-test-019  copy-test-030  copy-test-038  copy-test-054  copy-test-073  copy-test-090
copy-test-008  copy-test-021  copy-test-031  copy-test-039  copy-test-057  copy-test-077  copy-test-094
copy-test-011  copy-test-022  copy-test-032  copy-test-041  copy-test-060  copy-test-081  copy-test-098
copy-test-012  copy-test-028  copy-test-034  copy-test-051  copy-test-063  copy-test-083  copy-test-100
[root@GlusterFS-slave3 ~]# ll /data/gluster/kevin/|wc -l
29

c) If files are created directly under the client mount point, they are only written to the originally mounted node
(here, under /data/gluster on 192.168.10.212); the other nodes do not get them!
[root@Client gfsmount]# for i in `seq -w 1 30`; do cp -rp /var/log/messages /opt/gfsmount/haha-test-$i; done
[root@Client gfsmount]# ls
haha-test-01  haha-test-05  haha-test-09  haha-test-13  haha-test-17  haha-test-21  haha-test-25  haha-test-29
haha-test-02  haha-test-06  haha-test-10  haha-test-14  haha-test-18  haha-test-22  haha-test-26  haha-test-30
haha-test-03  haha-test-07  haha-test-11  haha-test-15  haha-test-19  haha-test-23  haha-test-27  kevin
haha-test-04  haha-test-08  haha-test-12  haha-test-16  haha-test-20  haha-test-24  haha-test-28

Check on the nodes: only 192.168.10.212 holds the 30 files created above; the other three nodes have none.
[root@GlusterFS-master ~]# ls /data/gluster/
kevin

[root@GlusterFS-slave ~]# ls /data/gluster/
haha-test-01  haha-test-05  haha-test-09  haha-test-13  haha-test-17  haha-test-21  haha-test-25  haha-test-29
haha-test-02  haha-test-06  haha-test-10  haha-test-14  haha-test-18  haha-test-22  haha-test-26  haha-test-30
haha-test-03  haha-test-07  haha-test-11  haha-test-15  haha-test-19  haha-test-23  haha-test-27  kevin
haha-test-04  haha-test-08  haha-test-12  haha-test-16  haha-test-20  haha-test-24  haha-test-28

[root@GlusterFS-slave2 ~]# ls /data/gluster/
kevin

[root@GlusterFS-slave3 ~]# ls /data/gluster/
kevin
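
The reason can be inspected on the bricks themselves: each brick directory carries a trusted.glusterfs.dht extended attribute describing the hash range it owns, and the volume root's layout was assigned back when 192.168.10.212 held the only brick, so files created directly in the root keep hashing there until the layout is fixed. A sketch of how to look at it (run as root on a node; getfattr is provided by the attr package):
getfattr -n trusted.glusterfs.dht -e hex /data/gluster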

d) Removing a brick from the volume can cause data loss, because the removed node holds part of the data.
Remove one node's brick from the volume:
[root@GlusterFS-master ~]# gluster volume remove-brick gluster_data 192.168.10.220:/data/gluster
Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>
[root@GlusterFS-master ~]# gluster
gluster> volume remove-brick gluster_data 192.168.10.220:/data/gluster force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
gluster> 

Check the volume info; the mount point's capacity has also dropped (the space from 192.168.10.220 is gone):
[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_data
Type: Distribute
Volume ID: 0f8b2268-9d2f-4b5c-85df-13408825d6b3
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 192.168.10.212:/data/gluster
Brick2: 192.168.10.239:/data/gluster
Brick3: 192.168.10.204:/data/gluster

The client mount point's capacity has dropped to 3GB:
[root@Client ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
.......
192.168.10.212:gluster_data  3.0G   98M  2.9G   4% /opt/gfsmount

[root@Client ~]# ll /opt/gfsmount/kevin|wc -l
73

Some of the 100 files created earlier under kevin at the client mount point are now missing: that portion of the data lived on the 192.168.10.220 node, whose brick has been removed from the volume, so that data is lost!
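
To shrink a volume without this loss, the removal can instead use the start/status/commit form shown in the Usage message above, which migrates the brick's files to the remaining bricks before the brick is dropped. A sketch:
gluster volume remove-brick gluster_data 192.168.10.220:/data/gluster start      //begin migrating data off the brick
gluster volume remove-brick gluster_data 192.168.10.220:/data/gluster status     //wait until the migration reports completed
gluster volume remove-brick gluster_data 192.168.10.220:/data/gluster commit     //only then drop the brick for good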

Now pay attention to the following operation!!
Trying to add the removed 192.168.10.220 brick back into the volume fails:
[root@GlusterFS-master ~]# gluster volume add-brick gluster_data 192.168.10.220:/data/gluster
volume add-brick: failed: Staging failed on 192.168.10.220. Error: /data/gluster is already part of a volume

The directory has to be deleted before the brick can be added back:
[root@GlusterFS-slave3 ~]# rm -rf /data/gluster
[root@GlusterFS-slave3 ~]# ll /data
total 0
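
The "is already part of a volume" error comes from the extended attributes and the .glusterfs metadata directory that GlusterFS leaves on the brick. If the directory contents should be kept, an alternative to rm -rf is to strip only that metadata; a sketch (run on the node that owns the brick):
setfattr -x trusted.glusterfs.volume-id /data/gluster
setfattr -x trusted.gfid /data/gluster
rm -rf /data/gluster/.glusterfs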

Adding the brick again now succeeds:
[root@GlusterFS-master ~]# gluster volume add-brick gluster_data 192.168.10.220:/data/gluster 
volume add-brick: success

[root@GlusterFS-master ~]# gluster volume info
 
Volume Name: gluster_data
Type: Distribute
Volume ID: 0f8b2268-9d2f-4b5c-85df-13408825d6b3
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.212:/data/gluster
Brick2: 192.168.10.239:/data/gluster
Brick3: 192.168.10.204:/data/gluster
Brick4: 192.168.10.220:/data/gluster

With the brick added back, the client mount point's capacity rises back to 4GB:
[root@Client ~]# df -h
.......
192.168.10.212:gluster_data  4.0G  131M  3.9G   4% /opt/gfsmount
---------------------------------------------------------------------------------

A rebalance redistributes the existing files according to the hash placement rules.

Run a rebalance and the newly added node receives its share of files and directories.
Note: in a production environment, a rebalance is best run while the servers are relatively idle.
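
If only the directory layouts need to be extended onto the new brick (so that newly created files can hash to it) without migrating existing data, a lighter fix-layout run can be used instead of a full rebalance. A sketch:
gluster volume rebalance gluster_data fix-layout start
gluster volume rebalance gluster_data status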

As shown above, the storage directory on the newly added 192.168.10.220 node contains nothing at first:
[root@GlusterFS-slave3 ~]# ls /data/gluster/
[root@GlusterFS-slave3 ~]# 

[root@GlusterFS-master ~]# gluster volume rebalance gluster_data start
volume rebalance: gluster_data: success: Initiated rebalance on volume gluster_data.
Execute "gluster volume rebalance <volume-name> status" to check status.
ID: 49277bfa-df25-45c4-b1fb-cbcf8607a23e

Check the storage directory on the newly added 192.168.10.220 node again; it now holds data:
[root@GlusterFS-slave3 ~]# ls /data/gluster/
haha-test-03  haha-test-11  haha-test-14  haha-test-16  haha-test-25  haha-test-30  kevin

After the rebalance, the data under the client mount point is distributed evenly across the nodes:
[root@Client ~]# ls /opt/gfsmount/
haha-test-01  haha-test-05  haha-test-09  haha-test-13  haha-test-17  haha-test-21  haha-test-25  haha-test-29
haha-test-02  haha-test-06  haha-test-10  haha-test-14  haha-test-18  haha-test-22  haha-test-26  haha-test-30
haha-test-03  haha-test-07  haha-test-11  haha-test-15  haha-test-19  haha-test-23  haha-test-27  kevin
haha-test-04  haha-test-08  haha-test-12  haha-test-16  haha-test-20  haha-test-24  haha-test-28
[root@Client ~]# ls /opt/gfsmount/kevin/
copy-test-002  copy-test-014  copy-test-026  copy-test-043  copy-test-053  copy-test-066  copy-test-076  copy-test-088
copy-test-003  copy-test-015  copy-test-027  copy-test-044  copy-test-055  copy-test-067  copy-test-078  copy-test-089
copy-test-004  copy-test-016  copy-test-029  copy-test-045  copy-test-056  copy-test-068  copy-test-079  copy-test-091
copy-test-005  copy-test-017  copy-test-033  copy-test-046  copy-test-058  copy-test-069  copy-test-080  copy-test-092
copy-test-006  copy-test-018  copy-test-035  copy-test-047  copy-test-059  copy-test-070  copy-test-082  copy-test-093
copy-test-007  copy-test-020  copy-test-036  copy-test-048  copy-test-061  copy-test-071  copy-test-084  copy-test-095
copy-test-009  copy-test-023  copy-test-037  copy-test-049  copy-test-062  copy-test-072  copy-test-085  copy-test-096
copy-test-010  copy-test-024  copy-test-040  copy-test-050  copy-test-064  copy-test-074  copy-test-086  copy-test-097
copy-test-013  copy-test-025  copy-test-042  copy-test-052  copy-test-065  copy-test-075  copy-test-087  copy-test-099

[root@GlusterFS-master ~]# ls /data/gluster/
haha-test-06  haha-test-07  haha-test-15  haha-test-19  haha-test-22  haha-test-28  kevin
[root@GlusterFS-master ~]# ls /data/gluster/kevin/
copy-test-002  copy-test-014  copy-test-036  copy-test-047  copy-test-059  copy-test-069  copy-test-080  copy-test-097
copy-test-003  copy-test-018  copy-test-037  copy-test-049  copy-test-061  copy-test-070  copy-test-084
copy-test-005  copy-test-020  copy-test-040  copy-test-050  copy-test-062  copy-test-071  copy-test-085
copy-test-007  copy-test-025  copy-test-042  copy-test-053  copy-test-064  copy-test-072  copy-test-089
copy-test-009  copy-test-026  copy-test-043  copy-test-055  copy-test-066  copy-test-074  copy-test-091
copy-test-010  copy-test-027  copy-test-044  copy-test-056  copy-test-067  copy-test-075  copy-test-092
copy-test-013  copy-test-035  copy-test-045  copy-test-058  copy-test-068  copy-test-076  copy-test-096

[root@GlusterFS-slave ~]# ls /data/gluster/
haha-test-01  haha-test-04  haha-test-08  haha-test-17  haha-test-20  haha-test-26  haha-test-27  haha-test-29  kevin
[root@GlusterFS-slave ~]# ls /data/gluster/kevin/
copy-test-002  copy-test-017  copy-test-040  copy-test-050  copy-test-064  copy-test-074  copy-test-086  copy-test-097
copy-test-004  copy-test-020  copy-test-042  copy-test-052  copy-test-065  copy-test-075  copy-test-087  copy-test-099
copy-test-006  copy-test-023  copy-test-043  copy-test-053  copy-test-066  copy-test-076  copy-test-088
copy-test-009  copy-test-024  copy-test-044  copy-test-055  copy-test-067  copy-test-078  copy-test-089
copy-test-010  copy-test-027  copy-test-045  copy-test-056  copy-test-068  copy-test-079  copy-test-091
copy-test-013  copy-test-029  copy-test-046  copy-test-058  copy-test-069  copy-test-080  copy-test-092
copy-test-014  copy-test-033  copy-test-047  copy-test-059  copy-test-070  copy-test-082  copy-test-093
copy-test-015  copy-test-035  copy-test-048  copy-test-061  copy-test-071  copy-test-084  copy-test-095
copy-test-016  copy-test-036  copy-test-049  copy-test-062  copy-test-072  copy-test-085  copy-test-096

[root@GlusterFS-slave2 ~]# ls /data/gluster/
haha-test-02  haha-test-09  haha-test-12  haha-test-18  haha-test-23  kevin
haha-test-05  haha-test-10  haha-test-13  haha-test-21  haha-test-24
[root@GlusterFS-slave2 ~]# ls /data/gluster/kevin/
copy-test-004  copy-test-016  copy-test-024  copy-test-046  copy-test-065  copy-test-082  copy-test-088  copy-test-099
copy-test-006  copy-test-017  copy-test-029  copy-test-048  copy-test-078  copy-test-086  copy-test-093
copy-test-015  copy-test-023  copy-test-033  copy-test-052  copy-test-079  copy-test-087  copy-test-095

[root@GlusterFS-slave3 ~]# ls /data/gluster/
haha-test-03  haha-test-11  haha-test-14  haha-test-16  haha-test-25  haha-test-30  kevin
[root@GlusterFS-slave3 ~]# ls /data/gluster/kevin/

Check the rebalance status:
[root@GlusterFS-master ~]# gluster volume rebalance gluster_data status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost               29       164.8KB           105             0             0            completed               1.00
                          192.168.10.212               43       248.7KB           131             0             0            completed               1.00
                          192.168.10.204                0        0Bytes           105             0             0            completed               0.00
                          192.168.10.220                0        0Bytes           105             0             0            completed               0.00
volume rebalance: gluster_data: success: 

--------------------------------------------------------------------------
Next, unmount the mount point, then stop and delete the volume.
This operation is dangerous, but even after the volume is deleted, the data underneath is still there:

[root@Client ~]# umount /opt/gfsmount -lf

[root@GlusterFS-master ~]# gluster volume stop gluster_data
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gluster_data: success

[root@GlusterFS-master ~]# gluster volume delete gluster_data
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gluster_data: success

[root@GlusterFS-master ~]# gluster volume list
No volumes present in cluster

[root@GlusterFS-master ~]# gluster volume info
No volumes present

The gluster_data volume is deleted, but the data in each node's storage directory remains:
[root@GlusterFS-master ~]# ls /data/gluster/
haha-test-06  haha-test-07  haha-test-15  haha-test-19  haha-test-22  haha-test-28  kevin

[root@GlusterFS-slave ~]# ls /data/gluster/
haha-test-01  haha-test-04  haha-test-08  haha-test-17  haha-test-20  haha-test-26  haha-test-27  haha-test-29  kevin

[root@GlusterFS-slave2 ~]# ls /data/gluster/
haha-test-02  haha-test-09  haha-test-12  haha-test-18  haha-test-23  kevin
haha-test-05  haha-test-10  haha-test-13  haha-test-21  haha-test-24

[root@GlusterFS-slave3 ~]# ls /data/gluster/
haha-test-03  haha-test-11  haha-test-14  haha-test-16  haha-test-25  haha-test-30  kevin

The client mount point no longer shows any data:
[root@Client ~]# ls /opt/gfsmount/
[root@Client ~]#

To purge the data, log in to each node and delete everything under its brick:
[root@GlusterFS-master ~]# rm -rf /data/gluster
[root@GlusterFS-slave ~]# rm -rf /data/gluster
[root@GlusterFS-slave2 ~]# rm -rf /data/gluster
[root@GlusterFS-slave3 ~]# rm -rf /data/gluster
