
CentOS 7.4: Deploying and Configuring Software RAID


Knowledge points covered in this post:


RAID is short for Redundant Array of Independent Disks, sometimes simply called a disk array. In short, RAID is a technique that combines multiple independent physical disks in different ways into one disk group (a logical disk), providing higher storage performance than a single disk as well as data redundancy.

The different ways of combining disks into an array are called RAID levels. Common RAID levels include RAID 0, RAID 1, RAID 5, RAID 10, and RAID 50.

RAID 0 (striping): improved read and write performance; no redundancy; 100% space utilization; requires at least 2 disks; data is striped evenly across the disks.

RAID 1 (mirroring): improved read performance, reduced write performance; redundant; 1/2 utilization; requires at least 2 disks; each disk holds a full copy of the data.

RAID 1+0: improved read performance, reduced write performance; redundant; can tolerate one failed disk in each mirror group; 1/2 utilization; requires at least 4 disks.

RAID 5: parity is distributed across all member disks, improving both read and write performance; redundant; (n-1)/n utilization; requires at least 3 disks.

JBOD: multiple disks that do not work in parallel; used to concatenate small disks into one large volume.

Note: RAID 10 outperforms RAID 01. Mirroring protects the service from a device failure, but it is no substitute for data backups.
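As a concrete check on the utilization figures above, here is the capacity math for the arrays built later in this post (20GB member disks; mdadm reports slightly smaller binary sizes):

RAID 0,  2 x 20GB:  2 x 20GB x 100%    = 40GB usable (reported later as 39.97 GiB)
RAID 10, 8 x 20GB:  8 x 20GB x 1/2     = 80GB usable (reported later as 79.93 GiB)
RAID 5,  3 x 20GB:  3 x 20GB x (3-1)/3 = 40GB usable (not built in this post)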


mdadm: a mode-based tool (manage MD devices for Linux Software RAID)

#mdadm [mode] <raiddevice> [options] <component-devices>

Options:

-A : Assemble mode, assemble an existing array

-C : Create mode, e.g. create /dev/md0

-n # : number of disks used to build the RAID device

-x # : number of hot-spare disks

-l # : RAID level

-a yes : automatically create the device file for the new RAID device

-c chunksize : chunk size (default 512KB)

-F : Follow or Monitor mode

-D, --detail : show detailed array information

Manage mode: -f mark a member as faulty, -r remove a member, -a add a new member

-S : stop an array

-A : reassemble a stopped array, e.g. # mdadm -A /dev/md10 /dev/sdb{1,3} restarts md10

-Ds : print array summary lines, e.g. # mdadm -Ds >> /etc/mdadm.conf
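Putting the modes together, a minimal life-cycle sketch (device names follow this post's layout; run only against disposable disks):

# mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb1 /dev/sdc1     create mode: build a 2-disk RAID 0
# mdadm -D /dev/md0                                          display its details
# mdadm /dev/md10 -f /dev/sdd1                               manage mode: mark a member faulty
# mdadm /dev/md10 -r /dev/sdd1                               remove the faulty member
# mdadm /dev/md10 -a /dev/sdd1                               add a member back
# mdadm -S /dev/md0                                          stop the array
# mdadm -A /dev/md0 /dev/sdb1 /dev/sdc1                      assemble it again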


Software RAID lab: objectives and steps


Objectives:

1. Using software RAID, build a RAID 0 array from 2 disks, and a RAID 10 array from 8 disks plus 1 hot-spare disk.

2. Configure automatic mounting at boot.

3. Simulate a single-disk failure and disk replacement.
Steps:

Step 1: Add the disks (11 x 20GB disks added to the VM) and install the mdadm management tool.


[root@study ~]# rpm -qa mdadm   check whether the mdadm tool is installed
[root@study ~]# yum install mdadm -y  install the mdadm tool
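Software RAID also depends on the kernel's md driver; on a stock CentOS 7 kernel it is available out of the box, which can be confirmed by reading /proc/mdstat (typical output on a fresh system, with an empty personalities list until the first array is created):

[root@study ~]# cat /proc/mdstat
Personalities : 
unused devices: <none>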


Step 2: Create one primary partition on each of the 11 disks and change its partition type to fd.

[root@study ~]# fdisk -l | grep "^Disk\b"
Disk /dev/sda: 128.8 GB, 128849018880 bytes, 251658240 sectors
Disk label type: dos
Disk identifier: 0x000d9648
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sde: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdf: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdg: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdh: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdi: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdj: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdl: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdk: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/mapper/VG-root: 16.1 GB, 16106127360 bytes, 31457280 sectors
Disk /dev/mapper/VG-home: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/mapper/VG-var: 10.7 GB, 10737418240 bytes, 20971520 sectors
[root@study ~]# fdisk  /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x2974d4fb.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x2974d4fb

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@study ~]# echo 'n        pipe the answers into fdisk to auto-partition the disk and set its type to fd
> p
> 1
> 
> 
> t
> fd
> w' |fdisk /dev/sdc
[root@study ~]#  reboot    reboot so the kernel re-reads the partition tables
[root@study ~]# fdisk -l | grep "^/dev/sd"       check the disks after partitioning
/dev/sda1   *        2048     4196351     2097152   83  Linux
/dev/sda2         4196352    12584959     4194304   82  Linux swap / Solaris
/dev/sda3        12584960   251658239   119536640   8e  Linux LVM
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdf1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdg1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdh1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdi1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdj1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdl1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdk1            2048    41943039    20970496   fd  Linux raid autodetect
[root@study ~]# cat /proc/partitions | grep "1$"   check the partitions loaded by the kernel
   8        1    2097152 sda1
   8       17   20970496 sdb1
   8       49   20970496 sdd1
   8       33   20970496 sdc1
   8       65   20970496 sde1
   8       81   20970496 sdf1
   8       97   20970496 sdg1
   8       113   20970496 sdh1
   8       129   20970496 sdi1
   8       145   20970496 sdj1
   8       177   20970496 sdl1
   8       161   20970496 sdk1
  253         1   20971520 dm-1

Notes: # fdisk /dev/sdb partitions /dev/sdb;

enter "m" for help;

enter "p" to print the partition table before any changes;

enter "n" to create a new partition;

enter "e" to create an extended partition;

enter "p" to create a primary partition;

enter "t" to change the partition type;

enter "fd" to set the type to Linux raid autodetect;

enter "p" to display the updated partition table;

enter "w" to write the new partition table.

Command (pipe the answer sequence into fdisk to partition a disk and set its type in one step; the two blank lines accept the default first and last sectors):

# echo 'n
p
1


t
fd
w' | fdisk /dev/sdb
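To avoid repeating this for each remaining disk, the same answer sequence can be fed to fdisk in a loop; a minimal sketch, assuming the targets are the empty disks /dev/sdc through /dev/sdl:

for d in /dev/sd{c..l}; do
    echo 'n
p
1


t
fd
w' | fdisk "$d"
done

Afterwards run partprobe (from the parted package) or reboot, as above, so the kernel re-reads the partition tables.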


Step 3: Create a RAID 0 from 2 disks, and a RAID 10 from 8 disks plus 1 hot-spare disk.

[root@study ~]# mdadm -C /dev/md0 -a yes -c 1024 -l 0 -n 2 /dev/sd{b,c}1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@study ~]# mdadm -C /dev/md10 -a yes -c 1024 -l 10 -n 8 /dev/sd{d..k}1 -x 1 /dev/sdl1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
[root@study ~]# mdadm -D /dev/md0  /dev/md10
/dev/md0:
           Version : 1.2
     Creation Time : Sun Jan 28 17:24:41 2018
        Raid Level : raid0
        Array Size : 41908224 (39.97 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Jan 28 17:24:41 2018
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 1024K

Consistency Policy : none

              Name : study.itwish.cn:0  (local to host study.itwish.cn)
              UUID : 491cf3f6:52a790ee:fcc232a0:83a1f6b7
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
/dev/md10:
           Version : 1.2
     Creation Time : Sun Jan 28 17:29:21 2018
        Raid Level : raid10
        Array Size : 83816448 (79.93 GiB 85.83 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 8
     Total Devices : 9
       Persistence : Superblock is persistent

       Update Time : Sun Jan 28 17:31:06 2018
             State : clean, resyncing 
    Active Devices : 8
   Working Devices : 9
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 1024K

Consistency Policy : resync


              Name : study.itwish.cn:10  (local to host study.itwish.cn)
              UUID : 34dfaf9d:f7664825:c6968e4c:eaa15141
            Events : 4

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync set-A   /dev/sdd1
       1       8       65        1      active sync set-B   /dev/sde1
       2       8       81        2      active sync set-A   /dev/sdf1
       3       8       97        3      active sync set-B   /dev/sdg1
       4       8      113        4      active sync set-A   /dev/sdh1
       5       8      129        5      active sync set-B   /dev/sdi1
       6       8      145        6      active sync set-A   /dev/sdj1
       7       8      161        7      active sync set-B   /dev/sdk1

       8       8      177        -      spare   /dev/sdl1

/dev/md0, /dev/md10 : device names of the arrays;

Raid Level : array level;

Array Size : array capacity;

Raid Devices : number of active RAID member slots;

Total Devices : total number of member devices in the array, including spares;

State : array state; clean means healthy, degraded means a member is missing or failed, recovering means the array is being rebuilt;

Active Devices : number of activated RAID members;

Working Devices : number of RAID members working normally;

Failed Devices : number of failed RAID members;

Spare Devices : number of spare members that step in when an active member fails;

UUID : the array's UUID, unique within the system;
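Alongside mdadm -D, resync and rebuild progress can be followed live through /proc/mdstat; for example:

# cat /proc/mdstat                    one-off snapshot of all md arrays
# watch -n 2 cat /proc/mdstat         refresh every 2 seconds while a resync runs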


Step 4: Format /dev/md0 and /dev/md10 and mount them.

[root@study ~]# mke2fs -t ext4 -L myraid0 -m 2 -b 4096  /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=myraid0
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=256 blocks, Stripe width=512 blocks
2621440 inodes, 10477056 blocks
209541 blocks (2.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@study ~]# mke2fs -t ext4 -L myraid10 -m 2 -b 4096  /dev/md10
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=myraid10
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=256 blocks, Stripe width=1024 blocks
5242880 inodes, 20954112 blocks
419082 blocks (2.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2168455168
640 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   
[root@study ~]# mkdir -p /raid0 /raid10             create the mount points
[root@study ~]# mount  /dev/md0 /raid0                 
[root@study ~]# mount  -o remount,acl /dev/md0 /raid0   remount with ACL support
[root@study ~]# mount /dev/md10 /raid10
[root@study ~]# mount -o remount,acl /dev/md10 /raid10/
[root@study ~]# mount | tail -2
/dev/md0 on /raid0 type ext4 (rw,relatime,seclabel,stripe=512,data=ordered)
/dev/md10 on /raid10 type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
[root@study ~]# df -h |tail -2
/dev/md0              40G   49M   39G   1% /raid0
/dev/md10             79G   57M   77G   1% /raid10

# mke2fs: create a filesystem

-t fstype /dev/somedevice : format the device with the specified filesystem type

-j : equivalent to -t ext3, creates an ext3 filesystem

-L label : set the volume label

-b {1024|2048|4096} : block size in bytes

-i # : reserve one inode for every # bytes of space

-I # : inode size

-N # : reserve exactly # inodes

-m # : percentage of space reserved for the superuser

-O : enable specific filesystem features
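After formatting, the parameters chosen above can be read back with tune2fs to confirm they took effect, e.g.:

# tune2fs -l /dev/md0 | grep -iE 'volume name|block size|reserved block'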

# mount [options]:

Plain # mount lists every filesystem currently mounted on the system

-a : mount all devices in /etc/fstab that are marked for automatic mounting

-t fstype : specify the filesystem type

-r : mount read-only

-w : mount read-write

-L label : mount by volume label, or LABEL="MYDATA"

-U UUID : mount by UUID

-n : do not update /etc/mtab

--bind : bind-mount a directory onto another directory, e.g. # mount --bind /usr/ /mnt
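As an alternative to the device path, the arrays formatted above can be mounted by the label set with -L, or by UUID as reported by blkid (the UUID below is a placeholder):

# mount -L myraid0 /raid0
# blkid /dev/md10                     read the real UUID
# mount -U <UUID-from-blkid> /raid10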


Step 5: Configure automatic mounting at boot; the /etc/mdadm.conf file must be configured. Reboot to confirm that raid0 and raid10 are mounted automatically.

[root@study ~]# echo "DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1" >> /etc/mdadm.conf 
[root@study ~]# mdadm -Ds /dev/md{0,10} >> /etc/mdadm.conf 
[root@study ~]# !cat
cat /etc/mdadm.conf 
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1
ARRAY /dev/md0 metadata=1.2 name=study.itwish.cn:0 UUID=491cf3f6:52a790ee:fcc232a0:83a1f6b7
ARRAY /dev/md10 metadata=1.2 spares=1 name=study.itwish.cn:10 UUID=34dfaf9d:f7664825:c6968e4c:eaa15141
[root@study ~]# vi /etc/fstab         edit the boot-time mount table and append the last two lines
#
# /etc/fstab
# Created by anaconda on Sun Jan 28 12:27:18 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VG-root     /                       ext4    defaults        1 1
UUID=085b953d-5803-45df-b9d9-dc0ff7f92a3d /boot                   ext4    defaults        1 2
/dev/mapper/VG-home     /home                   ext4    defaults        1 2
/dev/mapper/VG-var      /var                    ext4    defaults        1 2
UUID=4169cca6-5a09-46fe-a2a7-64eba563b00a swap                    swap    defaults        0 0
/dev/md0               /raid0                 ext4      defaults        0 2 
/dev/md10              /raid10                ext4      defaults        0 2
"/etc/fstab" 15L, 788C written
[root@study ~]# reboot
[root@study ~]# df -l | grep "^/dev/md"      confirm after the reboot that raid0 and raid10 mounted automatically
/dev/md10            82368920   57368  80618840   1% /raid10
/dev/md0             41118944   49176  40215220   1% /raid0

# mdadm -Ds prints array summary lines, e.g. # mdadm -Ds /dev/md0 >> /etc/mdadm.conf

Note: /etc/mdadm.conf must be configured for automatic mounting at boot to work. Without it, /dev/md0 and /dev/md10 are renamed to /dev/md126 and /dev/md127 on reboot and the arrays fail to mount automatically.
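If the names still drift to md126/md127 even with /etc/mdadm.conf in place, a likely cause on CentOS 7 is a stale copy of the configuration inside the initramfs; rebuilding it is a common fix (a sketch, assuming the default dracut tooling):

# mdadm -Ds >> /etc/mdadm.conf       record the arrays, as above
# dracut -f                          rebuild the initramfs so it picks up the updated mdadm.conf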


Step 6: Simulate disk maintenance: a single-disk failure and disk replacement.

6.1 Using RAID 10 as the example, simulate a single-disk failure, remove the failed disk, and add a new one. The experiment shows that when a disk fails, the hot spare automatically takes over for it and the array rebuilds itself in a short time.

[root@study ~]# cp -a /boot/* /raid10/   copy some data files into /raid10
[root@study ~]# ls /raid10/
config-3.10.0-693.el7.x86_64                             initrd-plymouth.img
efi                                                      lost+found
grub                                                     symvers-3.10.0-693.el7.x86_64.gz
grub2                                                    System.map-3.10.0-693.el7.x86_64
initramfs-0-rescue-aa42d80ce1774acf8f5de007d85e5ef1.img  vmlinuz-0-rescue-aa42d80ce1774acf8f5de007d85e5ef1
initramfs-3.10.0-693.el7.x86_64.img                      vmlinuz-3.10.0-693.el7.x86_64
initramfs-3.10.0-693.el7.x86_64kdump.img
[root@study ~]# mdadm  -f  /dev/md10 /dev/sdd1     simulate a failure of /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md10
[root@study ~]# mdadm -D /dev/md10          check /dev/md10: the spare /dev/sdl1 has taken over for the failed /dev/sdd1, whose state is now faulty
/dev/md10:
           Version : 1.2
     Creation Time : Sun Jan 28 17:29:21 2018
        Raid Level : raid10
        Array Size : 83816448 (79.93 GiB 85.83 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 8
     Total Devices : 9
       Persistence : Superblock is persistent

       Update Time : Sun Jan 28 18:24:24 2018
             State : clean, degraded, recovering 
    Active Devices : 7
   Working Devices : 8
    Failed Devices : 1
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 1024K

Consistency Policy : resync

    Rebuild Status : 36% complete

              Name : study.itwish.cn:10  (local to host study.itwish.cn)
              UUID : 34dfaf9d:f7664825:c6968e4c:eaa15141
            Events : 24

    Number   Major   Minor   RaidDevice State
       8       8      177        0      spare rebuilding   /dev/sdl1
       1       8       65        1      active sync set-B   /dev/sde1
       2       8       81        2      active sync set-A   /dev/sdf1
       3       8       97        3      active sync set-B   /dev/sdg1
       4       8      113        4      active sync set-A   /dev/sdh1
       5       8      129        5      active sync set-B   /dev/sdi1
       6       8      145        6      active sync set-A   /dev/sdj1
       7       8      161        7      active sync set-B   /dev/sdk1

       0       8       49        -      faulty   /dev/sdd1
[root@study ~]# mdadm -r /dev/md10 /dev/sdd1       remove the failed /dev/sdd1 from the array
mdadm: hot removed /dev/sdd1 from /dev/md10
[root@study ~]# mdadm -a /dev/md10 /dev/sdd1      add the replacement disk /dev/sdd1 back into /dev/md10
mdadm: added /dev/sdd1
[root@study ~]# mdadm -D /dev/md10               check /dev/md10: the newly added /dev/sdd1 is now present as a spare
/dev/md10:
           Version : 1.2
     Creation Time : Sun Jan 28 17:29:21 2018
        Raid Level : raid10
        Array Size : 83816448 (79.93 GiB 85.83 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 8
     Total Devices : 9
       Persistence : Superblock is persistent

       Update Time : Sun Jan 28 18:27:55 2018
             State : clean, degraded, recovering 
    Active Devices : 7
   Working Devices : 9
    Failed Devices : 0
     Spare Devices : 2

            Layout : near=2
        Chunk Size : 1024K

Consistency Policy : resync

    Rebuild Status : 95% complete

              Name : study.itwish.cn:10  (local to host study.itwish.cn)
              UUID : 34dfaf9d:f7664825:c6968e4c:eaa15141
            Events : 36

    Number   Major   Minor   RaidDevice State
       8       8      177        0      spare rebuilding   /dev/sdl1
       1       8       65        1      active sync set-B   /dev/sde1
       2       8       81        2      active sync set-A   /dev/sdf1
       3       8       97        3      active sync set-B   /dev/sdg1
       4       8      113        4      active sync set-A   /dev/sdh1
       5       8      129        5      active sync set-B   /dev/sdi1
       6       8      145        6      active sync set-A   /dev/sdj1
       7       8      161        7      active sync set-B   /dev/sdk1

       9       8       49        -      spare   /dev/sdd1
[root@study ~]# ls /raid10/         the data is still there; the disk failure did not affect it
config-3.10.0-693.el7.x86_64                             initrd-plymouth.img
efi                                                      lost+found
grub                                                     symvers-3.10.0-693.el7.x86_64.gz
grub2                                                    System.map-3.10.0-693.el7.x86_64
initramfs-0-rescue-aa42d80ce1774acf8f5de007d85e5ef1.img  vmlinuz-0-rescue-aa42d80ce1774acf8f5de007d85e5ef1
initramfs-3.10.0-693.el7.x86_64.img                      vmlinuz-3.10.0-693.el7.x86_64
initramfs-3.10.0-693.el7.x86_64kdump.img
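Once the spare finishes rebuilding, the array should report a clean state again; a quick check (the Rebuild Status line disappears when the resync completes):

# mdadm -D /dev/md10 | grep -E 'State :|Rebuild'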

This completes the software RAID configuration and testing on CentOS.

