
RAID Principles and Software RAID Implementation

Tags: raid, mdadm, disk array, software RAID, shared hot spare

  1. What is RAID
  2. How RAID is implemented
  3. RAID levels and their characteristics
  4. Implementing software RAID


1. What is RAID

RAID stands for Redundant Arrays of Inexpensive (Independent) Disks.
How it works: multiple disks are combined into one "array" to provide better performance, redundancy, or both.
RAID levels: the disks can be organized in different ways, and the level describes that organization, e.g. RAID0, RAID1, RAID4, RAID5, RAID6, RAID10, RAID01, RAID50, and so on.


2. How RAID is implemented

Hardware implementation:

  External disk array: a PCI or PCI-E expansion card provides the RAID controller.
  Internal disk array: a RAID controller integrated on the motherboard.

Software implementation (Linux):

  Implemented in software with mdadm.

When to set up RAID:

  Before installing the operating system, via the BIOS. This approach is mainly used to install the operating system itself on the RAID.
  After installing the operating system, via the BIOS or software. The main goal here is to keep the operating system separate from the RAID that stores other data.


3. RAID levels and their characteristics

RAID 0: also called a striped volume (stripe)
  This mode works best when the member disks are the same model and capacity.
  Characteristics:
    Read and write performance improves.
    Usable space: smallest disk capacity × number of disks.
    No fault tolerance: if any one disk fails, data is lost.
    Minimum number of disks: 2
  How it works:
    Each disk is divided into chunks of equal size. When a file is written, the data is cut into chunk-sized pieces and written to the disks in turn. For example, a 100MB file written to two disks ends up with 50MB on each disk.
  (Diagram omitted.)
RAID 1: mirrored volume (mirror)
  Characteristics:
    Read performance improves; write performance drops slightly.
    Usable space: smallest disk capacity × 1
    Fault tolerant.
    Minimum number of disks: 2
  How it works:
    Each disk is divided into chunks of equal size. When a file is written, the data is cut into chunk-sized pieces and a full copy is written to each disk.
    In other words, when a file is saved, every disk in the mirror holds a complete copy of it.
  (Diagram omitted.)
RAID 4
  Characteristics:
    Requires at least three disks.
    Two disks hold data and one disk is dedicated to parity (every read and write has to touch the parity disk, so it carries a heavy load).
    Fault tolerant: one disk may fail. With one disk failed, the array runs in degraded mode; it can still be read and written, but running that way is not recommended.
    Usable space: (N-1) × smallest disk capacity
  How it works:
    Two disks store data and the remaining disk stores only parity. When data is written, the parity is computed with XOR.
    For example, if disk 1 stores 1011 and disk 2 stores 1100, disk 3 stores 1011 XOR 1100 = 0111. If disk 2 later fails, its data can be rebuilt as 1011 XOR 0111 = 1100; this arithmetic is checked in the short shell sketch below.
  (Diagram omitted.)
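The XOR parity arithmetic can be verified directly in a shell, assuming bash and bc are available (the bit patterns are the ones from the example above):

echo "obase=2; $(( 2#1011 ^ 2#1100 ))" | bc   # parity block: prints 111, i.e. 0111
echo "obase=2; $(( 2#1011 ^ 2#0111 ))" | bc   # rebuild failed disk 2: prints 1100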
RAID 5
  Characteristics:
    Requires at least three disks.
    The disks take turns holding the parity blocks, so there is no single dedicated parity disk.
    Fault tolerant: one disk may fail. With one disk failed, the array runs in degraded mode; it can still be read and written, but running that way is not recommended.
    Usable space: (N-1) × smallest disk capacity
  How it works:
    The same as RAID 4, except that the parity rotates across the three disks instead of living on one dedicated disk.
  (Diagram omitted.)

RAID 6
  Characteristics:
    Requires at least four disks.
    Read and write performance improves.
    Fault tolerant: up to two disks may fail without losing data.
    Usable space: (N-2) × smallest disk capacity
  How it works:
    Built from at least four disks. Each stripe carries two independent parity blocks, and the parity role rotates across the member disks, so the equivalent of two disks' worth of space goes to parity.
  (Diagram omitted.)

RAID 10
  Characteristics:
    Read and write performance improves.
    Fault tolerant: each RAID 1 pair can lose one disk without affecting data integrity.
    Usable space: 50% of (number of disks × smallest disk capacity)
    Requires at least four disks.
  How it is built:
    Suppose there are four disks, numbered 1 to 4. Disks 1 and 2 form one RAID 1, disks 3 and 4 form another RAID 1, and the two RAID 1 sets are then combined into a RAID 0.
  (Diagram omitted.)
RAID 01
  Characteristics:
    Read and write performance improves.
    Fault tolerant: an entire RAID 0 group can fail without affecting data integrity.
    Usable space: 50% of (number of disks × smallest disk capacity)
    Requires at least four disks.
  How it is built:
    Suppose there are four disks, numbered 1 to 4. Disks 1 and 2 form one RAID 0, disks 3 and 4 form another RAID 0, and the two RAID 0 sets are then mirrored as a RAID 1.
  (Diagram omitted.)


4. Implementing software RAID (how to use mdadm)

mdadm - manage MD devices aka Linux Software RAID

SYNOPSIS
  mdadm [mode] <raiddevice> [options] <component-devices>
    <raiddevice>: /dev/md#
    <component-devices>: any block devices
  Supported RAID levels: Linux supports LINEAR md devices, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, FAULTY, and CONTAINER.
MODES:
Create: -C  Create a new array.
Assemble: -A  Assemble a pre-existing array.
Monitor: -F  Select Monitor mode.
Manage: -f, -r, -a  (fail, remove, or add devices in an existing array)
Grow: -G  Change the size or shape of an active array.
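A minimal sketch of one invocation per mode; the device names (/dev/md0, /dev/sdb, /dev/sdc, /dev/sdd) are placeholders, not the ones used in the examples later in this post:

mdadm -C /dev/md0 -l 1 -n 2 /dev/sdb /dev/sdc            # Create: build a new RAID1 from two disks
mdadm -S /dev/md0; mdadm -A /dev/md0 /dev/sdb /dev/sdc   # Assemble: restart a stopped array
mdadm -F --scan                                           # Monitor: watch all known arrays (runs in the foreground)
mdadm /dev/md0 -f /dev/sdb -r /dev/sdb                    # Manage: mark a member faulty and remove it
mdadm /dev/md0 -a /dev/sdd; mdadm -G /dev/md0 -n 3        # Grow: add a disk, then expand the array to 3 members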
For Misc mode:
-S, --stop
  Deactivate array, releasing all resources.
-D, --detail
  Print details of one or more md devices.
--zero-superblock
  If the device contains a valid md superblock, the block is overwritten with zeros. With --force, the block where the superblock would be is overwritten even if it doesn't appear to be valid.
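For example, a retired array is usually torn down like this (a sketch; /dev/md0, /dev/sdb and /dev/sdc are placeholder names):

mdadm -S /dev/md0                            # stop the array and release its resources
mdadm --zero-superblock /dev/sdb /dev/sdc    # wipe the md metadata so the disks can be reused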
For Manage mode:
-a, --add
  Hot-add listed devices.
-r, --remove
  Remove listed devices. They must not be active, i.e. they should be failed or spare devices.
-f, --fail
  Mark listed devices as faulty.
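Taken together, these three options make up a typical disk-replacement workflow (a sketch with placeholder names):

mdadm /dev/md0 -f /dev/sdb    # mark the failing member as faulty
mdadm /dev/md0 -r /dev/sdb    # remove it from the array
mdadm /dev/md0 -a /dev/sdd    # hot-add a replacement; the rebuild starts automatically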
?
OPTIONS:
??For create ,grou:
-c, --chunk=
??Specify chunk size of kilobytes. The default when creating an array is 512KB. RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a power of 2. In any case it must be a multiple of 4KB
??指定塊大小,默認為512KB。RAID4, RAID5, RAID6,和 RAID10需要塊大小為2的次方,大多數情況下必須是4KB的倍數
??A suffix of ‘K‘, ‘M‘ or ‘G‘ can be given to indicate Kilobytes, Megabytes or Gigabytes respectively
??可給定的單位為K , M , G

-n, --raid-devices=
  Specify the number of active devices in the array.
  This, plus the number of spare devices (see below), must equal the number of component-devices (including "missing" devices) listed on the command line for --create.
  Setting a value of 1 is probably a mistake and so requires that --force be specified first. A value of 1 will then be allowed for linear, multipath, RAID0 and RAID1. It is never allowed for RAID4, RAID5 or RAID6.
  This number can only be changed using --grow for RAID1, RAID4, RAID5 and RAID6 arrays, and only on kernels which provide the necessary support.
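For instance, a RAID5 can be reshaped from 3 to 4 active members roughly as follows (a sketch; the device names are placeholders, older mdadm versions may additionally ask for a --backup-file, and the final step assumes an ext4 filesystem):

mdadm /dev/md5 -a /dev/sdj    # the new disk must first be present as a spare
mdadm -G /dev/md5 -n 4        # reshape to 4 active devices (progress shows in /proc/mdstat)
resize2fs /dev/md5            # once the reshape finishes, grow the ext4 filesystem into the new space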
-l, --level=
  Set RAID level. When used with --create, options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty, container. Obviously some of these are synonymous.
  Can be used with --grow to change the RAID level in some cases; see the LEVEL CHANGES section of the man page.
-a, --auto{=yes,md,mdp,part,p}{NN}
  Instruct mdadm how to create the device file if needed, possibly allocating an unused minor number.
  If --auto is not given on the command line or in the config file, then the default will be --auto=yes.
-a, --add
  This option can also be used in Grow mode, in two cases described in the man page.
-x, --spare-devices=
  Specify the number of spare (eXtra) devices in the initial array. Spares can also be added and removed later.
  The number of component devices listed on the command line must equal the number of RAID devices plus the number of spare devices.
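As a worked example of that rule (a sketch, not one of the arrays built in the exercises below), a RAID5 with 3 active members and 1 spare must list exactly 3 + 1 = 4 component devices:

mdadm -C /dev/md9 -l 5 -n 3 -x 1 -c 256K /dev/sd{b,c,d,e}   # 3 active + 1 spare = 4 devices listed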
Commands that do not take a specific mode:
-s, --scan
  Scan config file or /proc/mdstat for missing information.


Example 1: Create a RAID1 device with 1.5G of usable space, a chunk size of 128k, an ext4 filesystem and one spare disk, mounted automatically at boot on /backup.

[root@localhost ~]# mdadm -C /dev/md0 -l 1 -n 2 -x 1 -c 128K /dev/sd{c,d,e}
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Feb 25 20:07:03 2018
        Raid Level : raid1
        Array Size : 1571840 (1535.00 MiB 1609.56 MB)
     Used Dev Size : 1571840 (1535.00 MiB 1609.56 MB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sun Feb 25 20:07:11 2018
             State : clean
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : c3bdff1b:81d61bc6:2fc3c3a1:7bc1b94d
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd

       2       8       64        -      spare   /dev/sde
[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
98304 inodes, 392960 blocks
19648 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=402653184
12 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# mkdir -v /backup
mkdir: created directory ‘/backup’
[root@localhost ~]# blkid /dev/md0
/dev/md0: UUID="0eed7df3-35e2-4bd1-8c05-9a9a8081d8da" TYPE="ext4"
Edit /etc/fstab and add the following line:
UUID=0eed7df3-35e2-4bd1-8c05-9a9a8081d8da  /backup                  ext4    defaults        0 0
[root@localhost ~]# mount -a
# mount every filesystem listed in /etc/fstab
[root@localhost ~]# df /backup/
# check the size of the /backup mount
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/md0         1514352  4608   1414768   1% /backup
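An optional follow-up, not part of the original run (it assumes the same member names as above): fail one mirror leg and watch the spare /dev/sde take its place.

[root@localhost ~]# mdadm /dev/md0 -f /dev/sdc
[root@localhost ~]# cat /proc/mdstat
# md0 should now show /dev/sde rebuilding in place of the failed /dev/sdc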

Example 2: Create a RAID10 device with 10G of usable space, a chunk size of 256k and an ext4 filesystem.
Method 1: create the RAID10 directly.

[root@localhost ~]# mdadm -C /dev/md1 -n 4 -l 10 -c 256 /dev/sd{c..f}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid10 sdf[3] sde[2] sdd[1] sdc[0]
      3143680 blocks super 1.2 256K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun Feb 25 20:21:57 2018
        Raid Level : raid10
        Array Size : 3143680 (3.00 GiB 3.22 GB)
     Used Dev Size : 1571840 (1535.00 MiB 1609.56 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Feb 25 20:22:12 2018
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 256K

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : a573eb7d:018a1c9b:1e65917b:f7c48894
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync set-A   /dev/sdc
       1       8       48        1      active sync set-B   /dev/sdd
       2       8       64        2      active sync set-A   /dev/sde
       3       8       80        3      active sync set-B   /dev/sdf
[root@localhost ~]# mke2fs -t ext4 /dev/md1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=64 blocks, Stripe width=128 blocks
196608 inodes, 785920 blocks
39296 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=805306368
24 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# blkid /dev/md1
/dev/md1: UUID="a3e591d6-db9e-40b0-b61a-69c1a69b2db4" TYPE="ext4"

Method 2: create two RAID1 arrays first, then build a RAID0 on top of them.

[root@localhost ~]# mdadm -C /dev/md1 -n 2 -l 1 /dev/sdc /dev/sdd
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# mdadm -C /dev/md2 -n 2 -l 1 /dev/sde /dev/sdf
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@localhost ~]# mdadm -C /dev/md3 -n 2 -l 0 /dev/md{1,2}
mdadm: /dev/md1 appears to contain an ext2fs file system
       size=3143680K  mtime=Thu Jan  1 08:00:00 1970
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md3 started.

Example 3: Assemble arrays that were created earlier.
Method 1:

[root@localhost ~]# mdadm -S /dev/md3
mdadm: stopped /dev/md3
[root@localhost ~]# mdadm -A /dev/md3 /dev/md1 /dev/md2
mdadm: /dev/md3 has been started with 2 drives.
[root@localhost ~]# mdadm -D /dev/md3
/dev/md3:
           Version : 1.2
     Creation Time : Sun Feb 25 20:28:23 2018
        Raid Level : raid0
        Array Size : 3141632 (3.00 GiB 3.22 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Sun Feb 25 20:28:23 2018
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost.localdomain:3  (local to host localhost.localdomain)
              UUID : aac6a917:944b3d60:f901a2cc:a1e3fd13
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       9        1        0      active sync   /dev/md1
       1       9        2        1      active sync   /dev/md2

Method 2: assemble by scanning with the -s option.

[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# mdadm -A -s
mdadm: /dev/md/0 has been started with 4 drives.
[root@localhost ~]# ls /dev/md*
/dev/md0

/dev/md:
0

Method 3:

[root@localhost ~]# mdadm -D -s > /etc/mdadm.conf
# save the array definitions to /etc/mdadm.conf
[root@localhost ~]# cat /etc/mdadm.conf
ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=2c35b0c6:d7a07385:bedeea30:cef62b46
ARRAY /dev/md2 metadata=1.2 name=localhost.localdomain:2 UUID=48d4ad71:6f9f6565:a62447ba:2a7ce6ec
ARRAY /dev/md3 metadata=1.2 name=localhost.localdomain:3 UUID=aac6a917:944b3d60:f901a2cc:a1e3fd13
[root@localhost ~]# mdadm -S /dev/md3
# stop /dev/md3
mdadm: stopped /dev/md3
[root@localhost ~]# mdadm -A /dev/md3
# assemble /dev/md3 (the member devices are read from /etc/mdadm.conf)
mdadm: /dev/md3 has been started with 2 drives.
[root@localhost ~]# cat /proc/mdstat
# check md status
Personalities : [raid1] [raid6] [raid5] [raid4] [raid10] [raid0]
md3 : active raid0 md1[0] md2[1]
      3141632 blocks super 1.2 512k chunks

md2 : active raid1 sdf[1] sde[0]
      1571840 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdd[1] sdc[0]
      1571840 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Example 4: Create a RAID50 from 7 disks, one of which is a hot spare shared between the two RAID5 sets. The final capacity should be 6G, the chunk size 1M, the filesystem ext4, and the device should be mounted automatically at boot on /magedata.

# create the first RAID5: 3 active disks and 1 hot spare
 [root@localhost ~]# mdadm -C /dev/md1 -n 3 -l 5 -x 1 /dev/sd{c,d,e,f}
 mdadm: Defaulting to version 1.2 metadata
 mdadm: array /dev/md1 started.
 # create the second RAID5: 3 active disks
 [root@localhost ~]# mdadm -C /dev/md2 -n 3 -l 5 /dev/sd{g,h,i}
 mdadm: largest drive (/dev/sdg) exceeds size (1047552K) by more than 1%
 Continue creating array? y
 mdadm: Defaulting to version 1.2 metadata
 mdadm: array /dev/md2 started.
 # build a RAID0 from the two RAID5 arrays, with a chunk size of 1M
 [root@localhost ~]# mdadm -C /dev/md3 -n 2 -l 0 -c 1M /dev/md{1,2}
 mdadm: /dev/md1 appears to contain an ext2fs file system
        size=3143680K  mtime=Thu Jan  1 08:00:00 1970
 Continue creating array? y
 mdadm: Defaulting to version 1.2 metadata
 mdadm: array /dev/md3 started.
 # check the current md status
 [root@localhost ~]# cat /proc/mdstat
 Personalities : [raid1] [raid6] [raid5] [raid4] [raid10] [raid0]
 md3 : active raid0 md2[1] md1[0]
       5234688 blocks super 1.2 1024k chunks

 md2 : active raid5 sdi[3] sdh[1] sdg[0]
       2095104 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

 md1 : active raid5 sde[4] sdf[3](S) sdd[1] sdc[0]
       3143680 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

 unused devices: <none>
 # create the filesystem on md3
 [root@localhost ~]# mkfs.ext4 /dev/md3
 mke2fs 1.42.9 (28-Dec-2013)
 Filesystem label=
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 Stride=256 blocks, Stripe width=512 blocks
 327680 inodes, 1308672 blocks
 65433 blocks (5.00%) reserved for the super user
 First data block=0
 Maximum filesystem blocks=1340080128
 40 block groups
 32768 blocks per group, 32768 fragments per group
 8192 inodes per group
 Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736

 Allocating group tables: done
 Writing inode tables: done
 Creating journal (32768 blocks): done
 Writing superblocks and filesystem accounting information: done

 # look up the UUID of /dev/md3
 [root@localhost ~]# blkid /dev/md3
 /dev/md3: UUID="22add5f6-89fe-47c9-a867-a86d952ecf0b" TYPE="ext4"
 # create the mount point
 [root@localhost ~]# mkdir -v /magedata
 mkdir: created directory ‘/magedata’
 # edit /etc/fstab and add the following line:
 UUID=22add5f6-89fe-47c9-a867-a86d952ecf0b /magedata            ext4    defaults      0 0
 # save the current array definitions to /etc/mdadm.conf
 [root@localhost ~]# mdadm -D -s > /etc/mdadm.conf
 # check the contents of /etc/mdadm.conf
 [root@localhost ~]# cat /etc/mdadm.conf
 ARRAY /dev/md1 metadata=1.2 spares=1 name=localhost.localdomain:1 UUID=f9a252f3:c315e0d6:0fa2986a:e195a8ab
 ARRAY /dev/md2 metadata=1.2 name=localhost.localdomain:2 UUID=4c5641d3:965a42dd:e7f26d34:9557d42e
 ARRAY /dev/md3 metadata=1.2 name=localhost.localdomain:3 UUID=a5ee4d2f:8512c453:0faadd80:23d98b17
 # Edit the file above so that the md1 and md2 entries share a spare-group, and add a MAILADDR line:
 ARRAY /dev/md1 metadata=1.2 spares=1 name=localhost.localdomain:1 UUID=f9a252f3:c315e0d6:0fa2986a:e195a8ab spare-group=magedu
 ARRAY /dev/md2 metadata=1.2 spares=1 name=localhost.localdomain:2 UUID=4c5641d3:965a42dd:e7f26d34:9557d42e spare-group=magedu
 ARRAY /dev/md3 metadata=1.2 name=localhost.localdomain:3 UUID=a5ee4d2f:8512c453:0faadd80:23d98b17
 MAILADDR root@localhost
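 # Note (not part of the original run): moving a spare between arrays that share a spare-group
 # is handled by mdadm's Monitor mode, so a monitor that reads /etc/mdadm.conf must be running
 # before the failover test below. A minimal sketch, assuming this host ships the mdmonitor
 # systemd unit; alternatively the monitor can be started by hand with
 # "mdadm --monitor --scan --daemonise".
 [root@localhost ~]# systemctl start mdmonitor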
 # mark /dev/sdg as faulty
 [root@localhost ~]# mdadm /dev/md2 -f /dev/sdg
 mdadm: set /dev/sdg faulty in /dev/md2
 # Check the state of /dev/md2: md2 had no hot spare of its own, but after /dev/sdg is marked faulty, the hot spare from /dev/md1 has moved over to /dev/md2.
 [root@localhost ~]# mdadm -D /dev/md2
 /dev/md2:
            Version : 1.2
      Creation Time : Sun Feb 25 21:39:04 2018
         Raid Level : raid5
         Array Size : 2095104 (2046.00 MiB 2145.39 MB)
      Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
       Raid Devices : 3
      Total Devices : 4
        Persistence : Superblock is persistent

       Update Time : Sun Feb 25 22:03:11 2018
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:2  (local to host localhost.localdomain)
              UUID : 4c5641d3:965a42dd:e7f26d34:9557d42e
            Events : 39

    Number   Major   Minor   RaidDevice State
       4       8       80        0      active sync   /dev/sdf
       1       8      112        1      active sync   /dev/sdh
       3       8      128        2      active sync   /dev/sdi

       0       8       96        -      faulty   /dev/sdg
