
Implementing software RAID on CentOS

Tags: raid, software RAID, RAID5

mdadm is a mode-based tool.
Command syntax: mdadm [mode] <raiddevice> [options] <component-devices>
Supported RAID levels: LINEAR, RAID0, RAID1, RAID4, RAID5, RAID6, RAID10
Main modes:
Create: -C (-D shows detailed array information)
Assemble: -A
Monitor: -F
Manage: -f, -r, -a
<raiddevice>: /dev/md[0..9]
<component-devices>: any block devices
-C create mode
-n num use num block devices to build the array
-l num RAID level to create
-a {yes|no} automatically create the device file for the RAID device
-c specify the chunk size
-x num number of spare disks
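For reference, a minimal sketch of how each of these modes is typically invoked; the device names below (/dev/sdb1 ... /dev/sde1) are placeholders and are not part of the walkthrough that follows:

mdadm -C /dev/md0 -l 5 -n 3 -x 1 -c 512 -a yes /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1   # create: 3 active members, 1 spare, 512K chunk
mdadm -D /dev/md0                                   # show detailed array information
mdadm -A /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1     # assemble an existing array from its members
mdadm -F --mail root --delay 300 /dev/md0           # monitor the array and mail alerts to root
mdadm /dev/md0 -f /dev/sdb1 -r /dev/sdb1            # manage: mark a member faulty, then remove it
mdadm /dev/md0 -a /dev/sdb1                         # manage: add a member back (or add a spare)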
Example: create a RAID5 array with 10G of usable space. RAID5 usable capacity is (n-1) x member size, so we use three active 5G members plus one 5G hot spare.

[root@ads3 ~]# fdisk /dev/sda ##create the 4 partitions used for the software RAID

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): m

Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

Command (m for help): l

0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
1 FAT12 39 Plan 9 82 Linux swap / So c1 DRDOS/sec (FAT-
2 XENIX root 3c PartitionMagic 83 Linux c4 DRDOS/sec (FAT-
3 XENIX usr 40 Venix 80286 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
4 FAT16 <32M 41 PPC PReP Boot 85 Linux extended c7 Syrinx
5 Extended 42 SFS 86 NTFS volume set da Non-FS data
6 FAT16 4d QNX4.x 87 NTFS volume set db CP/M / CTOS / .
7 HPFS/NTFS 4e QNX4.x 2nd part 88 Linux plaintext de Dell Utility
8 AIX 4f QNX4.x 3rd part 8e Linux LVM df BootIt
9 AIX bootable 50 OnTrack DM 93 Amoeba e1 DOS access
a OS/2 Boot Manag 51 OnTrack DM6 Aux 94 Amoeba BBT e3 DOS R/O
b W95 FAT32 52 CP/M 9f BSD/OS e4 SpeedStor
c W95 FAT32 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi eb BeOS fs
e W95 FAT16 (LBA) 54 OnTrackDM6 a5 FreeBSD ee GPT
f W95 Ext'd (LBA) 55 EZ-Drive a6 OpenBSD ef EFI (FAT-12/16/
10 OPUS 56 Golden Bow a7 NeXTSTEP f0 Linux/PA-RISC b
11 Hidden FAT12 5c Priam Edisk a8 Darwin UFS f1 SpeedStor
12 Compaq diagnost 61 SpeedStor a9 NetBSD f4 SpeedStor
14 Hidden FAT16 <3 63 GNU HURD or Sys ab Darwin boot f2 DOS secondary
16 Hidden FAT16 64 Novell Netware af HFS / HFS+ fb VMware VMFS
17 Hidden HPFS/NTF 65 Novell Netware b7 BSDI fs fc VMware VMKCORE
18 AST SmartSleep 70 DiskSecure Mult b8 BSDI swap fd Linux raid auto
1b Hidden W95 FAT3 75 PC/IX bb Boot Wizard hid fe LANstep
1c Hidden W95 FAT3 80 Old Minix be Solaris boot ff BBT
1e Hidden W95 FAT1
Command (m for help): t
Partition number (1-9): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-9): 8
Hex code (type L to list codes): fd
Changed system type of partition 8 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-9): 9
Hex code (type L to list codes): fd
Changed system type of partition 9 to fd (Linux raid autodetect)

Command (m for help): n
First cylinder (12161-65271, default 12161):
Using default value 12161
Last cylinder, +cylinders or +size{K,M,G} (12161-65271, default 65271): +5G

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
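The fdisk dialogue above is abridged: only one of the four partitions is shown being created. A rough sketch of the full sequence, assuming the same /dev/sda layout as in this walkthrough (partition numbers 7-10 are the author's, adjust them for your own disk):

fdisk /dev/sda          # for each of sda7-sda10: n (new partition, default start, size +5G),
                        # then t, <partition number>, fd (Linux raid autodetect); write with w
partx -a /dev/sda       # make the kernel re-read the table (or: partprobe /dev/sda)
cat /proc/partitions    # confirm sda7-sda10 are now visible to the kernel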
[root@ads3 ~]# partx -a /dev/sda ##make the kernel re-read the partition table
[root@ads3 ~]# cat /proc/mdstat ##check whether any software RAID arrays already exist

Personalities :
unused devices: <none>
[root@ads3 ~]# mdadm -C /dev/md0 -a yes -n 3 -x 1 -l 5 /dev/sda{7,8,9,10} ##create the software RAID
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@ads3 ~]# cat /proc/mdstat ##check the RAID status again
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda9[4] sda10[3] sda8[1] sda7[0]
10482688 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[==>..................] recovery = 12.9% (678720/5241344) finish=0.7min speed=96960K/sec

unused devices: <none>
[root@ads3 ~]# mkfs -t ext4 /dev/md0 ##create an ext4 filesystem on the RAID device
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
655360 inodes, 2620672 blocks
131033 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@ads3 ~]# mkdir /mydata ##create the mount point
[root@ads3 ~]# mount /dev/md0 /mydata ##mount the RAID device
[root@ads3 ~]# mount ##list the mounted filesystems

/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/sda3 on /data type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/md0 on /mydata type ext4 (rw)
[root@ads3 ~]# df -lh #check the size of /dev/md0
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 48G 1.7G 44G 4% /
tmpfs 931M 0 931M 0% /dev/shm
/dev/sda1 1.9G 76M 1.8G 5% /boot
/dev/sda3 20G 44M 19G 1% /data
/dev/md0 9.8G 23M 9.2G 1% /mydata
[root@ads3 ~]# blkid /dev/md0 #show the UUID and filesystem type of /dev/md0
/dev/md0: UUID="b161511b-3570-4874-8c23-b1d2a8d0893b" TYPE="ext4"
dumpe2fs -h /dev/md0
-h only display the superblock information and not any of the block group descriptor detail information.
[root@ads3 ~]# dumpe2fs -h /dev/md0 ##show the filesystem superblock information
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: b161511b-3570-4874-8c23-b1d2a8d0893b
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
....
To mount /dev/md0 automatically at boot, edit /etc/fstab and add the following line:
UUID="b161511b-3570-4874-8c23-b1d2a8d0893b" /mydata ext4 defaults 0 0
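Two follow-up steps are worth doing here (not shown in the original transcript): record the array in /etc/mdadm.conf so it is assembled with the same name after a reboot, and test the new fstab entry without rebooting. A minimal sketch, assuming the default CentOS config path /etc/mdadm.conf:

mdadm -Ds >> /etc/mdadm.conf   # append an "ARRAY /dev/md0 ... UUID=..." line (-Ds = --detail --scan)
mount -a                       # mount everything listed in /etc/fstab; a bad entry fails here, not at boot
df -h /mydata                  # confirm /dev/md0 is mounted on /mydata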
[root@ads3 ~]# mdadm -D /dev/md0 ##show detailed RAID information
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 24 21:53:39 2018
Raid Level : raid5
Array Size : 10482688 (10.00 GiB 10.73 GB)
Used Dev Size : 5241344 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Sat Feb 24 21:56:21 2018
      State : clean 

Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1

     Layout : left-symmetric
 Chunk Size : 512K

       Name : ads3:0  (local to host ads3)
       UUID : a93c9421:a485dac0:04e4a6d4:e17d7116
     Events : 18

Number   Major   Minor   RaidDevice State
   0       8        7        0      active sync   /dev/sda7
   1       8        8        1      active sync   /dev/sda8
   4       8        9        2      active sync   /dev/sda9

   3       8       10        -      spare   /dev/sda10

[root@ads3 ~]# mdadm /dev/md0 -f /dev/sda7 ##mark /dev/sda7 as faulty
mdadm: set /dev/sda7 faulty in /dev/md0
[root@ads3 ~]# watch -n1 "cat /proc/mdstat" ##watch the resync status, refreshed every second

Every 1.0s: cat /proc/mdstat Sat Feb 24 22:08:30 2018

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda9[4] sda10[3] sda8[1] sda7[0]
10482688 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
[root@ads3 ~]# mdadm -D /dev/md0 ##show detailed RAID information
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 24 21:53:39 2018
Raid Level : raid5
Array Size : 10482688 (10.00 GiB 10.73 GB)
Used Dev Size : 5241344 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Sat Feb 24 22:07:11 2018
      State : clean 

Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0

     Layout : left-symmetric
 Chunk Size : 512K

       Name : ads3:0  (local to host ads3)
       UUID : a93c9421:a485dac0:04e4a6d4:e17d7116
     Events : 37

Number   Major   Minor   RaidDevice State
   3       8       10        0      active sync   /dev/sda10
   1       8        8        1      active sync   /dev/sda8
   4       8        9        2      active sync   /dev/sda9

   0       8        7        -      faulty   /dev/sda7

[root@ads3 ~]# cp /etc/fstab /mydata ##copy a file into /mydata to verify the degraded array is still usable
[root@ads3 ~]# cat /mydata/fstab ##show the content of the copied file

#
# /etc/fstab
# Created by anaconda on Fri Feb 23 10:54:44 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#

UUID=af25ecdc-82ea-43ec-89ca-9ae5b12109b9 / ext4 defaults 1 1
UUID=9e3fa0e8-6130-49a1-a2a3-fafa6ac0cfeb /boot ext4 defaults 1 2
UUID=0b8c477f-3d8e-4a24-88e8-a4b0524e1101 /data ext4 defaults 1 2
UUID=af38e7c0-6524-4d7b-ab20-f6faed3be199 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
[root@ads3 ~]# mdadm /dev/md0 -f /dev/sda8 ##mark /dev/sda8 as faulty
mdadm: set /dev/sda8 faulty in /dev/md0
[root@ads3 ~]# mdadm -D /dev/md0 ##show detailed RAID information
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 24 21:53:39 2018
Raid Level : raid5
Array Size : 10482688 (10.00 GiB 10.73 GB)
Used Dev Size : 5241344 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Sat Feb 24 22:16:11 2018
      State : clean, degraded 

Active Devices : 2
Working Devices : 2
Failed Devices : 2
Spare Devices : 0

     Layout : left-symmetric
 Chunk Size : 512K

       Name : ads3:0  (local to host ads3)
       UUID : a93c9421:a485dac0:04e4a6d4:e17d7116
     Events : 41

Number   Major   Minor   RaidDevice State
   3       8       10        0      active sync   /dev/sda10
   2       0        0        2      removed
   4       8        9        2      active sync   /dev/sda9

   0       8        7        -      faulty   /dev/sda7
   1       8        8        -      faulty   /dev/sda8

[root@ads3 ~]# mdadm /dev/md0 -r /dev/sda7 ##remove /dev/sda7 from the array
mdadm: hot removed /dev/sda7 from /dev/md0
[root@ads3 ~]# mdadm /dev/md0 -r /dev/sda8 ##remove /dev/sda8 from the array
mdadm: hot removed /dev/sda8 from /dev/md0
[root@ads3 ~]# mdadm -D /dev/md0 ##show detailed RAID information
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 24 21:53:39 2018
Raid Level : raid5
Array Size : 10482688 (10.00 GiB 10.73 GB)
Used Dev Size : 5241344 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Sat Feb 24 22:18:09 2018
      State : clean, degraded 

Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

     Layout : left-symmetric
 Chunk Size : 512K

       Name : ads3:0  (local to host ads3)
       UUID : a93c9421:a485dac0:04e4a6d4:e17d7116
     Events : 43

Number   Major   Minor   RaidDevice State
   3       8       10        0      active sync   /dev/sda10
   2       0        0        2      removed
   4       8        9        2      active sync   /dev/sda9
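The transcript jumps from the removal above straight to a rebuild in progress below, so the intermediate step is implied: the partitions are added back with the -a management option listed at the top. A sketch of that step, using the author's partition names:

mdadm /dev/md0 -a /dev/sda7   # add back; one re-added member becomes the rebuild target
mdadm /dev/md0 -a /dev/sda8   # the other ends up as the new hot spare
cat /proc/mdstat              # the recovery progress is what the watch below displays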

[root@ads3 ~]# watch -n1 "cat /proc/mdstat" ##watch the rebuild status, refreshed every second
Every 1.0s: cat /proc/mdstat Sat Feb 24 22:21:02 2018

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda8[6] sda7[5] sda9[4] sda10[3]
10482688 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
[===========>.........] recovery = 59.2% (3107360/5241344) finish=0.3min speed=93006K/sec

unused devices: <none>

[root@ads3 ~]# mdadm -D /dev/md0 ##show detailed RAID information
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 24 21:53:39 2018
Raid Level : raid5
Array Size : 10482688 (10.00 GiB 10.73 GB)
Used Dev Size : 5241344 (5.00 GiB 5.37 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Sat Feb 24 22:21:25 2018
      State : clean 

Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1

     Layout : left-symmetric
 Chunk Size : 512K

       Name : ads3:0  (local to host ads3)
       UUID : a93c9421:a485dac0:04e4a6d4:e17d7116
     Events : 64

Number   Major   Minor   RaidDevice State
   3       8       10        0      active sync   /dev/sda10
   5       8        7        1      active sync   /dev/sda7
   4       8        9        2      active sync   /dev/sda9

   6       8        8        -      spare   /dev/sda8
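Finally, the assemble mode (-A) listed at the beginning is not exercised above. A minimal sketch of stopping the array and bringing it back, under the same assumptions as the rest of this walkthrough:

umount /mydata
mdadm -S /dev/md0                                             # stop the array
mdadm -A /dev/md0 /dev/sda7 /dev/sda8 /dev/sda9 /dev/sda10    # reassemble it from its members
mdadm -A --scan                                               # alternative: assemble everything listed in /etc/mdadm.conf
mount /dev/md0 /mydata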
