
Linux Advanced Filesystem Management (8)

If your Linux server has multiple users who frequently access data, disk quotas (Quota) are a very useful tool for keeping hard-disk usage fair among all users. And if your users often complain that disk space is never enough, you will need to learn more advanced filesystem tooling. In this chapter we introduce disk arrays (RAID) and the Logical Volume Manager (LVM); these tools all help you manage and maintain the disk capacity available to your users.



Quota: Disk Quota Configuration

Taken literally, Quota simply means an allowance or limit. Applied to pocket money, it would mean something like "how much allowance per month"; applied to disk usage on a computer running Linux, it means a cap on how much capacity may be consumed. We can use quota to make disk usage fairer. Below we explain what quota is and then walk through a complete example of putting it to use.

Because Linux is a multi-user operating system and by default does not limit how much disk space each user may use, a careless or malicious user could fill the disk, leaving the system unable to write or even causing it to crash. To guarantee that the system disk keeps enough free space, we need to impose disk-space limits on users and groups.

Types of disk quota limits:

⦁ Limit how much disk space a user or group may use
⦁ Limit how many files a user or group may create on the disk

Disk quota limit levels:

⦁ Soft limit: the lower limit. It can be exceeded; doing so triggers a warning, and the excess is granted a grace period. When the grace period expires the excess is cleared. The soft limit cannot exceed the hard limit.
⦁ Hard limit: the absolute limit. It cannot be exceeded; once it is reached no more space can be used.
⦁ Grace period: once usage exceeds the soft limit, a timer starts on the excess; when the grace period expires the excess data is cleared. The default grace period is 7 days.
Note: disk quotas are configured per partition. You cannot express "user X may use only 50MB across the whole system"; you can only set limits such as "user X may use 30M on the /home partition". Remember: disk quotas are per partition!
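The interaction of the three levels can be sketched in a few lines. This is an illustrative sketch of our own, not part of the quota tools; the function name and thresholds are made up:

```python
# Illustrative sketch (not part of the quota tools): how the soft limit,
# hard limit and grace period interact for a given usage level.

def quota_state(used_kb, soft_kb, hard_kb, grace_expired=False):
    """Classify usage the way quota enforcement does."""
    if used_kb > hard_kb:
        return "denied"        # the hard limit can never be exceeded
    if used_kb > soft_kb:
        # over the soft limit: allowed with a warning until the grace
        # period (7 days by default) runs out
        return "denied" if grace_expired else "warning"
    return "ok"

print(quota_state(150 * 1024, soft_kb=204800, hard_kb=512000))                      # ok
print(quota_state(300 * 1024, soft_kb=204800, hard_kb=512000))                      # warning
print(quota_state(300 * 1024, soft_kb=204800, hard_kb=512000, grace_expired=True))  # denied
```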

The minimal install does not ship this command; run yum install -y quota to install it

◆Check whether the kernel supports quotas◆

[root@localhost ~]# cat /boot/config-3.10.0-693.el7.x86_64 |grep "CONFIG_QUOTA"
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y

◆Check whether the target partition's mount options meet the requirements◆

[root@localhost ~]# dumpe2fs -h /dev/vdb |grep "Default mount options"
dumpe2fs 1.42.9 (28-Dec-2013)
Default mount options:    user_xattr acl
 
#Check whether the output contains the usrquota and grpquota mount options

◆quotacheck: generate the user and group quota files◆

[root@localhost ~]# quotacheck --help
Utility for checking and repairing quota files.
quotacheck [-gucbfinvdmMR] [-F <quota-format>] filesystem|-a

Syntax: [ quotacheck [options] [partition] ]

        -a      #Scan every partition in /etc/mtab that has quotas enabled; with this option no partition name is needed
        -u      #Create the user quota file, i.e. generate aquota.user
        -g      #Create the group quota file, i.e. aquota.group
        -v      #Show the scan progress
        -c      #Discard the existing quota files and create new ones

◆edquota: edit the quota files and set the limits◆

[root@localhost ~]# edquota --help
edquota: Usage:
        edquota [-rm] [-u] [-F formatname] [-p username] [-f filesystem] username ...
        edquota [-rm] -g [-F formatname] [-p groupname] [-f filesystem] groupname ...
        edquota [-u|g] [-F formatname] [-f filesystem] -t
        edquota [-u|g] [-F formatname] [-f filesystem] -T username|groupname ...

Syntax: [ edquota [options] [user or group name] ]

        -u      #User name
        -g      #Group name
        -t      #Set the grace period
        -p      #Copy quota rules from a template user or group, so you need not configure each one by hand
                #edquota        -p template_user        -u target_user
 
#Note: sizes written in the quota file default to KB
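Since the stored values default to KB, a quick conversion helps when turning limits such as "200M soft / 500M hard" into the numbers the quota file holds. A small sketch of our own (the helper name is not part of the quota tools):

```python
# Convert MB limits to the KB block counts stored in the quota file.
# The helper name is illustrative, not part of the quota tools.

def mb_to_quota_kb(mb):
    return mb * 1024  # quota block limits default to 1 KB units

print(mb_to_quota_kb(200))  # 204800 -> a 200M soft limit
print(mb_to_quota_kb(500))  # 512000 -> a 500M hard limit
print(mb_to_quota_kb(100))  # 102400 -> a 100M soft limit
```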

◆Turn quota management on◆

[root@localhost ~]# quotaon --help
quotaon: Usage:
        quotaon [-guvp] [-F quotaformat] [-x state] -a
        quotaon [-guvp] [-F quotaformat] [-x state] filesys ...

Syntax: [ quotaon [options] [partition] ]

        -a      #Enable quotas on every partition listed in /etc/mtab (no partition name needed)
        -u      #Enable user quotas
        -g      #Enable group quotas
        -v      #Show progress information

◆Turn quota management off◆

[root@localhost ~]# quotaoff --help
quotaoff: Usage:
        quotaoff [-guvp] [-F quotaformat] [-x state] -a
        quotaoff [-guvp] [-F quotaformat] [-x state] filesys ...

Syntax: [ quotaoff [options] [partition] ]

        -a      #Disable quotas on every partition listed in /etc/mtab (no partition name needed)
        -u      #Disable user quotas
        -g      #Disable group quotas
        -v      #Show progress information

◆quota: view the quota information of a given user or group◆

[root@localhost ~]# quota --hlep
quota: unrecognized option '--hlep'
quota: Usage: quota [-guqvswim] [-l | [-Q | -A]] [-F quotaformat]
        quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -u username ...
        quota [-qvswim] [-l | [-Q | -A]] [-F quotaformat] -g groupname ...
        quota [-qvswugQm] [-F quotaformat] -f filesystem ...

Syntax: [ quota [options] [user name] ]

        -u      #User name
        -g      #Group name
        -v      #Show detailed information
        -s      #Display sizes in human-readable units

◆repquota: view the disk quotas of a given partition◆

[root@localhost ~]# repquota --help
repquota: Utility for reporting quotas.
Usage:
repquota [-vugsi] [-c|C] [-t|n] [-F quotaformat] (-a | mntpoint)

Syntax: [ repquota [options] [partition] ]

        -u      #Report user quotas
        -g      #Report group quotas
        -v      #Show details
        -s      #Display sizes in human-readable units

◆setquota: set disk quotas non-interactively◆

[root@localhost ~]# setquota --help
setquota: Usage:
  setquota [-u|-g] [-rm] [-F quotaformat] <user|group>
        <block-softlimit> <block-hardlimit> <inode-softlimit> <inode-hardlimit> -a|<filesystem>...
  setquota [-u|-g] [-rm] [-F quotaformat] <-p protouser|protogroup> <user|group> -a|<filesystem>...
  setquota [-u|-g] [-rm] [-F quotaformat] -b [-c] -a|<filesystem>...
  setquota [-u|-g] [-F quotaformat] -t <blockgrace> <inodegrace> -a|<filesystem>...
  setquota [-u|-g] [-F quotaformat] <user|group> -T <blockgrace> <inodegrace> -a|<filesystem>...

setquota -u username block-soft block-hard inode-soft inode-hard partition
 
Note: a non-interactive command like this is better suited to scripts, and when many users need identical quota settings you can simply copy the rules.
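As a sketch of that scripting use case, the loop below prints one setquota command per user with identical limits. The user names, limits and filesystem are invented for illustration, and echo keeps it a dry run; drop the echo and run as root to actually apply the quotas:

```shell
#!/bin/sh
# Dry-run sketch: apply the same quota to several users with setquota.
# User names, limits and the filesystem are illustrative values.
SOFT=204800      # 200M soft block limit, in KB
HARD=512000      # 500M hard block limit, in KB
FS=/sdb1

for user in alice bob carol; do
    # inode limits of 0 0 mean "no limit on file count"
    echo setquota -u "$user" "$SOFT" "$HARD" 0 0 "$FS"
done
```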

◆A small disk-quota experiment◆

⦁ There is an unpartitioned disk /dev/sdb; partition it manually and format it.
⦁ Enable disk quotas and add the mount to the boot-time list.
⦁ Create the user lyshark and the group temp.
⦁ Give lyshark a 200M soft limit and a 500M hard limit; give the group temp a 100M soft limit and a 200M hard limit.

1. Check that the kernel supports quotas

[root@localhost ~]# cat /boot/config-3.10.0-862.el7.x86_64 |grep "CONFIG_QUOTA"
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y

2. Inspect the disks

[root@localhost ~]# ll /dev/sd*
brw-rw---- 1 root disk 8,  0 6月  24 09:14 /dev/sda
brw-rw---- 1 root disk 8,  1 6月  24 09:14 /dev/sda1
brw-rw---- 1 root disk 8,  2 6月  24 09:14 /dev/sda2
brw-rw---- 1 root disk 8, 16 6月  24 09:14 /dev/sdb

3. Partition /dev/sdb and format it as ext4

[root@localhost ~]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart
Partition name?  []? sdb1
File system type?  [ext2]? ext2
Start? 1M
End? 10000M
(parted) p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  10.0GB  9999MB  ext4         sdb1

(parted) q
Information: You may need to update /etc/fstab.

[root@localhost ~]# mkfs.ext4 /dev/sdb1

mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
610800 inodes, 2441216 blocks
122060 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
75 block groups
32768 blocks per group, 32768 fragments per group
8144 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

4. Create a mount point and mount the device

[root@localhost ~]# mkdir /sdb1
[root@localhost ~]# mount /dev/sdb1 /sdb1/

[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.0G  1.4G  6.7G  17% /
devtmpfs                  98M     0   98M   0% /dev
tmpfs                    110M     0  110M   0% /dev/shm
tmpfs                    110M  5.5M  104M   6% /run
tmpfs                    110M     0  110M   0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M  13% /boot
tmpfs                     22M     0   22M   0% /run/user/0
/dev/sr0                 4.2G  4.2G     0 100% /mnt
/dev/sdb1                9.1G   37M  8.6G   1% /sdb1

5. Check whether the partition supports quotas (look for usrquota and grpquota)

[root@localhost ~]# dumpe2fs -h /dev/sdb1 |grep "Default mount options"
dumpe2fs 1.42.9 (28-Dec-2013)
Default mount options:    user_xattr acl

[root@localhost ~]# cat /proc/mounts |grep "/dev/sdb1"
/dev/sdb1 /sdb1 ext4 rw,relatime,data=ordered 0 0

#The quota options are not there yet, so remount the disk with them added

[root@localhost ~]# mount -o remount,usrquota,grpquota /dev/sdb1

[root@localhost ~]# cat /proc/mounts |grep "/dev/sdb1"
/dev/sdb1 /sdb1 ext4 rw,relatime,quota,usrquota,grpquota,data=ordered 0 0

6. Make the partition mount automatically at boot, with quotas enabled

[root@localhost ~]# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Sep 21 20:07 13d5ccc2-52db-4aec-963a-f88e8edcf01c -> ../../sda1
lrwxrwxrwx 1 root root  9 Sep 21 20:07 2018-05-03-20-55-23-00 -> ../../sr0
lrwxrwxrwx 1 root root 10 Sep 21 20:07 4604dcf2-da39-455a-9719-e7c5833e566c -> ../../dm-0
lrwxrwxrwx 1 root root 10 Sep 21 20:47 939cbeb8-bc88-44aa-9221-50672111e123 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Sep 21 20:07 f6a4b420-aa6a-4e66-bbb3-c8e8280a099f -> ../../dm-1


[root@localhost ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Sep 18 09:05:06 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=13d5ccc2-52db-4aec-963a-f88e8edcf01c /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0

UUID=7d7f22ed-466e-4205-8efe-1b6184dc5e1b swap swap defaults 0 0
UUID=939cbeb8-bc88-44aa-9221-50672111e123 /sdb1   ext4   defaults,usrquota,grpquota  0 0

[root@localhost ~]# mount -o remount,usrquota,grpquota /dev/sdb1

7. Generate the quota files: quotacheck -ugv [partition]

[root@localhost ~]# quotacheck -ugv /dev/sdb1

quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.
quotacheck: Scanning /dev/sdb1 [/sdb1] done
quotacheck: Cannot stat old user quota file /sdb1/aquota.user: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /sdb1/aquota.group: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old user quota file /sdb1/aquota.user: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /sdb1/aquota.group: No such file or directory. Usage will not be subtracted.
quotacheck: Checked 3 directories and 0 files
quotacheck: Old file not found.
quotacheck: Old file not found.

8. Edit the limits: edquota -ugtp [user/group]

Give lyshark a 200M soft limit and a 500M hard limit

[root@localhost ~]# edquota -u lyshark

Disk quotas for user lyshark (uid 1000):

     ↓filesystem                    soft (size) hard (size)  inodes  soft (count) hard (count)
  Filesystem              blocks       soft       hard     inodes     soft     hard
  /dev/sdb1                 0          200M       500M          0        0        0

Give the group temp a 100M soft limit and a 200M hard limit.

[root@localhost ~]# edquota -g temp

Disk quotas for group temp (gid 1001):
  Filesystem                   blocks       soft       hard     inodes     soft     hard
  /dev/sdb1                         0     102400     204800          0        0        0

9. Turn quotas on: quotaon/quotaoff -augv

[root@localhost ~]# quotaon -augv
/dev/sdb1 [/sdb1]: group quotas turned on
/dev/sdb1 [/sdb1]: user quotas turned on

10. View the quotas of a given user or group: quota -ugvs

[root@localhost ~]# quota -ugvs

Disk quotas for user root (uid 0):
     Filesystem   space   quota   limit   grace   files   quota   limit   grace
      /dev/sdb1     20K      0K      0K               2       0       0
Disk quotas for group root (gid 0):
     Filesystem   space   quota   limit   grace   files   quota   limit   grace
      /dev/sdb1     20K      0K      0K               2       0       0


LVM: the Logical Volume Manager

LVM (Logical Volume Manager) is a mechanism for managing disk partitions under Linux. With ordinary partitioning, a partition's size cannot be changed once the disk has been divided; when a partition can no longer hold a file, the usual workarounds are symbolic links or partition-resizing tools, but those are stopgaps that do not solve the underlying problem. In short, LVM merges physical disks into one or more large virtual storage pools, from which we carve out space as needed. Because the pool is virtual, the space carved out can be resized freely, as follows:

The building blocks of LVM:

⦁ Physical Volume (PV): created from a disk or a partition
⦁ Volume Group (VG): several physical volumes combined. The members of one volume group may be different partitions of the same disk or partitions on different disks; we usually think of a volume group as one big disk.
⦁ Logical Volume (LV): if the volume group is the disk, a logical volume is a partition on that disk; logical volumes can be formatted and hold data.
⦁ Physical Extent (PE): the smallest allocation unit of a volume group. PEs live in the VG, i.e. on the disk, so you can think of a PE as a disk sector; the default size is 4MB and it is configurable.
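The PE arithmetic above can be checked with a few lines. This is our own illustrative helper, not an LVM command; a VG hands out space in whole extents, so LV sizes end up as multiples of the PE size:

```python
# Illustrative PE arithmetic (not an LVM command): space is allocated
# in whole physical extents, 4 MB each by default.

PE_SIZE_MB = 4

def extents_for(size_mb, pe_size_mb=PE_SIZE_MB):
    """Whole PEs needed to hold size_mb (rounded up)."""
    return -(-size_mb // pe_size_mb)  # ceiling division

print(extents_for(10 * 1024))  # a 10 GiB LV = 2560 extents of 4 MB
print(extents_for(15 * 1024))  # a 15 GiB LV = 3840 extents
```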

Four disks are prepared here; no partitioning or formatting is needed.

[root@localhost ~]# ll /dev/sd[b-z]

brw-rw---- 1 root disk 8, 16 Sep 21 22:04 /dev/sdb
brw-rw---- 1 root disk 8, 32 Sep 21 22:04 /dev/sdc
brw-rw---- 1 root disk 8, 48 Sep 21 22:04 /dev/sdd
brw-rw---- 1 root disk 8, 64 Sep 21 22:04 /dev/sde

◆Creating and removing PV physical volumes◆

Creating a PV

pvcreate [partition path],[partition path][.......]

[root@localhost ~]# ll /dev/sd[b-z]
brw-rw---- 1 root disk 8, 16 Sep 21 22:04 /dev/sdb
brw-rw---- 1 root disk 8, 32 Sep 21 22:04 /dev/sdc
brw-rw---- 1 root disk 8, 48 Sep 21 22:04 /dev/sdd
brw-rw---- 1 root disk 8, 64 Sep 21 22:04 /dev/sde

[root@localhost ~]# pvcreate /dev/sdb /dev/sdc /dev/sdd   #Create PVs on three disks here
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.
  Physical volume "/dev/sdd" successfully created.

[root@localhost ~]# pvs                                  #List the newly created PVs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  centos lvm2 a--  <9.00g     0
  /dev/sdb          lvm2 ---  10.00g 10.00g
  /dev/sdc          lvm2 ---  10.00g 10.00g
  /dev/sdd          lvm2 ---  10.00g 10.00g

Removing a PV

pvremove [partition path]

[root@localhost ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  centos lvm2 a--  <9.00g     0
  /dev/sdb          lvm2 ---  10.00g 10.00g
  /dev/sdc          lvm2 ---  10.00g 10.00g
  /dev/sdd          lvm2 ---  10.00g 10.00g

[root@localhost ~]# pvremove /dev/sdd                       #Remove /dev/sdd
  Labels on physical volume "/dev/sdd" successfully wiped.

[root@localhost ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  centos lvm2 a--  <9.00g     0
  /dev/sdb          lvm2 ---  10.00g 10.00g
  /dev/sdc          lvm2 ---  10.00g 10.00g

◆Creating and removing VG volume groups◆

Create a VG volume group; its members are chosen from the PVs

vgcreate -s [PE size] [VG name] [partition path] [partition path][.....]

[root@localhost ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  centos lvm2 a--  <9.00g     0
  /dev/sdb          lvm2 ---  10.00g 10.00g
  /dev/sdc          lvm2 ---  10.00g 10.00g

[root@localhost ~]# vgcreate -s 4M my_vg /dev/sdb /dev/sdc        #Create a VG volume group here
  Volume group "my_vg" successfully created

[root@localhost ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  centos   1   2   0 wz--n- <9.00g     0
  my_vg    2   0   0 wz--n- 19.99g 19.99g                         #This is the VG volume group, named my_vg

Add a new PV to the my_vg volume group, i.e. extend the volume group

vgextend [VG name] [physical volume]

[root@localhost ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   <9.00g      0
  /dev/sdb   my_vg  lvm2 a--  <10.00g <10.00g
  /dev/sdc   my_vg  lvm2 a--  <10.00g <10.00g
  /dev/sdd          lvm2 ---   10.00g  10.00g               #This PV is not assigned to any volume group

[root@localhost ~]# vgextend my_vg /dev/sdd                 #Add a PV to the given volume group
  Volume group "my_vg" successfully extended

[root@localhost ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   <9.00g      0
  /dev/sdb   my_vg  lvm2 a--  <10.00g <10.00g
  /dev/sdc   my_vg  lvm2 a--  <10.00g <10.00g
  /dev/sdd   my_vg  lvm2 a--  <10.00g <10.00g               #Now assigned to the my_vg volume group

Remove one PV from a VG volume group (removing a single PV)

vgreduce [VG name] [physical volume]

[root@localhost ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   <9.00g      0
  /dev/sdb   my_vg  lvm2 a--  <10.00g <10.00g
  /dev/sdc   my_vg  lvm2 a--  <10.00g <10.00g
  /dev/sdd   my_vg  lvm2 a--  <10.00g <10.00g

[root@localhost ~]# vgreduce my_vg /dev/sdd                #Remove /dev/sdd from the my_vg volume group
  Removed "/dev/sdd" from volume group "my_vg"

[root@localhost ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--   <9.00g      0
  /dev/sdb   my_vg  lvm2 a--  <10.00g <10.00g
  /dev/sdc   my_vg  lvm2 a--  <10.00g <10.00g
  /dev/sdd          lvm2 ---   10.00g  10.00g

Removing an entire VG volume group

vgremove [VG name]

[root@localhost ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  centos   1   2   0 wz--n- <9.00g     0
  my_vg    2   0   0 wz--n- 19.99g 19.99g

[root@localhost ~]# vgremove my_vg                    #Remove the entire volume group
  Volume group "my_vg" successfully removed

[root@localhost ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  centos   1   2   0 wz--n- <9.00g    0
[root@localhost ~]#


Remove the empty (unused) PVs from a VG

vgreduce -a [VG name]

[root@localhost ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  centos   1   2   0 wz--n-  <9.00g      0
  my_vg    3   0   0 wz--n- <29.99g <29.99g

[root@localhost ~]# vgreduce -a my_vg                 #Remove only the empty PVs
  Removed "/dev/sdb" from volume group "my_vg"
  Removed "/dev/sdc" from volume group "my_vg"

[root@localhost ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  centos   1   2   0 wz--n-  <9.00g      0
  my_vg    1   0   0 wz--n- <10.00g <10.00g

◆Creating and removing LV logical volumes◆

Creating an LV

lvcreate -L [size] -n [LV name] [VG: the volume group to carve the space from]

[root@localhost ~]# lvs
  LV   VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root centos -wi-ao---- <8.00g
  swap centos -wi-ao----  1.00g

[root@localhost ~]# lvcreate -L 10G -n my_lv my_vg            #Create the LVM logical volume
  Logical volume "my_lv" created.

[root@localhost ~]# lvs
  LV    VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root  centos -wi-ao---- <8.00g
  swap  centos -wi-ao----  1.00g
  my_lv my_vg  -wi-a----- 10.00g

Format it and mount it

[root@localhost ~]# mkdir /LVM                            #First create a mount point
[root@localhost ~]#
[root@localhost ~]# mkfs.ext4 /dev/my_vg/my_lv            #Format the LVM partition
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost ~]# mount /dev/my_vg/my_lv /LVM/                  #Mount the LVM
[root@localhost ~]#
[root@localhost ~]# df -h                                         #Check the result
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
devtmpfs                  98M     0   98M   0% /dev
tmpfs                    110M     0  110M   0% /dev/shm
tmpfs                    110M  5.5M  104M   5% /run
tmpfs                    110M     0  110M   0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M  13% /boot
tmpfs                     22M     0   22M   0% /run/user/0
/dev/mapper/my_vg-my_lv  9.8G   37M  9.2G   1% /LVM                ← mounted successfully


◆Growing an LV (add 5G of space to the LV)◆

Note: when growing, extend the LV first and then grow the filesystem

[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
devtmpfs                  98M     0   98M   0% /dev
tmpfs                    110M     0  110M   0% /dev/shm
tmpfs                    110M  5.5M  104M   5% /run
tmpfs                    110M     0  110M   0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M  13% /boot
tmpfs                     22M     0   22M   0% /run/user/0
/dev/mapper/my_vg-my_lv  9.8G   37M  9.2G   1% /LVM                  ←10G here

[root@localhost ~]# lvextend -L +5G /dev/my_vg/my_lv                 #Run the grow command, carving 5G from the VG
  Size of logical volume my_vg/my_lv changed from 10.00 GiB (2560 extents) to 15.00 GiB (3840).
  Logical volume my_vg/my_lv successfully resized.

[root@localhost ~]# resize2fs -f /dev/my_vg/my_lv                    #Grow the filesystem
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/my_vg/my_lv is mounted on /LVM; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 2
The filesystem on /dev/my_vg/my_lv is now 3932160 blocks long.

[root@localhost ~]# df -h                                            #Verify the result
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
devtmpfs                  98M     0   98M   0% /dev
tmpfs                    110M     0  110M   0% /dev/shm
tmpfs                    110M  5.5M  104M   5% /run
tmpfs                    110M     0  110M   0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M  13% /boot
tmpfs                     22M     0   22M   0% /run/user/0
/dev/mapper/my_vg-my_lv   15G   41M   14G   1% /LVM                  ←grown from 10G to 15G

◆Shrinking an LV (remove 5G of space from the LV)◆

Note: when shrinking, unmount the filesystem, check it, shrink the filesystem first, and only then shrink the LV

[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
devtmpfs                  98M     0   98M   0% /dev
tmpfs                    110M     0  110M   0% /dev/shm
tmpfs                    110M  5.5M  104M   5% /run
tmpfs                    110M     0  110M   0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M  13% /boot
tmpfs                     22M     0   22M   0% /run/user/0
/dev/mapper/my_vg-my_lv   15G   41M   14G   1% /LVM                 ←15G shown here

[root@localhost ~]# umount /dev/my_vg/my_lv                         #Unmount the LV

[root@localhost ~]# e2fsck -f /dev/my_vg/my_lv                      #Check the filesystem
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/my_vg/my_lv: 11/983040 files (0.0% non-contiguous), 104724/3932160 blocks

[root@localhost ~]# resize2fs -f /dev/my_vg/my_lv 10G               #Shrink the filesystem to the new, smaller size
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/my_vg/my_lv to 2621440 (4k) blocks.
The filesystem on /dev/my_vg/my_lv is now 2621440 blocks long.

[root@localhost ~]# lvreduce -L 10G /dev/my_vg/my_lv                 #Shrink the LV
  WARNING: Reducing active logical volume to 10.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce my_vg/my_lv? [y/n]: y                   #Enter y
  Size of logical volume my_vg/my_lv changed from 15.00 GiB (3840 extents) to 10.00 GiB (2560).
  Logical volume my_vg/my_lv successfully resized.

[root@localhost ~]# mount /dev/my_vg/my_lv /LVM/                    #Mount it again

[root@localhost ~]# df -h                                           #Check the partition again
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
devtmpfs                  98M     0   98M   0% /dev
tmpfs                    110M     0  110M   0% /dev/shm
tmpfs                    110M  5.5M  104M   5% /run
tmpfs                    110M     0  110M   0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M  13% /boot
tmpfs                     22M     0   22M   0% /run/user/0
/dev/mapper/my_vg-my_lv  9.8G   37M  9.2G   1% /LVM                 ←shrunk from 15G back to 10G

◆LV snapshots◆

Taking a snapshot

lvcreate -s -n [snapshot name] -L [snapshot size] [partition]

[root@localhost LVM]# ls
1    12  16  2   23  27  30  34  38  41  45  49  52  56  6   63  67  70  74  78  81  85  89  92  96
10   13  17  20  24  28  31  35  39  42  46  5   53  57  60  64  68  71  75  79  82  86  9   93  97
100  14  18  21  25  29  32  36  4   43  47  50  54  58  61  65  69  72  76  8   83  87  90  94  98
11   15  19  22  26  3   33  37  40  44  48  51  55  59  62  66  7   73  77  80  84  88  91  95  99

[root@localhost LVM]# lvcreate -s -n mylv_back -L 200M /dev/my_vg/my_lv            #Take a snapshot of the /LVM directory
  Logical volume "mylv_back" created.

[root@localhost LVM]# lvs                                                          #View the snapshot
  LV        VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root      centos -wi-ao----  <8.00g
  swap      centos -wi-ao----   1.00g
  my_lv     my_vg  owi-aos---  10.00g
  mylv_back my_vg  swi-a-s--- 200.00m      my_lv  0.01                             ←this is the snapshot

Restoring from a snapshot

[root@localhost LVM]# ls
1    12  16  2   23  27  30  34  38  41  45  49  52  56  6   63  67  70  74  78  81  85  89  92  96
10   13  17  20  24  28  31  35  39  42  46  5   53  57  60  64  68  71  75  79  82  86  9   93  97
100  14  18  21  25  29  32  36  4   43  47  50  54  58  61  65  69  72  76  8   83  87  90  94  98
11   15  19  22  26  3   33  37  40  44  48  51  55  59  62  66  7   73  77  80  84  88  91  95  99

[root@localhost LVM]# rm -fr *                                #Simulate the deletion
[root@localhost LVM]# mkdir /back                             #Create a mount point
[root@localhost LVM]# mount /dev/my_vg/mylv_back /back/       #Mount the snapshot backup
[root@localhost LVM]# cp -a /back/* ./                        #Copy the backed-up files back

[root@localhost LVM]# ls
1    12  16  2   23  27  30  34  38  41  45  49  52  56  6   63  67  70  74  78  81  85  89  92  96
10   13  17  20  24  28  31  35  39  42  46  5   53  57  60  64  68  71  75  79  82  86  9   93  97
100  14  18  21  25  29  32  36  4   43  47  50  54  58  61  65  69  72  76  8   83  87  90  94  98
11   15  19  22  26  3   33  37  40  44  48  51  55  59  62  66  7   73  77  80  84  88  91  95  99


RAID: Redundant Array of Independent Disks

Definition: an array built from independent disks that provides redundancy

Disk array categories: external disk-array enclosures, internal disk-array cards, and software emulation

1. By organizing several disks into one logical volume, RAID provides disk spanning
2. By splitting data into blocks and writing/reading several disks in parallel, it speeds up disk access
3. Through mirroring or parity it provides fault tolerance

Note: a RAID array mainly keeps the service alive when hardware fails; it cannot protect against operator error

Note: in production environments RAID is normally implemented in hardware; the software RAID shown here is for understanding only

An overview of the RAID levels

RAID 0, striping without parity (striped volume)
RAID 0 improves storage performance by spreading consecutive data across several disks, so that data requests can be serviced by several disks in parallel, each disk handling its own share of the data

RAID 1, mirrored disks (mirrored volume)
RAID 1 provides redundancy by mirroring data onto pairs of independent disks that back each other up. When the original copy is busy, reads can be served directly from the mirror, so RAID 1 can improve read performance.

RAID 10 (mirrored stripe array)
RAID 10 combines RAID 1 with RAID 0: it stripes across mirrored sets (no parity is involved), inheriting the speed of RAID 0 and the safety of RAID 1.

RAID 5, independent disks with distributed parity (at least 3 disks)
RAID 5 is a storage solution that balances performance, data safety and cost; it can be seen as a compromise between RAID 0 and RAID 1.
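For the levels above, usable capacity can be estimated with simple arithmetic. The function below is our own sketch, assuming n equal-sized disks, and is not tied to any RAID implementation:

```python
# Back-of-the-envelope usable capacity per RAID level, for n equal disks
# of disk_gb each. Illustrative sketch only.

def usable_gb(level, n, disk_gb):
    if level == 0:       # striping: all space usable, no redundancy
        return n * disk_gb
    if level == 1:       # mirroring: one copy's worth of space
        return disk_gb
    if level == 10:      # striped mirrors: half the space (n must be even)
        return n * disk_gb // 2
    if level == 5:       # distributed parity: one disk's worth lost
        return (n - 1) * disk_gb
    raise ValueError("unsupported level")

print(usable_gb(5, 3, 10))   # 20 -> three 10G disks in RAID 5 give ~20G
print(usable_gb(10, 4, 10))  # 20
print(usable_gb(1, 2, 10))   # 10
```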

◆The mdadm command◆

[root@localhost ~]# mdadm --help
mdadm is used for building, managing, and monitoring
Linux md devices (aka RAID arrays)
Usage: mdadm 

mdadm --create --auto=yes /dev/md[0-9] --raid-devices=[0-n] \
--level=[015] --spare-devices=[0-n] /dev/sd[a-z]

        --create             #Create a new RAID
        --auto=yes           #Use the default configuration
        --raid-devices=N     #Number of active disks in the array
        --spare-devices=N    #Number of spare disks
        --level [015]        #RAID level
        mdadm --detail       #Show array details
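Putting those options together, the dry-run sketch below composes the create command used in the next section. The echo only prints the command (actually building an array destroys data on the member disks); the device names are the ones from the example:

```shell
#!/bin/sh
# Dry-run sketch: compose an mdadm create command from the options above.
# echo keeps this from running; remove it (and run as root) to really
# build the array, which destroys any data on the member disks.
MD=/dev/md0
LEVEL=5
ACTIVE=3    # --raid-devices: active disks
SPARE=1     # --spare-devices: hot spares

echo mdadm --create --auto=yes "$MD" --level="$LEVEL" \
     --raid-devices="$ACTIVE" --spare-devices="$SPARE" \
     /dev/sdb /dev/sdc /dev/sdd /dev/sde
```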

◆Building a RAID 5◆

Note: the minimal install does not ship this command; run yum install -y mdadm to install it

[root@localhost ~]# ls -l /dev/sd[b-z]
brw-rw---- 1 root disk 8, 16 Sep 21 23:06 /dev/sdb
brw-rw---- 1 root disk 8, 32 Sep 21 23:06 /dev/sdc
brw-rw---- 1 root disk 8, 48 Sep 21 23:06 /dev/sdd
brw-rw---- 1 root disk 8, 64 Sep 21 23:04 /dev/sde

[root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=5 \
> --raid-devices=3 --spare-devices=1 /dev/sd{b,c,d,e}                  #Create a RAID whose interface is /dev/md0, at level RAID5
mdadm: Defaulting to version 1.2 metadata                              #3 active disks, 1 spare, drawn from sd{b,c,d,e}
mdadm: array /dev/md0 started.

[root@localhost ~]# mdadm --detail /dev/md0                            #View the array details
/dev/md0:   ←device file name
           Version : 1.2
     Creation Time : Fri Sep 21 23:19:09 2018   ←creation date
        Raid Level : raid5                      ←RAID level
        Array Size : 20953088 (19.98 GiB 21.46 GB)  ←usable space
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)   ←usable space per device
      Raid Devices : 3       ←number of RAID devices
     Total Devices : 4       ←total number of devices
       Persistence : Superblock is persistent

       Update Time : Fri Sep 21 23:19:26 2018
             State : clean, degraded, recovering
    Active Devices : 3   ←active disks
   Working Devices : 4   ←working disks
    Failed Devices : 0   ←failed disks
     Spare Devices : 1   ←spare disks

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 34% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 2ee2bcd5:c5189354:d3810252:23c2d5a8   ←this device's UUID
            Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      spare rebuilding   /dev/sdd

       3       8       64        -      spare   /dev/sde

Format /dev/md0 and mount it

[root@localhost ~]# mkfs -t ext4 /dev/md0             #Format it
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
1310720 inodes, 5238272 blocks
261913 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost ~]# mkdir /RAID              #Create the mount directory
[root@localhost ~]#
[root@localhost ~]# mount /dev/md0 /RAID/    #Mount the device
[root@localhost ~]#
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.0G  1.2G  6.9G  15% /
devtmpfs                  98M     0   98M   0% /dev
/dev/sr0                 4.2G  4.2G     0 100% /mnt
/dev/md0                  20G   45M   19G   1% /RAID    ←mounted successfully here

◆Simulating a RAID failure and rescue◆

mdadm --manage /dev/md[0-9] --add device --remove device --fail device

    --add     #Add the device to the md array
    --remove  #Remove the device
    --fail    #Mark the device as failed
------------------------------------------------------------
[Experiment]


[root@localhost /]# mdadm --manage /dev/md0 --fail /dev/sdb         #Mark /dev/sdb as failed
mdadm: set /dev/sdb faulty in /dev/md0

[root@localhost /]# mdadm --detail /dev/md0                         #Check the status
/dev/md0:
           Version : 1.2
     Creation Time : Fri Sep 21 23:19:09 2018
        Raid Level : raid5
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Fri Sep 21 23:50:12 2018
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1  ← one failed disk
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 5% complete     ←note: the array is rebuilding; once it reaches 100% it will work normally again

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 2ee2bcd5:c5189354:d3810252:23c2d5a8
            Events : 20

    Number   Major   Minor   RaidDevice State
       3       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       0       8       16        -      faulty   /dev/sdb   ← the failed disk


[root@localhost /]# mdadm --manage /dev/md0 --remove /dev/sdb            #Remove the failed disk
mdadm: hot removed /dev/sdb from /dev/md0


[root@localhost /]# mdadm --manage /dev/md0 --add /dev/sdb               #Add a new disk
mdadm: added /dev/sdb