RAID Basics

简中仙 · 2020-05-07 (last updated 2023-10-07)

I. RAID Levels

| Level | Performance | Fault tolerance | Minimum disks | Space efficiency |
| --- | --- | --- | --- | --- |
| RAID 0 (striping) | fast reads and writes | none | 2 | 1 |
| RAID 1 (mirroring) | writes slower, reads faster | survives 1 failed disk | 2 | 1/2 |
| RAID 10 | fast reads and writes | survives failures as long as both disks of the same mirror pair don't fail | 4 | 1/2 |
| RAID 01 | fast reads and writes | survives failures within one stripe group, but not the same-numbered disk in both groups | 4 | 1/2 |
| RAID 5 (parity) | fast reads and writes | survives 1 failed disk | 3 | (n-1)/n |
| RAID 6 | fast reads and writes | survives 2 simultaneous failures | 4 | (n-2)/n |
| RAID 7 | fast reads and writes | survives 3 simultaneous failures | 5 | (n-3)/n |
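The space-efficiency column maps directly to usable capacity. A minimal arithmetic check, using the same 20 GiB members as the demos in the next section (member size and counts here are just the demo values, not requirements):

```shell
# Usable capacity (GiB) for n member disks of s GiB each.
s=20
raid0=$(( 2 * s ))          # RAID 0, 2 members: n*s      -> matches the 40G md0 below
raid1=$(( s ))              # RAID 1, 2 members: n*s/2    -> matches the 20G md1 below
raid5=$(( (3 - 1) * s ))    # RAID 5, 3 members: (n-1)*s  -> matches the 40G md5 below
echo "$raid0 $raid1 $raid5"
```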

II. Creating RAID Arrays

Create mode command: mdadm -C

  • -l: RAID level
  • -n: number of member devices
  • -a {yes|no}: automatically create the device file
  • -c: chunk (stripe unit) size; must be a power of two, default 64K
  • -x: number of hot-spare disks
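The `-x` option is not used in the demos below; a hypothetical invocation adding one hot spare is shown as a comment (the device names are illustrative). The executable part is a quick bitwise check that a `-c` value is a power of two:

```shell
# Hypothetical: 3-disk RAID 5 with one hot spare; /dev/sde1 idles until a
# member fails, then rebuilds onto it automatically (needs root, real disks):
#   mdadm -C /dev/md5 -a yes -l 5 -n 3 -x 1 /dev/sd{b1,c1,d1,e1}

# -c must be a power of two: x is a power of two iff x & (x-1) == 0.
chunk=512
if [ $(( chunk & (chunk - 1) )) -eq 0 ]; then echo "ok"; else echo "bad"; fi
```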

1. Creating RAID 0

# mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sd{b1,c1}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# cat /proc/mdstat 
Personalities : [raid0] 
md0 : active raid0 sdc1[1] sdb1[0]
      41906176 blocks super 1.2 512k chunks
      
unused devices: <none>
# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# mkdir /mnt/Raid0
# mount /dev/md0 /mnt/Raid0/
# df -hT /mnt/Raid0/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs    40G   33M   40G   1% /mnt/Raid0

2. Creating RAID 1

# mdadm -C /dev/md1 -a yes -l 1 -n 2 /dev/sd{b1,c1}
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
# cat /proc/mdstat 
Personalities : [raid1] 
md1 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [=========>...........]  resync = 46.7% (9802112/20953088) finish=0.1min speed=980211K/sec
      
unused devices: <none>
# mkdir /mnt/Raid1
# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=1309568 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5238272, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# mount /dev/md1 /mnt/Raid1/
# df -hT /mnt/Raid1/ 
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md1       xfs    20G   33M   20G   1% /mnt/Raid1

3. Creating RAID 5

# mdadm -C /dev/md5 -a yes -l 5 -n 3 /dev/sd{b1,c1,d1}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md5 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=======>.............]  recovery = 37.8% (7921536/20953088) finish=0.4min speed=528102K/sec
      
unused devices: <none>
# mkdir /mnt/raid5
# mkfs.xfs /dev/md5 
meta-data=/dev/md5               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# mount /dev/md5 /mnt/raid5/
# df -hT /mnt/raid5/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md5       xfs    40G   33M   40G   1% /mnt/raid5
# mdadm -D /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Fri May  8 02:04:28 2020
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Fri May  8 02:06:50 2020
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 5b706bd6:feaf4048:5d1160a3:b53e91ea
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
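Rebuild progress can be followed without re-reading the whole status by hand. A sketch that pulls the percentage out of a /proc/mdstat-style line; the sample line is copied from the output above (on a live system, pipe `cat /proc/mdstat` instead of using a saved string):

```shell
# Extract the recovery percentage from an mdstat progress line.
line='      [=======>.............]  recovery = 37.8% (7921536/20953088) finish=0.4min speed=528102K/sec'
pct=$(echo "$line" | grep -o '[0-9.]*%')   # the only %-terminated token is the percentage
echo "$pct"
```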

III. Related Commands

1. Removing a failed disk

If the disk has not actually failed yet, mark it faulty first with `mdadm /dev/md1 -f /dev/sdc1`; only a failed or spare member can be hot-removed:

# mdadm /dev/md1 -r /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md1
# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri May  8 01:22:36 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Fri May  8 01:27:01 2020
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 53b4a211:634a85a6:288b86c3:9a404213
            Events : 26

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       -       0        0        1      removed
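Scripts that monitor arrays typically just grep the detail output for the degraded flag. A sketch, demonstrated on a snippet captured from the output above (on a live system, use `mdadm -D /dev/md1` in place of the saved string):

```shell
# Flag a degraded array from mdadm -D style output.
detail='             State : clean, degraded '
if echo "$detail" | grep -q 'degraded'; then
    state=degraded
else
    state=healthy
fi
echo "$state"
```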

2. Replacing the failed disk

The replacement disk should match the existing members as closely as possible in size and rotational speed, and should already be partitioned:

# mdadm /dev/md1 -a /dev/sdd1
mdadm: added /dev/sdd1
# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri May  8 01:22:36 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri May  8 01:28:34 2020
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 27% complete

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 53b4a211:634a85a6:288b86c3:9a404213
            Events : 36

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       2       8       49        1      spare rebuilding   /dev/sdd1

3. Unmounting and stopping the array

Unmount the filesystem first, then stop the array with `mdadm -S`. (To dismantle it permanently so the members can be reused, also wipe their superblocks afterwards with `mdadm --zero-superblock /dev/sd{b1,c1}`.)

# umount /dev/md1
# df -hT /mnt/Raid1/
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs    17G  1.2G   16G   7% /
# mdadm -S /dev/md1 
mdadm: stopped /dev/md1
# cat /proc/mdstat 
Personalities : [raid1] 
unused devices: <none>

4. Reassembling the array

`-A` assembles an existing array from its member devices, and `-R` starts it immediately:

# mdadm -AR /dev/md1 /dev/sd{b1,c1}
# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri May  8 01:22:36 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri May  8 01:45:02 2020
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 9% complete

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 53b4a211:634a85a6:288b86c3:9a404213
            Events : 52

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       2       8       33        1      spare rebuilding   /dev/sdc1

5. Scanning for array information

# mdadm -D --scan
ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=53b4a211:634a85a6:288b86c3:9a404213
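This scan line is exactly what `/etc/mdadm.conf` expects, so the usual way to make the array reassemble at boot is `mdadm -D --scan >> /etc/mdadm.conf` (needs root). The UUID can also be extracted from the line for record-keeping; a sketch using the scan output shown above:

```shell
# Pull the array UUID out of an `mdadm -D --scan` line (sample from above).
scan='ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=53b4a211:634a85a6:288b86c3:9a404213'
uuid=${scan##*UUID=}    # strip everything through "UUID="
echo "$uuid"
```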
