A RAID1 md volume named /dev/md2 is mounted as the root filesystem
and consists of two partitions, /dev/nvme0n1p4 and /dev/nvme1n1p4.
We will remove the second partition (/dev/nvme1n1p4) from the RAID1 members, convert the array to RAID0, and then add the partition back to it.
-
Remove the partition from the mirror (mark it as failed, then remove it):
# mdadm /dev/md2 -f /dev/nvme1n1p4
# mdadm /dev/md2 -r /dev/nvme1n1p4
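Before converting the array, it is worth confirming that the member really left the mirror. A minimal sketch, parsing an mdstat-style status line; the sample line below is an assumption standing in for the live /proc/mdstat:

```shell
# Sample status line; on a live system read the real file instead:
#   mdstat_line=$(grep '^md2' /proc/mdstat)
mdstat_line='md2 : active raid1 nvme0n1p4[0]'

# Report whether the removed member still appears in the array.
case "$mdstat_line" in
  *nvme1n1p4*) echo "nvme1n1p4 still present" ;;
  *)           echo "nvme1n1p4 removed" ;;
esac
```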
-
Convert the now-degraded md2 to RAID0:
# mdadm /dev/md2 --grow --level=0
-
Add the second disk's partition back to the md2 array and grow it to two devices:
# mdadm --grow /dev/md2 --level=0 --raid-devices=2 --add /dev/nvme1n1p4
Note:
this operation will temporarily change the level of md2 to RAID4.
That is expected; after the reshape finishes, md2 will return to RAID0.
-
Wait for reshape to finish.
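The waiting step can be sketched as a small polling loop. This assumes reshape progress shows up in /proc/mdstat as a line containing the word "reshape"; the sample file stands in for the live one:

```shell
# Poll an mdstat-format file until no reshape is in progress.
wait_for_reshape() {
  while grep -q 'reshape' "$1"; do
    sleep 10
  done
  echo "reshape finished"
}

# Demonstration against a sample file instead of the live /proc/mdstat:
printf 'md2 : active raid0 nvme0n1p4[2] nvme1n1p4[1]\n' > /tmp/mdstat.sample
wait_for_reshape /tmp/mdstat.sample
```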
-
/dev/md2 automatically becomes RAID0 once the reshape has finished.
# mdadm -D /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Wed Aug  3 21:48:34 2022
        Raid Level : raid0
        Array Size : 957333504 (912.98 GiB 980.31 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Aug 24 14:11:33 2022
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : -unknown-
        Chunk Size : 64K

Consistency Policy : none

              Name : N*****1-20:2
              UUID : 14cf83f2:7e9c99e3:1114a588:18e75caa
            Events : 3373

    Number   Major   Minor   RaidDevice   State
       1      259       9        0        active sync   /dev/nvme1n1p4
       2      259       4        1        active sync   /dev/nvme0n1p4
# cat /proc/mdstat
Personalities : [raid0] [raid1] [linear] [multipath] [raid6] [raid5] [raid4] [raid10]
md2 : active raid0 nvme0n1p4[2] nvme1n1p4[1]
      957333504 blocks super 1.2 64k chunks
-
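As a sanity check, a 2-device RAID0 should report twice the capacity of one member. The per-member size below is an assumption: it is simply half of the final array size shown above, not a value taken from the original RAID1 output.

```shell
# Per-member size in 1 KiB blocks (assumed: half of the final array size).
member_kib=478666752
raid0_kib=$((member_kib * 2))
echo "$raid0_kib"   # matches the 957333504 blocks reported above
```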
Grow the file system (resize2fs can grow a mounted ext4 filesystem online):
# resize2fs /dev/md2
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        898G   97G  761G  12% /
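The df figure is consistent with the array size: 957333504 1 KiB blocks is about 912 GiB, and ext4 metadata plus reserved blocks account for the smaller 898G usable size. A quick arithmetic check:

```shell
# Convert the array size from 1 KiB blocks to whole GiB.
kib=957333504
gib=$((kib / 1048576))
echo "${gib} GiB"   # → 912 GiB (912.98 GiB before truncation)
```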
All operations were performed online, with zero downtime.