What's this all about?
Learning about Linux software RAID and getting practice using Linux software RAID tools. The following tutorial is for those people who'd like to get some practice building RAID devices, but don't have spare drives or partitions lying around. The Linux loop device lets us create an arbitrary number of block devices from which we can build a RAID device. If you've ever mounted an ISO image of a cdrom with the 'mount -o loop' command, you've used the loop device. I'll run through what's needed in the kernel and the software needed to manage the RAID devices. The goal is to become familiar with most of the concepts and tools for building and using RAID devices. Note: This will not show you how to boot from your array.
Here's some more background material on various RAID levels.
You'll need space on your disks to create the files that will back the loop devices. If you want to build a 1GB RAID 0 device, you'll need just over 1GB of free disk space; just over 2GB for a 1GB RAID 1 device. You'll need to be the root user or have access via sudo.
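To work out how much free space you'll need, it helps to know how much usable capacity each RAID level yields. Here's a rough sketch in shell arithmetic (the function name is my own, and it ignores superblock and metadata overhead):

```shell
#!/bin/sh
# Approximate usable capacity of an md array, ignoring metadata overhead.
# Args: RAID level (0, 1 or 5), number of member devices, size of each in MB.
raid_capacity_mb() {
    level=$1; n=$2; size=$3
    case $level in
        0) echo $((n * size)) ;;         # striping: all space is usable
        1) echo "$size" ;;               # mirroring: one copy's worth
        5) echo $(( (n - 1) * size )) ;; # one disk's worth goes to parity
        *) echo "unsupported level" >&2; return 1 ;;
    esac
}

raid_capacity_mb 5 3 1000   # three 1000MB devices -> 2000MB usable
```

So the three 1GB devices we're about to create will give us roughly 2GB of usable RAID 5 space.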
Create block devices from files
Make a directory somewhere, and cd into it.
# mkdir raidEx
# cd raidEx
Next, we create three files of 1GB each. Feel free to change this to whatever size you like.
# dd if=/dev/zero of=file1 bs=1M count=1000
# dd if=/dev/zero of=file2 bs=1M count=1000
# dd if=/dev/zero of=file3 bs=1M count=1000
Then we set up the loop devices.
# losetup /dev/loop1 file1
# losetup /dev/loop2 file2
# losetup /dev/loop3 file3
Now we've got our block devices. In the scheme of things, you can treat /dev/loop1 the same as any other block device such as /dev/hda1.
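If you find yourself doing this often, the dd/losetup pairs above can be scripted. Here's a sketch; the DRY_RUN toggle and function names are my additions, so you can preview the commands before running them for real as root:

```shell
#!/bin/sh
# Prepare N backing files and attach each to a loop device.
# With DRY_RUN=1 (the default here) the commands are only printed,
# so you can check the plan before executing it as root.
maybe() {
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi
}

make_loops() {
    count=$1; size_mb=$2
    i=1
    while [ "$i" -le "$count" ]; do
        maybe dd if=/dev/zero of="file$i" bs=1M count="$size_mb"
        maybe losetup "/dev/loop$i" "file$i"
        i=$((i + 1))
    done
}

make_loops 3 1000
```

Run it with DRY_RUN=0 as root to actually create and attach the files.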
Build a RAID device
Using mdadm, we create the md device.
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/loop1 /dev/loop2 /dev/loop3
You can follow the status of the md devices in your system by cat'ing /proc/mdstat, like so.
# watch -n .1 cat /proc/mdstat

Every 0.1s: cat /proc/mdstat                Mon Jul  2 20:54:22 2007

Personalities : [raid0] [raid1] [raid5]
md0 : active raid5 loop3 loop2 loop1
      2047872 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  5.7% (58752/1023936) finish=8.7min speed=1836K/sec
We can see the device synchronising itself, calculating checksums for the parity blocks, complete with statistics about the operation. Once the array is built, you can add it to the mdadm.conf file, so that it's easy to start:
# mdadm --detail --scan >> /etc/mdadm.conf
Then format the device with the filesystem of your choice, Reiserfs ("MurderFS", as some people call it) in this case, and mount it:
# mkreiserfs /dev/md0
# mkdir mnt0
# mount /dev/md0 mnt0
Try failing a device (once it's finished syncing, of course - or not, depending on what you want to learn ;-)), removing it, then re-adding it.
# mdadm /dev/md0 -f /dev/loop2
# mdadm /dev/md0 -r /dev/loop2
# mdadm /dev/md0 -a /dev/loop2
Check out /proc/mdstat to see how it's all going.
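If you get tired of eyeballing the watch output, you can pull the recovery (or reshape) percentage out of /proc/mdstat with a little awk. A sketch, run here against a captured sample so it's easy to try without a live array; on a real system you'd pipe in /proc/mdstat instead:

```shell
#!/bin/sh
# Extract the recovery/reshape progress figure from mdstat-style output.
mdstat_progress() {
    awk '/recovery|reshape/ { for (i = 1; i <= NF; i++) if ($i ~ /%/) print $i }'
}

# Captured sample output; replace with: cat /proc/mdstat | mdstat_progress
sample='Personalities : [raid0] [raid1] [raid5]
md0 : active raid5 loop3 loop2 loop1
      2047872 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  5.7% (58752/1023936) finish=8.7min speed=1836K/sec'

printf '%s\n' "$sample" | mdstat_progress   # -> 5.7%
```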
Growing a RAID 5 Array
Recent versions of the Linux kernel allow you to grow RAID sets in various ways, which gives you some very useful flexibility. With RAID 5, you can add extra disks to the array as hot spares, then grow the array to include these new disks. Amongst other things, you can:
- Add disks to a RAID 5
- Add larger disks to a RAID 1 Array, growing the size of the device.
# mdadm /dev/md0 -a /dev/loop4
mdadm: added /dev/loop4
# mdadm --grow /dev/md0 --raid-devices=4
mdadm: Need to backup 384K of critical section..
mdadm: ... critical section passed
Now check out /proc/mdstat to see how it's going. When you're growing a RAID 5 array, there's a period during which a crash would cause it to lose data; mdadm backs up the data at risk before proceeding (you can also keep that backup on another device with the --backup-file option).
Growing a RAID 1 Array
To grow a RAID 1 array, the steps are as follows:
- Add two new, larger devices to the array as spares
- Fail one of the existing disks in the array
- Once the array has repaired itself, fail the next device then remove both failed devices
- You now have a RAID 1 array of the original size, but with larger underlying block devices
- Use --grow and --size=max to expand the array to the full size of the underlying block devices
- Resize your filesystem to match the new larger block device
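The steps above can be sketched as a script. This version only echoes the mdadm commands (drop the echoes to run it for real), and the device names are chosen to match the example that follows:

```shell
#!/bin/sh
# Dry-run sketch of migrating a RAID 1 array onto larger devices.
# Each mdadm command is echoed rather than executed; in real use,
# wait for the array to rebuild onto a spare before failing the
# second original disk.
migrate_raid1() {
    md=$1; old1=$2; old2=$3; new1=$4; new2=$5
    echo mdadm "$md" -a "$new1"
    echo mdadm "$md" -a "$new2"
    echo mdadm "$md" -f "$old1"
    echo mdadm "$md" -r "$old1"
    # ...wait for the rebuild to finish here before continuing...
    echo mdadm "$md" -f "$old2"
    echo mdadm "$md" -r "$old2"
    echo mdadm --grow "$md" --size=max
}

migrate_raid1 /dev/md0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
```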
Here's an example: /dev/loop1 & /dev/loop2 are both 1GB in size, /dev/loop3 & /dev/loop4 are 1.5GB.
~ # mdadm /dev/md0 -a /dev/loop3
mdadm: added /dev/loop3
~ # mdadm /dev/md0 -a /dev/loop4
mdadm: added /dev/loop4
~ # mdadm /dev/md0 -f /dev/loop1
mdadm: set /dev/loop1 faulty in /dev/md0
~ # mdadm /dev/md0 -f /dev/loop2
mdadm: set /dev/loop2 faulty in /dev/md0
~ # mdadm /dev/md0 -r /dev/loop1
mdadm: hot removed /dev/loop1
~ # mdadm /dev/md0 -r /dev/loop2
mdadm: hot removed /dev/loop2
~ # mdadm --grow /dev/md0 --size=max
Now that the underlying RAID device has grown, you need to resize your filesystem to match (for the Reiserfs filesystem we created earlier, that's resize_reiserfs /dev/md0). This can be useful in many situations. I'm sure you're thinking of them now.
Convert RAID 1 array to RAID 5
Started out with two disks in RAID 1 and now you want to add another disk? How about converting your RAID 1 array to RAID 5?
First, create your RAID 1 array:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop1 /dev/loop2
Add another disk:
# mdadm --add /dev/md0 /dev/loop3
As you can see, it will be listed as a spare (S) in your RAID 1 array:
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 loop3(S) loop2 loop1
      1023424 blocks super 1.2 [2/2] [UU]
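Since spares are tagged with (S) in /proc/mdstat, a quick grep can count them for you. A trivial sketch, again run against a captured sample:

```shell
#!/bin/sh
# Count spare devices: spares are tagged with (S) in /proc/mdstat.
count_spares() { grep -o '(S)' | wc -l; }

# Captured sample line; on a live system: count_spares < /proc/mdstat
sample_mdstat='md0 : active raid1 loop3(S) loop2 loop1'
echo "$sample_mdstat" | count_spares
```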
Now for the cool part:
# mdadm --grow --level=5 --raid-devices=3 /dev/md0
mdadm: level of /dev/md0 changed to raid5
mdadm: Need to backup 128K of critical section..
Let's have a look:
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 loop3 loop2 loop1
      1023424 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [=>...................]  reshape =  6.4% (65600/1023424) finish=1.9min speed=8184K/sec
Isn't that nice?
Now that the concepts are familiar, it shouldn't be difficult to start doing some more advanced stuff. The man page for mdadm is very good, but here are some useful commands:
- To query the device in detail: "mdadm -QD /dev/md0"
- To stop the array: "mdadm -S /dev/md0"
- To fail a device in the array: "mdadm /dev/md0 -f /dev/loop1"
- To remove a device from the array: "mdadm /dev/md0 -r /dev/loop1"
Some example losetup commands:
- To check the status of a loop device: "losetup /dev/loop1"
- To detach the loop device: "losetup -d /dev/loop1"
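When you're done experimenting, tear everything down in roughly the reverse order it was built. A sketch (dry-run by default, echoing the commands; the names assume the layout used throughout this tutorial):

```shell
#!/bin/sh
# Tear down the practice array: unmount, stop md0, detach loops, delete files.
# With DRY_RUN=1 (the default here) the commands are only printed.
maybe() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

teardown() {
    maybe umount mnt0
    maybe mdadm -S /dev/md0
    for i in 1 2 3; do
        maybe losetup -d "/dev/loop$i"
        maybe rm "file$i"
    done
}

teardown
```

Run with DRY_RUN=0 as root once you're happy with what it's going to do.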