Set up a software RAID1 array on a running CentOS 6.3 system using mdadm (Multiple Device Administrator). All commands are run from a terminal as the super user.

Starting point: a default CentOS 6.3 installation with two hard drives, /dev/sda and /dev/sdb, identical in size. The machine name is "serverbox.local". /dev/sdb is currently unused, and /dev/sda has the following partitions:

 /dev/sda1: /boot partition, ext4
 /dev/sda2: used for LVM (volume group vg_serverbox), contains / (volume root), swap (volume swap_1) and /home (volume home)

Final RAID1 configuration:

 /dev/md0 (made up of /dev/sda1 and /dev/sdb1): /boot partition, ext4
 /dev/md1 (made up of /dev/sda2 and /dev/sdb2): LVM (volume group vg_serverbox), contains / (volume root), swap (volume swap_1) and /home (volume home)

1. Gather information about current system.

Report the current disk space usage:

df -h

View physical disks:

fdisk -l

View LVM physical volumes:

pvdisplay

View volume group details:

vgdisplay

View logical volumes:

lvdisplay

Load the RAID kernel modules (so a reboot is not needed):

modprobe linear
modprobe raid0
modprobe raid1

Verify personalities:

cat /proc/mdstat

The output should look as follows:

serverbox:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>

2. Preparing /dev/sdb

To create a RAID1 array on a running system, first prepare the /dev/sdb hard drive for RAID1, then copy the contents of /dev/sda to it, and finally add /dev/sda to the array.

Copy the partition table from /dev/sda to /dev/sdb so that both disks have exactly the same layout:

sfdisk -d /dev/sda | sfdisk -f /dev/sdb

Verify both disks are partitioned identically:

fdisk -l
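As an additional, optional check (a small sketch that assumes a bash shell with process substitution), the two partition dumps can be compared directly; no output means the layouts match:

diff <(sfdisk -d /dev/sda | sed 's/sda/sdb/g') <(sfdisk -d /dev/sdb)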

Change the type of both partitions on /dev/sdb to Linux raid autodetect:

fdisk /dev/sdb

serverbox:~# fdisk /dev/sdb

Command (m for help): t        [t = change a partition's system id]
Partition number (1-5): 1
Hex code (type L to list codes): fd        [fd = Linux raid auto]

Command (m for help): t
Partition number (1-5): 2
Hex code (type L to list codes): fd

Command (m for help): w        [w = write changes to disk]
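If you prefer to avoid the interactive fdisk session, the same result can usually be achieved non-interactively with parted (a sketch; assumes the parted package is installed, and on an MBR disk the raid flag sets type fd):

parted /dev/sdb set 1 raid on
parted /dev/sdb set 2 raid on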

Verify changes written successfully:

fdisk -l

Remove any traces of previous RAID installations from /dev/sdb (an error is displayed if none exist):

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2

3. Creating RAID Arrays

Create RAID arrays /dev/md0 and /dev/md1. /dev/sdb1 will be added to /dev/md0 and /dev/sdb2 to /dev/md1. The placeholder "missing" reserves the slot for the /dev/sda partitions, which are added later.

mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-disks=2 missing /dev/sdb2

Verify the arrays were created ([_U] or [U_] means an array is degraded, while [UU] means the array is OK):

cat /proc/mdstat
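While the arrays are still degraded, the output should look roughly like the following (block counts omitted; they depend on your disk size):

Personalities : [linear] [raid0] [raid1]
md1 : active raid1 sdb2[1]
      ... blocks [2/1] [_U]
md0 : active raid1 sdb1[1]
      ... blocks [2/1] [_U]
unused devices: <none>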

Create an ext4 file system on the non-LVM RAID array /dev/md0:

mkfs.ext4 /dev/md0

Initialize the physical volume /dev/md1 for LVM (Logical Volume Manager):

pvcreate /dev/md1

Add /dev/md1 to our volume group vg_serverbox:

vgextend vg_serverbox /dev/md1

Verify that array /dev/md1 was added to the volume group:

pvdisplay

Review volume group details: (optional)

vgdisplay

Update the configuration file /etc/mdadm.conf (keeping a backup of the original):

cp /etc/mdadm.conf /etc/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm.conf
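The scan appends one ARRAY line per array; they will look something like the following (the UUIDs here are placeholders, yours will differ):

ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx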

Review the updated configuration file (the array devices are appended at the end):

cat /etc/mdadm.conf

Modify /etc/fstab. Replace /dev/sda1 with /dev/md0 so that the /boot line looks as follows (if the existing entry uses a UUID= or LABEL= specifier for /boot, replace that line instead):

gedit /etc/fstab

[...]
/dev/md0 /boot ext4 defaults 0 2
[...]

Modify /etc/mtab. In the line containing /dev/sda1, replace it with /dev/md0:

gedit /etc/mtab

[...]
/dev/md0 /boot ext4 rw 0 0
[...]
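If you prefer not to hand-edit /etc/mtab, the same substitution can be done with sed (a sketch; it assumes the /boot entry starts with /dev/sda1 followed by a space):

sed -i 's|^/dev/sda1 |/dev/md0 |' /etc/mtab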

Open /boot/grub/menu.lst and add fallback 1 right after default 0:

gedit /boot/grub/menu.lst

[...]
default 0
fallback 1
[...]

Duplicate the first kernel stanza (from title through savedefault). In the first copy, replace root (hd0,0) with root (hd1,0):

[...]
title Vg_serverbox GNU/Linux, kernel 2.6.18-6-686
    root (hd1,0)
    kernel /vmlinuz-2.6.18-6-686 root=/dev/mapper/vg_serverbox-root ro
    initrd /initrd.img-2.6.18-6-686
    savedefault

title Vg_serverbox GNU/Linux, kernel 2.6.18-6-686
    root (hd0,0)
    kernel /vmlinuz-2.6.18-6-686 root=/dev/mapper/vg_serverbox-root ro
    initrd /initrd.img-2.6.18-6-686
    savedefault

[...]

* Remove any kernel options that prevent md devices (not 'dm') from being loaded.

Rebuild the initramfs so that it includes mdadm.conf (see https://wiki.ubuntu.com/Initramfs for background):

mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old
dracut --mdadmconf --force /boot/initramfs-$(uname -r).img $(uname -r)
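Optionally confirm that mdadm support and the configuration made it into the new image (lsinitrd ships with dracut):

lsinitrd /boot/initramfs-$(uname -r).img | grep -i mdadm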

4. Moving Data to the RAID Arrays

Copy the data from the boot partition /dev/sda1 to the array /dev/md0 (assuming /dev/sda1 is mounted at /boot):

mkdir /mnt/raid
mount /dev/md0 /mnt/raid
cd /boot; find . -depth | cpio -pmd /mnt/raid
touch /mnt/raid/.autorelabel
sync
umount /mnt/raid
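To double-check the copy, the array can be remounted and compared against /boot (an optional sketch; the .autorelabel file created above will show up as an expected difference):

mount /dev/md0 /mnt/raid
diff -r /boot /mnt/raid
umount /mnt/raid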

Move the contents of the LVM partition /dev/sda2 to the LVM RAID array /dev/md1 (takes approx. 45 min. for 200 GB):

pvmove /dev/sda2 /dev/md1
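pvmove prints its progress periodically as it runs; if you want to control how often, the reporting interval in seconds can be set explicitly (a sketch, assuming the -i/--interval option of your LVM version):

pvmove -i 30 /dev/sda2 /dev/md1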

Remove /dev/sda2 from the volume group vg_serverbox:

vgreduce vg_serverbox /dev/sda2

Remove the physical volume /dev/sda2 from LVM:

pvremove /dev/sda2

Verify /dev/md1 is the only physical volume for volume group vg_serverbox:

pvdisplay

Change the partition type of /dev/sda2 to Linux raid autodetect:

fdisk /dev/sda

Command (m for help): t
Partition number (1-5): 2
Hex code (type L to list codes): fd

Command (m for help): w
The partition table has been altered!

Add /dev/sda2 to the /dev/md1 array (takes approx. 35 min. for a 200 GB array to be rebuilt):

mdadm --add /dev/md1 /dev/sda2

Monitor the rebuild in progress (to leave watch, press CTRL+C):

watch -n 5 cat /proc/mdstat
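For a more detailed view of the rebuild (state, rebuild percentage, which device sits in which slot), mdadm itself can be queried:

mdadm --detail /dev/md1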

5. Preparing GRUB

Install the GRUB bootloader on both hard drives, /dev/sda and /dev/sdb:

grub

grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
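The same installation can also be scripted non-interactively, which is handy if you repeat this procedure; a sketch using the GRUB legacy shell's batch mode:

grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF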

Verify the system will boot:

reboot

After the system reboots, verify that /dev/md0 is mounted at /boot:

df -h

Verify the arrays are active (status should be [UU] for /dev/md1 and [_U] for /dev/md0, since /dev/sda1 has not been added yet):

cat /proc/mdstat

Verify the logical volumes using pvdisplay, vgdisplay, and lvdisplay (optional):

pvdisplay
vgdisplay
lvdisplay
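The short-form LVM commands give a compact, one-line-per-item summary if you just want a quick overview:

pvs
vgs
lvs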

Change the partition type of /dev/sda1 to Linux raid autodetect:

fdisk /dev/sda

Command (m for help): t
Partition number (1-5): 1
Hex code (type L to list codes): fd

Command (m for help): w

Add /dev/sda1 to the /dev/md0 RAID array (takes < 1 min.):

mdadm --add /dev/md0 /dev/sda1

Verify both arrays are active and fully synced [UU]:

cat /proc/mdstat

Update /etc/mdadm.conf again (restore the original backup, then re-scan, so the file reflects the final array state):

cp /etc/mdadm.conf_orig /etc/mdadm.conf
mdadm --examine --scan >> /etc/mdadm.conf

Reboot the system:

reboot

That's it - the software RAID1 array is active and running!

6. Testing and Rebuilding

Physically remove the drive from the system, or:

Simulate a failure of /dev/sdb in software:

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2

Shut down the system:

shutdown -h now

Reboot. The system should come up in a degraded state. Verify the arrays are active but degraded ([_U] or [U_]):

cat /proc/mdstat

Shut down the system:

shutdown -h now

Put in a new /dev/sdb drive. (If /dev/sda had failed instead, move /dev/sdb into the failed /dev/sda slot and connect the new drive as /dev/sdb.) Boot the system.

Verify that /dev/sda is the boot device and /dev/sdb is empty:

fdisk -l

Copy the partition table of /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk -f /dev/sdb

Remove any remains of the RAID array from /dev/sdb:

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2

Add /dev/sdb to the RAID arrays:

mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md1 /dev/sdb2

Monitor the rebuilding process:

watch -n 5 cat /proc/mdstat

Wait until the synchronization has finished.

Install the bootloader on both HDDs:

grub

grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit

That's it. The failed hard drive in the RAID1 array has been replaced.