
I use my Ubuntu machine as a file server for Windows/Linux/Mac clients using a Samba share. I need it to be easily expandable by just adding more hard drives without having to move any data back and forth.

This is how I have done it so far, and I have successfully added a fourth hard drive. Now it would be nice to know: is this how it should be done? What am I doing wrong, and what could I do better?

Creating the initial 3 drive array

I started with three empty drives: /dev/sdb, /dev/sdc and /dev/sdd.

First I created an empty partition on each drive:

$ fdisk /dev/sdX
n # Create a new partition
p # Primary
1 # First partition
[enter] # Starting point to first sector (default)
[enter] # Ending point to last sector (default)
t # Change partition type
fd # Type: Linux raid autodetect
w # Write changes to disk

When empty RAID partitions had been created on all three disks, I created a RAID5 array:

$ mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
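The array starts resyncing in the background right away. Its progress can be checked at any time with these read-only status commands:

```shell
# Watch the initial sync; the "finish" field gives an ETA
$ cat /proc/mdstat

# More detail, including array state and per-device status
$ mdadm --detail /dev/md0
```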

The RAID5 array is now created and begins building immediately. Building takes time, but you can already proceed with creating a new LVM2 physical volume:

$ pvcreate /dev/md0

Now let's create a new volume group:

$ vgcreate vg_raid /dev/md0

Then we need to create a new logical volume inside that volume group. First we need to figure out the exact size of the created volume group:

$ vgdisplay vg_raid

The size can be seen from the row which indicates the "Total PE" in physical extents. Let's imagine it is 509. Now create a new logical volume which takes all available space:

$ lvcreate -l 509 vg_raid -n lv_raid
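Instead of copying the extent count by hand, lvcreate can also be told to claim all free space directly; for a freshly created volume group this is equivalent:

```shell
# Allocate every free extent in vg_raid to the new logical volume
$ lvcreate -l 100%FREE -n lv_raid vg_raid
```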

Finally we can create a file system on top of that logical volume:

$ mkfs.xfs /dev/mapper/vg_raid-lv_raid

To be able to use our newly created RAID array, we need to create a directory and mount it:

$ mkdir /raid
$ mount /dev/mapper/vg_raid-lv_raid /raid

Now it is ready to use. But for it to mount automatically after a reboot, we need to save the RAID geometry to mdadm's configuration file:

$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Then add the following line to /etc/fstab which mounts the RAID array automatically:

/dev/mapper/vg_raid-lv_raid /raid xfs noatime,nodiratime,logbufs=8 0 0

Now the RAID array is ready to use, and mounted automatically to /raid directory after every boot.
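The fstab entry can be verified without rebooting, since mounting by mount point alone forces mount to look it up in /etc/fstab:

```shell
# Unmount, then remount via the /etc/fstab entry
$ umount /raid
$ mount /raid

# Confirm it is mounted with the expected options
$ mount | grep raid
```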

Adding a new drive to the array

Let's imagine that now you have a new drive, /dev/sde, which you want to add to the previously created array without losing any data.

First the new drive needs to be partitioned in the same way as the other drives:

$ fdisk /dev/sde
n # Create a new partition
p # Primary
1 # First partition
[enter] # Starting point to first sector (default)
[enter] # Ending point to last sector (default)
t # Change partition type
fd # Type: Linux raid autodetect
w # Write changes to disk

Then it needs to be added to the RAID array:

$ mdadm --add /dev/md0 /dev/sde1

Now the RAID5 array includes four drives, of which only three are currently in use. The array needs to be expanded to include all four drives:

$ mdadm --grow /dev/md0 --raid-devices=4
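The reshape rewrites every stripe across all four disks and can take many hours; the extra capacity only becomes available once it finishes, so it is worth watching before moving on:

```shell
# Follow the reshape; the "reshape = ...%" line shows progress and an ETA
$ watch -n 60 cat /proc/mdstat
```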

Then the physical LVM2 volume needs to be expanded:

$ pvresize /dev/md0

Now the physical volume is resized by default to cover all available space in the RAID array. We need to find out the new size in physical extents:

$ vgdisplay vg_raid

Let's imagine that the new size is now 764 (can be seen from "Total PE"). Now expand the logical volume to cover this:

$ lvextend /dev/mapper/vg_raid-lv_raid -l 764

Then expand the XFS file system. This can be done while the file system is online and mounted:

$ xfs_growfs /raid

By default it is expanded to cover all available space. Finally the RAID array geometry needs to be updated because the array now includes a new disk. First delete the added line from /etc/mdadm/mdadm.conf and then add a new one:

$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
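As a final sanity check, every layer of the stack should now report the grown size:

```shell
$ mdadm --detail /dev/md0   # should list 4 active devices
$ pvs                       # physical volume size has grown
$ vgs                       # volume group size has grown
$ lvs                       # logical volume size has grown
$ df -h /raid               # file system size has grown
```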
Taskinen

1 Answer


I think you've got it right. Make sure you understand and heed the warnings regarding growing RAID 5 in man 8 mdadm.

Personally, if I were growing an LVM volume, I would not grow an existing RAID array to do it. I'd create another RAID array, create a new physical volume from it, and add it to the same volume group. This is a much safer operation (it doesn't involve rewriting the whole RAID5 array across the new set of disks) and keeps the size of your arrays down.
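A rough sketch of that alternative, assuming two hypothetical new drives /dev/sde and /dev/sdf, partitioned as described in the question and used here as a RAID1 mirror (any RAID level works; the device names are placeholders):

```shell
# Build a second, independent array from the new drives
$ mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1

# Turn it into a physical volume and add it to the existing volume group
$ pvcreate /dev/md1
$ vgextend vg_raid /dev/md1

# Grow the logical volume and the file system into the new space
$ lvextend -l +100%FREE /dev/mapper/vg_raid-lv_raid
$ xfs_growfs /raid

# Record the new array too (after removing the stale ARRAY lines first)
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```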

Kamil Kisiel