Adding sda1 vs. sda to a Linux software RAID

Problem:

I’m running a Linux md software RAID 6 across several USB hard drives. Each drive carries a single partition of type fd (Linux RAID autodetect), and the array is built on those partitions, i.e. it uses sda1, sdb1, sdc1, … as its member devices.

I just had the USB controller disconnect one of the drives and it dropped out of the array. I unplugged the drive, plugged it back in and added it back into the array, except that I accidentally typed mdadm --add /dev/md0 /dev/sdc instead of mdadm --add /dev/md0 /dev/sdc1 (note the sdc vs. sdc1).

Mdadm started rebuilding the “new” drive.

When I noticed, I stopped the array and, to my surprise, fdisk reported that the partition table on sdc was still fine. I restarted the array and this time added sdc1 back into the array. Mdadm took the drive without complaints and simply marked it as active. No rebuild required… ???

This leaves me with the following questions:

  • If I add a whole drive, rather than a partition on the drive, directly to a Linux RAID, does mdadm notice this and leave the first couple of sectors of the drive unused?
  • Or does it even automatically detect that there is a Linux RAID partition on the drive and default to using that?
  • Or did the partial rebuild actually destroy the beginning of the data on the drive (wrote data earlier on the disk than it should have), and mdadm just didn’t detect this when I re-added the partition correctly?
  • I’m having mdadm check the array right now and it’s not complaining about anything. Does this mean all is well???
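One way to probe the first two questions is to ask mdadm directly whether each device carries an md superblock. The sketch below uses `mdadm --examine` (a real mdadm mode) wrapped in a hypothetical helper, `check_md`; it assumes mdadm is installed, and the device names are taken from the question:

```shell
# Hypothetical helper: report whether a device carries an md superblock.
# Assumes mdadm is installed; quiet output, we only care about the verdict.
check_md() {
  if mdadm --examine "$1" >/dev/null 2>&1; then
    echo "$1: md superblock found"
  else
    echo "$1: no md superblock"
  fi
}

check_md /dev/sdc    # whole disk  - a superblock here would mean the accidental --add wrote one
check_md /dev/sdc1   # partition   - the member the array was originally built on
```

If only sdc1 reports a superblock, mdadm most likely ended up using the partition; a superblock on the raw sdc as well would suggest the mistaken add really did write to the whole disk.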


Unfortunately, everything is not well… I can’t mount the RAID device any more, and xfs_repair is busy trying to find a non-corrupt superblock right now… Let’s hope it succeeds… Yay for Linux software RAID… Zero fault tolerance for user error…
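When reaching for xfs_repair in a situation like this, it’s worth letting it report first before it writes anything. The wrapper below is a hypothetical sketch; `-n` is xfs_repair’s real no-modify flag, and /dev/md0 is the array device from the question:

```shell
# Hypothetical wrapper: always dry-run xfs_repair before letting it write.
# Assumes xfsprogs is installed; -n checks the filesystem without modifying it.
try_repair() {
  if xfs_repair -n "$1" >/dev/null 2>&1; then
    echo "$1: clean"          # no-modify pass found nothing to fix
  else
    echo "$1: needs repair"   # rerun without -n; -L (zero the log) only as a last resort
  fi
}

try_repair /dev/md0
```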

Solution:

Whichever device you add — the whole disk or the partition — the disk retains its master boot record and partition table. This is why your disk still appeared intact to fdisk.
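Some back-of-the-envelope arithmetic shows why the table can survive a whole-disk add. This assumes v1.2 metadata (whose superblock sits 4 KiB into the device) and a first partition at the common 1 MiB alignment — both assumptions, since the question states neither; with the older 0.90 format the superblock lives at the end of the device instead, which likewise spares the MBR:

```shell
# Rough layout arithmetic, assuming v1.2 metadata and a 1 MiB-aligned partition:
SECTOR=512
MBR_END=$SECTOR                    # MBR + partition table occupy sector 0
SB_OFFSET=$((8 * SECTOR))          # v1.2 superblock sits 4 KiB into the device
PART_START=$((2048 * SECTOR))      # first partition commonly starts at sector 2048

# A superblock written to the whole disk lands between the partition table
# and the first partition, so fdisk still reports an intact table.
[ "$SB_OFFSET" -ge "$MBR_END" ] && [ "$SB_OFFSET" -lt "$PART_START" ] &&
  echo "partition table survives"
```

This only explains the surviving table: the rebuild’s data writes still land inside the region the partition occupied, which is consistent with the filesystem damage described in the question.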

Two further points:

  1. XFS is absolute junk. And I say this after many an 18+ hour day recovering XFS-based servers. Bin it, pick something good – you’ll thank me later. If you want software RAID and a file system, ZFS will accomplish both.
  2. USB is horrible for RAID devices. If you want multiple drives in an external array, invest a little and buy yourself an external SATA enclosure for the drives.