
I have a bunch of old 1T disks with an mdadm array on them. They had been out of commission for a while, but yesterday I slotted them into a server running an up-to-date Debian Jessie.

Eventually I got the array back together, but two drives refused to re-add. Looking at those drives, it appears I had at some point added the whole devices to the array rather than their Linux RAID autodetect partitions (sdz rather than sdz1). I get what looks like proper output from mdadm -E /dev/sdz, but running mdadm -E /dev/sdz1 fails with mdadm: cannot open /dev/sdz1: No such device or address.

Looking into it further, it seems that the partition nodes for these two drives are character special devices rather than block special devices:

root@comp:~# file /dev/sda1        # good drive
/dev/sda1: block special (8/225)
root@comp:~# file /dev/sdz1        # bad drive
/dev/sdz1: character special (8/209)

Even after zeroing the entire bad drive with dd and recreating the partition table with fdisk, the nodes still come back the same way. What's going on here?
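As a side note, besides file, stat can print a node's type and major/minor numbers directly. A minimal sketch, demonstrated on /dev/null (a character device present on every Linux system) since the sdz nodes are specific to my machine:

```shell
# %n = name, %F = file type, %t:%T = major:minor (in hex) for device nodes.
stat -c '%n: %F (%t:%T)' /dev/null
# A healthy partition node such as /dev/sda1 would report "block special file".
```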


Edit: Here's what ls says about these devices:

root@comp:~# ls -l /dev/sdz*
brw-rw---- 1 root disk 65, 0 Feb  1 15:02 /dev/sdz
cr-------- 1 root root 65, 1 Jan 31 18:31 /dev/sdz1

Edit 2: Relevant lines from /proc/partitions:

root@comp:~# cat /proc/partitions | egrep 'sdz|sda'
  65        0  976762584 sdz
  65       32  976762584 sda
  65       33  976760832 sda1

I don't understand why the sdz1 partition is not showing up here.
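For what it's worth, /proc/partitions reflects the kernel's in-memory partition table, while the nodes under /dev are created separately (by udev/devtmpfs), so the two views can disagree. A quick way to dump the kernel's side of things, assuming the standard /proc layout:

```shell
# Print the partition/device names the kernel currently knows about.
# /proc/partitions has a two-line header; the name is column 4.
awk 'NR > 2 && $4 != "" { print $4 }' /proc/partitions
```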

1 Answer


Deleting /dev/sdz1 (the character-device partition node) with a plain rm /dev/sdz1, and then running partprobe /dev/sdz, caused the partition to show up properly as a block device.

I have no explanation for why this originally happened, but this solution worked for me.
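For anyone hitting the same thing, the whole fix boils down to two commands, run as root (sdz stands in for whichever disk has the bogus node):

```shell
# Remove the stale character-device node, then ask the kernel to
# re-read the partition table; udev recreates /dev/sdz1 as a block device.
rm /dev/sdz1
partprobe /dev/sdz
```

If parted (which provides partprobe) is not installed, blockdev --rereadpt /dev/sdz from util-linux does roughly the same thing, provided the disk is not in use.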