
For a while I have been trying to set up a new server system with a RAID 1 as the boot source. The RAID consists of two 4 TB hard disks. I boot from a USB stick that I have prepared with the Debian 12 installation image. In the beginning everything runs normally: I configure the network and create the first two users. When it comes to the partitioning part, I select the manual way.

In the upcoming table I see my two 4 TB drives and the USB boot device. The hard disk SCSI9 is an emergency spare; I will not use it here. I choose to configure the software RAID.

I select "Create MD device"

I select RAID 1

2 RAID devices

0 Spare devices

I select my 2 free disks

Once more back to partitioning. This time I select guided mode

All files in one partition

I select my new RAID device

After one or two minutes the newly partitioned RAID comes up. Looks promising...

I confirm the changes and the system starts fetching and installing the rest of the OS and all utilities. This process is nicely designed and quite straightforward so far. The only problem: it does not finish. :-( When it comes to installing GRUB (two hours later), it dies... I found no way to get beyond this point. What is my mistake, and how can I fix it?

I have tried a lot of variations based on extensive internet searches, all with the same result. This was the most promising one: skipping part of the installation process and manually generating the array with mdadm: https://www.server-world.info/en/note?os=Debian_12&p=raid1


LudgerH

2 Answers


4 TB disks require the use of GPT, and you can't properly have both a whole-disk partitionable RAID and disks that the firmware recognizes as bootable (strictly impossible with EFI, not so strictly with legacy BIOS, but that requires dark magic you don't want to know).

(It's a shame the Debian installer even accepted this scheme. It shouldn't: the scheme is strictly invalid and could not possibly work.)

This is because the GPT partition table is actually stored twice: at the beginning and at the end of the device. When whole-disk RAID is used and the disk is read without interpreting the MD metadata, the two GPT copies are never both where the firmware expects them:

  • for the v1.0 MD superblock, the second GPT (at the end) will be missing (it is moved earlier, to a place the BIOS doesn't expect);
  • for the v1.1 and v1.2 MD superblocks, the first GPT (at the beginning) will be shifted, so the firmware won't find it.

Either way, the firmware will not recognize the disks as having a valid partition table and will refuse to boot from them.
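The offset mismatch can be sketched with a bit of shell arithmetic (a toy model; the 4 TB size, 512-byte sectors, and the superblock reservations of ~8 KiB at the end and 2048 sectors at the start are illustrative assumptions, not exact mdadm numbers):

```shell
# Toy model: where firmware looks for GPT headers vs. where they land
# when GPT is written inside a whole-disk MD array.
sector=512
disk_sectors=$(( 4000000000000 / sector ))   # last LBA = disk_sectors - 1

want_primary=1                       # firmware: primary GPT header at LBA 1
want_backup=$(( disk_sectors - 1 ))  # firmware: backup header at last LBA

# v1.0 superblock: stored at the end, data starts at LBA 0, but the
# array is a bit shorter than the disk -> backup GPT lands too early.
v10_backup=$(( (disk_sectors - 16) - 1 ))
echo "v1.0: backup GPT is $(( want_backup - v10_backup )) sectors before the last LBA"

# v1.2 superblock: 4 KiB from the start, data offset typically 2048
# sectors -> primary GPT header lands at LBA 2049 instead of LBA 1.
v12_primary=$(( 2048 + 1 ))
echo "v1.2: primary GPT at LBA $v12_primary, firmware expects LBA $want_primary"
```

Either way, one of the two headers the firmware checks is not where it looks.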

In addition to that, if you want to boot with UEFI, you need to know that EFI firmware has no clue that the ESP could be on a software RAID (there is nothing about it in the spec). So it must not be. The ESP must always be a plain GPT partition.

So to resolve this, instead of building the RAID and then partitioning it, you first partition the disks and then collect some partitions into RAIDs. While the details are debatable, I suggest the following scheme:

  • for an EFI install: an ESP (type 1) of 511 MiB (default offset 1 MiB), then 512 MiB for /boot of type Linux RAID, then the rest of the disk, also of type Linux RAID (this is type 29 in fdisk, if I am not mistaken).
  • for a legacy install: 1 MiB (type 4, biosgrub), 510 MiB for /boot (RAID), and the rest as RAID.
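The same two layouts could be written as sfdisk scripts (a sketch; the sizes follow the scheme above, the GUIDs are the standard GPT type GUIDs, and each script would be applied to each disk separately, e.g. sfdisk /dev/sda < layout.txt):

```
# EFI layout
label: gpt
# 511 MiB ESP
size=511MiB, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B
# 512 MiB /boot, Linux RAID
size=512MiB, type=A19D880F-05FC-4D3B-A006-743F0F84911E
# rest of the disk, Linux RAID
type=A19D880F-05FC-4D3B-A006-743F0F84911E

# Legacy/BIOS layout (separate script)
label: gpt
# 1 MiB BIOS boot partition (biosgrub)
size=1MiB, type=21686148-6449-6E6F-744E-656564454649
# 510 MiB /boot, Linux RAID
size=510MiB, type=A19D880F-05FC-4D3B-A006-743F0F84911E
# rest of the disk, Linux RAID
type=A19D880F-05FC-4D3B-A006-743F0F84911E
```

The Debian installer's manual partitioning does the same thing interactively; the scripts are only a compact way to see the intended end state.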

Then, you create two RAIDs (/boot and the rest) and select one of the ESPs to be "the" ESP; you'll enable booting from the second disk after the install. Then you create LVM on the big RAID to hold the file systems; there you may create a swap volume and a root FS volume (30 GiB is enough for Debian, and it's easy to enlarge on the fly; note that you should place all data on other, dedicated mounted volumes: it's no good to store application data in the root volume). The rest can be created as needed, during the system's lifetime.
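From a shell, that construction would look roughly like this (a sketch only; the partition names sda2/sda3/sdb2/sdb3, the volume group name, and the volume sizes are assumptions to be adjusted before running anything — the installer performs the equivalent steps for you):

```shell
# Mirror for /boot and mirror for the rest; metadata 1.0 keeps the
# /boot superblock at the end, which bootloaders tolerate best.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda2 /dev/sdb2
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# LVM on the big mirror: one volume group, swap + a 30 GiB root volume.
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 8G  -n swap vg0
lvcreate -L 30G -n root vg0
```

Further logical volumes for application data can be added to vg0 later without repartitioning.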

Then you install the system as usual. The installer must create FAT32 on the ESP partition; the Debian 11 installer had problems with that, so I had to create it manually. I don't know about 12, since I haven't performed such an install, only upgrades. When it comes to bootloader installation, just do as it suggests for EFI; for legacy you may simply repeat this step and install it twice, selecting the second disk the second time, so the system is redundantly bootable right away.

For EFI, after the first system boot, you need to manually create a FAT32 file system on the second "ESP" partition, mount it somewhere (I use /boot/efi2), and copy everything from /boot/efi, retaining the structure. Then you create a second firmware boot entry using efibootmgr; here are the instructions.
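That duplication step might look like this (a sketch; /dev/sdb1 as the second ESP, the boot entry label, and the grubx64.efi loader path are assumptions to check against your own system):

```shell
# Format the second "ESP", mount it, and mirror the real one into it.
mkfs.vfat -F 32 /dev/sdb1
mkdir -p /boot/efi2
mount /dev/sdb1 /boot/efi2
cp -a /boot/efi/. /boot/efi2/

# Add a firmware boot entry pointing at the copy on the second disk.
efibootmgr --create --disk /dev/sdb --part 1 \
           --label "debian (disk 2)" \
           --loader '\EFI\debian\grubx64.efi'
```

Remember that this copy is not kept in sync automatically; repeat the cp after GRUB updates, or script it.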


Done... Thanks, Nikita, for pointing me in the right direction! Here is a quick cooking recipe of how it worked for me (I omitted a couple of "do you really want to repartition your disks" confirmations):

  • Go to the mainboard's BIOS configuration and disable all UEFI stuff.
  • Boot from the USB stick with the Debian 12 installation image.
  • Start "Graphical Install".
  • Do the first steps (network installation etc.) the normal way.
  • When the installer comes to "Partition disks", choose "Manual".
  • Delete all old partitions from your two target drives.
  • Create a new partition on the first target drive. When asked for the size, put "1MB". Place the partition at the "Beginning" of the available space. Change the "Use as" setting to "Reserved BIOS boot area".
  • Create a new partition on the first target drive. When asked for the size, put "max". Change the "Use as" setting to "physical volume for RAID".
  • Repeat the last two steps for the second target drive.
  • Back in the partitioning menu, select "Configure Software RAID". Select "Create MD device". Select "RAID1". Active devices is "2", spare devices is "0". Then select the two RAID partitions you prepared earlier. Leave the RAID setup with "Finish".
  • Back in the partitioning menu, select "Guided partitioning". Select "Guided - use entire disk and set up LVM". Select the RAID device you prepared earlier. Select "All files in one partition". When asked about the size of the LVM, answer "max".
  • Your partition list should now look somewhat like this. Select "Finish partitioning and write changes to disk".
  • If everything went well, your system will now launch into the standard Debian installation process (which may take a while). You answer the questions exactly as you would on a normal installation process.
  • Wake up once more when it comes to writing the GRUB thing to your disks: write it once to the first target disk, then go back and do the same for the second target disk.
  • The rest of the procedure is standard...
  • You can check the status of your RAID (including the progress of the sync process) by running watch cat /proc/mdstat from the command line.
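The last two steps of the recipe can also be repeated from a shell at any time after the first boot (a sketch; /dev/sda and /dev/sdb are assumed device names for the two target disks):

```shell
# Watch the mirror resync progress (Ctrl-C to leave).
watch cat /proc/mdstat

# Legacy boot: GRUB can be (re)installed on both disks whenever needed,
# e.g. after replacing a failed member of the mirror.
grub-install /dev/sda
grub-install /dev/sdb
update-grub
```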
LudgerH