6

I know I can specify mounts in fstab either by device path (like /dev/sda1 or /dev/mapper/myvg-logicalVolume1), by filesystem label (LABEL=root), or by UUID (UUID=1234-5678-...).
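
For reference, the three forms would look something like this in fstab (mount points and values made up for illustration):

/dev/mapper/myvg-logicalVolume1  /data  ext4  defaults  0  2
LABEL=root                       /      ext4  defaults  0  1
UUID=1234-5678-...               /boot  ext4  defaults  0  2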

I see a clear reliability advantage in using the UUID for "classic" partitions like /dev/sda1: if you repartition a drive, move partitions around, or add more disks, some of your partitions may be recognized under another name. On the other hand, with UUIDs it is harder to tell in which partition/LV your data is actually stored.

But with LVM, my gut tells me that the LVM system itself manages the discovery of its disks/partitions, and it doesn't matter if some PV ends up (after playing with partitions/disks) with a different name. So there should be no difference in reliability between mounting by UUID and using a path like /dev/mapper/vg-lv, and the latter is clearer.
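
For example, pvs seems to identify PVs by their on-disk metadata (their PV UUID) rather than by device name:

$ sudo pvs -o pv_name,vg_name,pv_uuid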

Is this correct?

Alexis Wilke
  • 2,496

4 Answers

5

That's correct.

Mounting by UUID is one way to work around the old issue of partition names like /dev/sda1 changing if you put another drive in.

device-mapper persistently names your LVM volumes as /dev/mapper/vg-lv, so you can rely on this abstracted name staying the same regardless of changes to the underlying storage.
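
For example, both the /dev/mapper name and the /dev/<vg>/<lv> symlink point at the same dm device (names and output abridged for illustration):

$ ls -l /dev/mapper/vg-lv /dev/vg/lv
lrwxrwxrwx 1 root root 7 ... /dev/mapper/vg-lv -> ../dm-0
lrwxrwxrwx 1 root root 7 ... /dev/vg/lv -> ../dm-0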

The same goes for devices handled by device-mapper-multipath, either without friendly names (/dev/mapper/WWID) or with friendly names and a bindings file (/dev/mapper/mpath0).
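
The friendly-names behaviour is controlled in multipath.conf, roughly like this (exact defaults vary by distribution):

# /etc/multipath.conf (excerpt)
defaults {
    user_friendly_names yes    # stable mpathN names, recorded in /etc/multipath/bindings
}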

suprjami
  • 3,626
3

You'd only shoot yourself in the foot if you later wanted to rename the volume group or logical volume (lvrename or vgrename).

Whatever the reason for renaming the VG or LV, doing so would break mounts and exports that refer to the old path.

The LV UUID, however, remains persistent through vgrename and lvrename.

It may be good to use UUID for this reason alone, especially if you are responsible for a large number of exports and mounts.
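
For instance (volume and filesystem names assumed for illustration), the filesystem UUID that fstab's UUID= matches survives a rename (blkid reports the same value before and after), while the /dev/mapper path does not:

$ sudo blkid -s UUID /dev/mapper/vg00-data
/dev/mapper/vg00-data: UUID="..."
$ sudo lvrename vg00 data archive
$ sudo blkid -s UUID /dev/mapper/vg00-archive
/dev/mapper/vg00-archive: UUID="..."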

Derek Manning
  • 81
0

I just spent 3 hours fighting a reboot that would fail over and over again...

The problem with newer systems (I'm on 24.04 now) is that they want to make everything graphical. So when booting fails, good luck finding the error on your totally black screen. Sad, if you ask me. A console with an error message makes it so much easier to fix problems quickly.

In my case, I wanted to move my /var to a different, safer drive (instead of the boot drive, which I consider unsafe: no RAID-1, an SSD... what can go wrong?!).

So I used the blkid command to get the UUID like so:

$ sudo blkid /dev/mapper/users-var
/dev/mapper/users-var: UUID="95e5e925-1f92-4578-a79a-b3b7cd508a73" BLOCK_SIZE="4096" TYPE="ext4"

and used that UUID in my fstab like so:

UUID="95e5e925-1f92-4578-a79a-b3b7cd508a73" /var ext4 default 0 1

I even made sure I could mount that way before rebooting:

$ mkdir test-mount
$ sudo mount UUID="95e5e925-1f92-4578-a79a-b3b7cd508a73" test-mount
$ ls test-mount
<expected files / directories from /var>

Then I tried to reboot...

$ sudo init 6

But before rebooting into my normal system, I started a Live version of Ubuntu to make sure the latest /var contents from the SSD were copied to that new partition, using rsync:

$ mkdir var var2
$ sudo mount /dev/sda2 var
$ sudo mount UUID="95e5e925-1f92-4578-a79a-b3b7cd508a73" var2
$ sudo rsync --partial --info=progress2 -a var/ var2

Finally, I rebooted from the Live version and tested the new setup.

It failed.

I got a black screen of death. Absolutely no feedback. Luckily, I had sshd set up and could connect from another computer, and I noticed two issues:

  1. graphical.target was failing; looking at the reason, it said a dependency was missing
  2. looking for the failure, I found that the var.mount unit did not start; that was the missing dependency

Note: to find those issues, I primarily used systemd-analyze and systemctl list-units.
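
Over ssh, a few commands along these lines were enough to pin down the failed unit (the unit name is from my setup; yours may differ):

$ systemctl list-units --state=failed
$ systemctl status var.mount
$ journalctl -b -u var.mount
$ systemd-analyze critical-chain graphical.target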

So I went through my fstab file and found two issues, although I do not think the second one would actually have been a problem: I was still mounting the var partition on /var2 as well (left over from testing and backups; mounting the same partition twice is not possible, and that was also my first fix, after which an intermediate reboot still failed). To fix the first issue, I rebooted into the Live version of Ubuntu and used the chroot trick to update the fstab and grub properly.
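
Roughly, the chroot trick from the Live session looks like this; a minimal sketch, assuming you replace the root-partition placeholder with your own device, and depending on the setup you may also need to bind-mount /run and mount the EFI partition before update-grub:

$ sudo mount /dev/<your-root-partition> /mnt
$ sudo mount --bind /dev  /mnt/dev
$ sudo mount --bind /proc /mnt/proc
$ sudo mount --bind /sys  /mnt/sys
$ sudo chroot /mnt /bin/bash
(chroot) $ vi /etc/fstab           # fix the /var entry
(chroot) $ update-initramfs -u
(chroot) $ update-grub
(chroot) $ exit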

Since my home partition uses an LVM name entry:

/dev/users/users /home ext4 defaults 0 2

and it has been working for years, I decided to change the /var entry to also use the LVM name instead of the UUID:

/dev/users/var /var ext4 defaults 0 1

I ran the necessary commands to make sure the change would be taken into account:

$ sudo update-initramfs -u
$ sudo update-grub

and finally tried to reboot one more time:

$ sudo init 6

and that time it worked.

So the result is clear to me: DO NOT USE UUIDs when mounting /dev/mapper/... partitions, or you could end up breaking your boot process.

Alexis Wilke
  • 2,496
0

In an environment where you use SAN storage, this is a BAD idea: since UUIDs are tied to the underlying storage, if you do a full backup to tape and restore onto new hardware, the system will not boot because all the UUIDs are now different.
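
If you do end up in that situation, one workaround is to update fstab with the new UUIDs (blkid shows them) or, for ext2/3/4 filesystems, to re-apply the old UUID; a rough sketch with hypothetical device names and a placeholder UUID:

$ sudo blkid /dev/sdb1                       # shows the new UUID after the restore
$ sudo tune2fs -U <old-uuid> /dev/sdb1       # or put the old UUID back (ext2/3/4 only)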

Sanjay
  • 1