
VPS setup is as follows:

NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                    11:0    1 1024M  0 rom
vda                   253:0    0   60G  0 disk
├─vda1                253:1    0  9.8G  0 part /
└─vda2                253:2    0 50.2G  0 part
  └─VolGroup1-LogVol1 252:0    0 50.2G  0 lvm  /mnt/lvm1
vdb                   253:16   0   10G  0 disk
Disk /dev/vda: 60 GiB, 64424509440 bytes, 125829120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: XXXXXXXX
.
Device     Boot    Start       End   Sectors  Size Id Type
/dev/vda1  *        2048  20482047  20480000  9.8G 83 Linux
/dev/vda2       20482048 125829119 105347072 50.2G 83 Linux
.
Disk /dev/vdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX
.
Device     Start      End  Sectors Size Type
/dev/vdb1   2048 20969471 20967424  10G Linux filesystem
.
Disk /dev/mapper/VolGroup1-LogVol1: 50.2 GiB, 53934555136 bytes, 105340928 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

/dev/vdb is a single block storage volume, initially 10 GB in size. I will add more space to this block storage later, or I could add more block storage volumes (/dev/vdc, /dev/vdd, etc.).

I need to mount it to /mnt/lvm1. The applications using this folder will need more and more space, and I can't make them use multiple folders.

What is the optimal setup to keep adding space to a single mountpoint? Of course I can extend VolGroup1-LogVol1 onto /dev/vdb1, but are there other ways to do this which might be easier to manage? This could be in the form of a different PV/VG/LV setup and/or using multiple block storages.

Gaia

1 Answer


There is no one optimal way to do this. There are, however, several ways that work better than others depending on your scenario.

In general, avoid as many abstraction layers as you can. If you're going to use the entire disk for LVM and nothing else, it doesn't make any sense to put a partition table on it - so eliminate that layer and make /dev/vdb an LVM physical volume on its own. This also makes resizing the device much easier and safer in the future, as you won't also have to resize a partition every time. Besides, LVM is essentially an advanced partition table anyway.
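
As a minimal sketch of that layout, assuming you want to fold the whole of /dev/vdb into the existing VolGroup1 (names taken from your output above):

# label the raw disk as a physical volume (no partition table)
pvcreate /dev/vdb
# add it to the existing volume group
vgextend VolGroup1 /dev/vdb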

If this block device is being provided by something like EBS, then that volume can be expanded while online. Most other block device targets from various providers can be expanded online as well. Making LVM register this expanded volume takes only a single command (provided you're not using a partition table):

pvresize /dev/vdb

After that re-detection of the physical volume's capacity, the new size is reflected in LVM and immediately available to use. You can then freely use the expanded space by extending your existing LVs or adding new ones.
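
For example, to grow the LV backing /mnt/lvm1 (LogVol1, per the lsblk output) into all of the newly available space and resize its filesystem in one step:

# -r/--resizefs also grows the filesystem on the LV
lvextend -r -l +100%FREE /dev/VolGroup1/LogVol1

The -r flag hands the filesystem resize off to fsadm, which covers the common cases such as ext4 and XFS; for anything else, resize the filesystem with its own tool afterwards.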

Expanding capacity by adding more physical volumes works, but it's best to avoid it if you can. Managing many physical volumes rather than one large one can be annoying to troubleshoot, especially when you have to do things like globally filter multipath volumes, manage the remote storage targets themselves, or determine which physical volume is causing problems for a volume group.

However, in an environment where it's difficult or impossible to resize the existing backing storage targets that provide said PVs, it's easier to just use LVM's ability to aggregate block devices in a volume group and add more devices. This is usually the case when using "bare" hard drives, for example.
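
As a sketch of that approach, using a hypothetical second volume /dev/vdc (not attached in your output yet):

pvcreate /dev/vdc
vgextend VolGroup1 /dev/vdc
# the VG now pools both devices; check the aggregate free space
vgs VolGroup1

From there, lvextend works exactly as above, drawing extents from whichever PVs have free space.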

Spooler