
I have a running EC2 instance set up with Ubuntu. What's the best way to increase the disk size without any downtime and with minimal risk?

Reading through the guides, one way would be to create a new disk, migrate the data, stop the instance, swap the disk, and start it back up. This approach sounds a bit risky and would require some downtime. I wonder if there is a better approach?

Evgeny Zislis
googletorp

4 Answers


Amazon AWS released a new feature on 13 February 2017 that allows you to change the size of an EBS volume.

source: https://aws.amazon.com/blogs/aws/amazon-ebs-update-new-elastic-volumes-change-everything/

This allows you to increase the size of an EBS volume on an existing instance while it is running.

It is important to note that changing the volume size does not change the size of the filesystem on the volume (for most filesystems). Additional steps may be required in the operating system itself, depending on the filesystem; for example, resize2fs for the ext4 filesystem used by most Linux distributions today.

Full documentation from AWS describing the process can be found at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html

In some cases the block device has mapped partitions, and one (or more) of those partitions contains the filesystem. In that case the partition needs to be resized first, and only then the filesystem. This process is described in the documentation as well.

The new "online" resize feature described in the blog only applies to current generation instances, and there are some other considerations and limitations that need to be checked before attempting a volume resize.
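Putting the steps above together, the process might look like this. This is only a sketch: the volume ID is a placeholder, and it assumes an ext4 root filesystem on partition 1 of an Xen-style device named /dev/xvda.

```shell
# 1. Grow the EBS volume itself, with the instance still running
#    (vol-0123456789abcdef0 is a placeholder -- use your own volume ID):
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100

# 2. Once the modification reaches the "optimizing" or "completed" state,
#    grow the partition to fill the newly available space:
sudo growpart /dev/xvda 1

# 3. Finally, grow the ext4 filesystem to fill the partition (works online):
sudo resize2fs /dev/xvda1
```

On newer Nitro-based instances the device may appear as /dev/nvme0n1 instead, so adapt the device names accordingly.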

Evgeny Zislis

What I do (and that's not exactly answering your question) is as follows:

  1. Create an EBS volume and attach it to the instance (the documentation is here).
  2. Rescan the SCSI buses: echo '- - -' > /sys/bus/scsi/devices/host1/scsi_host/host1/scan (you may have to adapt the host number).
  3. Create a physical volume with pvcreate on the newly found disk (fdisk -l lists all disks).
  4. Create a volume group and then a logical volume on it (vgcreate and lvcreate).
  5. Format the logical volume with your desired filesystem.
  6. tar the target mountpoint so you can restore it later.
  7. Mount this volume where you need the new space.
  8. Restore the tar archive into the newly mounted space.

Steps 6 and 8 are optional if you are using the new space before installing anything. If you want to replace an existing directory, you obviously have to prevent anything from writing there between creating the archive and restoring it.

You can repeat steps 4 to 8 for different mountpoints; this allows you to extend the space as needed and then to resize those volumes online without interruption.
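The LVM steps above can be sketched as follows. The device name /dev/xvdf and the volume group and logical volume names ("data" and "srv") are hypothetical; substitute your own.

```shell
sudo pvcreate /dev/xvdf                  # initialize the new disk for LVM
sudo vgcreate data /dev/xvdf             # create a volume group on it
sudo lvcreate -n srv -l 100%FREE data    # one logical volume using all the space
sudo mkfs.ext4 /dev/data/srv             # format it with ext4
sudo mount /dev/data/srv /srv            # mount it where the space is needed

# Later, after attaching a second EBS volume (say /dev/xvdg),
# the logical volume can be grown online:
sudo pvcreate /dev/xvdg
sudo vgextend data /dev/xvdg
sudo lvextend -l +100%FREE /dev/data/srv
sudo resize2fs /dev/data/srv             # ext4 can be grown while mounted
```

The online-growth step at the end is what makes this layout attractive: once a mountpoint lives on LVM, future expansions need no downtime at all.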

Tensibai

ZFS Zero Downtime filesystem storage scaling on AWS (or elsewhere)

Upsize

  1. Install ZFS on EC2.

    http://serverascode.com/2016/09/05/aws-zfs-user-data.html

  2. Make a zpool for your bulk data using an EBS volume.
  3. Add another EBS volume to get more block storage (or set the pool's autoexpand property and just grow the existing EBS volume).
  4. Add the new EBS volume to your zpool to make the space available (unless you used autoexpand and increased the EBS volume's size).
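The upsize path can be sketched like this, assuming the EBS volumes appear as /dev/xvdf and /dev/xvdg and using a hypothetical pool name "tank":

```shell
sudo zpool create tank /dev/xvdf      # pool on the first EBS volume
sudo zpool set autoexpand=on tank     # let the pool grow when its device grows

# Option A: attach a second EBS volume and stripe it into the pool:
sudo zpool add tank /dev/xvdg

# Option B: grow the existing EBS volume in AWS first, then tell ZFS
# to expand onto the enlarged device (autoexpand=on does this for you):
sudo zpool online -e tank /dev/xvdf
```

Note that option A stripes data across volumes with no redundancy; losing either EBS volume loses the pool, so keep snapshots or backups elsewhere.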

Downsize

  1. Make a new zpool on a new EBS volume big enough to hold the shrunken data (it doesn't need to be mounted yet, or even on the same EC2 instance).
  2. Snapshot the old, too-big zpool.
  3. zfs send the snapshot to the new zpool.
  4. Promote the received snapshot on the new pool and mount it.
  5. Destroy the old pool.
  6. Destroy the old pool's EBS volume.
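The downsize steps can be sketched as follows; the pool names "tank" (old, too big) and "tank2" (new, smaller) and the dataset name "data" are hypothetical:

```shell
sudo zfs snapshot -r tank/data@move       # snapshot the old pool's data
sudo zfs send -R tank/data@move | sudo zfs recv tank2/data  # replicate it
sudo zfs set mountpoint=/data tank2/data  # mount the copy in the old location
sudo zpool destroy tank                   # retire the old pool
# ...then detach and delete the old pool's EBS volume in the AWS console or CLI.
```

To keep the cutover clean, quiesce writers before the final snapshot, or do an initial full send followed by a brief pause and a small incremental send (zfs send -i) to catch the last changes.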
Jeremy

After resizing the EBS volume, here is what I executed when I needed to expand a ZFS pool:

parted -l                            # Get the list of partitions
parted /dev/xvdf rm 9                # Remove the buffer partition
parted /dev/xvdf resizepart 1 100%   # Resize the partition
zpool online -e <zfs pool name> /dev/xvdf  # Expand the zpool and the filesystem it holds
Dan Cornilescu
Michael Pereira