24

I tried to remount a filesystem, previously mounted read-only, as read-write:

mount -o remount,rw /mountpoint

Unfortunately it did not work:

mount: /mountpoint not mounted already, or bad option

dmesg reports:

[2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list.  Please umount/remount instead

A umount does not work either:

umount /mountpoint
umount: /mountpoint: device is busy.
    (In some cases useful info about processes that use
     the device is found by lsof(8) or fuser(1))

Unfortunately, neither lsof nor fuser shows any process accessing anything located under the mount point.

So - how can I clean up this unprocessed orphan list to be able to mount the filesystem again without rebooting the computer?

bmk
  • 2,399

6 Answers

36

If you're using ext2 / ext3 / ext4, you should be able to use e2fsck to clean up the orphaned inodes (run it against the unmounted device, not the mount point):

e2fsck -f <device>

For reiserfs, you can use reiserfsck, which will also clean up orphaned inodes.

11

e2fsck -f <mount point> won't work.

First, find out which device backs the mount point:

sudo mount -l

Then fsck the drive directly.

For example, in my case:

sudo e2fsck -f /dev/xvda2

6

You clean up the unprocessed orphan inode list by unmounting and remounting the filesystem.

An extended discussion from the linux-ext4 mailing list has more information about what this message means and why it may appear. In short, one of two things has happened: either you've run into a kernel bug, or, much more likely, some filesystem corruption occurred during one of the previous times you remounted the filesystem read-only. That is probably also why the system thinks something is still using the filesystem when nothing is.

If it's been a year and you still haven't rebooted the machine, just give up and schedule a maintenance window.

Michael Hampton
  • 252,907
1

I would recommend first unmounting the partition forcefully, i.e. using the -f option, and then running a file system check using fsck.

wolfgangsz
  • 9,007
1

You should probably try a lazy unmount, i.e.:

umount -l /mountpoint
0

I was facing the same issue on an AWS EC2 machine. To complicate matters, the affected volume was the root volume of the EC2 instance, so the instance was failing to boot and SSH access to it was not possible either.

The following steps helped me resolve the issue:

  1. Detach the volume from the EC2 instance.
  2. Configure a new EC2 instance using the same AMI and in the same AZ as that of the old one.
  3. Attach the volume (detached in Step 1) to the new instance.
  4. Execute the following commands:
# Switch to Root user:
sudo -i

Identify the device name and save it in a variable:

lsblk
rescuedev=/dev/xvdf1 # Use the correct device for the affected volume.

Use /mnt as the mount point:

rescuemnt=/mnt
mkdir -p $rescuemnt
mount $rescuedev $rescuemnt

Mount special file systems and change the root directory (chroot) to the newly mounted file system:

for i in proc sys dev run; do mount --bind /$i $rescuemnt/$i ; done
chroot $rescuemnt

Download, install and execute EC2Rescue tool for Linux to fix the issues:

curl -O https://s3.amazonaws.com/ec2rescuelinux/ec2rl.tgz
tar -xf ec2rl.tgz
cd ec2rl-<version_number>
./ec2rl run
cat /var/tmp/ec2rl/*/Main.log | more
./ec2rl run --remediate

Switch back from the Root user and unmount the volume:

exit
umount $rescuemnt/{proc,sys,dev,run,}

  5. Shut down the EC2 instance and detach the volume.
  6. Attach the volume to the original instance and start the EC2 instance.