46

I have a fileserver where df reports / as 94% full, but according to du, much less is used:

# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             270G  240G   17G  94% /
# du -hxs /
124G    /

I read that open but deleted files could be responsible for this, but a reboot did not fix it.

This is Linux, ext3.

regards

7 Answers

38

Ok, found it.

I had an old backup in /mnt/Backup on the same filesystem, and an external drive was later mounted on top of that directory, so du could not see the files hidden underneath the mount point. Cleaning those up gave me back my disk space.

It probably happened this way: at some point the external drive was unmounted while the daily backup script was running, so the backup was written to /mnt/Backup on the root filesystem instead.
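If you want to verify this kind of problem without unmounting anything, you can bind-mount / somewhere else: the bind mount shows the underlying directory tree, ignoring whatever is mounted on top of it. A minimal sketch, with a hypothetical scratch path:

    # Bind-mount the root filesystem to a scratch location:
    mkdir /tmp/rootview
    mount --bind / /tmp/rootview

    # du can now see what is hidden underneath the mount point:
    du -hxs /tmp/rootview/mnt/Backup

    # Clean up:
    umount /tmp/rootview
    rmdir /tmp/rootview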

19

I don't think you will find a more thorough explanation than this link for all the reasons it could be off. Some highlights that might help:

  • What is your inode usage? If it is almost at 100%, that can mess things up:

    df -i

  • What is your block size? Lots of small files and a large block size could skew it quite a bit.

    sudo tune2fs -l /dev/sda1 | grep 'Block size'

  • Deleted files: you said you investigated this, but to get the total space they take up you could use the following pipeline (I like find instead of lsof just because lsof is a pain to parse):

    sudo find /proc/*/fd -printf "%p\t%l\n" 2>/dev/null | grep deleted | cut -f1 | xargs -r stat -Lc %s | (tr '\n' +; echo 0) | bc

However, your df and du numbers are almost 2x apart, so these may not fully explain it. If nothing else turns up, unmount the partition and run fsck on it to be safe.
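Since / cannot be unmounted while the system is running, that last check has to be done from a rescue or live environment. A minimal sketch, assuming the device from the question:

    # From a rescue/live system, with the filesystem unmounted,
    # force a full check even if the filesystem looks clean:
    fsck.ext3 -f /dev/sda3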

Kyle Brandt
16

It looks like a case of files being removed while processes still have them open. This disconnect happens because du totals up the space of files that still exist in the file system, while df shows the blocks available in the file system. The blocks of an open but deleted file are not freed until that file is closed.

You can find which processes have open but deleted files by examining /proc:

find /proc/*/fd -ls | grep deleted
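Once you have identified an offender, you can often reclaim the space without killing the process by truncating the deleted file through its /proc entry. The PID and fd number below are hypothetical, just a sketch:

    # Suppose the output above showed PID 1234 holding deleted fd 3 open.
    # Truncating through /proc frees the blocks immediately; the process
    # keeps a valid (now empty) descriptor, which is usually fine for logs.
    : > /proc/1234/fd/3

Restarting the process (or the whole service) achieves the same thing if truncating the file is too risky for that application.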
TCampbell
7

I agree that

lsof +L1 /home | grep -i deleted

is a good place to start. In my case I noticed I had lots of Perl scripts running that were keeping many files alive, even though they were supposed to be deleted.

I killed those Perl processes, and that made du and df almost identical. Case closed.
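To get from that output to the processes that need restarting, you can pull out the unique PIDs (column 2 of lsof's output); /home is just the example path from above:

    # Skip the header line, print unique PIDs holding deleted files open:
    lsof +L1 /home | awk 'NR > 1 {print $2}' | sort -u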

Sverre
7

The most likely reason in your case is that you have lots of files that are very small (smaller than your block size on the drive). In that case df will report the sum of all used blocks, whereas du will report the actual sum of file sizes.
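You can estimate how much block-size rounding contributes by comparing allocated usage against apparent file sizes (GNU du supports this directly); the difference is roughly the per-file slack:

    # Allocated blocks, which is what du reports by default:
    du -hxs /
    # Sum of apparent file sizes, ignoring block rounding:
    du -hxs --apparent-size /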

wolfgangsz
2

By default, when you format a filesystem with ext3, 5% of the drive is reserved for root. df accounts for this reserve when it reports what is available, while du shows what is actually in use. The numbers in the question fit this: Used (240G) plus Avail (17G) is only 257G of the 270G size, and 5% of 270G is about 13.5G.

You can view the reserved blocks by running:

tune2fs -l /dev/sda3 | grep -i reserve

and you will get something like:

Reserved block count:     412825
Reserved GDT blocks:      1022
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)

If you would like to adjust that to a lower percent, you can do so with something like

tune2fs -m 1 /dev/sda3

You can reduce it to 0, but since this is your root filesystem I would be wary of doing that: if the filesystem ever actually fills up, the reserve is what lets root still run the maintenance tasks needed to clean it up.
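To see how much space the reserve actually accounts for, multiply the reserved block count by the block size. A small sketch, assuming the same /dev/sda3 as above:

    # Reserved block count and block size, as reported by tune2fs:
    blocks=$(tune2fs -l /dev/sda3 | awk '/^Reserved block count/ {print $4}')
    bsize=$(tune2fs -l /dev/sda3 | awk '/^Block size/ {print $3}')
    echo "$((blocks * bsize / 1024 / 1024)) MiB reserved for root"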

Alex
-1

Is it possible that du doesn't add in the size of the directories themselves? Still, that seems like a huge difference; directory entries can't be responsible for all of it.
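For what it's worth, du does count directory inodes, but you can measure their total size directly to rule this out. A sketch that stays on one filesystem:

    # Sum the sizes of all directory inodes on the root filesystem:
    find / -xdev -type d -printf '%s\n' | awk '{sum += $1} END {print sum/1024/1024 " MiB"}'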