
Possible Duplicate:
rm on a directory with millions of files

Hello,

So I'm stuck with this directory:

drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2

The directory's contents are small - just millions of tiny little files.

I want to wipe it from the filesystem but have been unable to. My first try was:

find sessions2 -type f -delete

and

find sessions2 -type f -print0 | xargs -0 rm -f

but I had to stop them because both caused escalating memory usage. At one point the process was using 65% of the system's memory.

So I thought (no doubt incorrectly) that it had to do with dir_index being enabled on the filesystem. Perhaps find was trying to read the entire index into memory?

So I did this (foolishly): tune2fs -O^dir_index /dev/xxx

Alright, so that should do it. Ran the find command above again and... same thing. Crazy memory usage.

I hurriedly ran tune2fs -Odir_index /dev/xxx to reenable dir_index, and ran to Server Fault!

2 questions:

1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce the CPU usage, so my problem right now is only memory usage (see the sketch after these questions).

2) I disabled dir_index for about 20 minutes. No doubt new files were written to the filesystem in the meantime. I re-enabled dir_index. Does that mean the system will not find the files that were written before dir_index was re-enabled, since their filenames will be missing from the old indexes? If so, and I know these new files aren't important, can I keep the old indexes? If not, how do I rebuild the indexes? Can it be done on a live system?
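(For reference, the nice invocation I mean is just something like this - a sketch, with -n 19 as an example priority:)

nice -n 19 find sessions2 -type f -delete   # lowest CPU priority; doesn't help memory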

Thanks!

Alexandre

4 Answers


See this question: rm on a directory with millions of files

This was my answer, but there were other great answers, too:

Would it be possible to backup all of the other files from this file system to a temporary storage location, reformat the partition, and then restore the files?
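A rough sketch of what that could look like - all of the paths, and the ext3 filesystem type, are assumptions you'd adjust for your setup:

# back up everything except the problem directory (mount points are hypothetical)
rsync -a --exclude='sessions2/' /data/ /mnt/tempstore/

# recreate the filesystem and restore
umount /data
mkfs.ext3 /dev/xxx    # assuming ext3; use whatever your partition actually is
mount /dev/xxx /data
rsync -a /mnt/tempstore/ /data/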

jftuga

I'd be inclined to move the current directory out of the way, make a new directory, and then remove the old one:

mv dirname dirname.old
mkdir dirname
ls -ld dirname dirname.old   # verify the permissions are correct
rm -rf dirname.old

This has the added benefit that you don't end up with an "rm -rf dirname" in your history that you might accidentally re-run. :-)

"rm -rf" should remove the directory using only very little memory overhead. But the "find" commands you've run before should also not be using a lot of overhead memory. How are you measuring the memory use?

You aren't measuring this memory use by looking at the "free" column of the output of the "free" command, are you? Linux will use otherwise-unused memory for disk caching, and the "free" figure on the "Mem:" line doesn't take that into account. Are you seeing the memory use in one of the programs? Which one? "ps awwlx --sort=vsz" will show you the big memory-using programs, sorted with the largest at the end.
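For example, a quick check of both, using the commands mentioned above (the tail is just to show the few largest):

free -m                          # the "-/+ buffers/cache" line is the real free figure
ps awwlx --sort=vsz | tail -5    # biggest memory users, largest last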


Your issue is that you're piping to xargs, which is probably what's eating all your memory. find has a -delete option:

find sessions2 -type f -delete
Mike

Hmm, a crazy idea - I don't know if it would work. What if you try to do it in batches? First rename the directory and create a new empty one, so new files don't get in the way. Then dump the contents of the old directory to a text file. Then go through that file and remove 100 files at a time, sleeping and syncing in between.
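Something like this, perhaps - only a sketch, and the list file name and batch size are arbitrary:

mv sessions2 sessions2.old
mkdir sessions2                           # new session files land here instead

ls -1 sessions2.old > /tmp/filelist.txt   # dump the listing once

# remove 100 files at a time, syncing and sleeping between batches
n=0
while IFS= read -r f; do
    rm -f "sessions2.old/$f"
    n=$((n+1))
    if [ $((n % 100)) -eq 0 ]; then
        sync
        sleep 1
    fi
done < /tmp/filelist.txt

rmdir sessions2.old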

Jure1873