
I have 3 different folders containing particular information about sales orders. Everything was working fine, but a few days ago I started hitting the limit on the number of subdirectories in each of those main folders (orders above 32K).

My temporary solution was to move the oldest data to a backup and remove it from the production environment, but I would really like to keep it available there, so my question is:

What options do you recommend for a structure where I can keep adding subfolders without hitting the maximum? I am on an Ubuntu server box with ext3.

It looks something like this:

tmp/
  order_1/
  order_2/
  ...
  order_32000/
  ...
imgs/
  order_1/
  order_2/
  ...
  order_32000/
  ...
hd_imgs/
  order_1/
  order_2/
  ...
  order_32000/

Inside each order_xx folder live around 1 to 30 files.

2 Answers


It sounds to me like you need a real database (as opposed to the filesystem), and some development time to build a front end for it. Investigate MongoDB or PostgreSQL.


If you need a faster solution, try breaking up your orders by time: Store them in a hierarchy like [year]/[month]/order_###### (you can keep using serial order numbers if you want, or compose the order number as YYYYMM##### so it's easier to find in the system later without having to do searches within the directory hierarchy).
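A minimal sketch of that layout, using a hypothetical helper named `order_path` (not part of the original answer) that shards by the current year and month:

```shell
# Sketch: shard order directories into [base]/[year]/[month]/order_NNN.
# order_path is a hypothetical helper name chosen for illustration.
order_path() {
    # $1 = base dir (tmp, imgs, hd_imgs), $2 = order id
    printf '%s/%s/order_%s\n' "$1" "$(date +%Y/%m)" "$2"
}

# Creates e.g. tmp/2024/05/order_32001 instead of tmp/order_32001,
# so no single directory accumulates more than a month's worth of orders.
mkdir -p "$(order_path tmp 32001)"
```

Since each month gets its own directory, the 32K subdirectory cap now applies per month rather than to the whole history.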

This will work as long as the number of orders in a month stays below roughly 30,000. The next limit you will hit, though, is the filesystem's inode limit, and the only solution there is a new filesystem (or splitting your data across several filesystems). Take a look at df -i on your system today, and remember that every file and directory chews up one more inode. Eventually you'll run out.
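For example, to check inode headroom on the filesystem holding your order directories:

```shell
# Show inode usage for the filesystem containing the current directory.
# The IUse% column approaching 100% means you are about to run out of
# inodes, regardless of how much disk space df -h still reports free.
df -i .
```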

voretaq7

You could upgrade to ext4 to get around the 32K subdirectory limit.
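If you go that route, ext3 can be converted to ext4 in place. A sketch of the usual offline procedure, where `/dev/sdXN` is a placeholder for your actual partition (back up first, and run this with the filesystem unmounted, e.g. from rescue media):

```shell
# Offline in-place ext3 -> ext4 conversion sketch.
# /dev/sdXN is a placeholder device name, not a real path.
umount /dev/sdXN
tune2fs -O extents,uninit_bg,dir_index /dev/sdXN
e2fsck -fD /dev/sdXN    # a forced fsck is required after enabling the new features
# Then change the filesystem type from ext3 to ext4 in /etc/fstab and remount.
```

Note that only files written after the conversion use ext4 extents; existing files keep their old block mapping, which is harmless here since the goal is just the higher subdirectory limit.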

psusi