
I have a simple script written in PHP (which, after reading lots of Q&As, seems to have less-than-desirable memory management). It's an extremely small script that loops through configuration files in a folder, compares each one to its record in the database, and, if there are differences, updates the record accordingly.

While the script is less than 20 lines, I am dealing with over 30,000 config files at any given time.

My computer is an Intel dual-core at 3.06 GHz with 8 GB of RAM, running Ubuntu 12.04.

When I execute the script, the CPU climbs to 100% almost immediately and stays there. Using the top command I can see the memory for the PHP process increase steadily until it finally maxes out at 8 GB and the script crashes. (I usually get about 3/4 of the way through before it crashes, which currently takes about 90 minutes.)

From a hardware perspective, how can I make this process more efficient? If I upgrade to a quad-core, will that cut the execution time in half? If I upgrade to a hex-core, will that cut it by a factor of 4? Also, does finishing the script 2x or 4x faster also mean I cut the memory usage, or would I still need to upgrade the RAM?

2 Answers


Instead of constantly reading the files looking for changes (which is likely causing more I/O wait than actual CPU usage), I'd highly recommend using inotify to watch for changes and only react when a change actually happens:

Docs: http://php.net/manual/en/book.inotify.php

See http://en.wikipedia.org/wiki/Inotify for more background.
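A minimal sketch of that approach, assuming the PECL inotify extension is installed; the directory path and the comparison/update logic are placeholders:

```php
<?php
// Watch the config directory and react only to files that were
// actually written or moved in, instead of rescanning all 30,000.
$fd = inotify_init();
$watch = inotify_add_watch($fd, '/path/to/configs', IN_CLOSE_WRITE | IN_MOVED_TO);

while (true) {
    // Blocks until at least one filesystem event is available.
    $events = inotify_read($fd);
    foreach ($events as $event) {
        $file = $event['name'];
        // ... compare this one file against its database record
        // and update only if it differs ...
    }
}

// Cleanup (unreachable in this endless-loop sketch, shown for completeness):
inotify_rm_watch($fd, $watch);
fclose($fd);
```

This turns the job from a periodic full scan into an event-driven daemon: memory stays flat because only one file is ever in flight.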

Dave Forgac

More CPU just means you would run out of RAM faster. Your best bet is to fix the memory management in your script; for it to consume that much RAM, I have to think you have a memory leak (most likely accumulating every file's contents in memory instead of releasing each one after processing).
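One common fix is to process one file at a time and free it before moving on. A sketch of that pattern, with a hypothetical `processConfigs` helper (the comparison logic is a placeholder passed in as a callback):

```php
<?php
// Read one config file at a time and release its memory before the
// next iteration, rather than accumulating all 30,000 in an array.
function processConfigs($dir, $handler)
{
    $count = 0;
    foreach (new DirectoryIterator($dir) as $fileinfo) {
        if (!$fileinfo->isFile()) {
            continue;
        }
        $contents = file_get_contents($fileinfo->getPathname());

        // e.g. compare $contents against the database record here.
        call_user_func($handler, $fileinfo->getFilename(), $contents);

        unset($contents); // release this file's memory immediately
        $count++;
    }
    // If circular references build up (e.g. from an ORM), trigger a
    // cycle collection now and then (PHP 5.3+):
    gc_collect_cycles();
    return $count;
}
```

Called as `processConfigs('/path/to/configs', function ($name, $contents) { /* compare to DB, update if different */ });` — with this shape, peak memory stays near the size of the largest single file instead of growing with the file count.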