
I've got:

  • an Intel Xeon E5-2407 CPU (quad-core, 2.2 GHz)
  • 16 GB of DDR3 memory
  • Intel 520 240 GB SSDs in RAID 1

My homemade applications use a permanent PostgreSQL connection on 7 pages (one called by customers as an API, the others called by a cURL bash loop on a 10-second timer), plus the website. Volume on the API page will go up to 1 million calls per day (it's currently around 1,000 per day). The API page's only role is to insert data into a table, which a loop process handles afterwards.

In about a month I'll replace the bash loop with a C program, so the API page will be the only one called through nginx (apart from the website, but its volume will be very small compared to the API).

What would you recommend as settings for processes/child processes/cache/buffers for nginx, PostgreSQL, and PHP-FPM?

Thanks :)

1 Answer


I can't speak for the other layers of the stack, but for Pg, put PgBouncer in transaction pooling mode in front of PostgreSQL. That way, as you spawn more persistent workers in your client code, you won't massively blow out the number of PostgreSQL backends. It'll also cut the startup/teardown cost of lots of short-lived Pg connections.
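
As a rough sketch only (the database name, user list path, and pool size below are placeholders to tune for your workload, not values from this answer), a minimal pgbouncer.ini for transaction pooling looks something like:

    [databases]
    ; "appdb" is a placeholder; point this at your real database
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling: a server connection is leased per transaction
    pool_mode = transaction
    ; keep the real backend count small; client connections can far exceed it
    default_pool_size = 20
    max_client_conn = 500

Your app then connects to port 6432 instead of 5432. Be aware that transaction pooling is incompatible with session-level features (session-scoped prepared statements, SET, advisory locks held across transactions), so any connection that needs those must go straight to PostgreSQL.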

PostgreSQL max_connections slots aren't free, even when unused, and performance scales non-linearly with both the total number of connections and the number of actively working sessions. See the PostgreSQL wiki for the details.
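
As a starting point only (these are common rules of thumb for a dedicated 16 GB machine, not values from this answer, and they need validating against your own benchmarks), the relevant postgresql.conf knobs might look like:

    # postgresql.conf - illustrative starting values for a dedicated 16 GB host
    max_connections = 100          # keep this low; PgBouncer multiplexes clients
    shared_buffers = 4GB           # ~25% of RAM is a common rule of thumb
    effective_cache_size = 12GB    # planner hint: shared_buffers + likely OS cache
    work_mem = 16MB                # per sort/hash node per backend, so keep it modest
    maintenance_work_mem = 512MB   # for VACUUM, CREATE INDEX, etc.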

Additionally, if the app has lots of read-only, slow-changing data, consider caching it in something like Redis or Memcached. PostgreSQL's LISTEN and NOTIFY features make fine-grained, timely cache invalidation easy when you do this. It's probably only worth pursuing once you know what the real-world load looks like and what gets hit most often.
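
To illustrate the invalidation side, a trigger can fire NOTIFY whenever a cached table changes. The table, column, and channel names here are hypothetical:

    -- Hypothetical "settings" table with a text "key" column.
    CREATE OR REPLACE FUNCTION notify_settings_changed() RETURNS trigger AS $$
    BEGIN
      -- Send the changed row's key as the notification payload.
      PERFORM pg_notify('settings_changed', NEW.key::text);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER settings_cache_invalidate
      AFTER INSERT OR UPDATE ON settings
      FOR EACH ROW EXECUTE PROCEDURE notify_settings_changed();

    -- The cache-maintenance client just runs:
    LISTEN settings_changed;

One caveat: the LISTENing connection has to be a direct session to PostgreSQL, not one multiplexed through PgBouncer's transaction pooling, because notifications are delivered per session.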

Beyond that: benchmark it with your workload. Then benchmark it some more. Then some more. There is no substitute for benchmarking against a simulated workload, except the real-world workload, which will always throw some surprises at you.
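
For the database layer, pgbench can replay a custom script that mimics your API's single-INSERT pattern (the script and table below are hypothetical stand-ins for your real ones):

    # api_insert.sql might contain something like:
    #   INSERT INTO api_events(payload) VALUES ('x');
    pgbench -n -f api_insert.sql -c 50 -j 4 -T 300 appdb

That drives 50 concurrent clients for 5 minutes (-n skips the vacuum of pgbench's built-in tables, which don't exist when you use a custom script). For the full nginx/PHP-FPM path, point an HTTP load generator such as ab or wrk at the API page to get an end-to-end picture.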

Craig Ringer