I'm having performance issues with my PostgreSQL 9.4.1 server. I have tuned the server using the usual best practices (pgtune + google). Here's the relevant config:

# <snip> the default config above

default_statistics_target = 50
maintenance_work_mem = 960MB
constraint_exclusion = on
checkpoint_completion_target = 0.9
effective_cache_size = 11GB
work_mem = 96MB
wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 4GB
max_connections = 200

autovacuum = on
log_autovacuum_min_duration = 10000
autovacuum_max_workers = 5
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 25
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.1
#autovacuum_freeze_max_age = 200000000
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = -1

#log_statement='mod'
#log_statement='all'

logging_collector = on

Yet, query performance is very bad at times of high load. At those same times, the total RAM usage on the server is 4GB (out of 16GB available).

[Screenshot: server load during a peak]

I can't help but think that these two issues are related, and that performance would improve if Postgres utilized more of the available RAM.

The server is Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz, 16GB RAM. We have 2 disks (WDC WD1003FBYZ-0) but no RAID. The DB with performance issues is split between these disks using tablespaces. OS is Debian 7.5.

So, anything obvious I have missed? Am I barking up the wrong tree? Or is there truly something fishy here?

panta82

1 Answer

PostgreSQL relies on the operating system's disk cache for most caching. Most tools report this cache as "free" RAM, because modern operating systems use nearly all otherwise-idle RAM for disk cache.

This is normal.

To confirm, use a tool that shows buffers/cache separately from truly free memory. On Linux, free -m will show you this breakdown.
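For instance, you can read the same accounting straight from /proc/meminfo, which is where free(1) gets its numbers (a sketch, assuming a Linux system):

```shell
# Show truly free RAM vs. RAM the kernel is using as disk cache.
# Reads /proc/meminfo, the same source free(1) uses on Linux.
awk '/^(MemTotal|MemFree|Buffers|Cached):/ { v[$1] = $2 }
     END {
       printf "total:      %d MiB\n", v["MemTotal:"] / 1024
       printf "truly free: %d MiB\n", v["MemFree:"]  / 1024
       printf "disk cache: %d MiB\n", (v["Buffers:"] + v["Cached:"]) / 1024
     }' /proc/meminfo
```

On a box like yours, "disk cache" will likely account for most of the RAM that naive tools report as unused.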

Also note that for most systems the accounting for shared memory means that the memory use reported for PostgreSQL processes is pretty bogus. They tend to count all the shared memory touched by a process as used by that process, thus counting each shared memory page many times over. Or they tend to ignore shared memory entirely, not counting it at all. The best way to estimate PostgreSQL's actual memory use is to get the unshared memory use for each process, add them up, and add the shared_buffers size to the total.
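A rough sketch of that estimate on Linux (assumes a 4.14+ kernel with smaps_rollup, pgrep, permission to read the backends' /proc entries, and the 4GB shared_buffers from the question's config; private_mem_kb is a hypothetical helper, not a standard tool):

```shell
# Sum the private (unshared) memory of every process matching a name.
# Private_Clean + Private_Dirty from smaps_rollup excludes shared pages,
# so shared_buffers is not counted once per backend.
private_mem_kb() {
  total=0
  for pid in $(pgrep -x "$1" 2>/dev/null || true); do
    priv=$(awk '/^Private_(Clean|Dirty):/ { s += $2 } END { print s + 0 }' \
           "/proc/$pid/smaps_rollup" 2>/dev/null || echo 0)
    total=$((total + priv))
  done
  echo "$total"
}

# Estimate: unshared memory of all backends, plus shared_buffers once
# (4GB = 4194304 kB in the question's config).
echo "estimated PostgreSQL RAM: $(( $(private_mem_kb postgres) + 4194304 )) kB"
```

This will usually come out far below what per-process RSS columns in top suggest, for exactly the double-counting reason above.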

BTW, shared_buffers is a second-level cache. Any clean buffers in there probably duplicate what's in the OS's disk cache, so there's a system- and workload-specific size that's optimal. Too small and you start struggling to find free buffers to read pages into, or forcing dirty buffers out too early for efficient write combining. Too big and you double-cache too many clean pages from the OS disk cache.

Craig Ringer