
I am doing testing on a replica, trying to diagnose why servers take several minutes to shut down when fast mode is used. It seems to correlate with high shmem usage. Can anyone recommend a good query / command to run which would cause PostgreSQL to allocate / use a lot of shared buffers (shmem)?

I am a Linux engineer but somewhat new to Postgres; I only know the basics from the DBA standpoint, though I am knowledgeable about systems and Linux as well as memory/OS concepts. I've tried running \dt+ to find tables that are reasonably large (~1G), turning off the pager, and doing a SELECT * on the table in psql, but it didn't seem to increase shared memory usage much. I am looking to increase the footprint by up to ~200G, ideally.
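Roughly what I tried in psql (the table name here is just a placeholder for the ~1G table I picked):

    \dt+                      -- list tables with sizes, pick a reasonably large one
    \pset pager off           -- disable the pager so the whole result streams through
    SELECT * FROM big_table;  -- placeholder name for the large table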

Preferably something simple which doesn't actually write to disk. I don't want to modify the actual state much, just cause high use of shmem (e.g. a large SELECT, or something ephemeral).

This is on a Linux system with a 3.x kernel, without huge pages (THP is enabled at the moment), and mmap is being used for shmem, mainly. PostgreSQL 13.6. I am trying to get to ~500G of shared memory usage.

1 Answer


why servers take several minutes to shutdown when fast mode is used

On shutdown, PostgreSQL creates a checkpoint (or a restartpoint, if the database is in hot standby mode). This means writing out, with fsync, all changed data from shared buffers. The more changes that happened on the primary, the more the replica needs to write.
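Not part of the answer itself, but if you want to see how much dirty data is sitting in shared buffers (and therefore roughly how much the shutdown checkpoint/restartpoint will have to write), the stock pg_buffercache contrib extension can show it; something like:

    -- requires the pg_buffercache contrib extension (superuser to install)
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- how many shared buffers are dirty, i.e. must be flushed at shutdown
    SELECT count(*) AS dirty_buffers,
           pg_size_pretty(count(*) * current_setting('block_size')::bigint) AS dirty_size
    FROM pg_buffercache
    WHERE isdirty;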

The usual procedure for a quick restart of a production database is (see the sketch after this list):

  • run the checkpoint command on the primary and wait for it to finish
  • run the checkpoint command on the replica we want to shut down or restart
  • if that checkpoint took more than ten seconds, repeat from the beginning
  • restart the replica
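A minimal sketch of that procedure in psql (run as a superuser; \timing just shows how long the CHECKPOINT took so you can decide whether to repeat):

    -- on the primary:
    \timing on
    CHECKPOINT;   -- wait for it to return

    -- then on the replica you are about to restart:
    \timing on
    CHECKPOINT;   -- on a hot standby this requests a restartpoint
    -- if this took more than ~10 seconds, repeat both steps;
    -- otherwise shut down, e.g.: pg_ctl stop -m fast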

What is a good test command to cause Postgres to use/fill shared_buffers?

pg_prewarm of some big relation.
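A minimal example, assuming a table big enough to fill as much of shared_buffers as you want (the relation names are placeholders):

    -- pg_prewarm ships with PostgreSQL as a contrib extension;
    -- CREATE EXTENSION must run on the primary (the replica is read-only)
    -- and will replicate down
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;

    -- on the replica: load the whole relation into shared buffers;
    -- returns the number of blocks read
    SELECT pg_prewarm('big_table', 'buffer');

    -- repeat for more tables / indexes until shared memory usage
    -- is as high as you want (bounded by shared_buffers)
    SELECT pg_prewarm('big_table_pkey', 'buffer');

This only reads data into shared buffers, so it doesn't write anything to disk or change state, which matches what the question asked for.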

Melkij