
I'm getting an 'out of shared memory' error in PostgreSQL 13.3 on x86_64-pc-linux-gnu (the database holds 41 GB of data). At first, the hint suggested increasing max_locks_per_transaction.

After increasing max_locks_per_transaction to 1024, I still get 'out of shared memory', but now the hint says:

ERROR:  out of shared memory
HINT:  You might need to increase max_pred_locks_per_transaction.

This is very confusing to me. My questions:

  • Why does max_pred_locks_per_transaction affect shared memory?
  • shared_buffers is already 4 GB; should I increase it?
  • Does 'out of shared memory' relate to max_locks_per_transaction * (max_connections + max_prepared_transactions), or to shared_buffers?

Thank you all.

MiH

1 Answer


These error messages indicate that the hash table used to store locks has been exhausted. The table lives in the shared memory segment and has a fixed size, determined at server start as:

max_locks_per_transaction * (
  (max_connections + autovacuum_max_workers + 1 +
     max_worker_processes + max_wal_senders) # this is MaxBackends
  + max_prepared_transactions
)
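
For a rough sense of scale, here is a sketch using assumed, default-ish settings (these numbers are not taken from the question); the second query pulls your server's actual inputs to the formula:

-- Assumed values for illustration: max_connections = 100,
-- autovacuum_max_workers = 3, max_worker_processes = 8,
-- max_wal_senders = 10, max_prepared_transactions = 0
SELECT 1024 * ((100 + 3 + 1 + 8 + 10) + 0) AS lock_table_slots;
-- => 124928

-- Your server's actual inputs to the formula:
SELECT name, setting
FROM pg_settings
WHERE name IN ('max_locks_per_transaction', 'max_connections',
               'autovacuum_max_workers', 'max_worker_processes',
               'max_wal_senders', 'max_prepared_transactions');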

This is not related to shared_buffers at all. The lock table, like many other shared-memory structures, is allocated in addition to the memory needed to accommodate shared_buffers.

Look at pg_locks to find out exactly what the lock table holds in your database.
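
For example, a generic diagnostic query (not specific to this case) that summarizes what is currently in the lock table:

-- Break down the current contents of the lock table
SELECT locktype, mode, count(*) AS locks
FROM pg_locks
GROUP BY locktype, mode
ORDER BY locks DESC;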

Common reasons:

  • excessive use of partitioning: a query on an unpartitioned table with 5 indexes needs 6 locks, but against 1000 partitions the same query can need more than 6000 locks
  • widespread use of serializable transaction isolation: predicate locks show up in pg_locks as mode = SIReadLock, and there can be a lot of them (see the queries after this list)
  • many advisory locks
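
A sketch of how to check the first two cases; the last statement is only relevant if predicate locks turn out to be the problem, and the value shown is illustrative, not a recommendation:

-- Count predicate locks held by serializable transactions
SELECT count(*) FROM pg_locks WHERE mode = 'SIReadLock';

-- Which relations account for the most locks; with heavy partitioning,
-- expect one row per partition and per partition index
SELECT relation::regclass AS rel, count(*) AS locks
FROM pg_locks
WHERE relation IS NOT NULL
GROUP BY relation
ORDER BY locks DESC
LIMIT 20;

-- If predicate locks are the culprit, the relevant knob is
-- max_pred_locks_per_transaction (a restart is required):
ALTER SYSTEM SET max_pred_locks_per_transaction = 256;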
Melkij