
I have a table named assets, and the server has enough CPU, memory, and shared buffers. But when I run a simple query:

EXPLAIN ANALYZE SELECT * FROM assets WHERE leased_to = 'org_name' OR owned_by = 'org_name';

planning takes more than 6 seconds for a new session, but subsequent sessions take less planning time. For your info, leased_to and owned_by are indexed.

EXPLAIN ANALYZE SELECT * FROM assets WHERE leased_to = 'org_name' OR owned_by = 'org_name';
                                                          QUERY PLAN                                                           
-------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on assets  (cost=5.01..102.52 rows=147 width=2040) (actual time=0.060..0.293 rows=147 loops=1)
   Recheck Cond: ((leased_to = 'org_name'::text) OR (owned_by = 'org_name'::text))
   Heap Blocks: exact=56
   ->  BitmapOr  (cost=5.01..5.01 rows=149 width=0) (actual time=0.044..0.046 rows=0 loops=1)
         ->  Bitmap Index Scan on idx_leased_to  (cost=0.00..1.49 rows=32 width=0) (actual time=0.031..0.031 rows=32 loops=1)
               Index Cond: (leased_to = 'org_name'::text)
         ->  Bitmap Index Scan on idx_owned_by  (cost=0.00..3.45 rows=147 width=0) (actual time=0.012..0.012 rows=149 loops=1)
               Index Cond: (owned_by = 'org_name'::text)
 Planning Time: 8732.087 ms
 Execution Time: 0.380 ms
(10 rows)

max_connections = 200
shared_buffers = 1536MB
wal_buffers = 16MB
work_mem = 4096kB
checkpoint_completion_target = 0.9
effective_io_concurrency = 200
maintenance_work_mem = 128MB
effective_cache_size = 1023MB
random_page_cost = 1.1
wal_level = logical
max_replication_slots = 15

Version: postgresql:14.4.0-debian-11-r9

SELECT schemaname, relname, n_dead_tup FROM pg_catalog.pg_stat_all_tables;
     schemaname     |           relname           | n_dead_tup 
--------------------+-----------------------------+------------
 pg_catalog         | pg_statistic                |    1567454
 pg_toast           | pg_toast_2619               |     429588
 public             | assets                      |         65

We have not recently triggered VACUUM manually, as autovacuum is on. We will give it a try.


1 Answer


1.6M dead tuples in pg_statistic is way beyond normal. If you are at liberty to do so:

VACUUM FULL pg_catalog.pg_statistic;

Else, at least a plain VACUUM of the same table:
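
VACUUM pg_catalog.pg_statistic;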

I wonder how you got there. Looks like very aggressive settings for ANALYZE (which writes to that table), but extremely lax settings for VACUUM (at least for the system table pg_statistic) - or some locking issue that prevented VACUUM.
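
To narrow that down, you could inspect the global autovacuum / analyze settings, any per-table overrides on pg_statistic, and when it was last vacuumed. A rough sketch using standard catalog views (the exact list of settings to look at is up to you):

SELECT name, setting
FROM   pg_settings
WHERE  name IN ('autovacuum'
              , 'autovacuum_vacuum_scale_factor'
              , 'autovacuum_vacuum_threshold'
              , 'autovacuum_naptime'
              , 'default_statistics_target');

SELECT reloptions   -- per-table autovacuum overrides, if any
FROM   pg_class
WHERE  oid = 'pg_catalog.pg_statistic'::regclass;

SELECT last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM   pg_stat_all_tables
WHERE  schemaname = 'pg_catalog'
AND    relname = 'pg_statistic';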

Repeated invocations getting faster planning times points to a caching effect. Rows in pg_statistic are comparatively big; I measured an average of 600 bytes per row in my test DB. Your mileage may vary, especially in case of an increased statistics target (?). Your shared_buffers setting of 1.5 GB should still be enough to cache the table, but we don't know how much cache memory is needed for other stuff. And the cache needs to be populated first, which may be what causes the excessive planning time. That, in turn, points to a lack of RAM or very low activity in your DB, so the table got evicted from cache earlier. Getting rid of the bloat will help across the board.
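
If you want to verify those numbers on your side, a rough size check could look like this (the average is approximate since it includes dead tuples and free space, and reading pg_statistic directly typically requires superuser privileges):

SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_statistic')) AS total_size
     , pg_relation_size('pg_catalog.pg_statistic') / NULLIF(count(*), 0) AS avg_bytes_per_row
FROM   pg_catalog.pg_statistic;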

Either way, also upgrade to a current version of Postgres, as commented.

Erwin Brandstetter