We have a projects UI page where users can search projects. I noticed that search queries are slow on the first run (up to 10 seconds) and faster on subsequent runs (around 2 seconds). Could this explain the 2.28% disk hit rate shown below? That seems like a high percentage, right? If we want to ensure these queries are always fast, could we use pg_prewarm to keep the table in memory? Would we need to run pg_prewarm periodically? The projects table is 15 GB and we have over 100 GB of server memory on Heroku.
 table name | disk hits | % disk hits | % cache hits |  total hits
------------+-----------+-------------+--------------+--------------
 all        | 730484143 |        0.32 |        99.68 | 225556991278
 projects   |  34777074 |        2.28 |        97.72 |   1526319404
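If pg_prewarm is the right tool here, this is roughly what I was planning to run (a minimal sketch; I'm assuming the extension can be installed on our Heroku Postgres plan and that warming the whole table is what we want):

    -- Install pg_prewarm and pull the projects table into the cache.
    -- pg_prewarm() returns the number of blocks it read.
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;

    SELECT pg_prewarm('projects');

    -- Indexes would need to be warmed separately, e.g.
    -- (the index name here is just a made-up example):
    -- SELECT pg_prewarm('index_projects_on_name');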
I'm also trying to understand the pg_prewarm setting autoprewarm_interval. What is it used for? Does it mean that, for example, the projects table will be unloaded from memory after that interval?
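This is how I was planning to check the current autoprewarm settings (a sketch; as far as I know these SHOW commands only work when pg_prewarm is loaded via shared_preload_libraries, otherwise the parameters are unrecognized):

    -- Is the autoprewarm background worker enabled?
    SHOW pg_prewarm.autoprewarm;

    -- How often the worker saves its list of cached blocks (default 300s);
    -- I believe this controls the dump interval, not any eviction.
    SHOW pg_prewarm.autoprewarm_interval;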
Here are the memory metrics from the logs (https://devcenter.heroku.com/articles/heroku-postgres-metrics-logs):
6 GB - sample#memory-postgres: Approximate amount of memory used by your database’s Postgres processes in kB. This includes shared buffer cache as well as memory for each connection.
125 GB - sample#memory-total: Total amount of server memory available.
875 MB - sample#memory-free: Amount of free memory available in kB.
114 GB - sample#memory-cached: Amount of memory being used by the OS for page cache, in kB.
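To see how much of the projects table is actually sitting in Postgres's own shared buffers (as opposed to the OS page cache above), I was going to try pg_buffercache (a sketch; assumes the extension is available and the default 8 kB block size):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- MB of the projects table currently held in shared_buffers.
    -- Note: this only covers Postgres's shared buffer cache, not the
    -- 114 GB of OS page cache reported by sample#memory-cached.
    SELECT count(*) * 8 / 1024 AS mb_in_shared_buffers
      FROM pg_buffercache b
      JOIN pg_class c
        ON b.relfilenode = pg_relation_filenode(c.oid)
     WHERE c.relname = 'projects';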