
My question is simply: have I missed any configuration settings that might let Postgres achieve better read throughput?

Background
I'm modeling a very simple Twitter-like program. My schema has only 3 tables: subscriptions, posts, and timelines.

I have indexes on each of the tables, most importantly on the timeline.time field.
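
Since the question doesn't include the DDL, here is a minimal sketch of what that schema could look like; everything beyond the three table names and the time field is my assumption, not the original schema:

-- Hypothetical schema for the Twitter-like model described above.
CREATE TABLE subscriptions (
    follower_id bigint NOT NULL,   -- assumed column: who follows
    followee_id bigint NOT NULL,   -- assumed column: who is followed
    PRIMARY KEY (follower_id, followee_id)
);

CREATE TABLE posts (
    post_id bigserial PRIMARY KEY,
    user_id bigint NOT NULL,
    body    text   NOT NULL,
    time    timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE timelines (
    user_id bigint NOT NULL,
    post_id bigint NOT NULL REFERENCES posts (post_id),
    time    timestamptz NOT NULL   -- the field the question calls most important
);

CREATE INDEX timelines_time_idx ON timelines (time);
-- If timeline reads are per user, a composite index is often the bigger win:
CREATE INDEX timelines_user_time_idx ON timelines (user_id, time DESC);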

I start postgres with:

postgres -h 127.0.0.1 -p 10000 -D /mnt/tmp/postgres -c synchronous_commit=off \
  -c fsync=off -c full_page_writes=off -c bgwriter_lru_maxpages=0 \
  -c shared_buffers=24GB -c bgwriter_delay=10000 -c checkpoint_segments=600

Note that 24GB is somewhat bigger than my dataset (I tried 36GB with no additional gain). Also, /mnt/tmp is a tmpfs in RAM, so any 'disk' writes should be as fast as possible.
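
One way to verify that the whole dataset really fits in shared_buffers is to compare sizes directly; a minimal sketch, using only standard Postgres functions and the table names from the question:

-- Total on-disk size of the current database vs. the configured cache:
SELECT pg_size_pretty(pg_database_size(current_database()));
SHOW shared_buffers;

-- Per-table size, indexes included:
SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
FROM pg_class
WHERE relname IN ('subscriptions', 'posts', 'timelines');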

Am I missing anything major?

1 Answer

Yes, this is possible. Here's a quick recipe:

  1. Find out what's the slowest part (see the EXPLAIN sketch below)
  2. Make it go faster

If you think this answer is too generic and useless, well, so was the question :) So unless you have more specific issues/problems/questions, I think this is the best we can do.
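
That said, for step 1 the usual starting point is EXPLAIN (ANALYZE, BUFFERS) on whichever query is slow. A minimal sketch against a timeline read like the one the question describes; the column names and literal values are assumptions:

EXPLAIN (ANALYZE, BUFFERS)
SELECT post_id
FROM timelines
WHERE user_id = 42
ORDER BY time DESC
LIMIT 50;

If the plan shows a sequential scan or an explicit sort rather than an index scan, the query and its indexes, not the server settings, are where the read throughput is going.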