
I have this table in PostgreSQL 9.4:

CREATE TABLE user_operations (
    id SERIAL PRIMARY KEY,
    operation_id integer,
    user_id integer
);

The table covers ~1000-2000 different operations, each of which corresponds to some subset (of approximately 80,000-120,000 elements) of the set S of all users:

S = {1, 2, 3, ... , 122655}

Parameters:

work_mem = 128MB
table_size = 880MB

I also have an index on operation_id.
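It is the idx referenced in the query plan below, i.e. something along these lines:

CREATE INDEX idx ON user_operations (operation_id);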

QUESTION: What would be the optimal plan for querying all distinct user_id values for a significant part of the operation_id set (20%-60%), for example:

SELECT DISTINCT user_id FROM user_operations WHERE operation_id < 500;

It's possible to create more indexes on the table. Currently, the plan for the query is:

HashAggregate  (cost=196173.56..196347.14 rows=17358 width=4) (actual time=1227.408..1359.947 rows=598336 loops=1)
  ->  Bitmap Heap Scan on user_operations  (cost=46392.24..189978.17 rows=2478155 width=4) (actual time=233.163..611.182 rows=2518122 loops=1)
        Recheck Cond: (operation_id < 500)
        ->  Bitmap Index Scan on idx  (cost=0.00..45772.70 rows=2478155 width=0) (actual time=230.432..230.432 rows=2518122 loops=1)
              Index Cond: (operation_id < 500)

Is such a query plan really optimal in these circumstances? I'm not sure about the use of a Bitmap Heap Scan here. I'd appreciate any references to relevant articles.

St.Antario

1 Answer


What would be the optimal plan for querying all distinct user_id values for a significant part of the operation_id set (20%-60%)?

Use a recursive query (an emulated loose index scan):

WITH RECURSIVE cte AS (
   (  -- parentheses are required
   SELECT user_id
   FROM   user_operations
   WHERE  operation_id < 500
   ORDER  BY user_id
   LIMIT  1
   )
   UNION ALL
   SELECT u.user_id
   FROM   cte, LATERAL (
      SELECT user_id
      FROM   user_operations
      WHERE  operation_id < 500
      AND    user_id > cte.user_id  -- lateral reference
      ORDER  BY user_id
      LIMIT  1
      ) u
   )
TABLE cte;

Use it in combination with an index on (user_id, operation_id) - columns in that order (DDL example below). I expect index scans that filter on the second index column. Reasonably accurate table statistics are important, so that Postgres knows it only has to skip a few rows in the index to find the next user_id. Generally, one might want to increase the statistics target for operation_id in particular:

ALTER TABLE user_operations ALTER operation_id SET STATISTICS 1000;

Run ANALYZE on the table afterwards so the new target takes effect. Since there are only ~1000-2000 different operations, this may not even be necessary, but it's a small price to pay.
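For completeness, the matching multicolumn index could be created like this (the index name is only an example):

CREATE INDEX user_operations_user_op_idx ON user_operations (user_id, operation_id);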

Details:

If the predicate operation_id < 500 is stable (always the same), make that a partial index on just (user_id) instead:

CREATE INDEX foo ON user_operations (user_id) WHERE operation_id < 500;

Then statistics on operation_id are not relevant to this query any more.

Even if the predicate is not stable, there may be ways to optimize - depending on the full range of possible conditions and value frequencies.
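For example - purely illustrative, and assuming queries only ever use a small, known set of cutoff values - a few partial indexes could each cover one family of conditions:

CREATE INDEX user_operations_uid_lt_500 ON user_operations (user_id) WHERE operation_id < 500;
CREATE INDEX user_operations_uid_lt_1000 ON user_operations (user_id) WHERE operation_id < 1000;

Postgres can use such a partial index for any query whose predicate implies the index predicate, e.g. operation_id < 300 can use the first one.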

Performance should be ... delicious.

I optimized the technique in this related answer on SO (with detailed explanation):

If you have a separate users table and a large part of all users can be found in your sample, even faster query styles are possible. Details in the linked answer.
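A sketch of that idea - assuming a users table with a user_id column; both names are assumptions here, not part of the question:

SELECT u.user_id
FROM   users u
WHERE  EXISTS (
   SELECT 1
   FROM   user_operations o
   WHERE  o.user_id = u.user_id
   AND    o.operation_id < 500
   );

Each user is verified with a single probe into the index on (user_id, operation_id), which pays off when a large share of all users is present in the sample.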

Erwin Brandstetter