5

I'm having a hard time figuring out exactly how to implement an 'insert if not found' function. Consider the following.

We have a table called artist with two columns, (name, id), where name is unique and id is a serial primary key.
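In SQL the table would be roughly this (a minimal sketch of what I just described):

    CREATE TABLE artist (
        id   serial PRIMARY KEY,
        name text   UNIQUE
    );

It's a contrived example, but it illustrates my problem: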

    SESSION A                     SESSION B
    1.                            SELECT id FROM artist
                                    WHERE name = 'Bob';
    2.  INSERT INTO artist (name)
          VALUES ('Bob')
    3.                            INSERT INTO artist (name)
                                    VALUES ('Bob')
    4.  code that uses 'Bob'
        (e.g., a FK to Bob's ID)
    5.                            ??? Bob already exists, but we
                                  can't find it
    6.  COMMIT

Session B begins by trying to find an artist called Bob, which fails. However, Session A then creates Bob. Session B tries to insert an artist called Bob, which fails because it violates the unique constraint on name. But here's the bit I don't get: if I change operation 3 to be a SELECT on artist, the table still appears empty! This is because I'm using the serializable isolation level, but how can I handle this case?

It seems the only option I have is to abort the entire transaction and try again. If this is the case, should I throw my own 'could not serialize' exception, indicating that the application should retry? I already wanted to wrap this 'find-or-insert' in a PL/pgSQL function, where I would INSERT and, if that failed, SELECT, but it seems impossible to find the conflicting row...
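
For what it's worth, the function I had in mind is roughly this sketch (the function name and signature are made up); the problem is that under SERIALIZABLE the SELECT in the exception handler can't see the row the other session committed:

    CREATE OR REPLACE FUNCTION get_or_create_artist(p_name text) RETURNS integer AS $$
    DECLARE
        v_id integer;
    BEGIN
        INSERT INTO artist (name) VALUES (p_name) RETURNING id INTO v_id;
        RETURN v_id;
    EXCEPTION WHEN unique_violation THEN
        -- Under SERIALIZABLE this SELECT still uses the transaction's snapshot,
        -- so it cannot find the conflicting row another session just committed.
        SELECT id INTO v_id FROM artist WHERE name = p_name;
        RETURN v_id;
    END;
    $$ LANGUAGE plpgsql;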

ocharles
  • 208
  • 2
  • 7

2 Answers

6

This is a bit of a FAQ. You'd find more information if you searched for ON DUPLICATE KEY UPDATE (the MySQL syntax), MERGE (the SQL-standard syntax), or UPSERT. It's surprisingly hard.

The best article I've seen on it yet is Depesz's "why is upsert so complicated". There's also the SO question Insert, on duplicate update (postgresql) which has suggestions but lacks explanation and discussion of the issues.

The short answer is that, yes:

It seems the only option I have is to abort the entire transaction and try again.

When using SERIALIZABLE transactions you just have to re-issue them when they fail. Which they will, by design, and much more frequently on Pg 9.1 and above because of greatly improved conflict detection. Upsert-like operations are very high-conflict, so you may land up retrying quite a bit. If you can do your upserts in READ COMMITTED transactions instead it'll help, but you should still be prepared to retry because there are some unavoidable race conditions.

Let the transaction fail with a unique violation when you insert the conflicting row. If you get a SQLSTATE 23505 unique_violation failure from the transaction and you know you were attempting an upsert, re-try it. If you get a SQLSTATE 40001 serialization_failure you should also retry.
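
Concretely, the two failure modes look something like this (the constraint name shown is just what PostgreSQL generates by default for a unique column; key off the SQLSTATE, not the message text):

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO artist (name) VALUES ('Bob');
    --   may fail with: duplicate key value violates unique constraint "artist_name_key"
    --   (SQLSTATE 23505, unique_violation): re-run the whole transaction
    COMMIT;
    --   may fail with: could not serialize access due to read/write dependencies among transactions
    --   (SQLSTATE 40001, serialization_failure): re-run the whole transaction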

You fundamentally cannot do that retry within a PL/pgSQL function (without dirty hacks like dblink); it must be done application-side. If PostgreSQL had stored procedures with autonomous transactions it'd be possible, but it doesn't. In READ COMMITTED mode you can check for conflicting inserts made since the transaction started, but not ones made after the statement that calls the PL/pgSQL function started, so even in READ COMMITTED your "detect conflict with select" approach simply will not work.

Read depesz's article for a much better and more detailed explanation.

Craig Ringer
  • 57,821
  • 6
  • 162
  • 193
0

In most applications I have worked on, this is something that is possible but rarely occurs with good transaction management. I strongly advise that transactions not be held open while communicating with the user. In most cases, this results in sub-second transaction times. Optimistic locking is your friend here.

Your transactions now become:

  • Users A and B search for Bob and don't find him.
  • Both A and B try to add Bob.
  • B's add arrives first and is committed.
  • A's add arrives later and is dealt with appropriately (design decision).
  • Both A and B can find Bob.

There is a chance of a race condition if both A and B submit their adds at the same time, but in practice this is highly unlikely. Depending on workflow, updates are more likely to encounter this problem. In this case, the last user to submit usually gets a "data updated by another user" type of error. If they get back the updated data, they can retry the update if appropriate. In cases where the second update does not conflict, it can be silently skipped or its changes applied as appropriate.
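
A minimal sketch of how that looks in SQL, assuming a version column added for optimistic locking (not part of the original artist table; the id and version values are just illustrative):

    -- Read the row and its version while no transaction is held open.
    SELECT id, version FROM artist WHERE name = 'Bob';
    -- Suppose that returned id = 42, version = 3. Later, apply the change only
    -- if nobody else has modified the row in the meantime:
    UPDATE artist
       SET name = 'Bob Dylan',
           version = version + 1
     WHERE id = 42
       AND version = 3;
    -- If this reports "UPDATE 0", another user changed the row first: re-fetch it,
    -- show a "data updated by another user" message, and let the user decide
    -- whether to retry.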

Long-running transactions can cause data inconsistencies.

BillThor
  • 4,518
  • 20
  • 12