I will answer from a developer's point of view.
In my opinion, when you encounter row contention like the one you describe, it is because you have a bug in your application: in most cases this type of contention is the sign of a lost-update vulnerability. This thread on AskTom explains the concept of a lost update:
A lost update happens when:

session 1: read out Tom's Employee record
session 2: read out Tom's Employee record
session 1: update Tom's employee record
session 2: update Tom's employee record

Session 2 will OVER WRITE session 1's changes without ever seeing them -- resulting in a lost update.
You have experienced one nasty side effect of a lost update: session 2 can be blocked because session 1 has not committed yet. The main problem, however, is that session 2 blindly updates the record. Suppose that both sessions issue the statement:
UPDATE table SET col1=:col1, ..., coln=:coln WHERE id = :pk
After both statements have run, session 1's modifications have been overwritten, without session 2 ever having been notified that the row had been modified by session 1.
Lost updates (and their contention side effect) should never happen; they are 100% avoidable. You should use locking to prevent them, with two main methods: pessimistic and optimistic locking.
1) Pessimistic Locking
You want to update a row. In this mode you prevent others from modifying the row by requesting a lock on it (a SELECT ... FOR UPDATE NOWAIT statement). If the row is already being modified, you get an error, which you can gracefully translate for the end user ("this row is being modified by another user"). If the row is available, make your modifications (UPDATE), then commit whenever your transaction is complete.
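As a rough sketch of that flow, assuming a hypothetical EMPLOYEES table with an EMP_ID primary key and a SALARY column (your table, columns, and bind variables will differ):

-- 1) try to lock the row; fail immediately if another session holds it
SELECT emp_id, salary
FROM employees
WHERE emp_id = :pk
FOR UPDATE NOWAIT;   -- raises ORA-00054 if the row is already locked

-- 2) the lock was granted: apply the change, then commit when the
--    transaction is complete (the COMMIT releases the lock)
UPDATE employees
SET salary = :new_salary
WHERE emp_id = :pk;

COMMIT;

Catching ORA-00054 in the application is what lets you show the "this row is being modified by another user" message instead of waiting.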
2) Optimistic Locking
You want to update a row, but you don't want to hold a lock on it, perhaps because you use several transactions to update it (a web-based, stateless application), or because you don't want any user to hold a lock for too long (which may result in other people being blocked). In that case you will not request a lock right away. Instead, you use a marker to make sure the row has not changed by the time your update is issued. You could cache the values of all columns, or use a timestamp column that is updated automatically, or a sequence-based version column. Whatever your choice, when you are about to perform your update, make sure the marker on that row has not changed by issuing a query like:
SELECT <...>
FROM table
WHERE id = :id
AND marker = :marker
FOR UPDATE NOWAIT
If the query returns a row, make your update. If it does not, this means that someone has modified the row since the last time you queried it. You will have to restart the process from the beginning.
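Put end to end, the optimistic flow looks roughly like this, again with the hypothetical EMPLOYEES table and a sequence-based VERSION_NBR column as the marker (names chosen here only for illustration):

-- 1) initial read: remember the marker along with the data
SELECT emp_id, salary, version_nbr
FROM employees
WHERE emp_id = :pk;

-- ... the user edits the data, possibly across several stateless requests ...

-- 2) just before updating, re-check the marker and lock the row
SELECT emp_id
FROM employees
WHERE emp_id = :pk
AND version_nbr = :version_nbr
FOR UPDATE NOWAIT;
-- no row returned  -> the row changed since step 1: restart from the beginning
-- ORA-00054 raised -> the row is being modified right now

-- 3) apply the change, bump the marker, commit
UPDATE employees
SET salary = :new_salary,
    version_nbr = version_nbr + 1
WHERE emp_id = :pk;

COMMIT;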
Note: if you have complete trust in all applications that access your DB, you can rely on a direct update for optimistic locking. You could issue directly:
UPDATE table
SET <...>,
    marker = marker + 1
WHERE id = :id
AND marker = :marker;
If the statement updates no row, you know that someone has changed the row (the marker no longer matches) and you need to start all over.
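In PL/SQL, for instance, the "no row updated" case can be detected with SQL%ROWCOUNT (still using the hypothetical EMPLOYEES/VERSION_NBR names from above):

BEGIN
  UPDATE employees
  SET salary = :new_salary,
      version_nbr = version_nbr + 1
  WHERE emp_id = :pk
  AND version_nbr = :version_nbr;

  IF SQL%ROWCOUNT = 0 THEN
    -- the row was changed (or deleted) by another session:
    -- tell the caller to re-read the data and start over
    RAISE_APPLICATION_ERROR(-20001, 'row modified by another session');
  END IF;

  COMMIT;
END;
/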
If all applications agree on this scheme, you will never be blocked by someone else and you avoid the blind update. However, if you don't lock the row beforehand, you are still susceptible to being blocked indefinitely if another application, batch job, or direct update doesn't implement optimistic locking. This is why I advise always locking the row, whichever locking scheme you choose (the performance hit can be negligible, since the locking query can retrieve all the values you need, including the rowid).
TL;DR
- Updating a row without holding a lock on it beforehand exposes the application to potential "freezing". This can be avoided if all DML against the DB implements optimistic or pessimistic locking.
- Before updating, verify that the locking SELECT returns values consistent with any previous SELECT (to avoid the lost-update problem).