Mastering PostgreSQL 9.6

Observing deadlocks and similar issues

Deadlocks are an important issue and can happen in every database I am aware of. Basically, a deadlock will happen if two transactions have to wait on each other.

In this section, you will see how this can happen. Let us suppose we have a table containing two rows:

CREATE TABLE t_deadlock (id int);
INSERT INTO t_deadlock VALUES (1), (2);

The next listing shows what can happen when two concurrent sessions modify the same rows in opposite order.
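The statements in the following sketch are illustrative; any pair of transactions that lock the same two rows in reverse order will produce the same effect:

-- session 1:
BEGIN;
UPDATE t_deadlock SET id = id * 10 WHERE id = 1;

-- session 2:
BEGIN;
UPDATE t_deadlock SET id = id * 10 WHERE id = 2;

-- session 1 (blocks, waiting for session 2's lock on the row with id = 2):
UPDATE t_deadlock SET id = id * 10 WHERE id = 2;

-- session 2 (now both sessions wait on each other; once the deadlock
-- detector kicks in after deadlock_timeout, PostgreSQL aborts one of
-- the two transactions):
UPDATE t_deadlock SET id = id * 10 WHERE id = 1;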

As soon as the deadlock is detected, the following error message will show up:

ERROR:  deadlock detected 
DETAIL: Process 91521 waits for ShareLock on transaction 903; blocked by process 77185.
Process 77185 waits for ShareLock on transaction 905; blocked by process 91521.
HINT: See server log for query details.
CONTEXT: while updating tuple (0,1) in relation "t_deadlock"

PostgreSQL is even kind enough to tell us which row has caused the conflict. In my example, the root cause of all evil is tuple (0, 1). The value you see here is a ctid, which describes the physical position of a row inside the table. In this example, it is the first row in the first block (block 0).

It is even possible to query this row if it is still visible to your transaction:

test=# SELECT ctid, * FROM t_deadlock WHERE ctid = '(0, 1)';
 ctid  | id
-------+----
 (0,1) |  1
(1 row)

Keep in mind that this query might not return a row if it has already been deleted or modified.

However, deadlocks are not the only thing that can lead to failing transactions. It can also happen that transactions cannot be serialized, for various reasons. The following example shows what can happen. To make the example work, I assume that you still have the two rows with id = 1 and id = 2.
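The following sketch assumes that transaction 1 runs at the REPEATABLE READ isolation level; the exact statements are illustrative:

-- transaction 1:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM t_deadlock;   -- returns id = 1 and id = 2

-- transaction 2:
BEGIN;
DELETE FROM t_deadlock;
COMMIT;

-- transaction 1 keeps reading from its snapshot:
SELECT * FROM t_deadlock;   -- still returns id = 1 and id = 2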

In this example, two concurrent transactions are at work. As long as transaction 1 is only selecting data, everything is fine, because PostgreSQL can easily preserve the illusion of static data. But what happens when the second transaction commits a DELETE? As long as there are only reads, there is still no problem. The trouble begins when transaction 1 tries to delete or modify data that is, at this point, already dead. The only option left for PostgreSQL is to error out:

test=# DELETE FROM t_deadlock;
ERROR:  could not serialize access due to concurrent update

In practice, this means that end users have to be prepared to handle failed transactions. If something goes wrong, a properly written application must be able to try again.