A beginner’s guide to the Phantom Read anomaly, and how it differs between 2PL and MVCC


Introduction

Unlike SQL Server, which by default relies on 2PL (Two-Phase Locking) to implement the SQL standard isolation levels, Oracle, PostgreSQL, and the MySQL InnoDB engine use MVCC (Multi-Version Concurrency Control).

However, providing a truly Serializable isolation level on top of MVCC is difficult, and, in this post, I'll demonstrate why the Phantom Read anomaly is hard to prevent without resorting to pessimistic locking.

Domain Model

For the upcoming examples, we are going to use the following database entities:

[Diagram: the department and employee tables]
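
For reference, here is a minimal sketch of what these two tables might look like; the column names and types are my assumptions, not necessarily the exact schema used in the tests:

    CREATE TABLE department (
        id     BIGINT PRIMARY KEY,
        name   VARCHAR(100),
        budget BIGINT
    );

    CREATE TABLE employee (
        id            BIGINT PRIMARY KEY,
        department_id BIGINT REFERENCES department (id),
        name          VARCHAR(100),
        salary        BIGINT
    );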

The problem

Our company's IT department has a budget of 100 000 per month, which must accommodate all salaries and bonuses so that we never go over budget. Alice, the company CEO, decides to give a bonus to all the employees in the IT department because they've done a very good job on the latest product release. However, the bonus must not push us over the budget. Meanwhile, Bob, the company HR manager, has finally managed to hire Carol, a great developer, and wants to pay her as much as the remaining budget allows.

To illustrate how the Phantom Read anomaly can break serializability, consider the following steps:

  1. Alice reads the sum of all salaries in the IT department, which is 90 000
  2. Bob also reads the sum of all salaries in the IT department and decides to give Carol a salary of 9 000 a month, since the total payroll will then be 99 000, still within budget.
  3. Alice decides to give a 10% bonus to all the employees in the IT department since the total should then be 99 000, right?
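
Expressed as SQL, this interleaving might look as follows. This is only a sketch: it assumes the schema sketched earlier, and the department and employee ids are made up.

    -- Alice:
    BEGIN;
    SELECT SUM(salary) FROM employee WHERE department_id = 1; -- returns 90 000

    -- Bob:
    BEGIN;
    SELECT SUM(salary) FROM employee WHERE department_id = 1; -- returns 90 000
    INSERT INTO employee (id, department_id, name, salary)
    VALUES (4, 1, 'Carol', 9000);
    COMMIT;

    -- Alice:
    UPDATE employee SET salary = salary * 1.1 WHERE department_id = 1;
    COMMIT; -- payroll is now at least 108 000, over the 100 000 budget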

Obviously, one of these two transactions needs to fail to preserve the serializable transaction schedule. This anomaly is called a Phantom Read, and we are going to see how this phenomenon is handled by various RDBMS.

Some might think that this anomaly is not strictly a Phantom Read but rather a Write Skew. However, a Write Skew does not involve a range of records; it involves two distinct tuples written by two concurrent transactions which are nevertheless expected to be updated together as a single unit.

There are two types of strategies a database can use to prevent data integrity phenomena: either it tries to prevent conflicts upfront, by using pessimistic locking, or it allows conflicts to happen but then needs to detect them through optimistic concurrency control.

All the upcoming tests are available on GitHub, so you can easily run them on your favorite RDBMS and verify if a particular isolation level allows a phenomenon that it should normally prevent.

2PL (Two-Phase Locking)

Two-Phase Locking is the oldest concurrency control mechanism that's guaranteed to provide a Serializable transaction schedule. 2PL uses two types of locks: shared (read) and exclusive (write) locks. A shared lock can be acquired by multiple transactions, but it prevents any transaction from acquiring an exclusive lock. An exclusive lock prevents both shared and exclusive locks from being acquired until the exclusive lock is released (at transaction commit or rollback).
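
As an illustration, most databases let you acquire these locks explicitly. The sketch below uses the FOR SHARE / FOR UPDATE clauses supported by PostgreSQL and MySQL 8.0 (older MySQL versions spell the first one LOCK IN SHARE MODE):

    -- shared (read) lock on the matching rows:
    SELECT * FROM employee WHERE department_id = 1 FOR SHARE;

    -- exclusive (write) lock on the matching rows:
    SELECT * FROM employee WHERE department_id = 1 FOR UPDATE;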

In Two-Phase Locking, locks can be acquired either at the row level, to prevent lost updates as well as read and write skews, or over a range of rows, so that phantom reads are prevented.

Next, we are going to see how various databases using the two-phase locking mechanism can prevent our Alice and Bob budget problem.

MySQL

MySQL has multiple storage engines, but we are only interested in the transactional InnoDB engine, which has also been the default storage engine since version 5.5. Even though InnoDB uses MVCC at its core, for the Serializable isolation level, MySQL acquires a shared physical lock on every row or range of rows selected by a given SQL query. Because every table is a clustered index in MySQL, InnoDB uses the underlying index structure to provide record, gap, or even next-key locks.

When rerunning our use case under the MySQL Serializable isolation level, the following outcome is registered:

[Diagram: the Phantom Read use case under 2PL in MySQL]

As previously stated, 2PL employs a conflict prevention mechanism, so Bob's INSERT statement is blocked because Alice is holding a shared predicate lock that spans all the employees in the IT department. Bob's transaction waits for a given period of time, and, because Alice's transaction is still holding the lock, Bob's statement fails with a lock timeout exception.
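
Reconstructed as plain SQL, the interaction goes roughly like this. This is a sketch: the ids are assumptions, and the error text is MySQL's standard lock wait timeout message.

    -- Alice:
    SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    START TRANSACTION;
    SELECT SUM(salary) FROM employee WHERE department_id = 1;
    -- InnoDB takes shared next-key locks on the scanned index range

    -- Bob:
    SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    START TRANSACTION;
    INSERT INTO employee (id, department_id, name, salary)
    VALUES (4, 1, 'Carol', 9000);
    -- blocks behind Alice's shared locks and eventually fails:
    -- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction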

SQL Server uses 2PL by default, so if you want the lock acquisition to fail fast, you can use the NOWAIT directive.
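
For instance, the NOWAIT table hint makes the statement error out as soon as it encounters a conflicting lock instead of waiting (a T-SQL sketch):

    SELECT SUM(salary)
    FROM employee WITH (NOWAIT)
    WHERE department_id = 1;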

MVCC (Multi-Version Concurrency Control)

Locks incur contention, and contention impacts scalability. The relationship between contention and scalability is given by Neil Gunther's Universal Scalability Law (USL). For this reason, researchers have studied complementary concurrency control mechanisms that provide better performance and throughput while still preventing data integrity issues.

However, everything has a price, and MVCC is no different. MVCC is built on the premise that Readers should not block Writers and Writers should not block Readers. For this reason, shared locks are no longer employed, and transactions are allowed to modify entries that other concurrent transactions might have read in the meanwhile. So, MVCC takes an optimistic approach to resolving data integrity issues since conflicts can occur, but they need to be discovered prior to committing a given transaction.

Even if MVCC uses less locking than 2PL, exclusive locks are still acquired every time we modify a record since, otherwise, dirty writes might happen and atomicity would be compromised.

SQL Server also offers two MVCC-based isolation levels: Read Committed Snapshot Isolation and Snapshot Isolation. The difference between these two isolation levels is the point in time used for constructing a stable data snapshot: for Read Committed Snapshot Isolation, the snapshot is relative to the beginning of the currently executing query, while for Snapshot Isolation, the snapshot is relative to the beginning of the currently running transaction.
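
Both modes are enabled at the database level, and Snapshot Isolation is then requested per transaction. A T-SQL sketch (ALTER DATABASE CURRENT targets whichever database you are connected to):

    -- enable the MVCC-based modes on the current database:
    ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;
    ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON;

    -- opt into Snapshot Isolation for a given transaction:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
    -- every query here sees the database as of the transaction start
    COMMIT;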

Compared to Serializable, Snapshot Isolation is a weaker consistency model since it can prevent Dirty Reads, Lost Updates, and Read Skews, but it cannot prevent Write Skews or Phantom Reads (unless it employs predicate or range locks).

Some might argue that preventing Phantom Reads just means returning a consistent point-in-time snapshot but, although that is necessary, it is not sufficient. To prevent a Phantom Row so that the transaction schedule remains serializable, either Alice's or Bob's transaction must be rolled back. If both transactions succeed, the Phantom Read is not actually prevented.

That being said, Snapshot Isolation sits more or less on the same level as Repeatable Read, as illustrated by Kyle Kingsbury's consistency hierarchy diagram.

Oracle offers two MVCC-based isolation levels, Read Committed and Serializable, so there is no 2PL-based concurrency control. Although Oracle calls it Serializable, its highest isolation level is actually a variant of Snapshot Isolation, which is prone to the Write Skew anomaly.

Unlike 2PL, there is no standard way of implementing isolation levels on top of MVCC, so each database uses its own implementation which tries to prevent as many anomalies as possible.

For this reason, it’s worth checking every use case because there might be edge cases when the MVCC algorithm cannot detect a Write Skew or a Phantom Read which would have been prevented in 2PL.

PostgreSQL

Unlike other database engines using MVCC, PostgreSQL goes one step further and implements Serializable Snapshot Isolation (SSI), a very complex concurrency control mechanism that can detect Write Skews.

For our example, PostgreSQL 9.5 is able to detect the Phantom Read phenomenon when using an aggregate function over all employee records in the IT department:

[Diagram: the Phantom Read use case under MVCC in PostgreSQL, using an aggregate query]

PostgreSQL SSI manages to detect the Phantom Read: Alice's transaction is rolled back due to a serialization failure.
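
In SQL terms, the run looks roughly like this (a sketch; the ids are assumptions):

    -- Alice:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT SUM(salary) FROM employee WHERE department_id = 1; -- 90 000

    -- Bob:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT SUM(salary) FROM employee WHERE department_id = 1; -- 90 000
    INSERT INTO employee (id, department_id, name, salary)
    VALUES (4, 1, 'Carol', 9000);
    COMMIT; -- succeeds

    -- Alice:
    UPDATE employee SET salary = salary * 1.1 WHERE department_id = 1;
    -- ERROR: could not serialize access due to read/write dependencies
    -- among transactions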

Returning a result set instead of an aggregated result value

Let’s see what happens if we select the salaries as a result set instead of an aggregated value:

[Diagram: the Phantom Read use case under MVCC in PostgreSQL, returning an unaggregated result set]

PostgreSQL SSI again manages to detect the Phantom Read, and Alice's transaction is rolled back:

[Alice]: PSQLException: ERROR: could not serialize access due to read/write dependencies among transactions
         Detail: Reason code: Canceled on identification as a pivot, during write.
         Hint: The transaction might succeed if retried.
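
The only change compared to the previous run is the shape of the read: instead of the SUM, both transactions would select something like this (a sketch):

    SELECT id, name, salary FROM employee WHERE department_id = 1;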

Returning a result set only in Alice’s transaction

However, if only Alice reads the employee records in the IT department, while Bob just issues the INSERT statement without reading the current employees, PostgreSQL no longer prevents the second write.

[Diagram: the Phantom Read use case under MVCC in PostgreSQL, with only Alice reading the employee records]

Now, you might think that this is an issue with PostgreSQL's implementation of Serializability, but in fact it's not. Serializability means that the two transactions can be reordered so that the outcome is equivalent to one serial execution. In this example, if the two transactions were executed one after the other, with Alice's transaction first, followed by Bob's, the outcome would be exactly the same as in the previous diagram. Moreover, Serializability does not imply any physical-time ordering; that is only guaranteed by Linearizability, hence by Strict Serializability.

Therefore, this is not an anomaly from the database concurrency control perspective, but it might be one from our application logic perspective, so keep that in mind.
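
Concretely, the schedule that commits on both sides looks like this (a sketch):

    -- Alice:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT id, name, salary FROM employee WHERE department_id = 1;

    -- Bob (inserts without reading the current IT employees):
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO employee (id, department_id, name, salary)
    VALUES (4, 1, 'Carol', 9000);
    COMMIT; -- succeeds

    -- Alice:
    UPDATE employee SET salary = salary * 1.1 WHERE department_id = 1;
    COMMIT; -- also succeeds: equivalent to the serial order Alice -> Bob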

All these use cases are properly prevented by MySQL since the shared predicate lock stops Bob from acquiring the exclusive lock needed to insert a new row into the range of records that Alice has already selected. But, due to locking, MySQL offers Strict Serializability (Serializability + Linearizability), hence our issue is prevented.

Conclusion

MVCC is a great concurrency control mechanism, but, because it does not use pessimistic Predicate or Range locks, it must detect anomalies by inspecting the currently running transaction schedule. This is a very complex task, and there might be edge cases where a database engine fails to detect an anomaly that a 2PL-based concurrency control mechanism would otherwise prevent.

Compared to 2PL, which has been around since the late 1970s, the Serializable Snapshot Isolation algorithm is rather new, having been published in 2008 and first introduced in PostgreSQL 9.1 (2011). There is a lot of ongoing research in the field of databases and distributed systems, and, in the future, we might benefit from even more reliable optimistic concurrency control mechanisms. Meanwhile, it's better to understand the trade-offs and limitations of the current implementations to ensure that data integrity is not compromised.

My next article will demonstrate how you can overcome Phantom Read anomalies when the underlying MVCC-based Serializable isolation level cannot prevent it properly.
