PostgreSQL SERIAL column and Hibernate IDENTITY generator


Introduction

When using PostgreSQL, it’s tempting to use a SERIAL or BIGSERIAL column type to auto-increment Primary Keys.

However, this article will show you that this is not a very good idea when using JPA and Hibernate.

SERIAL or BIGSERIAL

If you’ve been using MySQL, you know that AUTO_INCREMENT is a very popular choice. When migrating to PostgreSQL, you will notice that SERIAL or BIGSERIAL column types can be used just like AUTO_INCREMENT in MySQL.

SERIAL is an auto-incremented integer column taking 4 bytes, while BIGSERIAL is an auto-incremented bigint column taking 8 bytes. Behind the scenes, PostgreSQL uses a sequence generator to generate the SERIAL column values upon inserting a new row.

Domain model

Now, assuming we have the following post table:

CREATE TABLE post (
    id  SERIAL NOT NULL, 
    title VARCHAR(255), 
    PRIMARY KEY (id)
)

For this table, PostgreSQL creates a sequence called post_id_seq that is associated with the id SERIAL column.
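
In fact, SERIAL is just a shorthand. The table definition above expands to roughly the following schema (a sketch of the documented PostgreSQL behavior):

CREATE SEQUENCE post_id_seq;

CREATE TABLE post (
    id INTEGER NOT NULL DEFAULT nextval('post_id_seq'),
    title VARCHAR(255),
    PRIMARY KEY (id)
);

ALTER SEQUENCE post_id_seq OWNED BY post.id;

The OWNED BY clause ties the sequence to the column so that dropping the table drops the sequence as well.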

So, when inserting a post row, the INSERT statement can simply omit the id column:

INSERT INTO post (title) 
VALUES ('High-Performance Java Persistence')

The id column is also the Primary Key of the post table, and, since it uses the SERIAL column type, it will automatically be assigned the next value of the underlying post_id_seq sequence generator.

To map the post table, we need a Post entity class whose id property uses the GenerationType.IDENTITY strategy, since the SERIAL type acts just like an AUTO_INCREMENT column:

@Entity(name = "Post")
@Table(name = "post")
public class Post {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    private String title;

    //Getters and setters omitted for brevity
}

Now, to see how the post_id_seq is used, consider the following test case:

Post post1 = new Post();
post1.setTitle(
    "High-Performance Java Persistence, Part 1"
);

entityManager.persist(post1);

Post post2 = new Post();
post2.setTitle(
    "High-Performance Java Persistence, Part 2"
);

entityManager.persist(post2);

entityManager.flush();
assertEquals(
    2,
    (
        (Number) entityManager
        .createNativeQuery(
            "select currval('post_id_seq')")
        .getSingleResult()
    ).intValue()
);

After inserting 2 Post entities and flushing the Persistence Context, the current value of the post_id_seq database sequence is going to be 2, and the next sequence value is going to be 3.

JDBC batch inserts

As convenient as it might look, using the IDENTITY generator with Hibernate is not without issues.

If we enable JDBC batching:

<property name="hibernate.jdbc.batch_size" value="5"/>
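
For reference, this property is typically declared in persistence.xml. A minimal sketch, with a hypothetical persistence-unit name:

<persistence-unit name="postgres-pu">
    <properties>
        <property name="hibernate.jdbc.batch_size" value="5"/>
    </properties>
</persistence-unit>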

Now, if we persist 3 Post entities:

for (int i = 0; i < 3; i++) {
    Post post = new Post();
    post.setTitle(
        String.format(
            "High-Performance Java Persistence, Part %d", 
            i + 1
        )
    );
    entityManager.persist(post);
}

Hibernate is going to generate the following SQL INSERT statements:

INSERT INTO post (title) 
VALUES ('High-Performance Java Persistence, Part 1')

INSERT INTO post (title) 
VALUES ('High-Performance Java Persistence, Part 2')

INSERT INTO post (title) 
VALUES ('High-Performance Java Persistence, Part 3')

So, JDBC batching is effectively disabled when inserting entities that use the IDENTITY generator.

This is because, when persisting the entity, Hibernate needs to know the entity identifier in order to generate the key under which the entity is stored in the currently running Persistence Context.

To know the identifier, Hibernate needs to execute the INSERT statement right away, so, by the time the Persistence Context is flushed, all the INSERT statements have already been executed. Therefore, Hibernate can no longer batch them.
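
With a SEQUENCE-based identifier, by contrast, Hibernate can fetch the identifiers upon persist and postpone the INSERT statements until flush time, where they can be grouped into a single JDBC batch. The same loop would then generate statements along these lines (a sketch, assuming an allocationSize of 1 and identifiers starting from 1):

select nextval('post_id_seq')
select nextval('post_id_seq')
select nextval('post_id_seq')

-- executed as a single JDBC batch at flush time
INSERT INTO post (id, title) 
VALUES (1, 'High-Performance Java Persistence, Part 1')

INSERT INTO post (id, title) 
VALUES (2, 'High-Performance Java Persistence, Part 2')

INSERT INTO post (id, title) 
VALUES (3, 'High-Performance Java Persistence, Part 3')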


Conclusion

Although convenient, and even suggested in many PostgreSQL books, the SERIAL and BIGSERIAL column types are not a very good choice when using JPA and Hibernate. Using a SEQUENCE generator is a better alternative since the identifier can be generated prior to executing the INSERT statement.
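
For instance, the Post entity could be remapped to use the existing post_id_seq sequence explicitly. A minimal sketch (the post_seq generator name is arbitrary):

@Entity(name = "Post")
@Table(name = "post")
public class Post {

    @Id
    @GeneratedValue(
        strategy = GenerationType.SEQUENCE,
        generator = "post_seq"
    )
    @SequenceGenerator(
        name = "post_seq",
        sequenceName = "post_id_seq",
        //1 matches the INCREMENT of a SERIAL-created sequence
        allocationSize = 1
    )
    private Integer id;

    private String title;

    //Getters and setters omitted for brevity
}

Note that allocationSize must match the INCREMENT of the database sequence, which is 1 for a sequence created by a SERIAL column.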

Behind the scenes, the SERIAL and BIGSERIAL column types use a database sequence anyway, so the only difference is that the SEQUENCE generator calls the sequence in a separate database roundtrip. However, this can also be optimized with the pooled and pooled-lo optimizers.
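
For example, raising allocationSize to 50 activates the pooled optimizer by default in Hibernate 5, provided the database sequence INCREMENT is changed to match (e.g., ALTER SEQUENCE post_id_seq INCREMENT BY 50). The pooled-lo variant can then be requested globally via a configuration property (an assumption: this setting requires Hibernate 5.3 or later):

<property name="hibernate.id.optimizer.pooled.preferred" value="pooled-lo"/>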

If the database server is close to the application servers and the network is fast, the extra database roundtrip is not going to be a performance bottleneck. For all these reasons, you should prefer the SEQUENCE generator over IDENTITY, whether you use PostgreSQL, Oracle, or SQL Server.


2 thoughts on “PostgreSQL SERIAL column and Hibernate IDENTITY generator”

  1. Interesting, Vlad!

    I usually use the IDENTITY generator with PostgreSQL, except when I know up-front that I’ll need some kind of batch processing for the specific entities. Of course, after reading your book I have been favoring sequences over auto-increment PKs as much as I can – that’s the best strategy in most cases.

    In my opinion, round-trips are an issue for high-performance systems; that’s one of the reasons we favor the SEQUENCE generator over the IDENTITY one. I mean, although generating a sequence value in the database is not as expensive as inserting a row in a table, we still have the issue of latency in this scenario. But, as you well said, configuring a pooled-lo identifier generator helps to lower those round-trips.

    I am just wondering what would be a good and optimized configuration for the pooled-lo identifier. Any suggestion?

    So, great post!

    1. Measuring is the best way to know the best settings. It’s all based on how much data you add. Gaps are not a problem. You can just set the max expected insert count per transaction and then you only get 1 extra roundtrip, which is actually very fast on the DB side. If the networking has good latency, it shouldn’t be a problem.
