How to customize the JDBC batch size for each Persistence Context with Hibernate


JDBC batching has a significant impact on reducing transaction response time. As previously explained, you can enable batching for INSERT, UPDATE and DELETE statements with just one configuration property:

<property name="hibernate.jdbc.batch_size" value="5"/>

However, this setting affects every Persistence Context, therefore every business use case inherits the same JDBC batch size. Although the hibernate.jdbc.batch_size configuration property is extremely useful, it would be great if we could customize the JDBC batch size on a per Persistence Context basis. This article demonstrates how easily you can accomplish this task.

Time to upgrade

Hibernate 5.2 adds support for customizing the JDBC batch size at the Persistence Context level, as illustrated by the following example:

int entityCount = 20;

doInJPA(entityManager -> {
    entityManager
        .unwrap(Session.class)
        .setJdbcBatchSize(10);

    for ( long i = 0; i < entityCount; ++i ) {
        Post post = new Post( i,
            String.format( "Post nr %d", i )
        );
        entityManager.persist( post );
    }
});

In the test case above, the Hibernate Session is configured to use a JDBC batch size of 10.

When inserting 20 Post entities, Hibernate is going to generate the following SQL statements:

INSERT INTO post (name, id)
VALUES
    ('Post nr 0', 0), ('Post nr 1', 1),
    ('Post nr 2', 2), ('Post nr 3', 3),
    ('Post nr 4', 4), ('Post nr 5', 5),
    ('Post nr 6', 6), ('Post nr 7', 7),
    ('Post nr 8', 8), ('Post nr 9', 9)

INSERT INTO post (name, id)
VALUES
    ('Post nr 10', 10), ('Post nr 11', 11),
    ('Post nr 12', 12), ('Post nr 13', 13),
    ('Post nr 14', 14), ('Post nr 15', 15),
    ('Post nr 16', 16), ('Post nr 17', 17),
    ('Post nr 18', 18), ('Post nr 19', 19)

As you can see, the JDBC batch size allows us to execute only 2 database roundtrips instead of 20.
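As a quick sanity check, the number of round trips is just the entity count divided by the batch size, rounded up. A minimal sketch of that arithmetic (the class and method names here are illustrative, not part of any Hibernate API):

```java
public class BatchMath {

    // Number of JDBC round trips needed to flush entityCount statements
    // when the driver groups them into batches of batchSize.
    static int roundTrips(int entityCount, int batchSize) {
        return (entityCount + batchSize - 1) / batchSize; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(20, 10)); // 2 round trips, as in the example above
        System.out.println(roundTrips(20, 1));  // 20 round trips without batching
    }
}
```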



The Session-level JDBC batch size configuration is a very useful feature that Hibernate 5.2 has to offer, and you should definitely use it to tailor the JDBC batch size based on the underlying business use case requirements.
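When tailoring the batch size per use case, the write loop is typically paired with a flush every batch-size entities so that each flush emits exactly one full batch. The chunking itself is plain arithmetic and can be sketched without Hibernate (all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {

    // Split a list into consecutive chunks of at most batchSize elements,
    // mirroring how a Session-level JDBC batch size groups statements:
    // one flush per chunk means one database round trip per chunk.
    static <T> List<List<T>> chunks(List<T> items, int batchSize) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            result.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 20; i++) ids.add(i);
        System.out.println(chunks(ids, 10).size()); // 2 chunks of 10, i.e. 2 round trips
    }
}
```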



6 thoughts on “How to customize the JDBC batch size for each Persistence Context with Hibernate”

  1. Hi Vlad,
    I tried to do a batch UPDATE via Hibernate, but it is really slow because it executes a SELECT before each UPDATE (it executes this SELECT to attach the entity). Because the update is so slow, I decided to use the nasty native JDBC solution (JdbcTemplate.batchUpdate) and the IDENTITY generation strategy. Maybe I am missing something. I would appreciate it if you looked into the following:

    Updating 200 simple entities took 41 seconds.

    I am using Hibernate 5.2 and SQL Server. I set both “hibernate.order_inserts” and “hibernate.order_updates” to true, and set the batch_size property as well.

    Here is my code

    Very simple entity

    @Entity
    public class BaselineTest {

        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "hilo_sequence_generator")
        @GenericGenerator(
                name = "hilo_sequence_generator",
                strategy = "enhanced-sequence", // the strategy value was stripped from the original comment
                parameters = {
                        @Parameter(name = "sequence_name", value = "hilo_seqeunce"),
                        @Parameter(name = "initial_value", value = "1"),
                        @Parameter(name = "increment_size", value = "3"),
                        @Parameter(name = "optimizer", value = "hilo")
                }
        )
        private Long id;

        @Column(name = "UpdatedBy")
        private String updatedBy;
    }

    Repository code:

    @PersistenceContext(unitName = "entityManagerFactory")
    private EntityManager entityManager;

    public Collection<T> bulkSave(Collection<T> entities, int batchSize) {
        final List<T> savedEntities = new ArrayList<T>(entities.size());
        int i = 0;
        for (T t : entities) {
            savedEntities.add(persistOrMerge(t));
            i++;
            if (i % batchSize == 0) {
                // flush a batch of inserts/updates and release memory
                // (this body was missing from the original comment)
                entityManager.flush();
                entityManager.clear();
            }
        }
        return savedEntities;
    }

    private T persistOrMerge(T t) {
        if (t.getId() == null) {
            entityManager.persist(t);
            return t;
        } else {
            return entityManager.merge(t);
        }
    }
    Thanks so much for your time,

  2. Totally appreciate your answer. When I used session.update, the update worked perfectly, as you said. Following that:

    The batch insert worked perfectly for me as well and was fast. I used enhanced-sequence, as follows:

    @GenericGenerator(
            name = "ID_GENERATOR_POOLED",
            strategy = "enhanced-sequence",
            parameters = {
                    @Parameter(name = "sequence_name", value = "JPWH_SEQUENCE"),
                    @Parameter(name = "increment_size", value = "500"),
                    @Parameter(name = "optimizer", value = "pooled-lo")
            }
    )
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "ID_GENERATOR_POOLED")
    private Long id;

    SQL server 2012
    Hibernate 5.2.10.Final

    I ran into a problem where Hibernate creates a table instead of a sequence. I first tried to let Hibernate create the schema for me (i.e. using “”), but it creates a table, not a sequence.

    I then created the sequence manually (i.e. “CREATE SEQUENCE …. AS BigInt …”) and removed the property “”, but it still did not work, because Hibernate executed this query and then got this exception:

    select next_val as id_val
    from JPWH_SEQUENCE with (updlock, …

    2017-07-01 19:49:51.365 [main] … could not read a hi value

    It is clear that Hibernate tries to read from a table, not a sequence.

    From the Hibernate documentation book: “enhanced-sequence—Uses a native database sequence when supported; otherwise falls back to an extra database table with a single column and row, emulating a sequence”

    SQL Server 2012 supports sequences, so why does Hibernate create a table here? What am I missing?
    The idea is that I don’t want a table, as you recommended, and I don’t want to have a table for each entity.

