How to customize the JDBC batch size for each Persistence Context with Hibernate
Introduction
JDBC batching has a significant impact on reducing transaction response time. As previously explained, you can enable batching for INSERT, UPDATE and DELETE statements with just one configuration property:
<property name="hibernate.jdbc.batch_size" value="5"/>
However, this setting affects every Persistence Context, therefore every business use case inherits the same JDBC batch size. Although the hibernate.jdbc.batch_size configuration property is extremely useful, it would be great if we could customize the JDBC batch size on a per Persistence Context basis. This article demonstrates how easily you can accomplish this task.
Time to upgrade
Hibernate 5.2 adds support for customizing the JDBC batch size at the Persistence Context level, as illustrated by the following example:
int entityCount = 20;

doInJPA(entityManager -> {
    entityManager
        .unwrap(Session.class)
        .setJdbcBatchSize(10);

    for (long i = 0; i < entityCount; ++i) {
        Post post = new Post(
            i,
            String.format("Post nr %d", i)
        );
        entityManager.persist(post);
    }
});
In the test case above, the Hibernate Session is configured to use a JDBC batch size of 10.
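As a side note, the session-level setting can also be inspected and reverted at runtime. A minimal sketch, assuming the `getJdbcBatchSize` accessor returns the session-level override (and that passing `null` to `setJdbcBatchSize` falls back to the global `hibernate.jdbc.batch_size` setting):

```java
// Sketch: inspecting and reverting the Session-level JDBC batch size.
Session session = entityManager.unwrap(Session.class);

// Override the batch size for this Persistence Context only
session.setJdbcBatchSize(10);

// Read back the session-level override
Integer batchSize = session.getJdbcBatchSize();

// Revert to the SessionFactory-level hibernate.jdbc.batch_size setting
session.setJdbcBatchSize(null);
```

Because the override is scoped to the `Session`, other Persistence Contexts running concurrently are unaffected.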
When inserting 20 Post entities, Hibernate is going to generate the following SQL statements:
INSERT INTO post
(name, id)
VALUES
('Post nr 0', 0), ('Post nr 1', 1),
('Post nr 2', 2), ('Post nr 3', 3),
('Post nr 4', 4), ('Post nr 5', 5),
('Post nr 6', 6), ('Post nr 7', 7),
('Post nr 8', 8), ('Post nr 9', 9)
INSERT INTO post
(name, id)
VALUES
('Post nr 10', 10), ('Post nr 11', 11),
('Post nr 12', 12), ('Post nr 13', 13),
('Post nr 14', 14), ('Post nr 15', 15),
('Post nr 16', 16), ('Post nr 17', 17),
('Post nr 18', 18), ('Post nr 19', 19)
As you can see, the JDBC batch size allows us to execute only 2 database roundtrips instead of 20.
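The roundtrip count follows directly from the batch size: Hibernate flushes a batch every time it accumulates `batchSize` statements, so 20 inserts with a batch size of 10 require ceil(20 / 10) = 2 roundtrips. The arithmetic can be sketched as follows (`BatchMath` and `roundTrips` are illustrative names, not part of Hibernate):

```java
// Illustrates how the JDBC batch size determines the number of
// database roundtrips: one executeBatch() call per full (or final
// partial) batch.
public class BatchMath {

    static int roundTrips(int statementCount, int batchSize) {
        // Ceiling division: partial batches still cost a roundtrip
        return (statementCount + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(20, 10)); // 2 roundtrips, as in the log above
        System.out.println(roundTrips(20, 1));  // 20 roundtrips without batching
    }
}
```

This is why increasing the batch size beyond the number of statements in a given flush brings no further benefit: the roundtrip count bottoms out at one per flush.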
Conclusion
The Session-level JDBC batch size configuration is a very useful feature introduced in Hibernate 5.2, and you should definitely use it to tailor the JDBC batch size to the requirements of each business use case.