How does MySQL result set streaming perform vs fetching the whole JDBC ResultSet at once
Introduction
I read a very interesting article by Krešimir Nesek regarding MySQL result set streaming when it comes to reducing memory usage.
Mark Paluch, from Spring Data, asked if we could turn on MySQL result set streaming by default whenever we are using Query#stream or Query#scroll.
That being said, the HHH-11260 issue was created, and I started working on it. During Peer Review, Steve Ebersole (Hibernate ORM team leader) and Sanne Grinovero (Hibernate Search Team Leader) expressed their concerns regarding making such a change.
First of all, the MySQL result set streaming has the following caveats:
- the ResultSet must be traversed fully before issuing any other SQL statement
- the statement is not closed if there are still records to be read in the associated ResultSet
- the locks associated with the underlying SQL statement that is being streamed are released only when the transaction ends (either by commit or rollback)
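To make the first caveat concrete, here is a minimal plain-JDBC sketch (the table name and the countAllRows helper are hypothetical, not from the original benchmark) of how MySQL streaming is typically enabled by setting the fetch size to Integer.MIN_VALUE:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySqlStreamingSketch {

    // MySQL Connector/J interprets this sentinel value as
    // "stream the result set row by row"
    static final int STREAMING_FETCH_SIZE = Integer.MIN_VALUE;

    // Hypothetical helper: the ResultSet must be drained fully before the
    // same Connection can safely run any other statement (caveat #1 above).
    static long countAllRows(Connection connection) throws Exception {
        try (Statement statement = connection.createStatement()) {
            statement.setFetchSize(STREAMING_FETCH_SIZE);
            try (ResultSet resultSet =
                     statement.executeQuery("SELECT id FROM post")) {
                long rows = 0;
                while (resultSet.next()) {
                    rows++; // keep reading; MySQL holds the connection busy
                }
                return rows;
            }
        }
    }
}
```

While streaming, the connection cannot be reused for other queries, which is exactly why the caveats above matter in connection-pooled applications.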
Why streaming?
In the vast majority of situations, you don’t need result set streaming for the following reasons:
- if you need to process a large volume of data, it’s much more efficient to process it in the database using a stored procedure. This is especially true for Oracle and SQL Server which offer a very solid procedural language.
- if you need to process the data in the application, then batch processing is the way to go. That being said, you only need to select and process small amounts of data at a time. This allows you to prevent long-running transactions, which are undesirable for both 2PL and MVCC database transactions. By splitting the data set into multiple batches, you can better parallelize the data processing task.
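The batch processing idea above can be sketched in plain Java. The fetchBatch helper below is a hypothetical in-memory stand-in for a paginated SELECT; in a real application, each loop iteration would run in its own short transaction:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class BatchProcessingSketch {

    static final int BATCH_SIZE = 500;

    // Hypothetical stand-in for a paginated SELECT: returns the next
    // batch of ids, at most BATCH_SIZE at a time.
    static List<Integer> fetchBatch(List<Integer> allIds, int offset) {
        return allIds.stream()
            .skip(offset)
            .limit(BATCH_SIZE)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> allIds = IntStream.rangeClosed(1, 10_000)
            .boxed()
            .collect(Collectors.toList());

        int processed = 0;
        int batches = 0;
        // Each iteration maps to one short-lived transaction, which keeps
        // locks and client-side memory usage small and allows parallelism.
        for (int offset = 0; ; offset += BATCH_SIZE) {
            List<Integer> batch = fetchBatch(allIds, offset);
            if (batch.isEmpty()) {
                break;
            }
            processed += batch.size();
            batches++;
        }
        System.out.println(processed + " rows in " + batches + " batches");
    }
}
```

Because every batch is an independent unit of work, failed batches can be retried individually, and multiple workers can process disjoint batches concurrently.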
That being said, the only reason you should be using streaming is to restrict the memory allocation on the client side while avoiding executing a new SQL statement for each batch.
However, issuing a new statement that fetches the current batch data can be a real advantage because the query can be paginated. If the filtered data set is fairly large, then you should be using Keyset Pagination, as Markus Winand explains in his SQL Performance Explained book. If the result set is not too large, then OFFSET pagination can be a solution as well.
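The Keyset Pagination loop can be sketched as follows. The nextPage helper is a hypothetical in-memory stand-in for the SQL query shown in its comment; the key point is that each page is located by the last seen key rather than an ever-growing OFFSET:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class KeysetPaginationSketch {

    static final int PAGE_SIZE = 100;

    // In-memory stand-in for:
    //   SELECT id FROM post WHERE id > :lastSeenId ORDER BY id LIMIT :pageSize
    static List<Long> nextPage(List<Long> sortedIds, long lastSeenId) {
        return sortedIds.stream()
            .filter(id -> id > lastSeenId)
            .limit(PAGE_SIZE)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Long> sortedIds = LongStream.rangeClosed(1, 1_000)
            .boxed()
            .collect(Collectors.toList());

        long lastSeenId = 0;
        long totalFetched = 0;
        while (true) {
            List<Long> page = nextPage(sortedIds, lastSeenId);
            if (page.isEmpty()) {
                break;
            }
            totalFetched += page.size();
            // Carry the last key forward instead of increasing an OFFSET
            lastSeenId = page.get(page.size() - 1);
        }
        System.out.println("Fetched " + totalFetched + " rows");
    }
}
```

Because the database can seek directly to WHERE id > :lastSeenId via the primary key index, the cost of each page stays constant, whereas OFFSET pagination must scan and discard all preceding rows.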
Another great advantage of smaller paginated queries is index selectivity. If the filtered data set is rather large, it might be that you cannot benefit from indexing because the execution plan has decided to use a sequential scan instead. Therefore, the streaming query might be slow.
A paginated query that needs to scan a small data set can better take advantage of a database index because the cost of random access might be lower than the one associated with a sequential scan.
How does MySQL streaming perform?
If you’re consuming the whole stream, just as Krešimir Nesek does in his article, then maybe you are better off using batch processing.
Let’s see what’s faster when it comes to consuming the whole ResultSet: the default fetch-all or the streaming alternative.
The default fetch-all is done as follows:

```java
private void stream(EntityManager entityManager) {
    final AtomicLong sum = new AtomicLong();
    try (Stream<Post> postStream = entityManager
            .createQuery("select p from Post p", Post.class)
            .setMaxResults(resultSetSize)
            .unwrap(Query.class)
            .stream()) {
        postStream.forEach(post -> sum.incrementAndGet());
    }
    assertEquals(resultSetSize, sum.get());
}
```
while the JDBC Driver streaming is done using the org.hibernate.fetchSize Hibernate Query hint:

```java
private void stream(EntityManager entityManager) {
    final AtomicLong sum = new AtomicLong();
    try (Stream<Post> postStream = entityManager
            .createQuery("select p from Post p", Post.class)
            .setMaxResults(resultSetSize)
            .setHint(QueryHints.HINT_FETCH_SIZE, Integer.MIN_VALUE)
            .unwrap(Query.class)
            .stream()) {
        postStream.forEach(post -> sum.incrementAndGet());
    }
    assertEquals(resultSetSize, sum.get());
}
```
In order to enable streaming when using MySQL, you either need to set the JDBC fetch size to Integer.MIN_VALUE, or use a positive integer value as long as you also set the useCursorFetch connection property to true. For our test case, either option produced similar results.
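The two configuration options can be summarized in a short sketch (the host, port, and database name in the URL are placeholders, not the benchmark's actual settings):

```java
public class MySqlCursorFetchConfig {

    // Option 2 requires useCursorFetch=true on the connection URL
    // (host, port, and database name here are placeholders)
    static final String JDBC_URL =
        "jdbc:mysql://localhost:3306/testdb?useCursorFetch=true";

    public static void main(String[] args) {
        // Option 1: the Integer.MIN_VALUE sentinel enables row-by-row
        // streaming; no connection URL change is needed
        int streamingFetchSize = Integer.MIN_VALUE;

        // Option 2: a server-side cursor with a positive fetch size,
        // combined with useCursorFetch=true in the URL above
        int cursorFetchSize = 100;

        System.out.println(JDBC_URL
            + " with fetchSize=" + cursorFetchSize
            + ", or default URL with fetchSize=" + streamingFetchSize);
    }
}
```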
The test does a warm-up of 25 000 method calls, and then it executes the stream method 10 000 times while measuring the fetch time using Dropwizard Metrics.
On the y-axis, the diagram shows the 98th percentile recorded by the Dropwizard Timer when consuming the whole ResultSet. On the x-axis, the resultSetSize varies from 1, 2, 5, up to higher values (e.g. 5000).
The response time grows with the result set size. Therefore, in OLTP applications, you should always strive to keep the JDBC ResultSet as small as possible. That’s why batch processing and pagination queries are usually a better alternative than streaming a large result set.
Code available on GitHub.
Conclusion
Steve and Sanne’s assumptions turned out to be right. Streaming performs worse than just fetching the whole ResultSet at once, which is the default strategy for both MySQL and PostgreSQL JDBC drivers.
Therefore, it’s not advisable to make the change proposed by the HHH-11260 Jira issue. That being said, it’s up to you to decide if streaming makes sense for your use case, or whether you should be using batch processing with paginated queries.
