The best way to log JDBC statements

Introduction

Whenever you are using a data access framework that auto-generates statements on your behalf, logging all statements is mandatory, both to verify their effectiveness and to assess their possible performance implications.

Hibernate logging

Hibernate defines the hibernate.show_sql configuration property to enable statement logging. Unfortunately, the output goes straight to the console, which makes it very difficult to filter appropriately. A much better approach is to enable SQL statement logging using a log adapter:

<logger name="org.hibernate.SQL" level="debug"/>

Because Hibernate uses PreparedStatement(s) exclusively, the bind parameter values are not available when the statement gets printed into the log:

INSERT INTO post (title, version, id) VALUES (?, ?, ?)

Although bind parameters might be logged separately (e.g. via the org.hibernate.type.descriptor.sql log category), the most straightforward way of logging SQL statements along with their runtime bind parameter values is to use an external DataSource proxy. Because the proxy intercepts all statement executions, the bind parameter values can be introspected and printed as well.
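
For reference, that bind-parameter alternative looks like this in the log configuration (the package name assumes Hibernate 5, where parameter values are logged at the TRACE level):

<logger name="org.hibernate.SQL" level="debug"/>
<logger name="org.hibernate.type.descriptor.sql" level="trace"/>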

Either the JDBC Driver or the DataSource must be proxied to intercept statement executions and log them along with the actual parameter values. Besides statement logging, a JDBC proxy can provide other cross-cutting features like long-running query detection or custom statement execution listeners.

DataSource-proxy

A lesser-known JDBC logging framework, datasource-proxy provides support for custom JDBC statement execution listeners. In Java EE, not all application servers allow configuring an external DataSource, as they rely on their own custom implementations that bind the user-supplied JDBC Driver. Because it can only decorate a DataSource, datasource-proxy might not be suitable for all Java EE environments.

[Figure: datasource-proxy architecture]

On the other hand, the programmatic configuration support fits the Java-based configuration approach taken by most modern Spring applications:

@Bean 
public DataSource dataSource() {
    SLF4JQueryLoggingListener loggingListener = new SLF4JQueryLoggingListener();
    loggingListener.setQueryLogEntryCreator(new InlineQueryLogEntryCreator());
    return ProxyDataSourceBuilder
        .create(actualDataSource())
        .name(DATA_SOURCE_PROXY_NAME)
        .listener(loggingListener)
        .build();
}

In the following example, datasource-proxy is used to log a batch insert of three PreparedStatement(s). Although a batch is normally printed on a single log line, the output below was split into multiple lines for readability.

Name:DATA_SOURCE_PROXY, Time:6, Success:True, 
Type:Prepared, Batch:True, QuerySize:1, BatchSize:3, 
Query:["insert into post (title, version, id) values (?, ?, ?)"], 
Params:[(Post no. 0, 0, 0), (Post no. 1, 0, 1), (Post no. 2, 0, 2)]

Not only are the bind parameter values now present, but, because they are grouped together, it is also very easy to visualize the batching mechanism.

With the custom statement listener support, datasource-proxy allows building a query count validator to assert the auto-generated statement count and, therefore, prevent N+1 query problems during the development phase.
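
A minimal sketch of that idea can be built on top of datasource-proxy's own count listener (DataSourceQueryCountListener and QueryCountHolder are assumed from the datasource-proxy API, and the repository call is just a hypothetical placeholder):

@Bean
public DataSource dataSource() {
    // record per-thread statement counts on top of the logging listener
    return ProxyDataSourceBuilder
        .create(actualDataSource())
        .name(DATA_SOURCE_PROXY_NAME)
        .listener(new DataSourceQueryCountListener())
        .build();
}

The counters can then be asserted in an integration test after exercising the code under test:

QueryCountHolder.clear();
// exercise the data access logic under test (hypothetical repository call)
List<Post> posts = postRepository.findAllWithComments();
// fail the build if more statements were executed than expected
assertEquals(1, QueryCountHolder.getGrandTotal().getSelect());

Since the counters are typically held per thread, clearing them at the start of each test keeps the assertions isolated.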

P6spy

P6Spy was released in 2002, in an era when J2EE application servers were ruling the world of enterprise systems. Because Java EE application servers do not allow programmatic DataSource configuration, P6Spy supports a declarative configuration approach (through a spy.properties file).
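
A minimal spy.properties sketch might look as follows (the driver class and the SLF4J appender are assumptions for an H2-based setup; adjust them to your own environment):

# the real JDBC driver(s) P6Spy should delegate to
driverlist=org.h2.Driver
# route the statement log to SLF4J instead of the default spy.log file
appender=com.p6spy.engine.spy.appender.Slf4JLogger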

P6Spy can proxy either a JDBC Driver (which is suitable for Java EE applications) or a JDBC DataSource (supported by some Java EE containers and common practice for Spring enterprise applications).

[Figure: P6Spy architecture]
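
For the DataSource-based setup, a sketch in the spirit of the earlier Spring configuration simply wraps the actual pool in P6Spy's P6DataSource (actualDataSource() being the same hypothetical helper as before):

@Bean
public DataSource dataSource() {
    // every Connection and Statement obtained from the pool
    // is proxied by P6Spy and logged according to spy.properties
    return new P6DataSource(actualDataSource());
}

For the Driver-based setup, the JDBC URL is prefixed with jdbc:p6spy: and com.p6spy.engine.spy.P6SpyDriver is registered as the driver class instead.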

Running the previous batch insert example through P6Spy gives the following output (formatting was also applied):

p6spy - 1448122491807|0|batch|connection 7|
    insert into post (title, version, id) values (?, ?, ?)|
    insert into post (title, version, id) values ('Post no. 0', 0, 0)
p6spy - 1448122491807|0|batch|connection 7|
    insert into post (title, version, id) values (?, ?, ?)|
    insert into post (title, version, id) values ('Post no. 1', 0, 1)
p6spy - 1448122491807|0|batch|connection 7|
    insert into post (title, version, id) values (?, ?, ?)|
    insert into post (title, version, id) values ('Post no. 2', 0, 2)
p6spy - 1448122491812|5|statement|connection 7|
    insert into post (title, version, id) values (?, ?, ?)|
    insert into post (title, version, id) values ('Post no. 2', 0, 2)

In order of occurrence, each log entry is built out of the following fields:

  • Timestamp: the statement execution timestamp
  • Execution time: the statement execution duration (in milliseconds)
  • Category: the current statement category (e.g. statement, batch)
  • Connection: the database connection identifier (as assigned by P6Spy)
  • Original statement: the original statement that was intercepted by P6Spy
  • Formatted statement: the statement with all parameter placeholders replaced with the actual bind values

The first three lines correspond to adding statements to the batch, while the fourth line logs the actual batch execution (which also explains its execution time value).

One very useful configuration is the outagedetection property, which can detect long-running statements.
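
A sketch of enabling it in spy.properties (the threshold below is an arbitrary example, expressed in seconds):

# report statements that run longer than the configured threshold
outagedetection=true
# threshold, in seconds, after which a statement is logged as an outage
outagedetectioninterval=2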

If you enjoyed this article, I bet you are going to love my Book and Video Courses as well.

Video

Because this is a very interesting topic, I decided to record a video as well. Enjoy watching it!

Conclusion

Logging statements is a very important aspect of a data access layer, and using a JDBC statement interceptor framework can provide other cross-cutting functionalities as well, such as monitoring statement execution or even automatically detecting N+1 query problems.
