Part 2, Chapter 16
Every new chapter of my book is released right after it is completed, so the reader doesn’t have to wait for the whole part to be finished to get access to new material.
Table of contents
This chapter explains how enterprise caching works, from database internal buffers, to application-level caching, and the second-level cache offered by Hibernate.
16.1 Caching flavors
16.2 Cache synchronization strategies
16.3 Database caching
16.4 Application-level caching
16.4.1 Entity aggregates
16.4.2 Distributed key/value stores
16.4.3 Cache synchronization patterns
16.4.4 Synchronous updates
16.4.5 Asynchronous updates
16.4.5.1 Change data capture
16.5 Second-level caching
16.5.1 Enabling the second-level cache
16.5.2 Entity cache loading flow
16.5.3 Entity cache entry
16.5.3.1 Entity reference cache store
16.5.4 Collection cache entry
16.5.5 Query cache entry
16.5.6 Cache concurrency strategies
16.5.6.1.1 Inserting READ_ONLY cache entries
16.5.6.1.2 Updating READ_ONLY cache entries
16.5.6.1.3 Deleting READ_ONLY cache entries
16.5.6.2.1 Inserting NONSTRICT_READ_WRITE cache entries
16.5.6.2.2 Updating NONSTRICT_READ_WRITE cache entries
16.5.6.2.3 Risk of inconsistencies
16.5.6.2.4 Deleting NONSTRICT_READ_WRITE cache entries
16.5.6.3.1 Inserting READ_WRITE cache entries
16.5.6.3.2 Updating READ_WRITE cache entries
16.5.6.3.3 Deleting READ_WRITE cache entries
16.5.6.3.4 Soft locking concurrency control
16.5.6.4.1 XA_Strict mode
16.5.6.4.2 XA mode
16.5.6.4.3 Inserting TRANSACTIONAL cache entries
16.5.6.4.4 Updating TRANSACTIONAL cache entries
16.5.6.4.5 Deleting TRANSACTIONAL cache entries
16.5.7 Query cache strategy
16.5.7.1 Table space query invalidation
16.5.7.2 Native SQL statement query invalidation
Caching is everywhere, and enterprise systems are no exception. Before jumping to an application-level cache, it’s important to know that most database systems are designed to make use of caching as much as possible. Some database systems come with their own shared buffers, whereas others rely on the underlying operating system for caching disk pages in memory.
Even after tuning the database, it’s common to add an application-level cache, like Redis or Memcached, to overcome the networking overhead and to level out traffic spikes.
These key-value stores can be distributed across several nodes, therefore providing increased availability and data sharding capabilities. One major advantage of storing entity aggregates in a key-value database is that the application can keep working in read-only mode even when the entire database cluster is down for maintenance.
The main downside of using an application-level cache is ensuring that the two separate sources of data do not drift apart. For this reason, there are several cache synchronization patterns: cache-aside, read-through, and write-through.
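To make the first of these patterns concrete, here is a minimal cache-aside sketch in plain Java. It is an illustration only, not code from the book: the `ConcurrentHashMap` stands in for a distributed store like Redis, and the `databaseLoader` function is a hypothetical placeholder for an actual database read.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache-aside sketch: the application code manages the cache explicitly.
// The Map stands in for a remote key-value store such as Redis, and the
// loader function is a hypothetical stand-in for a database query.
public class CacheAside {

    private final Map<Long, String> cache = new ConcurrentHashMap<>();
    private final Function<Long, String> databaseLoader;

    public CacheAside(Function<Long, String> databaseLoader) {
        this.databaseLoader = databaseLoader;
    }

    // Read path: check the cache first; on a miss, load the value
    // from the database and put it into the cache.
    public String find(Long id) {
        return cache.computeIfAbsent(id, databaseLoader);
    }

    // Write path: update the database first (omitted here), then
    // evict the cache entry so the next read fetches the fresh value.
    public void update(Long id, String newValue) {
        // ... persist newValue to the database ...
        cache.remove(id);
    }
}
```

The write path evicts rather than updates the cache entry; that choice avoids racing a concurrent read that might otherwise re-insert a stale value after the update.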
Being tightly coupled with Hibernate, the second-level cache can speed up reads without compromising data consistency. However, choosing the right cache concurrency strategy (READ_ONLY, NONSTRICT_READ_WRITE, READ_WRITE, TRANSACTIONAL) requires understanding the inner workings of the cache update policy. The entity query cache has its own rules, and because it employs an aggressive cache invalidation policy, it is only suitable for certain data access patterns.
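As a quick orientation, this is roughly what selecting a cache concurrency strategy looks like at the mapping level. The `Post` entity and its fields are illustrative only, not taken from the book; the annotations themselves (`@Cacheable`, `@Cache`, `CacheConcurrencyStrategy`) are standard JPA and Hibernate APIs.

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Hypothetical entity showing how a cache concurrency strategy is chosen.
// READ_WRITE uses soft locks to keep the cache and database consistent;
// READ_ONLY or NONSTRICT_READ_WRITE could be substituted in the @Cache
// annotation depending on how the data is accessed.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Post {

    @Id
    private Long id;

    private String title;

    // getters and setters omitted for brevity
}
```

The second-level cache must also be switched on in the persistence configuration, via the `hibernate.cache.use_second_level_cache` property and a provider-specific `hibernate.cache.region.factory_class` setting.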
At almost 60 pages, the Caching chapter is one of the largest chapters of this book, so enjoy reading High-Performance Java Persistence!