Things to consider before jumping to application-level caching

Introduction

Relational database transactions are ACID, and the strong consistency model simplifies application development. Because enabling Hibernate caching is just one configuration away, it's very appealing to turn to caching whenever the data access layer starts showing performance issues. Adding a caching layer can indeed improve application performance, but it comes at a price, and you need to be aware of it.
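As an illustration of how little configuration it takes (the exact property names depend on the Hibernate version and the cache provider you choose — Ehcache is just one common option), enabling the second-level cache can look like this:

```properties
# Turn on the Hibernate second-level cache
hibernate.cache.use_second_level_cache=true
# Plug in a cache provider (here: Ehcache)
hibernate.cache.region.factory_class=org.hibernate.cache.ehcache.EhCacheRegionFactory
```

That is all it takes to start caching entities marked as cacheable, which is exactly why it is so tempting to reach for it early.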

Database performance tuning

The database is the central part of any enterprise application, containing valuable business assets. A database server has limited resources, so it can serve only a finite number of connections. The shorter the database transactions, the more transactions can be accommodated. The first performance tuning step is therefore to reduce query execution times by indexing properly and optimizing queries.
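As a sketch of what "indexing properly" means, assume a hypothetical `orders` table that is frequently filtered by `customer_id` (both names are illustrative). Without an index, every such query scans the whole table; with one, the database can jump straight to the matching rows:

```sql
-- Inspect the execution plan first: a full table scan here signals a missing index
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Add an index on the filtered column so the lookup no longer scans every row
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
```

Shorter query times mean shorter transactions, which in turn means the same connection pool can serve more concurrent requests.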

When all queries and statements are optimized, we can either add more resources (scale up) or add more database nodes (scale out). Horizontal scaling requires database replication, which implies synchronizing nodes. Synchronous replication preserves strong consistency, while asynchronous master-slave replication leads to eventual consistency.

Analogous to database replication challenges, cache nodes induce data synchronization issues, especially for distributed enterprise applications.

Caching

Even if the database access patterns are properly optimized, higher loads might increase latency. To provide predictable and constant response times, we need to turn to caching. Caching allows us to reuse a database response for multiple user requests.
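The reuse pattern described above is commonly implemented as cache-aside: check the cache first, fall back to the database on a miss, and store the result for subsequent requests. The following is a minimal sketch of that idea — `CacheAside`, `loadUser` and the loader function are illustrative names, not part of any particular caching API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal cache-aside sketch: a cache miss triggers the database loader,
// and the result is stored so later requests skip the database entirely.
public class CacheAside {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();
    private int databaseHits = 0; // counts how often we actually hit the database

    public String loadUser(long id, Function<Long, String> databaseLoader) {
        return cache.computeIfAbsent(id, key -> {
            databaseHits++; // only incremented on a cache miss
            return databaseLoader.apply(key);
        });
    }

    public int getDatabaseHits() {
        return databaseHits;
    }

    public static void main(String[] args) {
        CacheAside users = new CacheAside();
        Function<Long, String> db = id -> "user-" + id; // stands in for a real query
        users.loadUser(1L, db);
        users.loadUser(1L, db); // second call is served from the cache
        System.out.println(users.getDatabaseHits()); // prints 1
    }
}
```

Ten user requests for the same entity translate into a single database round-trip, which is where the resource and network savings listed below come from.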

The cache can therefore:

  • reduce CPU/Memory/IO resource consumption on the database side
  • reduce network traffic between application nodes and the database tier
  • provide constant result fetch time, insensitive to traffic bursts
  • provide a read-only view when the application is in maintenance mode (e.g. when upgrading the database schema)

The downside of introducing a caching solution is that the same data is now duplicated in two separate technologies, which can easily get out of sync.

In the simplest use case you have one database server and one cache node:

[Diagram: a single database server with one application-level cache node]

The caching abstraction layer is aware of the database server, but the database knows nothing of the application-level cache. If some external process updates the database without touching the cache, the two data sources will get out of sync. Because few database servers support application-level notifications, the cache may break the strong consistency guarantees.

To avoid eventual consistency, both the database and the cache need to be enrolled in a distributed XA transaction, so the affected cache entries are either updated or invalidated synchronously.
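The synchronous update-or-invalidate idea can be sketched as follows. In a real system, the database update and the cache eviction would be enrolled in one XA transaction; here, two plain maps stand in for the two stores, and the class and method names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of synchronous cache invalidation on update: the cache entry is
// evicted in the same step as the database write, so the cache can never
// keep serving the old value once the update is visible.
public class SyncInvalidation {
    private final Map<Long, String> database = new ConcurrentHashMap<>();
    private final Map<Long, String> cache = new ConcurrentHashMap<>();

    public void updateUser(long id, String value) {
        database.put(id, value); // in reality: the DB write inside an XA transaction
        cache.remove(id);        // invalidate; the next read repopulates the entry
    }

    public String readUser(long id) {
        // Cache-aside read: on a miss, fall back to the database
        return cache.computeIfAbsent(id, database::get);
    }
}
```

The price of this approach is the cost and complexity of two-phase commit, which is why many systems settle for eventual consistency instead.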

Most often, there are more application nodes or multiple distinct applications (web-fronts, batch processors, schedulers) comprising the whole enterprise system:

[Diagram: multiple application nodes, each with its own cache node, sharing one database]

If each node has its own isolated cache node, we need to be aware of possible data synchronisation issues. If one node updates the database and its own cache without notifying the rest, then other cache nodes get out of sync.

In a distributed environment, when multiple applications or application nodes use caching, we need a distributed caching solution, where either:

  • cache nodes communicate in a peer-to-peer topology, or
  • cache nodes communicate in a client-server topology, and a central cache server takes care of data synchronization

[Diagram: application nodes sharing a distributed caching solution that synchronizes cache entries across nodes]
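The peer-to-peer variant can be sketched as follows: each node keeps a local cache and broadcasts evictions to its peers, so no node keeps serving a stale entry after another node writes. The `Node` class and its methods are illustrative, not a real caching API (real products such as Hazelcast or Infinispan handle membership, retries and partitioning for you):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Peer-to-peer invalidation sketch: a write on one node evicts the entry
// from every connected peer, instead of letting the peers drift out of sync.
public class Node {
    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final List<Node> peers = new CopyOnWriteArrayList<>();

    public void connect(Node peer) {
        peers.add(peer);
    }

    public void put(String key, String value) {
        localCache.put(key, value);
        for (Node peer : peers) {
            peer.evict(key); // notify peers so their stale copies disappear
        }
    }

    public void evict(String key) {
        localCache.remove(key);
    }

    public String get(String key) {
        return localCache.get(key); // null means a miss: reload from the database
    }
}
```

A `null` result on a peer after an eviction is the desired outcome: the peer falls back to the database and repopulates its cache with the fresh value.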

If you enjoyed this article, I bet you are going to love my book as well.

Conclusion

Caching is a fine scaling technique but you have to be aware of possible consistency issues. Taking into consideration your current project data integrity requirements, you need to design your application to take advantage of caching without compromising critical data.

Caching is not a simple cross-cutting concern; it leaks into your application architecture and requires a well-thought-out plan for compensating data integrity anomalies.



4 thoughts on “Things to consider before jumping to application-level caching”

  1. My observation is that before you consider using a cache at all, you should ask your customer: “Is it acceptable that data XYZ may sometimes be stale?”.
    If they respond with “NO” (as nearly all customers on my projects did), then you know caching is out (apart from read-only data, of course).

    1. You can always serve stale data even without caching: after you run a query, a concurrent transaction can update the rows you have already fetched and displayed in the UI.
      What’s really important is to detect that you are about to modify stale data, and that’s where optimistic locking comes to the rescue.

      Caching is not only meant for reducing response time. It can also come in handy when you have unforeseen traffic spikes, maybe due to a DDoS attack.
      With application-level caching in place, you can also switch to a read-only mode, like many renowned sites do: GitHub, StackOverflow.

      1. Sure, an application-level cache can be a great thing, but I do not really consider the Hibernate 2nd-level cache a typical app cache (as it is managed by Hibernate, not by your application, and based on Hibernate’s rules).
        I also think (though I do not know for sure) that caching on the sites you mention is done at a different level (closer to the application domain) than pure DB entity or query caches.

      2. The 2nd-level cache is not an application-level cache indeed, but many enterprise systems have multiple layers of caching.
        You can leverage an application-level cache that serves aggregates from Redis or Memcached and bypasses the Data Access Layer altogether.

        At the same time, an aggregate cache is much more difficult to keep in sync with the DB, and most of the time you need to invalidate a whole tree when just one inner node has changed.
        For this purpose, the 2nd-level cache can help, because it allows for a much finer granularity, being closer to database rows than to Domain Model entity hierarchies.

        So you can combine both caching layers.
