Spring request-level memoization

Introduction

Memoization is a method-level caching technique for speeding up consecutive invocations.

This post will demonstrate how you can achieve request-level repeatable reads for any data source, using Spring AOP only.

Spring Caching

Spring offers a very useful caching abstraction, allowing you to decouple the application logic from the caching implementation details.

Spring Caching uses an application-level scope, so for a request-only memoization we need to take a DIY approach.
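To illustrate the underlying idea, here is a minimal plain-Java sketch of a request-scoped memoizer (the class name and API are hypothetical, and it assumes the common thread-per-request servlet model, with `clear()` invoked from a filter when the request completes):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical per-request memoizer: one cache per thread, cleared when
// the request completes (e.g. from a servlet filter's finally block).
public class RequestMemoizer {

    private static final ThreadLocal<Map<String, Object>> CACHE =
            ThreadLocal.withInitial(HashMap::new);

    // Returns the cached value for this request, computing it on first access.
    @SuppressWarnings("unchecked")
    public static <T> T memoize(String key, Supplier<T> loader) {
        return (T) CACHE.get().computeIfAbsent(key, k -> loader.get());
    }

    // Must be called at the end of the request, to avoid leaking state
    // between pooled worker threads.
    public static void clear() {
        CACHE.remove();
    }
}
```

Within a single request, repeated calls with the same key return the first loaded value, which is what gives you repeatable reads regardless of the underlying data source.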

Continue reading “Spring request-level memoization”


Java Performance Workshop with Peter Lawrey

Peter Lawrey at IT Days

I’ve just come back from a Java Performance Workshop held by Peter Lawrey at Cluj-Napoca IT Days.

Peter Lawrey is a well-known Java StackOverflow user and the creator of the Java Chronicle open-source library.

Of Java and low latency

Little’s Law relates throughput to concurrency (the number of served requests) and latency:

\text{Throughput} = \dfrac{\text{Served Requests}}{\text{Latency}}

To increase throughput we can either:

  • increase server resources (scaling vertically or horizontally)
  • decrease latency (improve performance)

While enterprise applications are usually designed for scaling, trading systems focus on lowering latencies.

As surprising as it may sound, most trading systems I’ve heard of are actually written in Java. Java’s automatic memory management is a double-edged sword, since you gain simplicity at the expense of control.

A low-latency system’s worst nightmare is a stop-the-world pause, like a garbage collector’s major collection. So, to avoid such situations, you must go off-heap.
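Off-heap memory is invisible to the garbage collector, so it is never scanned or moved during a collection. A minimal sketch using the standard direct `ByteBuffer` API (libraries such as Java Chronicle build on the same principle, typically backed by memory-mapped files):

```java
import java.nio.ByteBuffer;

public class OffHeapExample {

    public static long roundTrip(long value) {
        // allocateDirect reserves native memory outside the Java heap,
        // so the GC never scans or relocates this buffer's contents.
        ByteBuffer buffer = ByteBuffer.allocateDirect(Long.BYTES);
        buffer.putLong(0, value);   // absolute write, position unchanged
        return buffer.getLong(0);   // absolute read
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42L)); // prints 42
    }
}
```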

Continue reading “Java Performance Workshop with Peter Lawrey”

The fastest way of drawing UML class diagrams

A picture is worth a thousand words

Understanding a software design proposal is so much easier once you can actually visualize it. While drawing diagrams takes extra effort, the small time investment pays off when others need less time to understand your proposal.

Software is a means, not a goal

We write software to support other people’s business requirements. Understanding the business goals is the first step towards coming up with an effective design proposal. After gathering input from your product owner, you should write down the business story. Writing it down makes you reason more about the business goal, and the product owner can validate your comprehension.

Once the business goals are clear, you need to move on to the technical challenges. A software design proposal is derived from both business and technical requirements. The required quality of service may pose challenges that are better addressed by a specific design pattern or software architecture.

Continue reading “The fastest way of drawing UML class diagrams”

The data knowledge stack

Concurrency is not for the faint-hearted

We all know concurrent programming is difficult to get right. That’s why threading tasks are followed by extensive design and code-review sessions.

You never assign concurrency issues to inexperienced developers. The problem space is carefully analyzed, a design emerges, and the solution is both documented and reviewed.

That’s how threading-related tasks are usually addressed. You will naturally choose a higher-level abstraction, since you don’t want to get tangled up in low-level details. That’s why the java.util.concurrent package is usually a better choice (unless you are building a High-Frequency Trading system) than hand-made producer/consumer Java 1.2-style thread-safe structures.
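To see the abstraction gap, a short sketch: with a java.util.concurrent BlockingQueue, the producer/consumer pattern needs no hand-written wait/notify choreography at all:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {

    public static int sumOf(int count) throws InterruptedException {
        // Bounded queue: put() blocks when full, take() blocks when empty,
        // replacing the wait/notify logic of a hand-made shared buffer.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(8);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= count; i++) {
                    queue.put(i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int sum = 0;
        for (int i = 0; i < count; i++) {
            sum += queue.take();
        }
        producer.join();
        return sum;
    }
}
```

All the synchronization lives inside the queue implementation, which is exactly the kind of low-level detail you don’t want to hand-roll.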

Is database programming any different?

In a database system, the data is spread across various structures (SQL tables or NoSQL collections) and multiple users may select/insert/update/delete whatever they choose to. From a concurrency point of view, this is a very challenging task and it’s not just the database system developer’s problem. It’s our problem as well.

A typical RDBMS data layer requires you to master various technologies and your solution is only as strong as your team’s weakest spot.

Continue reading “The data knowledge stack”

The simple scalability equation

Queuing Theory

Queueing theory allows us to predict queue lengths and waiting times, which is of paramount importance for capacity planning. For an architect, this is a very handy tool, since queues are not just the appanage of messaging systems.

To avoid overloading the system, we use throttling. Whenever the number of incoming requests surpasses the available resources, we basically have two options:

  • discarding all overflowing traffic, therefore decreasing availability
  • queuing requests and waiting (up to a timeout threshold) for busy resources to become available

This behavior applies to thread-per-request web servers, batch processors, and connection pools alike.
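Both options map directly onto a standard ThreadPoolExecutor: a bounded queue absorbs the overflow, and a rejection policy discards what the queue cannot hold. A sketch (the sizes are made-up, not a production configuration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThrottlingPool {

    public static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                2, 2,                                  // fixed number of worker threads
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(10),          // option 2: queue up to 10 requests
                new ThreadPoolExecutor.AbortPolicy()); // option 1: reject the overflow
    }
}
```

With AbortPolicy, a task submitted while all workers are busy and the queue is full fails fast with a RejectedExecutionException, trading availability for bounded latency.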

What’s in it for us?

Agner Krarup Erlang is the father of queuing theory and traffic engineering, being the first to postulate the mathematical models required for provisioning telecommunication networks.

Erlang’s formulas are modeled for M/M/k queues, meaning the system is characterized by:

  • arrivals following a Poisson process (exponentially distributed inter-arrival times)
  • exponentially distributed service times
  • k servers handling requests

The Erlang formulas give us the servicing probability for:

  • systems that discard overflowing requests (Erlang B)
  • systems that queue overflowing requests (Erlang C)

This is not strictly applicable to thread pools, as requests are not fairly serviced and servicing times do not always follow an exponential distribution.

A general-purpose formula, applicable to any stable system (one where the arrival rate does not exceed the departure rate), is Little’s Law.
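Little’s Law (L = λ × W) is trivially applied to capacity planning: the average number of requests in the system equals the arrival rate times the average time each request spends in it. A worked sketch with made-up numbers, sizing a connection pool:

```java
public class LittlesLaw {

    // L = lambda * W: average number of requests in the system equals
    // arrival rate times average time spent in the system.
    public static double concurrentRequests(double arrivalRatePerSecond,
                                            double averageLatencySeconds) {
        return arrivalRatePerSecond * averageLatencySeconds;
    }

    public static void main(String[] args) {
        // e.g. 100 requests/s, each holding a connection for 50 ms,
        // needs on average 5 concurrently held connections
        System.out.println(concurrentRequests(100, 0.05));
    }
}
```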

Continue reading “The simple scalability equation”