Professional connection pool sizing with FlexyPool


I previously wrote about the benefits of connection pooling and why monitoring it is of crucial importance. This post will demonstrate how FlexyPool can assist you in finding the right size for your connection pools.

Know your connection pool

The first step is to know your connection pool settings. My current application uses XA transactions, therefore we use Bitronix transaction manager, which comes with its own connection pooling solution.

According to the Bitronix connection pool documentation, we need to use the following settings:

  • minPoolSize: the initial connection pool size
  • maxPoolSize: the maximum size the connection pool can grow to
  • maxIdleTime: the maximum time a connection can remain idle before being destroyed
  • acquisitionTimeout: the maximum time a connection request can wait before throwing a timeout. The default value of 30s is way too much for our QoS
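The settings above map directly onto Bitronix's PoolingDataSource. Here is a minimal configuration sketch; the unique name, driver class, and idle time are placeholder assumptions, not values from this application:

```java
import bitronix.tm.resource.jdbc.PoolingDataSource;

// Sketch of a Bitronix XA pool configured with the settings discussed above.
// The class name and unique name are hypothetical placeholders.
PoolingDataSource dataSource = new PoolingDataSource();
dataSource.setClassName("org.postgresql.xa.PGXADataSource"); // assumption: PostgreSQL XA driver
dataSource.setUniqueName("batchProcessorDataSource");        // assumption: any unique name works
dataSource.setMinPoolSize(0);          // initial connection pool size
dataSource.setMaxPoolSize(1);          // maximum size the pool can grow to
dataSource.setMaxIdleTime(60);         // seconds a connection may stay idle before being destroyed
dataSource.setAcquisitionTimeout(1);   // seconds a connection request waits before timing out
dataSource.init();
```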

Configuring FlexyPool

FlexyPool comes with a default metrics implementation, built on top of Coda Hale Metrics, offering two reporting mechanisms: an SLF4J log reporter and a CSV file reporter.

An enterprise system should use a central monitoring tool, such as Ganglia or Graphite, and instructing FlexyPool to use a different reporting mechanism is fairly easy. Our example exports reports to CSV files, and this is how you can customize the default metrics settings.
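For reference, this is how a Coda Hale CSV reporter is typically wired up; FlexyPool's default metrics implementation configures one along these lines. The output directory and reporting interval below are illustrative assumptions:

```java
import com.codahale.metrics.CsvReporter;
import com.codahale.metrics.MetricRegistry;
import java.io.File;
import java.util.concurrent.TimeUnit;

// Sketch: a Coda Hale Metrics CsvReporter writing one CSV file per metric.
MetricRegistry registry = new MetricRegistry();
CsvReporter reporter = CsvReporter.forRegistry(registry)
        .convertRatesTo(TimeUnit.SECONDS)
        .convertDurationsTo(TimeUnit.MILLISECONDS)
        .build(new File("target/metrics"));   // assumption: output directory must exist
reporter.start(5, TimeUnit.SECONDS);          // assumption: 5s reporting interval
```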

Initial settings

We only have to set a large enough maxOverflow and retryAttempts and let FlexyPool find the equilibrium pool size.

Name               Value  Description
minPoolSize        0      The pool starts with an initial size of 0
maxPoolSize        1      The pool starts with a maximum size of 1
acquisitionTimeout 1      A connection request waits for 1s before giving up with a timeout exception
maxOverflow        9      The pool can grow up to 10 connections (initial maxPoolSize + maxOverflow)
retryAttempts      30     If the final maxPoolSize of 10 is reached and no connection is available, a request retries 30 times before giving up
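Wiring these settings into FlexyPool looks roughly like the following; the class names match the FlexyPool README, but treat this as a sketch rather than a drop-in configuration (the data source variable is assumed to be the Bitronix pool configured earlier):

```java
import bitronix.tm.resource.jdbc.PoolingDataSource;
import com.vladmihalcea.flexypool.FlexyPoolDataSource;
import com.vladmihalcea.flexypool.adaptor.BitronixPoolAdapter;
import com.vladmihalcea.flexypool.config.Configuration;
import com.vladmihalcea.flexypool.strategy.IncrementPoolOnTimeoutConnectionAcquiringStrategy;
import com.vladmihalcea.flexypool.strategy.RetryConnectionAcquiringStrategy;

// Sketch: decorate the Bitronix pool with FlexyPool's grow-on-timeout and
// retry strategies, using the maxOverflow/retryAttempts values from the table.
Configuration<PoolingDataSource> configuration =
    new Configuration.Builder<PoolingDataSource>(
        "batchProcessorDataSource", poolingDataSource, BitronixPoolAdapter.FACTORY)
    .build();

FlexyPoolDataSource<PoolingDataSource> flexyPoolDataSource =
    new FlexyPoolDataSource<PoolingDataSource>(configuration,
        new IncrementPoolOnTimeoutConnectionAcquiringStrategy.Factory(9),   // maxOverflow
        new RetryConnectionAcquiringStrategy.Factory(30));                  // retryAttempts
```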

Metrics time

Our application is a batch processor and we are going to let it process a large amount of data so we can gather the following metrics:

  1. concurrentConnectionsHistogram
  2. concurrentConnectionRequestsHistogram
  3. maxPoolSizeHistogram
  4. connectionAcquireMillis
  5. retryAttemptsHistogram
  6. overallConnectionAcquireMillis
  7. connectionLeaseMillis

After analysing the metrics, we can draw the following conclusions:

  • The max pool size should be 8.
  • For this max pool size, there are no retry attempts.
  • The connection acquisition time stabilized after the pool grew to its maximum size.
  • There is a peak connection lease time of 50s, causing the pool size to grow from 7 to 8. Lowering the time connections are held allows us to decrease the pool size as well.

If the database maximum connection count is 100, we can run at most 12 such applications concurrently (⌊100 / 8⌋ = 12).

Pushing the comfort zone

Let’s assume that instead of 12 we needed to run 19 such services. This means the pool size must be at most 5 (⌊100 / 19⌋ = 5). Lowering the pool size increases connection request contention and the probability of connection acquire retry attempts.
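The sizing arithmetic in both scenarios is plain integer division against the database connection cap:

```java
// With a database cap of 100 connections:
// - a pool of 8 per service allows floor(100 / 8) = 12 concurrent services
// - 19 services force each pool down to floor(100 / 19) = 5 connections
int dbMaxConnections = 100;

int maxServicesWithPoolOf8 = Math.floorDiv(dbMaxConnections, 8);   // 12
int maxPoolSizeFor19Services = Math.floorDiv(dbMaxConnections, 19); // 5
```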

We will change the maxOverflow to 4 this time, while keeping the other settings unchanged:

Name         Value  Description
maxOverflow  4      The pool can grow up to 5 connections (initial maxPoolSize + maxOverflow)

Metrics reloaded

These are the new metrics:

  1. concurrentConnectionsHistogram
  2. concurrentConnectionRequestsHistogram
  3. maxPoolSizeHistogram
  4. connectionAcquireMillis
  5. retryAttemptsHistogram
  6. overallConnectionAcquireMillis
  7. connectionLeaseMillis

Analysing the metrics, we can conclude that:

  • For the max pool size of 5, there are at most 3 retry attempts.
  • The overall connection acquisition time confirms the retry attempts.
  • The peak connection lease time was replicated, although it’s around 35s this time.

If you enjoyed this article, I bet you are going to love my book as well.


FlexyPool eases connection pool sizing while offering failover mechanisms for those unforeseen situations when the initial assumptions no longer hold.

Alerts may be triggered whenever the number of retries exceeds a certain threshold, allowing us to step in as soon as possible.
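Such an alert boils down to a threshold check on the retry metric. A minimal sketch, with hypothetical names (the threshold value and helper are illustrative, not part of FlexyPool's API):

```java
// Hypothetical helper: decide whether the retry count read from the
// retryAttemptsHistogram warrants paging someone.
static boolean shouldAlert(long retryAttempts, long retryAlertThreshold) {
    return retryAttempts > retryAlertThreshold;
}

// Usage sketch: with an assumed threshold of 3 retries,
// a reading of 4 would trigger the alert, a reading of 2 would not.
boolean alert = shouldAlert(4, 3);
```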

If you liked this article, you might want to subscribe to my newsletter too.


2 thoughts on “Professional connection pool sizing with FlexyPool”

  1. Hi Vlad, if I understand correctly, FlexyPool is your own implementation, a wrapper for connection pools (like SLF4J), and it can be used with all types of connection pooling APIs available in the market.
    I found this link when I was searching about a problem we have in our application in production. We are using MySQL and C3P0 for connection pooling, and we have been facing communication link failure exceptions often.

    1. Hi Loganathan,

      FlexyPool is a library that stands in front of a connection pool and adds metrics and fall-back strategies.

      Indeed, it resembles SLF4J, because it integrates the most commonly used standalone connection pooling solutions.

      There are large systems using FlexyPool in production, both to monitor connection usage and to provide fall-back mechanisms that buy you more time during traffic spikes.

      If you have any questions, don’t hesitate to ask me. I suggest using StackOverflow; just send me a message so I can answer it.

