How does Hibernate guarantee application-level repeatable reads

Introduction

In my previous post I described how application-level transactions offer a suitable concurrency control mechanism for long conversations.

All entities are loaded within the context of a Hibernate Session, acting as a transactional write-behind cache.

A Hibernate persistence context can hold one and only one reference to a given entity. The first-level cache guarantees session-level repeatable reads.
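A minimal sketch of this guarantee (the Post entity and the sessionFactory variable are illustrative, not taken from the post):

    try (Session session = sessionFactory.openSession()) {
        Post first = session.get(Post.class, 1L);
        Post second = session.get(Post.class, 1L);

        // The second get() is served from the first-level cache: no extra
        // SELECT is issued and both variables hold the very same reference.
        assert first == second;
    }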

If the conversation spans multiple requests, we can have application-level repeatable reads. Long conversations are inherently stateful, so we can opt for detached objects or long persistence contexts. But application-level repeatable reads require an application-level concurrency control strategy, such as optimistic locking.
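As a hedged sketch, the version attribute that drives optimistic locking can be mapped like this (entity and field names are illustrative):

    @Entity
    public class Post {

        @Id
        private Long id;

        private String title;

        // Hibernate increments this value on every modification and adds a
        // "WHERE version = ?" predicate to the UPDATE statement; if a
        // concurrent transaction already bumped it, the update matches no
        // row and a stale-state exception is thrown.
        @Version
        private int version;
    }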

The catch

But this behavior may prove unexpected at times.

Continue reading “How does Hibernate guarantee application-level repeatable reads”


MongoDB Incremental Migration Scripts

Introduction

An incremental software development process requires an incremental database migration strategy.

I remember working on an enterprise application where the hibernate.hbm2ddl.auto property was the default data migration tool.

Updating the production environment required intensive preparation, and the migration scripts were only created on the spot. An unforeseen error could have led to production data corruption.

Incremental updates to the rescue

The incremental database update is a technical feature that needs to be addressed in the very first application development iterations.

We used to develop our own custom data migration implementations, but spending time writing and supporting such frameworks always works against your current project budget.

A project must be shipped with both the application code and all associated database schema/data update scripts. Using incremental migration scripts allows us to automate the deployment process and to take advantage of continuous delivery.

Nowadays, you don’t have to implement data migration tools yourself: Flyway does a better job than all our previous custom frameworks. All database schema and data changes have to be recorded in incremental update scripts following a well-defined naming convention.
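A minimal sketch of wiring Flyway programmatically, using its fluent configuration API (the JDBC URL, credentials, and script names are placeholders):

    Flyway flyway = Flyway.configure()
            .dataSource("jdbc:h2:mem:testdb", "sa", "")
            .locations("classpath:db/migration")
            .load();

    // Applies every pending script (e.g. V1__create_schema.sql,
    // V2__seed_data.sql) in version order and records each one in the
    // schema history table, so already-applied versions are skipped.
    flyway.migrate();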

An RDBMS migration plan addresses both schema and data changes. It’s always good to separate schema and data changes. Integration tests might use only the schema migration scripts, in conjunction with test-time data.

Flyway supports all major relational database systems, but for NoSQL (e.g. MongoDB) you need to look somewhere else.

Continue reading “MongoDB Incremental Migration Scripts”

Integration testing done right with Embedded MongoDB

Introduction

Unit testing requires isolating individual components from their dependencies. Dependencies are replaced with mocks, which simulate certain use cases. This way, we can validate the in-test component behavior across various external context scenarios.

Web components can be unit tested using mock business logic services. Services can be tested against mock data access repositories. But the data access layer is not a good candidate for unit testing because database statements need to be validated against an actual running database system.
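For instance, a service can be tested against a mock repository. A hedged Mockito sketch (the PostRepository, PostService, and Post types are illustrative stand-ins for your own components):

    PostRepository repository = Mockito.mock(PostRepository.class);
    Mockito.when(repository.findById(1L))
           .thenReturn(new Post(1L, "Optimistic locking"));

    // The service is exercised in complete isolation: its repository
    // dependency is a mock, so no database is involved.
    PostService service = new PostService(repository);
    assert "Optimistic locking".equals(service.getTitle(1L));
    Mockito.verify(repository).findById(1L);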

Integration testing database options

Ideally, our tests should run against a production-like database. But using a shared dedicated database server is not feasible, as we most likely have more than one developer running such integration test suites. To isolate concurrent test runs, each developer would require a dedicated database catalog. Adding a continuous integration tool makes matters worse, since more tests would have to run in parallel.

Lesson 1: We need a forked test-suite bound database

When a test suite runs, a database must be started and made available only to that particular test-suite instance. Basically, we have the following options (a sketch of the second approach follows the list):

  • An in-memory embedded database
  • A temporary spawned database process
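A hedged sketch of the second option, using the Flapdoodle embedded MongoDB library (the API names follow its older releases and may differ in newer ones):

    int port = Network.getFreeServerPort();
    MongodExecutable executable = MongodStarter.getDefaultInstance()
            .prepare(new MongodConfigBuilder()
                    .version(Version.Main.PRODUCTION)
                    .net(new Net(port, Network.localhostIsIPv6()))
                    .build());
    MongodProcess process = executable.start();

    // ... run the data access integration tests against localhost:port ...

    process.stop();
    executable.stop();

Spawning a real mongod process on a random free port gives production-like behavior while keeping each test-suite run fully isolated.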

Continue reading “Integration testing done right with Embedded MongoDB”

Logical vs physical clock optimistic locking

Introduction

In my previous post I demonstrated why optimistic locking is the only viable solution for application-level transactions. Optimistic locking requires a version column that can be represented as:

  • a physical clock (a timestamp value taken from the system clock)
  • a logical clock (an incrementing numeric value)

This article will demonstrate why logical clocks are better suited for optimistic locking mechanisms.
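As a minimal sketch of the two alternatives (field names are illustrative), JPA accepts either representation for the version attribute:

    // Physical clock: resolution-dependent and sensitive to system clock
    // adjustments (NTP corrections, leap seconds, VM pauses).
    @Version
    private java.sql.Timestamp lastModified;

    // Logical clock: a monotonically incrementing counter that is immune
    // to system time anomalies.
    @Version
    private long version;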

System time

The system time is provided by the operating system’s internal clocking algorithm. The programmable interval timer, driven by a 1.193182 MHz oscillator, periodically sends an interrupt signal at a programmed fraction of that frequency. The CPU handles the timer interrupt and increments a tick counter.

Both Unix and Windows record time as the number of ticks since a predefined absolute time reference (an epoch). The operating system clock resolution varies from 1 ms (Android) to 100 ns (Windows) and 1 ns (Unix).
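A small illustration of the two clock sources the JVM exposes (what differs is precision and purpose, not accuracy):

    long wallClockMillis = System.currentTimeMillis(); // epoch-based wall-clock time, millisecond precision
    long monotonicNanos  = System.nanoTime();          // monotonic counter, nanosecond precision, elapsed-time measurements only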

Continue reading “Logical vs physical clock optimistic locking”