How does Hibernate guarantee application-level repeatable reads

Introduction In my previous post I described how application-level transactions offer a suitable concurrency control mechanism for long conversations. All entities are loaded within the context of a Hibernate Session, acting as a transactional write-behind cache. A Hibernate persistence context can hold one and only one reference to a given entity. The first-level cache guarantees session-level repeatable reads. If the conversation spans multiple requests, we can have application-level repeatable reads. Long conversations are inherently stateful, so we can opt for detached objects or long persistence contexts. But application-level repeatable reads… Read More
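A minimal sketch of that guarantee, assuming a hypothetical Post entity and an already bootstrapped SessionFactory: loading the same identifier twice within one Session returns the very same object reference, because the second lookup is served from the first-level cache rather than the database.

    import javax.persistence.Entity;
    import javax.persistence.Id;

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    @Entity
    class Post {
        @Id
        private Long id;
        private String title;
    }

    class SessionRepeatableReads {

        static void show(SessionFactory sessionFactory) {
            Session session = sessionFactory.openSession();
            try {
                // the first read hits the database and caches the entity
                Post first = (Post) session.get(Post.class, 1L);
                // the second read is resolved from the first-level cache
                Post second = (Post) session.get(Post.class, 1L);

                // the persistence context keeps a single reference per entity identifier,
                // so both lookups return the same object instance
                assert first == second;
            } finally {
                session.close();
            }
        }
    }

Within a single Session the identity check always holds; keeping that Session (or the detached objects) across requests is what extends the guarantee to the whole conversation.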

MongoDB Incremental Migration Scripts

Introduction An incremental software development process requires an incremental database migration strategy. I remember working on an enterprise application where hibernate.hbm2ddl.auto was the default data migration tool. Updating the production environment required intensive preparation, and the migration scripts were only created on the spot. An unforeseen error could have led to production data corruption. Incremental updates to the rescue The incremental database update is a technical feature that needs to be addressed in the very first application development iterations. We used to develop our own custom data migration implementations, spending time on writing/supporting… Read More
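As a rough illustration of the incremental approach (not necessarily the tooling used in the article), here is a sketch using the MongoDB Java sync driver; the bookstore database, the schemaVersion tracking collection, and the 001_add_status_field migration are made-up names. Each script is recorded once it has run, so re-executing the tool only applies the migrations that are still missing.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.MongoDatabase;
    import com.mongodb.client.model.Filters;
    import com.mongodb.client.model.Updates;

    import org.bson.Document;

    class IncrementalMigration {

        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoDatabase db = client.getDatabase("bookstore");
                MongoCollection<Document> versions = db.getCollection("schemaVersion");

                String migrationId = "001_add_status_field";
                // apply each migration exactly once, so the tool stays idempotent
                if (versions.countDocuments(Filters.eq("_id", migrationId)) == 0) {
                    // the migration itself: backfill a field on documents that lack it
                    db.getCollection("orders").updateMany(
                        Filters.exists("status", false),
                        Updates.set("status", "PENDING"));
                    versions.insertOne(new Document("_id", migrationId)
                        .append("appliedAt", new java.util.Date()));
                }
            }
        }
    }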

Integration testing done right with Embedded MongoDB

Introduction Unit testing requires isolating individual components from their dependencies. Dependencies are replaced with mocks, which simulate certain use cases. This way, we can validate the behavior of the component under test across various external context scenarios. Web components can be unit tested using mock business logic services. Services can be tested against mock data access repositories. But the data access layer is not a good candidate for unit testing because database statements need to be validated against an actual running database system. Integration testing database options Ideally, our tests should run against a production-like… Read More
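As a sketch of the mock-based unit testing described above (PostRepository and PostService are hypothetical components, and JUnit 4 with Mockito is assumed to be on the classpath), the service is exercised in complete isolation, which is exactly what cannot be done for the repository itself:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    interface PostRepository {
        String findTitleById(Long id);
    }

    class PostService {
        private final PostRepository repository;

        PostService(PostRepository repository) {
            this.repository = repository;
        }

        String getTitle(Long id) {
            return repository.findTitleById(id);
        }
    }

    public class PostServiceTest {

        @Test
        public void titleIsResolvedThroughTheRepository() {
            // the data access dependency is replaced with a mock
            PostRepository repository = mock(PostRepository.class);
            when(repository.findTitleById(1L)).thenReturn("Embedded MongoDB");

            PostService service = new PostService(repository);

            // the service behavior is validated without any running database
            assertEquals("Embedded MongoDB", service.getTitle(1L));
        }
    }

The repository, on the other hand, only proves anything when its statements run against a real MongoDB instance, which is where an embedded database comes in.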

Logical vs physical clock optimistic locking

Introduction In this article, I’m going to explain how logical and physical clock versioning strategies work and why you should prefer using logical clocks for concurrency control. Optimistic locking is a viable solution for preventing lost updates when running application-level transactions. Optimistic locking requires a version column that can be represented as either a physical clock (a timestamp value taken from the system clock) or a logical clock (an incrementing numeric value). This article will demonstrate why logical clocks are better suited for optimistic locking mechanisms. System time The system time is provided by… Read More
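A minimal sketch of a logical-clock version column, using the standard JPA @Version annotation on a hypothetical Product entity:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Product {

        @Id
        private Long id;

        private int quantity;

        // a logical clock: incremented on every update, so a concurrent
        // modification makes the stale UPDATE match zero rows and the
        // provider reports an optimistic locking failure
        @Version
        private int version;
    }

A timestamp column could play the same role, but the system clock is not guaranteed to be monotonic and has limited resolution, so two competing updates may end up carrying the same physical value; an incrementing numeric version avoids that ambiguity.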