Speedment ORM – deliberate enterprise caching

A small company with big dreams

I first heard of Speedment while watching a Hazelcast webinar about an RDBMS Change Data Capture approach for updating the in-memory data grid. Their products are very much in line with Greg Luck’s deliberate caching, and I can see many enterprise products benefiting from this paradigm shift.

We have the pleasure of talking to Per-Åke Minborg, who is the CTO and one of the founders of Speedment AB.

Hi, Per. Can you please describe Speedment’s goals?

Speedment brings a more modern way of handling data located in various data stores, such as SQL databases. We are using standard Java 8 streams for querying the data sources, so developers do not need to learn new, complex APIs or configure ORMs. That speeds up the development process, and Speedment speeds up the application itself as well. One problem with some of the existing products is that they actually make data access slower than JDBC, not faster. We think it should be the other way around and have developed a way to make applications run quicker with Speedment.
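
To make the “standard Java 8 streams” point concrete, here is a minimal, self-contained sketch over a plain in-memory list. The User class and data are illustrative, not Speedment’s generated code; the point is that the familiar filter/map/collect vocabulary is the same one a stream-based data API exposes over database tables.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamQueryExample {

    // Plain value class standing in for a generated entity; illustrative only.
    static final class User {
        final long id;
        final String name;
        User(long id, String name) { this.id = id; this.name = name; }
        String getName() { return name; }
    }

    public static void main(String[] args) {
        // An in-memory list stands in for a database-backed stream here.
        List<User> users = Arrays.asList(
                new User(1, "Bob"), new User(2, "Alice"), new User(3, "Bob"));

        // The same filter/map/collect pipeline applies whether the stream
        // comes from a collection or from a table-backed source.
        List<String> bobs = users.stream()
                .filter(u -> "Bob".equals(u.getName()))
                .map(User::getName)
                .collect(Collectors.toList());

        System.out.println(bobs); // prints [Bob, Bob]
    }
}
```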

How does it differ from existing data access technologies (JPA, jOOQ)?

JPA is a separate framework added on top of Java, and it is not always easy to understand how to use it in the best way. It does not take advantage of the new features that became available with Java 8. Also, in JPA you start by constructing your Java domain model and then translate it into a database model. Speedment and jOOQ do it the other way around: you start with your data model and then extract your Java domain model from that. Even though you can cache data with JPA, neither JPA nor jOOQ can provide data acceleration the way Speedment can. Furthermore, with Speedment you do not have to learn a new query API, because it relies on standard Java 8 streams.

Does Speedment shift to a deliberate caching architecture?

The enterprise version of Speedment contains “caching” functionality. The new thing with our “caching” is that you will be able to locate objects in the cache in many ways (i.e. using several keys or search criteria) and not only as a key/value store. The “cache” can also be shared concurrently with the application, so that cache operations complete in O(1) time. For example, if you are retrieving all users named “Bob”, you use a “view” of the user cache (i.e. the “name” view) and you will be able to obtain all the Bobs rapidly and in constant time, regardless of whether there was just one Bob or thousands of them. Speedment uses an advanced in-JVM-memory solution that can extend the JVM to hundreds of terabytes without garbage collection problems and enables a JVM that is bigger than the actual RAM size. This allows all your data to fit into the same JVM as the application. The Speedment cache is “hot”, so if the underlying database changes, the cache is updated accordingly using a reactive model.
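
The “view” idea can be sketched with plain JDK collections. This is not Speedment’s implementation, just an illustration of why a secondary index turns a by-name lookup into a constant-time operation instead of a scan:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch of a cache "view": a secondary index maintained next to
// the primary key/value map, so that looking up all users with a given name is
// a single hash probe rather than a scan over the whole cache.
public class UserCacheWithNameView {

    public static final class User {
        public final long id;
        public final String name;
        public User(long id, String name) { this.id = id; this.name = name; }
    }

    private final Map<Long, User> byId = new ConcurrentHashMap<>();            // primary key/value store
    private final Map<String, List<User>> byName = new ConcurrentHashMap<>();  // the "name" view

    public void put(User user) {
        byId.put(user.id, user);
        byName.computeIfAbsent(user.name, n -> new CopyOnWriteArrayList<>()).add(user);
    }

    public User findById(long id) {
        return byId.get(id);
    }

    // All the "Bobs" come back with one map lookup, whether there is one or thousands.
    public List<User> findByName(String name) {
        return byName.getOrDefault(name, Collections.emptyList());
    }
}
```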

How does it integrate with the Hazelcast in-memory data grid?

Hazelcast provides an add-on solution that holds all the cached elements in a distributed in-memory data grid. That way, you will be able to scale applications out indefinitely, into any number of terabytes. Speedment provides a hot-cache solution for Hazelcast that can be compared to Oracle’s GoldenGate HotCache.
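
For orientation, here is a minimal, plain Hazelcast sketch (assuming a Hazelcast 3.x dependency; this is not Speedment’s hot-cache integration): a distributed IMap holds entries that are partitioned across however many members join the cluster.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class HazelcastGridSketch {

    public static void main(String[] args) {
        // Start (or join) a Hazelcast cluster member; adding more members
        // scales the grid out horizontally.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed map whose entries are partitioned across the cluster.
        // Keeping such a map in sync with the underlying database is the part
        // the hot-cache add-on addresses; here we only show plain put/get.
        IMap<Long, String> userNames = hz.getMap("user-names");
        userNames.put(1L, "Bob");

        System.out.println(userNames.get(1L)); // prints Bob

        hz.shutdown();
    }
}
```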

Which parts of the whole platform are going to be included in the open-source version?

Speedment Open Source is a native Java 8 API with powerful developer features. In addition to this, there are add-ons available as enterprise features. The team will continuously evaluate community contributions and add features to the main branch. We want our users to be able to test, develop and deploy applications with Speedment Open Source, but we would also like to be able to offer them something more. If they want enterprise-grade functions such as really big data sets, high availability or clustered environments, these can all be added from the enterprise portfolio. You can go to our open source project on GitHub and follow the development there, and perhaps also join the community! The first official version, speedment-2.0.0, will be launched at the end of the summer and will contain many new and interesting Java 8 features.

Thanks, Per, for taking the time to give this interview. Let’s hope we hear more from you guys.

If you liked this article, you might want to subscribe to my newsletter too.


8 thoughts on “Speedment ORM – deliberate enterprise caching”

  1. For example, if you are retrieving all users named “Bob”, you use a “view” of the user cache (i.e. the “name” view) and you will be able to obtain all the Bobs rapidly and in constant time, regardless of whether there was just one Bob or thousands of them.

    Sounds like a materialized view. OTOH, all this seems to come at the cost of losing SQL as a language for querying. Would be interesting to see concrete Java Streams examples!

    1. The active rows are stored in-memory. It will use Hazelcast to distribute all data across multiple nodes, even using off-heap memory. This way you can use in-memory querying at the cost of losing strong consistency. I advocate for going to the DB when consistency is more important than low latency. In my opinion, it’s more like a platform than a framework.

      1. I understand the benefits, but imagine using SQL to query that off-heap memory across multiple nodes… SQL isn’t only good for RDBMS storage, you know 🙂

      2. That would be great, but a simple key/value structure is easier to distribute across multiple nodes. SQL across all nodes is something I hope Hazelcast will eventually support.

      3. Well, from a user perspective, I don’t really care how the data is distributed. E.g. if it’s a rowid->row key/value structure or a btree or whatever. I think that’s low-level storage stuff that I simply don’t want to worry about when querying (apart from when tuning, of course)

      4. I was actually going to ask whether Speedment exposes the same API as JINQ or even builds on top of it…

        Speaking of SQL on a K/V cache… This interesting article was written just recently: http://aakashjapi.com/caching-with-jooq-and-redis. They’re using jOOQ (almost SQL) and transforming the AST to intercept cacheable queries or invalidate the cache. Quite cunning. It won’t work well in all use cases, but quite well in theirs.
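
        A rough sketch of that caching idea, with a plain map standing in for Redis and the rendered SQL string as the cache key (this is not the linked article’s actual code):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: cache query results keyed by the rendered SQL, remember which
// tables each cached key touches, and drop every affected key whenever one
// of those tables is written to. A ConcurrentHashMap stands in for Redis.
public class SqlKeyedCache {

    private final Map<String, Object> resultsBySql = new ConcurrentHashMap<>();
    private final Map<String, Set<String>> keysByTable = new ConcurrentHashMap<>();

    public Object get(String sql) {
        return resultsBySql.get(sql);
    }

    public void put(String sql, Set<String> touchedTables, Object result) {
        resultsBySql.put(sql, result);
        for (String table : touchedTables) {
            keysByTable.computeIfAbsent(table, t -> ConcurrentHashMap.newKeySet()).add(sql);
        }
    }

    // Call this whenever an INSERT/UPDATE/DELETE hits the table.
    public void invalidate(String table) {
        Set<String> keys = keysByTable.remove(table);
        if (keys != null) {
            keys.forEach(resultsBySql::remove);
        }
    }
}
```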
