A small company with big dreams
I first heard of Speedment while watching a Hazelcast webinar about an RDBMS Change Data Capture approach for updating the in-memory data grid.
In this article, we will have the pleasure of talking to Per-Åke Minborg, who is the CTO and one of the founders of Speedment AB.
Hi Per. Can you please describe Speedment's goals?
Speedment brings a more modern way of handling data located in various data stores, such as SQL databases. We use standard Java 8 streams for querying the data sources, so developers do not need to learn new, complex APIs or configure ORMs. Thus, we speed up the development process. Speedment also speeds up the application itself. One problem with some existing products is that they actually make data access SLOWER than plain JDBC, not faster. We think it should be the other way around and have developed a way to make applications run quicker with Speedment.
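To make the stream-based querying style concrete, here is a minimal sketch using only the standard `java.util.stream` API over an in-memory list. The `User` class and `namesOlderThan` method are hypothetical illustrations of the style, not Speedment's actual generated code, where the same `filter`/`map` pipeline would be translated toward the database.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamQueryDemo {

    // Hypothetical domain class for illustration; not Speedment's generated code.
    public static class User {
        private final String name;
        private final int age;
        public User(String name, int age) { this.name = name; this.age = age; }
        public String getName() { return name; }
        public int getAge() { return age; }
    }

    // A "query" expressed as plain Java 8 stream operations, roughly:
    // SELECT name FROM users WHERE age > minAge
    public static List<String> namesOlderThan(List<User> users, int minAge) {
        return users.stream()
                .filter(u -> u.getAge() > minAge) // the WHERE clause
                .map(User::getName)               // the SELECT column
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<User> users = Arrays.asList(
                new User("Bob", 34),
                new User("Alice", 29),
                new User("Bob", 52));
        System.out.println(namesOlderThan(users, 30)); // prints [Bob, Bob]
    }
}
```

Because the query is an ordinary stream pipeline, there is no separate query language to learn: any developer who knows Java 8 streams already knows the API.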
How does it differ from any existing data access technologies (JPA, jOOQ)?
JPA is a separate framework added on top of Java, and it is not always easy to understand how to use it in the best way. It does not take advantage of the new features that became available with Java 8. Also, with JPA you start by constructing your Java domain model and then translate it into a database domain. Speedment and jOOQ work the other way around: you start with your data model and then extract your Java domain model from it. Even though you can cache data with JPA, neither JPA nor jOOQ can provide data acceleration the way Speedment can. Furthermore, with Speedment you do not have to learn a new query API, because it relies on standard Java 8 streams.
Does Speedment shift toward a deliberate cache architecture?
The enterprise version of Speedment contains “caching” functionality. The new thing with our “caching” is that you are able to locate objects in the cache in many ways (i.e. using several keys or search criteria) and not only as a key/value store. The “cache” can also be shared concurrently with the application so that cache operations complete in O(1) time. For example, if you are retrieving all users named “Bob”, you use a “view” of the user cache (i.e. the “name” view) and you will obtain all the Bobs rapidly and in constant time, regardless of whether there is just one Bob or thousands of Bobs. Speedment uses an advanced in-JVM-memory solution that can extend the JVM to hundreds of terabytes without garbage collection problems and enables a JVM that is bigger than the actual RAM size. This allows all your data to fit into the same JVM as the application. The Speedment cache is “hot”, so if the underlying database changes, the cache is updated accordingly using a reactive model.
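The “name view” idea can be sketched with a plain hash index built via `Collectors.groupingBy`. This is a simplified assumption of how such a view might look, not Speedment's enterprise implementation, which maintains its views reactively as the database changes; the point is that retrieving all Bobs becomes a single hash lookup, independent of how many Bobs exist.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class NameViewDemo {

    // Hypothetical domain class for illustration only.
    public static class User {
        private final String name;
        public User(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // A simplified "view": an index keyed by name, so looking up every user
    // called "Bob" is one O(1) hash lookup rather than a scan of the cache.
    public static Map<String, List<User>> nameView(List<User> users) {
        return users.stream()
                .collect(Collectors.groupingBy(User::getName));
    }

    public static void main(String[] args) {
        List<User> users = Arrays.asList(
                new User("Bob"), new User("Alice"), new User("Bob"));
        Map<String, List<User>> byName = nameView(users);

        // Constant-time retrieval of all Bobs, however many there are.
        List<User> bobs = byName.getOrDefault("Bob", Collections.emptyList());
        System.out.println(bobs.size()); // prints 2
    }
}
```

In a real hot cache, this index would be updated whenever the underlying database changes, rather than rebuilt from a list.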
How does it integrate with the Hazelcast in-memory data grid?
Hazelcast provides an add-on solution to hold all the cached elements in a distributed in-memory data grid. That way, you will be able to scale out applications indefinitely, into any number of terabytes. Speedment provides a hot-cache solution for Hazelcast that can be compared to Oracle's GoldenGate HotCache.
Which parts of the whole platform are going to be included in the open-source version?
Speedment Open Source is a native Java 8 API with powerful developer features. In addition, there are add-ons available as enterprise features. The team will continuously evaluate community contributions and merge features into the main branch. We want our users to be able to test, develop, and deploy applications with Speedment Open Source, but we would also like to offer them something more. If they want enterprise-grade functions such as really big data sets, high availability, or clustered environments, these can all be added from the enterprise portfolio. You can go to our open-source project on GitHub, follow the development there, and perhaps also join the community! The first official version, speedment-2.0.0, will be launched at the end of the summer and will contain many new and interesting Java 8 features.
Thanks, Per, for taking the time to give this interview. Let’s hope we hear more from you guys.