apavlo
a day ago
> EloqKV brings significant innovations to database design
What is the novel part? I read your "Introduction to Data Substrate" blog article and the architecture you are describing sounds like NuoDB from the early 2010s. The only difference is that NuoDB scales out the in-memory cache by adding more of what they call "Transaction Engine" nodes whereas you are scaling up the "TxMap" node?
See also Viktor Leis' CIDR 2023 paper with the Great Phil Bernstein.
eloqdata
a day ago
Thank you, Andy! This is Jeff, CEO and Chief Architect of EloqData. It's a great honor for us to have THE Andy Pavlo join the discussion on our first HN submission.
If I remember correctly, NuoDB uses a shared cache with a cache-coherence protocol, whereas EloqKV uses a shared-nothing (partitioned) cache. The former gives local reads but must broadcast every write to all nodes; the latter needs no broadcast for writes but may incur remote reads. The tradeoff is evident, and we are actively exploring opportunities to strike a balance, e.g., using the shared-cache mode for frequently read, rarely written data items.
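To make the tradeoff concrete, here is a minimal toy sketch of the two designs. All names (`PartitionedCache`, `SharedCache`, `owner_of`) are illustrative, not EloqKV or NuoDB APIs, and the "cluster" is just a list of in-process dicts:

```python
NUM_NODES = 3

def owner_of(key: str) -> int:
    """In a partitioned (shared-nothing) cache, each key has exactly one owner node.
    A deterministic toy hash stands in for a real partitioning scheme."""
    return sum(key.encode()) % NUM_NODES

class PartitionedCache:
    """Shared-nothing: each write touches one owner node; reads may be remote."""
    def __init__(self):
        self.nodes = [dict() for _ in range(NUM_NODES)]

    def write(self, key, value):
        # No broadcast: only the owning partition is updated.
        self.nodes[owner_of(key)][key] = value

    def read(self, key, local_node: int):
        # Local hit only if this node owns the key; otherwise a remote hop.
        owner = owner_of(key)
        return self.nodes[owner].get(key), owner != local_node  # (value, was_remote)

class SharedCache:
    """Coherent shared cache: every node keeps a copy, so reads are always local."""
    def __init__(self):
        self.nodes = [dict() for _ in range(NUM_NODES)]

    def write(self, key, value):
        # Coherence: the write is broadcast to every node's copy.
        for replica in self.nodes:
            replica[key] = value

    def read(self, key, local_node: int):
        return self.nodes[local_node].get(key), False  # always a local read
```

The point of the sketch: `SharedCache.write` costs O(nodes) messages but every read is local, while `PartitionedCache.write` costs O(1) but a read from a non-owner node pays a network round-trip. A hybrid would pick the mode per data item based on its read/write ratio.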
We appreciate you pointing us to the CIDR paper. I had the pleasure of working with Phil for some time and fondly remember many discussions with him on various topics years ago. To address your question: yes, we've been trying to solve the research challenges presented in the CIDR paper. The devil is in the details. We've developed numerous new algorithms and invested significant engineering effort into the design and implementation of our products. The benefits are as follows:
- Optimality: We believe our overall design minimizes synchronous disk writes and network round-trips. For instance, when the design is reduced to a single node, its performance matches or exceeds that of single-node servers. As you might expect, a lot of innovation has gone into making distributed transactions as efficient as non-distributed ones, comparable to those in MySQL or PostgreSQL.
- Modularity: Our architecture allows us to easily replace the Parser/Compute layer and the Storage/Persistence layer with the best existing solutions. This means we can create new databases by leveraging existing parsers and compute engines from current database implementations to achieve API compatibility, as well as leveraging existing high-performance KV stores for the persistence layer. This lets us avoid reinventing the wheel and take advantage of decades of innovation in the database community.
- Scalability: The entire system operates without a single synchronization point, not even a global sequencer. We drew much inspiration from Hekaton and your TicToc paper. All four types of resources (CPU, memory, storage, logging) can be scaled independently, as we mentioned earlier. More importantly, they can scale dynamically to accommodate workload changes without service disruptions.
We look forward to sharing more technical details as we move out of stealth mode. I hope to continue this conversation with you in person in the near future.