Cdb: Add support for cdb64

78 points, posted 6 hours ago
by kreco

22 Comments

wolfgang42

5 hours ago

CDB is an interesting format, optimized for read-heavy write-rarely[1] random lookups on slow media. This isn’t a very common requirement these days, but it’s convenient for very specific use cases.

[1] You “update” by overwriting the entire file. This is remarkably fast and means that there’s no overhead/tracking for empty space, but it does mean you probably want this to be a fairly rare operation.

I rolled my own cdb reader library for a project a few years ago, and wrote up my notes on the format and its internals here: https://search.feep.dev/blog/post/2022-12-03-cdb-file-format
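The "update by overwriting the entire file" model in the footnote above is usually paired with an atomic rename, so readers never see a half-written database. A minimal sketch of that pattern, assuming a caller-supplied `build` function that serializes the records (the names here are mine, not from any cdb library):

```python
import os
import tempfile

def rewrite_db(path, records, build):
    """'Update' a read-mostly database by rebuilding the whole file and
    atomically renaming it into place. Readers holding the old file
    descriptor keep a consistent snapshot; new opens see the new file.
    `build` is whatever serializer produces the database image."""
    data = build(records)
    # Create the temp file in the same directory so the rename stays
    # on one filesystem (os.replace is only atomic within a filesystem).
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())    # make sure the bytes hit disk first
        os.replace(tmp, path)       # atomic swap on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```

Because the old file is never modified in place, a crash mid-rebuild leaves the previous database intact, which is also why there is no free-space tracking to corrupt.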

a-dub

5 hours ago

GALACTIC SCALE QMAIL that can run efficiently on a 486 AND survive a supernova!

tptacek

5 hours ago

Haven't there been 64-bit ports of CDB for ages?

wolfgang42

5 hours ago

Yes, the modifications you need to support it are trivially obvious (literally just replace “4 bytes” with “8 bytes” everywhere in the spec) and have been implemented by a number of authors, some of which this page links to. I guess it’s nice that they’ve been “officially” acknowledged, though.

eesmith

an hour ago

And update the hash algorithm, yes?

tombert

4 hours ago

I'm kind of surprised I hadn't heard of this; I could see it being useful for a few projects. Historically for things in this space I've used RocksDB, but RocksDB has given me headaches with unpredictable memory usage for large data sets.

Bolwin

6 hours ago

Interesting, never heard of this before. I'm assuming the use case is when your data is too large to conveniently fit into memory?

tptacek

5 hours ago

It's a database for strictly exact-match lookups for very read-intensive workloads; think systems where the database only changes when the configuration changes, like email alias or domain lookups. It's very simple (a first-level hash table chaining to a second-level open-addressed hash table) and easy to get your head around, but also very limiting; an otherwise strict K-V system that uses b-trees instead of hash tables can do range queries, which you can build a lot of other stuff out of.

Most people would use Redis or SQLite today for what CDB was intended for; CDB will be faster, but for a lot of applications that speed improvement will be sub-threshold for users.
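The two-level structure described above can be sketched in a few dozen lines. This is a simplified in-memory version that follows the published cdb layout (a 256-entry pointer table, then the records, then per-bucket hash tables probed with open addressing, all little-endian 32-bit pairs) and the djb hash; the function names are mine, and a real implementation would stream from disk rather than hold the whole image in memory:

```python
import struct

def cdb_hash(key: bytes) -> int:
    # djb's cdb hash: h = ((h << 5) + h) ^ c, starting from 5381, mod 2^32
    h = 5381
    for c in key:
        h = (((h << 5) + h) ^ c) & 0xFFFFFFFF
    return h

def cdb_make(records):
    """Build a cdb-style image from (key, value) byte-string pairs."""
    out = bytearray(2048)                 # reserve the 256-entry pointer table
    buckets = [[] for _ in range(256)]
    for key, value in records:
        pos = len(out)
        out += struct.pack("<II", len(key), len(value)) + key + value
        h = cdb_hash(key)
        buckets[h & 0xFF].append((h, pos))
    header = b""
    for entries in buckets:
        nslots = 2 * len(entries)         # 50% load factor
        tpos = len(out)
        slots = [(0, 0)] * nslots
        for h, pos in entries:
            i = (h >> 8) % nslots
            while slots[i][1]:            # linear probing (open addressing)
                i = (i + 1) % nslots
            slots[i] = (h, pos)
        for h, pos in slots:
            out += struct.pack("<II", h, pos)
        header += struct.pack("<II", tpos, nslots)
    out[0:2048] = header
    return bytes(out)

def cdb_get(img, key):
    """Exact-match lookup: one first-level table read, then a short probe."""
    h = cdb_hash(key)
    tpos, nslots = struct.unpack_from("<II", img, (h & 0xFF) * 8)
    if nslots == 0:
        return None
    i = (h >> 8) % nslots
    for _ in range(nslots):
        sh, pos = struct.unpack_from("<II", img, tpos + i * 8)
        if pos == 0:                      # empty slot: key is absent
            return None
        if sh == h:
            klen, dlen = struct.unpack_from("<II", img, pos)
            if img[pos + 8:pos + 8 + klen] == key:
                return img[pos + 8 + klen:pos + 8 + klen + dlen]
        i = (i + 1) % nslots
    return None
```

Note what the structure buys you: a lookup touches the header, one hash table, and one record, so it is a small constant number of reads; what it cannot do is iterate keys in order, which is why the b-tree comparison above matters.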

kimos

4 hours ago

Great reply.

What comes to mind from my experience is storing full shipping rate tables for multiple shipping providers. Those change extremely rarely but are a high throughput exact lookup in a critical path (a checkout).

But we just implemented them in SQLite and deployed that file with the application. Simple, clean, effective, and fast. Maybe shipping rate data is smaller than this is intended for, but I doubt using this instead would see a consequential perf increase. Seems niche, like the domain name lookup example.

paws

5 hours ago

For me this answer was helpful and succinct, thank you.

dsr_

5 hours ago

It is a database for when you read a lot and don't write too often; when a write might be pretty big but not frequent; when you don't want to write a database engine yourself (i.e., figure out what to write and when). And, especially, when corrupting the data would be a big problem.

And it is especially good on copy-on-write filesystems, because it is CoW itself.

bloppe

5 hours ago

So it's not constant?

tptacek

5 hours ago

The lookups are ~O(1).

renewiltord

5 hours ago

Nothing is truly constant lookup in number of elements in nature because we can’t pack it tighter than a sphere.

waynesonfire

2 hours ago

cdb is a fun format to implement! highly recommend it.

binary132

4 hours ago

Now I’m curious about working around the writer limitations…

tptacek

4 hours ago

It's designed to rebuild the whole database with every write, and the format reflects that.