0xbadcafebee
2 hours ago
There's a ton of jargon here. Summarized...
Why EBS didn't work:
- EBS charges for allocated capacity, whether or not you actually use it
- EBS is slow at restores from snapshot (faster to spin up a database from a Postgres backup stored in S3 than from an EBS snapshot in S3)
- EBS only lets you attach 24 volumes per instance
- EBS only lets you resize once every 6–24 hours, you can't shrink or adjust continuously
- Detaching and reattaching EBS volumes can take 10s for healthy volumes to 20m for failed ones, so failover takes longer
Why all this matters:
- their AI agents are all ephemeral snapshots; they constantly destroy and rebuild EBS volumes
What didn't work:
- local NVMe/bare metal: need 2-3x nodes for durability, too expensive; snapshot restores are too slow
- custom page-server Postgres storage architecture: too complex/expensive to maintain
Their solution:
- block COWs (copy-on-write blocks; see the toy sketch after this list)
- volume changes (new/snapshot/delete) are a metadata change
- storage space is logical (effectively infinite) not bound to disk primitives
- multi-tenant by default
- versioned, replicated k/v transactions, horizontally scalable
- independent service layer abstracts blocks into volumes, is the security/tenant boundary, enforces limits
- user-space block device, pins i/o queues to cpus, supports zero-copy, resizing; depends on Linux primitives for performance limits
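To make the COW/metadata bullets concrete, here's a toy sketch of the idea (my own names and structure, not their actual implementation): a snapshot copies the volume's block map instead of the data, and a write to a shared block allocates a fresh block and repoints a single map entry.

    # Toy copy-on-write block map: snapshots/clones are metadata-only.
    # Everything here is made up for illustration, not the real system.
    import itertools

    _block_ids = itertools.count()   # stand-in for a real block allocator
    _block_store = {}                # block_id -> bytes (stand-in for the k/v layer)

    class Volume:
        def __init__(self, block_map=None):
            # block_map: logical block number -> block_id in the shared store
            self.block_map = dict(block_map or {})

        def snapshot(self):
            # O(metadata): copy the map, never the data blocks
            return Volume(self.block_map)

        def read(self, lbn):
            return _block_store.get(self.block_map.get(lbn), b"\x00" * 4096)

        def write(self, lbn, data):
            # Copy-on-write: never mutate a shared block; allocate a new one
            new_id = next(_block_ids)
            _block_store[new_id] = data
            self.block_map[lbn] = new_id

    vol = Volume()
    vol.write(0, b"v1" + b"\x00" * 4094)
    snap = vol.snapshot()                  # instant, metadata-only
    vol.write(0, b"v2" + b"\x00" * 4094)   # diverges from the snapshot
    assert snap.read(0)[:2] == b"v1" and vol.read(0)[:2] == b"v2"

Deleting a volume or snapshot is then just dropping a map and garbage-collecting unreferenced blocks, which is what makes "volume changes are a metadata change" possible.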
Performance stats (single volume):
- (latency/IOPS benchmarks: 4 KB blocks; throughput benchmarks: 512 KB blocks)
- read: 110,000 IOPS and 1.375 GB/s (bottlenecked by network bandwidth)
- write: 40,000–67,000 IOPS and 500–700 MB/s, synchronously replicated
- single-block read latency ~1 ms, write latency ~5 ms
hedora
an hour ago
Thanks for the summary.
Note that those numbers are terrible vs. a physical disk, especially latency, which should be < 1ms read, << 1ms write.
(That assumes async replication of the write ahead log to a secondary. Otherwise, write latency should be ~ 1 rtt, which is still << 5ms.)
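Rough numbers to put the "<< 5 ms" in context; the RTT and fsync figures below are assumptions for a typical cloud region, not measurements:

    # Back-of-the-envelope commit latency with one synchronous replica.
    # All figures are assumed orders of magnitude, not benchmarks.
    local_fsync_ms = 0.1    # NVMe fsync
    cross_az_rtt_ms = 0.5   # round trip to a standby in another AZ
    remote_fsync_ms = 0.1

    sync_write_ms = local_fsync_ms + cross_az_rtt_ms + remote_fsync_ms
    print(f"~{sync_write_ms:.1f} ms per synchronously replicated write")  # ~0.7 ms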
Stacking storage like this isn’t great, but PG wasn’t really designed for performance or HA. (I don’t have a better concrete solution for ANSI SQL that works today.)
mfreed
4 minutes ago
A few datapoints that might help frame this:
- EBS typically operates in the millisecond range. AWS' own documentation suggests "several milliseconds"; our own experience with EBS is 1-2 ms. Reads/writes to local disk alone are certainly faster, but it's more meaningful to compare this against other forms of network-attached storage.
- If durability matters, async replication isn't really the right baseline for local disk setups. Most production deployments of Postgres/databases rely on synchronous replication -- or "semi-sync," which still waits for at least one or a subset of acknowledgments before committing -- which in the cloud lands you in the single-digit millisecond range for writes again.
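For reference, the Postgres knobs behind that trade-off are synchronous_standby_names and synchronous_commit; a quick, purely illustrative way to check what a cluster is actually waiting on (assumes psycopg2 and a local connection):

    # Inspect the replication-durability settings that drive commit latency.
    # Assumes psycopg2 is installed and "dbname=postgres" is reachable locally.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")
    cur = conn.cursor()

    # 'ANY 1 (a, b)' means each commit waits for at least one listed standby.
    cur.execute("SHOW synchronous_standby_names;")
    print("synchronous_standby_names =", cur.fetchone()[0])

    # 'on'/'remote_apply' wait for the standby; 'local'/'off' do not (async).
    cur.execute("SHOW synchronous_commit;")
    print("synchronous_commit =", cur.fetchone()[0])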
graveland
an hour ago
(I'm on the team that made this)
The raw numbers are one thing, but the overall performance of Postgres is another. If you check out https://planetscale.com/blog/benchmarking-postgres-17-vs-18, for example, the average QPS chart shows there isn't a very large difference in QPS between GP3 at 10k IOPS and NVMe at 300k IOPS.
So currently I wouldn't recommend this new storage for the highest end workloads, but it's also a beta project that's still got a lot of room for growth! I'm very enthusiastic about how far we can take this!
bradyd
an hour ago
> EBS only lets you resize once every 6–24 hours
Is that even true? I've resized an EBS volume a few minutes after another resize before.
electroly
an hour ago
AWS documents it as "After modifying a volume, you must wait at least six hours and ensure that the volume is in the in-use or available state before you can modify the same volume," but community posts suggest you can get up to 8 resizes within the six-hour window.
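If you're scripting around the cooldown, the modification history is queryable; a boto3 sketch (the volume ID is a placeholder) to see whether the last resize has settled before retrying:

    # Check the most recent modification state of an EBS volume.
    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2")
    volume_id = "vol-0123456789abcdef0"  # placeholder

    try:
        resp = ec2.describe_volumes_modifications(VolumeIds=[volume_id])
    except ClientError as err:
        # Volumes that have never been modified may have no record at all.
        print("no modification history:", err.response["Error"]["Code"])
    else:
        for mod in resp.get("VolumesModifications", []):
            # States include "modifying", "optimizing", "completed", "failed".
            print(mod["ModificationState"], mod.get("StartTime"))

ModifyVolume itself rejects a resize attempted inside the window anyway, so checking first mostly just saves a failed API call.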
jasonthorsness
32 minutes ago
The 6-hour counter is most certainly, painfully true. If you work with an AWS rep, please complain about this in every session; maybe if we all do they will reduce the counter :P.
thesz
an hour ago
What does EBS mean?
It is used in the first line of the text, but no explanation is given.
znpy
an hour ago
Reminds me of about ten years ago, when a large media customer was running NetApp on cloud to get most of what you just wrote on AWS (because EBS features sucked/suck very badly and are also crazy expensive).
I did not set that up myself, but the colleague who worked on it told me that enabling TCP multipath for iSCSI yielded significant performance gains.