(Hopping in here because the discussion is interesting... feel very free to ignore.)
Thanks for writing this up! It was a very interesting read about a part of networking that I don't get to seriously touch.
That said: I'm sure you guys have thought about this a lot and that I'm just missing something, but "why can't every proxy probe every [worker, not application]?" was exactly one of the questions I had while reading.
Having the workers be the source of truth about applications is a nicely resilient design, and brute-forcing the problem by having, say, 10k proxies each retrieve the state of 10k workers every second may not be obviously impossible? It's roughly on the order of sending/serving 10k DNS requests/s per worker. That's not trivial, but maybe not _that_ hard? (You've been working on modern Linux servers a lot more than I have, but I'm thinking of e.g. https://blog.cloudflare.com/how-to-receive-a-million-packets...)
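Roughly what I have in mind, as a toy sketch (the worker IDs and the probe body are placeholders I made up; a real probe would be a UDP or HTTP state fetch with a per-probe deadline and a concurrency cap):

```python
import asyncio

WORKERS = [f"worker-{i}" for i in range(10_000)]  # hypothetical worker IDs

async def probe(worker: str) -> bytes:
    # Stand-in for a real network fetch of ~1 KB of compressed
    # critical state per worker (my assumption, not a number from
    # the article).
    await asyncio.sleep(0)  # placeholder for actual network I/O
    return b"\x00" * 1024

async def poll_once() -> dict:
    # One round of fan-out: every proxy probes every worker.
    results = await asyncio.gather(*(probe(w) for w in WORKERS))
    return dict(zip(WORKERS, results))

state = asyncio.run(poll_once())
print(len(state))  # one snapshot per worker per round
```

The point isn't the code, just that a 10k-wide fan-out per second per proxy is within the realm of a single event loop plus a sane socket setup.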
I did notice the sentence about "saturating our uplinks", but assuming 1KB (= 8Kb) of compressed critical state per worker per second, you'd end up with a peak bandwidth demand of about 80 Mbps, both per worker (serving 10k proxies) and per proxy (pulling from 10k workers); that may not be obviously impossible either? (One could reduce _average_ bandwidth a lot by having the proxies mostly send some kind of "send changes since <...>" or "send all data unless its hash is <...>" query.)
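Making that arithmetic explicit (all of these numbers are my assumptions from above, not from the article):

```python
workers = 10_000      # workers each proxy polls per round (assumed)
proxies = 10_000      # proxies each worker must serve (assumed)
state_bytes = 1_000   # ~1 KB compressed critical state per worker (assumed)
rounds_per_s = 1      # one full refresh per second (assumed)

# Each proxy pulls state from every worker once per round:
per_proxy_bps = workers * state_bytes * 8 * rounds_per_s
# Symmetrically, each worker serves every proxy once per round:
per_worker_bps = proxies * state_bytes * 8 * rounds_per_s

print(per_proxy_bps / 1e6, "Mbps per proxy")    # 80.0 Mbps
print(per_worker_bps / 1e6, "Mbps per worker")  # 80.0 Mbps
```

So the peak is symmetric: ~80 Mbps each way, and delta queries would only ever lower the average, not this worst case.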
(Obviously, bruteforcing the routing table does not get you out of doing _something_ more clever than that to tell the proxies about new workers joining/leaving the pool, and probably a hundred other tasks that I'm missing; but, as you imply, not all tasks are equally timing-critical.)
The other question I had while reading was why you need one failure/replication domain (originally one global; soon, one per region). If you shard worker state over, say, 100 gossip (SWIM/Corrosion) instances, your proxies obviously do need to join every shard to build the global routing table - but bugs in replication per se should only take down 1/100th of your fleet, which would hit fewer customers (and, depending on the exact bug, may mean that customers with some redundancy and/or autoscaling stay up). This wouldn't have helped in your exact case - perfectly replicating something that takes down your proxies - but it might make a crash-stop of your consensus-ish protocol more tolerable?
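By "shard" I mean nothing fancier than a stable hash of the worker ID into one of N independent replication domains, along these lines (shard count and IDs are made up for illustration):

```python
import hashlib

SHARDS = 100  # number of independent gossip instances (assumed)

def shard_of(worker_id: str) -> int:
    # Stable assignment of each worker to exactly one replication
    # domain, independent of cluster membership.
    digest = hashlib.sha256(worker_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % SHARDS

# A replication bug that crash-stops one shard only loses the
# routing state for the workers mapped to it:
workers = [f"worker-{i}" for i in range(10_000)]
dead_shard = 17
lost = [w for w in workers if shard_of(w) == dead_shard]
print(f"{len(lost) / len(workers):.1%} of routing state affected")  # ~1%
```

The cost is that every proxy now maintains 100 memberships instead of one, which is exactly the "less convenient programming model" trade-off I mean below.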
Both of the questions above might lead to a less convenient programming model, which might be enough reason on its own to scupper them; an article isn't necessarily improved by discussing every possible alternative; and again, I'm sure you guys have thought about this a lot more than I did (and/or that I got a couple of things embarrassingly wrong). But, well, if you happen to be willing to entertain my questions, I would appreciate it!