duskwuff
3 days ago
Before you get too excited, keep two things in mind:
1) Using a single compression context for the whole stream means both the client and the server have to keep that context alive for as long as the connection is open. This can have a nontrivial memory cost, especially at high compression levels. (Don't set the compression window any larger than it needs to be!)
2) Using a single context also means you can't decompress one frame without having read the entire stream that led up to it. This rules out some useful optimizations when you're "fanning out" messages to many recipients: if each message is compressed individually, you can compress it once and send the same compressed bytes to every recipient.
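A rough sketch of that trade-off using Python's zlib (the `send` stub and `recipients` list are just placeholders, not anything from the article):

```python
import zlib

def send(recipient, payload):
    # stand-in for a real socket write
    print(recipient, len(payload), "bytes")

message = b'{"type": "tick", "price": 101.25}' * 10
recipients = ["alice", "bob", "carol"]

# Per-message (stateless) compression: compress once, fan the same bytes out.
shared_payload = zlib.compress(message)
for r in recipients:
    send(r, shared_payload)

# One streaming context per connection: better ratio across many messages,
# but each connection holds its own compressor state for its whole lifetime,
# and the output depends on that stream's history, so it can't be shared.
compressors = {r: zlib.compressobj(wbits=-15) for r in recipients}
for r in recipients:
    c = compressors[r]
    send(r, c.compress(message) + c.flush(zlib.Z_SYNC_FLUSH))
```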
adzm
3 days ago
The analogy to h264 in the original post is very relevant. You can fix some of the downsides by using the equivalent of keyframes, basically: periodically restart the compression context so decoding can pick up from those points. The context is still longer than a single message, but it can be broken up for recovery and the like.
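Something like this, with Python's zlib as a stand-in (the reset interval and message source are made up, just to show the shape of it):

```python
import zlib

RESET_INTERVAL = 100  # hypothetical "keyframe" spacing

def compress_stream(messages):
    compressor = None
    for i, msg in enumerate(messages):
        if i % RESET_INTERVAL == 0:
            # "keyframe": throw away the accumulated history and start a
            # fresh context, so decoding can begin at this boundary
            compressor = zlib.compressobj(wbits=-15)
        chunk = compressor.compress(msg) + compressor.flush(zlib.Z_SYNC_FLUSH)
        yield (i % RESET_INTERVAL == 0), chunk

# A late joiner (or a recovering reader) only needs to start at a True frame.
for is_keyframe, chunk in compress_stream([b"hello world"] * 5):
    print(is_keyframe, len(chunk))
```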
yellow_lead
3 days ago
> This may have a nontrivial memory cost, especially at high compression levels. (Don't set the compression window any larger than it needs to be!)
It sounds like these contexts should be cleared when they reach a certain memory limit, or maybe reset periodically, e.g. every N messages. Is there another way to manage the memory cost?
michaelt
3 days ago
LZ77 compression (a key part of gzip and zip compression) uses a 'sliding window': the compressor can tell the decompressor 'repeat the n bytes that appeared m bytes back in the output stream'. The most widely used implementation encodes m as a 15-bit integer, so the decompressor never needs to look more than 32,768 bytes back in its output.
Many compression standards include memory limits to guarantee compatibility, and the older the standard, the lower that limit is likely to be. If the standards didn't dictate this stuff, DVD sellers could release a DVD that needed a 4MB decompression window, and it'd fail to play on players that only had 2MB of memory. Setting a limit in the standard and sticking to it avoids that.
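To make the window-size point concrete with zlib: the wbits parameter sets the sliding-window size (2**wbits bytes of history), and the decompressor has to allow at least the window the compressor declared. The data here is just filler chosen so the only redundancy is a long-range match:

```python
import random
import zlib

# ~2 KiB of incompressible bytes, repeated once ~4 KiB later.
random.seed(0)
block = bytes(random.randrange(256) for _ in range(2048))
data = block + b"-" * 2048 + block

# 1 KiB window (wbits=10): the repeat is too far back to be referenced.
small = zlib.compressobj(level=9, wbits=10)
out_small = small.compress(data) + small.flush()

# 32 KiB window (wbits=15, the DEFLATE maximum): the repeat is found.
full = zlib.compressobj(level=9, wbits=15)
out_full = full.compress(data) + full.flush()

print(len(out_small), len(out_full))  # the bigger window wins here

# The decoder's window must be at least as large as the encoder declared;
# a decompressobj built with wbits=10 will reject the wbits=15 stream.
print(len(zlib.decompressobj(wbits=15).decompress(out_full)))
```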
treyd
3 days ago
That's a misunderstanding. Compression algorithms are typically designed with a tunable state-size parameter. The issue is that if you have a large transfer where one side might crash and resume, you need some way to persist the compression state so you can pick up where you left off.
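As an in-memory illustration with Python's zlib: copy() lets you checkpoint a compressor's state within a process. True crash-resume would need that state serialized to disk, which this API doesn't offer, so treat this as a sketch of the idea only:

```python
import zlib

compressor = zlib.compressobj(wbits=-15)
out = compressor.compress(b"first part of a large transfer, ")
out += compressor.flush(zlib.Z_SYNC_FLUSH)

# Snapshot the compressor's state; both objects share the history so far.
checkpoint = compressor.copy()

# Later, "resume" from the checkpoint as if nothing after it had happened.
resumed = out + checkpoint.compress(b"picked up where we left off")
resumed += checkpoint.flush(zlib.Z_SYNC_FLUSH)

print(zlib.decompressobj(wbits=-15).decompress(resumed))
```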