Nvidia releases 8B model with learned 8x KV cache compression

6 points, posted 10 hours ago
by alecco

3 Comments

vercaemert

8 hours ago

I'd be interested to hear what use cases people have for large contexts on an 8B model, other than sentiment analysis or summarization (this release implies agentic use). In my experience with agentic interactions, anything below 32B is unusable at any context greater than 4k tokens.
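For a sense of what 8x compression buys at long context, here is a back-of-envelope KV cache sizing sketch. The architecture numbers are assumptions (a typical GQA config for an 8B-class model, similar to Llama-3-8B), not figures from Nvidia's release:

```python
# Back-of-envelope KV cache sizing for a hypothetical 8B-class model.
# All architecture numbers are assumptions (typical GQA config),
# not taken from Nvidia's release.

def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128,
                   bytes_per_elem=2):  # fp16/bf16
    # Factor of 2 for the separate K and V tensors at every layer.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * seq_len

full = kv_cache_bytes(131_072)   # 128k-token context
compressed = full / 8            # claimed 8x compression

print(f"uncompressed:  {full / 2**30:.1f} GiB")        # 16.0 GiB
print(f"8x compressed: {compressed / 2**30:.1f} GiB")  # 2.0 GiB
```

Under these assumed dimensions, a 128k context drops from roughly 16 GiB of KV cache to about 2 GiB, which is the difference between not fitting and fitting comfortably alongside the ~16 GB of fp16 weights on a single 24 GB GPU.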
