Hacker News
From 300KB to 69KB per Token: How LLM Architectures Solve the KV Cache Problem
(news.future-shock.ai)
3 points by future-shock-ai, posted 14 hours ago
No comments yet