LLM Neuroanatomy II: Modern LLM Hacking and Hints of a Universal Language?

7 points, posted 11 hours ago by zdw

2 Comments

vibe42

11 hours ago

Pretty sweet hack, as it's orthogonal to quantisation. And while it uses more compute, it doesn't require more VRAM.

Maybe in the future circuits will become modular and composable like models are today?