Caltech Researchers Claim Compression of High-Fidelity AI Models

2 points, posted 8 hours ago
by jonbaer

3 Comments

grimm8080

8 hours ago

I need a subscription to read that. Though I don't get why 1-bit LLMs are smaller: if you go from 8 bits to 1 bit, you'll need 8 times more nodes, right?
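A minimal sketch of the arithmetic behind this question, under the usual assumption about weight quantization: the parameter count stays the same, and only the bits stored per weight shrink, so no extra nodes are needed. The model size and function below are illustrative, not from the article.

```python
# Illustrative sketch: quantization keeps the same number of weights
# but stores each one in fewer bits, so total storage shrinks.
def model_size_bytes(num_params: int, bits_per_param: int) -> int:
    """Storage for the weights alone, rounded up to whole bytes."""
    return (num_params * bits_per_param + 7) // 8

params = 7_000_000_000  # e.g. a 7B-parameter model (illustrative)
print(model_size_bytes(params, 8))  # 8-bit weights: 7,000,000,000 bytes
print(model_size_bytes(params, 1))  # 1-bit weights:   875,000,000 bytes
```

Same 7B parameters in both cases; only the per-weight precision changes, which is why the 1-bit version is 8× smaller rather than 8× larger.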