Ask HN: Relatively SoTA LLM Agents from Scratch?

3 points, posted a day ago
by solsane

Item id: 46236222

3 Comments

bjourne

13 hours ago

Read this article: https://dl.acm.org/doi/10.1145/3712285.3759827 The training algorithms themselves are relatively simple (base training, fine-tuning, RL); what is critical is the scale, i.e., the engineering infrastructure. The authors recommend a cluster of at least 128 GPUs and many petabytes of training data.
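The three stages the comment names (base training, fine-tuning, RL) can be sketched as a toy pipeline. This is purely illustrative: the function names, the frequency-count "model", and the reward values are all stand-ins invented here, not anything from the cited paper, which is about real training at cluster scale.

```python
# Toy sketch of the three-stage LLM training pipeline mentioned above.
# Everything here is an illustrative placeholder, not a real trainer.

def base_pretrain(corpus):
    # Stage 1: self-supervised training over a large unlabeled corpus.
    # Token frequency counts stand in for learned weights.
    model = {}
    for token in corpus:
        model[token] = model.get(token, 0) + 1
    return model

def fine_tune(model, instruction_pairs):
    # Stage 2: supervised fine-tuning on (prompt, response) pairs,
    # nudging the base model toward instruction-following behavior.
    for _, response in instruction_pairs:
        for token in response.split():
            model[token] = model.get(token, 0) + 5  # upweight demonstrated tokens
    return model

def rl_align(model, reward_fn, steps=10):
    # Stage 3: RL alignment (RLHF/PPO in practice): sample an output,
    # score it with a reward function, reinforce accordingly.
    for _ in range(steps):
        sample = max(model, key=model.get)   # greedy "generation"
        model[sample] += reward_fn(sample)   # reinforce by reward
    return model

corpus = "the quick brown fox the lazy dog the".split()
model = base_pretrain(corpus)
model = fine_tune(model, [("greet", "hello world")])
model = rl_align(model, reward_fn=lambda tok: 1 if tok != "the" else -1)
print(max(model, key=model.get))  # prints "hello"
```

The point of the sketch is only the shape of the pipeline: each stage takes the previous stage's model and reshapes it with a different signal (raw data, demonstrations, rewards). The comment's argument is that this control flow is the easy part, while running it across 128+ GPUs and petabytes of data is the hard part.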