Ask HN: Relatively SoTA LLM Agents from Scratch?

4 points, posted 2 months ago
by solsane

Item id: 46236222

3 Comments

bjourne

2 months ago

Read this article: https://dl.acm.org/doi/10.1145/3712285.3759827 The training algorithms themselves are relatively simple (base training, fine-tuning, RL), but the scale, i.e. the engineering infrastructure, is what's critical. The authors recommend a cluster of at least 128 GPUs and many petabytes of training data.