Ask HN: Are past LLM models getting dumber?

4 points, posted 25 days ago
by hmate9

Item id: 46958617

5 Comments

muzani

22 days ago

I've been mostly using GPT-5 (the first) recently because it's better than 5.1 and 5.2. There's a clear difference. They're removing it next week; I'm canceling my subscription to ChatGPT.

This happens across the board. Sometimes to Anthropic. Almost every time with Gemini. They optimize for some use cases and cause problems for others. Some people want a sycophantic AI like GPT-4o.

Personally, I loved the early-patch GPT-5's ability to jailbreak itself. It was clearly a bug, but it was great at finding solutions and skirting the absolute edges of the guardrails. They got rid of it because someone used it to override safety guardrails and commit suicide.

ChatGPT has nearly a billion active users. As long as they keep trying to please all of them, win the next 1 billion, and avoid being banned by governments, they're going to keep doing this.

The fix is to have your own model that works for a particular problem and self-host it or something. I still have a pet Mistral NeMo.
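As a sketch of that self-hosting approach (my assumption, not necessarily what the commenter does): with Ollama you can pin a local Mistral NeMo behind your own model definition, so upstream provider changes can't silently alter its behavior. The model tag, parameters, and system prompt below are illustrative.

```
# Modelfile — pins a local Mistral NeMo build with fixed settings
FROM mistral-nemo
PARAMETER temperature 0.3
SYSTEM You are a terse assistant for one specific task.
```

Then `ollama create my-nemo -f Modelfile` registers it and `ollama run my-nemo` serves it locally; the weights never change unless you pull a new tag yourself.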

segmondy

24 days ago

No, you're just getting used to them.

rafiki6

24 days ago

No technical analysis, but all models experience drift eventually.