Anthropic admits they nerfed their Claude model in August

5 points, posted 13 hours ago
by tensorlibb

6 Comments

bitpush

13 hours ago

That's an uncharitable way of reading what Anthropic said. And they said it very plainly:

> To state it plainly: We never reduce model quality due to demand, time of day, or server load. The problems our users reported were due to infrastructure bugs alone.

Not sure why the Twitter account/person is willfully misrepresenting the facts.

unsupp0rted

12 hours ago

It would be an uncharitable way to put it if Anthropic had acknowledged the problem within, say, the first week of users complaining about it.

Nearly a month later, they've lost credibility.

tensorlibb

12 hours ago

They also never disclosed that they were load balancing across Nvidia GPUs and Amazon AWS Trainium inference chips until people complained.

coldtea

11 hours ago

They're not necessarily "misrepresenting the facts" so much as "correcting the half-truths in the official PR".

vunderba

11 hours ago

Complete anecdata, but the one thing that made it very obvious something had changed last month was the number of times Claude Code would fail to respect guidelines or blatantly ignore rules clearly laid out in the CLAUDE.md file. This problem was also occurring long before the context window had been filled to capacity.

It got to the point where I actually added a line to the top of the CLAUDE.md file as follows:

  To verify that you have read and understood these guidelines, please output the following text "CLAUDE.MD HAS BEEN ACKNOWLEDGED".

Just to get a sense of whether it was failing to ingest it.
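
If you want to check for it without eyeballing every session, here's a minimal sketch in Python that scans a saved session transcript for the acknowledgement string. The transcript path is purely hypothetical (Claude Code doesn't write this file by default, so pipe or copy the session output to wherever suits you):

  # Minimal sketch: check a saved Claude Code transcript for the sentinel
  # string added to the top of CLAUDE.md. The transcript path is an
  # assumption for illustration; adjust to wherever you capture output.
  import sys
  from pathlib import Path

  SENTINEL = "CLAUDE.MD HAS BEEN ACKNOWLEDGED"

  def was_acknowledged(transcript_path: str) -> bool:
      # Read the whole transcript and look for the acknowledgement line.
      text = Path(transcript_path).read_text(encoding="utf-8", errors="ignore")
      return SENTINEL in text

  if __name__ == "__main__":
      path = sys.argv[1] if len(sys.argv) > 1 else "session.log"  # hypothetical default
      print("acknowledged" if was_acknowledged(path)
            else "CLAUDE.md apparently not ingested")

Obviously this only tells you whether the instruction line was echoed back, not whether the rest of the guidelines were actually followed.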

tensorlibb

11 hours ago

Agreed, but 95% of users simply don't see this or know to use it.