I mostly use Gemini, so I can't speak for Claude, but Gemini definitely has variable quality at different times, though I've never bothered to try to find a specific time-of-day pattern to it.
The most reliable time to see it fall apart is when Google makes a public announcement that is likely to cause a sudden influx of people using it.
And there are multiple levels of failure. First you start seeing iffy responses of obviously lower quality than usual, and then if things get really bad you start seeing outright errors: Gemini suddenly loses all of its context (even in a new chat), or just fails at the UI level by never finishing answers, etc.
The obvious likely reason is that under high load they engage in some kind of dynamic load balancing, falling back to lighter models or limiting the time/resources allotted to any particular prompt.
I suspect they might fall back transparently too. Opus 4.5 has been really reasonable lately, except right after it launched, and also around any service interruptions/problems reported on status.claude.ai. Once those issues resolve, for a few hours the results feel very "Sonnet" and it starts making a lot more mistakes. When that happens, I'll usually just pause Claude and prompt Codex and Gemini with the same issue to see what comes out of the black hole... then a bit later, Claude mysteriously regains its wits.
I just assume it went to the bar, got wasted, and needed time to sober up!
They don't ever fall back to cheaper models silently.
What Anthropic does do is inject a "long conversation reminder" that pokes the model to tell you to go to bed if you've been using it too long, which distracts it from actually answering.
Models do sometimes pick up associations with things like the day of the year, and might be lazier in some months than others.
Precisely. Once I point out that it's doing this, it seems to produce better results for a bit before going back to the same pattern.
I jokingly (and not so jokingly) thought that it was trained on data that made it think it should be tired at the end of the day.
But it is happening daily and at night.
I find it helps to tell it to take some stimulants
I didn't believe such conspiracy theories, until one day I noticed Sonnet 4.5 (which I had been using for weeks to great success) perform much worse, very visibly so. A few hours later, Opus 4.5 was released.
Now I don't know what to think.
It's possible they're using fallback models during peak load times (West Coast midday). I'd assume your traffic would be routed to an East Coast data center, though. But secretly routing traffic to a worse model is shady enough that I'd want some concrete numbers quantifying the degraded performance.
To be clear, the company has very directly denied doing this.
They did, yes, but should we trust them?
I clearly remember this problem happening in the past, despite their claims. I initially thought it was an elaborate hoax, but it turned out to be factually true in my case.
I've certainly noticed some variance from Opus. There are times it gets stuck and loops on dumb stuff that would have been frustrating from Sonnet 3.5, let alone something as good as Opus 4.5 when it's locked in. But it's not obviously correlated with time; I've hit those snags at odd hours and gotten great performance during peak times. It might just be somewhat variable, or a shitty context.
Now GPT-4.1 was another story last year: I remember cooking at 4am Pacific and feeling the whole thing slam to a halt as the US East Coast came online.
>> ...or a shitty context
This is my guess. Sometimes it churns through things without a care in the world, and other times it seems to be intentionally annoying, eating up the token quota without doing anything productive.
You kind of have to see which mode it's in before turning it loose unsupervised, and keep an eye on it just in case it decides to get stupid and/or lazy.
I've had the same suspicion for various providers. If I had the time and motivation I would put together a private benchmark that runs hourly and charts performance over time, something like the sketch below. If anyone wants to do that I'll upvote your Show HN :)
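A minimal sketch of the idea, assuming the official `anthropic` Python SDK; the model name, prompts, and substring-based scoring are placeholders to swap for whatever provider and tasks you actually care about:

    # hourly_bench.py -- log answer quality for a fixed prompt set over time.
    import csv, time
    from datetime import datetime, timezone

    import anthropic

    MODEL = "claude-opus-4-5"       # placeholder: pin whichever model you're testing
    PROMPTS = [                     # deterministic checks, not vibes
        ("What is 17 * 23?", "391"),
        ("What is the capital city of Australia?", "Canberra"),
    ]

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    while True:
        with open("bench.csv", "a", newline="") as f:
            writer = csv.writer(f)
            for prompt, expected in PROMPTS:
                t0 = time.monotonic()
                msg = client.messages.create(
                    model=MODEL,
                    max_tokens=256,
                    temperature=0,  # reduces (but doesn't eliminate) run-to-run noise
                    messages=[{"role": "user", "content": prompt}],
                )
                latency = time.monotonic() - t0
                text = "".join(b.text for b in msg.content if b.type == "text")
                writer.writerow([
                    datetime.now(timezone.utc).isoformat(),
                    prompt,
                    int(expected in text),  # 1 = passed the substring check
                    round(latency, 2),
                ])
        time.sleep(3600)            # or drop the loop and schedule it via cron

Leave it running for a couple of weeks, then plot pass rate and latency against hour of day: either the midday dip shows up in the chart or the theory dies.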
I had something similar with GPT: like clockwork, every day after about 1pm it started producing total garbage. Not sure if our account was being A/B tested, or if they routed us to some brutally quantized GPT, or even to a completely different model.
Banning paying users with no warning doesn't seem super ethical. Probably not unethical either, but I would not frame them as "the most ethical".
I'd say they're about as good as the average billion dollar American tech company when it comes to ethics.
Yep, I have long felt like I randomly get Sonnet results despite Opus billing. I try to work odd hours and notice better results.
Many people 'notice' it (on Reddit); I notice it too, but it is hard to prove. I tried the same prompt on the same code every 4 hours for 48 hours; the behaviour varied slightly each run, but it was not worse and didn't change much over time. But then I go back to working on my normal code, think "wtf is it doing now???", look at the clock, see that it's US daytime, and stop.
People put forward many theories for this; weaker-model routing seems the most popular, whether to a different model like Sonnet or Haiku, or to a lower-quantized Opus. Anthropic says none of it is happening.
Are you using the API or a subscription?
Simple: the model is tired after a long day of working, so it starts making mistakes. Give it some rest and it is ready to serve again.
It seems clear that, rather than throttling, Anthropic serves lower-quality versions of its models during peak usage to keep up with demand. They refuse to admit it, and it's hard to prove, but these threads consistently show up ~3 months after every single model release.