cosine_tech
a month ago
I’ve run into similar issues, and I don’t think it’s just you. On the first question: I don’t think ChatGPT is necessarily getting “dumber,” but it does feel less predictable in longer or repeated use. A lot of the frustration seems to come from how context and memory are handled rather than from raw model capability.

On the follow-up: in my case, the bigger problem is that I discuss many unrelated topics in ChatGPT. Its memory/context system appears to have bugs: it sometimes carries formatting rules or constraints from other conversations into a new one, while forgetting the things I actually want it to remember.

Before ChatGPT introduced any form of cross-session memory, this problem didn’t really exist. Each chat was clearly a fresh start, and I knew exactly what context the model was using. Now the boundaries are fuzzier, and when that implicit memory misfires, it feels like “bugs” rather than simple mistakes.

So the issue may be less about hype collapse, and more about state and memory management becoming visible to users before it’s truly robust.
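To make the failure mode concrete, here is a purely illustrative toy sketch (not OpenAI’s actual design; all class and method names are made up) contrasting a shared cross-session memory store, where a saved rule leaks into every future chat, with strictly per-session context:

```python
# Toy model of two memory architectures. Hypothetical classes for
# illustration only -- this is NOT how ChatGPT is actually implemented.

class SharedMemoryAssistant:
    """All sessions read from one shared memory store, so a rule saved
    in one conversation 'bleeds' into every later conversation."""

    def __init__(self):
        self.memory = {}  # persists across all sessions

    def remember(self, key, rule):
        self.memory[key] = rule

    def chat(self, session_id, message):
        # Every saved constraint applies to every session, old or new.
        rules = list(self.memory.values())
        return f"[rules: {rules}] reply to {message!r}"


class IsolatedAssistant:
    """Each session keeps its own context: a new chat starts empty,
    with no carry-over from previous conversations."""

    def __init__(self):
        self.sessions = {}

    def chat(self, session_id, message):
        rules = self.sessions.setdefault(session_id, [])
        return f"[rules: {rules}] reply to {message!r}"


shared = SharedMemoryAssistant()
shared.remember("fmt", "always answer in bullet points")
# A brand-new session still inherits the old formatting rule:
print(shared.chat("new-session", "explain recursion"))

isolated = IsolatedAssistant()
# A fresh session starts with no inherited rules:
print(isolated.chat("new-session", "explain recursion"))
```

The point of the sketch is just that once memory is shared across sessions, the user can no longer tell by looking at a single chat which constraints are active, which matches the “fuzzier boundaries” feeling described above.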