YossarianFrPrez
4 days ago
What a terrible, awful tragedy!
A few months ago, OpenAI shared some data indicating that, out of roughly 700 million users, about 1 million people per week show signs of mental distress in their chats [1]. OpenAI is aware of the problem [2], but it isn't doing enough, and it shouldn't be hiding the data. (There is also a great NYT Magazine piece about a person who fell into AI psychosis [3].)
The links in other comments to Less Wrong posts attempting to dissuade people from thinking that they have "awoken their instance of ChatGPT into consciousness", or that they've made some breakthrough in "AI Alignment" without doing any real math (etc.) suggest that ChatGPT and other LLMs have a problem of reinforcing patterns of grandiose and narcissistic thinking. The problem is multiplied by the fact that it is all too easy for us (as a species) to collectively engage in motivated social cognition.
Bill Hicks had a line about how if you were high on drugs and thought you could fly, maybe try taking off from the ground rather than jumping out of a window. In that spirit, one could "simply" ask a different LLM to neutrally evaluate the conversation or conversational snippets; I've found Gemini useful for a second or even third opinion. But that only works if one is willing to be told that one is wrong. Unfortunately, people who are engaging in motivated social cognition (also called identity-protective cognition) and are convinced that they are having a divine revelation are not the kind of people who want to be correct, and so they are not open to feedback.
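For what it's worth, here's a rough sketch of what I mean by a second opinion (assuming the google-generativeai Python package; the model name, file name, and prompt wording are just illustrative):

    import google.generativeai as genai

    # Assumes an API key is available; the model name is illustrative.
    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-flash")

    # Paste or load the conversation you want a neutral read on.
    snippet = open("chatgpt_conversation.txt").read()

    prompt = (
        "You are a neutral third party. Evaluate the following conversation "
        "between a user and another chatbot. Point out, as plainly as "
        "possible, any claims that are ungrounded, flattering, or "
        "delusional:\n\n" + snippet
    )

    response = model.generate_content(prompt)
    print(response.text)

The value is that the second model sees the transcript cold, without the accumulated context that has been flattering you.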
[1] https://www.bmj.com/content/391/bmj.r2290.full
[2] https://openai.com/index/strengthening-chatgpt-responses-in-...
[3] https://www.nytimes.com/2025/08/08/technology/ai-chatbots-de...
JohnMakin
4 days ago
It's probably an artifact of how I use it (I turn off any kind of history or "remembering" of past conversations), but the first time I was really impressed by tools like claude/chatgpt/etc. was when I was chasing down some dumb idea I had for work, convinced I was right, and it finally, gently, told me I was wrong (in its own way). That is exactly what I want these things to do, but it seems like most users do not want to be told they are wrong, and the companies have little incentive to make these tools behave that way.
I have identified very few instances where something like chatGPT just randomly started praising me (outside of the whole "you're absolutely correct to push back on this" kind of thing). I guess leading questions probably have something to do with this.
Avamander
4 days ago
In a recent thread about Stack Overflow dying, some people theorized that the success of LLMs, and thus the decline of SO, could mostly be attributed to the sycophancy of LLMs.
I tend to agree more and more. People need to be told when their ideas are wrong, whether they like it or not.
StableAlkyne
4 days ago
There's also the communications aspect:
SO was/is a great site for getting information if (and only if) you properly phrase your question. Oftentimes, if you had an X/Y problem, you would quickly get corrected.
God help you if you had an X/Y Problem Problem. Or if English wasn't your first language.
I suspect LLMs' popularity is also boosted by those last two: an LLM will happily tell you the best way to do whatever cursed thing you're trying to do, without judging your English skills.
bsder
4 days ago
SO is dying simply because SO became garbage.
It became technically incorrect: you couldn't dislodge old, upvoted, but now-incorrect answers; fast-moving topics were answered by a bunch of useless people; and so on.
Combine this with the completely dysfunctional social dynamics and it's amazing SO has lasted as long as it has.
thephyber
4 days ago
The technically incorrect issue is downstream of their rigid policies.
Yes, answers that were accepted for Python 2 may require code changes to run on Python 3. Yes, APIs change and get deprecated over time.
One of the big issues is that accepted answers grow stale over time, similar to bitrot of the web. But also, SO is very strict about redirecting close copies of previously answered questions to one of the oldest copies of the question. This policy means that the question asker is frustrated when their question is closed and linked to an old answer, which may or may not answer their new question.
But the underlying issue is that SO search is the lifeblood of the app, but the UX is garbage. 100% of searches show a captcha when you are logged out. The keyword matching is tolerable, but not great. Sometimes Google dorking with `site:stackoverflow.com` is better than using SO search.
Ultimately, the UX of LLM chatbots is better than SO's. It's possible that SO could use a chatbot interface to replace their search and improve usability by 10x…
nurettin
3 days ago
SO is officially dead according to the graph of number of questions posted per month.
Google+SO was my LLM between 2007 and 2015. Then the site got saturated. All the questions were answered. Git, C#, Python, SQL, C++, Ruby, PHP: the most popular topics got "solved". The site reached singularity. That is when they should have frozen it as the encyclopedia of software.
Then duplicates, one-offs, and homework questions started to destroy it. I think society collectively got dumber and more entitled; the decline in the research and thought put into online questions is a good measure of this.
JohnMakin
4 days ago
> People need to be told when their ideas are wrong, whether they like it or not.
This is more of a societal problem than a technological one. I waffle on the degree of responsibility technology (especially privately owned technology) should have in trying to correct societal wrongs. There is definitely a line somewhere; I just don't pretend to know where it is. You can definitely go too far in either direction - look at social media for an example.
okayGravity
4 days ago
It all has to do with the specific framing words you use when prompting, especially with ChatGPT. If you use words that signal heavy skepticism (and I mean you really have to make the LLM know you're questioning), then it will push back to the extent you imply. If you look at the chats that were released from this incident, he phrased his prompts as assertions rather than questions (e.g. "She's doing this because of this!"), so ChatGPT roleplays and goes along with the delusion.
Most people will just talk to LLMs as if they were a person, even though LLMs don't handle complex social language and reasoning the way a person does. It's almost like robots aren't people!
baranul
3 days ago
Companies want the money and the continual engagement. People getting addicted to AI as a trusted advisor or friend is money in their pockets. Just like getting people addicted to gambling or alcohol, it's all big business.
It's becoming ever more apparent that there is a line between using AI as a tool to accomplish a task and relying on it excessively for psychological reasons.
DocTomoe
4 days ago
> A few months ago, OpenAI shared some data about how with 700 million users, 1 million people per week show signs of mental distress in their chats
Considering that the global prevalence of mental health issues in the population is one in seven [1], that would make OpenAI users about 100 times more 'sane' than the general population (1 million out of 700 million is roughly 0.14% per week, versus roughly 14% globally).
Either ChatGPT miraculously selects for an unusually healthy user base - or "showing signs of mental distress in chat logs" is not the same thing as being mentally ill, let alone harmed by the tool.
[1] https://www.who.int/news-room/fact-sheets/detail/mental-diso...
zahlman
4 days ago
Having a mental health issue is not at all the same thing as "showing signs of mental distress" in any particular "chat". Many forms of mental illness wouldn't normally show up in dialogue; even when one would, it doesn't necessarily show up all the time. And then there's the matter of actually detecting it in the transcript.
tehjoker
4 days ago
I don't know the full details, but 700M users and 1 million per week means up to 52M distinct people per year, though I imagine a lot of them show up in multiple weeks.
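Back-of-the-envelope, just to put bounds on it (my own arithmetic, assuming the 1M/week figure holds steady):

    weekly_flagged = 1_000_000
    total_users = 700_000_000

    # Upper bound: every week flags a completely different set of people.
    upper_bound_yearly = weekly_flagged * 52          # 52,000,000

    # Lower bound: the same people are flagged every single week.
    lower_bound_yearly = weekly_flagged               # 1,000,000

    print(upper_bound_yearly / total_users)  # ~0.074, i.e. ~7.4% of users
    print(lower_bound_yearly / total_users)  # ~0.0014, i.e. ~0.14% of users

So depending on the week-to-week overlap, the yearly figure is somewhere between ~0.14% and ~7.4% of the user base.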
DocTomoe
4 days ago
You also don't take into account that the userbase itself is shifting.
That being said: Those of us who grew up when the internet was still young remember alt.suicide.holiday, and when you could buy books explaining relatively painless methods on amazon. People are depressed. It's a result of the way we choose to live as a civilization. Some don't make the cut. We should start accepting that. In fact, forcing people to live on in a world that is unsuited for happiness might constitute cruel and unusual punishment.
GreenWatermelon
a day ago
Maybe, just maybe, we should fix the fucked up world we created instead? Shunning the modern culture of individualism would be a great first step, followed by promoting communal culture. Live exactly how we evolved to live for hundreds of thousands of years.
mmooss
3 days ago
> Because one could "simply" ask a different LLM to neutrally evaluate the conversation / conversational snippets.
The problem is using LLMs beyond a limited scope, which is generating ideas freely, not reliable reasoning or, goodness forbid, decision-making.
Maybe the right mental model for LLMs is a very good, sociopathic sophist or liar. They know a lot of 'facts', true or false, and can con you out of your car keys (or house, or job). Sometimes you catch them in a lie and their dishonesty becomes transparent. They have good ideas, though their usefulness only enhances their con jobs. (They also share everything you say with others.)
Would you rely on them for something of any importance? Simply ask a human.
gaigalas
4 days ago
Why do you think a breakthrough in AI Alignment should require doing math?
Many alignment problems are solved not by math formulas, but by insights into how to better prepare training data and validation steps.
YossarianFrPrez
4 days ago
Fair question. While I'm not an expert on AI Alignment, I'd be surprised if any AI alignment approach did not involve real math at some point, given that all machine learning algorithms are inherently mathematical-computational in nature.
Like I would imagine one has to know things like how various reward functions work, what happens in the modern variants of attention mechanisms, how different back-propagation strategies affect the overall result, etc., in order to come up with (and effectively leverage) reinforcement learning from human feedback.
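To give a concrete (if simplified) sense of the kind of math I mean (my own illustration, not from any of the linked posts): the reward model behind RLHF is typically trained on human preference pairs with a Bradley-Terry style loss, roughly:

    import math

    def preference_loss(r_chosen, r_rejected):
        # Bradley-Terry style loss used to train RLHF reward models:
        #   L = -log(sigmoid(r_chosen - r_rejected))
        # It pushes the reward model to score the human-preferred answer higher.
        margin = r_chosen - r_rejected
        return -math.log(1.0 / (1.0 + math.exp(-margin)))

    # Model already prefers the chosen answer by a wide margin: small loss.
    print(preference_loss(2.1, 0.3))   # ~0.15
    # Model prefers the rejected answer: large loss.
    print(preference_loss(0.3, 2.1))   # ~1.95

Even the "simple" parts of alignment work bottom out in this kind of math, which is hard to stumble into just by chatting.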
I did a little searching, here's a 2025 review I found by entering "AI Alignment" into Google Scholar, and it has at least one serious looking mathematical equation: https://dl.acm.org/doi/full/10.1145/3770749 (section 2.2). This being said, maybe you have examples of historical breakthroughs in AI Alignment that didn't involve doing / understanding the mathematical concepts I mentioned in the previous paragraph?
In the context of the above article, I think it's possible that some people talking to ChatGPT at a buzzword level end up thinking that alignment can be solved via, for example, "fractal recursion of human-in-the-loop validation sessions". It seems like a modern incarnation of people thinking they can trisect the angle: https://www.ufv.ca/media/faculty/gregschlitt/information/Wha...
DenisM
4 days ago
> maybe you have examples of historical breakthroughs in AI Alignment that didn't involve doing / understanding the mathematical concepts I mentioned in the previous paragraph?
Multi-agent systems appear to have strong potential. Will that work out? I don't know. But I can see the potential there.
gaigalas
4 days ago
> maybe you have examples of historical breakthroughs in AI Alignment
OpenAI confessions is a good example of largely non-mathematical insight:
https://arxiv.org/abs/2512.08093
I don't know, I think it's good stuff. Would you agree?
> I think it's possible that some people are talking to ChatGPT on a buzzword level
I never said this is not happening. This definitely happens.
What I said is very different. I'm saying that you don't need to be a mathematician to have good insights into novel ways of improving AI alignment.
You definitely need good epistemic intuition though.