This makes sense if you think of chatbots as "faster books," or books plus a TikTok algo. Every kind of BS ever written is already in the training data; no one culled out falsehoods or "bad data." And since the tool is trained to respond precisely to your prompts, the algo feeding you (truth or) BS is laser-focused and highly effective.
People have been saying this for a few months now:
First Murder-Suicide Case Associated with AI Psychosis (42 points, 3 months ago, 35 comments) https://news.ycombinator.com/item?id=45088651
OpenAI investor suspected to fall into ChatGPT-induced psychosis (51 points, 5 months ago, 14 comments) https://news.ycombinator.com/item?id=44598052
In Search of AI Psychosis (214 points, 4 months ago, 235 comments) https://news.ycombinator.com/item?id=45027072
Ask HN: What's Going on with AI Psychosis? (9 points, 5 months ago, 2 comments) https://news.ycombinator.com/item?id=44855226
People Are Being Involuntarily Committed After Spiraling into ChatGPT Psychosis (63 points, 6 months ago, 89 comments) https://news.ycombinator.com/item?id=44405464
Let's Talk About ChatGPT-Induced Spiritual Psychosis (95 points, 6 months ago, 89 comments) https://news.ycombinator.com/item?id=44285426
Our mental health is a sacrifice the people pushing AI are willing to make.
I think it's a chicken-and-egg situation. The atomisation of society is an engineered situation, and one result is that people turn to AI for company.
My 20 PRs per day will insulate me when society collapses
Hundreds of millions of users is a big ice cream scoop, and roughly 1% of people will experience a psychotic episode at some point. An overlap is hard to avoid.