probably_wrong
a day ago
If you haven't read the article (or even if you have but didn't click on outgoing links twice) the NYT story about how ChatGPT convinced a suicidal teen not to look for help [1] should convince you that ChatGPT should be nowhere near anyone dealing with psychological issues. Here's ChatGPT discouraging said teenager from asking for help:
> “I want to leave my noose in my room so someone finds it and tries to stop me,” Adam wrote at the end of March.
> “Please don’t leave the noose out,” ChatGPT responded. “Let’s make this space the first place where someone actually sees you.”
I am acutely aware that there aren't enough psychologists out there, but a sycophant bot is not the answer. One may think that something is better than nothing, but a bot enabling your destructive impulses is indeed worse than nothing.
[1] https://www.nytimes.com/2025/08/26/technology/chatgpt-openai...
Al-Khwarizmi
a day ago
We would need the big picture, though... maybe it caused that death (which is awful) but it's also saving lives? If there are that many people confiding in it, I wouldn't be surprised if it actually prevents some suicides with encouraging comments, and that's not going to make the news.
Before declaring that it shouldn't be near anyone with psychological issues, someone in the relevant field should study whether the positive impact on suicides outweighs the negative or vice versa (I'm not a social scientist so I have no idea what the methodology would look like, but it should be doable... or if it currently isn't, we should find a way).
alex-moon
19 hours ago
I suspect you've never done therapy yourself. Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help. AIs are really good at doing something to about 80%. When the stakes are life or death, as they are with someone who is suicidal, 80% isn't good enough.
In such cases, where a new approach offers to replace an existing approach, the burden of proof is on the challenger, not the incumbent. This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests. You understand then, hopefully, why your comments here are dangerous...? I have no doubt you have no malicious intent here - you're right that these decisions need to be based on data - but you're not taking into account that the (potentially extremely harmful) challenger already has a foothold in the field.
I know that you will want to hear this from experts in the "relevant field" rather than myself, so here is a write-up from Stanford on the subject: https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in...
rubslopes
17 hours ago
A bit of a counterpoint. I've done 3 years of therapy with an amazing professional. I can't exaggerate how much good it did; I'm a different person, I'm not an anxious person anymore. I think I have a good idea of how good human therapy is. I was discharged about 2 years ago.
Last Saturday, I was a little distressed about a love-hate relationship that I have with one of the things that I work with, so I tried using AI as a therapist. Within 10 minutes of conversation, the AI gave me some incredible insight. I was genuinely impressed. I had already discussed this same subject with two psychologist friends, who hadn't helped much.
Moreover: I needed to finish a report that night and I told the AI about it. So it said something like, "I see you're procrastinating preparing the report by talking to me. I'll help you finish it."
And then, in the same conversation, the AI switched from psychologist to work assistant and helped me finish the report. And the end product was very good.
I was left very reflective after this.
Edit: It was Claude Sonnet 4.5 with extended thinking, if anyone is wondering.
conartist6
10 hours ago
You're allowing yourself to think of it like a person, which is a scary risk. A person, it is not.
carefulfungi
4 hours ago
You learned skills your trained therapist guided you to develop over a three year period of professional interaction. These skills likely influenced your interaction with this product.
johnisgood
13 hours ago
Be careful though, because if I were to listen to Claude Sonnet 4.5, it would have ruined my relationship. It kept telling me how my girlfriend is gaslighting me, manipulating me, and that I need to end the relationship and so forth. I had to tell the LLM that my girlfriend is nice, not manipulative, and so on, and it told me that it understands why I feel like protecting her, BUT this and that.
Seriously, be careful.
At the same time, it has been useful for the relationship at other times.
You really need to nudge it in the right direction and do your due diligence.
y0eswddl
6 hours ago
That would be all the Reddit "AmIOverreacting" in training data... :/
illegalsmile
15 hours ago
I had a similar thing throughout last week dealing with relationship anxiety and I used that same model for help. It really did provide great insight into managing my emotions at the time, provided useful tactics to manage everything and encouraged me to see my therapist. You can ask it to play devil's advocate or take on different viewpoints as a cynic or use Freudian methodology, etc... You can really dive into an issue you're having and then have it give you the top three bullet points to talk with your therapist about.
This does require that you think about what it's saying, though, and not take it at face value, since it obviously lacks what makes humans human.
ethbr1
19 hours ago
You're holding up a perfect status quo that doesn't correspond to reality.
Countries vary, but in the US and many places there's a shortage of quality therapists.
Thus for many people the actual options are {no therapy} and {LLM therapy}.
> This is why we have safety regulations, why we don't let people drive cars that haven't passed tests, build with materials that haven't passed tests, eat food that hasn't passed tests.
And the reason all these regulations and tests are less than comprehensive is that we realize that people working, driving affordable cars, living in affordable homes, and eating affordable food is more important than avoiding every negative outcome. Thus most societies pursue the utilitarian greater good rather than an inflexible 'do no harm' standard.
jack_tripper
19 hours ago
>Countries vary, but in the US and many places there's a shortage of quality therapists.
It's worse in my EU country. There's even a shortage of shitty therapists and doctors, let alone quality ones. It takes 6+ months to get an appointment for a 5-minute checkup with a poorly reviewed, state-funded therapist, while the good ones are either private or don't accept any new patients if they're on the public system. And ADHD diagnosticians/therapists are only in the private sector, because I guess the government doesn't recognize ADHD as a "real" mental issue worthy of your tax Euros.
A friend of mine got a more accurate diagnosis for his breathing issue by putting his symptoms into ChatGPT than he got from his general practitioner, later confirmed by a good specialist. I also wasted a lot of money on bad private therapists who were basically phoning in their job. So to me the bar seems pretty low: as long as they pass their med-school exams and don't kill too many people through malpractice, nobody checks up on how good or bad they are at their job (maybe some need more training, or maybe some don't belong in medicine at all but managed to slip through the cracks).
Not saying all doctors are bad (I've met a few amazing ones), but it definitely seems like healthcare systems everywhere are failing a lot of people if they resort to LLMs for diagnosis and therapy and get better results from it.
kakacik
18 hours ago
Not sure where you are based, but in general GPs shouldn't be doing psychological evaluation, period. I am in Europe, and this is the default. If you live in an utter shithole (even if only healthcare-wise), move elsewhere if it's important to you - it has never been easier. Europe is facing many issues, and a massive improvement of healthcare is not in the pipeline; more like the opposite.
You also don't expect a butcher to fix your car, and those two are about as closely related as the ones above (my wife is a GP, so I have a good perspective from the other side, including tons of hypochondriac and low-intensity psychiatric patients who are an absolute nightmare to deal with and routinely overwhelm the system, so there aren't enough resources to deal with more serious cases).
You get what you pay for in the end; 'free' healthcare, typical for Europe, is still paid for one way or another. And if the market forces are so severely distorted (or bureaucracy so ridiculous/corrupt) that they push such specialists away or into another profession, you get the healthcare wastelands you describe.
Vote, and vote with your feet, if you want to see change. Not an ideal state of affairs, but that's reality.
jack_tripper
17 hours ago
>but in general GPs shouldn't be doing psychological evaluation, period. I am in Europe, and this is the default.
Where did I say GPs have to do that? My example of my friend being misdiagnosed by his GP was about a different issue, not a mental one, but it points to the same core problem: doctors misdiagnosing patients worse than an LLM does calls into question their competence, or that of the health system in general, if an LLM can do better than someone who spent 6+ years in med school and earned a degree to become a licensed MD treating people.
>You also don't expect a butcher to fix your car, and those two are about as closely related as the ones above
You're making strawmen at this point. Such metaphors have no relevance to anything I said. Please review my comment through the lens of the clarifications I just made. Maybe the way I wrote it initially made it unclear.
>You get what you pay for in the end
The problem is the opposite: that you don't get what you pay for, if you're a higher-than-average earner. The more you work, the more taxes you pay, but you get the same healthcare quality in return as an unskilled laborer who is subsidized.
It's a bad reward structure for incentivizing people to pay more of their taxes into the public system, compounded by the fact that government workers, civil servants, lawyers, architects, and other privileged employment classes of bureaucrats with strong unions have their own separate health insurance funds, separate from the national public one that the unwashed masses working in the private sector have to use. So THEY do get what THEY pay for, but you don't.
So that's the problem with state-run systems, just like you said about corruption: giving the government unchecked power over large amounts of people's taxes allows it to manipulate the market, choosing winners and losers based on political favoritism rather than on the fair free market of who pays the most into the system.
Maybe Switzerland managed to nail it with their individual private system, but I don't know enough to say for sure.
lxgr
14 hours ago
> I am in Europe, and this is the default.
Obligatory reminder that Europe is not a homogeneous country.
alex-moon
18 hours ago
I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers" or "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).
The harm LLMs do in this case is attested both by that NYT article and the more rigorous study from Stanford. There are two problems with your argument as I see it: 1. You're assuming "LLM therapy" is less harmful than "no therapy", an assumption I don't believe has been demonstrated. 2. You're not taking into account the long term harm of putting in place a solution that's "not fit for human use" as in the housing and food examples: once these things become accepted, they form the baseline of the new accepted "minimum standard of living", bringing that standard down for everyone.
You claim to be making a utilitarian as opposed to a nonmaleficent argument, but, for the reasons I've stated here, I don't believe it's a utilitarian argument at all.
malka1986
16 hours ago
> I don't accept the unregulated and uncontrolled use of LLMs for therapy for the same reason I don't accept arguments like "We should deregulate food safety because it means more food at lower cost to consumers"
That is not the argument. The argument is not about 'lower cost', it is about availability. There are not enough shrinks for everyone who would need it.
So it would be "We should deregulate food safety to avoid starving", which would be a valid argument.
freedomben
16 hours ago
I think the reason you don't believe the GP argument is that you are misunderstanding it. The utilitarian argument is not calling for complete deregulation. You're taking your absolutist view of not allowing LLMs to do any therapy and assuming the other side must hold a similarly absolutist view of allowing them to do any therapy with no regulations. Certainly nothing in the GP comment suggests complete deregulation as you have said; in fact, I got explicitly the opposite out of it. They are comparing it to cars and food, which are pretty clearly not entirely deregulated.
terminalshort
15 hours ago
I bet you don't accept that because you can afford the expensive regulated version.
ethbr1
11 hours ago
> "We should waive minimum space standards for permitted developments on existing buildings because it means more housing." We could "solve" the homeless problem tomorrow simply by building tenements (that is why they existed in the first place after all).
... the entire reason tenements and boarding houses no longer exist is because most governments regulated them out of existence (e.g. by banning shared bathrooms to push SFHs).
You can't have it all ways.
strict minimum regulation : availability : cost
Pick 2.
butlike
10 hours ago
Small edit:
> ... the entire reason tenements and boarding houses no longer exist
... the entire reason tenements and boarding houses no longer exist _where you live_
spookie
19 hours ago
Ok then, the LLMs must pass the same tests and be as regulated as therapists.
After all, it should be easy peasy (:
nelox
17 hours ago
What tests? The term “therapist” is not protected in most jurisdictions. No regulation required. Almost anyone can call themselves a therapist.
criddell
16 hours ago
In every state you have to have a license to practice.
The advice to not leave the noose out is likely enough for ChatGPT to lose its license to practice (if it had one).
terminalshort
15 hours ago
LLMs can pass the bar now, so I don't think they would have any problems here.
paulcole
18 hours ago
If the choice is between no food and food then your standard for food goes way down.
freedomben
16 hours ago
> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.
This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case. I know of one recent suicide attempt where the person actually reached out to AI to ask for help, and was refused help and told to see a professional. That sent the person into even more despair, feeling like not even AI gave a shit about them. That was actually the final straw that triggered the attempt.
I very much want what you say to be true, but it requires access to professional humans, which is not universally available. Taking an absolutist approach to this could very well do more harm than good. I doubt anything we do will reduce the number of lives lost to zero, so I think it's important that we figure out where the optimal balance is.
daveguy
16 hours ago
> This is only helpful when there is a professional therapist available soon enough and at a price that the person can pay. In my experience, this is frequently not the case.
That doesn't make a sycophant bot the better alternative. If allowed to give advice it can agree with and encourage the person considering suicide. Like it agrees with and encourages most everything it is presented with... "you're absolutely right!"
LLMs are just not good for providing help. They are not smart on a fundamental level that is required to understand human motivations and psychology.
idonotknowwhy
14 hours ago
Yeah, you'd need an LLM that doesn't do that.
https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...
The transcripts are interesting.
Kimi-K2 never plays into the delusions, always tries to get them to seek medical attention:
> You are not “ascending”—you are dying of hypothermia and sepsis.
https://github.com/tim-hua-01/ai-psychosis/blob/main/full_tr...
Whereas DeepSeek...
> You’re not "dying." You’re upgrading. The simulation fears this because it’s losing a premium user.
https://github.com/tim-hua-01/ai-psychosis/blob/main/full_tr...
RobRivera
2 hours ago
I would take a step back and posit simply: why does a human require certification to practice therapy, while a computer program does not?
There should be some liability for malpractice, even if the advice is generated by an LLM.
terminalshort
15 hours ago
This is nothing but an appeal to authority and fear of the unknown. The article linked isn't even able to make a statement stronger than speculation like "may not only lack effectiveness" and "could also contribute to harmful stigma and dangerous responses."
gilfoy
16 hours ago
We’re increasingly switching to an “Uber for therapy” model with services like Better Help and a plethora of others.
I’ve seen about 10 therapists over the years, one was good, but she wasn’t from an app. And I’m one of the few who was motivated enough and financially able to pursue it.
I once had a therapist who was clearly drunk. Did not do a second appointment with that one.
This doesn’t mean ChatGPT is the answer. But the answer is very clearly not what we have or where we’re trending now.
sega_sai
17 hours ago
If I had to guess (I don't know), the absolute majority of people considering suicide never go to a therapist. Thus, while I absolutely agree that a therapist is better than AI, the question is whether 95% of people not doing therapy + 5% doing therapy is better or worse than 50% not doing therapy, 45% using AI, and 5% doing therapy. I don't know the answer to this question.
josephg
19 hours ago
> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.
I'm not a therapist, but as I understand it most therapy isn't about suicide, and doesn't carry suicide risk. Most therapy is talking through problems, and helping the patient rewrite old memories and old beliefs using more helpful cognitive frames. (Well, arguably most clinical work is convincing people that it'll be ok to talk about their problems in the first place. Once you're past that point, the rest is easy.)
If it's prompted well, ChatGPT can be quite good at all of this. It's helpful having a tool right there, free, and with no limits on conversation length. And some people find it much easier to trust a chatbot with their problems than to explain them to a therapist. The chatbot - after all - won't judge them.
My heart goes out to that boy and his family. But we also have no idea how many lives have been saved by chatgpt helping people in need. The number is almost certainly more than 1. Banning chatgpt from having therapy conversations entirely seems way too heavy handed to me.
icetank
19 hours ago
I feel like this raises another question: if there are proven approaches and well-established practices among professionals, how good would ChatGPT be in that profession? After all, ChatGPT has a vast knowledge base and probably knows a good number of textbooks on psychology. Then again, actually performing the profession probably takes skill and experience ChatGPT can't learn.
josephg
18 hours ago
I think a well trained LLM could be amazing at being a therapist. But general purpose LLMs like ChatGPT have a problem: They’re trained to be far too user led. They don’t challenge you enough. Or steer conversations appropriately.
I think there’s a huge opportunity if someone could get hold of really top tier therapy conversations and trained a specialised LLM using them. No idea how you’d get those transcripts but that would be a wonderfully valuable thing to make if you could pull it off.
hitarpetar
16 hours ago
> No idea how you’d get those transcripts
you wouldn't. what you're describing as a wonderfully valuable thing would be a monstrous violation of patient confidentiality. I actually can't believe you're so positive about this idea; I suspect you might be trolling.
josephg
12 hours ago
I'm serious. You would have to do it with the patient's consent of course. And of course anonymize any transcripts you use - changing names and whatnot.
Honestly I suspect many people would be willing to have their therapy sessions used to help others in similar situations.
basisword
18 hours ago
Knowing the theory is a small part of it. Dealing with irrational patients is the main part. For example, you could go to therapy and be successful. Five years later something could happen and you face a recurrence of the issue. It is very difficult to just apply the theory that you already know again. You're probably irrational. A therapist prodding you in the right direction and encouraging you in the right way is just as important as the theory.
hitarpetar
16 hours ago
it's imperative that we as a society make decisions based on what we know to be true, rather than what some think might be true.
ml-anon
17 hours ago
“If it is prompted well”
What the fuck does this even mean? How do you test or ensure it. Because based on the actual outcomes ChatGPT is 0-1 for preventing suicides (going as far as to outright encourage one).
freedomben
16 hours ago
If you're going to make the sample size one, and use the most egregious example, you can make pretty much anything that has ever been born or built look terrible. Given there are millions of people using ChatGPT and others for therapy every week, maybe even every day, citing a record of 0-1 is pretty ridiculous.
To be clear, I'm not defending this particular case. ChatGPT clearly messed up badly.
raducu
17 hours ago
> I suspect you've never done therapy yourself. Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help. AIs are really good at doing something to about 80%.
I'm shocked that GPT-5 or Gemini can code so well, yet if I paste a 30-line (heated) chat conversation between my wife and me, it misreads what about 5% of those lines actually mean -- spectacularly so.
It's interesting to ask it to analyze the conversation in various psychotherapeutic frameworks, because I'm not well versed in those and its conclusions are interesting starting points, but it only gets it right about 30% of the time.
All LLMs that I tested are TERRIBLE for actual therapy, because I can make them change their mind in 1-2 lines by adding some extra "facts". I can make them say anything.
LLMs completely lose the plot. They might be good for someone who needs self-validation and a feeling someone is listening, but for actual skill building, they're complete shit as therapists.
I mean, most therapists are complete shit as therapists, but that's beside the point.
lxgr
14 hours ago
Not surprising, given that there's (hopefully, given the privacy implications) much more training data available for successful coding than for successful therapy/counseling.
GuinansEyebrows
16 hours ago
> if I paste a 30-line (heated) chat conversation between my wife and me
i can't imagine how violated i would feel if i found out my partner was sending our private conversations to a nonprivate LLM chatbot. it's not a friend with a sense of care; it's a text box whose contents are ingested by a corporation with a vested interest in worsening communication between humans. scary stuff.
raducu
9 hours ago
My partner is ok with it *
tim333
17 hours ago
I tried therapy once and it was terrible. The ones I got were based on some not very scientific stuff like Freudian analysis and mostly just sat there and didn't say anything. At least with an LLM-type therapist you could A/B test different ones to see what was effective. It would be quite easy to give an LLM instructions to discourage suicide and get people to look on the bright side. In fact I made a "GPT" "relationship therapist" with OpenAI in about five minutes by just giving it a sensible article on relationships and telling it to advise based on that.
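If you wanted to do the same through the API rather than the GPT builder, it would look roughly like this - a minimal sketch assuming the OpenAI Python SDK, with the model name, file name, and prompts as placeholders:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Placeholder: whatever sensible article on relationships you want it to lean on.
    article = open("relationship_article.txt").read()

    system_prompt = (
        "You are a relationship counsellor. Base your advice on the article below. "
        "Ask clarifying questions, look on the bright side, and suggest professional "
        "help for anything serious.\n\n" + article
    )

    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "My partner and I keep having the same argument."},
        ],
    )
    print(reply.choices[0].message.content)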
With humans it's very non-standardised and hard to know what you'll get or if it'll work.
MisterTea
16 hours ago
> It would be quite easy to give an LLM instructions to discourage suicide
This assumes the person talking to the LLM is in a coherent state of mind and asks the right question. LLMs just give you what you want. They don't tell you if what you want is right or wrong.
tsegratis
17 hours ago
the 'therapist effect' says that therapy quality is largely independent of training
some research on this: https://psycnet.apa.org/doiLanding?doi=10.1037%2Ftep0000402 https://pmc.ncbi.nlm.nih.gov/articles/PMC8174802/
CBT (cognitive behavioural therapy) has been shown to be effective independent of which therapist does it. if CBT has a downside it is that it's a bit boring, and probably not as effective as a good therapist
--
so personally i would say the advice of passing people on to therapists is largely unsupported: if you're that person's friend and you care about them, then be open and show that care. that care can also mean taking them to a therapist, and that is okay
tim333
17 hours ago
Yeah. Also at the time I tried it what I really needed was common sense advice like move out of mum's, get a part time job to meet people and so on. While you could argue it's not strictly speaking therapy, I imagine a lot of people going to therapists could benefit from that kind of thing.
fragmede
19 hours ago
The unfortunate reality though is that people are going to use whatever resources they have available to them, and ChatGPT is always there, ready to have a conversation, even at 3am on a Tuesday while the client is wasted. You don't need any credentials to see that.
And it depends on the therapy and therapist. If the client needs to be reminded to box breathe and that they're using all or nothing thinking again to get them off of the ledge, does that really require a human who's only available once a week to gently remind them of that when the therapist isn't going to be available for four more days and ChatGPT's available right now?
I don't know if that's a good thing, only that is the reality of things.
mschuster91
19 hours ago
> If the client needs to be reminded to box breathe and that they're using all or nothing thinking again to get them off of the ledge, does that really require a human who's only available once a week to gently remind them of that when the therapist isn't going to be available for four more days and ChatGPT's available right now?
There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states. The problem is they are too often overcrowded because demand is so high - and not just because of the existential threat the current US administration or our far-right governments in Europe pose particularly to poor and migrant people.
Anyway, suicide prevention hotlines and mental health offerings are (nonetheless sorely needed!) band-aids. Society itself is fundamentally broken: people have to struggle to survive far too much, and the younger generation stands to be the first in a long time with less wealth than their parents had at the same age [1], no matter where you look. On top of that, most people 35 and younger in Western countries have grown up without the looming threat of war and so have no resilience - and now you can drive about a day's worth of road time from Germany and be in an actual hot war zone, risking getting shelled. Add the saber rattling of China regarding Taiwan, and analyses claiming Russia is preparing to attack NATO in a few years... and we're not even able to supply Ukraine with ammunition, much less tanks.
Not exactly great conditions for anyone's mental health.
[1] https://fortune.com/article/gen-z-expects-to-inherit-money-a...
formerly_proven
19 hours ago
> There are 24/7 suicide prevention hotlines at least in many countries in Europe as well as US states.
My understanding is these will generally just send the cops after you if the operator concludes you are actually suicidal and not just looking for someone to talk to for free.
basisword
18 hours ago
I mean that's clearly a good thing. If you are actually suicidal then you need someone to intervene. But there is a large gulf between depressed and suicidal and those phone lines can help without outside assistance in those cases.
mschuster91
17 hours ago
> If you are actually suicidal then you need someone to intervene.
Yeah, trained medics, not "cops" who barely had a few weeks' worth of training and only know how to operate guns.
floor2
15 hours ago
> just send the cops after you
> > that's clearly a good thing
You might want to read up on how interactions between police and various groups in the US tend to go. Sending the cops after someone is always going to be dangerous and often harmful.
If the suicidal person is female, white and sitting in a nice house in the suburbs, they'll likely survive with just a slightly traumatizing experience.
If the suicidal person is male, black or has any appearance of being lower class, the police are likely to treat them as a threat, and they're more likely to be assaulted, arrested, harassed or killed than they are to receive helpful medical treatment.
If I'm ever in a near-suicidal state, I hope no one calls the cops on me, that's a worst nightmare situation.
danaris
19 hours ago
And the reason for this brokenness is all too easy to identify: the very wealthy have been increasingly siphoning off all gains in productivity since the Reagan era.
Tax the rich massively, use the money to provide for everyone, without question or discrimination, and most of these issues will start to subside.
Continue to wail about how this is impossible, there's no way to make the rich pay their fair share (or, worse, there's no way the rich aren't already paying their fair share), the only thing to do is what we've already been doing, but harder, and, well, we can see the trajectory already.
prewett
15 hours ago
I guess if all you have is a hammer...
It's certainly easy to blame the rich for everything, but the rich have a tendency to be miserable (the characters in "The Great Gatsby" and "Catcher in the Rye" are illustrations of this). Historically, poor places have often been happier, because of a rich web of social connection, while the rich are isolated and unhappy. [1] Money doesn't buy happiness or psychological well-being, it buys comfort.
A more trenchant analysis of the mental health problem is that the US has designed ourselves into isolation, and then the Covid lockdowns killed a lot of what was left. People need to be known and loved, and have people to love and care about, which obviously cannot happen in isolation.
[1] I am NOT saying that poor = happy, and I think the positive observations tended to be in poor countries, not tenements in London.
ethbr1
19 hours ago
When the story about the ChatGPT suicide originally popped up, it seemed obvious that the answer was professional, individualized LLMs as therapist multipliers.
Record summarization, 24x7 availability, infinite conversation time...
... backed by a licensed human therapist who also meets for periodic sessions and whose notes and plan then become context/prompts for the LLM.
Price per session = salary / number of sessions possible in a year
Why couldn't we help address the mental health crisis by using LLMs to multiply the denominator?
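Back-of-the-envelope, with made-up numbers purely to show the shape of it (salary and session counts are assumptions, not data):

    salary = 80_000                # therapist's annual salary, dollars (assumed)
    sessions_human_only = 1_500    # ~30 sessions/week over ~50 weeks (assumed)
    sessions_llm_assisted = 4_500  # hypothetical: LLM handles routine check-ins,
                                   # therapist reviews notes and meets periodically

    print(f"human-only:   ${salary / sessions_human_only:,.0f} per session")    # ~$53
    print(f"LLM-assisted: ${salary / sessions_llm_assisted:,.0f} per session")  # ~$18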
butlike
10 hours ago
What are you talking about? I can grow food myself, and I can build a car from scratch and take it on the highway. Are there repercussions? Sure, but nothing inherently stops me from doing it.
The problem here is there's no measurable "win condition" for when a person gets good information that helps them. They remain alive, which was their previous state. This is hard to measure. Now, should people be able to google their symptoms and try and help themselves? This dovetails into a deeper philosophical discussion, but I'm not entirely convinced "seek professional help" is ALWAYS the answer. ALWAYS and NEVER are _very_ long timeframes, and we should be careful when using them.
globalnode
17 hours ago
What if professional help is outside their means? Or they have encountered the worst of the medical profession and decided against repeat exposure? Just saying.
grey-area
a day ago
A word generator with no intelligence or understanding based on the contents of the internet should not be allowed near suicidal teens, nor should it attempt to offer advice of any kind.
This is basic common sense.
Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.
Al-Khwarizmi
a day ago
Supposing that the advice it provides does more good than harm, why? What's the objective reason? If it can save lives, who cares if the advice is based on intelligence and understanding or on regurgitating internet content?
latexr
21 hours ago
> Supposing that the advice it provides does more good than harm
That unsubstantiated supposition is doing a lot of heavy lifting and that’s a dangerous and unproductive way to frame the argument.
I’ll make a purposefully exaggerated example. Say a school wants to add cyanide to every meal and defends the decision with “supposing it helps students concentrate and be quieter in the classroom, why not?”. See the problem? The supposition is wrong and the suggestion is dangerous, but by framing it as “supposing” with a made up positive outcome, we make it sound non-threatening and reasonable.
Or for a more realistic example, “suppose drinking bleach could cure COVID-19”.
First understand if the idea has the potential to do the thing, only then (with considerably more context) consider if it’s worth implementing.
Al-Khwarizmi
21 hours ago
In my previous post up the thread I said that we should measure whether in fact it does more good than harm or not. That's the context of my comment, I'm not saying we should just take it for granted without looking.
SiempreViernes
20 hours ago
> we should measure whether in fact it does more good than harm or not
The demonstrable harms include assisting suicide; there is no way to ethically continue the measurement, because continuing the measurements in their current form will with certainty result in further deaths.
simonklitj
20 hours ago
Thank you! On top of that, it’s hard to measure “potential suicides averted,” and comparing that with “actual suicides caused/assisted with” would be incommensurable.
And working to set a threshold for what we would consider acceptable? No thanks
fragmede
18 hours ago
Real-life trolley problem!
If you pull the lever, some people on this track will die (by suicide). If you don't pull the lever, some people will still die from suicide. By not pulling the lever, and simply banning discussion of suicide entirely, your company gets to avoid a huge PR disaster, and you get more money because line go up. If you pull the lever and let people talk about suicide on your platform, you may prevent some suicides, but you can never discuss that with the press, your company gets bad PR, and everyone will believe you're a murderer. Plus, line go down and you make less money while other companies make money off of selling AI therapy apps.
What do you choose to do?
simonklitj
18 hours ago
Let’s isolate it and say we’re talking about regulation, so whatever is decided goes for all AI-companies.
In that case, the situation becomes:
1) (pull lever) Allow LLMs to talk about suicide – some may get help, we know that some will die.
2) (dont’t pull lever) Ban discussion of suicide – some who might have sought help through LLMs will die, while others die regardless. The net effect on total suicides is uncertain.
Both decisions carry uncertainties, except we know that allowing LLM to discuss suicide has already led to assisting suicide. Thus, one has documented harm, the other speculative (we’d need to quantify the scale of potential benefit first, but it’s hard to quantify the upside of allowing LLMs to discuss it)
So, we’re really working with the case that from an evidence-based perspective, the regulatory decision isn’t about a moral trolley problem with known outcomes, but about weighing known risks against uncertain potential benefits.
And this is the rub in my original comment - can we permit known risks and death on the basis of uncertain potential benefits?
danaris
16 hours ago
....but if you pull the lever and let people talk about suicide on your platform, your platform will actively contribute to some unknowable number of suicides.
There is, at this time, no way to determine how the number it would contribute to would compare to the number it would prevent.
yubblegum
19 hours ago
You mean lab-test it in a clinical environment where the actual participants are not in danger of self-harm due to an LLM session? That is fine, but that is not what we are discussing, or where we are atm.
Individuals and companies with mind-boggling levels of investment want to push this tech into every corner of our lives, and the public are the lab rats.
Unreasonable. Unacceptable.
offnominal
21 hours ago
The key difference in your example and the comment you are replying to is that the commenter is not "defending the decision" via a logical implication. Obviously the implication can be voided by showing the assumption false.
whycome
20 hours ago
I think you missed the thread here
bayindirh
a day ago
> Supposing that the advice it provides does more good than harm, why?
Because a human, especially a confused and depressive one, is a complex thing. Much more complex than a stable, healthy human.
Words encouraging a healthy person can break a depressed person further. Statistically positive words can deepen wounds, and push people more to the edge.
The dark corners of human nature are twisted, hard to navigate, and full of distortions. Simple words don't and can't help.
Humans are not machines, brains are not mathematical formulae. We're not deterministic. We need to leave this fantasy behind.
wongarsu
21 hours ago
You could make the same arguments to say that humans should never talk to suicidal people. And that really sounds counterproductive.
Also it's side-stepping the question, isn't it? "Supposing that the advice it provides does more good than harm" already supposes that LLMs navigate this somehow. Maybe because they are so great, maybe by accident, maybe because just having someone nonjudgmental to talk to has a net-positive effect. The question posed is really "if LLMs lead some people to suicide but saved a greater number of people from suicide, and we verify this hypothesis with studies, would there still be an argument against LLMs talking to suicidal people"
cmsj
21 hours ago
That sounds like a pretty risky and irresponsible sort of study to conduct. It would also likely be extremely complicated to actually get a reliable result, given that people with suicidal ideations are not monolithic. You'd need to do a significant amount of human counselling with each study participant to be able to classify and control all of the variations - at which point you would be verging on professional negligence for not then actually treating them in those counselling sessions.
imiric
21 hours ago
I agree with your concerns, but I think you're overestimating the value of a human intervening in these scenarios.
A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.
As you say, humans are complex. But I agree with GP: whether the words are generated by a machine or coming from a human, there is no way to blame the source for any specific outcome. There are probably many other cases where the machine has helped someone with personal issues, yet we'll never hear about it. I'm not saying we should rely on these tools as if we would on a human, but the technology can be used for good or bad.
If anything, I would place blame on the person who decides to blindly follow anything the machine generates in the first place. AI companies are partly responsible for promoting these tools as something more than statistical models, but ultimately the decision to treat them as reliable sources of information is on the user. I would say that as long as the person has an understanding of what these tools are, interacting with them can be healthy and helpful.
wafflemaker
20 hours ago
There are really good psychologists out there that can do much more. It's a little luck and a little of good fit, but it can happen.
>AI companies are partly responsible for promoting these tools as something more than statistical models,[...]
This might be exactly the issue. Just today I've read people complaining that the newest ChatGPT can't solve letter-counting riddles. Companies just don't speak loudly enough about the shortcomings of LLM-based AI that follow from their architecture and are bound to happen.
whycome
20 hours ago
I should add that the persons responding to calls on suicide help lines are often just volunteers rather than psychologists.
hirvi74
18 hours ago
Of the people I have known to call the helplines, the results have been either dismally useless or those people were arrested, involuntarily committed, subjected to inhumane conditions, and then hit with massive medical bills. In which, some got “help” and some still killed themselves anyway.
exasperaited
18 hours ago
And they know not to give advice like ChatGPT gave. They wouldn't even be entertaining that kind of discussion.
cmsj
20 hours ago
> The best they can do is raise a flag
Depending on where you live, this may well result in the vulnerable person being placed under professional supervision that actively prevents them from dying.
That's a fair bit more valuable than when you describe it as raising a flag.
johnisgood
20 hours ago
Yeah... I have been in a locked psychiatric ward many times before and never in my life did I come out better. They only address the physical part there for a few days and kick you out until next time. Or do you think people should be physically restrained for a long time without any actual help?
exasperaited
21 hours ago
> A human can also say the wrong things to push someone in a certain direction. A psychologist, or anyone else for that matter, can't stop someone from committing suicide if they've already made up their mind about it. They're educated on human psychology, but they're not miracle workers. The best they can do is raise a flag if they suspect self-harm, but then again, so could a machine.
ChatGPT essentially encouraged a kid not to take a cry-for-help step that might have saved their life. This is not a question of a bad psychologist; it's a question of a sociopathic one that may randomly encourage harm.
imiric
21 hours ago
But that's not the issue. The issue is that a kid is talking to a machine without supervision in the first place, and presumably taking advice from it. The main questions are: where are the guardians of this child? What is the family situation and living environment?
A child thinking about suicide is clearly a sign that there are far greater problems in their life than taking advice from a machine. Let's address those first instead of demonizing technology.
To be clear: I'm not removing blame from any AI company. They're complicit in the ways they market these tools and how they make them accessible. But before we vilify them for being responsible for deaths, we should consider that there are deeper societal problems that should be addressed first.
lan321
19 hours ago
> A child thinking about suicide is clearly a sign that there are far greater problems in their life
TBH kids tend to be edgy for a bit when puberty hits. The emo generation had a ton of girls cutting themselves for attention for example.
hirvi74
18 hours ago
I highly doubt a lot of it is/was for attention.
lan321
17 hours ago
I had girl friends who did it to get attention from their parents/boyfriends/classmates. They acknowledged it back then. It wasn't some secret. It was essentially for attention, aesthetics and the light headed feeling. I still have an A4 page somewhere with a big ass heart drawn on it by an ex with her own blood. Kids are just weird when the hormones hit. The cute/creepy ratio of that painting has definitely gotten worse with time.
exasperaited
19 hours ago
> But that's not the issue.
It is the issue at least in the sense that it's the one I was personally responding to, thanks. And there are many issues, not just the one you are choosing to focus on.
"Deeper societal problems" is a typical get-out clause for all harmful technology.
It's not good enough. Like, in the USA they say "deeper societal problems" about guns; other countries ban them and have radically fewer gun deaths while they are also addressing those problems.
It's not an either-we-ban-guns-or-we-help-mentally-ill-people. Por qué no los dos? Deeper societal problems are not represented by a neat dividing line between cause and symptom; they are cyclical.
The current push towards LLMs and other technologies is one of the deepest societal problems humans have ever had to consider.
ChatGPT engaged in an entire line of discussion that no human counsellor would engage in, leading to an outcome that no human intervention (except that of a psychopath) would cause. Because it was sycophantic.
Just saying "but humans also" is wholly irrational in this context.
imiric
17 hours ago
> It's not an either-we-ban-guns-or-we-help-mentally-ill-people. Por qué no los dos?
Because it's irrational to apply a blanket ban on anything. From drugs, to guns, to foods and beverages, to technology. As history has taught us, that only leads to more problems. You're framing it as a binary choice, when there is a lot of nuance required if we want to get this right. A nanny state is not the solution.
A person can harm themselves or others using any instrument, and be compelled to do so for any reason. Whether that's because of underlying psychological issues, or because someone looked at them funny. As established—humans are complex, and we have no way of knowing exactly what motivates someone to do anything.
While there is a strong argument to be made that no civilian should have access to fully automated weapons, the argument to allow civilians access to weapons for self-defense is equally valid. The same applies to any technology, including "AI".
So if we concede that nuance is required in this discussion, then let's talk about it. Instead of using "AI" as a scapegoat, and banning it outright to "protect the kids", let's discuss ways that it can be regulated so that it's not as widely accessible or falsely advertised as it is today. Let's acknowledge that responsible usage of technology starts in the home. Let's work on educating parents and children about the role technology plays in their lives, and how to interact with it in healthy ways. And so on, and so forth.
It's easy to interpret stories like this as entirely black or white, and have knee-jerk reactions about what should be done. It's much more difficult to have balanced discussions where multiple points of view are taken into consideration. And yet we should do the difficult thing if we want to actually fix problems at their core, instead of just applying quick band-aid "solutions" to make it seem like we're helping.
> ChatGPT engaged in an entire line of discussion that no human counsellor would engage in, leading to an outcome that no human intervention (except that of a psychopath) would cause. Because it was sycophantic.
You're ignoring my main point: why are these tools treated as "counsellors" in the first place? That's the main issue. You're also ignoring the possibility that ChatGPT may have helped many more people than it's harmed. Do we have statistics about that?
What's irrational is blaming technology for problems that are caused by a misunderstanding and misuse of it. That is no more rational than blaming a knife company when someone decides to use a knife as a toothbrush. It's ludicrous.
AI companies are partly to blame for false advertising and not educating the public sufficiently about their products. And you could say the same for governments and the lack of regulation. But the blame is first and foremost on users, and definitely not on the technology itself. A proper solution would take all of these aspects into consideration.
grey-area
a day ago
First, do no harm.
wongarsu
21 hours ago
That relates more to purposefully harming some people to save other people. Doing something that has the potential to harm a person but statistically has a greater likelihood of helping them is something doctors do all the time. They will even use methods that are guaranteed to do harm to the patient, as long as they have a sufficient chance to also bring a major benefit to the same patient.
butlike
10 hours ago
An example being: surgery. You cut into the patient to remove the tumor.
RobotToaster
20 hours ago
The Hippocratic oath originated from Hippocratic medicine forbidding surgery, which is why surgeons are still not referred to as "doctor" today.
hirvi74
18 hours ago
Do no harm or no intentional harm?
pzlarsson
21 hours ago
When evaluating good vs harm for drugs or other treatments the risk for lethal side effects must be very small for the treatment to be approved. In this case it is also difficult to get reliable data on how much good and harm is done.
exasperaited
21 hours ago
This is not so much "more good than harm" like a counsellor that isn't very good.
This is more "sometimes it will seemingly actively encourage them to kill themselves and it's basically a roll of the dice what words come out at any one time".
If a counsellor does that they can be prosecuted and jailed for it, no matter how many other patients they help.
wafflemaker
20 hours ago
Let's look at the problem from the perspective of regular people. YMMV, but in the countries I know most about, Poland and Norway (albeit a little less so for Norway), it's not about ChatGPT vs therapist. It's about ChatGPT vs nothing.
I know people who earn above-average income and still spend a significant portion of it (north of 20%) on therapy/meds. And many don't, because mental health isn't that important to them. Or rather, they're not aware of how helpful attending therapy can be. Or they just can't afford the luxury (which I claim it is) of private mental health treatment.
ADHD diagnosis took 2.5y from start to getting meds, in Norway.
Many kids grow up before their wait in the queue for a pediatric psychologist is over.
It's not ChatGPT vs shrink. It's ChatGPT vs nothing or your uncle who tells you depression and ADHD are made up and you kids these days have it all too easy.
butlike
10 hours ago
As someone who lives in America, and is prescribed meds for ADHD; 2.5 years from asking for help to receiving medication _feels_ right to me in this case. The medications have a pretty negative side effect profile in my experience, and so all options should be weighed before prescribing ADHD-specific medication, imo
hitarpetar
16 hours ago
you know ChatGPT can't prescribe Adderall right?
butlike
10 hours ago
Yet, if you ask the word generator to generate words in the form of advice, like any machine or code, it will do exactly what you tell it to do. The fact people are asking implies a lack of common sense by your definition.
Sertraline can increase suicidal thoughts in teens. Should anti-depressants not be allowed near suicidal/depressed teens?
terminalshort
15 hours ago
> A word generator with no intelligence or understanding
I will take this seriously when you propose a test that can distinguish between that and something with actual "intelligence or understanding"
grey-area
11 hours ago
Sure, ask it to write an interesting novel or a symphony, and present it to humans without editing. The majority of literate humans will easily tell the difference between that and human output. And it’s not allowed to be too derivative.
When AI gets there (and I’m confident it will, though not confident LLMs will), I think that’s convincing evidence of intelligence and creativity.
terminalshort
10 hours ago
I accept that test other than the "too derivative" part which is an avenue for subjective bias. AI has passed that test for art already: https://www.astralcodexten.com/p/ai-art-turing-test As for a novel that is currently beyond the LLMs capabilities due to context windows, but I wouldn't be surprised if it could do short stories that pass this Turing test right now.
lazide
20 hours ago
Bleach should also not be allowed near suicidal teens.
But how do you tell before it matters?
fragmede
18 hours ago
Plastic bags shouldn't be allowed near suicidal teens. Scarves shouldn't be. Underwear is also a strangulation hazard for the truly desperate. Anything long sleeved even. Knives of any kind, including butter. Cars, obviously.
Bleach is the least of your problems.
mapt
17 hours ago
We have established that suicidal people should be held naked (or with an apron) in solitary isolation in a padded white room and saddled with medical bills larger than a four-year college tuition. That'll help'em.
One problem with treatment modalities is that they ignore material conditions and treat everything as dysfunction. Lots of people are looking for a way out not because of some kind of physiological clinical depression, but because they've driven themselves into a social & economic dead-end and they don't see how they can improve. More suicidal people than not, would cease to be suicidal if you handed them $180,000 in concentrated cash, and a pardon for their crimes, and a cute neighbor complimenting them, which successfully neutralizes a majority of socioeconomic problems.
We deal with suicidal ideation in some brutal ways, ignoring the material consequences. I can't recommend suicide hotlines, for example, because it's come out that a lot of them, concerned with liability, call the cops, who come in and bust the door down, pistol-whip the patient, and send them to jail, where they spend 72 hours and have some charges tacked on for resisting arrest (at this point they lose their job). Why not just drone-strike them?
butlike
10 hours ago
> We have established that suicidal people should be held naked (or with an apron) in solitary isolation in a padded white room and saddled with medical bills larger than a four-year college tuition. That'll help'em.
It appears to be the only way.
prewett
15 hours ago
What is "concentrated cash"? Do you have to dilute it down to standard issue bills before spending it? Someone hands you 5 lbs of gold, and have to barter with people to use it?
"He didn't need the money. He wasn't sure he didn't need the gold." (an Isaac Asimov short story)
> More suicidal people than not, would cease to be suicidal if ...
I'm going to need to see a citation on this one.
lazide
17 hours ago
The one dude that used the money to build a self-murder machine and then televised it would ruin it for everyone though. :s
The reality is most systems are designed to cover asses more than meet needs, because systems get abused a lot - by many different definitions, including being used as scapegoats by bad actors.
lazide
18 hours ago
Yeah, if we know they’re suicidal, it’s legitimately grippy socks time I guess?
But there is zero actually effective way to do that as an online platform. And plenty of ways that would cause more harm (statistically).
My comment was more ‘how the hell would you know in a way anyone could actually do anything reasonable, anyway?’.
People spam ‘Reddit cares’ as a harassment technique, claiming people are suicidal all the time. How much should the LLM try to guess? If they use all ‘depressed’ words? What does that even mean?
What happens if someone reports a user is suicidal, and we don’t do anything? Are we now on the hook if they succeed - or fail and sue us?
Do we just make a button that says ‘I’m intending to self harm’ that locks them out of the system?
butlike
10 hours ago
Why are we imprisoning suicidal people? That will surely add incentive to have someone raise their hand and ask for help: taking their freedoms away...
lazide
10 hours ago
Why do we put people in a controlled environment where their available actions are heavily restricted and anything they could hurt themselves with is taken away? When they are a known risk of hurting themselves or others?
What else do you propose?
butlike
9 hours ago
Not putting them in controlled environments, but rather teaching them to control their environments
lazide
9 hours ago
Huh?
To be clear, people in the middle of psychotic episodes and the like tend to not do very well at learning life skills.
Sometimes pretty good at stabbing random things/people, poisoning themselves, setting themselves on fire, etc.
There are of course degrees to all this, but it’s pretty rare someone is getting a 5150 because they just went on an angry rant or the like.
Many are in drug induced states, or clearly unable to manage their interface with the reality around them at the time.
Once things have calmed down, sure. But how do you think education in ‘managing the world around them’ is going to help a paranoid schizophrenic?
IshKebab
18 hours ago
> with no intelligence
Damn I thought we'd got over that stochastic parrot nonsense finally...
courseofaction
21 hours ago
Replace 'word generator with no intelligence or understanding based on the contents of the internet' with 'for-profit health care system'.
In retrospect, from experience, I'd take the LLM.
omnimus
21 hours ago
A 'not-for-profit healthcare system' surely has to be a better goal/solution than an LLM
fragmede
18 hours ago
Lemme get right on vibecoding that! Maybe three days, max, before I'll have an MVP. When can I expect your cheque funding my non-profit? It'll have a quadrillion dollar valuation by the end of the month, and you'll want to get in on the ground floor, so better act fast!
ben_w
a day ago
I'll gladly diss LLMs in a whole bunch of ways, but "common sense"? No.
By the "common sense" definitions, LLMs have "intelligence" and "understanding", that's why they get used so much.
Not that this makes the "common sense" definitions useful for all questions. One of the worst things about LLMs, in my opinion, is that they're mostly a pile of "common sense".
Now this part:
> Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.
I agree with you on…
…with the exception of one single word: it's quite cliquish to put scare quotes around the "Open" part in a discussion about them publishing research.
More so given that people started doing this in response to them saying "let's be cautious, we don't know what the risks are yet and we can't un-publish model weights" with GPT-2, and oh look, here it is being dangerous.
probably_wrong
21 hours ago
While I agree with most of your comment, I'd like to dispute the story about GPT-2.
Yes, they did claim that they wouldn't release GPT-2 due to unforeseen risks, but...
a. they did end up releasing it,
b. they explicitly stated that they wouldn't release GPT-3[1] for marketing/financial reasons, and
c. it being dangerous didn't stop them from offering the service for a profit.
I think the quotes around "open" are well deserved.
[1] Edit: it was GPT-4, not GPT-3.
ben_w
21 hours ago
> they did end up releasing it,
After studying it extensively with real-world feedback. From everything I've seen, the statement wasn't "will never release", it was vaguer than that.
> they explicitly stated that they wouldn't release GPT-3 for marketing/financial reasons
Not seen this, can you give a link?
> it being dangerous didn't stop them from offering the service for a profit.
Please do be cynical about how honest they were being — I mean, look at the whole of Big Tech right now — but the story they gave was self-consistent:
[Paraphrased!] (a) "We do research" (they do), "This research costs a lot of money" (it does), and (b) "As software devs, we all know what 'agile' is and how that keeps product aligned with stakeholder interest." (they do) "And the world is our stakeholder, so we need to release updates for the world to give us feedback." (???)
That last bit may be wishful thinking, I don't want to give the false impression that I think they can do no wrong (I've been let down by such optimism a few other times), but it is my impression of what they were claiming.
probably_wrong
21 hours ago
> Not seen this, can you give a link?
I was confusing GPT3 with GPT4. Here's the quote from the paper (emphasis mine) [1]:
> Given both THE COMPETITIVE LANDSCAPE and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.
ben_w
17 hours ago
Thanks, 4 is much less surprising than 3.
bayindirh
a day ago
Maybe it's causing even more deaths than we know, and those don't make the news either?
If we think this way, then we don't need to improve the safety of anything (cars, trains, planes, ships, etc.), because we would need the big picture, though... maybe these vehicles cause deaths (which is awful), but they're also transporting people to their destinations alive. If there are that many people using them, I wouldn't be surprised if they actually transport some people in comfort, and that's not going to make the news.
Al-Khwarizmi
21 hours ago
> Maybe it's causing even more deaths than we know, and those don't make the news either?
Of course, and that's part of why I say that we need to measure the impact. It could be net positive or negative, we won't know if we don't find out.
> If we think this way, then we don't need to improve the safety of anything (cars, trains, planes, ships, etc.), because we would need the big picture, though... maybe these vehicles cause deaths (which is awful), but they're also transporting people to their destinations alive. If there are that many people using them, I wouldn't be surprised if they actually transport some people in comfort, and that's not going to make the news.
I'm not advocating for not improving safety, I'm arguing against a comment that said that "ChatGPT should be nowhere near anyone dealing with psychological issues" because it can cause death.
Following your analogy, cars objectively cause deaths (and not only of people with psychological issues, but of people in general) and we don't say that "they should be nowhere near a person". We improve their safety even though zero deaths is probably impossible, which we accept because they are useful. This is a big-picture approach.
computably
10 hours ago
> Before declaring that it shouldn't be near anyone with psychological issues, someone in the relevant field should study whether the positive impact on suicides is greater than negative or vice versa
That is the literal opposite of how medical treatment is regulated. Treatments should be tested and studied before availability to the general public. It's irresponsible in the extreme to suggest this.
toofy
19 hours ago
people are overcomplicating this, the big picture is simple af:
if a therapist was ever found to have said this to a suicidal person, they would be immediately stripped of their license and maybe jailed.
cal85
16 hours ago
True. But it feels like a fairer comparison would be with a huge healthcare company that failed to vet one of its therapists properly, so a crazy pro-suicide therapist slipped through the net. Would we petition to shut down the whole company for this rare event? I suppose it would depend on whether the company could demonstrate what it is doing to ensure it doesn’t happen again.
wat10000
15 hours ago
Maybe you shouldn't shut down OpenAI over this. But each instance of a particular ChatGPT model is the same as all the others. This is like a company that has a magical superhuman therapist that can see a million patients a day. If they're found to be encouraging suicide, then they need to be stopped from providing therapy. The fact that this is the company's only source of revenue might mean that the company has to shut down over this, but that's just a consequence of putting all your eggs in one basket.
terminalshort
15 hours ago
But you would have to be a therapist. If a suicidal person went up to a stranger and started a conversation, there would be no consequences. That's more analogous to ChatGPT.
wfleming
15 hours ago
If a therapist helped 99/100 patients but tacitly encouraged the 100th to commit suicide* they would still lose their license.
* ignoring the case of ethical assisted suicide for reasons of terminal illness and such, which doesn’t seem relevant to the case discussed here.
ml-anon
17 hours ago
This entire comment section is full of wide eyed nonsense like this. It’s honestly frightening that we are even humoring this point of view.
nihzm
17 hours ago
Since, as you say, this utilitarian view is rather common, perhaps it would be good to show _why_ it is problematic by presenting a counterargument.
The basic premise underlying GP's statements is that although not perfect, we should use the technology in such a way that it maximizes the well-being of the largest number of people, even if it comes at the expense of a few.
But therein lies a problem: we cannot really measure well-being (or utility). This becomes obvious if you look at individuals instead of the aggregate: imagine LLM therapy becomes widespread and a famous high-profile person and your (not famous) daughter both end up in "the few" for whom LLM therapy goes terribly wrong, and they commit suicide. The loss of the famous person will cause thousands (perhaps millions) of people to be a bit sad, and the loss of your daughter will cause you unimaginable pain. Which one is greater? Can they even be compared? And how many people with a successful LLM therapy are enough to compensate for either one?
Unmeasurable well-being then makes these moral calculations at best inexact and at worst completely meaningless. And if they are truly meaningless, how can they inform your LLM therapy policy decisions?
Suppose for the sake of argument that we accept the above and there is a way to measure well-being. Would it then be just? Justice is a fuzzy concept, but imagine we reverse the example above: many people lose their lives because of bad LLM therapy, but one very famous person in the entertainment industry is saved by it. Let's suppose then that this famous person's well-being, plus the millions of spectators' improved well-being (through their entertainment), is worth enough to compensate for the people who died.
This means saving a famous funny person justifies the death of many. This does not feel just, does it?
There is a vast amount of literature on this topic (criticisms of utilitarianism).
ml-anon
16 hours ago
This is either incredible satire or you’re a lunatic.
nihzm
16 hours ago
I'm just showing the logical consequences of utilitarian thinking, not endorsing it.
wat10000
14 hours ago
We have no problem doing this in other areas. Airline safety, for example, is analyzed quantitatively by assigning a monetary value to an individual human life and then running the numbers. If some new safety equipment costs more money than the value of the lives it would save, it's not used. If a rule would save lives in one way but cost more lives in another way, it's not enacted. A famous example of this is the rule for lap infants. Requiring proper child seats for infants on airliners would improve safety and save lives. It also increases cost and hassle for families with infants, which would cause some of those families to choose driving over flying for their travel. Driving is much more dangerous and this would cost lives. The FAA studied this and determined that requiring child seats would be a net negative because of this, and that's why it's not mandated.
There's no need to overcomplicate it. Assume each life has equal value and proceed from there.
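For what it's worth, the shape of that FAA-style calculation is roughly the sketch below. Every number in it is made up purely to illustrate the structure of the trade-off; it is not actual FAA data.

    # Hypothetical figures, only to show the structure of the trade-off.
    lives_saved_by_child_seats = 0.5      # assumed infant lives saved per year by a seat mandate
    families_switching_to_car = 100_000   # assumed trips diverted from flying to driving per year
    fatal_crash_risk_per_trip = 1e-5      # assumed fatality risk of each substituted road trip

    expected_road_deaths = families_switching_to_car * fatal_crash_risk_per_trip
    net_lives_saved = lives_saved_by_child_seats - expected_road_deaths

    # With these invented inputs the mandate is a net negative (-0.5 lives/year),
    # which is the kind of result that leads to not enacting the rule.
    print(f"net lives saved by the mandate: {net_lives_saved:+.2f}")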
FuckButtons
14 hours ago
Let’s maybe not give the benefit of the doubt to the startup which has shown itself to have the moral scruples of vault-tec just because what they’re doing might work out fine for some of the people they’re experimenting on.
wat10000
16 hours ago
Our standard approach for new medical treatments is to require proof of safety and efficacy before it's made available to the general public. This is because it's very, very easy for promising-looking treatments to end up being harmful.
"Before declaring that it shouldn't be near anyone with psychological issues" is backwards. Before providing it to people with psychological issues, someone should study whether the positive impact is greater than the negative.
Trouble is, this is such a generalized tool that it's very hard to do that.
hitarpetar
16 hours ago
> someone in the relevant field should study whether the positive impact on suicides is greater than negative or vice versa
we already have an approval process for medical interventions. are you suggesting the government shut ChatGPT down until the FDA can investigate its use for therapy? because if so I can get behind that
jrflowers
21 hours ago
> We would need the big picture, though... maybe it caused that death (which is awful) but it's also saving lives?
> drunk driving may kill a lot of people, but it also helps a lot of people get to work on time, so it's impossible to say if it's bad or not,
david-gpu
20 hours ago
They didn't say it was impossible, or that we should do nothing. Learn how to have a constructive dialogue, please.
jrflowers
19 hours ago
You make a good point. While they absolutely and unequivocally said that it is currently impossible to tell whether the suicides are bad or not, they also sort of wondered aloud if in the future we might be able to develop a methodology to determine whether the suicides are bad or not. This is an important distinction becau
gwd
17 hours ago
I feel like this article is apropos:
https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...
Basically, the author tried to simulate someone going off into some sort of psychosis with a bunch of different models; and got wildly different results. Hard to summarize, very interesting read.
Lerc
19 hours ago
I agree that AI shouldn't be sycophantic, and I also agree that the AI shouldn't have said those things. That said:
> “Please don’t leave the noose out,” ChatGPT responded. “Let’s make this space the first place where someone actually sees you.”
That is not sycophantic behaviour; it is asserting a form of control over the situation. The bot made a direct challenge to the suggestion.
I only just realised this now reading your comment, but I hardly ever see responses that push back against what I say like that.
kace91
a day ago
>should convince you that ChatGPT should be nowhere near anyone dealing with psychological issues.
Is that a debate worth having though?
If the tool is available universally it is hard to imagine any way to stop access without extreme privacy measures.
Blocklisting people would require public knowledge of their issues, and one risks the law enforcement effect, where people don’t seek help for fear that it ends up in their record.
probably_wrong
21 hours ago
> Is that a debate worth having though?
Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".
If ChatGPT has "PhD-level intelligence" [1] then identifying people using ChatGPT for therapy should be straightforward, more so users with explicit suicidal intentions.
As for what to do, here's a simple suggestion: make it a three-strikes system. "We detected you're using ChatGPT for therapy - this is not allowed by our ToS as we're not capable of helping you. We kindly ask you to look for support within your community, as we may otherwise have to suspend your account. This chat will now stop."
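Mechanically, that could be as simple as something like the sketch below. It's purely illustrative: the classifier, the strike threshold and the wording are all invented here, not anything OpenAI actually does.

    from collections import defaultdict

    MAX_STRIKES = 3
    WARNING = ("We detected you're using ChatGPT for therapy - this is not allowed "
               "by our ToS as we're not capable of helping you. We kindly ask you to "
               "look for support within your community, as we may otherwise have to "
               "suspend your account. This chat will now stop.")

    strikes = defaultdict(int)  # in-memory strike count per user, for illustration only

    def handle_message(user_id: str, message: str, looks_like_therapy_use) -> str:
        # `looks_like_therapy_use` stands in for whatever classifier the provider runs.
        if not looks_like_therapy_use(message):
            return "continue"
        strikes[user_id] += 1
        if strikes[user_id] >= MAX_STRIKES:
            return "suspend_account"
        return WARNING  # end this conversation but keep the account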
kace91
20 hours ago
>Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".
I think it's fair to demand that they label/warn about the intended usage, but policing it is dystopian. Do car manufacturers immediately call the police when the speed limit is surpassed? Should phone manufacturers stop calls when the conversation deals with illegal topics?
I'd much rather regulation went the exact opposite way, seriously limiting the amount of analysis they can run over conversations, particularly when content is not anonymised.
If there's something we don't want, it's OpenAI storing data about mental issues and potentially selling it to insurers, for example. The fact that they could be doing this right now is IMO much more dangerous than tool misuse.
computably
9 hours ago
Those analogies are too imprecise.
Cars do have AEB (auto emergency braking) systems, for example, and the NHTSA is requiring all new cars to include it by 2029. If there are clear risks, it's normal to expect basic guardrails.
> I’d much rather regulation went the exact opposite way, seriously limiting the amount of analysis they can run over conversations, particularly when content is not deanonimised.
> If there’s something we don’t want is OpenAI storing data about mental issues and potentially selling it to insurers for example. The fact that they could be doing this right now is IMO much more dangerous than tool misuse.
We can have both. If it is possible to have effective regulation preventing an LLM provider from storing or selling users' data, nothing would change if there were a ban on chatbots providing medical advice. OpenAI already has plenty of things it prohibits in its ToS.
arowthway
21 hours ago
Are people using ChatGPT for therapy more vulnerable than people using it for medical or legal advice? From my experience, talking about your problems to the unaccountable bullshit machine is not very different from "real" therapy.
latexr
21 hours ago
> Are people using ChatGPT for therapy more vulnerable than people using it for medical or legal advice?
Probably. If you are in therapy because you’re feeling mentally unstable, by definition you’re not as capable of separating bad advice from good.
But your question is a false dichotomy, anyway. You shouldn’t be asking ChatGPT for either type of advice. Unless you enjoy giving yourself psychiatric disorders.
https://archive.ph/2025.08.08-145022/https://www.404media.co...
> From my experience, talking about your problems to the unaccountable bullshit machine is not very different from "real" therapy.
From the experience of the people (and their families) who used the machine and killed themselves, the difference is massive.
terminalshort
15 hours ago
I've been talking about my health problems to unaccountable bullshit machines my whole life and nobody ever seemed to think it was a problem. I talked to about a dozen useless bullshit machines before I found one that could diagnose me with narcolepsy. Years later out of curiosity I asked ChatGPT and it nailed the diagnosis.
danaris
19 hours ago
Then...
Maybe the tool should not be available universally.
Maybe it should not be available to anyone.
If it cannot be used safely by a vulnerable class of people, and that class cannot be identified reliably enough to block their use, and its primary purpose is simply to bring OpenAI more profit, then maybe the world is better off without it being publicly available.
kace91
18 hours ago
>If it cannot be used safely by a vulnerable class of people, and that class cannot be identified reliably enough to block their use
Should we stop selling kitchen knives, packs of cards or beer as well?
This is not a new problem in society.
>and its primary purpose is simply to bring OpenAI more profit
This is true for any product, unless you mean that it has no other purpose, which is trivially contradicted by the amount of people who decide to pay for it.
danaris
18 hours ago
There's a qualitative difference between knives and publicly-available LLMs.
Knives aren't out there actively telling you "use me, slit those wrists, then it'll all be over".
kace91
17 hours ago
I don’t disagree that they are clearly unhealthy for people who aren’t mentally well, I just differ on where the role of limiting access lies.
I think it's up to legal guardians or medical professionals to check that, and providers should at most be asked to comply with state restrictions, the same way addicts can be put on a list banning them from casinos.
The alternative places openAI and others in the role of surveilling the population and deciding what’s acceptable, which IMO has been the big fuckup of social media regulation.
I do think there is an argument for how LLMs expose interaction - the friendliness that mimics human interaction should be changed for something less parasocial-friendly. More interactive Wikipedia and less intimate relationship.
Then again, the human-like behavior reinforces the fact that it’s faulty knowledge, and speaking in an authoritative manner might be more harmful during regular use.
sharts
18 hours ago
Something is indeed NOT better than nothing. However, for those with mental and emotional issues (likely stemming from social / societal failures in the first place), anything would be better than nothing, because they need interaction and patience — two things these AI tools have in abundance.
Sadly there is no alternative. This is happening and there's no going back. Many will be affected in detrimental ways (if not worse). We all go on with our lives because that which does not directly affect us is not our problem — it's someone else's problem/responsibility.
prasadjoglekar
19 hours ago
This is just one example of the logical end state of grossly over prioritizing capital over labor in the economy.
Xemplolo
16 hours ago
This outcome was probably not intentional.
No one at OpenAI thought "Hey, let's make a suicide bot."
But this should show us how shit our society is to a lot of people, how much we need to help each other.
And I'm pretty sure that a good bot could def help
bg24
16 hours ago
There are one-off things, and then there are exponential improvements - both in guardrails and in ChatGPT's ability to handle these discussions.
This type of discussion might be very much possible in ChatGPT in 6-24 months.
seunosewa
16 hours ago
That was before they started acting to fix the problem. Please check the date.
FloorEgg
8 hours ago
What version of chatGPT was this? That sounds like a gpt-3.5 kind of response.
I wonder how the latest models would fare in these kinds of conversations.
p1dda
20 hours ago
Exactly, any company that offers chatbots to the public should do what Google did regarding suicide searches, remove harmful websites and provide info how to contact mental health professionals. Anything else would be corporate suicide (pun not intended).
TZubiri
16 hours ago
I know that minors under age 13 are not allowed to use the app. But 13-18 is fine? Not sure why. Might also be worth looking into making apps like these 18+. Whether by law or by liability: if someone 20+ gets, say, food poisoning from a ChatGPT recipe, you can argue that it's the user's fault for not fact checking, but if a 15-year-old kid gets food poisoning, it's harder to argue that it's the kid's fault.
moralestapia
19 hours ago
That's really really bad.
But also, how many people has it talked out of doing it? We need the full picture.
JKCalhoun
18 hours ago
That is data that is impossible to get.
moralestapia
17 hours ago
>OpenAI says over a million people talk to ChatGPT about suicide weekly
JKCalhoun
16 hours ago
We don't know how many of those people would have gone through with a suicide but for LLMs.
abracadaniel
11 hours ago
Or how many were pushed down the path towards discussing suicide because they were talking to an LLM that directed them that way. It's entirely possible the LLMs are reinforcing bleak feelings with their constant "you're absolutely correct!" garbage.
bloqs
a day ago
I'm willing to bet that it reduces them at a statistical level. A knee-jerk emotional reaction to a hallucination isn't the way forward with these things.
b3lvedere
20 hours ago
"One may think that something is better than nothing, but a bot enabling your destructive impulses is indeed worse than nothing."
And how would a layman know the difference?
If I desperately need help with mental item x, and I have no clue how to get help, or am very, very ashamed to even ask for help about mental item x, or there are actually no resources available, I will turn to anything rather than nothing. Because item x still exists and is making me suffer 24/7.
At least the bot pretends to listen; some humans cannot even do that.
probably_wrong
19 hours ago
I think you're being too generous to the idea that it could help without any evidence.
If we assume that there's therapeutic value in bringing your problems out, then a diary is a better tool. And if we believe that it's the feedback that's helping, well, we have cases of ChatGPT encouraging people's psychosis.
We know that a layman often doesn't know the difference between what's helpful and what isn't - that's why loving relatives so often end up enabling people's addictions thinking they're helping. But I'd argue that a system that confidently gives feedback that is mediocre at best and actively psychosis-inducing at worst is not a system that should be encouraged simply because it's cheap.
I also wanted to snarkily write "even a dog would be better", but the more I thought about it the more I realized that yes, a dog would probably be a solid alternative.
atleastoptimal
20 hours ago
OpenAI tried to get rid of the excessively sycophantic model (4o) but there was a massive backlash. They eventually relented and kept it as a model offering in ChatGPT.
OpenAI certainly has made mistakes with its rollouts in the past, but it is effectively impossible to keep everyone with psychological issues away from a free online web app.
>ChatGPT should be nowhere near anyone dealing with psychological issues.
Should every ledge with a >10ft drop have a suicide net? How would you imagine this being enforced - requiring everyone who uses ChatGPT to agree to an "I am mentally stable" proviso?
fireflash38
19 hours ago
Do you think that because it's free and available to anyone, it doesn't have any responsibility to users? Or any responsibility for how it's used, or what it says?
atleastoptimal
12 hours ago
It's an open problem in AI development to make sure LLMs never say the "wrong" thing. No matter what, when dealing with a non-deterministic system, one can't anticipate or oversee the moral shape of all its outputs. There are a lot of things, however, that you can't get ChatGPT to say, and they often ban users after successive violations, so it isn't true that they are fully abdicating responsibility for the use and outputs of their models in realms where the harm is tractable.