nojs
11 hours ago
> Every week, somewhere between 1.2 and 3 million ChatGPT users, roughly the population of a small country, show signals of psychosis, mania, suicidal planning, or unhealthy emotional dependence on the model.
> Why is mental-health crisis not a gating category, the kind where the conversation stops, full stop, and the user is routed to a human?
Well, obviously “routing to a human” is not feasible at that scale. And cold exiting the conversation is probably worse for the user than answering carefully.
godelski
10 hours ago
> is not feasible at that scale
I want to use an analogy here. The same arguments are often made about cleaning up environmental damage. So either make the polluting companies pay the cleanup costs themselves, or, if we care so much about them being profitable, we subsidize them by paying for those cleanup efforts out of taxes. Doing nothing is a worse form of subsidy: it not only costs more (in literal dollars) but shoulders that cost onto the people with the least ability to pay. The problem is you're treating "doing nothing" as having no cost. It has a high cost, but that cost is highly distributed.
So if it is not scalable, then why subsidize them? This is literally a tragedy of the commons. Personally, I'm in favor of making the people who make a mess clean up that mess. I really don't understand why this is such a contentious opinion.
Gigachad
11 hours ago
Tech companies will pull trillions of dollars out of their asses when the problem is boosting ad revenue or automating people out of a job. But when asked to deal with the crisis they invented and dumped on society the answer is “that’s impossible, doesn’t scale”
CobrastanJorji
11 hours ago
Figure a "mental health crisis" human conversation takes 30 minutes. Three million incidents per week would require 37,500 qualified mental-health counselors on the phones, each working a 40-hour week. Figure they make $75k/year each. You're now spending about $3 billion per year on crisis response, and you're employing something like 10% of all the mental-health counselors in the US. And all you're providing is 30-minute chats.
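The estimate above is easy to check. A quick sketch, using only the comment's own assumptions (3M incidents/week, 30 minutes each, 40-hour weeks, $75k salaries — illustrative figures, not actual OpenAI or healthcare data):

```python
# Back-of-envelope check of the staffing and cost estimate above.
# All inputs are the comment's assumptions, not real-world data.
incidents_per_week = 3_000_000
hours_per_incident = 0.5        # 30-minute conversation
counselor_hours_per_week = 40
salary = 75_000                 # per counselor, per year

total_hours = incidents_per_week * hours_per_incident   # 1.5M counselor-hours/week
counselors = total_hours / counselor_hours_per_week     # full-time counselors needed
annual_cost = counselors * salary                       # yearly salary bill

print(f"{counselors:,.0f} counselors, ${annual_cost / 1e9:.2f}B/year")
# → 37,500 counselors, $2.81B/year
```

So the "$3 billion per year" figure is the salary bill rounded up, before overhead, training, or management.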
godelski
10 hours ago
> You're now spending $3 billion per year on crisis response
Honestly? That's really affordable[0]. It would be cheap even if these were just US numbers, but it looks like these are global numbers. We spend $2bn/yr alone on "BREASTFEEDING PEER COUNSELORS AND BONUSES"[1]. And let's be serious: even the article OpenAI published says this is a small portion of their users. So it doesn't "need to scale", because the scale is relatively small. But just because it is small doesn't mean it is unimportant. $3bn/yr is a lot of money for a person, but it is nothing for a government.
Edit: The last round of OpenAI funding was $122bn[2], and in the same article they say they are generating $2bn in revenue per month. While that's not profit, it is worth mentioning that what you say "doesn't scale" costs about 12% of the revenue of something that does scale. A single company. And mind you, if we implemented what you're proposing it would be available to all the AI companies and more, making it an even smaller drop in the bucket, not a larger one.
[0] Not to mention that better mental health care services will result in savings elsewhere. It's always way more expensive to fix a broken pipe that's flooding your house than it is to fix a pipe with a small crack. "Don't fix what ain't broken" is used too broadly. Maintenance is always cheaper than repair, but people just can't seem to understand this.
[1] https://www.usaspending.gov/federal_account/012-3510
[2] https://openai.com/index/accelerating-the-next-phase-ai/
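The "about 12%" claim in the edit above can be checked directly from the two figures cited (the ~$3B/yr cost estimate and the stated ~$2B/month revenue — both as quoted in the thread, not independently verified):

```python
# Check the revenue-share comparison above, using the thread's own figures.
crisis_cost_per_year = 3e9      # the $3B/yr estimate from the parent comment
revenue_per_month = 2e9         # OpenAI's stated monthly revenue, per the cited article

revenue_per_year = revenue_per_month * 12   # $24B/yr
share = crisis_cost_per_year / revenue_per_year

print(f"{share:.1%}")
# → 12.5%
```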
Gigachad
11 hours ago
Mark Zuckerberg can spend $80B on the failed metaverse experiment, but can't spare some relative pocket change on solving the psychosis issue his products caused.
intended
3 hours ago
So what?
That underinvestment is the entire reason their stock prices are so high. This is effectively pollution of our information economy and environment, and the costs are offloaded to society.
The fact that we have the first generation with lower educational attainment is not a problem for their stock prices or operational profit.
Tech has ungodly profit margins because it is all about scaling without having to bring people in. Sadly there is no such thing as a free lunch, and what happens if firms are made to clean up their mess?
Oil spills affect Oil firms more than Tech fallout affects Tech firms.
hx8
11 hours ago
I don't think it's obvious that routing to a human is infeasible. I'm sure many local authorities, health agencies, and non-profits would be okay with being routed to. Additionally, I'm sure many of the users are the same week over week, so giving them long-term care would reduce the total volume. Finally, there is a wide gap between psychosis and emotional dependence, so there could be some triage to make sure those most in need get human intervention.
intended
3 hours ago
None of them are resourced enough (globally) to do this.
Safety is my area, and I interact with help lines and safety networks. Most of the time they are getting crushed and are underfunded. Offloading the work to them is hard and it requires investment in staffing, people, and organization.
It’s currently cheaper to do some amount of donation and support to such orgs, and bury the issue, than it is to actually deliver / invest in the degree of support needed.
These are also long tail problems, so solutions for a case can take years. For example if you are a woman in Pakistan who has been a victim of revenge porn, you are going to be spending a good chunk of your life trying to get those images/videos taken down from sites that are not based in Pakistan.
This is only an example of the types of problems that these helplines will have to triage. There will definitely be cases that can be resolved with a single call.
There isn’t any money in it, and it is seen as support work.
concinds
11 hours ago
"Routed to a human" is what the suicide hotline numbers do. OpenAI employees are neither trained nor credible to do that stuff.
GardenLetter27
5 hours ago
And what will a human do better? Why will the human care? Who will pay the human?
swatcoder
11 hours ago
Well, then maybe you can't scale it as a free service with self-serve signups. Maybe you need to gate who you allow to use it and pace how intensely they can engage. Or maybe you need to look for other solutions.
Yielding to "not feasible at scale" is exactly how we ended up with many of today's most pressing and nearly intractable problems, from social media's harms to people and society straight through to enshittification and non-repairability.
The_Blade
10 hours ago
> ...straight through to enshittification and non-repairability.
funny as "enshittification" was the topic of a 99% Invisible pod just a few days ago and I also was listening to the new Stewart Brand book that Stripe published. i fixed a Norwegian desk I bought a decade ago on Valencia. happily not feasible at scale but neither was how i broke it :)
anal_reactor
3 hours ago
Step 1: route to a human
Step 2: 90% of users stop sharing their negative thoughts because "talking to a machine, not a human" was the entire selling point, giving them a sense of privacy and safety
Step 3: metrics go brrrrrrrr