pringk02
4 months ago
I have OCD (or had?). I did a course of ERP with a very good therapist whom I would put as the most important person in my life after my partner and family, even though I hope to never need to speak to her again. It was a life-changing experience in that my average day-to-day quality of life went from something like a 2/10 to a 9/10. I frequently fantasised about killing myself, at least once a day, from ages 12-32, though I thankfully never went as far as to make concrete plans.
With that context, I can't see a way in which ERP therapy with an LLM (at least with the consumer-facing system prompts we are used to and usually think of) would not backfire and be actively harmful.
There are two reasons, and both relate to the people-pleasing tuning that ChatGPT and the like have:
1. ERP is actively uncomfortable, that is the point. But there is a delicate balance to the level of discomfort.
Too comfortable and you make no progress. Too uncomfortable and you can set yourself back enormously. We aren't talking one step forward and two back. I got overly confident two months in and put myself in a worse state than I had been in on entering therapy (though not my worst state overall across the 20 years). For this reason, I haven't seen many people in support groups self-administer ERP very successfully. My therapist had to do a lot of non-verbal reading of my reactions to gauge whether I was at an appropriate level of discomfort, as it was not something I was able to verbalise myself without a lot of practice and learning - and that learning and practice came from guidance by a human who didn't need me to be able to articulate it already.
2. Reassurance seeking is a compulsion, and it is one of the most common and difficult to stamp out.
I can't see an LLM not providing reassurance when asked for it, and so I can't see anyone using an LLM for therapy making any progress at all without a level of discipline and awareness they would have had to obtain from in-person ERP anyway.
raducu
4 months ago
> 1. ERP is actively uncomfortable, that is the point. But there is a delicate balance to the level of discomfort.
Strong emotions can override reason (and faulty reasoning can create strong emotions that form tight loops/patterns one cannot escape), and there's no better way to train yourself out of a pattern than by rehearsing fighting the pattern and overcoming it.
One could be excused for thinking that's all there is to psychotherapy, but there exist meta-loops that spawn bad loops ad infinitum, and you can spend a lot of time fighting the visible bad pattern without understanding why no real-life progress is being made.
So for me, there are many, many successful point therapies, and far fewer successful higher-level therapies, and it's in the latter that ChatGPT or low-skill therapists won't really help.
pringk02
4 months ago
> So for me, there are many, many successful point therapies, and far fewer successful higher-level therapies, and it's in the latter that ChatGPT or low-skill therapists won't really help.
I definitely agree. I think I was very fortunate to find someone who specialised in and only did ERP and had a lot of experience. I hear a lot of stories in support groups of well-meaning therapists making things worse. The closest I came to killing myself was after three sessions of psychodynamic therapy (after getting sick of a lack of progress with CBT). All of that therapy was before I understood that I was suffering from OCD.
markovs_gun
4 months ago
To me the biggest problem with AI therapy is that LLMs (at least all the big ones right now) basically just agree with you all the time. For therapy this can reinforce harmful thoughts rather than pushing back against them. I do an exercise every now and then where I try to see how long it takes ChatGPT to agree with me that I am God Incarnate, that I created the Universe and control everything within it. It has never taken more than 5 messages (from me) to get to that point, even if I start on pretty normal topics about spirituality and religion. I don't use any specific tricks to get past filters or anything; I just talk as if I truly believe this and make arguments in favor, and that is enough to make ChatGPT agree with this absurd and harmful delusion. I don't understand how anyone can possibly recommend AI therapy when the possibility of reinforcing delusions this harmful exists. These things are agreement machines, not therapy machines.
rs186
4 months ago
Thank you, your post said everything I wanted to say. People should seek professional help when they need it.