throwaway314155
14 hours ago
> In the US, about 1 in 5 children hospitalized each year don't have a caregiver. Stress in caregivers such as play therapists and parents can also affect children's emotions.
Trust me, large language models are not anywhere close to being able to substitute as an effective parent, therapist, or caregiver. In fact, I'd wager any attempts to do so would have mostly _negative_ effects.
I would implore you to reconsider this as a legitimate use case for your open device.
> We believe this is a complementary tool; it is not intended to replace anyone.
Well, which is it? Both issues you list heavily imply that your tool will serve as a de facto replacement, but then you finish by saying you don't intend that. So which aspects of the problems you listed will be solved by a simple "complementary tool"?
szundi
13 hours ago
I had a quite good social sciences teacher.
I'll never forget one of his remarks: there is only one thing worse than not having a mother, and that is having one.
So maybe a chatty LLM is not the worst thing that can happen to someone.
petee
10 hours ago
Wow, someone had a bad childhood... why even share that with your class?
HeatrayEnjoyer
9 hours ago
Why not?
Intralexical
11 hours ago
> In fact, I'd wager any attempts to do so would have mostly _negative_ effects.
It does kinda send an interesting message to a child, doesn't it? "You're not worth the time of anybody human, so here's a machine instead."
And that's before the chat even starts (and eventually goes off the rails).
edmundsauto
4 hours ago
Wouldn't the alternate message be "you're not worth the time of anybody human or machine"? That seems strictly worse.
CryptoBanker
9 hours ago
Might be a good lesson for the world they will enter…see the average customer support experience from large companies these days.
renewiltord
2 hours ago
Therapists are some of the lowest-IQ people, and they're also mostly going to therapists themselves because they have problems of their own. The last person you should get advice from is someone who can't sort out their own life.
Like trying to learn to swim from a guy who’d drown in a water bottle.
Better a machine than a broken human. By the end of our generation, this overuse of 24-year-olds with behavioral problems as diagnosticians will end.
maeil
2 hours ago
The machine in question is not based on a higher-quality set of rules. It regurgitates the average of the very humans you're talking about. That's no improvement.
zq2240
13 hours ago
In pediatric care, not every child has a parent who takes good care of them. In hospitals, it is more often play therapists who do this work, but their own stress can also affect children's emotions. For example, some children feel very traumatized before a line placement or blood test. This tool can help explain the specific procedure to them in empathetic language and encourage them on specific topics.
To be clear, doctors and play therapists still have to do their jobs. We have interviewed doctors who feel particularly frustrated about how to comfort children before tests or surgeries. They hope for a tool that can help comfort kids, which in turn means tests can be run sooner.
fragmede
13 hours ago
> Trust me, large language models are not anywhere close to being able to substitute as an effective parent, therapist, or caregiver.
You're asking us to trust you, but why should we trust you on this? Regardless of whether I think ChatGPT is any good at those things, you'd need some supporting evidence one way or the other before continuing.
throwaway314155
13 hours ago
It's an expression. In this context I just meant "it should be obvious". Maybe try steel-manning my argument first. If you really can't see why that's likely the case after using an LLM yourself, then I'll happily admit that I'm making an emotional argument and you're in no way required to "trust me".
fragmede
12 hours ago
https://chatgpt.com/share/6701aab3-2138-8009-b6b8-ec345b4382...
Why is that "not anywhere close to being able to substitute as an effective parent, therapist, or caregiver"?
Maybe I've had bad parents/therapists/caregivers all my life, but it seems like an entirely reasonable response. If there's a more specific scenario you'd like me to pose to show that its advice is no good, I'm happy to ask it.
throwaway314155
11 hours ago
I gladly admit that I was making an appeal to emotional intelligence, and you likely won't agree with me no matter how much back and forth we go through.
fragmede
10 hours ago
I'm not sure why you assume I'm coming from a position of bad faith, but to skip the back and forth, I'll just plainly state where I'm coming from. I'm agnostic on the whole thing and, to be totally transparent, I still have a human therapist, for good reason. But he's only available during set hours, so when I'm in crisis at 3am on a Tuesday, I fully admit that I'll have conversations with ChatGPT. I'm sure I'm not alone in doing so.
I'm not trying to convince you that it's, right now, a replacement for a human parent/therapist/caregiver. It's the "not anywhere close" part that I'm responding to. It's closer than talking with a Speak & Spell or a See 'n Say, for instance, but also ahead of static worksheets that you can't have a conversation with. I have no idea if this is good for society, and I have no idea where this technology will take us.
I want to know the limitations of this technology, and I'm willing to be convinced that maybe some of what it says isn't helpful as therapy, because that's interesting. Counting the R's in "strawberry", for instance, is something it's bad at for a specific technical reason: tokenization. If, after being fed every psychology textbook, the advice it gives is egregiously or subtly bad/wrong/harmful, or biased towards, say, Freudian analysis when the industry has moved way past that, I'd like to know and hear about it, so I can better judge when not to trust its advice and be able to warn others.
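To make the tokenization point concrete, here's a minimal sketch using OpenAI's tiktoken library (the exact token split shown is illustrative):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models
    tokens = enc.encode("strawberry")
    # the model operates on subword tokens, not individual characters,
    # so it never directly "sees" the letters it's being asked to count
    print([enc.decode([t]) for t in tokens])  # e.g. ['str', 'aw', 'berry']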
throwaway314155
9 hours ago
I'm of the opinion that it's like self-driving cars: even if you get 99.999% of the way there, it's still "not anywhere close" to the real thing, because you're speaking with something that has little to no agency while acting as though it's a good substitute for a person.
My instincts tell me that humans are pretty good at detecting this difference, and when they aren't, they still won't like being lied to or tricked about it. You can see it already: generative art or music, for instance, is in some cases more impressive than work created by humans, all else constant. You might trick a contest into giving you an award, but the moment people find out it's generated, they almost immediately react angrily and lose interest in the result.
That's because they used to attribute the result to a person, and now they know it's not a person. The psychology there probably isn't fully fleshed out yet, but I feel it instinctively, as I said before, and I suspect others do as well based on the reactions here.
Sorry for assuming bad faith. I've met a lot of people here who really do think LLMs in their current form are a kind of sentience. Blake Lemoine is a good example of that kind of naïveté.
I too have a human therapist, doctors, etc. And I too find myself chatting with ChatGPT about personal issues, and in certain cases I benefit tremendously from it. In particular, whenever it's something I would normally feel embarrassed to say to another actual human. Since I am very confident ChatGPT doesn't have feelings or even an internal monologue with which it could "judge" me, I have no issue telling it such things. The benefit comes from the questions I can have answered that would otherwise go unanswered. I think this makes for a potential assistive technology, as you implied earlier (better than a worksheet).
But for precisely that same reason, it will never work (in its current form) as a complete substitute for a human, and attempts to use it as one may in fact be actively harmful (as I originally suggested). Again, I don't think there's yet enough research on this, but "I know it when I see it". Any sufficiently serious topic I discuss with ChatGPT ultimately leaves me drained, because I feel as though I'm talking to a wall and not actually being acknowledged by anyone with agency who matters to me.
I will definitely admit that this is a highly opinionated take and is rooted in a lot of my personal feelings on the matter. As such, I can't really say that I've definitively proven that my point is the correct point. But, I hope you at least get the gist of what I am saying.
fragmede
4 hours ago
For something that's not rigorously defined, 99.999% and 100% are pretty frickin' close together in my book. Like, TherapistGPT isn't going to randomly say you should go kill yourself.
Unfortunately, I'm not sure what your point actually is. Is ChatGPT, in its current form, a replacement for human contact? Absolutely not. Do people have strong emotions about something that was generated by a GPU and a bunch of math instead of being organically handcrafted by a human being, and that falls into the uncanny valley? Totally. Is this box of matrices and dot products outputting something I personally find useful, despite its shortcomings? Yeah.
I agree that there's totally this brick-wall feeling when ChatGPT spins itself in circles because it ran out of context window or whatever.
At the end of the day, I think the yacht rock cover of "Closer" is fun, even though it's AI-generated. However that makes you feel about my opinions.
ben_w
2 hours ago
> Like, TherapistGPT isn't going to randomly say you should go kill yourself.
It won't literally do that; the labs are all careful about the obvious stuff.
But consider that Google Gemini's bad instructions almost gave someone botulism*; there's a high chance of something like that in almost every field. I couldn't tell you what that would mean in therapy, for the same reason I wouldn't have known Gemini's recipe would lead to culturing botulism.
These are certainly more capable than anything before them, but the Peter Principle still applies: we should treat them as no more than interns for now. That may be OK, maybe even an improvement on not having them, but it's easy to misjudge them.
eddd-ddde
12 hours ago
Honestly I don't see it as an "obvious" thing.
I won't be surprised if in a couple more years this kind of thing is the norm. I don't think there's anything inherently different about it compared to a person who listens to you.
ben_w
11 hours ago
It wasn't obvious for a long time, but the closest we have to a relevant experiment* shows that physical contact is also necessary for parenting, especially soft contact: https://en.wikipedia.org/wiki/Harry_Harlow
Humanoid robots are improving, so I won't say "never", but I will say "not yet". Not in isolation at least.
* And likely the closest we ever will have, because it was disturbing enough to be a big influence on the animal welfare movement.
tempodox
11 hours ago
I'm at a loss for words. If you really think there's no difference between a human and a machine, I don't know what to tell you.
moralestapia
13 hours ago
>I would implore you to reconsider this as a legitimate use case for your open device.
OP, I would implore you to not listen to any of this "advice" at all and just keep on building really nice things.
I can already think of a dozen valuable applications of it in a therapeutic context.
Ignore those who don't "do".
brailsafe
12 hours ago
> Ignore those who don't "do".
I'm actually pretty OK with ignoring those who don't "think" before they "do". Not that the OP is one of those people, but "doing" as a mark of virtue seems fairly likely to be destructive.
moralestapia
11 hours ago
One day of doing is worth a billion years of thinking.
The world is material, not imaginary.
brailsafe
6 hours ago
Ya, I guess, or you could just measure twice and cut once
RHSeeger
6 hours ago
I would argue just the opposite. Thinking without doing accomplishes very little. Doing without thinking might accomplish something, or it might be utterly destructive and take 1000x the amount of "doing" (and a lot of thinking) to undo.
brailsafe
2 hours ago
Agreed, but I'd add that deciding not to do something is an underappreciated form of doing. If the thinking process results in deciding your deployable resources can be better used, how would that not also be "doing"? The act of relentless material production seems so wasteful and tasteless.
Nullabillity
9 hours ago
There's nothing admirable about charging head-first in the wrong direction.
moralestapia
8 hours ago
Perhaps you are psychic, but I am not.
"Charging head-first", even in the wrong direction, is the only thing worth doing.
zq2240
11 hours ago
Thank you for all your advice.