myhf
10 hours ago
If an app makes a diagnosis or a recommendation based on health data, that's Software as a Medical Device (SaMD) and it opens up a world of liability.
https://www.fda.gov/medical-devices/digital-health-center-ex...
eastbound
7 hours ago
How do you suggest we deal with Gemini? It's extremely useful for understanding whether something is worrying or not. Whether we like it or not, it's a main participant in the discussion.
derbOac
3 hours ago
Apparently we should hire the Guardian to evaluate LLM output accuracy?
Why are these products being put out there for these kinds of things with no attempt to quantify accuracy?
In many areas AI has become this toy that we use because it looks real enough.
It sometimes works for some things in math and science because we test its output, but in general you don't go to Gemini and have it say "there's an 80% chance this is correct". At least then you could evaluate that claim.
There's a kind of task LLMs aren't well suited to because there's no intrinsic empirical verifiability, for lack of a better way of putting it.
jessetemp
6 hours ago
Ideally, hold Google liable until their AI doesn’t confabulate medical advice.
Realistically, sign a EULA waiving your rights because their AI confabulates medical advice.
arkh
3 hours ago
> How do you suggest we deal with Gemini?
Don't. I don't ask my mechanic for medical advice, so why would I ask a random output machine?
menaerus
3 hours ago
This "random output machine" is already in large use in medicine so why exactly not? Should I trust the young doctor fresh out of the Uni more by default or should I take advises from both of them with a grain of salt? I had failures and successes with both of them but lately I found Gemini to be extremely good at what it does.
Timon3
an hour ago
> This "random output machine" is already in large use in medicine so why exactly not?
Where does "large use" of LLMs in medicine exist? I'd like to stay far away from those places.
I hope you're not referring to machine learning in general, as there is a world of difference between LLMs and other "classical" ML use cases.
thisislife2
3 hours ago
There's a difference between a doctor (an expert in their field) using AI (specialising in medicine) and you (a lay person) using it to diagnose and treat yourself. In the US, it takes at least 10 years of studying (and interning) to become a doctor.
vharish
2 hours ago
Even so, it's rather common for doctors not to be able to diagnose correctly. It's a guessing game for them too. I don't know so much about the US, but it's a real problem in large parts of the world. As the comment stated, I would take anything a doctor says with a pinch of salt, particularly when the problem is not obvious.
menaerus
an hour ago
It takes 10 years of hard work to become an accomplished engineer too, yet that doesn't stop us from missing things. That argument doesn't hold. AI is already widespread in medical treatment.
schiffern
2 hours ago
Why stop at AI? By that same logic, we should ban non-doctors from being allowed to Google anything medical.
close04
2 hours ago
> This "random output machine" is already in large use in medicine
By doctors. It's like handling dangerous chemicals: if you know what you're doing you get some good results; otherwise you just melt your face off.
> Should I trust the young doctor fresh out of the Uni
You trust the process that got the doctor there: the knowledge they absorbed, the checks they passed. The doctor doesn't operate in a vacuum; there's a structure in place to validate critical decisions. Anyway, you won't blindly trust one young doctor; if it's important, you get a second opinion from another qualified doctor.
In the fields I know a lot about, LLMs fail spectacularly so, so often. Having that experience and knowing how badly they fail, I have no reason to trust them in any critical field where I cannot personally verify the output. A medical AI could enhance a trained doctor, or give false confidence to an inexperienced one, but on its own it's just dangerous.
overfeed
5 hours ago
> How do you suggest we deal with Gemini?
Robust fines based on a % of revenue whenever it breaks the law would be my preference. I'm not here to attempt solutions to Google's self-inflicted business-model challenges.
usefulposter
2 hours ago
schiffern
an hour ago
Yes.
> Argument By Adding -ism To The End Of A Word
Counterpoint: LLMs are inevitable. Can't put that genie back in the bottle, no matter how much the powers-that-be may wish. Such is the nature of (technological) genies.
The only way to 'stop' LLMs is to invent something better.
protocolture
2 hours ago
Thought-terminating cliché.
ndsipa_pomu
2 hours ago
If it's giving out medical advice without a license, it should be banned from giving medical advice and the parent company fined or forced to retire it.
atoav
2 hours ago
As a certified electrical engineer, I'm staggered by the number of times Google's LLM has suggested something that would, at a minimum, have started a fire.
I have the capacity to know when it's wrong, but I teach this at university level. What worries me are the people at the starting end of the Dunning-Kruger curve who need just that kind of wrong advice to start "fixing" things in spaces where this might become a danger to human life.
No information is superior to wrong information presented in a convincing way.