AI explains this better than the headline frames it:
>...the body of the article doesn’t describe a panel of physicians making predictions at all. The headline says “AI fares better than doctors,” but the text says the model outperformed “risk scores currently relied upon by doctors,” i.e., standard scoring tools clinicians use—not the judgments of the surgeons on the case or an outside panel.
Human doctors tend to underestimate their own complication rates, often because they are deluded about their own capabilities. I've heard the same doctor say "this has never happened to me in my 20 years of doing surgery" twice, each time after a complication occurred during a procedure.
The strange thing is that such articles always evoke the idea that AI is replacing humans even in serious work, which is frightening.
The strange thing is that this potentially life-saving tech will only collect dust, because AI in medicine is good for papers but not for real-world use. See all the other AI-in-medicine advancements: same pattern. Medicine has a problem of being unwilling to use modern tech to save lives.
I can understand reluctance to base conclusions about people's health on minimal evidence.
"Fares Better" sounds unscientific and very much like click bait
When the numbers suggest that the average treated person "fares better" than barely over 50% of the control group, or when the effects are inconsistent, readers may not interpret them as profound.
Providing real, easily understandable numbers rather than evocative descriptions lets readers form their own conclusions about the results.
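For context, "the average treated person beats X% of controls" is the common-language effect size. A minimal sketch, assuming normally distributed outcomes with equal variance (an assumption, not anything from the article), shows how small standardized effects map onto that "barely over 50%" figure:

```python
# Probability that a random treated person outscores a random control,
# under equal-variance normal outcomes: P = Phi(d / sqrt(2)),
# where d is Cohen's d and Phi is the standard normal CDF.
from math import erf, sqrt

def prob_superiority(cohens_d: float) -> float:
    """P(treated outcome > control outcome)."""
    z = cohens_d / sqrt(2)
    return 0.5 * (1 + erf(z / sqrt(2)))  # Phi(z)

for d in (0.1, 0.2, 0.5, 0.8):
    print(f"Cohen's d = {d:.1f} -> treated beats {prob_superiority(d):.1%} of controls")
```

A d of 0.2, conventionally a "small" effect, already means the treated person beats only about 56% of controls, which is exactly the "barely over 50%" situation.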
It says that doctors could accurately predict whether a patient would die after surgery 60% of the time, versus 85% for the AI.
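The article doesn't say which metric those figures are, so as an assumption treat them as discrimination (AUC) rather than raw accuracy. A minimal sketch with simulated data (the ~3% mortality base rate is invented) shows why raw accuracy would be a meaningless yardstick for a rare outcome like post-surgical death:

```python
# With a rare outcome, a model that always predicts "survives" is highly
# "accurate" but has zero discrimination; AUC exposes the difference.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
died = (rng.random(n) < 0.03).astype(int)         # assumed ~3% mortality

always_survives = np.zeros(n, dtype=int)          # trivial constant predictor
risk_score = np.where(died == 1, rng.beta(3, 5, n), rng.beta(2, 8, n))

print("accuracy of 'always survives':", accuracy_score(died, always_survives))  # ~0.97
print("AUC of 'always survives':     ", roc_auc_score(died, always_survives))   # 0.5
print("AUC of an informative score:  ", roc_auc_score(died, risk_score))        # clearly > 0.5
```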
If a surgery is extremely risky, the doctors probably won't do it at all... so there's a systematic selection bias in the data.
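A toy simulation of that selection effect (all numbers invented): if surgeons decline the riskiest cases, any model is trained and scored only on the below-threshold patients, and the discrimination measured there need not match what it would be on the full population:

```python
# Range restriction from case selection: evaluating only on operated
# (lower-risk) patients changes the measured AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 50_000
true_risk = rng.beta(2, 10, n)                      # latent mortality risk
died = (rng.random(n) < true_risk).astype(int)      # outcome if operated on

estimate = np.clip(true_risk + rng.normal(0, 0.08, n), 0, 1)
operated = estimate < 0.35                          # riskiest cases declined

print("AUC on everyone:       ", round(roc_auc_score(died, estimate), 3))
print("AUC on operated subset:", round(roc_auc_score(died[operated], estimate[operated]), 3))
```

The subset AUC typically comes out lower purely because the predictor's range is restricted, even though the model itself is unchanged.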