The reproducibility crisis and other problems in science: John Ioannidis [video]

54 points, posted a month ago
by domofutu

44 Comments

dgfitz

a month ago

I spent two days trying to get repeatable data from a piece of hardware. I wasn't able to replicate a dataset once; it turned out we had a bad high-pressure hose in the brake system and the numbers were all broken.

I have no fucking clue how a soft science study could replicate itself.

stogot

a month ago

If even an author can’t replicate it, shouldn’t that make the results meaningless?

avs733

a month ago

Not necessarily? It depends on what the claims are. If prior art says 'this is impossible', then a study with one hard-to-replicate result is an interesting insight about the margins. If it is commonly accepted wisdom, we may need to address what that is. If it is your experiment, it doesn't mean your result is wrong, but it does mean it should be seriously questioned. If I claim it's more than that, that is on me... if I say it is what it is, then the results are the results.

I think it is a growing, and problematic, misconception that science is just about reporting facts. It is about adding knowledge... sometimes that is new facts, but more often it is theories, observations, and questions, increasingly at the margins of randomness and probability as our species grows and learns.

inglor_cz

25 days ago

Substack psychologist Adam Mastroianni argued that if you need advanced statistics to tease out some effect in the soft sciences, it is actually likely that the effect is bogus.

How many of those non-reproducible studies showed a robust effect, as opposed to a barely-there one?

aeim

25 days ago

so, measuring progress by lines of knowledge added?

speaking openly

that kinda science seems like a bullshit factory (a very lucrative one, of course)

and analogous to an llm hallucination — shallow satisfaction of constraints by enumerating a problem space irrespective of contextual appropriateness, a liability, and forgotten as soon as real value comes along

surely the best science isn't just reproducible by scientists: engineers can get their grubby mitts on it, to improve (ok, shape) the circumstances of our lives, and our survival

that other stuff is busywork, not "literature". they are very different things...

domofutu

a month ago

There are also all the (potentially relevant) factors no one is considering - https://www.thetransmitter.org/methods/mouse-housing-tempera...

aaron695

a month ago

A bunch of women going on about giving mice air-conditioning or not?

Probably. We've filled science with imposters. It seems to be predominantly women; they are the largest change in science.

The worst non-replicable sciences are also female majority.

Busywork on mice conditions is exactly the false science we are talking about. More excuses for why it's not reproducible, when mice are mice and are great stand-ins for humans.

"In mice" is the meme create to excuse the garbage that's produced. The studies never originally worked on mice, but now they have an excuse why they don't work on humans.

I wouldn't necessarily blame it 100% on women though. But I see your point.

mapt

a month ago

This guy doesn't get to say shit to us about literally anything after what he did. 2020's John Ioannidis stood on the shoulders of giants (2005's John Ioannidis) and was one of the first "Experts" to proudly proclaim that in his professional judgement, COVID-19 was no big deal and didn't warrant countermeasures.

https://sciencebasedmedicine.org/mistakes/

If you choose to become a policy entrepreneur and your success ends in a Holocaust-scale outcome, you get to shut up, retire, and thank your lucky stars we live in a society that strongly discourages blood debts. That's the deal. No book tour, do not pass go, do not collect $200, and stay the fuck off social media.

ttoinou

25 days ago

Seems like he was correct, though. This article doesn't debunk him; you can't use official statistics to debunk a claim that the statistics are not computed correctly.

decUser3

25 days ago

> in an effort— well intentioned—to control the coronavirus, we may inflict great damage on ourselves.

And you believe he was wrong, and that Covid was a "Holocaust-scale outcome"? He said the rates would be exaggerated, and now, with the benefit of hindsight (though some would claim it was common sense after seeing the sensationalism of the media), we can see he was correct. Or do you believe that's not true?

bjourne

a month ago

Did no one watch the talk? :) After 13 minutes it ends and tells you to go to an IAI website to continue watching. Fucking infuriating.

dang

a month ago

Yeah that's bad. And https://iai.tv/video/why-most-published-research-findings-ar... appears to do the same thing after 30 minutes. Does anyone have a link to the whole thing?

moomin

25 days ago

I think it’s fair to say that if someone is that determined for you to not hear what they have to say, you should oblige them.

dang

25 days ago

I'm sure Ioannidis isn't the person paywalling his talk.

d0mine

a month ago

It is worth pointing out that, compared to *everything* else on the internet, science is the epitome of truth, facts, reproducibility, etc.

It might not appear so at first glance, due to the Gell-Mann Amnesia effect.

7thaccount

a month ago

I don't think this is what the reproducibility issue is about.

The problem is that there is a broken system in place that rewards publishing in high volumes, reporting only positive results, and doing research that supports those in charge. That doesn't mean new ideas don't win out when clear evidence exists, but that is the exception.

All of this creates economic incentives to lie with the data, and then you get scientific papers that people are making decisions based on (like where to put research dollars for Alzheimer's) that are fraudulent. This is not good at all.

llamaimperative

a month ago

There are multiple reproducibility crises. The incentive structure is one of them; the other is the general challenge of reproducing results further "up the stack" in highly chaotic fields (the stack being physics -> chem -> bio -> psychology -> sociology, obviously non-exhaustive).

I actually find the reproducibility crisis you're mentioning to be both more problematic and to receive far less attention than the other. More problematic because it infects every form of science, even the harder/more fundamental sciences, and because it's way less clear how to fix it.

The reproducibility crisis near the top of the stack comes down to: "do more science, and don't believe results until they've been replicated, refined, and matured."

7thaccount

25 days ago

Ah gotcha. Good point.

I've always felt that the psychology/sociology studies of the week I hear about on the radio were likely based on some weak statistics in the best of cases and most likely difficult to reproduce.

maest

a month ago

Fittingly for this stack model, maths rarely (but not never!) has reproducibility issues.

https://xkcd.com/435/

llamaimperative

a month ago

Math is brought up every damn time I bring up this stack! :D

vacuity

a month ago

The practice of the scientific method is the path to empirical truth. This doesn't necessarily mean that a nominal scientific finding is true. It is likely true that, on a given topic, an accredited scientist in that field has fairly correct opinions, but I would not take this too far. Conflicts of interest, personal biases, incentives to obtain results, simple lack of reproduction, corrupt peer review, etc. are clearly issues, and it's all the more unfortunate that we can't even say just how deep these issues run.

Science is timeless and powerful. Scientists are human beings who nominally, preferably, fallibly practice science.

karaterobot

a month ago

Gell-Mann amnesia is when you notice that somebody writing about something you're an expert in has made critical mistakes and should not be trusted, but then you go on and trust people writing about topics you don't know as much about, failing to recognize that they're probably also making critical mistakes about those other topics. I'm not sure what the connection is here.

Regarding the comparison between science and folk explanations on the internet, I think it's reasonable to hold science to a higher standard than we hold the general population. If most published research findings are false—as this video claims—and most of what idiots on the internet say is also false, that's neither an equivalence nor a victory for science. On the contrary, it erodes the value of science, both literally and figuratively. Right now we need sources of authority.
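
For context, the arithmetic behind the "most findings are false" claim is a positive-predictive-value calculation. Here is a minimal Python sketch of the bias-free version of the formula from Ioannidis's 2005 paper; the specific power, alpha, and prior-odds values are assumptions for illustration, not figures from the talk:

    # Post-study probability that a statistically significant finding is true,
    # using the bias-free formula from the 2005 paper:
    #   PPV = (1 - beta) * R / ((1 - beta) * R + alpha)
    # where R is the pre-study odds that a tested relationship is real.
    def ppv(power: float, alpha: float, prior_odds: float) -> float:
        true_positives = power * prior_odds   # expected true relationships flagged significant
        false_positives = alpha               # expected nulls flagged significant
        return true_positives / (true_positives + false_positives)

    # Well-powered confirmatory study of a plausible hypothesis: findings are usually true.
    print(f"{ppv(power=0.80, alpha=0.05, prior_odds=1.0):.2f}")     # ~0.94
    # Underpowered exploratory search where 1 in 50 tested hypotheses is real:
    # most "significant" findings come out false.
    print(f"{ppv(power=0.20, alpha=0.05, prior_odds=1/50):.2f}")    # ~0.07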

d0mine

25 days ago

It is not black or white. Yes, there are issues. No, it doesn't mean a random comment on the internet should have as much authority as a scientific paper. There is a strong undercurrent of anti-intellectualism right now.

The connection with Gell-Mann amnesia is that people overestimate the truthiness of what they read online. Combined with reading about [real] issues in science, it might create the impression that scientific findings are at roughly the same level as everything else. My point is that, even with all its troubles, science is still a head above, despite the perception.

jMyles

a month ago

This paper was life-changing for me as an undergrad (and I didn't discover the rest of his body of work until I ran into it later here on HN in 2010 or so).

We are blessed as a species that John stuck to his principles - and his thirst for empiricism - during the COVID-19 panic, and supported / encouraged his colleagues to do likewise.

This video is only the first 12 minutes of the talk. The rest is here (though it is possibly semi-paywalled? It let me watch it, even though it said it was going to make me sign up for a trial):

https://iai.tv/video/why-most-published-research-findings-ar...

matthewdgreen

a month ago

My understanding was that Ioannidis hugely underestimated the IFR of COVID and he did it mostly by cherry picking a handful of small-sample-size studies that were friendly to his political views. It was very much a “your heroes will absolutely let you down” moment in my scientific life, and to the extent that non-scientists have forgotten the episode, that’s kind of what I expect.

https://www.dailymail.co.uk/news/article-8843927/amp/Just-0-...

nostrebored

a month ago

No, the IFR of Covid was hugely overstated, which is why the projected population level impacts were completely wrong, even in places with limited interventions. Attributing cause of death is not as easy as it might seem.

MrMcCall

a month ago

Thanks. This guy (John Ioannidis) is the real deal. You can tell both by the dense, detailed fact sets, presented one after the other in logical order, and by the honest, intelligent tone of voice, which vouches for his fidelity.

"The voice never lies." --Blind woman speaking to a friend

And his story about how he heard of Theranos is lowkey hilarious. And topical, because he's a true expert in the Dunning-Kruger sense.

sdenton4

a month ago

During the early pandemic, he underestimated the fatality rate of COVID (which, remember, was much higher before vaccines and Paxlovid were deployed) and forcefully advocated for policy based on his lower fatality-rate estimates. It was a stunning display of hubris: working from very limited information, he was pushing incautious policy responses that could have cost millions of lives.

https://en.m.wikipedia.org/wiki/John_Ioannidis

DAGdug

a month ago

As someone who felt the policy reaction to COVID was poor (no balanced assessment of the cost of false positives and negatives in decision-making, poor accounting of uncertainty), I concur that he didn’t apply his usual rigor or his critiques to his own work. He also had, IIRC, a conflict of interest and was funded by the airline industry for this research.

cempaka

a month ago

It's funny that the wildly overestimated IFRs like 3.4%, which were taken as gospel in the early days of the pandemic and drove the actual policy of sweeping shutdowns of schools, preventive medical care, and the economy at large -- a totally experimental and unprecedented measure with no empirical evidence whatsoever to demonstrate the benefits would outweigh the costs -- are not subjected to this same label of "hubris".

Ioannidis's estimates of the IFR in the 0.1% to 0.2% range were much closer to the mark.

cma

a month ago

Ioannidis didn't adjust for the infection-to-death lag (which has a big impact during the exponential-growth part of the curve); his California IFR study was very flawed and had undisclosed funding from the founder of JetBlue.

His earlier, influential paper before that was way off and said the IFR might be even lower, around that of the common cold.

An IFR of around 0.5%-1.5% without overwhelmed hospitals was the common scientific consensus very early on, and it was more right. Treatment methods like proning, and the demonstrated effectiveness of steroids on a certain schedule, helped bring that down a good bit a few months in, by around 30% if I remember.
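
To make the lag point concrete, here is a minimal Python sketch (all parameters are made up for illustration, not taken from any study) of how dividing cumulative deaths by cumulative infections to date understates the IFR while infections are still growing exponentially, and how matching deaths to the cohort infected one lag earlier removes the bias:

    # Illustrative only: assumed true IFR of 1%, a 21-day infection-to-death lag,
    # and a 5-day doubling time. None of these figures come from the thread.
    true_ifr = 0.01
    lag_days = 21
    doubling_time = 5.0
    days = 60

    # Daily new infections growing exponentially, and their running totals.
    infections = [100 * 2 ** (t / doubling_time) for t in range(days)]
    cum_infections = [sum(infections[:t + 1]) for t in range(days)]

    # Deaths on day t come from infections on day t - lag_days.
    deaths = [true_ifr * infections[t - lag_days] if t >= lag_days else 0.0
              for t in range(days)]
    cum_deaths = [sum(deaths[:t + 1]) for t in range(days)]

    t = days - 1
    naive_ifr = cum_deaths[t] / cum_infections[t]                    # denominator includes unresolved infections
    lag_adjusted_ifr = cum_deaths[t] / cum_infections[t - lag_days]  # deaths matched to their source cohort

    print(f"naive IFR: {naive_ifr:.4f}, lag-adjusted IFR: {lag_adjusted_ifr:.4f}")
    # The naive estimate lands far below the true 1% while the epidemic is still
    # growing; the lag-adjusted one recovers it.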

edejong

a month ago

Early research always had 1.0% in its confidence intervals, which is most likely the right IFR during the first phases.

The 0.1%-0.2% was just bad science, taking medians over countries with lagging statistics reports.

Where did you find 3.4%? Isn’t that an upper bound?

llamaimperative

a month ago

Who was claiming 3.4%?

Here's a study from March 2020: https://pmc.ncbi.nlm.nih.gov/articles/PMC7118348/

> Adjusting for delay from confirmation to death, we estimated case and infection fatality ratios (CFR, IFR) for coronavirus disease (COVID-19) on the Diamond Princess ship as 2.6% (95% confidence interval (CI): 0.89–6.7) and 1.3% (95% CI: 0.38–3.6), respectively. Comparing deaths on board with expected deaths based on naive CFR estimates from China, we estimated CFR and IFR in China to be 1.2% (95% CI: 0.3–2.7) and 0.6% (95% CI: 0.2–1.3), respectively.

Please provide specific, contemporaneous examples of the 3.4% estimate and evidence of it being "taken as gospel" and "driving the actual policy."

delichon

a month ago

javagram

a month ago

The fact-check page you provided makes clear that, as early as March 5, 2020 (when the fact check was published), experts believed and were stating publicly that the IFR was not 3.4%:

> A Feb. 28 editorial in the New England Journal of Medicine, co-authored by Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, took the position that the mortality rate may well fall dramatically. It said if one assumed that there were several times as many people who had the disease with minimal or no symptoms as the number of reported cases, the mortality rate may be considerably less than 1%. That would suggest “the overall clinical consequences of Covid-19 may ultimately be more akin to those of a severe seasonal influenza,” the editorial said.

There are many similar lines.

llamaimperative

a month ago

That’s not IFR.

The “I can be trusted with scientific comprehension” crowd fails comprehension once again!

DAGdug

a month ago

Closer to the mark where? If you aggregate over countries with disproportionately young demographics and questionable reporting, yes! In much of the developed world, during the phase of the pandemic when JI did his reporting, heck no.

jsnell

a month ago

> Iaonnidis's estimates of IFR in the 0.1% to 0.2% range were much closer to the mark.

The problem is, that was not his estimate for the IFR. It was his estimate for the CFR.

He was predicting 10k dead in the US, which was off by two orders of magnitude. I don't know that anyone was further from the mark than him.
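
For anyone skimming this subthread, the CFR/IFR distinction is a denominator question. A tiny Python sketch with hypothetical numbers (the 20% ascertainment rate is an assumption for illustration, not a figure anyone cited):

    # CFR divides deaths by confirmed cases; IFR divides the same deaths by all
    # infections, detected or not. Numbers below are hypothetical.
    deaths = 13
    confirmed_cases = 1000
    ascertainment = 0.2                     # assumed share of infections that get confirmed

    total_infections = confirmed_cases / ascertainment

    cfr = deaths / confirmed_cases          # 13 / 1000 = 1.3%
    ifr = deaths / total_infections         # 13 / 5000 = 0.26%

    print(f"CFR: {cfr:.2%}, IFR: {ifr:.2%}")
    # The same outbreak yields a CFR several times the IFR whenever many
    # infections go unconfirmed, which is why quoting one as the other misleads.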

llamaimperative

25 days ago

Jay Bhattacharya, our incoming head of the NIH, was pretty close to as hubristically wrong, though!

People need to get over their fertilization of contrarianism. Very often, the consensus view is correct.

llamaimperative

25 days ago

HAH, whoops. Fertilization was meant to be fetishization.

whoknowsidont

25 days ago

>It's funny that the wildly overestimated IFRs like 3.4%

Where. Source it.