Radiology-specific foundation model

183 points, posted 6 days ago
by pyromaker

147 Comments

ilaksh

6 days ago

I think the only real reason the general public can't access this now is greed and a lack of understanding of technology. They will say it's dangerous to let the general public access it because people may attempt to self-diagnose or something.

But radiologists are very busy and this could help many people. Put a strong disclaimer on it. Open up subscriptions to everyone. Charge $40 per analysis or something. Integrate some kind of directory or referral service for human medical professionals.

Anyway, I hope some non-profit organizations will see the capabilities of this model and work together to create an open dataset. That might involve recruiting volunteers to sign up before they have injuries. Or maybe just recruiting different medical providers that get waivers and give discounts on the spot. Won't be easy. But will be worth it.

arathis

6 days ago

You think the only real reason the public doesn't get to use this tool is greed?

Like, that's the only REAL reason? Not the technological or ethical implications? The dangers in providing people with no real concept of how any of this works the means to evaluate themselves?

mhuffman

6 days ago

Not to speak of the "greed" on this particular item, but in Europe you can buy real-time glucose monitors, portable ECGs, and low-calorie meal replacements over the counter. In the US, all of these require a doctor's prescription. It wouldn't take a leap in logic to think that was greed or pressure from the AMA lobby (one of the most funded lobbies in the US, btw).

rscho

6 days ago

> in Europe you can buy real-time glucose monitors, portable ECGs, and low-calorie meal replacements over the counter.

True! And, aside from people with chronic conditions like diabetes, who are forced to know how their glucose levels work, hardly anyone uses those. So it certainly does change the cost, but I don't think it would be any more useful in the US.

medimikka

6 days ago

Unfortunately not. There are dozens of companies reselling "old" Libre 2 sensors for "fitness and health" applications. Blood glucose has joined HRV and other semi-bogus metrics as one of the numbers that drive a whole subculture of health data.

To correct this, though: you can buy all of those in the US as well. Holter and FirstBeat sell clinically validated, FDA-approved multi-lead ECGs; Dexcom sells an over-the-counter CGM, as does Abbott with the Libre 2; and a Chinese company has recently joined them, too.

Low-calorie meal replacements are all over the store, too.

If you're a member of this orthorexia/orthovivia crowd, you have the same access to tools as you do in the EU, often more so.

mzmoen

6 days ago

That’s not true at all? There are companies like Levels who sell CGMs to non-diabetics to try and optimize their health.

rscho

6 days ago

In my experience, it doesn't seem to be a common occurrence. At least, I personally know no one doing that. YMMV, I guess. Also, it seems to me like a very bad idea to do that.

delichon

6 days ago

As a type 2 diabetic I used a couple of different glucose monitors and got a lot of benefit from them. I gave one to a friend who I thought had diabetic symptoms. First I tried to get him to go to a doctor but he wouldn't. But he tried the CGM and found numbers well into the diabetic range. Then he immediately changed his diet and started treatment. Not sure but I may have committed a crime.

What was the potential harm that was greater than the reward?

haldujai

6 days ago

Potential harm is always the same - misdiagnosis and/or mismanagement.

It’s probably very low in the context of CGM and diabetes as the potentially harmful treatments require prescriptions.

Device prescription requirements are usually due to product labelling and the manufacturer's application. There are OTC fingerstick glucometers and CGMs approved.

rscho

6 days ago

Accidentally diagnosing someone is quite different from someone healthy trying to 'optimize' their health, whatever that means...

delichon

6 days ago

Strong disagree: diagnosing chronic dysfunction is essential to optimizing health. There's a big difference between an optimal blood glucose range and one that triggers insurance companies to authorize treatment. If you only pay attention to the latter, it could cost years of healthy life.

It's like, not being obese enough for your insurance company to pay for medical intervention doesn't mean your weight is optimal for enjoying a long retirement.

nradov

5 days ago

Bad idea how? It's expensive but not dangerous. Some people find the results interesting, and serious athletes have had some good results using them to optimize diet and training.

phkahler

6 days ago

>> Like, that's the only REAL reason? Not the technological or ethical implications? The dangers in providing people with no real concept of how any of this works the means to evaluate themselves?

On the surface those all sound like additional reasons not to make it available. But they are also great rationalizations for those who want to maintain a monopoly on analysis.

Personally, I found all the comparisons to other AI performance bothersome. None of those were specifically trained on diagnostics, AFAICT. Comparison against human experts would seem to be the appropriate way to test it. And not people just out of training taking their first test; I assume experts do better over time, though I might be wrong on that.

jarrelscy

6 days ago

Developer here - it's a good point that most of the models were not specifically trained on diagnostic imaging, with the exception of LLaVA-Med. We would love to compare against other models trained on diagnostic imaging if anyone can grant us access!

Comparison against human experts is the gold standard but information on human performance in the FRCR 2B Rapids examination is hard to come by - we've provided a reference (1) which shows comparable (at least numerically) performance of human radiologists.

To your point around people just out of training (keeping in mind that training for the FRCR takes 5 years, while practicing medicine in a real clinical setting) taking their first test - the reference shows that after passing the FRCR 2B Rapids the first time, their performance actually declines (at least in the first year), so I'm not sure if experts would do better over time.

1. https://www.bmj.com/content/bmj/379/bmj-2022-072826.full.pdf

rscho

6 days ago

Someone downvoted the author!? This site never ceases to amaze.

K0balt

6 days ago

Yeah, we should also limit access to medical books too. With a copy of the Merck Manual, what's to stop me from diagnosing my own diseases or even setting up shop at the mall as a medical "counselor"?

The infantilization of the public in the name of “safety” is offensive and ridiculous. In many countries, you can get the vast majority of medicines at the pharmacy without a prescription. Amazingly, people still pay doctors and don’t just take random medications without consulting medical professionals.

It’s only “necessary” to limit access to medical tools in countries that have perverted the incentive structure of healthcare to the point where, out of desperation, people will try nearly anything to deal with health issues that they desperately need care for but cannot afford.

In countries where healthcare costs are not punitive and are in alignment with the economy, people opt for sane solutions and quality advice because they want to get well and don’t want to harm themselves accidentally.

If developing nations with arguably inferior education systems can responsibly live with open access to medical treatment resources like diagnostic imaging and pharmaceuticals, maybe we should be asking ourselves what is it, exactly, that is perverting the incentives so badly that having ungated access to these lifesaving resources would be dangerous?

Calavar

6 days ago

> If developing nations with arguably inferior education systems can responsibly live with open access to medical treatment resources like diagnostic imaging and pharmaceuticals,

Well, the conditional in this if statement doesn't hold.

Yes, pharmaceuticals are open access in much of the developing world, but it has not happened responsibly. For example, carbapenem-resistant bacteria are 20 times as common in India as they are in the U.S. [1]

I really don't like this characterization of medical resource stewardship as "infantilization" because it implies some sort of elitism amongst doctors, when it's exactly the opposite. It's a system of checks and balances that limits the power afforded to any one person, no matter how smart they think they are. In a US hospital setting, doctors do not have 100% control over antibiotics. An antibiotic stewardship pharmacist or infectious disease specialist will deny and/or cancel antibiotics left and right, even if the prescribing doctor is chief of their department or the CMO.

[1] https://www.fic.nih.gov/News/GlobalHealthMatters/may-june-20...

rscho

6 days ago

Honestly, that's a short-sighted interpretation. Would you get treated by someone who's fresh out of school? If not, why? They're the ones with the most up-to-date and extensive knowledge. Today, medicine is still mostly know-how acquired through practical training, not books. A good doc is mostly experience, with a few bits of real science inside.

K0balt

4 days ago

I don’t get The relevance to my comment here,Maybe you replied to the wrong one? Or were you thinking I was Seriously implying that a book was a suitable substitute for a doctor? (I wasn’t)

pc86

6 days ago

Someone fresh out of medical school is a resident, so they're under direct supervision for 3-7 years. And unless you live in an area with an abundance of hospitals, there's a large chance your local hospital is a teaching hospital staffed largely by residents and the attendings that supervise them. You can request non-resident care only, but it's a request and is not guaranteed.

The TLDR is that most people, when interacting with anything other than their GP family doctor, are probably interacting with someone "fresh out of school."

rscho

6 days ago

So, why do residents need so much supervision, since they have the most recent, and also usually most extensive, knowledge? Granted, specialized knowledge is sometimes acquired during residency. Still, it's mostly taught by attendings instead of being read from books. Medicine is a know-how profession.

pc86

5 days ago

You don't learn how to be a radiologist, or an orthopedic surgeon, or an OB-GYN, or any other specialty, in med school. You can't learn surgery from a book. Maybe there are large parts of family or internal medicine you can learn from a book but those residencies are already several years shorter than most surgical specialties.

You wouldn't drop a fresh college CS grad by themselves in a group of developers and expect them to just figure it out. Just like medical school doesn't really teach you how to be a doctor, a CS degree doesn't really teach you how to code. They're both much more academic than the day-to-day of the job you're getting that degree for. They'd still get mentorship from colleagues, supervisors, and others. The only difference is medicine has the ACGME and all the government regulations to make it much more structured than what you need for most everything else.

TeMPOraL

6 days ago

> Since they have the most recent, and also usually most extensive knowledge.

They've crammed it, yes. They need some extra time to learn how to make use of that knowledge in day-to-day practice.

taneq

6 days ago

Could there be, perhaps, a middle ground between “backyard chop shops powered by YouTube tutorials and Reddit posts” and the U.S.’ current regulatory-and-commercial-capture exploitation?

user

6 days ago

[deleted]

K0balt

a day ago

I honestly don’t think we need more amateurs performing healthcare services for fun and profit, but I also think that barriers to self-care should be nearly nonexistent while encouraging an abundance of caution. Not sure how to best accommodate those somewhat disparate goals.

BaculumMeumEst

6 days ago

> The infantilization of the public in the name of “safety” is offensive and ridiculous.

It comes from dealing with the public.

> In many countries, you can get the vast majority of medicines at the pharmacy without a prescription. Amazingly, people still pay doctors and don’t just take random medications without consulting medical professionals.

I see people on this site of allegedly smart people recommending taking random medications ALL THE TIME. Not only without consulting medical professionals, but _in spite of medical professionals' advice_, because they think they _know better_.

Let's roll out the unbelievably dumb idea of selling self-diagnosis AI on radiology scans in the countries you’re referring to and ask them how it works out. If you want the freedom to shoot from the hip on your healthcare, you've got the freedom to move to Tijuana. We're not going to subject our medical professionals to deal with an onslaught of confidently wrong individuals who are armed with their $40 AI results from an overhyped startup. Those startups can make their case to the providers directly and have their tools vetted.

whamlastxmas

6 days ago

Doctors give out wrong and bad advice all the time. Doctors in general make mistakes all the time, to the point that there's some alarming statistic about how preventable medical errors are a scarily high percentage of deaths. People should absolutely question their doctors and get more opinions, and in a world where my last 10-minute doctor visit would have cost $650 without insurance, for an NP, I don't blame them for trying to self-diagnose.

BaculumMeumEst

6 days ago

You are proving my point talking about the percentage of deaths caused by medical errors. If you had 100,000 people receive medical care, 10 die, and 5 of them are due to medical errors, then sure, you could spin that as "50% of deaths were caused by medical errors". Never mind the context, never mind the fact that we are actually able to identify the errors in the first place!

So again, if you want to ignore the safeguards that we've built for good reason - take your business to Tijuana.

rscho

6 days ago

TBF, 'medical error' is a super wide definition. Most aren't diagnostic errors, and they encompass all healthcare professions, not only doctors. It makes a big difference in interpretation and potential solutions.

ilaksh

5 days ago

Well.. funny you say that.. I did move to Tijuana some years ago. One time while I was there, I was sick and a neighbor (Mexican) seemed to insist that I go to the doctor. She recommended a hole in the wall office above a pharmacy that looked like a little-league concession stand.

It was a serious thirty-something woman who collected something like 50 pesos (around $3), listened to me for about 30 seconds, and told me to make sure I slept and ate well (I think she specifically said chicken soup). I asked about antibiotics or medicine and she indicated it wasn't necessary.

So I rested quite seriously and ate as well as I could and got better about a week later.

During the time that I was in Playas de Tijuana I would normally go to nicer pharmacies though, and they didn't ask for a prescription for my asthma or other medicine, which cost something like an eighth of the US price over there. They did always wear nice lab coats and take their job very seriously if I asked for advice. Although I rarely did that.

I do remember one time asking about my back acne problems at a place in the mall and the lady immediately gave me an antibiotic for maybe $15 which didn't cure it but made it about 75% better for a few months.

Another time at the grocery store I asked about acne medicine and the lady was about to sell me something like tretinoin cream for probably a quarter of the US price. She didn't have anything like oral Accutane, of course. It was just a Calimax Plus.

There are of course quite serious and more expensive actual doctors in Tijuana but I never ended up visiting any of them. I was on a budget and luckily did not have any really critical medical needs. But if I had, I am sure it would have cost dramatically less than across the border.

EDIT: not to say the concession-stand office lady wasn't an actual doctor. I don't know, she may have had training, and certainly had a lot of experience.

K0balt

5 days ago

I live in the Dominican Republic. People here go to the doctor for things I never would have in the USA. If anything, people here self-treat much less than in the USA, even though you can walk into any pharmacy or imaging center and ask for whatever you want.

They go to the doctor because the healthcare system here works, for the most part, and they value and respect the expert counsel in matters of their health.

BaculumMeumEst

4 days ago

That's interesting, thanks for the context. I think it takes a unique kind of arrogance to self-diagnose medical problems w/ no knowledge or understanding of what you are talking about, and while I love this country, I think that arrogance is in high supply here. Many people here aren't aware that it creates a huge strain on physicians, or don't care because they think the world revolves around them.

pc86

6 days ago

What is your specialty?

I'm curious what you think the problem is, concretely, with a tool like this in the hands of the public, which you clearly have such disdain for. Let's assume I buy this thing (the horror). I have to actually get access to my scans, which, despite being legally required to provide them, most providers will be loath to actually do. So I get my scans, I get this AI tool, I ask it some questions. It's definitely going to get some answers right, and it's very likely going to get some answers wrong. I'd be shocked if it's much less accurate than a resident, and if they're commercializing it there's a decent chance it's more accurate than the average experienced attending.

What is your doomsday scenario now that I have some correct data and some incorrect data? What am I going to do with that information that is so "unbelievably dumb" that I need the AMA to play daddy and prevent me from hurting myself? I can't get medication based on my newfound dangerous knowledge. I can't schedule a surgery or an IR procedure. I can't go into an ER and say "give me a cast, here's a report showing I need one."

BaculumMeumEst

5 days ago

It's not about you.

pc86

5 days ago

I don't know what point you think that comment makes but it certainly doesn't answer any of the very legitimate questions I posed, including the first one since I'm willing to bet you have a pretty big conflict of interest here.

rscho

6 days ago

Even if this worked as well as a human radiologist, diagnosis is not only made of radiology. That's why radiology is a support specialty. Other specialists incorporate radiology exams into their own assessment to decide on a treatment plan. So in the end, I don't think it'll change as much as you'd think, even if freely accessible.

crabbone

6 days ago

Absolutely this. Also, radiologists are usually given notes on patients that accompany whatever image they are reading, and in cases like, e.g., ultrasound, often perform the exam themselves. So they are able to assess presentation, hear the patient's complaints, learn the history of the patient, etc.

Not to mention that in particularly sick patients problems tend to compound one another and exams are often requested to deal with a particular side of the problem, ignoring, perhaps, the major (but already known and diagnosed) problem etc.

Oftentimes factors specific to a hospital play a crucial role: e.g., in hospitals for rich (but older) patients it may be common to take chest X-rays in a seated position (so as not to discomfort the valuable patients...), whereas in poorer hospitals a seated position would indicate some kind of problem (i.e. the patient couldn't stand for whatever reason).

That's not to say that automatic image reading is worthless: radiologists are perhaps some of the most overbooked specialists in any hospital, and are getting even more overbooked because other specialists tend to be afraid to diagnose w/o imaging / are over-reliant on imaging. From talking to someone who worked as a clinical radiologist: most images are never read. So, if an automated system could identify images requiring human attention, that'd already be a huge leap.
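
To make the triage idea concrete, here is a toy sketch in Python (the `abnormality_score` model call is hypothetical, not any particular product's API):

  # Route studies by model confidence so humans see the risky and
  # uncertain ones first; likely-normal ones wait in a batched QA queue.
  def triage(studies, abnormality_score):
      urgent, review, routine = [], [], []
      for study in studies:
          p = abnormality_score(study)  # estimated P(image is abnormal)
          if p >= 0.8:
              urgent.append(study)      # read by a human immediately
          elif p >= 0.3:
              review.append(study)      # uncertain: still needs human eyes
          else:
              routine.append(study)     # likely normal: batch for later audit
      return urgent, review, routine

The thresholds would have to come from validation data; the point is only that prioritization, not diagnosis, may be the first win.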

robertlagrant

6 days ago

You could imagine imprinting into the scan additional info such as "seated preferred" or "seated for pain". There is more encoding that could be done.
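
As a minimal sketch of what I mean, assuming pydicom and a file on disk:

  # ImageComments (0020,4000) is a standard free-text DICOM tag; whether
  # any downstream software actually reads it is a separate question.
  from pydicom import dcmread

  ds = dcmread("chest_xray.dcm")
  ds.ImageComments = "seated for pain, patient could not stand"
  ds.save_as("chest_xray_annotated.dcm")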

crabbone

6 days ago

Current "solutions" generally ignore or don't know how to incorporate any textual data that accompanies the image. You are trying to incorporate non-existent data that nobody ever put into any kind of medical system...

Yes, in principle, if people taking the images had infinite time and could foresee what kind of accompanying data would be useful at analysis time, and then had a convenient and universal format to store that data, and models could select the relevant subsets of features for the problem being investigated... I think you can see where this is going: this isn't going to happen in our lifetime, most likely never.

jarrelscy

6 days ago

Developer of the model here. We built this model in the form of an LLM precisely to address this problem - to be able to utilize the textual data that accompanies the image, such as the order history or clinical background, e.g. patient demographics. Images and text are both embedded into the conversation, meaning the LLM can in theory respond using both.

Of course, there are lots of remaining challenges around integration and actually getting access to these data sources e.g. the EMR systems, when trying to use this in practice.
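
Schematically, the interleaving looks something like this (illustrative only, not our actual interface; `vision_encoder` and `text_tokenizer` are stand-ins):

  conversation = [
      {"type": "text",  "content": "72M, fall from standing. Query rib fracture."},
      {"type": "image", "content": "frontal_cxr.dcm"},
      {"type": "text",  "content": "Prior report: healed left 6th rib fracture."},
  ]

  def embed(parts, vision_encoder, text_tokenizer):
      tokens = []
      for part in parts:
          if part["type"] == "image":
              tokens.extend(vision_encoder(part["content"]))  # image -> patch embeddings
          else:
              tokens.extend(text_tokenizer(part["content"]))  # text -> token embeddings
      return tokens  # one sequence the LLM attends over and decodes from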

crabbone

5 days ago

My experience with working with hospital textual data is that, for the most part, it's either useless or doesn't exist. The radiologist reading the image is expected to phone the specialist who requested the images to be read in order to figure out what to do with the image.

Hospital systems are atrocious at providing useful information anyway. They are often full of unnecessary / unimportant fields that the requesting side either doesn't know how to fill or will fill with general nonsense just to get the request through the system.

It gets worse when it's DICOMs: the format itself is a mess. You never know where to look for the useful information. The information is often created accidentally, by some automated process that is completely broken but doesn't create any visible artifacts for whoever handles the DICOM. E.g. the time information in the machine taking the image might be completely wrong, but it doesn't appear anywhere on the image; but then, say, the research needs to tell the patient's age... and is off by a few decades.
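
To give a concrete example of the kind of sanity check this forces on you (a sketch assuming pydicom; the tags are standard but routinely wrong or missing):

  from datetime import datetime
  from pydicom import dcmread

  ds = dcmread("study.dcm")
  # DICOM dates are "YYYYMMDD" strings; PatientAge, when present, looks
  # like "064Y" and frequently disagrees with the dates.
  birth = datetime.strptime(ds.PatientBirthDate, "%Y%m%d")
  study = datetime.strptime(ds.StudyDate, "%Y%m%d")
  derived = (study - birth).days // 365

  stated = getattr(ds, "PatientAge", "")
  if stated.endswith("Y") and abs(int(stated[:-1]) - derived) > 2:
      print(f"suspect metadata: stated {stated}, derived {derived}y")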

Any attempt I've seen so far to run a study in a hospital would result in about 50% of collected information being discarded as completely worthless due to how it was acquired.

Radiologists have general knowledge about the system in which they operate. They can identify cases where the information is bogus while looking plausible. But this is so tied to the context of their work that there's no hope of a practical automated solution for this any time soon. (And I'm talking about hospitals in well-to-do EU countries.)

NB. It might sound like I'm trying to undermine your work, but what I'm actually trying to say is that the environment in which you want to automate things isn't ready to be automated. It's very similar to the self-driving cars: if we built road infrastructure differently, the task of automating driving could've been a lot easier, but because it's so random and so dependent on local context, it's just too hard to make useful automation.

jarrelscy

5 days ago

Thanks for the comments. I’m well aware as I’m also a practicing radiologist! Some hospitals in Australia where I work do a good job of enforcing that radiology orders are sent with the appropriate metadata but I agree that is not the case around the world. Integration, as always, remains the hardest step.

PS genuinely appreciate the engagement and don’t see it as undermining.

robertlagrant

5 days ago

I think this is too pessimistic. You can slowly add useful information that makes things more useful, if there's value in incorporating the information. I'm very familiar with EHRs and I get the problem, but it's not insoluble. And the full problem doesn't need to be solved to make progress.

xarope

6 days ago

putting on my cynical hat, I feel this will just be another way for unscrupulous healthcare organizations to charge yet another service line item to patients/insurance...

  - X-Ray: $20
  - Radiologist Consultation: $200
  - Harrison.AI interpretation: $2000

gosub100

6 days ago

Yep, while justifying a reduction in force to radiology practices and keeping the extra salaries for the CEO and investors. Then when it inevitably kills someone, throw the AI under the bus, with a pre-planned escape hatch so the AI company never has to pay any settlements. Have them sell their "assets" to the next third party.

vrc

6 days ago

Yeah, and the bill will come back adjusted to

  - X-Ray: $15
  - Radiologist Consultation: $125
  - Harrison.AI interpretation: $20

The cat and mouse between payer and system will never die given how it's set up. There's a disincentive for the system to bill less than maximally, and for the payer not to deny and adjust as much as possible. Somewhere in the middle, patients get squished with the burden of copays and uncovered expenses that the hospital is now legally obligated to try to collect on, or forfeit that portion for all future claims (and still have a copay on that new adjustment).

user

6 days ago

[deleted]

littlestymaar

6 days ago

A model that's accurate only 50% of the time is far from helpful in terms of public health: it's high enough that people could trust it and low enough to cause harm by misdiagnosing things.

CamperBob2

6 days ago

The models are already more accurate than highly-trained human diagnosticians in many areas.

littlestymaar

6 days ago

If you want it to be used by the public, it doesn't matter if it's more accurate on some things if it's very bad at other things and the user has no idea which situation they're in.

As a senior developer I routinely use LLMs to write boilerplate code, but that doesn't mean that the layman can get something working by using an LLM. And it's exactly the same for other professions.

rscho

6 days ago

On paper. Not in the trenches.

robertlagrant

6 days ago

I don't understand the greed argument. Is the reason you draw a salary "greed"? Would gating it behind $40 not be "greed" to someone?

It's more likely that regardless of disclaimers people will still use it, and at some point someone will decide that that outcome is still the provider's fault, because you can't expect people to not use a service when they're impoverished and scared, can you?

rscho

6 days ago

> a lack of understanding of technology

Unfortunately, it's the other way around. The tech sector understands very little about clinical medicine, and therefore spends its time fighting windmills and shouting in the dark at docs.

ImHereToVote

6 days ago

Doctors should be like thesis advisors for their patients, provided the patients pass a minimum competency test. If you can't pass, you don't get a thesis advisor.

owenpalmer

6 days ago

I had an MRI on my ankle several years ago. At first glance, the doctor told me there was nothing wrong, even though I had very painful symptoms. While the visit was unproductive, I requested the MRI images on a CD, just because I was curious (I wanted to reconstruct the layers into a 3D model). After receiving the data in the mail weeks later, I was surprised to find a formal diagnosis on the CD. Apparently a better doctor had gotten around to analyzing it (they never followed up). If I hadn't requested my records, I never would have gotten a diagnosis. I had a swollen retrocalcaneal bursa. I googled the treatments, and eventually got better.

I'm curious whether this AI model would have been able to detect my issue more competently than the shitty doctor.

rasmus1610

6 days ago

To be honest, I've heard of several radiology practices that hand the patient a normal report directly after the exam and look at the actual images only after the patient has left.

I guess the reasoning is that they want to provide „good service" by giving the patient something to work with directly after the exam, and the workload is so high that they couldn't look at the images that fast. And they accept the risk that some people will get angry because their exam wasn't normal in the end.

But at the scale a typical radiology practice operates at today, the few patients who don't have a normal exam don't matter (the number of normal exams in an outpatient setting is quite high).

I find it highly unethical, but some radiologists are a little bit more ethically relaxed I guess.

What I want to say is that it might be more of a structural/organisational problem than incompetence by the radiologist in your case.

(Disclaimer: I’m a radiologist myself)

HPsquared

6 days ago

This is one of those comments where I started thinking "oh come on, no way, this guy clearly has no idea what he's talking about", then read the last part, and realization dawned that the world is actually a very messy place.

lostlogin

6 days ago

How did this happen?

Surely your results went to a requesting physician who should have been following up with you? Radiology doctors don’t usually organise follow up care.

Or was the inaccurate result from the requesting physician?

owenpalmer

6 days ago

I don't know, just incompetence and disorganization on their part. Directly after my MRI, they told me the images didn't indicate any meaningful information.

rscho

6 days ago

You got lost in the mess of files and admin. The process is usually that you get the exam, they give you a first impression orally. Then they really get to work and look properly, and produce a written report, which the requesting doc will use for treatment decisions. At that point, they're supposed to get back to you, but apparently someone dropped you along the way.

jeffxtreme

6 days ago

Yet they still managed to give them the CD with the diagnosis... Such a strange process

rscho

6 days ago

Very likely because radiology did their work correctly (with the misfortune of a wrong prelim assessment), but the requesting doc either forgot or chose not to act on the results. Or the results were not transmitted correctly, so the requesting doc was never aware... many things can go wrong in collaborative work. Anyways, a communication issue for sure.

quantumwoke

6 days ago

The radiographer or the radiologist? Did you see your requesting doctor afterwards?

daedalus_f

6 days ago

The FRCR 2b examination consists of three parts: a rapid reporting component (the candidate assesses around 35 x-rays in 30 minutes and is simply expected to mark each film as normal or abnormal; this is a perceptual test and is largely limited to simple fracture vs normal), alongside a viva and a long cases component where the candidate reviews more complex examinations and is expected to provide a report, differential diagnosis and management plan.

A quick look at the paper in the BMJ shows that the model did not sit the FRCR 2b examination as claimed, but was given a cut-down mock-up of the rapid reporting part of the examination invented by one of the authors.

https://www.bmj.com/content/bmj/379/bmj-2022-072826.full.pdf

nopinsight

6 days ago

The paper you linked to was published in 2022. The results there were for a different system for sure.

Were the same tests also used here?

jarrelscy

6 days ago

One of the developers here. The paper links to an earlier model from a different group that could only interpret X-rays of specific body parts. Our model does not have that limitation.

However, the actual FRCR 2B Rapids exam question bank is not publicly available, and the FRCR is unlikely to agree to release it, as this would compromise the integrity of their examination in the future - so the tests used are mock examinations, none of which were provided to the model during training.

daedalus_f

6 days ago

Interesting, is your model still based on radiographs alone, or can it look at cross-sectional imaging as well?

jarrelscy

6 days ago

This current model is radiographs alone. The FRCR 2B Rapids exam is based on only radiographs.

nopinsight

6 days ago

This is impressive. The next step is to see how well it generalizes outside of such tests.

"The Fellowship of the Royal College of Radiologists (FRCR) 2B Rapids exam is considered one of the leading and toughest certifications for radiologists. Only 40-59% of human radiologists pass on their first attempt. Radiologists who re-attempt the exam within a year of passing score an average of 50.88 out of 60 (84.8%).

Harrison.rad.1 scored 51.4 out of 60 (85.67%). Other competing models, including OpenAI’s GPT-4o, Microsoft’s LLaVA-Med, Anthropic’s Claude 3.5 Sonnet and Google’s Gemini 1.5 Pro, mostly scored below 30*, which is statistically no better than random guessing."
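
For intuition on the "random guessing" line: the Rapids component is roughly a binary normal/abnormal call per case, so chance performance on 60 cases centers on 30. A quick exact-binomial check (stdlib Python; this ignores the exam's real marking scheme):

  from math import comb

  def p_at_least(k, n=60, p=0.5):
      # exact upper tail of Binomial(n, p)
      return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

  print(p_at_least(30))  # ~0.55: scoring 30/60 is what coin-flipping gives
  print(p_at_least(51))  # ~1.5e-8: 51/60 is far beyond chance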

rafram

6 days ago

Impressive, but was it trained on questions from the exam? Were any of those other models?

aengustran

6 days ago

harrison.rad.1 was not trained on any of the exam questions. It can't be guaranteed, however, that other models were not trained on them.

trashtester

6 days ago

AI models for regular X-rays seem to be achieving high-quality, human-level performance, which is not unexpected.

But if someone is able to connect a network to the raw data outputs from CT or MR machines, one may start seeing these AIs radically outperform humans at a fraction of the cost.

For CT machines, this could also be used to concentrate radiation doses into parts of the body where the uncertainty of the current state is greatest, even in real time.

For instance, if using a CT machine to examine a fracture in a leg bone, one could start out with a very low-dosage scan, simply to find the exact location of the bone. Then a slightly higher, concentrated scan of the bone in the general area, and then an even higher dosage where the fracture is detected, to get a high-resolution picture of the damage, splinters, etc.

This could reduce the total dosage the patient is exposed to, or be used to get a higher resolution image of the damaged area than one would otherwise want to collect, or possibly to perform more scans during treatment than is currently considered worth the radiation exposure.
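
In control-loop terms, the idea is something like this sketch (purely illustrative; `scan` and `localize` are hypothetical, and real CT control is nothing this simple):

  def adaptive_scan(scan, localize, doses=(0.1, 0.4, 1.0)):
      roi, image = None, None         # None = whole field of view
      for dose in doses:              # escalate dose only where it pays off
          image = scan(region=roi, dose=dose)
          roi = localize(image)       # narrow to the most uncertain region
          if roi is None:             # nothing suspicious found:
              break                   # stop early, keep total dose low
      return image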

Such machines could also be made multimodal, meaning the same machine could carry CT, MR, and ultrasound sensors (Doppler + regular). Possibly even secondary sensors, such as thermal sensors, pressure sensors or even invasive types of sensors.

By fusing all such inputs (+ the medical records, blood sample data etc) for the patient, such a machine may be able to build a more complete picture of a patient's conditions than even the best hospitals can provide today, and a at a fraction of the cost.

Especially for diffuse issues, like back pain, where information about bone damage, blood flow (from the Doppler ultrasound), soft tissue tension/condition, etc. could be collected simultaneously and matched with the reported symptoms in real time to find locations where nerve damage or irritation could occur.

To verify findings (or to exclude such, if more than one possible explanation exists), such an AI could then suggest experiments that would confirm or exclude possibilities, including stimulating certain areas electrically, apply physical pressure or even by inserting some tiny probe to inspect the location directly.

Unfortunately (or fortunately, for the medical companies), while this could lower the cost per treatment, the market for such diagnostics could grow even faster, meaning medical costs (insurance/taxes) might still go up with this.

smitec

6 days ago

A very exciting release, and I hope it stacks up in the field. I ran into their team a few times in a previous role and they were always extremely robust in their clinical validation, which is often lacking in the space.

I still see somewhat of a product gap in this whole area when selling into clinics but that can likely be solved with time.

davedx

6 days ago

“AI has peaked”

“AI is a bubble”

We’re still scratching the surface of what’s possible. I’m hugely optimistic about the future, in a way I never was in other hype/tech cycles.

Almondsetat

6 days ago

"AI" here refers to general intelligence. A highly specific ML model for radiology is not AI, but a new avenue for improvements in the field of computer vision.

the8472

6 days ago

So, hypothetically, a general-intelligence-capable architecture isn't allowed to specialize in a particular task without losing its GI status? I.e. trained radiologists wouldn't be a general intelligence? E.g. their ability to produce text is really just a part of their radiologist-function to output data, right?

Almondsetat

6 days ago

It's impossible for humans to know a lot about everything, while LLMs can. So an LLM that sacrifices all that knowledge for a specific application is no longer an AI, since it would show its shortcomings more obviously.

the8472

6 days ago

They're still very bounded systems (not some galaxy brain) and training them is expensive. Learning tradeoffs have to be made. The tradeoffs are just different than in humans. Note that they're still able to interact via natural language!

whamlastxmas

6 days ago

The world’s shittiest calculator powered by a coin battery is an AI. I think you’re being overly narrow or confusing it with AGI

user

6 days ago

[deleted]

davedx

6 days ago

What, no it doesn’t, that’s “AGI” - it has a G in it. This is ML/AI

GaggiX

6 days ago

When did "AI" become "general intelligence"?

ygjb

6 days ago

It's a bike shed. It's easier to argue the definition or terminology than the technology, so it's the thing people go for.

gosub100

6 days ago

They're still going to charge the same amount (or more). At best this will divert money from intelligent, hard-working physicians to SV tech bros who dropped out of undergrad (while putting patient lives at higher risk).

user

6 days ago

[deleted]

bobbiechen

6 days ago

"We'd better hope we can actually replace radiologists with AI, because medical students are no longer choosing to specialize in it."

- one of the speakers at a recent health+AI event

I'm wondering what others in healthcare think of this. I've been skeptical about the death of software engineering as a profession (just as spreadsheets increased the number of accountants), but neither of those jobs requires going to medical school for several years.

doctoring

6 days ago

I don't know about other countries, but for the United States, "medical students are no longer choosing it" is very, very untrue, and it is trivial to look up, as this information is public from the NRMP (the organization that runs the residency match).

Radiology remains one of the most competitive and in-demand specialties. In this year's match, only 4 out of ~1200 available radiology residency positions went unfilled. Last year was 0. Only a handful of other specialties have similar rates.

As comparison, 251 out of ~900 pediatric residency slots went unfilled this year. And 636 out of ~5000 family medicine residency slots went unfilled. (These are much higher than previous years.)

However, I do somewhat agree with the speaker's sentiment, if for a different reason. Radiologist supply in the US is roughly stable (thanks to the US's strange stranglehold on residency slots), but demand is increasing: the number of scans ordered per patient continues to rise, as does the complexity of those scans. I've heard of hospital systems with backlogs that result in patients waiting months for, say, their cancer staging scan. One can hope we find some way to make things more efficient. Maybe AI can help.

bobbiechen

5 days ago

Thanks for the info (and same for the sibling comments)! Seems that the hype does not match the reality again.

yurimo

6 days ago

Interesting take. Had a friend recently start med school (in the US) and he said radiology was one of the top directions people were considering, because as he put it "the pay is decent and they have a life". Anecdotal, but I wonder what the reason for not specializing in it would be then. If anything, AI can help reduce the workload further and identify patterns that can be missed.

husarcik

6 days ago

I'm a third year radiology resident. That speaker is misinformed as diagnostic radiology has become one of the most competitive subspecialties to get into. All spots fill every year. We need more radiology spots to keep up with the demand.

nradov

6 days ago

I'm glad to see that this model uses multiple patient chart data elements beyond just images. Some earlier more naive models attempted to treat it as a pure image classification problem which isn't sufficient outside the simplest cases. Human radiologists rely heavily on other factors including patient age, sex, previous diagnoses, patient reported symptoms, etc.

lostlogin

6 days ago

> patient reported symptoms

You make it sound like the reporting radiologist is given a referral with helpful, legible information on it. That this ever happens is doubtful.

nradov

6 days ago

Referrals are more problematic but if the radiologist works in the same organization as the ordering physician then they should have access to the full patient chart in the EHR.

aqme28

6 days ago

This is far from the first company to try to tackle AI radiology, or even AI x-ray radiology. It's not even the first company to have a model that works on par or better than radiologists. I'm curious how they solve the commercial angle here, which seems to be the big point of failure.

crabbone

6 days ago

The real problem is liability. Radiologists, if they make a mistake, can be sued. Who are you going to sue when the program misdiagnoses you?

NB. In all claims I've seen so far about outperforming radiologists, the common denominator was that the people creating these models have mostly never even seen a real radiologist and had no idea how to read the images. Subsequently, the models "worked" due to some kind of luck, where they accidentally (or deliberately) were fed data that made them look good.

augustinemp

6 days ago

I spoke to a radiologist in a customer interview yesterday. They mentioned that they would really like a tool that could zoom in on a specific part of an image and explain what is happening. For extra points, they would like it to be able to reference literature where similar images were shown.

hgh

6 days ago

Connecting your comment to another about the commercial model, it seems the potential win here is selling useful AI-leveraging tools to radiologists rather than selling to end customers with the idea of replacing some radiology consultations.

This seems generally aligned with AI realities today: it won't necessarily replace whole job functions but it can increase productivity when applied thoughtfully.

Workaccount2

6 days ago

Aren't radiologists that "tool" from the perspective of primary doctors?

darby_nine

6 days ago

Sort of like primary doctors are just a "tool" to get referrals for treatment

husarcik

6 days ago

As a radiology resident, it would be nice to have a tool to better organize my dictation automatically. I don't want to ever have to touch a PowerScribe template again.

I'd be 2x as productive if I could just speak and it auto filled my template in the correct spots.
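
Even dumb keyword slot-filling would help. A toy sketch of the idea (nothing to do with any real PowerScribe API):

  import re

  TEMPLATE = {"LUNGS": "Clear.", "HEART": "Normal size.", "BONES": "Intact."}
  HINTS = {"LUNGS": ["lung", "airspace", "effusion"],
           "HEART": ["heart", "cardiac", "mediastin"],
           "BONES": ["rib", "fracture", "spine"]}

  def fill(dictation):
      report = dict(TEMPLATE)  # normal defaults, overwritten where dictated
      for sentence in re.split(r"(?<=\.)\s+", dictation):
          for section, hints in HINTS.items():
              if any(h in sentence.lower() for h in hints):
                  report[section] = sentence
      return report

  print(fill("Mild left basilar airspace opacity. No displaced rib fracture."))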

isaacfrond

6 days ago

From the article: Other competing models, including OpenAI’s GPT-4o, Microsoft’s LLaVA-Med, Anthropic’s Claude 3.5 Sonnet and Google’s Gemini 1.5 Pro, mostly scored below 30*, which is statistically no better than random guessing.

How is ChatGPT the competition? It's mostly a text model?

exe34

6 days ago

GPT-4o is multimodal.

seanvelasco

6 days ago

following this, gonna integrate this with a DICOM viewer i'm developing from the ground up

lostlogin

6 days ago

Fixing the RIS would make radiology happier than fixing the viewer.

And while you’re at it, the current ‘integrations’ between RIS and PACS are so jarring it sets my teeth on edge.

rasmus1610

6 days ago

Yes please. We hope to move away from our RIS and integrate our reporting workflow into our PACS this year

rahkiin

6 days ago

Can you help me with those acronyms?

tecleandor

6 days ago

Radiology Information System. The software that manages the radiology workflow in a clinic.

It schedules radiology studies, exchanges information with the modalities (the radiology devices) so the studies have proper metadata, exchanges information with the PACS (the radiology image storage), and might be used by radiologists and/or transcriptionists to add the reports for the studies...

It might overlap a bit with the HIS (hospital information system) that's the more general hospital management software.

lostlogin

6 days ago

The RIS became a thing before the PACS, as digital imaging arrived later in the piece. They should always have been integrated, as many things are clumsy when spread across two systems. Some 'reports' are an image (e.g. vessel mapping and many cardiac reports). Some imaging doesn't require a written report (theatre screening for implant placement). Some imaging requires multiple reports (a cardiac CT scan which covers lungs often gets a radiology and a cardiology report). Some imaging is done to aid the acquisition of a different type of imaging. All these scenarios are handled in various clumsy ways by the various systems, and workaround workflows are made up by staff on a near-daily basis.

tecleandor

6 days ago

Oh yes! I've been out of the sector for ~5 years, but I worked in it for 15 years, and it's amusing (well, except when you have to work on it) how different everything is in every place.

The manual data introduction, the shortcuts and workarounds, the differences in roles and even meanings! The words RIS, PACS, or HIS don't mean the same to every person and have different functions in different places. Just ask somebody to compare a PACS to a VNA and run away! :D

In a way I miss the field, as it felt more productive than whatever stupid consulting firm that can reach me through LinkedIn.

ZahiF

6 days ago

Super cool, love to see it.

I recently joined [Sonio](https://sonio.ai/platform/), where we work on AI-powered prenatal ultrasound reporting and image management. Arguably, prenatal ultrasounds are some of the more challenging to get right, but we've already deployed our solution in clinics across the US and Europe.

Exciting times indeed!

haldujai

6 days ago

> Arguably, prenatal ultrasounds are some of the more challenging to get right

Prenatal ultrasounds are one of the most rote and straightforward exams to get right.

ZahiF

6 days ago

By get right I meant to analyze, not just take the ultrasound.

haldujai

5 days ago

Yes that’s what I meant too.

Acquiring the images is the hard part in obstetrical ultrasound; reporting is very mechanical for the most part and lends itself well to AI.

whamlastxmas

6 days ago

It’s weird that I have to attest I’m a healthcare professional just to view your job openings

naveen99

6 days ago

X-ray specific model. Fractures are relatively easy; chest and abdomen x-rays are hard. Very large chest x-ray datasets have been out for a long time (like the ones from Stanford). Problem solving is done with CT, ultrasound, PET, MRI, fluoroscopy, and other nuclear scans.

hammock

6 days ago

I looked at my rib images for days trying to find the fracture. Couldn't do it. Could barely count the ribs. All my doctor friends found it right away though

naveen99

3 days ago

OK, yeah, rib fractures on chest X-rays are hard also. Even extremity fractures can be hard. Some are not directly visible, but you can look for indirect signs such as hematomas displacing fat pads. Stress fractures show up only on MRI or bone scans…

Improvement

6 days ago

I can't find any git link; hopefully I will look into it later.

From their benchmarks it's looking like a great model that beats the competition, but I will wait for third-party tests after release to judge the real performance.

moralestapia

6 days ago

"Exclusive Dataset"

"We have proprietary access to extensive medical imaging data that is representative and diverse, enabling superior model training and accuracy. "

Oh, I'd love to see the loicenses on that, :^).

infocollector

6 days ago

I don't see a release? Perhaps it's an internal distribution to subscribers/people? Does anyone see a download/GitHub page for the model?

stevenbuscemi

6 days ago

Harrison.ai typically productionize and commercialize their models through child companies (Annalise.ai for radiology, Franklin.ai for pathology).

I'd imagine access to the model itself will remain pretty exclusive, but would love to see them adopt a more open approach.

blazerunner

6 days ago

I can see a link to join a waitlist for the model, and there is also this:

> Filtered for plain radiographs, Harrison.rad.1 achieves 82% accuracy on closed questions, outperforming other generalist and specialist LLM models available to date (Table 1).

The code and methodology used to reach this conclusion will be made available at https://harrison-ai.github.io/radbench/.

joelthelion

6 days ago

Too bad it's not available llama-style. We'd see a lot of progress and new applications if something like that was available.

newyankee

6 days ago

I wonder if there is any open source radiology model that can be used to test and assist real world radiologists

zxexz

6 days ago

I recall there being a couple of non-commercial ones on PhysioNet trained on the MIMIC-CXR dataset. I could be wrong; I'll hopefully remember to check.

amitport

6 days ago

There are a few for specific tasks (e.g., lung cancer), but no "foundation" models AFAICT.

zxexz

6 days ago

There really should be at this point. Annotated radiology datasets, with patients numbering into the millions, are the easiest healthcare datasets to obtain. I suspect there are many startups, and know of several long since failed, who trained on these. I've met radiologists who assert most of their job comes down to contextualizing their findings for their colleagues, as well as within the scope of the case itself. That's relevant here - it doesn't matter how accurate or precise your model is if it can't do that. Radiologists already use "AI" tools that are very good, and radiology is a very welcoming field for new technology. I think the promise of foundation models at the moment would be to ease burden and help prevent burnout. Unfortunately, those models aren't "sexy" - they reduce administrative burden, assemble contextual evidence for better retrieval, and have interfaces that don't suck when integrated with the EMR.

ilaksh

6 days ago

Can you provide a link or search term to give a jumpstart for finding good radiology datasets?

naveen99

6 days ago

TCGA from NCIA has some for cancer.

DeepLesion is another one, out of the NIH.

Segmed is a YC company that sells access to radiology datasets.

hammock

6 days ago

Radiology is the best job ever. Work from home, click through pictures all day. Profit

user

6 days ago

[deleted]

user

6 days ago

[deleted]