The End of ‘Algorithm Radiology’
RADIOLOGY IN THE AGE OF AI & VLMS | ARTICLE 1 OF 14
The era of the narrow, single-task detection algorithm is over. What’s replacing it is something fundamentally different, and most radiologists don’t realize it yet.
When the first wave of radiology AI hype crested around 2016, with headlines about algorithms replacing radiologists by 2021, I, like many of you, was skeptical. Not of AI in general, but of the specific claims being made. The systems being touted were impressive in controlled settings and surprisingly brittle everywhere else. They were very narrow. One task, one modality, one population.
Some things stuck and remain useful: triage tools for large vessel occlusion, fracture flagging on plain films, PE detection, lung nodule tracking. But the revolution didn’t arrive on schedule, and a lot of radiologists quietly filed ‘AI’ next to ‘voice recognition will replace dictation’ in the cabinet labeled ‘technology that overpromised.’
I’m writing this series because I think that mental model is ready to be updated. Not because AI hype has returned (it has, loudly), but because this time, the underlying architecture is categorically different. And most of us haven’t had time to understand why that matters.
The First Wave: What It Could and Couldn’t Do
The generation of AI that dominated radiology from roughly 2012 to 2022 was built on convolutional neural networks (CNNs). These are extraordinarily powerful at one thing: pattern recognition on images. Show a CNN ten million chest X-rays labeled with pneumonia and not-pneumonia, and it will learn to detect pneumonia with remarkable accuracy. That’s genuinely useful.
But a CNN trained on chest X-rays knew nothing about the radiology report. It knew nothing about the patient’s prior imaging, their clinical history, or the structured language radiologists use to describe what they see. It processed pixels. It output a probability score. It was, in a very real sense, blind to everything that gives a radiologic finding clinical meaning.
That limitation was a feature of the architecture. CNNs are fundamentally single-modal, single-task tools. The ‘AI’ being deployed in most radiology practices today is still mostly this generation: dozens of narrow algorithms running in parallel, each doing one thing, none of them talking to each other or to the report.
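The contract of that first generation can be stated in a few lines of code. Here is a deliberately minimal sketch; the “classifier” below is a toy stand-in, not a real model, but the signature is the point:

```python
import numpy as np

def pneumonia_classifier(pixels: np.ndarray) -> float:
    """Stand-in for a trained CNN: pixels in, one probability out.

    Note what the signature lacks: there is no argument for clinical
    history, prior imaging, or the report. The architecture has no
    entry point for any of them.
    """
    # toy scoring in place of learned convolutions
    score = pixels.mean()
    return float(1 / (1 + np.exp(-score)))  # sigmoid -> [0, 1]

prob = pneumonia_classifier(np.random.rand(224, 224))
assert 0.0 <= prob <= 1.0  # a single probability score is all you get
```

Everything the first wave shipped, however sophisticated the training, reduced to some version of this function: one input type, one output number, one task.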
The Pivot Moment: Language Enters the Model
Something changed around 2022 that didn’t attract nearly enough attention in radiology circles. A new class of model emerged: Vision Language Models, or VLMs, trained not on images alone, but on image-text pairs. Millions of them. Radiology reports paired with the images that generated them, alongside clinical notes, pathology results, and EHR data.
For the first time, language became part of the model, not a downstream output layer bolted onto a vision system, but woven into the architecture from the beginning. These models can look at a chest CT and reason about it in the same cognitive space as the report. They understand that ‘ground-glass opacity in the right upper lobe’ means something different in the context of a 70-year-old smoker than in a 30-year-old presenting with fever. They can synthesize prior imaging and draft a report.
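To make the architectural difference concrete, here is a tiny, CLIP-style sketch of a shared embedding space. The vectors below are invented for illustration; in a real VLM they come from trained image and text encoders. But the mechanism, scoring image-text relevance by similarity in a single space, is the core idea:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two vectors in the shared image-text space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings; a trained encoder would produce these.
image_emb = np.array([0.90, 0.10, 0.20])  # a chest CT, embedded
candidate_findings = {
    "ground-glass opacity, right upper lobe": np.array([0.85, 0.15, 0.25]),
    "no acute cardiopulmonary abnormality":   np.array([0.10, 0.90, 0.30]),
}

# Because image and text live in the same space, the model can ask
# "which description best matches this image?" -- a question a
# pixels-only CNN has no vocabulary to express.
best = max(candidate_findings,
           key=lambda t: cosine(image_emb, candidate_findings[t]))
print(best)  # -> "ground-glass opacity, right upper lobe"
```

Generative VLMs go further than this retrieval toy, producing free text conditioned on the image, but the shared space is what makes that possible.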
That last sentence deserves to sit for a moment. VLMs are already generating preliminary radiology reports, at scale, in production, at institutions you know. Article 2 of this series goes inside that workflow: what the architecture actually looks like, what radiologists in these trials say about using VLM output the way they’d use a resident’s prelim, and where the real risks lie.
Next week, Article 2: ‘From Algorithms to Vision: How VLMs Are Generating Preliminary Reports (And What That Means)’
Two Studies That Make the Argument Better Than I Can
If you want a concrete illustration of why the algorithm era is over, two papers published in the past year do more work than any amount of theoretical argument.
The first is BrainIAC, published in Nature Neuroscience in 2026. It is a foundation model that takes a routine brain MRI and simultaneously outputs seven distinct clinical predictions: IDH mutation status, survival probability, dementia risk, brain age, stroke timing, and more. Think carefully about what that means operationally. Studies that previously required invasive tissue biopsy to answer a single question are now being interrogated for a panel of biomarkers from a single non-invasive scan. That is not an algorithm. That is a generalized reasoning system applied to imaging data.
The second is OMAFound, published in Nature Health in 2026. A single non-contrast CT scan, screened simultaneously for breast cancer and lung cancer, with breast cancer detection performance matching dedicated mammography. One study, two cancers, and no contrast. If your mental model of radiology AI is still a PE detection tool or a fracture flagging tool, this paper nudges you to update it. The value density of a single imaging study is being redefined, and the implications for how we think about ordering, reimbursement, and the radiologist’s interpretive role are significant.
Both of these are real papers with real data. This is not a roadmap; it is already happening.
We will return to BrainIAC and OMAFound in Article 14, when we look at what imaging is actually worth and how AI changes the economics of the study.
Why the Skeptic’s Objection Deserves a Direct Answer
‘We’ve been through this before,’ you may say, and it’s a fair objection. I want to address it directly.
The 2012–2022 hype was real and so was the disappointment. But the failure mode of that era was architectural, not aspirational. Narrow models built on single-modal data hit a ceiling imposed by their design. VLMs don’t hit that same ceiling; they were designed to reason across modalities, to integrate text and image, to generalize.
That said, the skeptic is right about one thing: architectural capability does not equal deployed reliability. VLMs have their own failure modes, and they’re more complex and harder to audit than CNN failures. They can hallucinate findings. They can encode the biases of the radiologist-authored reports they trained on. They can degrade silently when the input distribution shifts. Articles 7 and 8 of this series address those failure modes directly, because understanding them is not optional for anyone who will be signing reports that AI touched.
Coming in Articles 7 and 8: Why AI needs an immune system, and how bias, drift, and ‘collision risk’ between tools can silently degrade performance. And in Article 7 specifically: what happens when AI stops being a co-pilot and starts acting as an autonomous agent.
What This Series Is About
How do we practice, lead, and build careers in a field that is being reorganized around us in real time?
Over fourteen articles, we’re going to cover the terrain that matters. How VLMs work and what they can actually do today. How to read AI performance statistics without being misled by validation theater (Article 3). What happens to productivity, and to practice economics, if AI doubles throughput (Article 4). What the radiology labor market looks like when you put the workforce shortage and AI efficiency on the same graph (Article 5).
We’ll go deep on what could happen if AI stops being a co-pilot and starts acting as an autonomous agent, making decisions without a radiologist in every loop (Article 7). We’ll get into the legal and liability questions that keep practice leaders up at night: who is responsible when AI misses a finding you signed off on (Article 9). We’ll examine the national data infrastructure being built right now through TEFCA and QHINs, and what it means that your imaging data is flowing through networks you may not have evaluated (Article 10). We’ll look at how radiology is colliding with pathology and endoscopy AI in ways that redefine what a ‘diagnostic specialist’ even is (Article 11).
And we’ll end where I think this conversation should end: not with alarm, but with strategy. How to evaluate and buy AI without getting burned (Article 12). How to reclaim the consultative, high-complexity role that makes radiology irreplaceable (Article 13). And what the radiologist who thrives in 2030 looks like: what they know, how they practice, and what they’ve built (Article 14).
One Thing Before We Start
I am a radiologist and I’ve built and invested in technology companies. I have skin in this game and I’ve tried to let that sharpen rather than distort my analysis. Where I have a point of view, I’ll say so. Where the evidence is uncertain, I’ll say that too.
This is the most significant structural change to hit diagnostic radiology since PACS. Not because AI is infallible; of course it isn’t. Not because the economics are simple; far from it. But because the direction of travel is clear, the pace is accelerating, and the radiologists who navigate it best will be the ones who understood it earliest.
That’s what this series is for.
Next Tuesday, Article 2: From Algorithms to Vision: How VLMs Are Generating Preliminary Reports (And What That Actually Means)
References
Schouten et al. The Future of Radiology: The Path Towards Multimodal AI and Superdiagnostics. Current Problems in Diagnostic Radiology, 2025.
Li et al. AI Solutions to the Radiology Workforce Shortage. npj Health Systems, 2025.
Navigating the AI Revolution: Will Radiology Sink or Soar? PMC Review, 2025.
Kann et al. A Generalizable Foundation Model for Analysis of Human Brain MRI (BrainIAC). Nature Neuroscience, 2026.
Qian et al. A Foundation Model for Breast and Lung Cancer Screening Using Non-Contrast CT (OMAFound). Nature Health, 2026.
Ty Vachon, M.D.
Radiologist | Entrepreneur | Radiology in the Age of AI & VLMs Series

