The New Radiology Class Is Here. The Department Is Ours to Design.

RADIOLOGY IN THE AGE OF AI & VLMS  |  ARTICLE 5 OF 14

Radiology programs across the country posted their new PGY-2 classes this week. The photos are familiar: white coats, big smiles, the particular relief of a match that worked out. What the photos do not show is the question underneath all of it, the one this series has been working toward since Article 1: what kind of department are these residents walking into?

The honest answer is that it depends on what those of us already here decide to build. The department these residents will work in at the height of their careers does not quite exist yet. The AI tools that will define it are being deployed right now, the economic structures around them are hardening, and the decisions being made at the departmental level today will shape the environment these new colleagues inherit. That is not abstract. It is a design problem with a real deadline.

Three facts frame the current moment, and they look like they are pulling in opposite directions. More vacant positions than graduating residents. Salaries at record highs. AI that is already demonstrating meaningful efficiency gains in high-volume imaging work, with considerably more on the horizon.

This article takes those three facts apart and examines how they fit together.


The shortage is structural, not incidental

Start with what is actually driving the labor market, independent of AI entirely. The radiology workforce shortage predates the current wave of AI tools by years and has nothing to do with them. The causes are well understood: an aging workforce approaching retirement, a residency pipeline that cannot be meaningfully expanded on a short timeline, chronic geographic maldistribution that leaves rural and safety-net systems perpetually underserved, and a burnout-driven attrition rate that accelerated sharply during and after the pandemic.

The AAMC projects a combined shortage of 17,000 to 42,000 radiologists, pathologists, and psychiatrists by 2033.¹ That is not a rounding error. It represents a structural gap between demand and supply that no credible near-term AI deployment scenario is positioned to close. Closing a workforce gap requires people, and AI does not hold a medical license. The residents matching this week enter independent practice around 2030, right inside the window when that gap is projected to be widest and when AI adoption curves are expected to steepen simultaneously. They will not be insulated from this. They will be practicing inside it.

At the same time, imaging volume continues to grow. An aging population, expanding access to care, and the proliferation of imaging-intensive oncologic and cardiovascular protocols are all pushing exam volumes upward. More imaging. Fewer radiologists available to read it. The near-term arithmetic is clear.²

The near-term paradox

This is where AI enters the picture, and where most of the public narrative gets muddled. AI can make radiologists faster. The Northwestern Medicine deployment covered in Article 4 documented average efficiency gains of 15.5 percent across 24,000 real-world reports, with individual ceilings reaching 40 percent.³ That is meaningful. But efficiency gains in a supply-constrained market do not produce unemployment. They produce capacity relief.

If your group is chronically short-staffed and reading more volume than is comfortable, a 20 percent throughput boost means you are short-staffed but somewhat less exhausted. It means a rural hospital on the margins of financial viability can sustain teleradiology coverage for a modality it otherwise could not. It does not mean you are being replaced. In the near term, demand is growing faster than AI can automate, and the workforce gap is growing alongside it.

But near-term and long-term are not the same thing. And that distinction deserves more honesty than it usually gets.

Where the math changes

The scenario that should have practice leaders paying attention is not the one where AI displaces your job directly. It is the one where AI doubles throughput at the same moment imaging volume plateaus.

That combination has not materialized yet. But it is not implausible on a five-to-ten year horizon. Langlotz and colleagues modeled 14 to 49 percent reductions in radiologist hours over five years under various AI adoption scenarios.⁴ A Perspective published in 2025 made the distributional argument more directly: AI productivity gains will not flow to employed radiologists. They will flow to the employers, vendors, and private equity groups who control the contracts.⁵ Private equity-backed groups now employ approximately 12 percent of U.S. radiologists, a tenfold increase in a decade. AI-enabled throughput is attractive to PE not primarily because it improves patient care, but because it changes the labor-to-revenue ratio. If you are an employed radiologist, that is worth sitting with.

The practices positioned to capture the largest productivity gains from AI are precisely those already operating at the high end of the volume curve: high-throughput breast imaging, screening chest X-ray, and routine overnight coverage. The gains are real there and will compound as the technology matures. For radiologists in complex subspecialty or academic settings, the 40 to 80 percent efficiency projections circulating in the literature are aspirational rather than operational, at least in the near term. AI does not amplify what it cannot yet automate.

What the data says about the radiologist’s irreplaceable function

Here is the argument I find most compelling for why radiologists are not being replaced in any near-term scenario, and it comes from a large prospective dataset rather than reasoning by analogy.

Larson, Poff, and colleagues at Stanford’s AIDE Lab, working with Radiology Partners and Aidoc, prospectively evaluated 13 AI models across 12 clinical tasks and roughly 89,000 real-world exams.⁶ The finding worth anchoring to: across that entire dataset, AI consistently achieved higher sensitivity, while radiologists consistently maintained higher positive predictive value.

This is a description of two different cognitive tools performing two different functions. AI is better at not missing things. Radiologists are better at confirming that what they call positive is actually positive.

These capabilities are not interchangeable; they are complementary.
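The distinction between the two functions is just the difference between two ratios computed from the same confusion matrix. The counts below are invented to illustrate the pattern the Larson data describes, not taken from that study.

```python
# Toy confusion-matrix illustration with invented counts (not the Larson
# data): sensitivity and PPV are different ratios, and on the same case
# mix they can favor different readers.

def sensitivity(tp: int, fn: int) -> float:
    # Recall: of all truly positive exams, how many were caught.
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    # Precision: of all positive calls made, how many were real.
    return tp / (tp + fp)

# Hypothetical: 100 truly positive exams in the mix.
ai_tp, ai_fn, ai_fp = 95, 5, 40     # flags aggressively: misses little, over-calls
rad_tp, rad_fn, rad_fp = 85, 15, 5  # calls conservatively: high precision

assert sensitivity(ai_tp, ai_fn) > sensitivity(rad_tp, rad_fn)   # 0.95 vs 0.85
assert ppv(rad_tp, rad_fp) > ppv(ai_tp, ai_fp)                   # ~0.94 vs ~0.70
```

A reader optimized for not missing things and a reader optimized for being right when it calls something positive are solving different problems, which is why stacking them produces a better system than either alone.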

And the clinical and medicolegal architecture of radiology is built around the PPV function: the signed report, the professional judgment behind it, and the liability structure that attaches to both. You cannot run a diagnostic system on sensitivity alone; someone has to decide what is real.

The human-AI team outperforms either component working independently, which is the empirical basis for why “AI versus radiologist” is the wrong frame. The correct frame is “what does this specific combination produce that neither could produce alone?”

A randomized mock-juror experiment from Bernstein, Sheppard, Bruno, Baird, and colleagues at Brown University, Penn State College of Medicine, and Seton Hall University School of Law puts a number on what that complementarity is worth in practice.⁸ The study tested 282 participants on a hypothetical malpractice scenario: a radiologist misses a brain bleed on CT that AI had correctly flagged as abnormal. In the single-read condition, where the radiologist interpreted the case only after seeing the AI output, 74.7 percent of mock jurors sided with the plaintiff. In the double-read condition, where the radiologist interpreted the case independently before receiving AI feedback, plaintiff-siding dropped to 52.9 percent. A 22 percentage point reduction driven entirely by workflow design, with no change in the clinical outcome or the AI tool itself.

The workforce implication is direct. Double-read preserves independent radiologist judgment as a documented first-pass clinical act before AI input is introduced. That documented judgment is precisely the cognitive function the Larson data describes as the radiologist’s comparative advantage: precision over recall. Double-read is how that complementarity gets operationalized in daily practice and built into the medical record. It is not a workaround; it is the correct architecture for a human-AI diagnostic team.
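The ordering constraint at the heart of double-read can be sketched as workflow logic. This is a minimal illustration with invented record fields, not any vendor's implementation: the independent impression is committed to an append-only record before the AI output is disclosed.

```python
# Minimal sketch of the double-read ordering described above. The record
# structure and field names are hypothetical, for illustration only.

from dataclasses import dataclass, field

@dataclass
class CaseRecord:
    exam_id: str
    events: list = field(default_factory=list)  # append-only audit trail

    def log(self, event: str) -> None:
        self.events.append(event)

def double_read(record: CaseRecord, radiologist_read: str, ai_flag: str) -> str:
    # Step 1: the independent human read is documented first.
    record.log(f"radiologist_read:{radiologist_read}")
    # Step 2: only then is the AI output disclosed and reconciled.
    record.log(f"ai_flag:{ai_flag}")
    final = radiologist_read if radiologist_read == ai_flag else "discrepancy_review"
    record.log(f"final:{final}")
    return final

rec = CaseRecord("CT-001")
double_read(rec, radiologist_read="negative", ai_flag="positive")
# The audit trail shows human judgment preceding AI input, which is the
# documented first-pass clinical act the Bernstein study rewarded.
assert rec.events[0].startswith("radiologist_read")
```

The design choice the mock-juror data rewards is exactly this sequencing: the medicolegal value lies in the timestamped order of events, not in any change to the AI tool or the clinical outcome.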

Supporting this from a different angle: the MedVersa reader study, which compared standard templates, GPT-4o drafts, and MedVersa drafts across 75 chest X-rays read by ten board-certified radiologists, found measurable reductions in report discrepancies when AI-drafted reports were used.⁹ Fewer discrepancies between AI-assisted and final radiologist reports is not evidence that the radiologist is redundant. It is evidence that the combination surfaces disagreements worth resolving, which is exactly the function an independent first read is designed to protect.

Teleradiology and the compression risk

There is one corner of this labor market that deserves direct attention. Global AI-powered teleradiology is already scaling. Platforms combining low-cost international radiologist labor with AI pre-reads and structured reporting automation can deliver overnight coverage at a fraction of domestic cost. This is not theoretical; it is the current business model of multiple active commercial platforms.⁷

For routine studies in commoditized categories, including normal chest X-rays, after-hours non-urgent plain films, and overnight CT coverage for community hospitals, the pricing pressure is real and already present. The subspecialty radiologist reading complex oncologic MRI at an academic center is essentially insulated from this dynamic. The general teleradiologist reading high-volume routine studies is not. AI amplifies that divergence; it does not create it.

The strategic response

If you are a department leader, the question is not whether AI will reshape how your group functions. It will. The question is whether you are the one doing the shaping or whether you find out after the fact what someone else decided.

The departments that will be best positioned are those that do three things deliberately.

First, move into high-complexity, AI-resistant subspecialty work where clinical judgment, not throughput, is the value driver. Second, structure AI adoption so efficiency gains translate into departmental capacity and revenue rather than flowing entirely to vendors and employers. Third, take seriously what equity participation in AI infrastructure looks like for the physicians deploying these tools, rather than treating technology adoption as something that happens to them.

The build-versus-buy question, addressed at length in Article 12, is inseparable from this. The economics of AI deployment are not neutral with respect to who captures the gains. That question belongs in the department’s hands.

The new residents who matched this week will spend thirty-plus years in this specialty. The department they work in is not fixed. It is being designed right now, in procurement decisions and contract negotiations and governance conversations that most of them are not yet in the room for. Getting them into those rooms, and building a department worth inheriting, is the actual work in front of us.


Up Next in Article 6:

The residents matching this week will spend their careers reading alongside AI tools that are being deployed right now, before the infrastructure to govern them fully exists. Article 6 examines what that preparation requires, starting from a premise any radiologist who trained in a high-stakes environment will recognize: readiness is built before the mission launches, not assembled in response to a mishap after the fact.


AI can increase output per radiologist if it behaves like a well-trained fellow.

If it behaves like a first-year, supervision friction is too high.

If you want to deploy AI in a way that expands effective capacity, protects revenue, and surfaces risk early, I can help.

I am identifying a small number of forward-leaning partner sites to build and pilot independent AI performance evaluation software in real clinical workflows. Email me here: ty@orainformatics.com


References

1. AI Solutions to the Radiology Workforce Shortage. Li et al., npj Health Systems, 2025. https://www.nature.com/articles/s44401-025-00023-6

2. The Role of AI in Mitigating the Impact of Radiologist Shortages: A Systematised Review. PMC, 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC12085355/

3. Generative AI Boosts Radiology Productivity Up to 40% in Large Multi-Site Clinical Deployment. Huang, Etemadi et al. (Northwestern Medicine), JAMA Network Open, 2025. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2834943

4. What Effect Will AI Have on the Radiologist Workforce? Langlotz et al. (Stanford), AuntMinnie/MedRxiv, 2024. https://www.auntminnie.com/imaging-informatics/artificial-intelligence/article/15814368/what-effect-will-ai-have-on-the-radiologist-workforce

5. Perspective: AI Productivity Will Not Benefit Employed Radiologists. ScienceDirect, 2025. https://www.sciencedirect.com/science/article/pii/S3050577125000313

6. Predicting the Value of Radiology AI Applications: Large-Scale Predeployment Evaluation of a Portfolio of Models. Larson, Poff et al. (Stanford AIDE Lab / Radiology Partners / Aidoc), AJR, March 2026. https://www.ajronline.org/doi/10.2214/AJR.25.34340

7. Teleradiology Trends and Industry Changes: A Deep Dive for 2025. AAG Health, 2025. https://www.aag.health/post/teleradiology-trends-industry-changes

8. The Radiologist-AI Workflow and the Risk of Medical Malpractice Claims. Bernstein, Sheppard, Bruno, Baird et al. (Brown University / Penn State College of Medicine / Seton Hall University School of Law), Nature Health, March 10, 2026. https://www.nature.com/articles/s44360-026-00085-2

9. MedVersa: A Generalist Foundation Model for Diverse Medical Imaging Tasks. Zhou, Acosta, Adithan, Datta, Topol, Rajpurkar et al. (Harvard / Scripps Research / Stanford), NEJM AI, 2025/2026. https://ai.nejm.org/doi/full/10.1056/AIoa2500595
