Radiologist’s Intro to Machine Learning – 2 of 10

Article 2 – How Radiology and AI Will Come Together

An efficient, accurate resident with an experienced attending is a force to be reckoned with. The team can crush the list of patients and take trainwreck cases in stride. The resident can pull up labs and history as needed, and they can get the right person on the phone at the right time and handle all but the most complicated clinical scenarios. It takes years of training to achieve this.

Let’s take a look at Dr. Keith Dreyer’s outline of what clinical diagnostics looks like.

Start with the patient in the lower left and move left to right.

Now, this clinical workflow outline is very general, but we can trace the patient's steps from entering the healthcare system, to the provider ordering imaging, to the radiologist doing what he or she does best. It provides a good overview of a patient's diagnostic path. The next image adds context specific to radiology.

The clinical pathway from above is narrowed to follow a patient with acute neurological symptoms. From left to right: symptom, imaging, image interpretation and clinical management.

From the slide above, we can categorize the vast array of functions a radiologist takes part in many times each day. Specific points in the process draw on the radiologist's knowledge, with teamwork woven throughout. Managing the care of even one patient requires a whole team navigating interrelated factors in a complex working environment.

Moreover, as we all know, there are points in the workflow that are tedious to complete and that could be automated. Automation could improve the radiologist's workflow, and that improved efficiency could translate into improved patient outcomes and real dollars saved.

This is where artificial intelligence can come into play. In the slide above, the AI algorithms enter at the Detect stage. Generally speaking, this is where most machine learning effort is currently focused. In the future, however, we can imagine a world where most of these steps are augmented or assisted by a smart algorithm to improve workflow efficiency. As you move through this series of articles, this combination of machine and human interaction will become clearer.


Radiology has tended to be at the forefront of technology. “Medical imaging has learnt itself well into modern medicine as a hallmark in the last 30 years and revolutionized health care delivery and medical industry.” (1) MRI and PACS have transformed how imaging is delivered, consumed, and interpreted, and they have paved the way for the general notion that radiology should be at the head of the pack.

Innovations in imaging can not only equip the radiologist to better understand an underlying disease process, but can also translate into improved patient outcomes.


So, where does AI fit in?

One of the first applications is in cardiac MRI. Typically, radiologists (fellows and residents) complete the very tedious task of drawing outlines of the cardiac outflow tracts, which can take hours. Now, in theory, this can be completely automated. Much as Facebook detects faces in photos, an algorithm could automatically detect the aortic root and draw its outline, then repeat the process for every needed outline. Once the outlines are completed, the calculations for flow, stenosis, and regurgitation could also be automated.
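To make the idea concrete, here is a minimal, self-contained sketch of the measurement step: threshold a grayscale slice, treat the bright region as the structure of interest, and derive a measurement from it. The synthetic "phantom" image and the fixed threshold are invented for illustration; a real pipeline would use a trained segmentation model, not simple thresholding.

```python
import numpy as np

def segment_and_measure(image, threshold):
    """Return (area_px, equivalent_diameter_px) of the thresholded region."""
    mask = image > threshold
    area = int(mask.sum())
    # Diameter of a circle with the same area as the segmented region
    diameter = 2.0 * np.sqrt(area / np.pi)
    return area, diameter

# Synthetic cross-section: a bright disc (radius 10 px) on a dark background,
# standing in for the aortic root on one slice
yy, xx = np.mgrid[:64, :64]
phantom = (((yy - 32) ** 2 + (xx - 32) ** 2) <= 10 ** 2).astype(float)

area, diam = segment_and_measure(phantom, threshold=0.5)
```

Once such an outline exists for every slice and phase, the downstream flow and stenosis calculations become arithmetic over these measurements.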

From an attending physician's perspective, if your fourth-year resident consistently underestimated the aortic root circumference, you would encourage them to improve on this specific task using your own educational style and technique, which, as you may remember from residency, varied from staff to staff.

Ideally, algorithms will be better than residents in this scenario. As you manipulate the images to correct any errors, the algorithm will automatically learn what is accepted and what is wrong. Now imagine this scaled across an institution, or even a cluster of institutions. The process should be nearly transparent to you: when you make a change, the system makes a small course correction and continually improves.
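The "small course correction" idea can be sketched as a toy online-learning loop. A linear model estimates a measurement from three synthetic image features, and each radiologist correction becomes one small gradient step. All numbers here are invented for illustration; the point is only that the average error shrinks as corrections accumulate.

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0, 0.5])   # the hidden relationship to be learned
w = np.zeros(3)                        # the algorithm starts out naive

errors = []
for _ in range(500):
    x = rng.normal(size=3)             # features from one study
    corrected = true_w @ x             # the radiologist's corrected value
    predicted = w @ x                  # the algorithm's estimate
    err = corrected - predicted
    w += 0.05 * err * x                # small course correction (one SGD step)
    errors.append(abs(err))

early_error = sum(errors[:20]) / 20
late_error = sum(errors[-20:]) / 20    # far smaller than early_error
```

Scaled across an institution, every correction from every reader becomes a training signal, which is why the improvement can feel transparent to any one user.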

This is just one scenario among many that will be investigated as intelligence is embedded within the clinical workflow.


“Extending human vision into the very nature of disease, medical imaging is enabling new and more powerful generation of diagnosis and intervention” (1).

The radiology community is part of a unique subset of physicians: we tend to be early adopters of technology. As such, we are primed to be on the cutting edge of the tools being developed that will pervade our workspace. No, we do not need to know how to program a neural network ourselves; we can leave that to the Kaggle developers and others mentioned in the previous article. But it will be very helpful to understand the basics, underlying motivation, and implications of machine learning.

Next week:

Article 3 – Machine Learning and Artificial Intelligence Introduction



  1. Gao, X. W. (2011). The State of the Art of Medical Imaging Technology: from Creation to Archive and Back. The Open Medical Informatics Journal, 5(1), 73-85. doi:10.2174/1874431101105010073
  2. Dreyer, K. Deep Learning, Clinical Data Science and Radiology.

Radiologist’s Intro to Machine Learning – 1 of 10

Article 1 – Introduction to Our Series

In 2017, the Kaggle Data Science Bowl took aim at using machine learning and artificial intelligence to fight the leading cause of cancer death in the US among both men and women.  Entrants were challenged to use a dataset of thousands of high-resolution pulmonary CT images to create new lung cancer detection algorithms. These algorithms were made to improve diagnosis and reduce false positive rates.

Of the 394 competing teams, which team received the top prize?  A team combining members from both the Medicine and Computer Science Departments of Tsinghua University in China.

Competitions such as this are a great way to combine international talent with global problems. This style of teamwork is just scratching the surface of the infinite potential for advancement within our field through interactions between medical professionals and computer science.

During radiology training, we learn that a 3-cm, spiculated, soft-tissue attenuating lung mass has a very high probability of being cancer.  Likewise, a 5-mm, smooth, calcified nodule has a very low probability of being cancer.

However, we also know that many pulmonary nodules fall somewhere in between, beyond our ability to accurately predict malignancy. The Fleischner Society worked very hard to offer a solution with its updated follow-up criteria in 2017, which incorporated both size and density changes. Still, we can't look at an 8-mm nodule with a slightly irregular border and say how likely it is to be cancer.
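One way to picture what an algorithm does with these textbook features is a toy logistic score that maps (size, spiculation, calcification) to a number between 0 and 1. The weights below are invented purely for illustration: this is NOT a validated model and NOT clinical guidance, but it shows why the extremes are easy and the 8-mm irregular nodule lands in the uncertain middle.

```python
import math

def toy_malignancy_score(diameter_mm, spiculated, calcified):
    """Toy logistic score; weights are made up for illustration only."""
    z = 0.25 * (diameter_mm - 10.0) + 2.0 * spiculated - 2.5 * calcified
    return 1.0 / (1.0 + math.exp(-z))

high = toy_malignancy_score(30, spiculated=True, calcified=False)  # classic mass
low = toy_malignancy_score(5, spiculated=False, calcified=True)    # benign pattern
gray = toy_malignancy_score(8, spiculated=False, calcified=False)  # the hard case
```

A trained model learns its weights from labeled data rather than from a textbook, which is exactly why large, properly labeled datasets matter so much.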

To take the Kaggle competition one step further, there is a very real possibility that the Fleischner criteria (or their replacement) will become highly customizable and that lung nodule tracking will improve. We will dive more into this in Article 2.


These Kaggle teams use technology similar to that behind Facebook's facial recognition. Have you ever wondered how Facebook can determine who is in your photos? This technology is called deep learning, a subset of machine learning, which is itself a subset of artificial intelligence. We will dive deeper into these topics in Articles 3, 5, and 6.

These sorts of efforts are a testament to the open-source community, and how people are determined to find novel solutions to important problems by working together and sharing data.

Let’s take the contrarian view: that machine learning and artificial intelligence may portend the obsolescence of the radiology specialty. Professor Geoffrey Hinton comes to mind.


Geoffrey Hinton is a very smart guy, but his lack of training in the nuances of the radiology specialty, such as image-guided biopsies, tumor boards, and discussions with our surgical colleagues, has perhaps given him only a superficial view of our profession. Radiologists’ jobs will morph with new tools but will be around as long as we continue to assist our clinical colleagues.

The outcome of the Kaggle competition also benefited from timing and available resources. Thanks to the huge video gaming market, we have cost-effective, high-powered computing for the first time in the form of graphics processing units (GPUs). More in Article 3. Another area we may take for granted is voice recognition. While we may not notice day-to-day improvement, it has certainly improved over the past decade. More in Article 4.

Another very important piece of the puzzle is data. Lots of data. Lots of data that is properly labeled. The American College of Radiology (ACR) and Stanford are currently working on this. More in Article 5 (or 7).

Collaborative teams of computer scientists and medical professionals have amazing potential to develop field-changing algorithms. But when we talk about inserting these technologies into our daily workflow, or into the context of privacy or data management, cue the crickets. More in Article 8.


Before we go any further, we would like to formally introduce ourselves.

Ty Vachon –

“This is the first of a series of 10 articles aimed to guide my colleagues. I have been tracking the growth of ML and medical applications since 2012 and following great mentors like Drs. Dreyer and Michalski. Informatics, particularly medical image utilization, has been a large part of my background. I received my radiology training in the US Navy after serving as a flight surgeon with the Marine Corps. My final Navy tour was in Okinawa, including a Radiology Department Head tour, before completing my Navy commitment and moving back to San Diego as an Angel Investor, entrepreneur, and informatics advisor and consultant. As of this writing, I have no relevant financial disclosures regarding this series.”

Danilo Pena –

“I have a background in chemical engineering, and worked as an engineer for two years. During the job, I realized that I needed to make a larger impact on society, and I also wanted to learn to code. Thus, I applied to school, got in, and quit my job. I am currently a Biomedical Informatics Master’s student at the University of Texas Health Science Center in Houston and an Albert-Schweitzer Fellow. I am always learning, and I am excited to help others learn what I know. I hope that through this series of articles, people from the medical field to the machine learning field to just the average person can use this information to understand the current landscape of radiology and its relationship with technology advancements in artificial intelligence.”

We believe that through our disparate, but complementary skill sets, we can educate others about this exciting field.


Now that you know a little bit about us, you might be wondering why you should spend your time on our series.

During the series, we will start slow and review key terminology. We will also discuss enough recent historical progress to provide context and address new trends. If you are a little more advanced, please offer clarifying thoughts from personal experience in the comments. And of course, if you notice any erroneous text, we will humbly review those comments as well.

This series is not meant to be exhaustive, but these articles are meant to level the playing field when it comes to radiology and artificial intelligence. This field is rapidly changing, and there are a lot of moving parts. We are doing our part to educate and learn through this process.

Join us for an interesting set of articles curated by a seasoned, technology-focused radiologist and a student interested in understanding how ML and AI will affect the next generation of healthcare.

Next week:

Article 2 – How Radiology and AI Will Come Together

Editor: Michael Doxey, M.D.

Radiology Residents and Machine Learning – 6 quick tips and links

There are many moving parts and there is a lot of information passing before our eyes. This is my version of a summary snapshot as we move into Summer 2018. What did I forget? Please add your thoughts below.

6 quick tips, links and areas to consider as we move forward.

1. Pay attention to Drs. Dreyer and Michalski and their team at the MGH & BWH Center for Clinical Data Science.

“Radiologists will not be replaced by ML; however, radiologists who don’t use ML may be replaced by those who do.” – Mark Michalski and Keith Dreyer

2. Get a sense of what’s out there.

There were 30 ML/AI companies at RSNA last year and Dr. Harvey has done a nice job curating the list:

The A-Z Guide to Radiology AI Companies

Per MIT, there are 130 companies working on AI and healthcare in China.

3. See where the ML vendors are working to add to our current workflow. Some examples: Carestream, Fuji and IBM.


Fujifilm Showcases Enterprise Imaging Portfolio and AI Initiative

TriHealth hospitals pay $10 million to adopt IBM Watson Health enterprise imaging

4. Take an intro ML class online – for free.



Get a sense of how computer scientists think. We are not so different, but there are clinical things that they just don't know. And that brings us to:

5. Own this paradigm shift.

Once we have an idea of how developers think, we can offer helpful feedback.

“I love my EMR” said no one ever. Let’s be involved in this tech shift. When your institution adopts an algorithm, insist on a system to provide user feedback.

There are so many areas to improve within radiology and between radiology and our clinical counterparts. Be creative. Think big.

6. Subscribe to a few newsletters with AI and ML stories, healthcare and others:



We are at the very beginning of this new time in radiology and quite frankly you could probably navigate the rest of your career and avoid any significant change.

But where is the fun in that?


Morning Routine – 3 Years Later

March 18, 2015 was the first day of my daily morning routine, and I published an article about it in October of that year.

Answering the phone at work has been a noticeable change.

At work today, listen to how others and you answer the phone. I used to think answering the phone was not my real job. I was trained to be a radiologist. And during that training, any time the phone rang it was a distraction.

But during residency I noticed that no matter how curt I was on the phone, it kept ringing. As a staff I decided to at least be nicer, but it still frustrated me.

The phone rang. I got frustrated.

I can’t predict when the phone will ring, but I can choose my reaction.

Over the past 3 years of making time for myself each morning, I have been able to put a little space between things I cannot predict and my reaction.

Now the phone rings. I take a slight pause and answer like a calm professional (most of the time).

Of course my ringing phone is almost always a fellow provider looking for help. I make time to help and I am happy to do it (most of the time).

You’ve made it this far in your career, so you know you can Google lots of resources; this one is a decent start. No affiliation to me.

RSNA and ML: 3 Big Questions

There is no doubt image recognition will be an increasing part of our radiology practice. Now what? In addition to the obvious next question on everyone's mind (how can we use it?), here are three more.

1. How can we better predict which patients need scanning? Will it be clinical decision support? Will new legislation help or hurt? Can we use the rest of the clinical, social, genomic, and family history to better care for patients overall, including proper imaging?

2. Along those lines, is there a system to support ordering providers, not just in imaging utilization, but in working with lab, blood bank, pharmacy, physical therapy, and the many other teammates standing by to help with patient care? Can we decrease variability, improve accuracy, and offer more value to the patient and the system?

3. Who is going to carefully follow incidental findings? Does the average radiologist know who is supposed to come in next month for a follow-up scan? Should this duty be taken on by radiology? Can we borrow systems from our colleagues in sales and build a Patient Relationship Management platform, analogous to their CRM (Customer Relationship Management)? If ML algorithms are expected to find more than I can, how can we prepare for this now?
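The "Patient Relationship Management" idea above can be sketched in a few lines: record each incidental finding with a follow-up due date, then surface the overdue ones as a worklist. All names, fields, and dates below are hypothetical placeholders, not a real system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FollowUp:
    patient_id: str
    finding: str
    due: date

def overdue(followups, today):
    """Return the follow-ups whose due date has already passed."""
    return [f for f in followups if f.due < today]

worklist = [
    FollowUp("pt-001", "8 mm lung nodule", due=date(2018, 5, 1)),
    FollowUp("pt-002", "renal cyst", due=date(2018, 9, 1)),
]
missed = overdue(worklist, today=date(2018, 6, 1))  # pt-001 is overdue
```

The hard part is not the code; it is deciding who owns the worklist and who calls the patient, which is exactly the question posed above.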

There will be many RSNA updates over the next week; I'll be following to see who is thinking about how radiology fits into the bigger picture.

Have fun in Chicago and hope to see you there next year.

Three questions as we craft our ML/radiology future

As we watch the medical imaging future unfold, there are reasonable questions to consider.

As a radiologist, I am very excited. I welcome the upcoming changes and the new tools being developed as we speak. While the computer science and data analytics experts create these tools, I challenge my colleagues to create a framework that best utilizes them and elevates the standard of care.

A machine learning algorithm can find things that I cannot. This is happening right now, and it is impressive. My first question is: what if the study has no findings? How could we avoid an unnecessary study to begin with? The financial, time, and possible radiation costs could be avoided. Can we apply deep learning algorithms to clinical scenarios and avoid CT radiation for an 11-year-old boy with belly pain and possible appendicitis who ultimately has enteritis?

In a similar vein, why does a common emergent CT scan start at the diaphragm and end at the symphysis pubis? Over the years we have found that if the patient complains of pain from the ribs down, we'll probably find something in there. Can we tailor the exam? If there truly is pain at McBurney's point in the right lower quadrant, can we scan a smaller area? For younger patients, the radiation dose savings can add up.

Finally, when these systems become highly efficient at finding masses, nodules, cysts, focal thickening, and small fluid collections, how do we follow them up? We have a decent number of them now, and the system is inefficient at best and nonexistent at worst. Who will lean into this space? Radiologists, primary care, or our computer science and data analytics colleagues?

Let’s create the framework to expect nodules and carefully track them. I, for one, want to avoid a situation where a family member is diagnosed with advanced lung cancer, only to find that a small nodule was visible at the edge of an abdominal CT that included the lung bases a year earlier.

These are reasonable short-term goals. The horizon is broad, and to tell you the truth, I am looking forward to the next, next thing.

Clinical Cofounder

We can provide insightful questions for the machine learning engineer to answer, and we can provide supervised learning guidance throughout the process.

Like the technical cofounder, the clinical cofounder wants to create a product or service and provide value.

Medical training, however, can be as much a hindrance as a help.

In medical school, and subsequent residency, we are drilled on careful attention to detail for each patient. Unless you are in a leadership role, public health or preventive medicine, you rarely think of treating a group of people.

This is, for the most part, the opposite of scalable, and the opposite of the value a growing company needs.

By using skills learned while caring for one patient at a time, the forward-thinking clinician can lift his or her head above the noise to see trends. We can identify pain points in our own workflow and recognize them in our colleagues'. We can make changes in workflow and assess efficiency. We can measure diagnoses per patient per day.

These front line observations are critical for a founding team to create the first version of a product that clinicians will find valuable.

The medical tools created for and given to us can be flexed, and broken if needed. Electronic health records can be compared to banking, travel, or social media user experiences. Why can Expedia predict where I am going, but the radiology system cannot? This juxtaposition is how a clinician can help the user experience developer tailor an intuitive interface.

Physicians are great at continuing education, but the majority are not great at, or do not care to, broaden their knowledge base beyond it. We live in a time when on-demand classes exist to learn about machine learning and neural networks. Not that we would perform these functions ourselves, but in the context of a founding team, we can converse efficiently with the technical side to best create a system.

If my family member needs care today, the physician who couldn’t care less about all of this is the right person for us.

When we need care in 5 years, I am betting on significant improvement in healthcare delivery as a result of the clinical cofounder.