AI in Health and Medical Imaging – 5 Problems to Solve
Some of these have early solutions but nowhere near widespread distribution.
- Variability in care. There is variability between doctors in the care they provide, from ordering tests and imaging exams to specialist referrals, and ultimately in the outcomes for their patients. Seven years ago Walmart created its Center of Excellence (COE) program for its 1.5M employees, sending sick and injured employees to centers with the best outcomes. Collecting and reviewing the clinical and outcome data took a significant amount of work, and only about 5,000 Walmart employees have used the COE program to date. Of course, once the system is in place, maintenance is manageable and the benefits can be realized over the long term. In The Checklist Manifesto, Atul Gawande notes that physicians hold up independence as their badge, whereas pilots hold up discipline as theirs. Most conditions facing a patient are similar to those of many other patients, and decision support mechanisms and reporting, coupled with leadership tactics for doctors, are crucial to decrease variability and improve quality of care. This is not an easy task, but finding, coaching and holding outlying doctors accountable is needed. There is reasonable progress in capturing data to apply analytics, and this is a great area to deploy resources. Once benchmark data is clean, AI can be applied to find outlying providers, ideally in real time.
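As a minimal sketch of that last step, finding outlying providers on clean benchmark data can begin with something as simple as a z-score screen. The metric (repeat-imaging rate per 100 studies), the provider names and the threshold below are all illustrative assumptions, not real data.

```python
# Minimal sketch: flag providers whose benchmark metric sits far from the
# group. Metric, names and threshold are made up for illustration.
from statistics import mean, stdev

def flag_outliers(rates, z_threshold=2.0):
    """Return providers more than z_threshold standard deviations
    from the group mean on the benchmark metric."""
    mu, sigma = mean(rates.values()), stdev(rates.values())
    if sigma == 0:
        return []
    return [p for p, r in rates.items() if abs(r - mu) / sigma > z_threshold]

# Hypothetical repeat-imaging rates per 100 studies for ten providers.
repeat_rates = {
    "dr_a": 4.1, "dr_b": 3.8, "dr_c": 4.4, "dr_d": 3.9, "dr_e": 4.0,
    "dr_f": 4.2, "dr_g": 3.7, "dr_h": 4.3, "dr_i": 4.1, "dr_j": 12.9,
}
print(flag_outliers(repeat_rates))  # ['dr_j']
```

A real deployment would use risk-adjusted, cleaned benchmark data, and a real-time version would recompute as new encounters are recorded.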
- Waste. Physician variability is one component of waste, and per JAMA, of the estimated $935B in healthcare dollars wasted annually, up to $100B is in low-value care. While it is not realistic to recoup all of this estimated waste, it certainly demonstrates inefficiencies to be addressed. Walmart sees the overall benefit of measuring outcomes and does not focus on short-sighted, small-population ROI. The JAMA article covers other areas as well, including failure of care delivery and failure of care coordination. This is intuitive, as we hear from friends and family who have experienced (or have experienced ourselves) the frustrating system by which our care is delivered. Some large private and academic institutions are making significant strides by collecting data on the process and looking at patient groups, condition by condition, to identify common sticking points and create treatment paths. The true leaders have this data, captured and clean, to apply data analytics properly and find even more areas to improve. The article notes $265B wasted in administrative complexity, and I will give a specific example in the third problem below.
- Redundant administrative burden. In a radiology practice, there is plenty of revenue generated to pay the radiologists well and to hire people to make other problems go away. While systems could be created, it is simply cheaper and easier to hire more people to fax paper, scan paper, make phone calls and schedule appointments. Progress is being made on this front with the acquisition of radiology groups to consolidate tasks for automation, but that is certainly not the norm for the 30,000 radiologists in the United States. Simple business systems will go a long way toward decreasing the administrative burden, and once those systems are in place, the collected data can be further evaluated with algorithms to find thousands of micro-inefficiencies and improve upon them.
- Too few AI Imaging Algorithms. Within the past 10 years, nearly 70 AI algorithms have been approved by the FDA for medical imaging. To place this in context, Dr. Charles Kahn, a longtime leader in radiology, states that there are over 12,000 medical conditions that cause imaging findings. This means that for most radiologists, the current AI platforms cover an insignificant part of their work. For some, AI is gaining traction, particularly in large teleradiology groups, where smart worklists and triage algorithms are making a real improvement. For fewer radiologists still, there are the academic pioneers who use this every day and see mind-boggling results not only in imaging, but in clinical data as well. In addition, Dr. Kahn has painstakingly curated over 4,000 imaging findings associated with those 12,000 conditions. To clarify, a ‘finding’ can be many things: the presence or absence of something, an increase or decrease in size, or a change in appearance. Many of these imaging findings are perfectly suitable for training with a properly sized data set. Another point to consider is that these are findings we can see with human eyes; current algorithms are already quantifying imaging data that I cannot see. I would like to see 1,000 AI imaging algorithms in use to get a sense of what AI can really do.
- Voice Recognition is Not Quite Reliable Yet. As a radiologist, I use voice recognition many hours per day to dictate my reports, which are subsequently sent to the medical record. Radiologists are among the heaviest users of voice recognition, typically dictating between 10,000 and 20,000 reports annually. In addition, Natural Language Processing (NLP) is employed to make this and other unstructured prose clinically usable. Both of these applications of AI, voice recognition and NLP, are gaining significant traction, but there is a safety issue to keep front of mind. In your day-to-day life, how many times do you use Siri, Alexa, Hey Google or another platform? How many times is it exactly correct? In my daily radiology work, I look through images to find and interpret abnormalities, then dictate and proofread the report. Anyone who has read a few radiology reports can attest that proofreading is the least prioritized step; nonsense syllables and incorrect words appear reasonably frequently. The next human reading the report can fill in the blanks to get the important points and can look at the images again to clarify, but as voice recognition and NLP progress, ideally there will be an electronic editor looking over my shoulder, decreasing the chances of dictation errors. Additionally, it would be ideal to capture patient and doctor interactions in a way that is helpful to both sides. In a primary care encounter, it would be comforting to know everything was captured properly with voice recognition, for both the patient and the doctor to use as a reference. Adding an asynchronous component to allow for supplementing or correcting information would also be helpful.
Particularly when we are in the parking lot after the appointment and say, “Oh, I forgot to ask about this!” With asynchronous patient data collection, we could say, “Hey Siri, please ask my doctor about the colon cancer screening options best suited for me when I turn 50.” An algorithm could capture this on the medical side, check your record for risk factors and help your doctor decide whether a fecal test is an appropriate substitute for a colonoscopy. Your doctor could then say, “Hey Google, reply to Ty and say the data supports the fecal test at this time. Please see the attached risk matrix and explanation.”
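One simple form the electronic editor could take is screening a dictated report against an approved vocabulary and flagging anything unrecognized for a second look before the report is signed. A minimal sketch, assuming a tiny illustrative lexicon and a made-up report sentence (a real system would use a full radiology vocabulary and context-aware checks):

```python
# Minimal sketch of an "electronic editor": flag words in a dictated report
# that are not in an approved lexicon, so likely transcription errors get a
# second look. Lexicon and report text are illustrative assumptions.
import re

LEXICON = {
    "no", "acute", "intracranial", "hemorrhage", "mass", "effect", "or",
    "midline", "shift", "the", "ventricles", "are", "normal", "in", "size",
}

def flag_suspect_words(report):
    """Return words from the report that are not in the approved lexicon."""
    words = re.findall(r"[a-z]+", report.lower())
    return [w for w in words if w not in LEXICON]

report = ("No acute intracranial hemorrhage, mass effect or midline shift. "
          "The ventricools are normal in size.")
print(flag_suspect_words(report))  # ['ventricools']
```

Flagged terms could be highlighted in the reporting software at signing time, turning the least prioritized step, proofreading, into a guided one.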
You may be interested in a similar article: Ten Boxes to Check Before Investing in an Early Stage AI Startup
Ty Vachon is a practicing radiologist and a speaker and author on machine learning in medical imaging and healthcare. He is determined to provide clinically useful tools that help his healthcare colleagues leverage the vast amounts of data generated daily. He advises several companies and lives in San Diego, California. Feel free to email to learn more: firstname.lastname@example.org