Well, the short answer is “yes”; the longer answer is “not right now, but in a few years.” I know that is provocative and most people would disagree, especially radiologists. Yet Stanford researchers have built machine learning models and algorithms that can detect brain aneurysms more effectively than a radiologist.
What is a brain aneurysm?
A brain aneurysm is a bulge in a blood vessel in the brain that can leak or burst, causing brain hemorrhage, damage, or even death.
So the question is: if AI can do a better job of identifying aneurysms, can it be used in place of radiologists performing the same function? If it can, then we will not be limited by the number of radiologists there are, but by the number of servers we can add. Moore’s law applies to machines, not to humans, so over time it would become cheaper to deploy AI radiologists than human ones.
This new AI tool is built on an algorithm called HeadXNet. The researchers note, however, that the results depend on “scanner hardware and imaging protocols,” which are not standardized. Providers (hospitals or labs) may have different hardware and use different imaging techniques, and those differences will influence whether the AI tool reaches or misses an accurate diagnosis.
“A.I. could play a big role in supporting prevention, diagnosis, treatment plans, medication management, precision medicine and drug creation” __Bruce Liang, Chief Information Officer of Singapore’s Ministry of Health
In software development, versioning is one of the key tenets of good engineering. With a version control system such as Git, SVN, or CVS, one can go back in history to troubleshoot bugs or roll back a deployment. Wouldn’t it be cool if a similar system existed in medical imaging, one that helped radiologists quickly “see” whether a treatment is affecting the patient positively or negatively? Computer vision can process images and highlight differences between two or more of them in real time. That means a radiologist need not spend hours retrieving and interpreting a patient’s images to identify the differences; with a click of a button on their phone, they could see highlights of what has changed between images.
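To make the “highlight what changed” idea concrete, here is a minimal sketch of comparing two “versions” of the same grayscale scan. Everything here is illustrative: the tiny 3×3 images, the threshold, and the function names are all made up for the example, and a real system would first register (align) the two scans, a step omitted here.

```python
# Compare two grayscale scans of the same patient taken at different
# times, and mark every pixel whose intensity changed by more than a
# threshold. The mask is what a viewer could overlay as a highlight.

def diff_mask(scan_a, scan_b, threshold=30):
    """Return a binary mask of pixels that changed between two scans.

    scan_a, scan_b: 2D lists of pixel intensities (0-255), same shape.
    threshold: minimum absolute change to count as a difference.
    """
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(scan_a, scan_b)
    ]

week_1  = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
week_20 = [[10, 10, 10], [10, 90, 10], [10, 10, 10]]  # center brightened

mask = diff_mask(week_1, week_20)
print(mask)  # [[0, 0, 0], [0, 1, 0], [0, 0, 0]] -- only the center changed
```

Production tools would work on full DICOM images and use library routines (e.g. OpenCV’s image differencing) rather than nested lists, but the principle is the same: subtract, threshold, highlight.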
If such a hypothetical medical imaging versioning system existed, how would it work? How would it be implemented and deployed in hospital systems? Who would primarily use it, and how would it enhance treatment effectiveness?
“Medical imaging guides the course of much of patient care and is an essential element of biomedical research. From x-rays and ultrasound to computerized tomography (CT), functional magnetic resonance imaging (fMRI), and positron emission tomography (PET), medical imaging helps clinicians diagnose, treat, and understand a range of diseases and conditions, including cancer, cardiovascular disease, and neurodegenerative disorders.”
The Internet Working Group for Medical Imaging (IWDMI) defined the above as the four key pieces for better healthcare through effective medical imaging. I’m particularly interested in the “Advanced Computation & Machine Learning” aspect of the roadmap.
Here is a set of breast cancer images for a patient, taken at regular intervals. I’m not going to pretend to know exactly what’s going on in the following image, but anyone with half a brain can guess that it shows different stages of breast cancer and is trying to help the physician understand the treatment’s effectiveness over time (week 1 through week 20).
For a radiologist, pulling such a report is, I suspect, not straightforward. Retrieving images from disparate systems, putting them side by side for quick and easy comparison, reviewing the treatments (dosage, etc.) alongside the images, and viewing all of that over time to get a sense of disease progression probably takes hours, if not days.
This can be streamlined and automated with better image storage and retrieval and with computer vision. If we can reduce the time to generate such a report from days or hours to minutes or seconds, it would save precious time for physicians and might be a lifesaver for the patient.
Next in this series, we will look at where the current state of the art stands on this problem, and then at the possibilities of using the latest computer vision (CV) techniques to save time for radiologists and pathologists.
“If AI can recognize disease progression early, then treatments and outcomes will improve.”
Isn’t it fascinating how little we understand about the brain? There is a really good case for applying deep learning to recognize subtle patterns and changes in neuron activity, which can help in the early diagnosis of Alzheimer’s disease. Using positron emission tomography (PET) scans, researchers can measure the amount of glucose a brain cell consumes.
A healthy brain cell consumes glucose to function; the more active a cell is, the more glucose it consumes. As a cell deteriorates with disease, the amount of glucose it uses drops and eventually goes to zero. If doctors can detect the pattern of declining glucose consumption sooner, they can administer drugs to help patients recover cells that would otherwise die and cause Alzheimer’s.
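The screening idea here boils down to trend detection over a short time series. As a hedged sketch, the following fits a least-squares slope to a region’s uptake values across serial scans and flags a decline; the function names, sample numbers, and the decline threshold are all invented for illustration, and a real screen would use validated uptake measures and clinically tuned cutoffs.

```python
# Flag a declining trend in a brain region's glucose uptake across
# equally spaced serial scans, using an ordinary least-squares slope.

def uptake_slope(values):
    """Least-squares slope of uptake values over equally spaced scans."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

def declining(values, threshold=-0.05):
    """True if uptake is trending down faster than the cutoff."""
    return uptake_slope(values) < threshold

healthy = [1.00, 1.01, 0.99, 1.00]  # stable uptake over four scans
suspect = [1.00, 0.90, 0.78, 0.65]  # steady decline over four scans

print(declining(healthy), declining(suspect))  # False True
```

In practice a deep model looks for far subtler, spatially distributed patterns than a single regional slope, but the payoff is the same: catching the downward trend before too many neurons are lost.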
“One of the difficulties with Alzheimer’s disease is that by the time all the clinical symptoms manifest and we can make a definitive diagnosis, too many neurons have died, making it essentially irreversible.” __Jae Ho Sohn, MD, MS
Human radiologists are really good at detecting a focal tumor, but subtle global changes over time are harder to spot with the naked eye. AI is good at analyzing time series data and identifying micro-patterns.
Another area of research where AI is being applied to improve diagnosis is osteoporosis: detecting the disease and tracking its progression through bone imaging and comparison of subtle changes across a time series of images.
Stroke management is another area where machine learning has started to assist radiologists and neurologists. For example, here is a picture of how computers are trained on stroke imaging; the resulting model is then used to predict whether a “new image” shows infarctions or not (a yes-or-no answer).
Furthermore, the ML model can identify the exact location of the stroke and highlight it for physicians, saving precious time and helping expedite treatment. In stroke care, seconds shaved off can mean the difference between life and death.
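To show the shape of that two-part output (a yes/no answer plus a highlighted location), here is a toy stand-in for the trained model. The “model” below is just a sliding window that finds the darkest region of a small 2D scan and compares it to a cutoff; the data, cutoff, and function names are fabricated for the example, and a real detector would be a trained neural network, not a hand-set threshold.

```python
# Toy infarct "detector": answer the yes/no question and point at a
# location, by finding the darkest size x size window in a 2D scan.

def darkest_window(scan, size=2):
    """Return (mean_intensity, (row, col)) of the darkest window."""
    best = None
    for r in range(len(scan) - size + 1):
        for c in range(len(scan[0]) - size + 1):
            mean = sum(scan[r + i][c + j]
                       for i in range(size) for j in range(size)) / (size * size)
            if best is None or mean < best[0]:
                best = (mean, (r, c))
    return best

def detect_infarct(scan, cutoff=50):
    """Return (infarct_present, location) for a grayscale scan."""
    mean, loc = darkest_window(scan)
    return (mean < cutoff, loc)

scan = [
    [120, 118, 121, 119],
    [117,  30,  28, 120],
    [122,  29,  31, 118],
    [119, 121, 120, 117],
]
print(detect_infarct(scan))  # (True, (1, 1)) -- dark patch flagged
```

The point is the interface, not the method: the clinician gets both a binary answer and a highlighted region to review, which is where the time savings come from.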
The areas in which deep learning can be useful in Radiology are lesion or disease detection, classification, quantification, and segmentation.
“Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes)”. __RSNA
Convolutional neural network (CNN) algorithms have become popular for identifying patterns in data automatically, without hand-engineered features, especially in image processing. CNNs were developed on the basis of biological neuron structures. Here is an example of how biological neurons detect edges through visual stimuli, i.e., seeing.
and here is how a similar structure can be developed using CNNs
The “deep” in deep learning comes from the fact that there are multiple layers between inputs and outputs, as represented in the simplified diagram below.
If we apply the CNN structure above to radiology images as inputs, to detect disease or segment the image, we can get an output that highlights the areas of possible disease and/or an output that says what the image might represent.
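The building block those layers stack is a small filter slid across the image. As a minimal sketch, the following implements one convolution followed by a ReLU, using a hand-chosen vertical-edge kernel to echo the biological edge detection mentioned above; in a real CNN the filter weights are learned from labeled images, not written by hand, and the image here is invented for the example.

```python
# One convolutional layer in miniature: slide a 3x3 filter over a
# 2D image (no padding, stride 1), then apply a ReLU nonlinearity.

def conv2d(image, kernel):
    """Valid 2D convolution of two 2D lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def relu(feature_map):
    """Zero out negative responses, as a CNN activation layer does."""
    return [[max(0, v) for v in row] for row in feature_map]

vertical_edge = [[1, 0, -1],
                 [1, 0, -1],
                 [1, 0, -1]]

# Bright on the left, dark on the right: a vertical edge in the middle.
image = [[9, 9, 9, 0, 0, 0]] * 3

feature_map = relu(conv2d(image, vertical_edge))
print(feature_map)  # [[0, 27, 27, 0]] -- strongest response at the edge
```

A deep network stacks many such filter-plus-activation layers (with pooling in between), so that early layers respond to edges and later layers to larger structures, which is what lets the final layers flag lesion-like regions or classify the whole image.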
“Many software frameworks are now available for constructing and training multilayer neural networks (including convolutional networks). Frameworks such as Theano, Torch, TensorFlow, CNTK, Caffe, and Keras implement efficient low-level functions from which developers can describe neural network architectures with very few lines of code, allowing them to focus on higher-level architectural issues (36–40).”
“Compared with traditional computer vision and machine learning algorithms, deep learning algorithms are data hungry. One of the main challenges faced by the community is the scarcity of labeled medical imaging datasets. While millions of natural images can be tagged using crowd-sourcing (27), acquiring accurately labeled medical images is complex and expensive. Further, assembling balanced and representative training datasets can be daunting given the wide spectrum of pathologic conditions encountered in clinical practice.”
“The creation of these large databases of labeled medical images and many associated challenges (54) will be fundamental to foster future research in deep learning applied to medical images.” __RSNA