Greg Freiherr has reported on developments in radiology since 1983. He runs the consulting service, The Freiherr Group.
Why We Have to Pay Attention to AI Right Now
Where will artificial intelligence (AI) be in a year? Five years? A decade? A century?
The snapshot of AI we are viewing now shows only where the technology stands today. If we want AI to reach its potential for helping people, we have to look ahead to where it will be, and adjust what we are doing now.
In imaging we are building digital savants. Among the examples on the exhibit floor of RSNA 2017: GE’s algorithm, embedded on portable X-ray machines to spot conditions such as pneumothorax, and Siemens’ AI-fueled system for optimally positioning patients in high-end CT scanners. More recently came HeartSmartIMT Plus, whose cloud-based algorithms are designed to help cardiologists perform echocardiograms on patients in their offices, then analyze the images in the cloud.
These are just a few of the many algorithms being groomed for imaging. And they are only a small slice of the ones being developed for all of healthcare.
AI Tools
At the Healthcare Information and Management Systems Society (HIMSS) 2018 conference, multiple presenters described smart algorithms as tools. That’s all they are, said one presenter after another. Whether these algorithms are looking for patterns in clinical images or in petabytes of population health data, and regardless of whether the learning is supervised or unsupervised, smart algorithms are just tools.
But they learn. And that makes an enormous difference.
These tools are not being asked to judge their tasks as good or bad, or their findings as things that may help or harm a patient. They are simply being coded to stay within the scope of their tasks. They find things. By design, everything they learn is applied to one task area. They are assistants, limited to specific tasks, focused on individual problems and drawing on highly selective datasets.
But will AI developers be able to put blinders on smart algorithms forever? Will algorithms always be digital beasts of burden? Already some of the AI tools envisioned for near- or mid-term application are being groomed to examine images so they can “assess” the value of radiology reports. For example, if an algorithm spots an aneurysm in patient images and that aneurysm is not mentioned in the radiology report, the algorithm might flag the report as “incomplete,” pending review by the radiologist.
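As a rough illustration of how such a check might work, consider the minimal sketch below. The function names, finding labels and report format are invented for this column, not drawn from any vendor’s actual product; the point is only that “assessing” a report can amount to comparing what an image model detects against what the report text mentions.

```python
# Hypothetical sketch: flag a radiology report as "incomplete" when a
# finding detected in the images is not mentioned in the report text.
# detect_findings() and its labels are stand-ins, not a real vendor API.

REPORT_SYNONYMS = {
    "aneurysm": ["aneurysm", "aneurysmal"],
}

def detect_findings(images):
    """Stand-in for an image-analysis model; returns a set of finding labels."""
    return {"aneurysm"}  # pretend the model spotted an aneurysm

def review_status(images, report_text):
    text = report_text.lower()
    missing = [
        finding
        for finding in detect_findings(images)
        if not any(term in text for term in REPORT_SYNONYMS.get(finding, [finding]))
    ]
    # Anything the model saw but the report omits gets queued for review.
    return ("incomplete", missing) if missing else ("consistent", [])

status, missing = review_status(images=[], report_text="Unremarkable chest CT.")
print(status, missing)  # -> incomplete ['aneurysm']
```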
Smart algorithms may also be tasked with interpreting radiology reports for patients who access the reports and images through portals. Patient engagement is huge and is gathering momentum. To further that engagement, algorithms might be asked to explain findings in language understandable to the patient. To do so would require the algorithm to get to know each patient and tailor responses accordingly.
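A first approximation of that tailoring might be nothing more than a jargon-to-plain-language substitution keyed to what the portal knows about the reader. The glossary and the reading-level switch below are invented for illustration; genuinely knowing the patient would, of course, demand far more.

```python
# Invented sketch: substitute plain-language phrasing when a report is
# shown through a patient portal. GLOSSARY and reading_level are illustrative.

GLOSSARY = {
    "pneumothorax": "collapsed lung",
    "aneurysm": "bulge in a blood vessel wall",
}

def plain_language(report_text, reading_level="general"):
    if reading_level != "general":   # clinicians would see the original text
        return report_text
    text = report_text
    for term, plain in GLOSSARY.items():
        text = text.replace(term, f"{plain} ({term})")
    return text

print(plain_language("Findings suggest a small pneumothorax."))
# -> Findings suggest a small collapsed lung (pneumothorax).
```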
The simple truth is that we don’t know for sure that we will be able to control what these algorithms will become. Similarly, we cannot know whether we will be able to control the speed at which smart algorithms evolve.
We have some baselines, if we consider our own development. But the applicability is suspect. AI will not be constrained by physical or biological form. And it will be able to learn at unprecedented speeds.
But of greatest concern is that focusing on just one task forces the algorithm into a kind of tunnel vision that can compromise its decision-making. Recently, MIT grad student Joy Buolamwini found that a basic type of facial analysis software did not detect her face. Why? Because the coders hadn’t written the algorithm to identify dark skin tones and certain facial structures associated with black people.
It was an all-too-easy oversight. In digital photography, the first color images were calibrated against white. Little wonder that more advanced coding could be similarly colorblind.
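One way such blind spots could be caught before software ships is a simple audit of the training data and of performance by subgroup. The sketch below assumes a dataset that records a subgroup label for each example; the records and field names are invented, and a real audit would be far more involved.

```python
# Minimal sketch: audit a labeled dataset and a model's predictions for
# subgroup imbalance before deployment. Records and fields are invented.
from collections import Counter, defaultdict

examples = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "A", "label": 1, "pred": 1},
    {"group": "B", "label": 1, "pred": 0},  # errors concentrated in group B
]

counts = Counter(ex["group"] for ex in examples)
errors = defaultdict(int)
for ex in examples:
    if ex["pred"] != ex["label"]:
        errors[ex["group"]] += 1

for group, n in sorted(counts.items()):
    print(f"group {group}: n={n}, error rate={errors[group] / n:.2f}")
```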
Recognizing that smart algorithms are playing increasingly high-profile roles, Buolamwini says she is “on a mission to stop an unseen force that’s rising,” a force she describes as algorithmic bias. Comparing algorithms to viruses in a TED Talk, she warns that algorithms “can spread bias on a massive scale at a rapid pace.”
How biases might creep into the algorithms being written to analyze radiology data is impossible to say, just as it is impossible to say what effect these biases might have. There may be, however, a simple solution.
Human Governors
Tying AI to human intelligence could serve as a governor of sorts. For this governor to operate effectively, people must be in the loop when decisions are made. Because the workflow would depend on the speed and actions of the human, the algorithms could not race beyond the control of people. This is the comforting implication behind building algorithms that serve as human assistants.
But what if the human in the loop is incompetent or too intellectually lazy to question the conclusions of the algorithm? And what about algorithms designed to interact with patients? Who will perform quality control in these instances?
Even scarier, a time may come or a circumstance may arise when a person is not directly in the loop.
But again there is a solution. Make the governor an inherent dedication to the patient. How about building healthcare algorithms that put patients first?
In his science fiction, Isaac Asimov described four laws (the famous Three Laws of Robotics, plus a “zeroth” law added later) intended to keep smart robots from hurting humanity. Time and again, however, the unforeseen mucked things up. An interesting possibility: robots that unknowingly breach the laws because information is kept from them.
We may soon be using algorithms to assess medical reports; to identify weaknesses in human interpretations of medical data; to check whether follow-up tests recommended by a radiologist are done. In each instance, the algorithm will have been trained and then provided with highly selected, and limited, data sets. Limited data might not only reduce the effectiveness of the algorithm; it can also impose biases.
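To make the last of those three tasks concrete, a follow-up check could amount to cross-referencing recommendations extracted from reports against subsequent orders in the record. The data structures and dates in this sketch are hypothetical:

```python
# Hypothetical sketch: flag recommended follow-up exams that were never done.
# The record structures and dates are invented for illustration.
from datetime import date

recommendations = [
    {"patient": "P001", "exam": "chest CT", "due": date(2018, 9, 1)},
]
completed_exams = {
    ("P001", "abdominal MRI", date(2018, 7, 15)),
}

def overdue(recommendations, completed_exams, today):
    flagged = []
    for rec in recommendations:
        done = any(
            patient == rec["patient"] and exam == rec["exam"] and when <= rec["due"]
            for patient, exam, when in completed_exams
        )
        if not done and today > rec["due"]:
            flagged.append(rec)
    return flagged

print(overdue(recommendations, completed_exams, today=date(2018, 9, 15)))
# -> [{'patient': 'P001', 'exam': 'chest CT', 'due': datetime.date(2018, 9, 1)}]
```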
Should we take the opportunity to teach machines, for example, why follow-up tests recommended by a radiologist are important, rather than to simply spot whether they were done? Should we be teaching algorithms to look out for patient welfare?
Patient centrism could serve as a governor. It would be more effective and more practical than making sure that every smart algorithm has a human in the loop and that the human is competent.
Like Asimov’s laws, a “patient first” principle written into algorithms could be a cornerstone in the evolution of healthcare AI.