Greg Freiherr has reported on developments in radiology since 1983. He runs the consulting service The Freiherr Group.
Will Smart Medical Machines Take Us to the Eve of Destruction?
It looked bad a half century ago. Real bad. Police dogs lunged at protesters in Alabama; frightened Vietnamese children ran from napalm; a student lay dead on the Kent State campus; an explosion ripped through the Army math building on the UW-Madison campus, killing a physics researcher unlucky enough to be working nearby in the early morning hours.
Barry McGuire sang incredulously, in one of the best hippie songs ever, that we were on the “Eve of Destruction.” I was a kid then. I believed we were. But I lacked perspective.
I hadn’t lived through the Depression of the 1930s. Or World War II with its death camps. Or the First World War with its trench warfare, chlorine clouds and mustard gas.
If you believe Stephen Hawking and Elon Musk, we are again on the eve of destruction. Smart machines, they warn, could extinguish the human race. Will modern medicine be the tip of that spear?
If we want artificially intelligent machines to help diagnose and treat patients, we must give them knowledge about human anatomy and physiology. We must design smart machines to understand human vulnerability to disease and injury. Are we drawing a blueprint of our own destruction?
The fear of artificial intelligence is palpable. But is it reasonable? Does “will” naturally accompany intelligence? Does intelligence inevitably breed sentience? Is intelligence inherently evil?
Our definition of machine intelligence has evolved with the competence of our technology. The Turing Test, proposed by Alan Turing in his 1950 paper “Computing Machinery and Intelligence,” held that a computer could be considered intelligent when people could not distinguish machine from human. Arguably, we passed that point in May 1997, when IBM’s Deep Blue beat reigning world chess champion Garry Kasparov.
It turns out machines can fool us quite easily. Today chatbots regularly trick people into believing they are talking to other people. But that doesn’t make those machines intelligent … or malevolent.
I am not arguing that smart machines are harmless. Far from it. Keeping humans in the loop is prudent. But the humans in that loop could pose an even greater danger to humankind, one proven by history: from Hiroshima to Auschwitz, politicians have leveraged advanced technologies to promote their own nefarious agendas.
In “For What It’s Worth,” a song about the protest movement of the 1960s, Buffalo Springfield sang: “Paranoia strikes deep. It starts when you’re always afraid. Step out of line, the man come and take you away.”
In today’s version, it’s artificial intelligence (AI) that is coming.
Fear is a powerful motivator. It can lead us to make the wrong decisions. No fear strikes deeper than the fear of the unknown. We’ve amplified our fears with Hollywood projections that accentuate the power of computing and the weakness of people. From HAL to Ex Machina, AI has become the myth.
But, in reality, humans are not weak. We are tenacious. Resilient. We are at our best when times are worst. Wouldn’t it be nice if we got a step or two ahead and reached our potential to solve problems before circumstances became dire?
Computing can help us get there. Medicine can be more efficient, more effective and less costly. We need to explore the possibilities that AI offers. But we need to do so cautiously. If we have learned anything from history, it should be that our noblest creations can destroy. Recall Oppenheimer’s lament following the Trinity test of the atomic bomb, when he cited Hindu scripture: “Now I am become Death, the destroyer of worlds.”
Aristotle tells us that the brave person “stands firm against the right things for the right end, in the right way, at the right time.” We must be brave when it comes to the development of AI.
We should not fear “intelligent” machines. But we should be vigilant.
We should fear what people will do with smart machines.
Editor’s note: This is the third blog in a series of four by industry consultant Greg Freiherr on Machine Learning and IT. The first blog, Will the FDA Be Too Much for Intelligent Machines?, can be found here. The second blog, Smart Scanners: Will AI Take the Controls?, can be found here.