Modalities used in spine surgery, from X-rays to computed tomography (CT) scan-based navigation systems, reached a turning point in the last decade in terms of technological advancement. We saw developments in robotic surgery and computer-assisted navigation, along with advances in minimally invasive techniques. Even with developments like these, much of the technology we continue to rely on in spine surgery today is built on radiography, which is long overdue for a makeover.
Consider that in June 1896, just months after X-rays were discovered, battlefield physicians were using them to locate bullets in wounded soldiers. Beyond its practical uses, the public was fascinated by X-ray technology. People paid at carnivals to view their skeletons through a fluoroscope that showed live, moving X-ray images. Fluoroscopes were even used to draw women into shoe stores, where they could see the bones of their feet inside their new shoes. This was long before we began to understand the dangers of radiation. Now, more than 125 years later, we are still using a slightly more advanced version of this technology to view spines and help inform how we repair them surgically.
We’ve relied on other modalities, including magnetic resonance imaging (MRI) and CT scans, for decades, and while more recent developments have brought us closer to real imaging advancements, we still have quite a distance to go. These modalities have served us well, but they don’t bring us closer to the kinds of rich data that artificial intelligence (AI) and more sophisticated modalities can interpret. We should be developing imaging technologies that collect clean, informative datasets we can feed into algorithms to fuel truly autonomous AI. These improved imaging technologies will also better inform robot-assisted surgery, improving precision and accuracy because the robot will have higher quality information guiding its movements.
Keeping on Top of Technological Advancements
When looking at patient outcomes, it is safe to say that spine surgery has improved over the years. It is far less invasive than it was in its earliest days, and patient recovery time has decreased significantly. But technology is evolving rapidly: artificial discs that were new and exciting a few years ago seem fairly basic now. Pedicle screw placement by a surgical robot once seemed exciting and sophisticated; now it feels limited in scope. The same goes for surgical approaches: spinal fusion was once the premier solution for back pain until we learned about disc replacement. We’re constantly looking for new ways to minimize patients’ pain, provide relief and allow them to move optimally in their lives.
These technologies have advanced spine surgery in a variety of ways, but none of them collect data intentionally in a way that would enable us to provide more personalized care, expand the capacity of our overloaded medical workforce, or create more intuitive, efficient workstreams. By understanding the types of data we want to collect and how we want to apply them, we can obtain better information about patient conditions and gather advanced data on surgeon performance. It’s incredible to consider that 50 gigabytes of data are generated during each hour of surgery, yet that data is not being used to the benefit of the patient, physician or payor. We need to better collect, analyze and apply this information so we can conduct more precise, less invasive procedures.
Innovation in advanced imaging, navigation and robotics will improve all aspects of surgery for patients and surgeons alike. By extracting high-quality information from surgery, we can better answer questions such as: Why did the surgeon make a certain decision? Why did they proceed this way rather than that way? What about the patient’s anatomy prompted that decision? How long did it take the surgeon to decide how to proceed? Would that information be useful in the patient’s EHR? If we can collect and analyze key data from each aspect of a procedure, this level of data collection has the power to improve financial outcomes for medical centers and patients alike by optimizing workflows to reduce operative costs, strain and recovery time.
Applying Advanced Imaging Solutions
Surgeon training is also a vast area of exploration, extending from initial education to the refinement of existing skills. If surgeons can use advanced imaging solutions to review a procedure, they can study it in a way that streamlines and simplifies their approach and makes it more efficient.
A surgical robot can only work as well as it can see and understand where it is in relation to the human body. If the robot is off, it may perfectly drill a screw in exactly the wrong spot. Our industry needs to be looking to accentuate and enhance human performance. This is an integral part of the next evolution of robotics, moving away from the legacy navigation systems that have been around for the last 20 years. We are starting to look at highly refined, highly precise ways of navigating robots, which will open up their potential to handle increasingly complicated aspects of a procedure.
Radiography and fluoroscopy laid the foundation for improving surgical capabilities in our field through innovation. We’ve been able to advance these technologies marginally and introduce better navigational tools. We’ve already begun to incorporate better imaging technologies into the OR, but what we’re using is a stay suture: it won’t give us the powerful, valuable data needed to propel us forward long term. We need to go further with the tools that will carry us into the next century: more sophisticated, data strategy-influenced robotics, augmented/virtual reality and smarter navigation systems. Robot-assisted navigation is only as good as the information available to it. With these improvements, surgical navigation technology will better understand where it is operating, fueled by higher quality data that allows it to “see” more clearly than today’s plug-and-play robots, which are limited in their scope of abilities. With higher quality data, we can extract valuable insights that will improve patient outcomes, physician experience and financial margins in ways our industry has yet to see.
Samuel Browd, MD, PhD, is the Co-founder and Chief Medical Officer at Proprio, Professor of Neurological Surgery at the University of Washington, and a board-certified attending neurosurgeon at Seattle Children’s Hospital, Harborview Medical Center and the University of Washington (UW) Medical Center. He received his MD and PhD at the University of Florida, completed his neurosurgical residency at the University of Utah and a pediatric neurosurgery fellowship at the University of Washington/Seattle Children’s Hospital, and completed a research postdoctoral fellowship on functional magnetic resonance imaging and operative navigation. In co-founding Proprio alongside Dr. Joshua Smith of the University of Washington’s Sensor Systems Labs, UW MBA graduate Gabriel Jones and computer vision specialist James Youngquist, Browd sought to leverage the emerging technologies of AR/VR and AI to revolutionize the way surgeons navigate human anatomy.