Nicholas Theodore, M.D. (center), and the Excelsius robot he designed for image-guided spine surgery. Image courtesy of Johns Hopkins Medicine.
October 20, 2017 — Surgeons at The Johns Hopkins Hospital have for the first time used a real-time, image-guided robot to insert screws into a patient’s spine. With last week’s surgery, Johns Hopkins joins the growing number of hospitals in the United States that offer robotic-assisted spine surgery.
“We are really excited to be able to offer this to our patients,” said Nicholas Theodore, M.D., professor of neurosurgery at the Johns Hopkins University School of Medicine and director of the Neurosurgical Spine Center of Johns Hopkins Medicine. The robot, he said, has the potential to improve patient safety and decrease procedure time in the operating room. Theodore, who invented the robot before joining the faculty at Johns Hopkins and maintains a financial interest in the technology, said, “This will take what we neurosurgeons do on a daily basis, elevate the art, enable us to do things much more precisely and allow us to perform our best every day.”
One main challenge in minimally invasive spine surgery for conditions such as degenerative disease, spinal tumors or trauma is placing instruments accurately with as few readjustments as possible. Currently, spinal screw placement relies on taking multiple X-rays during the procedure to confirm accurate positioning. “But we know that about 20 percent of spinal screws inserted are not perfect, so I set out to reverse-engineer and automate accuracy and precision,” said Theodore.
Theodore compares the problem to driving a car: take a quick glance to the side, and the steering wheel often drifts in the same direction as your eyes. Current image-guided surgical procedures, he says, require the surgeon to look back and forth between the patient and an image, which can introduce error into screw placement. While these placements are often “good enough,” that was not good enough for Theodore.
The new robot marries a computed tomography (CT) scan of the patient with the actual patient, allowing the surgeon to point to a spot on the CT scan and tell the robot to aim for that same spot. A camera tracks landmarks on the patient, and the robot matches what the camera sees to the CT image in real time. The biggest fear in this type of procedure is movement, Theodore said: what if the patient breathes or otherwise shifts slightly? The robot, however, can sense changes in position and adjust accordingly.
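At its core, matching the CT scan to the patient in real time is a registration problem: landmarks identified in the CT volume must be aligned with the same landmarks as the tracking camera sees them, and re-aligned whenever the patient moves. The Excelsius system's actual algorithms are proprietary, so the sketch below shows only one standard technique for this kind of alignment, point-based rigid registration via the Kabsch algorithm; the landmark coordinates and the simulated patient shift are purely illustrative.

```python
# Minimal sketch of point-based rigid registration (Kabsch algorithm).
# Illustrative only; not the Excelsius system's actual implementation.
import numpy as np

def rigid_registration(ct_points: np.ndarray, camera_points: np.ndarray):
    """Estimate rotation R and translation t mapping CT-space points onto
    camera-space points. Both arrays are shaped (N, 3)."""
    ct_centroid = ct_points.mean(axis=0)
    cam_centroid = camera_points.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (ct_points - ct_centroid).T @ (camera_points - cam_centroid)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cam_centroid - R @ ct_centroid
    return R, t

# Hypothetical landmarks identified in the CT volume (millimeters)
ct_landmarks = np.array([
    [0.0, 0.0, 0.0],
    [60.0, 0.0, 0.0],
    [0.0, 40.0, 0.0],
    [0.0, 0.0, 25.0],
])

# Simulate slight patient movement: a 3-degree rotation plus a small shift
theta = np.deg2rad(3.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
shift = np.array([2.0, -1.5, 0.5])
camera_landmarks = ct_landmarks @ Rz.T + shift

R, t = rigid_registration(ct_landmarks, camera_landmarks)

# A target chosen on the CT scan is mapped into camera (patient) space
ct_target = np.array([25.0, 10.0, 5.0])
patient_target = R @ ct_target + t
print(patient_target)            # approximately Rz @ ct_target + shift
```

In practice, a clinical system would track many more landmarks and rerun this kind of alignment continuously as the camera reports new positions, which is what allows a robot to compensate when the patient breathes or shifts.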
The system joins a handful of similar robots already on the market but works differently and, according to Theodore, holds more potential for other, non-spine uses in the future.
For more information: www.hopkinsmedicine.org