Virtual reality and artificial intelligence are being touted as the transformative tools that will reduce human error in the operating room. But how do they work? We speak to Gabriel Jones, CEO and co-founder of Seattle-based Proprio, to find out.
Proprio was founded in 2016 by Dr. Sam Browd, a pediatric neurosurgeon, with the initial goal of eliminating the need for loupes - the magnifying glasses surgeons wear to perform delicate operations - and replacing them with a digital alternative. In the four years since, Browd and his co-founders Gabriel Jones and James Youngquist (Chief Technology Officer) have added machine learning, computer vision, robotics and mixed reality to their solution to augment human vision during surgery.
"Current surgical techniques are dependent on preoperative CT or MRI imaging to plan a surgery, and on repetitive, irradiating reimaging during the operation to confirm changes to the anatomy," Jones explains. "There is currently no surgical navigation system that provides real-time 3D information on the actual position of the anatomy a surgeon is operating on. Proprio provides real-time anatomical location and presents the surgeon with updated, instant information in a mixed reality environment, telling them where they need to be so they never have to take their eyes off the surgical site. The result is better surgical accuracy and efficiency."
Artificial intelligence is becoming widely adopted in healthcare, particularly in diagnostics for conditions like diabetes, cancer and Alzheimer's. Jones points out that a team from Stevens Institute of Technology recently developed an AI-driven algorithm that diagnosed Alzheimer’s with more than 95 percent accuracy.
Meanwhile, researchers at Google used AI to analyse CT scans of the lungs of high-risk lung cancer patients, with the AI outperforming radiologists when it came to predicting which patients would develop cancer.
The use of VR is also showing promise, particularly for pre-op planning and surgical training. “VR allows for low-risk training for high-risk scenarios, and enables surgeons to remain up-to-date on new procedures and equipment without the need for in-person, time-intensive training,” Jones says.
He adds that no other company is deploying both AI and VR to inform live surgeries. “The commonplace usage of these technologies will only happen once the technologies themselves become transparent to the user. AI and VR are not novelties – their functionality will become so integrated into operative workflows and their value so apparent that they will become indispensable.”
“There is a term we use for this phenomenon that defines our design philosophy at Proprio: ‘calm design’. We can deliver only the information a surgeon needs, just when they need it. At that point, adoption is natural and usage becomes commonplace.”
Jones says current navigation technologies are time-consuming and costly, so Proprio aims to demonstrate improvements to surgical accuracy without a steep learning curve for either the surgeon or their support staff. “Our preclinical studies are already showing significant improvements in both accuracy and ease of adoption. The exciting part is what surgeons can do with the information and insights that are captured by using our system. We’re not disclosing those capabilities yet, but rest assured, they are revolutionary.”