Computer scientists at Johns Hopkins are developing mathematical models to capture the safest and most effective ways to perform surgery, including surgical tasks such as suturing, dissecting and joining tissue.
The team's long-term goal is to develop an objective way of evaluating a surgeon's work and to help doctors improve their operating room skills. Ultimately, the research also could enable robotic surgical tools to perform with greater precision.
The project, supported by a three-year National Science Foundation grant, has yielded promising early results in modeling suturing work. The researchers performed the suturing with the help of a robotic surgical device, which recorded the movements and made them available for computer analysis.
'Surgery is a skilled activity, and it has a structure that can be taught and acquired,' said Gregory D. Hager, a professor of computer science in the university's Whiting School of Engineering and principal investigator on the project. 'We can think of that structure as "the language of surgery." To develop mathematical models for this language, we're borrowing techniques from speech recognition technology and applying them to motion recognition and skills assessment.'
Complicated surgical tasks, Hager said, unfold in a series of steps that resemble the way that words, sentences and paragraphs are used to convey language. 'In speech recognition research, we break these down to their most basic sounds, called phonemes,' he said. 'Following that example, our team wants to break surgical procedures down to simple gestures that can be represented mathematically by computer software.'
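To make the analogy concrete, the sketch below shows one common way such a decomposition can be set up: a hidden Markov model over per-timestamp kinematic features, where each hidden state plays the role of a gesture 'phoneme.' The article does not specify the team's tools; the hmmlearn library, the feature dimensions and the synthetic data here are illustrative assumptions, not the researchers' actual method.

```python
import numpy as np
from hmmlearn import hmm

# Stand-in recording: 500 timestamps x 6 kinematic features (e.g., tool-tip
# position and velocity). Real input would come from the robot's motion logs.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))

# Model the task as a sequence of hidden "gestures" (the surgical analogue of
# phonemes); each hidden state emits Gaussian-distributed motion features.
model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50,
                        random_state=0)
model.fit(X)

# Decode the most likely gesture label at each timestamp, segmenting the
# continuous motion stream into discrete gesture units.
gestures = model.predict(X)
print(gestures[:20])
```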
With that information in hand, the computer scientists hope to be able to recognize when a surgical task is being performed well and also to identify which movements can lead to operating room problems. Just as a speech recognition program might call attention to poor pronunciation or improper syntax, the system being developed by Hager's team might identify surgical movements that are imprecise or too time-consuming.
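One hedged way to picture that kind of assessment: fit separate models to trials recorded from expert and novice operators, then score a new trial under each and see which explains the motion better. The data, library and decision rule below are assumptions chosen for illustration, not details reported in the article.

```python
import numpy as np
from hmmlearn import hmm

def train_model(trials, n_states=4):
    """Fit one Gaussian HMM to a list of (timestamps x features) trials."""
    X = np.vstack(trials)
    lengths = [len(t) for t in trials]
    m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50, random_state=0)
    m.fit(X, lengths)
    return m

# Synthetic stand-ins: expert motion is assumed tighter, novice motion noisier.
rng = np.random.default_rng(1)
expert_trials = [rng.normal(0.0, 0.5, size=(300, 6)) for _ in range(5)]
novice_trials = [rng.normal(0.0, 1.5, size=(300, 6)) for _ in range(5)]

expert_model = train_model(expert_trials)
novice_model = train_model(novice_trials)

# Score a new trial under each model; the higher log-likelihood suggests which
# population the recorded motion more closely resembles.
new_trial = rng.normal(0.0, 0.6, size=(300, 6))
verdict = ("expert-like"
           if expert_model.score(new_trial) > novice_model.score(new_trial)
           else "novice-like")
print(verdict)
```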
But to get to that point, computers first must become fluent in the 'language' of surgery. This will require computers to absorb data concerning the best ways to complete surgical tasks. As a first step, the researchers have begun collecting data recorded by Intuitive Surgical's da Vinci Surgical Systems. These systems allow a surgeon, seated at a computer workstation, to guide robotic tools to perform minimally invasive procedures involving the heart, the prostate and other organs. Although only a tiny fraction of hospital operations involve the da Vinci, the device's value to Hager's team is that all of the robot's surgical movements can be digitally recorded and processed.
When a surgeon operates the controls of a da Vinci robotic system, the device records these hand movements.
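The recorded hand movements would typically arrive as a per-timestamp log. The snippet below sketches how such a log might be turned into the feature matrix that models like those above consume; the file name, column names and chosen kinematic channels are hypothetical, since the article only says the movements are digitally recorded.

```python
import numpy as np
import pandas as pd

# Hypothetical per-timestamp motion log exported from the recording system.
log = pd.read_csv("suturing_trial_01.csv")

# Example kinematic channels: left/right tool-tip positions and gripper angles.
channels = ["lx", "ly", "lz", "rx", "ry", "rz", "l_grip", "r_grip"]
X = log[channels].to_numpy(dtype=float)

# Append finite-difference velocities so each row captures both where the
# tools are and how fast they are moving.
velocities = np.vstack([np.zeros((1, X.shape[1])), np.diff(X, axis=0)])
features = np.hstack([X, velocities])
print(features.shape)  # (timestamps, 16)
```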
In a paper presented at the Medical Image Computing and Computer-Assisted Intervention Conference in October 2005, Hager's team announced that it had developed a way to use data from the da Vinci to mathematically model surgical tasks such as suturing, a key first step in deciphering the language of surgery. The lead author, Johns Hopkins graduate student Henry C. Lin, received the conference award for best student paper.
'Now, we're acquiring enough data to go from "words" to "sentences,"' said Hager, who is deputy director of the National Science Foundation Engineering Research Center for Computer-Integrated Surgical Systems and Technology, based at Johns Hopkins. 'One of our goals for the next few years is to develop a large vocabulary that we can use to represent the motions in surgical tasks.'
The team also hopes to incorporate video data from the da Vinci and possibly from minimally invasive procedures performed directly by surgeons. In such operations, surgeons insert instruments and a tiny camera into small incisions to complete a medical procedure. The video from the camera could contribute additional data to the team's efforts to identify effective surgical methods.
Hager's Johns Hopkins collaborators include David D. Yuh, a cardiac surgeon from the School of Medicine. 'It is fascinating to break down the surgical skills we take for granted into their fundamental components,' Yuh said. 'Hopefully, a better understanding of how we learn to operate will help more efficiently train future surgeons. With the significantly reduced number of hours surgical residents are permitted to be in the hospital, surgical training programs need to streamline their training methods now more than ever. This research work represents a strong effort toward this.'
Hager's other collaborators include Sanjeev Khudanpur, a Johns Hopkins assistant professor of electrical and computer engineering, and Izhak Shafran, who was a postdoctoral fellow affiliated with the university's Center for Language and Speech Processing and who is now an assistant professor at the Oregon Graduate Institute.
Hager cautioned that the project is not intended to produce a 'Big Brother' system that would critique a surgeon's every move. 'We're trying to find ways to help them become better at what they do,' he said. 'It's not a new idea. In sports and dance, people are studying the mechanics of movement to see what produces the best possible performance. By understanding the underlying structures, we can become better at what we do. I think surgery's no different.'
Source: Eurekalert