A simple robot that can learn the physical characteristics of the brain may soon give neurosurgeons finer control over surgical instruments during delicate brain operations.
In a new procedure being developed at NASA's Ames Research Centre in Mountain View, US, a robotic probe will learn the brain's characteristics using neural net software, the same type of software technology that helps focus camcorders. A tiny pressure sensor in the probe gently locates the edges of tumours as it enters the brain, while avoiding damage to critical arteries.
According to principal investigator Robert W Mah of the Neuroengineering Group at Ames, "The robot will be able to feel brain structures better than any human surgeon, making slow, very precise movements during an operation. It is actually a difference of density between brain tumours and normal brain tissue that allows neurosurgeons to find the tumour's edge through experience."
To reduce potential brain damage, the probe used on the robot is much smaller than standard probes, about one-third the size of a standard probe, which is 0.2 inches in diameter. A biopsy needle extracts a tissue sample through the probe. During standard brain surgery, the surgeon uses a magnetic resonance image to guide placement of the probe in the brain, and samples the tumour by inserting a biopsy probe through an opening in the skull.
According to Mah, if an artery is damaged as the doctor inserts the probe, the patient could bleed to death. In contrast, during the robotic neural net procedure, the probe's speed and maximum pressure are controlled by a smart computer programme that continues to learn as it gains experience. If the probe hits an artery, it will stop before penetrating it. The surgeon can then decide what to do next.
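The safety behaviour described above can be pictured as a simple control loop. The sketch below is purely illustrative, not NASA's software; the pressure limit, step size and readings are invented values chosen only to show the idea of advancing until a pressure spike forces a stop.

```python
# Illustrative sketch only: a probe-advance loop that halts when pressure
# exceeds a safety limit, in the spirit of the procedure described above.
# MAX_PRESSURE and STEP_MM are hypothetical values, not real parameters.

MAX_PRESSURE = 1.0   # hypothetical safety limit (arbitrary units)
STEP_MM = 0.1        # hypothetical advance per control step, in mm

def advance_probe(pressure_readings):
    """Advance one step per reading; stop early if the limit is reached."""
    depth = 0.0
    for p in pressure_readings:
        if p >= MAX_PRESSURE:
            # Artery-like spike: freeze and hand control back to the surgeon.
            return depth, "stopped: pressure limit reached, awaiting surgeon"
        depth += STEP_MM
    return depth, "target depth reached"

# Soft tissue readings, then a sudden spike as the probe meets resistance.
depth, status = advance_probe([0.2, 0.3, 0.35, 1.4])
print(depth, status)
```

Here the probe advances three steps (0.3 mm) and then stops instead of penetrating, which mirrors the article's description of the computer halting the probe and waiting for the surgeon.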
Ames is developing robotic telepresence surgery to deal with medical emergencies that may occur during long-duration human space flights. "On a long-duration mission, it is likely that there will not be a medical specialist on board to deal with a specific surgical problem," Mah said. "A surgeon on Earth could control the surgery by issuing high-level commands such as 'start surgery' or 'take sample' to the robot. The computerised robot would go as far as it could within safe limits. Then it would wait for the next command from Earth," he explained.
During early tests, scientists used tofu, a food made from soyabean with a consistency very similar to that of brain tissue, to model tissue types. "These tests were used to teach the neural net software what are normal brain tissues and arteries and what are not," Mah said.
The software learns to distinguish tumours from normal brain tissue by remembering the pressure signatures, or profiles, for each kind of tissue and then building a model from them. Using traditional computer programming to do brain modelling is not practical. "It is very difficult to model the human brain. A human computer programmer would have to mathematically model each patient and each kind of tissue," Mah said.
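To make the learning idea concrete, here is a minimal sketch, assuming nothing about NASA's actual software: a tiny one-layer "neural net" (logistic regression) trained on synthetic pressure profiles. The stiffness values and profile shapes are invented for the demonstration; the only point is that denser tissue produces a distinct pressure signature the model can learn.

```python
import math
import random

# Illustrative sketch only: learn to separate two tissue types from
# synthetic pressure profiles. All numbers here are made up for the demo.

random.seed(0)

def make_profile(stiffness, n=8):
    """Synthetic pressure readings as a probe advances into tissue."""
    return [stiffness * (i + 1) + random.gauss(0, 0.05) for i in range(n)]

# Denser 'tumour' tissue (label 1) pushes back harder than normal tissue (0).
data = [(make_profile(0.10), 0) for _ in range(50)] + \
       [(make_profile(0.25), 1) for _ in range(50)]
random.shuffle(data)

w = [0.0] * 8   # one weight per pressure reading
b = 0.0
lr = 0.1

def predict(x):
    """Sigmoid output: probability that the profile is tumour tissue."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Gradient-descent training: the weights come to encode the pressure
# signature that separates the two tissue types.
for _ in range(200):
    for x, y in data:
        err = predict(x) - y
        for i in range(8):
            w[i] -= lr * err * x[i]
        b -= lr * err

# Classify two fresh, unseen profiles.
is_tumour_soft = predict(make_profile(0.10)) > 0.5
is_tumour_dense = predict(make_profile(0.25)) > 0.5
print(is_tumour_soft, is_tumour_dense)
```

The trained model flags the denser profile as tumour-like and the softer one as normal. This is why, as Mah notes, hand-built mathematical models are unnecessary: the signature is learned from examples rather than programmed per patient.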
In addition to the brain surgery project, the Ames Neuroengineering Laboratory is developing other forms of software with potential uses such as balancing the centrifuge on the International Space Station, balancing airborne astronomical telescopes and emergency aircraft propulsion control.