How to speak your mind, literally.
Talking brains have been a staple of science fiction and comic books, usually cast as villains using their considerable intellect to destroy or conquer the world and implying that nerds with access to money and weapons can be really dangerous. The Brain, from DC Comics’ Doom Patrol, was essentially a raw intellect with a grudge and a mission, relying on powerful computer networks and machinery to stay alive and subjugate the planet to his will. Despite being a disembodied mind, he found a way to talk by using a synthesizer that read his thoughts and vocalized his commands. When the comics featuring him were being drawn, that technology was science fiction. But today, a voice synthesizer able to turn thoughts into speech has been built, and it could help severely disabled patients talk to those around them.
As some of the complex and powerful tools used by brilliant supervillains in pop culture spring from the page into high tech labs, they’re being used to help the disabled overcome severe setbacks to their bodies and to offer a way to cope with one of the most terrifying phenomena known to medicine: being locked in. Imagine being unable to move or feel your body while being perfectly aware of what’s going on around you. It’s as if you’re inhabiting a slab of meat, trying to scream that you’re here, you’re listening and you can talk, but you just can’t do it. Usually the tiniest voluntary gesture, repeated on demand, is what alerts doctors to what’s really going on, and medical professionals are well aware that some brain trauma can spare the parts of the mind responsible for consciousness, cognition and response while devastating those meant to turn ideas and thoughts into actions. However, communication can be immensely difficult, and the process can even be hijacked by hucksters, leading to a lot of perfectly understandable anger and frustration from the patients.
This is where a new generation of technologies aiming to fuse humans with machines comes into play and can help those suffering from strokes, debilitating injuries or full paralysis. One type of brain implant, known as BrainGate, allows patients to manipulate cursors on a computer screen and, hopefully, one day command a number of machines that will help them regain just a little bit of independence. Now, a team of neurologists and computer scientists has created the voice synthesizer mentioned above, which picks up signals generated by a patient’s attempt to speak. After an electrode was implanted into the skull of a 26-year-old victim of a brain stem stroke that left him paralyzed and locked in, special software was able to parse and identify the activity coming from his speech motor cortex. By matching the frequencies generated in the cortex, the software tries to predict the sounds the patient wants to say and, via a synthesizer, says them out loud. The process can take as little as 50 milliseconds, about the same amount of time it takes an average person to do exactly the same thing with his or her mouth.
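To make the decode-and-synthesize loop concrete, here is a minimal sketch in Python. It is purely illustrative, not the team’s actual algorithm: the neural data is simulated, and a simple least-squares linear filter stands in for the real decoder, mapping firing rates in 50 millisecond bins to the formant frequencies that would drive a speech synthesizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training data: firing rates from a few recorded units
# (rows = 50 ms time bins, columns = units), paired with the formant
# frequencies F1 and F2 (in Hz) the speaker was attempting to produce.
n_bins, n_units = 200, 6
rates = rng.poisson(lam=20, size=(n_bins, n_units)).astype(float)
true_weights = rng.normal(size=(n_units, 2)) * 30          # hidden mapping
formants = rates @ true_weights + np.array([500.0, 1500.0])

# Fit the decoder by least squares: find W and b so that
# rates @ W + b approximates the intended formants.
X = np.hstack([rates, np.ones((n_bins, 1))])               # add bias column
coef, *_ = np.linalg.lstsq(X, formants, rcond=None)
W, b = coef[:-1], coef[-1]

def decode(bin_rates):
    """Map one 50 ms bin of firing rates to an (F1, F2) pair in Hz."""
    return bin_rates @ W + b

# In a running system, each decoded (F1, F2) pair would be handed to a
# formant synthesizer, closing the loop within roughly the 50 ms latency
# quoted above.
f1, f2 = decode(rates[0])
```

Because the loop runs one short time bin at a time, the patient hears the synthesized sound almost immediately, which is what makes the kind of rapid trial-and-error learning described below possible.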
Still, there are limitations to what the software can do, and it’s not perfect at saying exactly what patients want to say. However, this is where the human brain kicks in and compensates. The patient testing the voice system was able to master it quickly and radically improve how accurately it translated the signals generated by his speech motor cortex. This uncanny ability to train our brains to work with complex electronic devices is, in my humble opinion, one of the most exciting things in this area of computer science. In many experiments with mind-machine interfaces, we seem to be able to learn to use software and robotic prosthetics like a new limb. But that’s not all. The team writes that after just a little bit of training, the patient hit an accuracy peak of 89% at the end of the trials, a very impressive result, especially considering that all of this is done with just one three-wire electrode. With more electrodes reading the cortical output in much greater detail, performance could improve quickly, implying the potential for systems that hit over 90% accuracy out of the box rather than the 45% the current setup achieves at startup.
And there’s another first here. The signals from the patient’s speech motor cortex were transmitted wirelessly, so the implant can remain in his skull permanently, without external wires that could cause an infection at the implantation site. Not only does this experiment show that locked in patients may one day hold conversations with friends, family and doctors at the same rate as a typical person, but these abilities could be permanent, provided of course that the speech centers of the brain were not hit by the stroke or injury. While we should hope that ever more advanced medical technology will allow doctors to spare more of the brain during potentially lethal events, this line of research shows great promise for those of us unfortunate enough to have the abilities we take for granted suddenly taken away.
See: Guenther, F. H., et al. (2009). A wireless brain-machine interface for real-time speech synthesis. PLoS ONE, 4(12). DOI: 10.1371/journal.pone.0008218