According to Edward Chang, the lead researcher and a neurosurgeon at the University of California San Francisco (UCSF), their findings can be used to create better speech-generating devices (SGDs) in the future to aid communication in people with speech difficulties, as is the case with stroke patients.
Researchers from UCSF sought to create a device that helps individuals with severe speech and language impairments communicate more effectively than current technologies allow; existing devices generate speech from eye and muscle movements.
For instance, the English physicist Stephen Hawking used such a device because a progressive motor neuron disease gradually impaired his speech and motor skills. The device used a sensor that detected and interpreted the slightest movements of Hawking's cheek.
Each movement he made moved a cursor to spell the first few letters of his intended word. A sophisticated predictive text mechanism then determined the best word to use based on a database of Hawking's books and lectures.
Although effective and reliable for the most part, the device still required a great deal of effort from Hawking himself because of his paralysis. His utterances also did not occur fast enough to keep pace with natural conversation.
Despite the wide range of SGDs now available for people with speech disorders, neurological conditions and developmental disabilities, Chang notes that to date, there is no SGD that allows its user to interact on the rapid timescale of real-time human conversation.
The researchers studied three patients with epilepsy who were scheduled for neurosurgery to treat their condition. Before their operations, each patient had a small array of electrodes placed on the surface of the brain for at least a week to pinpoint the origins of their seizures. None of the patients had preexisting speech disorders.
Chang and his team used the electrodes to record the patients' brain activities as each patient listened to a series of questions, responding to each one with an answer drawn from a given list of potential answers.
The team then built computer models programmed to match patterns of the patients' brain activities to the questions they heard and the answers they uttered.
The team also designed the experiment so that the set of valid answers depended on the context of the question. For instance, when describing their rooms, patients were limited to the following answers: bright, dark, hot, cold and fine.
Once “trained” on the electrode data and recordings from the question-and-answer sessions, the team's software could determine, in real time and from brain signals alone, which question a patient heard and which answer the patient spoke. The software was 76 percent accurate in identifying the question and 61 percent accurate in predicting the answer.
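To make the two-stage, context-aware idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the feature vectors stand in for learned brain-signal templates, and the real study used high-density cortical recordings and far richer statistical models. The sketch only illustrates the logic of first decoding the question, then restricting the answer search to that question's context.

```python
# Toy illustration of context-constrained decoding. All names, vectors,
# and questions below are invented for illustration only.

# Hypothetical "brain signal" templates learned during training:
# each question and each answer maps to a characteristic feature vector.
QUESTION_TEMPLATES = {
    "How is your room currently?": (0.9, 0.1),
    "How are you feeling?":        (0.1, 0.9),
}

ANSWER_TEMPLATES = {
    "bright": (0.8, 0.2), "dark": (0.7, 0.3),
    "good":   (0.2, 0.8), "tired": (0.1, 0.7),
}

# Context constraint: each question licenses only a subset of answers.
ANSWERS_FOR = {
    "How is your room currently?": ["bright", "dark"],
    "How are you feeling?":        ["good", "tired"],
}

def nearest(signal, templates):
    """Return the template label closest (squared Euclidean) to the signal."""
    return min(
        templates,
        key=lambda label: sum(
            (s - t) ** 2 for s, t in zip(signal, templates[label])
        ),
    )

def decode(question_signal, answer_signal):
    # Stage 1: decode which question was heard.
    question = nearest(question_signal, QUESTION_TEMPLATES)
    # Stage 2: decode the spoken answer, restricted to the answers
    # that are plausible given the decoded question's context.
    allowed = {a: ANSWER_TEMPLATES[a] for a in ANSWERS_FOR[question]}
    answer = nearest(answer_signal, allowed)
    return question, answer

print(decode((0.85, 0.15), (0.78, 0.22)))
# → ('How is your room currently?', 'bright')
```

Restricting stage 2 to the decoded question's answer set is what lets context boost accuracy: an answer signal that is noisy or ambiguous on its own is compared only against a handful of plausible candidates rather than the full vocabulary.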
The team noted that this is the first time this particular approach has been used to predict both heard and spoken words. Though limited to a certain list of words, the software allows for the integration of more contexts and words over time, which can help improve its decoding function.
All in all, their results demonstrate that a person's intention to utter certain words can be gleaned from brain signals alone and decoded in a conversational setting.
This breakthrough has important implications for people who are unable to communicate due to stroke, brain injuries and neurological disorders.
In its current form, the brain-reading software, dubbed the “neural decoder,” is limited to decoding predetermined words, phrases and sentences that it has been trained on. The team hopes that the software can be used to develop better SGDs in the future.