
via University of California San Francisco
New Technology is a Stepping Stone to a Neural Speech Prosthesis, Researchers Say
A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract – an anatomically detailed computer simulation including the lips, jaw, tongue and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.
Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100 to 150 words per minute of natural speech.
The new system being developed in the laboratory of Edward Chang, MD – described April 24, 2019, in Nature – demonstrates that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of their brain’s speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.
“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”
Virtual Vocal Tract Improves Naturalistic Speech Synthesis
The research was led by Gopala Anumanchipalli, PhD, a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.
From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.
“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one,” Anumanchipalli said. “We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals.”
In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center – patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery – to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.
Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.
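To make this inversion step concrete, the sketch below trains a simple regressor to map acoustic features back to articulator positions. It is a minimal illustration only: the feature dimensions, the model choice, and the random stand-in data are assumptions for the example, not the study's actual method.

```python
# A minimal sketch of acoustic-to-articulatory inversion: learn a mapping
# from acoustic features (e.g., MFCC-like frames) to articulator positions
# (lips, jaw, tongue). All shapes and data here are illustrative stand-ins,
# not the paper's pipeline.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in data: 2,000 frames of 13-dim acoustic features and
# 12-dim articulator positions (hypothetical dimensions).
acoustic = rng.standard_normal((2_000, 13))
articulatory = rng.standard_normal((2_000, 12))

# Fit a small feed-forward regressor frame by frame.
inverter = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=200)
inverter.fit(acoustic, articulatory)

# Given new audio features, estimate the vocal tract movements
# that could have produced them.
estimated_movements = inverter.predict(acoustic[:100])
print(estimated_movements.shape)  # (100, 12)
```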
This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
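As a rough illustration of that two-stage design, the following sketch chains a decoder network (brain activity to vocal tract movements) into a synthesizer network (movements to acoustic features). The recurrent architecture, layer sizes, and feature dimensions are assumptions made for the example; the study's actual networks are described in the Nature paper.

```python
# A hedged PyTorch sketch of the two-stage architecture described above:
# stage 1 decodes neural activity into vocal tract movements; stage 2
# synthesizes acoustic features from those movements.
import torch
import torch.nn as nn

class ArticulatoryDecoder(nn.Module):
    """Stage 1: ECoG features -> articulatory kinematics (dims assumed)."""
    def __init__(self, n_electrodes=256, n_articulators=32):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(256, n_articulators)

    def forward(self, ecog):                 # (batch, time, n_electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)                   # (batch, time, n_articulators)

class SpeechSynthesizer(nn.Module):
    """Stage 2: articulatory kinematics -> acoustic features (e.g., spectrogram frames)."""
    def __init__(self, n_articulators=32, n_acoustic=80):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(256, n_acoustic)

    def forward(self, movements):
        h, _ = self.rnn(movements)
        return self.out(h)

# Chaining the stages: simulated brain activity in, speech features out.
decoder, synthesizer = ArticulatoryDecoder(), SpeechSynthesizer()
ecog = torch.randn(1, 200, 256)              # 200 time steps of fake ECoG
acoustics = synthesizer(decoder(ecog))
print(acoustics.shape)                       # torch.Size([1, 200, 80])
```

The point of the intermediate articulatory representation is that each stage learns a simpler mapping than a single network going straight from brain activity to sound.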
The speech synthesized by this two-stage approach was significantly more intelligible than speech decoded directly from participants’ brain activity without an intermediate vocal tract simulation, the researchers found. In crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform, the sentences the algorithms produced were understandable to hundreds of human listeners.
As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers’ overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.
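For readers curious how such closed-vocabulary scores are tallied, here is a toy calculation: listeners pick each word from a fixed list of alternatives, and word-level accuracy and whole-sentence accuracy are counted separately. The trial data below are invented for illustration.

```python
# Toy closed-set intelligibility scoring: compare each listener choice
# against the target word, then report word accuracy and the fraction
# of sentences transcribed perfectly. Data are made up for the example.
trials = [
    {"target": ["the", "cat", "sat"], "chosen": ["the", "cat", "sat"]},
    {"target": ["ship", "in", "fog"], "chosen": ["sip", "in", "fog"]},
]

n_words = n_correct_words = n_perfect_sentences = 0
for t in trials:
    matches = [c == g for c, g in zip(t["chosen"], t["target"])]
    n_words += len(matches)
    n_correct_words += sum(matches)
    n_perfect_sentences += all(matches)

print(f"word accuracy: {n_correct_words / n_words:.0%}")              # 83%
print(f"perfect sentences: {n_perfect_sentences / len(trials):.0%}")  # 50%
```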
“We still have a ways to go to perfectly mimic spoken language,” Chartier acknowledged. “We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”
Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance
The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can’t speak could learn to use the system without being able to train it on their own voice, and whether the system could generalize to anything the user wishes to say.
Learn more: Synthetic Speech Generated from Brain Recordings