Advance marks critical step toward brain-computer interfaces that hold immense promise for those with limited or no ability to speak.
In a scientific first, Columbia neuroengineers have created a system that translates thought into intelligible, recognizable speech. By monitoring someone’s brain activity, the technology can reconstruct the words a person hears with unprecedented clarity. This breakthrough, which harnesses the power of speech synthesizers and artificial intelligence, could lead to new ways for computers to communicate directly with the brain. It also lays the groundwork for helping people who cannot speak, such as those living with amyotrophic lateral sclerosis (ALS) or recovering from stroke, regain their ability to communicate with the outside world.
These findings were published today in Scientific Reports.
“Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating,” said Nima Mesgarani, PhD, the paper’s senior author and a principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute. “With today’s study, we have a potential way to restore that power. We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”
Decades of research have shown that when people speak — or even imagine speaking — telltale patterns of activity appear in their brains. Distinct (but recognizable) patterns of signals also emerge when we listen to someone speak, or imagine listening. Experts who are trying to record and decode these patterns see a future in which thoughts need not remain hidden inside the brain, but could instead be translated into verbal speech at will.
But accomplishing this feat has proven challenging. Early efforts to decode brain signals by Dr. Mesgarani and others focused on simple computer models that analyzed spectrograms, which are visual representations of sound frequencies.
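For readers unfamiliar with the term, the sketch below is a minimal illustration (not code from the study) of the kind of spectrogram those early models analyzed: a grid of sound-energy values across frequency and time. It assumes only NumPy and SciPy, and `speech.wav` is a hypothetical mono recording.

```python
# Minimal sketch: compute a spectrogram, the representation early decoding
# models analyzed. Assumes a hypothetical mono recording "speech.wav".
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("speech.wav")      # sampling rate (Hz) and samples
audio = audio.astype(np.float32)

# Short-time Fourier analysis: how much energy each frequency band carries
# in each ~25 ms window of speech.
freqs, times, power = spectrogram(audio, fs=fs, nperseg=int(0.025 * fs))

print(power.shape)  # (frequency bins, time frames)
```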
Because this approach failed to produce anything resembling intelligible speech, Dr. Mesgarani and his team, including the paper's first author, Hassan Akbari, turned instead to a vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking.
“This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions,” said Dr. Mesgarani, who is also an associate professor of electrical engineering at Columbia Engineering.
To teach the vocoder to interpret brain activity, Dr. Mesgarani teamed up with Ashesh Dinesh Mehta, MD, PhD, a neurosurgeon at Northwell Health Physician Partners Neuroscience Institute and co-author of today’s paper. Dr. Mehta treats epilepsy patients, some of whom must undergo regular surgeries.
“Working with Dr. Mehta, we asked epilepsy patients already undergoing brain surgery to listen to sentences spoken by different people, while we measured patterns of brain activity,” said Dr. Mesgarani. “These neural patterns trained the vocoder.”
Next, the researchers asked those same patients to listen to speakers reciting digits from 0 to 9, while recording brain signals that could then be run through the vocoder. The sound produced by the vocoder in response to those signals was analyzed and cleaned up by neural networks, a type of artificial intelligence that mimics the structure of neurons in the biological brain.
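The study's actual system maps recorded neural activity to the control parameters of a speech vocoder using deep neural networks. The toy sketch below illustrates only the general idea, using a simple ridge regression on made-up arrays; every name, shape, and number is an illustrative assumption, not the authors' code.

```python
# Hypothetical, simplified sketch of the decoding idea: regress recorded
# neural features onto vocoder parameters, frame by frame. The real system
# uses deep neural networks and a trained vocoder; the data here is random.
import numpy as np

rng = np.random.default_rng(0)

n_frames, n_electrodes, n_vocoder_params = 5000, 128, 32
neural = rng.standard_normal((n_frames, n_electrodes))               # e.g. per-electrode activity per frame
vocoder_params = rng.standard_normal((n_frames, n_vocoder_params))   # targets derived from training speech

# Ridge regression: W maps each frame of neural activity to vocoder parameters.
lam = 1.0
X, Y = neural, vocoder_params
W = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ Y)

# At test time, decoded parameters would drive the speech synthesizer.
decoded = neural @ W
print(decoded.shape)  # (frames, vocoder parameters)
```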
The end result was a robotic-sounding voice reciting a sequence of numbers. To test the accuracy of the recordings, Dr. Mesgarani and his team asked individuals to listen to them and report what they heard.
“We found that people could understand and repeat the sounds about 75% of the time, which is well above and beyond any previous attempts,” said Dr. Mesgarani. The improvement in intelligibility was especially evident when comparing the new recordings to the earlier, spectrogram-based attempts. “The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy.”
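As a toy illustration of how such a listening test is scored, the snippet below computes the fraction of digits a listener reported correctly; the example digits are made up and do not come from the study.

```python
# Toy scoring of a listening test: fraction of presented digits that the
# listener reported correctly. The responses below are invented examples.
presented = [3, 7, 1, 0, 9, 4, 2, 8]
reported  = [3, 7, 1, 5, 9, 4, 2, 8]

accuracy = sum(p == r for p, r in zip(presented, reported)) / len(presented)
print(f"intelligibility: {accuracy:.0%}")  # prints "intelligibility: 88%" for this made-up run
```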
Dr. Mesgarani and his team plan to test more complicated words and sentences next, and they want to run the same tests on brain signals emitted when a person speaks or imagines speaking. Ultimately, they hope their system could be part of an implant, similar to those worn by some epilepsy patients, that translates the wearer’s thoughts directly into words.
“In this scenario, if the wearer thinks ‘I need a glass of water,’ our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech,” said Dr. Mesgarani. “This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”
Learn more: Columbia Engineers Translate Brain Signals Directly into Speech
The Latest on: Brain-computer interfaces
- Next Generation Brain Implant Based on Graphene (July 24, 2024 at 2:39 am)
- New Brain Chip Revolutionizes Treatment of Parkinson’s Patients (July 23, 2024 at 1:00 am)
Inbrain Neuroelectronics’ new brain implant can both read signals and stimulate brain impulses, with a brain-computer interface (BCI) that uses graphene to create a high-resolution interface with the ...
- Breakthrough graphene brain implant preps for first human test, could treat Parkinson's disease (July 22, 2024 at 7:27 am)
Most existing brain implants use metal electrodes to monitor brain activity and enable mind-machine communication. But the graphene chips cooked up by Inbrain provide some key ...
- Graphene-based brain chip offers 200x more power for Parkinson’s patients (July 22, 2024 at 1:00 am)
This is the promise of a new technology from Inbrain Neuroelectronics, a startup based in Barcelona. Their brain-computer interface (BCI) uses a special material called graphene to create a ...
- First surgery scheduled for graphene chip designed for next-gen brain implants — non-metallic chip could be used for brain stimulation in the future (July 21, 2024 at 10:18 am)
Startup Inbrain Neuroelectronics is experimenting with a brain-computer interface (BCI) using graphene chips. BCIs allow people to control computers directly using brain signals. The company is ...
- New Brain-Computer Interface Allows Fully Paralyzed Patients to Speak (July 19, 2024 at 5:49 am)
A newly developed brain-computer interface using the power of thought can enable a silent person to communicate, reports a scientific breakthrough by researchers from Tel Aviv University and Tel Aviv ...
- Human Brain Cells Power New Robot Developed by Chinese Scientists (July 19, 2024 at 4:38 am)
Chinese scientists have developed a robot powered by lab-grown human brain cells, paving the way for advanced human-machine interaction.
- Remarkable magnetic brain control tech alters appetite and behavior (July 18, 2024 at 9:17 pm)
Researchers have developed a remote, non-invasive method of selectively controlling neurons in the brain using magnetic fields. The technique opens the door to a greater understanding of brain ...
- The first Neuralink brain implant signals a new phase for human-computer interaction (July 14, 2024 at 9:04 pm)
The first human has received a Neuralink brain chip implant, according to co-founder Elon Musk. The neurotechnology company has started its first human trial since receiving approval from the U.S.
- An Air Pocket May Have Caused Electrode Threads to Retract From the Brain of First Human Neuralink Patient (July 12, 2024 at 11:29 am)
Neuralink has revealed plans to change how it surgically implants brain-computer interfaces in human patients, after shifting air bubbles contributed to a number of electrode-bearing threads ...
via Bing News