Columbia engineers develop new AI technology that amplifies correct speaker from a group; breakthrough could lead to better hearing aids.
Our brains have a remarkable knack for picking out individual voices in a noisy environment, like a crowded coffee shop or a busy city street. This is something that even the most advanced hearing aids struggle to do. But now Columbia engineers are announcing an experimental technology that mimics the brain’s natural aptitude for detecting and amplifying any one voice from many. Powered by artificial intelligence, this brain-controlled hearing aid acts as an automatic filter, monitoring wearers’ brain waves and boosting the voice they want to focus on.
Though still in early stages of development, the technology is a significant step toward better hearing aids that would enable wearers to converse with the people around them seamlessly and efficiently. This achievement is described today in Science Advances.
“The brain area that processes sound is extraordinarily sensitive and powerful; it can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison,” said Nima Mesgarani, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper’s senior author. “By creating a device that harnesses the power of the brain itself, we hope our work will lead to technological improvements that enable the hundreds of millions of hearing-impaired people worldwide to communicate just as easily as their friends and family do.”
Modern hearing aids are excellent at amplifying speech while suppressing certain types of background noise, such as traffic. But they struggle to boost the volume of an individual voice over others. Scientists call this the cocktail party problem, named after the cacophony of voices that blend together during loud parties.
“In crowded places, like parties, hearing aids tend to amplify all speakers at once,” said Dr. Mesgarani, who is also an associate professor of electrical engineering at Columbia Engineering. “This severely hinders a wearer’s ability to converse effectively, essentially isolating them from the people around them.”
The Columbia team’s brain-controlled hearing aid is different. Instead of relying solely on external sound-amplifiers, like microphones, it also monitors the listener’s own brain waves.
“Previously, we had discovered that when two people talk to each other, the brain waves of the speaker begin to resemble the brain waves of the listener,” said Dr. Mesgarani.
Using this knowledge, the team combined powerful speech-separation algorithms with neural networks, complex mathematical models that imitate the brain’s natural computational abilities. They created a system that first separates out the voices of individual speakers from a group, and then compares the voice of each speaker to the brain waves of the person listening. The speaker whose voice pattern most closely matches the listener’s brain waves is then amplified over the rest.
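The separate-compare-amplify pipeline described above can be illustrated with a minimal sketch. This is not the authors' published model; it assumes the speakers' voices have already been separated into individual audio streams, and it stands in for the neural comparison with a simple Pearson correlation between each stream's amplitude envelope and an envelope decoded from the listener's brain waves. The function names (`select_attended`, `amplify`) are hypothetical.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def select_attended(separated_streams, decoded_envelope):
    """Return the index of the separated stream whose amplitude
    envelope best matches the envelope decoded from the
    listener's neural activity."""
    scores = [pearson(np.abs(s), decoded_envelope) for s in separated_streams]
    return int(np.argmax(scores))

def amplify(separated_streams, attended_idx, gain=4.0):
    """Remix the streams, boosting the attended speaker."""
    mix = np.zeros_like(separated_streams[0])
    for i, s in enumerate(separated_streams):
        mix += s * (gain if i == attended_idx else 1.0)
    return mix / gain  # normalize so the output does not clip
```

In the actual system, deep neural networks perform both the speech separation and the reconstruction of the attended speech from neural recordings; the correlation-and-boost step above captures only the final selection logic.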
The researchers published an earlier version of this system in 2017 that, while promising, had a key limitation: It had to be pretrained to recognize specific speakers.
“If you’re in a restaurant with your family, that device would recognize and decode those voices for you,” explained Dr. Mesgarani. “But as soon as a new person, such as the waiter, arrived, the system would fail.”
Today’s advance largely solves that issue. With funding from Columbia Technology Ventures to improve their original algorithm, Dr. Mesgarani and first authors Cong Han and James O’Sullivan, PhD, again harnessed the power of deep neural networks to build a more sophisticated model that could be generalized to any potential speaker that the listener encountered.
This technology works by mimicking what the brain would normally do. First, the device automatically separates multiple speakers into individual streams, and then compares each stream with the neural data from the user’s brain. The speaker who best matches the user’s neural data is then amplified above the others. (Credit: Nima Mesgarani/Columbia’s Zuckerman Institute)
“Our end result was a speech-separation algorithm that performed similarly to previous versions but with an important improvement,” said Dr. Mesgarani. “It could recognize and decode a voice — any voice — right off the bat.”
To test the algorithm’s effectiveness, the researchers teamed up with Ashesh Dinesh Mehta, MD, PhD, a neurosurgeon at the Northwell Health Institute for Neurology and Neurosurgery and coauthor of today’s paper. Dr. Mehta treats epilepsy patients, some of whom must undergo regular surgeries.
“These patients volunteered to listen to different speakers while we monitored their brain waves directly via electrodes implanted in the patients’ brains,” said Dr. Mesgarani. “We then applied the newly developed algorithm to that data.”
The team’s algorithm tracked the patients’ attention as they listened to different speakers that they had not previously heard. When a patient focused on one speaker, the system automatically amplified that voice. When their attention shifted to a different speaker, the volume levels changed to reflect that shift.
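The attention-tracking behavior described above, where the amplified voice follows the listener's shifting focus, can be sketched as a sliding-window decision loop. This is a simplified illustration, not the published method: it assumes separated audio streams and a decoded neural envelope as inputs, and the function name `track_attention` and window parameters are hypothetical.

```python
import numpy as np

def track_attention(streams, neural_env, win=2000, hop=1000):
    """For each sliding window, decide which stream's amplitude
    envelope correlates best with the envelope decoded from the
    listener's brain waves. Returns one stream index per window,
    so a shift in attention shows up as a change in the decisions."""
    decisions = []
    n = len(neural_env)
    for start in range(0, n - win + 1, hop):
        seg = slice(start, start + win)
        scores = [np.corrcoef(np.abs(s[seg]), neural_env[seg])[0, 1]
                  for s in streams]
        decisions.append(int(np.argmax(scores)))
    return decisions
```

A real device would smooth these per-window decisions before changing the gain, so that brief decoding errors do not cause the amplified voice to flicker between speakers.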
Encouraged by their results, the researchers are now investigating how to transform this prototype into a noninvasive device that can be placed externally on the scalp or around the ear. They also hope to further improve and refine the algorithm so that it can function in a broader range of environments.
“So far, we’ve only tested it in an indoor environment,” said Dr. Mesgarani. “But we want to ensure that it can work just as well on a busy city street or a noisy restaurant, so that wherever wearers go, they can fully experience the world and people around them.”