Neural networks learn to link temporally dispersed stimuli
Rustling leaves, a creaking branch: to a mouse, these sensory impressions may at first seem harmless – but not if a cat suddenly bursts out of the bush. In that case, they were clues to impending, life-threatening danger. Robert Gütig of the Max Planck Institute of Experimental Medicine in Göttingen has now shown how the brain can link sensory perceptions to events that occur only after a delay. In a computer model, he has developed a learning procedure in which model neurons learn to distinguish between many different stimuli by adjusting their activity to the frequency of the cues. The model works even when there is a time delay between the cue and the event or outcome. The ability it captures – filtering predictive stimuli out of the environment – is vital for the survival of every living creature; the learning procedure itself could also help solve a number of technological learning problems, for example in the development of speech-recognition programs.
In the animal world, dangers are frequently preceded by warning signs: telltale sounds, movements and odours may be clues to an imminent attack. If a mouse survives an attack by a cat, its future will be brighter if it learns from the failed attempt and reads the clues earlier next time round. However, mice are constantly bombarded with a vast number of sensory impressions, most of which are not associated with danger. So how do they know which sounds and odours from their environment presage a cat attack and which do not?
This poses a problem for the mouse’s brain. In most cases, the crucial environmental stimuli are temporally separated from the actual attack, so the brain must link a clue and the resulting event (e.g. a sound and an attack) even though there is a delay between them. Previous theories have not satisfactorily explained how the brain bridges this gap between a cue and the associated outcome. Robert Gütig of the Max Planck Institute of Experimental Medicine has discovered how the brain can solve this problem. On a computer, he programmed a neural network that reacts to stimuli in the same way as a cluster of biological cells. This network can learn to filter out the cues that predict a subsequent event.
It depends on the frequency
The network learns by strengthening or weakening specific synapses between the model neurons. The foundation of the computer model is a synaptic learning rule under which individual neurons can increase or decrease their activity in response to a simple learning signal. Gütig has used this learning rule to establish a new learning procedure. “This ‘aggregate-label’ learning procedure is built on the concept of setting the connections between cells in such a way that the resulting neural activity over a certain period is proportional to the number of cues,” explains Gütig. In this way, if a learning signal reflects the occurrence and intensity of certain events in the mouse’s environment, the neurons learn to react to the stimuli that predict those events.
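The core idea can be sketched in a few lines. The toy model below is a strongly simplified, rate-based stand-in for the spiking neurons of Gütig’s aggregate-label procedure – all sizes, patterns and learning rates are illustrative assumptions, not the published model. A neuron’s activity summed over a whole trial is trained by gradient descent to match the number of cue occurrences hidden in random background clutter, without any single time step ever being labelled:

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 20    # input channels (illustrative size)
trial_len = 50   # time steps per trial
cue = rng.random(n_inputs)  # hypothetical fixed cue pattern hidden in the clutter

def make_trial(n_cues):
    """One trial: random background clutter with n_cues cue insertions."""
    x = rng.random((trial_len, n_inputs))
    slots = rng.choice(trial_len, n_cues, replace=False)
    x[slots] = cue
    return x

def aggregate_output(w, x):
    # Simplified linear-rate neuron: activity summed over the whole trial.
    return float(x.sum(axis=0) @ w)

def train(w, steps=5000, eta=5e-5):
    for _ in range(steps):
        n_cues = int(rng.integers(0, 5))   # learning signal: only the event count
        s = make_trial(n_cues).sum(axis=0)
        err = s @ w - n_cues               # aggregate activity vs. number of cues
        w = w - eta * err * s              # gradient step on the squared error
    return w

def mean_abs_error(w, trials=200):
    errs = []
    for _ in range(trials):
        k = int(rng.integers(0, 5))
        errs.append(abs(aggregate_output(w, make_trial(k)) - k))
    return float(np.mean(errs))

w = train(rng.normal(0.0, 0.1, n_inputs))
print("mean |aggregate activity - cue count| after training:", mean_abs_error(w))
```

After training, the neuron’s total activity roughly tracks how often the cue occurred – the “aggregate label” is the only feedback it ever receives, which is the essence of the proportionality principle quoted above.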
However, Gütig’s networks can learn to react to environmental stimuli even when no learning signals are available from the environment. They do this by interpreting the average activity of the network itself as a learning signal: individual neurons learn to react to stimuli that occur in the same numbers as the stimuli to which the other neurons in the network react. This ‘self-supervised’ learning follows a principle different from the Hebbian theory frequently applied in artificial neural networks. Hebbian networks learn by strengthening the synapses between neurons that spike at the same time or in quick succession. “In self-supervised learning, it is not necessary for the neural activity to be temporally aligned. The total number of spikes in a given period is the deciding factor for synaptic change,” says Gütig. This means that such networks can link sensory clues of different types, e.g. visual, auditory and olfactory, even when there are significant delays between their respective neural representations.
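The self-supervised consensus signal can be sketched in the same simplified, non-spiking style (again, every size, pattern and rate here is an assumption for illustration). Each of several neurons receives its own input stream – standing in for different modalities – and on any given trial all streams contain the same number of cue occurrences. Instead of an external label, each neuron is nudged toward the network’s mean aggregate activity. The sketch demonstrates only the consensus mechanism – disagreement between the neurons’ total activities shrinks – not the full feature-discovery behaviour of the spiking model:

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons = 4    # e.g. one neuron per modality (illustrative)
n_inputs, trial_len = 20, 50
# A separate hypothetical cue pattern per modality; on any given trial
# every modality contains the same number of cue occurrences.
cues = rng.random((n_neurons, n_inputs))

def make_trial(n_cues):
    """Summed input per neuron; the streams share only the event count."""
    x = rng.random((n_neurons, trial_len, n_inputs))
    for i in range(n_neurons):
        x[i, rng.choice(trial_len, n_cues, replace=False)] = cues[i]
    return x.sum(axis=1)  # shape (n_neurons, n_inputs)

def aggregates(W, s):
    # Each neuron's total activity over the trial (linear-rate stand-in).
    return np.einsum("ni,ni->n", W, s)

def disagreement(W, trials=200):
    """Average variance of the neurons' aggregate activities per trial."""
    return float(np.mean([aggregates(W, make_trial(int(rng.integers(0, 5)))).var()
                          for _ in range(trials)]))

W0 = rng.normal(0.0, 0.1, (n_neurons, n_inputs))
W, eta = W0.copy(), 5e-5
for _ in range(5000):
    s = make_trial(int(rng.integers(0, 5)))
    a = aggregates(W, s)
    err = a - a.mean()           # self-supervised signal: the network mean
    W -= eta * err[:, None] * s  # pull each neuron toward the consensus

print("disagreement before:", disagreement(W0), "after:", disagreement(W))
```

Note that only each neuron’s total activity per trial enters the update – nothing requires the neurons’ spikes, or here their activity peaks, to be temporally aligned, which is the point of contrast with Hebbian learning drawn above.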
Not only does Gütig’s learning procedure explain biological processes; it could also pave the way for far-reaching improvements to technological applications such as automatic speech recognition. “That would facilitate considerable simplification of the training requirements for computer-based speech recognition. Instead of laboriously segmented language databases or complex segmentation algorithms, aggregate-label learning could manage with just the subtitles from newscasts, for example,” says Gütig.
Learn more: New learning procedure for neural networks