
Intelligent machines of the future may need to sleep as much as we do.
Sleep-like states in simulated neural networks quell the instability that comes with uninterrupted self-learning in artificial analogs of brains.
No one can say whether androids will dream of electric sheep, but they will almost certainly need periods of rest that offer benefits similar to those that sleep provides to living brains, according to new research from Los Alamos National Laboratory.
“We study spiking neural networks, which are systems that learn much as living brains do,” said Los Alamos National Laboratory computer scientist Yijing Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”
Watkins and her research team found that the network simulations became unstable after continuous periods of unsupervised learning. When they exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. “It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.
The discovery came about as the research team worked to develop neural networks that closely approximate how humans and other biological systems learn to see. The group initially struggled with stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without having prior examples to compare them to.
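The article does not give the team's actual algorithm, but unsupervised dictionary learning in general means discovering a set of reusable "atoms" that sparsely reconstruct the input, with no labeled examples. The sketch below is a generic, minimal illustration of that idea in NumPy (alternating a thresholded sparse-coding step with a dictionary refit); the data shapes, threshold, and update rule are illustrative assumptions, not the Lab's spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 100 signals, each a sparse mix of 2 of 10 hidden atoms.
true_atoms = rng.normal(size=(10, 20))
codes = np.zeros((100, 10))
for row in codes:
    row[rng.choice(10, size=2, replace=False)] = rng.random(2) + 0.5
X = codes @ true_atoms

# Learn a dictionary D and sparse codes A so that X ~ A @ D, with no labels:
# alternate a hard-thresholded coding step and a least-squares dictionary step.
D = rng.normal(size=(10, 20))
D /= np.linalg.norm(D, axis=1, keepdims=True)
for _ in range(50):
    A = X @ D.T                       # project signals onto current atoms
    A[np.abs(A) < 0.5] = 0.0          # hard threshold -> sparse codes
    D = np.linalg.lstsq(A, X, rcond=None)[0]   # refit atoms to the data
    D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-12

print(D.shape)  # learned dictionary: 10 atoms of dimension 20
```

Nothing constrains the learned weights here, which is the general flavor of the instability the researchers describe: purely local, unsupervised updates can drift without some regulating mechanism.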
“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Los Alamos computer scientist and study coauthor Garrett Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”
The researchers characterize the decision to expose the networks to an artificial analog of sleep as nearly a last-ditch effort to stabilize them. They experimented with various types of noise, roughly comparable to the static you might encounter between stations while tuning a radio. The best results came when they used waves of so-called Gaussian noise, which spans a wide range of frequencies and amplitudes. They hypothesize that the noise mimics the input received by biological neurons during slow-wave sleep. The results suggest that slow-wave sleep may act, in part, to ensure that cortical neurons maintain their stability and do not hallucinate.
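The noise-driven "sleep" idea can be sketched in a few lines: run a spiking layer on real input during a "wake" phase where an unchecked Hebbian rule inflates the weights, then drive the same layer with Gaussian noise and rescale, standing in for the homeostatic effect the authors attribute to slow waves. Everything below (the leaky integrate-and-fire model, the Hebbian rule, the renormalization step, all parameter values) is an illustrative assumption, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 16, 8
w = rng.normal(0.5, 0.1, size=(N_OUT, N_IN))  # synaptic weights

def lif_step(inp, w, v, threshold=1.0, leak=0.9):
    """One leaky integrate-and-fire step: integrate input, spike, reset."""
    v = leak * v + w @ inp
    spikes = (v >= threshold).astype(float)
    v = np.where(spikes > 0, 0.0, v)  # reset neurons that fired
    return spikes, v

def wake_phase(w, inputs, lr=0.01):
    """Drive the layer on data and apply a simple Hebbian update per step."""
    v = np.zeros(N_OUT)
    for inp in inputs:
        spikes, v = lif_step(inp, w, v)
        w += lr * np.outer(spikes, inp)  # unchecked growth (destabilizing)
    return w

# "Wake": unsupervised learning on structured input; weights grow unchecked.
w = wake_phase(w, rng.random((200, N_IN)))

# "Sleep": drive the same network with broadband Gaussian noise, then
# rescale each neuron's weights -- a stand-in for homeostatic recovery.
v = np.zeros(N_OUT)
for inp in rng.normal(0.0, 0.5, size=(200, N_IN)):
    _, v = lif_step(inp, w, v)
w /= np.linalg.norm(w, axis=1, keepdims=True)

print(np.linalg.norm(w, axis=1))  # each neuron's weights back at unit norm
```

The design point the sketch makes is the one from the quote above: local learning rules have no global operation reining them in, so some separate phase, here a noisy one, has to restore the dynamical balance.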
The group’s next goal is to implement their algorithm on Intel’s Loihi neuromorphic chip. They hope that allowing Loihi to sleep from time to time will enable it to stably process information from a silicon retina camera in real time. If the findings confirm the need for sleep in artificial brains, we can probably expect the same to be true of androids and other intelligent machines that may come about in the future.