Computers and artificial intelligence continue to usher in major changes in the way people shop. It is relatively easy to train a robot’s brain to create a shopping list, but what about ensuring that the robotic shopper can easily tell the difference between the thousands of products in the store?
Purdue University researchers and experts in brain-inspired computing think part of the answer may be found in magnets. The researchers have developed a process to use magnetics with brain-like networks to program and teach devices such as personal robots, self-driving cars and drones to better generalize about different objects.
“Our stochastic neural networks try to mimic certain activities of the human brain and compute through a connection of neurons and synapses,” said Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. “This allows the computer brain to not only store information but also to generalize well about objects and then make inferences to perform better at distinguishing between objects.”
Roy presented the technology at the annual German Physical Sciences Conference earlier this month in Germany. The work also appeared in Frontiers in Neuroscience.
The switching dynamics of a nano-magnet are similar to the electrical dynamics of neurons. Magnetic tunnel junction devices show switching behavior, which is stochastic in nature.
This stochastic switching behavior mirrors the sigmoid-like activation of a neuron. Such magnetic tunnel junctions can also be used to store synaptic weights.
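The sigmoid-like switching described above can be sketched in a few lines. This is a minimal illustrative model, not the researchers' device equations: the probability that the junction switches is taken to be a logistic function of the drive current, and `threshold` and `slope` are hypothetical device parameters.

```python
import math
import random

def mtj_switch_probability(current, threshold=1.0, slope=4.0):
    """Sigmoid probability that a magnetic tunnel junction switches when
    driven by `current`. `threshold` and `slope` are illustrative
    parameters, not measured device values."""
    return 1.0 / (1.0 + math.exp(-slope * (current - threshold)))

def stochastic_neuron_fires(current, rng=random):
    """A stochastic neuron built from the junction: it 'spikes' (switches)
    with the sigmoid probability above."""
    return rng.random() < mtj_switch_probability(current)
```

Because the device itself produces the random draw, the hardware replaces both the sigmoid evaluation and the random-number generation that a software neuron would need.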
The Purdue group proposed a new stochastic training algorithm for synapses based on spike-timing-dependent plasticity (STDP), a learning rule that has been experimentally observed in the rat hippocampus; they term it Stochastic-STDP. The algorithm exploits the magnet's inherent stochasticity to switch magnetization states probabilistically while learning different object representations.
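A Stochastic-STDP update for a binary (two-state) magnetic synapse can be sketched as follows. This is a hedged illustration of the general idea, not the published algorithm: the closer in time the pre- and post-synaptic spikes, the higher the probability that the magnet flips, with `a_plus`, `a_minus`, and `tau` as hypothetical learning parameters.

```python
import math
import random

def stochastic_stdp_update(weight, dt, a_plus=0.5, a_minus=0.5,
                           tau=20.0, rng=random):
    """One Stochastic-STDP step for a binary synapse (weight in {0, 1}).

    dt = t_post - t_pre in milliseconds. The flip is probabilistic, with
    probability decaying exponentially in |dt|; in hardware, the magnet's
    own stochastic switching supplies the randomness."""
    if dt >= 0:
        # Pre-synaptic spike preceded the post-synaptic spike: potentiate.
        p = a_plus * math.exp(-dt / tau)
        if weight == 0 and rng.random() < p:
            weight = 1
    else:
        # Post preceded pre: depress.
        p = a_minus * math.exp(dt / tau)
        if weight == 1 and rng.random() < p:
            weight = 0
    return weight
```

Averaged over many presentations, the binary synapse's flip frequency approximates the graded weight change of conventional STDP.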
The trained synaptic weights, encoded deterministically in the magnetization states of the nano-magnets, are then used during inference. Advantageously, using high-energy-barrier magnets (30-40 kT, where k is the Boltzmann constant and T is the operating temperature) not only allows compact stochastic primitives but also lets the same device serve as a stable memory element that meets data-retention requirements. The barrier height of the nano-magnets used for sigmoid-like neuronal computations, however, can be lowered to 20 kT for higher energy efficiency.
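The retention trade-off behind those barrier heights follows from the Néel-Arrhenius relation, in which mean retention time grows exponentially with the barrier measured in units of kT. The sketch below assumes a typical ~1 ns attempt period; the attempt time is an assumption for illustration, not a figure from the paper.

```python
import math

def retention_time(barrier_in_kT, attempt_time=1e-9):
    """Neel-Arrhenius estimate of mean retention time (seconds) for a
    nano-magnet whose energy barrier is given in units of kT.
    attempt_time ~1 ns is a commonly assumed attempt period."""
    return attempt_time * math.exp(barrier_in_kT)

# A 40 kT memory magnet retains its state roughly e^20 (~5e8) times
# longer than a 20 kT neuron magnet, which is why the two roles can
# use the same device at different barrier heights.
```

This is why a 30-40 kT barrier suits nonvolatile weight storage, while a 20 kT barrier trades retention for the cheap, frequent stochastic flips a neuron needs.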
“The big advantage with the magnet technology we have developed is that it is very energy-efficient,” said Roy, who leads Purdue’s Center for Brain-inspired Computing Enabling Autonomous Intelligence. “We have created a simpler network that represents the neurons and synapses while compressing the amount of memory and energy needed to perform functions similar to brain computations.”
Roy said the brain-like networks have other uses in solving difficult problems as well, including combinatorial optimization problems such as the traveling salesman problem and graph coloring. The proposed stochastic devices can act as "natural annealers," helping the algorithms escape local minima.
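The "natural annealer" idea can be illustrated with a toy software annealer for max-cut, a combinatorial problem of the same family. This is a hypothetical sketch, not the Purdue hardware algorithm: the probabilistic acceptance of downhill moves, done here with `rng.random()`, is exactly the role the noisy magnets would play in hardware.

```python
import math
import random

def anneal_max_cut(edges, n, steps=5000, t0=2.0, rng=None):
    """Toy simulated annealer for max-cut on n nodes. Stochastic bit-flips
    search for a partition that cuts as many edges as possible; occasional
    accepted downhill moves let the search escape local minima."""
    rng = rng or random.Random(0)
    spins = [rng.choice((-1, 1)) for _ in range(n)]

    def cut(s):
        return sum(1 for i, j in edges if s[i] != s[j])

    best = cut(spins)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3  # temperature schedule
        i = rng.randrange(n)
        old = cut(spins)
        spins[i] = -spins[i]
        new = cut(spins)
        # Metropolis rule: always keep improving flips; keep worsening
        # flips only with probability exp((new - old) / t). In hardware,
        # the device's intrinsic randomness supplies this coin toss.
        if new < old and rng.random() > math.exp((new - old) / t):
            spins[i] = -spins[i]  # reject the flip
        best = max(best, cut(spins))
    return best
```

On a triangle graph the optimum cut is 2, which the annealer finds quickly; larger instances show the same escape-from-local-minima behavior the article describes.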
Learn more: A magnetic personality, maybe not. But magnets can help AI get closer to the efficiency of the human brain