#### Computers and artificial intelligence continue to usher in major changes in the way people shop. It is relatively easy to train a robot’s brain to create a shopping list, but what about ensuring that the robotic shopper can easily tell the difference between the thousands of products in the store?

Purdue University researchers and experts in brain-inspired computing think part of the answer may be found in magnets. The researchers have developed a process that uses magnetic devices with brain-like networks to program and teach machines such as personal robots, self-driving cars and drones to generalize better about different objects.

“Our stochastic neural networks try to mimic certain activities of the human brain and compute through a connection of neurons and synapses,” said Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. “This allows the computer brain to not only store information but also to generalize well about objects and then make inferences to perform better at distinguishing between objects.”

Roy presented the technology at the annual German Physical Sciences Conference in Germany earlier this month. The work also appeared in Frontiers in Neuroscience.

The switching dynamics of a nano-magnet are similar to the electrical dynamics of neurons. Magnetic tunnel junction (MTJ) devices exhibit switching behavior that is stochastic in nature.

This stochastic switching behavior is representative of the sigmoid activation of a neuron. Such magnetic tunnel junctions can also be used to store synaptic weights.
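As a rough illustration of this analogy (the threshold and slope parameters below are assumed for illustration, not device-measured values from the paper), an MTJ can be modeled as a neuron whose probability of switching follows a sigmoid of its input current:

```python
import math
import random

def switch_probability(current, threshold=1.0, slope=4.0):
    """Sigmoid probability that the MTJ flips its magnetization for a
    given input current. Threshold and slope are illustrative values."""
    return 1.0 / (1.0 + math.exp(-slope * (current - threshold)))

def mtj_neuron_fires(current, rng=random.random):
    """Stochastic neuron: 'fires' (switches) with sigmoid probability."""
    return rng() < switch_probability(current)
```

Far below the threshold the device almost never switches, far above it switching is nearly certain, and near the threshold the outcome is genuinely random, which is what reproduces the neuron's sigmoid activation.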

The Purdue group proposed a new stochastic training algorithm for synapses based on spike-timing-dependent plasticity (STDP), termed Stochastic-STDP, which has been experimentally observed in the rat hippocampus. The algorithm exploits the magnet's inherently stochastic switching to update magnetization states while learning representations of different objects.
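A minimal sketch of how such a stochastic-STDP rule might update a binary MTJ synapse follows; the switching amplitudes (`a_plus`, `a_minus`) and time constant (`tau`) are illustrative assumptions, not values from the paper:

```python
import math
import random

def stochastic_stdp_update(weight, dt, a_plus=0.8, a_minus=0.6,
                           tau=20.0, rng=random.random):
    """One Stochastic-STDP event for a binary MTJ synapse (weight in {0, 1}).

    dt = t_post - t_pre in ms. A pre-before-post pairing (dt > 0) switches
    the synapse to 1 with a probability that decays with |dt|; a
    post-before-pre pairing (dt < 0) switches it to 0. The constants here
    are illustrative assumptions."""
    p = (a_plus if dt > 0 else a_minus) * math.exp(-abs(dt) / tau)
    if rng() < p:
        return 1 if dt > 0 else 0
    return weight
```

Unlike deterministic STDP, the weight change itself is all-or-nothing; the spike-timing window only modulates the *probability* of the magnet switching, matching the device's natural behavior.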

The trained synaptic weights, encoded deterministically in the magnetization states of the nano-magnets, are then used during inference. Advantageously, using high-energy-barrier magnets (30–40 kT, where k is the Boltzmann constant and T is the operating temperature) not only allows compact stochastic primitives but also lets the same device serve as a stable memory element that meets the data-retention requirement. The barrier height of the nano-magnets that perform the sigmoid-like neuronal computations, however, can be lowered to 20 kT for higher energy efficiency.
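The retention argument can be made concrete with the standard Néel–Arrhenius relation, τ ≈ τ₀·exp(Δ/kT), where Δ is the energy barrier; the sketch below assumes a typical attempt time τ₀ of about 1 ns:

```python
import math

def retention_time(barrier_in_kT, tau0=1e-9):
    """Neel-Arrhenius estimate of the mean retention time in seconds for
    a nano-magnet whose energy barrier is given in units of kT.
    tau0 ~ 1 ns is a typical attempt time (an assumed constant)."""
    return tau0 * math.exp(barrier_in_kT)
```

Under this estimate a 40 kT memory magnet holds its state roughly e^20 (about 5 × 10^8) times longer than a 20 kT neuron magnet, which is why the two roles can trade barrier height for energy efficiency independently.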

“The big advantage with the magnet technology we have developed is that it is very energy-efficient,” said Roy, who leads Purdue’s Center for Brain-inspired Computing Enabling Autonomous Intelligence. “We have created a simpler network that represents the neurons and synapses while compressing the amount of memory and energy needed to perform functions similar to brain computations.”

Roy said the brain-like networks have other uses in solving difficult problems as well, including combinatorial optimization problems such as the traveling salesman problem and graph coloring. The proposed stochastic devices can act as “natural annealers,” helping the algorithms escape local minima.
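To illustrate the annealing idea, the toy sketch below uses software random flips (standing in for the stochastic devices) in a simulated-annealing search for a graph coloring; the cooling schedule and all parameters are illustrative assumptions:

```python
import math
import random

def anneal_coloring(edges, n_nodes, n_colors, steps=20000, t0=2.0, seed=0):
    """Toy simulated annealing for graph coloring. Randomly accepted
    uphill moves (the role the stochastic magnets would play in hardware)
    let the search climb out of local minima."""
    rng = random.Random(seed)
    color = [rng.randrange(n_colors) for _ in range(n_nodes)]

    def conflicts():
        return sum(color[u] == color[v] for u, v in edges)

    energy = conflicts()
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-3  # linear cooling schedule
        node = rng.randrange(n_nodes)
        old = color[node]
        color[node] = rng.randrange(n_colors)
        new_energy = conflicts()
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-dE / temp).
        if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / temp):
            energy = new_energy
        else:
            color[node] = old
    return color, energy
```

On a triangle graph with three colors, for example, the search settles into a zero-conflict coloring; in hardware, the device noise itself would supply the random flips that this sketch draws from a pseudorandom generator.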

Learn more: A magnetic personality, maybe not. But magnets can help AI get closer to the efficiency of the human brain
