A team of scientists from the Moscow Institute of Physics and Technology (MIPT) has created prototypes of “electronic synapses” based on ultra-thin films of hafnium oxide (HfO2). These prototypes could potentially be used in fundamentally new computing systems.
The paper has been published in the journal Nanoscale Research Letters.
The group of researchers from MIPT has made HfO2-based memristors measuring just 40×40 nm². The nanostructures they built exhibit properties similar to those of biological synapses. Using newly developed technology, the memristors were integrated into matrices; in the future, this technology may be used to design computers that function similarly to biological neural networks.
Memristors (resistors with memory) are devices that are able to change their state (conductivity) depending on the charge passing through them, and they therefore have a memory of their “history”. In this study, the scientists used devices based on thin-film hafnium oxide, a material that is already used in the production of modern processors. This means that this new lab technology could, if required, easily be used in industrial processes.
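To make the idea of “memory of its history” concrete, here is a minimal Python sketch of a generic memristor model. It is a simplified, hypothetical linear-drift picture with illustrative parameter values, not the actual physics or measured characteristics of the MIPT devices: the conductance is assumed to interpolate between a minimum and a maximum value according to the net charge that has already flowed through the element.

```python
# A minimal, hypothetical memristor sketch: conductance depends on the
# integrated charge (the device's "history"), not only on the present voltage.
# All parameter values are illustrative, not taken from the MIPT devices.

class Memristor:
    def __init__(self, g_min=1e-6, g_max=1e-4, q_full=1e-7):
        self.g_min = g_min      # minimum conductance, S
        self.g_max = g_max      # maximum conductance, S
        self.q_full = q_full    # charge needed to sweep the full range, C
        self.charge = 0.0       # net charge passed so far, C (the "history")

    def conductance(self):
        # Conductance interpolates between g_min and g_max with accumulated charge.
        x = min(max(self.charge / self.q_full, 0.0), 1.0)
        return self.g_min + x * (self.g_max - self.g_min)

    def apply_voltage(self, v, dt):
        # Current follows Ohm's law at the present conductance; the charge it
        # carries updates the internal state, i.e. the device "remembers" it.
        i = self.conductance() * v
        self.charge += i * dt
        return i

# Positive pulses gradually raise the conductance; negative pulses lower it.
m = Memristor()
for _ in range(50):
    m.apply_voltage(1.0, 1e-3)
print(f"conductance after positive pulses: {m.conductance():.2e} S")
for _ in range(50):
    m.apply_voltage(-1.0, 1e-3)
print(f"conductance after negative pulses: {m.conductance():.2e} S")
```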
“In a simpler version, memristors are promising binary non-volatile memory cells, in which information is written by switching the electric resistance – from high to low and back again. What we are trying to demonstrate are much more complex functions of memristors – that they behave similarly to biological synapses,” said Yury Matveyev, the corresponding author of the paper and senior researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, commenting on the study.
Synapses – the key to learning and memory
A synapse is a point of connection between neurons whose main function is to transmit a signal (a spike – a particular type of signal, see fig. 2) from one neuron to another. Each neuron may have thousands of synapses, i.e. it can connect with a large number of other neurons. This means that information can be processed in parallel rather than sequentially (as in modern computers). This is why “living” neural networks are so immensely effective, in terms of both speed and energy consumption, at solving a large range of tasks such as image and voice recognition.
Over time, synapses may change their “weight”, i.e. their ability to transmit a signal. This property is believed to be the key to understanding the learning and memory functions of the brain.
From the physical point of view, synaptic “memory” and “learning” in the brain can be interpreted as follows: the neural connection possesses a certain “conductivity”, which is determined by the previous “history” of signals that have passed through the connection. If a synapse transmits a signal from one neuron to another, we can say that it has high “conductivity”, and if it does not, we say it has low “conductivity”. However, synapses do not simply function in on/off mode; they can have any intermediate “weight” (intermediate conductivity value). Accordingly, if we want to simulate them using certain devices, these devices will also have to have analogous characteristics.
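As a toy illustration of this point (the function and weight values below are purely hypothetical, chosen only to show the idea), a synapse can be thought of as scaling the transmitted signal by a continuous weight rather than switching it on or off:

```python
# A toy illustration: a synapse is not an on/off switch.
# The postsynaptic effect of a spike is scaled by a continuous "weight"
# (here a value between 0 and 1), which plays the role of the conductance.

def postsynaptic_input(spike_amplitude, weight):
    """Contribution of one presynaptic spike for a given continuous weight."""
    return weight * spike_amplitude

# The same presynaptic spike has a stronger or weaker effect depending on
# the current weight of the synapse.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"weight = {w:.2f} -> postsynaptic input = {postsynaptic_input(1.0, w):.2f}")
```

A device that emulates a synapse therefore needs to hold, and gradually adjust, such intermediate values rather than just two logic states.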
The memristor as an analogue of the synapse
As in a biological synapse, the value of the electrical conductivity of a memristor is the result of its previous “life” – from the moment it was made.
There are a number of physical effects that can be exploited to design memristors. In this study, the authors used devices based on ultrathin-film hafnium oxide, which exhibit the effect of soft (reversible) electrical breakdown under an applied external electric field. Most often, such devices use only two distinct states, encoding logic zero and one. However, in order to simulate biological synapses, a continuous spectrum of conductivities had to be used.
“The detailed physical mechanism behind the function of the memristors in question is still debated. However, the qualitative model is as follows: in the metal–ultrathin oxide–metal structure, charged point defects, such as vacancies of oxygen atoms, are formed and move around in the oxide layer when exposed to an electric field. It is these defects that are responsible for the reversible change in the conductivity of the oxide layer,” says the co-author of the paper and researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Sergey Zakharchenko.
The authors used the newly developed “analogue” memristors to model various learning mechanisms (“plasticity”) of biological synapses. In particular, this involved functions such as long-term potentiation (LTP) or long-term depression (LTD) of a connection between two neurons. It is generally accepted that these functions are the underlying mechanisms of memory in the brain.
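A rough way to picture LTP and LTD in such “analogue” devices is as incremental, bounded changes of a conductance under trains of identical programming pulses. The sketch below is only a schematic illustration with hypothetical parameters, not the authors’ measured switching curves:

```python
# A hedged sketch of long-term potentiation (LTP) and long-term depression (LTD)
# as incremental, bounded changes of a memristor-like conductance under trains of
# identical programming pulses. All parameter values are hypothetical.

def ltp_ltd_trace(g0=0.1, g_min=0.0, g_max=1.0, step=0.08, n_pulses=30):
    g = g0
    trace = [g]
    # LTP: each potentiating pulse raises the conductance, saturating towards g_max.
    for _ in range(n_pulses):
        g += step * (g_max - g)
        trace.append(g)
    # LTD: each depressing pulse lowers the conductance, saturating towards g_min.
    for _ in range(n_pulses):
        g -= step * (g - g_min)
        trace.append(g)
    return trace

trace = ltp_ltd_trace()
print(f"after potentiation: {trace[30]:.3f}, after depression: {trace[-1]:.3f}")
```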
The authors also succeeded in demonstrating a more complex mechanism – spike-timing-dependent plasticity, i.e. the dependence of the strength of the connection between neurons on the relative timing with which they are “triggered”. It had previously been shown that this mechanism is responsible for associative learning – the ability of the brain to find connections between different events.
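For readers unfamiliar with spike-timing-dependent plasticity, the textbook form of the rule is sketched below: the sign and size of the weight change depend on the time difference between the postsynaptic and presynaptic spikes. The amplitudes and time constants are illustrative textbook-style values, not the dependence actually measured in the paper:

```python
import math

# A textbook spike-timing-dependent plasticity (STDP) rule, given as a sketch:
# the weight change depends on delta_t = t_post - t_pre.
# Amplitudes and time constants are illustrative, not measured values.

def stdp_dw(delta_t_ms, a_plus=0.05, a_minus=0.04, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair separated by delta_t_ms milliseconds."""
    if delta_t_ms > 0:
        # Presynaptic spike precedes postsynaptic spike: potentiation.
        return a_plus * math.exp(-delta_t_ms / tau_plus)
    else:
        # Postsynaptic spike precedes presynaptic spike: depression.
        return -a_minus * math.exp(delta_t_ms / tau_minus)

for dt in (-40, -10, 10, 40):
    print(f"delta_t = {dt:+d} ms -> dw = {stdp_dw(dt):+.4f}")
```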
To demonstrate this function in their memristor devices, the authors purposefully used an electric signal which reproduced, as far as possible, the signals in living neurons, and they obtained a dependency very similar to those observed in living synapses (see fig. 3).
Fig. 3. The change in conductivity of the memristors depending on the temporal separation between “spikes” (right), and the change in potential of the neuron connections in biological neural networks.
Source: MIPT press office
These results allowed the authors to confirm that the elements that they had developed could be considered a prototype of the “electronic synapse”, which could be used as a basis for the hardware implementation of artificial neural networks.
“We have created a baseline matrix of nanoscale memristors demonstrating the properties of biological synapses. Thanks to this research, we are now one step closer to building an artificial neural network. It may only be the very simplest of networks, but it is nevertheless a hardware prototype,” said the head of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Andrey Zenkevich.