A computer simulation of a cognitive model made up entirely of artificial neurons learns to communicate through dialogue, starting from a tabula rasa state
A group of researchers from the University of Sassari (Italy) and the University of Plymouth (UK) has developed a cognitive model, made up of two million interconnected artificial neurons, that learns to communicate using human language starting from a state of “tabula rasa”, solely through communication with a human interlocutor. The model is called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning) and is described in an article published in the international scientific journal PLOS ONE. This research sheds light on the neural processes that underlie the development of language.
How does our brain develop the ability to perform complex cognitive functions, such as those needed for language and reasoning? This is a question we have surely all asked ourselves, and one to which researchers cannot yet give a complete answer. We know that the human brain contains about one hundred billion neurons that communicate by means of electrical signals, and we have learned a great deal about the mechanisms by which those signals are produced and transmitted among neurons. There are also experimental techniques, such as functional magnetic resonance imaging, that allow us to see which parts of the brain are most active when we are engaged in different cognitive activities. But a detailed knowledge of how a single neuron works and of the functions of the various parts of the brain is not enough to answer the initial question.
We might think that the brain works in a similar way to a computer: after all, computers also work through electrical signals. Indeed, many researchers have proposed models based on the brain-as-computer analogy since the late ’60s. However, apart from the structural differences, there are profound differences between the brain and a computer, especially in their learning and information-processing mechanisms. Computers run programs developed by human programmers, which encode the rules the computer must follow in handling information to perform a given task. There is no evidence of the existence of such programs in our brain. In fact, many researchers today believe that our brain develops higher cognitive skills simply by interacting with the environment, starting from very little innate knowledge. The ANNABELL model appears to confirm this perspective.
ANNABELL has no pre-coded language knowledge; it learns only through communication with a human interlocutor, thanks to two fundamental mechanisms that are also present in the biological brain: synaptic plasticity and neural gating. Synaptic plasticity is the ability of the connection between two neurons to increase its efficiency when the two neurons are often active simultaneously, or nearly simultaneously. This mechanism is essential for learning and for long-term memory. Neural gating mechanisms are based on the property of certain neurons (called bistable neurons) to behave as switches that can be turned “on” or “off” by a control signal coming from other neurons. When turned on, the bistable neurons transmit the signal from one part of the brain to another; otherwise they block it. Thanks to synaptic plasticity, the model learns to control the signals that open and close the neural gates, and thereby to control the flow of information among different areas.
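The two mechanisms described above can be sketched in a few lines of code. This is an illustrative toy example, not ANNABELL's actual implementation: a Hebbian update stands in for synaptic plasticity (connections strengthen when pre- and postsynaptic units are co-active), and a simple on/off switch stands in for a bistable gating neuron that either passes or blocks a signal between two areas. All function and variable names are invented for illustration.

```python
import numpy as np

# --- Synaptic plasticity (Hebbian rule, a common simplification) ---
# The weight between two units grows when both are active together.
def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen connections between co-active pre/post units."""
    return weights + lr * np.outer(post, pre)

# --- Neural gating (bistable switch) ---
# A gate, toggled by a control signal from other neurons, either
# transmits a signal between two areas or blocks it entirely.
def gated_transfer(signal, gate_open):
    return signal if gate_open else np.zeros_like(signal)

# Toy usage: four presynaptic and four postsynaptic units.
weights = np.zeros((4, 4))
pre = np.array([1.0, 0.0, 1.0, 0.0])   # presynaptic activity
post = np.array([1.0, 1.0, 0.0, 0.0])  # postsynaptic activity
weights = hebbian_update(weights, pre, post)

signal = weights @ pre                  # activity propagated downstream
print(gated_transfer(signal, gate_open=True))   # gate open: signal passes
print(gated_transfer(signal, gate_open=False))  # gate closed: all zeros
```

In this sketch, learning (the weight update) and routing (the gate) are independent; in the model described in the article, plasticity is also what teaches the system *when* to open and close the gates.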
The cognitive model was validated using a database of about 1500 input sentences, based on the literature on early language development. In response it produced about 500 output sentences, containing nouns, verbs, adjectives, pronouns, and other word classes, demonstrating a broad range of human-language processing capabilities.
Read more: A network of artificial neurons learns to use human language