As Machines Get Smarter, Evidence Grows That They Learn Like Us

[Image: Computer simulation of the branching architecture of the dendrites of pyramidal neurons]

Studies suggest that computer models called neural networks may learn to recognize patterns in data using the same algorithms as the human brain

The brain performs its canonical task — learning — by tweaking its myriad connections according to a secret set of rules. To unlock these secrets, scientists 30 years ago began developing computer models that try to replicate the learning process. Now, a growing number of experiments are revealing that these models behave strikingly like actual brains when performing certain tasks. Researchers say the similarities suggest a basic correspondence between the brains’ and computers’ underlying learning algorithms.

The algorithm used by a computer model called the Boltzmann machine, invented by Geoffrey Hinton and Terry Sejnowski in 1983, appears particularly promising as a simple theoretical explanation of a number of brain processes, including development, memory formation, object and sound recognition, and the sleep-wake cycle.
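The Boltzmann machine's learning rule compares pairwise firing correlations measured in a "clamped" phase, where the visible units are fixed to a data pattern, with those from a "free-running" phase, and nudges each symmetric weight toward the difference. The following is a minimal NumPy sketch of that idea; the network size, learning rate, and sampling lengths are illustrative choices, and a practical implementation would average correlations over many samples rather than use a single one:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy fully connected Boltzmann machine: 4 visible + 2 hidden binary units.
n_vis, n_hid = 4, 2
n = n_vis + n_hid
W = rng.normal(0, 0.1, (n, n))
W = (W + W.T) / 2          # weights must be symmetric
np.fill_diagonal(W, 0.0)   # no self-connections

def gibbs_step(s, clamped):
    # Resample each non-clamped unit given the current state of the others.
    for i in range(n):
        if i in clamped:
            continue
        p = sigmoid(W[i] @ s)
        s[i] = 1.0 if rng.random() < p else 0.0
    return s

def sample_correlations(data_vec=None, steps=20):
    # Estimate <s_i * s_j>, clamping the visible units to data if given.
    s = rng.integers(0, 2, n).astype(float)
    clamped = set()
    if data_vec is not None:
        s[:n_vis] = data_vec
        clamped = set(range(n_vis))
    for _ in range(steps):
        s = gibbs_step(s, clamped)
    return np.outer(s, s)

# A single pattern the machine is trained to reproduce (illustrative).
pattern = np.array([1.0, 1.0, 0.0, 0.0])
lr = 0.05
for _ in range(100):
    pos = sample_correlations(pattern)   # "wake": visibles clamped to data
    neg = sample_correlations(None)      # "sleep": network runs freely
    dW = lr * (pos - neg)
    dW = (dW + dW.T) / 2                 # keep the weight matrix symmetric
    np.fill_diagonal(dW, 0.0)
    W += dW
```

The clamped/free contrast is what later invited the analogy to waking experience and sleep mentioned above: the same rule strengthens weights toward the data statistics and weakens them toward the network's own spontaneous activity.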

“It’s the best possibility we really have for understanding the brain at present,” said Sue Becker, a professor of psychology, neuroscience, and behavior at McMaster University in Hamilton, Ontario. “I don’t know of a model that explains a wider range of phenomena in terms of learning and the structure of the brain.”

Hinton, a pioneer in the field of artificial intelligence, has always wanted to understand the rules governing when the brain beefs a connection up and when it whittles one down — in short, the algorithm for how we learn. “It seemed to me if you want to understand something, you need to be able to build one,” he said. Following the reductionist approach of physics, his plan was to construct simple computer models of the brain that employed a variety of learning algorithms and “see which ones work,” said Hinton, who splits his time between the University of Toronto, where he is a professor of computer science, and Google.

During the 1980s and 1990s, Hinton — the great-great-grandson of the 19th-century logician George Boole, whose work is the foundation of modern computer science — invented or co-invented a collection of machine learning algorithms. The algorithms, which tell computers how to learn from data, are used in computer models called artificial neural networks — webs of interconnected virtual neurons that transmit signals to their neighbors by switching on and off, or “firing.” When data are fed into the network, setting off a cascade of firing activity, the algorithm determines, based on the firing patterns, whether to increase or decrease the weight of the connection, or synapse, between each pair of neurons.
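That feed-data-in, observe-firing, adjust-weights loop can be sketched in a few lines of NumPy. The Hebbian-style update rule below, the firing threshold, and all numeric values are illustrative assumptions for the general idea described above, not any of the specific algorithms Hinton invented:

```python
import numpy as np

# Toy "artificial neural network": 4 virtual neurons, every pair joined
# by a weighted connection (synapse). All values here are illustrative.
n = 4
weights = np.zeros((n, n))
eta = 0.1  # learning rate (assumed value)

def learn(data):
    """Feed a data vector in, let neurons fire, and adjust synapse weights."""
    global weights
    # Each neuron "fires" (1) or stays silent (0) in response to its input.
    firing = (np.asarray(data, dtype=float) > 0.5).astype(float)
    # Hebbian-style rule: strengthen the synapse between two neurons that
    # fire together, and weaken it otherwise (the 0.25 offset is arbitrary).
    weights += eta * (np.outer(firing, firing) - 0.25)
    np.fill_diagonal(weights, 0.0)  # no self-connections

# Present the same pattern repeatedly: neurons 0 and 1 always co-fire.
for _ in range(10):
    learn([1, 1, 0, 0])
```

After a few presentations, the synapse between the co-firing neurons 0 and 1 has grown positive, while the synapse between the always-silent neurons 2 and 3 has been pushed negative — the network's weights have come to reflect the statistics of its input.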
