Creating Artificial Intelligence Based on the Real Thing

Image: Dharmendra Modha at the Singularity Summit, 2008 (by Alexander van Dijk, via Flickr)

Ever since the early days of modern computing in the 1940s, the biological metaphor has been irresistible. The first computers — room-size behemoths — were referred to as “giant brains” or “electronic brains,” in headlines and everyday speech. As computers improved and became capable of some tasks familiar to humans, like playing chess, the term used was “artificial intelligence.” DNA, it is said, is the original software.

For the most part, the biological metaphor has long been just that — a simplifying analogy rather than a blueprint for how to do computing. Engineering, not biology, guided the pursuit of artificial intelligence. As Frederick Jelinek, a pioneer in speech recognition, put it, “airplanes don’t flap their wings.”

Yet the principles of biology are gaining ground as a tool in computing. The shift in thinking results from advances in neuroscience and computer science, and from the prod of necessity.

The physical limits of conventional computer designs are within sight — not today or tomorrow, but soon enough. Nanoscale circuits cannot shrink much further. Today’s chips are power hogs, running hot, which curbs how much of a chip’s circuitry can be used. These limits loom as demand is accelerating for computing capacity to make sense of a surge of new digital data from sensors, online commerce, social networks, video streams and corporate and government databases.

To meet the challenge, without gobbling the world’s energy supply, a different approach will be needed. And biology, scientists say, promises to contribute more than metaphors. “Every time we look at this, biology provides a clue as to how we should pursue the frontiers of computing,” said John E. Kelly, the director of research at I.B.M.

Dr. Kelly points to Watson, the question-answering computer that played “Jeopardy!” and beat two human champions earlier this year. I.B.M.’s clever machine consumes 85,000 watts of electricity, while the human brain runs on just 20 watts. “Evolution figured this out,” Dr. Kelly said.
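For scale, here is a quick back-of-the-envelope sketch using only the two wattage figures quoted above; the variable names are illustrative and the “times more power” framing is plain division, not an I.B.M. metric.

```python
# Rough comparison of the power figures cited in the article:
# 85,000 watts for Watson versus about 20 watts for the human brain.
watson_power_watts = 85_000
brain_power_watts = 20

ratio = watson_power_watts / brain_power_watts
print(f"Watson draws roughly {ratio:,.0f} times the power of a human brain.")
# Prints: Watson draws roughly 4,250 times the power of a human brain.
```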

Several biologically inspired paths are being explored by computer scientists in universities and corporate laboratories worldwide. But researchers from I.B.M. and four universities — Cornell, Columbia, the University of Wisconsin, and the University of California, Merced — are engaged in a project that seems particularly intriguing.

The project, a collaboration of computer scientists and neuroscientists begun three years ago, has been encouraging enough that in August it won a $21 million round of government financing from the Defense Advanced Research Projects Agency, bringing the total to $41 million in three rounds. In recent months, the team has developed prototype “neurosynaptic” microprocessors, or chips that operate more like neurons and synapses than like conventional semiconductors.

But since 2008, the project itself has evolved, becoming more focused, if not scaled back. Its experience suggests what designs, concepts and techniques might be usefully borrowed from biology to push the boundaries of computing, and what cannot be applied, or even understood.

At the outset, Dharmendra S. Modha, the I.B.M. computer scientist leading the project, described the research grandly as “the quest to engineer the mind by reverse-engineering the brain.” The project embarked on supercomputer simulations intended to equal the complexity of animal brains — a cat and then a monkey. In science blogs and online forums, some neuroscientists sharply criticized I.B.M. for what they regarded as exaggerated claims of what the project could achieve.


These days at the I.B.M. Almaden Research Center in San Jose, Calif., there is not a lot of talk of reverse-engineering the brain. Wide-ranging ambitions that narrow over time, Dr. Modha explained, are part of research and discovery, even if his earlier rhetoric was inflated or misunderstood.

“Deciding what not to do is just as important as deciding what to do,” Dr. Modha said. “We’re not trying to replicate the brain. That’s impossible. We don’t know how the brain works, really.”

Read more . . .
 
