Robots and psychology: Mapping the uncanny valley

Why androids are scary

 
ARTIFICIALLY created beings, whether they be drawn or sculpted, are warmly accepted by viewers when they are distinctly inhuman. As their appearances are made more real, however, acceptance turns to discomfort until the point where the similarity is almost perfect, when comfort returns. This effect, called “the uncanny valley” because of the dip in acceptance between clearly inhuman and clearly human forms, is well known, particularly to animators, but why it happens is a mystery. Some suggest it is all about outward appearance, but a study just published in Cognition by Kurt Gray at the University of North Carolina and Daniel Wegner at Harvard argues that there can be something else involved as well: the apparent presence of a mind where it ought not to be.
 
According to some philosophers, the mind is made up of two parts: agency (the capacity to plan and do things) and experience (the capacity to feel and sense things). Both set people apart from robots, but Dr Gray and Dr Wegner speculated that experience in particular was playing a crucial role in generating the uncanny-valley effect. They theorised that adding human-like eyes and facial expressions to robots conveys emotion where viewers do not expect emotion to be present. The resulting clash of expectations, they thought, might be where the unease was coming from.
 
To test this idea, the researchers presented 45 participants recruited from subway stations and campus dining halls in Massachusetts with a questionnaire about the “Delta-Cray supercomputer”. A third were told this machine was “like a normal computer but much more powerful”. Another third heard it was capable of experience, by being told it could feel “hunger, fear and other emotions”. The remainder were told it was capable of “self-control and the capacity to plan ahead”, thus suggesting it had agency. Participants were asked to rate how unnerved they were by the supercomputer on a scale where one was “not at all” and five was “extremely”.
 
Dr Gray and Dr Wegner found that those presented with the idea of a supercomputer that was much more powerful than other computers or was capable of planning ahead were not much unnerved. They gave it average scores of 1.3 and 1.4 respectively. By contrast, those presented with the idea of a computer capable of experiencing emotions gave the machine an average of 3.4. These findings are consistent with the researchers’ hypothesis. There seems to be something about finding emotion in a place where it is not expected that upsets people. This led Dr Gray and Dr Wegner to wonder if the reverse, discovering a lack of experience in a place where it was expected, might prove just as upsetting.
 

Read more . . .
 
via The Economist
 
