
Roboethics: Three Ways To Make Sure That Future Robots Have Morals

Maximilian with Robot Zombie Scarecrow (Photo credit: NineInchNachosX1)

Robots might make us more human

As robots get more autonomous and more powerful, we're going to have to program them not to be evil. There are no guarantees, but there are things we can do to help make sure they remain our friends and not our overlords.

As robots become increasingly intelligent and lifelike, it's not hard to imagine a future where they're completely autonomous. Once robots can do what they please, humans will have to figure out how to keep them from lying, cheating, stealing, and doing all the other nasty things that we carbon-based creatures do on a daily basis. Enter roboethics, a field of robotics research that aims to ensure robots adhere to certain moral standards.

In a recent paper (PDF), researchers at the Georgia Institute of Technology discuss how humans can make sure that robots don’t get out of line.

Have Ethical Governors

The killing robots the military uses today all keep a human in the loop: lethal force isn't applied unless a person makes the final decision. But that could soon change, and when it does, these robots will need to know how to act humanely. What that means in the context of war is debatable, but some sort of ethical boundaries need to be set. Indiscriminate killing robots don't help anyone.

An ethical governor–a piece of the robot's architecture that decides whether a lethal response is warranted based on preset ethical boundaries–may be the answer. A military robot with an ethical governor might hold fire unless the target is inside a designated kill zone and away from protected sites such as medical facilities, for example. It could use a "collateral damage estimator" to make sure it takes out only the target and not all the other people nearby.
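To make the idea concrete, here is a minimal sketch of how such a governor could veto lethal action, written in Python. The class names, thresholds, and the collateral-damage field are illustrative assumptions, not the architecture described in the Georgia Tech paper.

```python
from dataclasses import dataclass

@dataclass
class Target:
    in_kill_zone: bool          # is the target inside a pre-designated engagement area?
    near_protected_site: bool   # is it close to a hospital, school, or similar site?
    estimated_collateral: float # output of a collateral damage estimator, 0.0-1.0

class EthicalGovernor:
    """Vetoes lethal action unless every preset ethical constraint is satisfied."""

    def __init__(self, max_collateral: float = 0.1):
        self.max_collateral = max_collateral  # hypothetical acceptable-harm threshold

    def authorize_lethal_force(self, target: Target) -> bool:
        if not target.in_kill_zone:
            return False  # never engage outside the designated zone
        if target.near_protected_site:
            return False  # protected sites such as medical facilities are off-limits
        if target.estimated_collateral > self.max_collateral:
            return False  # estimator predicts too much harm to bystanders
        return True       # all constraints satisfied; a human could still override

# Example: force is withheld because the target is near a protected site
governor = EthicalGovernor()
print(governor.authorize_lethal_force(
    Target(in_kill_zone=True, near_protected_site=True, estimated_collateral=0.02)))  # False
```

The point of the sketch is that the governor sits between the robot's targeting system and its weapons, and every constraint must pass before force is even considered.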

Establish Emotions

Emotions can help ensure that robots don’t do anything inappropriate–in a military context and elsewhere. A military robot could be made to feel an increasing amount of “guilt” if repeatedly chastised by its superiors. Pile on enough guilt, and the robot might forbid itself from completing any more lethal actions.
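One way to picture that mechanism is a counter that accumulates "guilt" with each reprimand and locks out lethal actions once a threshold is crossed. The sketch below is a hedged illustration; the variable names and threshold values are assumptions, not the researchers' actual model.

```python
class GuiltModel:
    """Accumulates 'guilt' from reprimands; past a threshold, lethal actions are disabled."""

    def __init__(self, threshold: float = 1.0, increment: float = 0.25):
        self.guilt = 0.0
        self.threshold = threshold  # hypothetical cutoff for self-imposed restraint
        self.increment = increment  # guilt added each time the robot is chastised

    def reprimand(self) -> None:
        """Called when a superior chastises the robot for an action."""
        self.guilt = min(self.guilt + self.increment, self.threshold)

    def lethal_actions_permitted(self) -> bool:
        return self.guilt < self.threshold

# After four reprimands the robot forbids itself any further lethal actions
model = GuiltModel()
for _ in range(4):
    model.reprimand()
print(model.lethal_actions_permitted())  # False
```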

Emotions can also be useful in non-military human-robot interactions. Deception could help a robot in a search-and-rescue operation by allowing it to calmly tell panicked victims that they will be fine, while a confused patient with Alzheimer’s might need to be deceived by a nursing robot. But future programmers need to remember: It’s a slippery slope from benign deception to having autonomous robots that compulsively lie to get what they want.

Respect Humans

If robots don’t respect humans, we’re in trouble.


Read more . . .

via FastCoExist – Ariel Schwartz

