Machines in control

Image: Dr. Frankenstein and His Monster (by Dunechaser via Flickr)
How real is the threat of autonomous technology?

HOLLYWOOD has made at least half a dozen films based on Mary Shelley’s gothic masterpiece—mindless travesties all of them, even the Kenneth Branagh version released in 1994. That is a pity because the parable of the Genevan protagonist, Victor Frankenstein, deserves wider appreciation, especially among those concerned about technology getting out of control.

In the actual story, there is no crazed assistant, no criminal brain stolen from a grave, no violent rampage, and no angry mob hunting down and killing the monster. Instead, the rejected creation pleads to be accepted, and cared for, by its creator and tries hard to fit in with society. Yes, there is violence and revenge—it wouldn’t be a gothic novel without them. In the end, however, the autonomous being departs to commit suicide after its creator dies of disease.

What makes the tale such an enduring classic are the moral questions it raises about creation, responsibility and unintended consequences. The lessons are as relevant in today’s world of autonomous technology—whether driverless vehicles or surgical robots—as they were in 1818 when the melodrama first scared the daylights out of Georgian England.

Whether consciously or not, the Royal Academy of Engineering in Britain seems lately to have taken Shelley’s fable to heart. In a report published last week, the academy urges opinion-formers to start thinking seriously about the implications of autonomous technology—machinery that can act independently by replicating human behaviour. The intention is to have such machines do the sort of jobs people find dull, dirty or dangerous. Many such systems either already exist or are closer to reality than is generally realised. And right now, the ethical, let alone the legal, framework for dealing with any untoward consequences of their actions simply does not exist.

The academy looked at two areas of the technology that are expanding fast: autonomous transport and automated help around the home for the elderly. Within ten years, driverless vehicles that use lasers and radar to sense their surroundings will be able to thread their way through traffic. They are already widespread in controlled environments such as warehouses, airports and mines. Whether they will be seen on the public highways is not a technological issue but a political and legal one.
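
To make the sensing idea concrete, here is a minimal sketch in Python of how range readings from laser and radar sensors might be fused to decide whether the path ahead is clear. Every name and threshold here is hypothetical, chosen for illustration rather than drawn from any real vehicle's software.

```python
from dataclasses import dataclass

@dataclass
class RangeReading:
    sensor: str        # e.g. "lidar" or "radar" (hypothetical labels)
    distance_m: float  # distance to the nearest obstacle ahead, in metres

def path_is_clear(readings: list[RangeReading], safe_gap_m: float = 30.0) -> bool:
    """Conservatively fuse the sensors: trust the closest obstacle reported
    by any of them, and demand at least a minimum safe gap."""
    if not readings:
        return False  # no data at all: fail safe, treat the path as blocked
    nearest = min(r.distance_m for r in readings)
    return nearest >= safe_gap_m

# Example: the lidar sees 48.5 m of clear road, but the radar reports a
# vehicle only 22 m ahead, so the conservative fusion says "not clear".
readings = [RangeReading("lidar", 48.5), RangeReading("radar", 22.0)]
print(path_is_clear(readings))  # False: 22 m < 30 m safe gap
```

Taking the pessimistic minimum across sensors is a deliberately cautious design choice; real systems weigh sensor reliability far more carefully, but the fail-safe principle is the same.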

With their digital controllers programmed to obey the highway code, driverless trucks will be far safer and more predictable than human-operated vehicles. They won’t suddenly pull out in front of you, or refuse to give way when they should. But if a mechanical failure or software glitch should ever cause a driverless truck to collide with a car, who would be legally responsible—the truck company, the manufacturer, the systems engineer? (Under today’s product-liability law, the motorist would doubtless get off scot-free, even if the accident was his fault.)
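
As a toy illustration of what "programmed to obey the highway code" might mean in practice, here is a sketch of a single rule, giving way at a junction, encoded as an explicit check. The types and rule are invented for this example; they stand in for the far larger rulebooks a real controller would carry.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    speed_kph: float
    at_give_way_line: bool

@dataclass
class Junction:
    cross_traffic_approaching: bool

def may_proceed(truck: Vehicle, junction: Junction) -> bool:
    """One highway-code rule: at a give-way line, yield whenever cross
    traffic is approaching; otherwise the truck may proceed."""
    if truck.at_give_way_line and junction.cross_traffic_approaching:
        return False  # the rule says give way: the controller must stop
    return True

# A truck at a give-way line must yield to approaching traffic...
print(may_proceed(Vehicle(15.0, True), Junction(True)))   # False
# ...and may proceed once the junction is clear.
print(may_proceed(Vehicle(15.0, True), Junction(False)))  # True
```

Because such rules are explicit and deterministic, a machine that follows them is predictable in exactly the way the paragraph above describes, which is also why responsibility becomes so hard to assign when the code, rather than a driver, makes the decision.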
