If We’re Not Careful, Self-Driving Cars Will Be The Cornerstone Of The DRM’d, Surveillance Dystopias Of Tomorrow

via www.planetizen.com
We’ve talked a lot about the ethical and programming problems currently facing those designing self-driving cars. Some are relatively simple, such as how to program cars to bend the rules slightly and behave more like human drivers. Others are more complex, including whether cars should be programmed to kill their occupant if it means saving a school bus full of children (aka the trolley problem). And once automated cars are commonplace, should law enforcement have access to a car’s code in order to automatically pull a driver over? There’s an ocean of questions we’re not really ready to answer.

But as we accelerate down the evolutionary highway of self-driving technology, the biggest question of all becomes: who gets to control this code? Will the automotive update process be transparent? Will drivers retain the ability to modify their cars’ code? Will automakers adapt and stop implementing the kind of papier-mâché-grade security that has produced an endless parade of stories about hacked automobiles whose vulnerabilities take five years to patch?

Trying to force the issue before there’s a hacker-induced automotive mass fatality, Ford, GM and Toyota were hit by a class action lawsuit earlier this year claiming the car companies had failed to adequately disclose the problems caused by abysmal auto security:

“Among other things, the lawsuit alleges Toyota, Ford and GM concealed or suppressed material facts concerning the safety, quality and functionality of vehicles equipped with these systems. It charges the companies with fraud, false advertising and violation of consumer protection statutes. Stanley continued, “We shouldn’t need to wait for a hacker or terrorist to prove exactly how dangerous this is before requiring car makers to fix the defect. Just as Honda has been forced to recall cars to repair potentially deadly airbags, Toyota, Ford and GM should be required to recall cars with these dangerous electronic systems.”

This month a court ruled that yes, we will probably have to wait for someone to die before automakers are held liable for lagging automotive security. The case was ultimately dismissed (pdf), with the court ruling that the plaintiffs had yet to prove sufficiently concrete harms, and that potential damage (to drivers and to others) remains speculative. At the pace self-driving and smart car technology is advancing, one gets the sneaking suspicion we won’t have long to wait before those harms become notably more concrete.

But however complicated these legal, ethical, and technical questions are, they become immeasurably more complicated once you realize that smart cars will ultimately form the backbone of the smart cities of tomorrow, working in concert with city infrastructure to build a living urban organism designed to be as efficient as mathematically possible. As Cory Doctorow noted last week, this makes ensuring code transparency and consumer power more important than ever:

“The major attraction of autonomous vehicles for city planners is the possibility that they’ll reduce the number of cars on the road, by changing the norm from private ownership to a kind of driverless Uber. Uber can even be seen as a dry-run for autonomous, ever-circling, point-to-point fleet vehicles in which humans stand in for the robots to come – just as globalism and competition paved the way for exploitative overseas labour arrangements that in turn led to greater automation and the elimination of workers from many industrial processes.”

Read more: If We’re Not Careful, Self-Driving Cars Will Be The Cornerstone Of The DRM’d, Surveillance Dystopias Of Tomorrow


