Photo credit: Denys Nevozhai
There’s a significant flaw in how programmers are currently addressing ethical concerns related to artificial intelligence (AI) and autonomous vehicles (AVs): existing approaches don’t account for the fact that people might try to use AVs to do harm.
For example, let’s say that there is an autonomous vehicle with no passengers and it is about to crash into a car containing five people. It can avoid the collision by swerving out of the road, but it would then hit a pedestrian.
Most discussions of ethics in this scenario focus on whether the autonomous vehicle’s AI should be selfish (protecting the vehicle and its cargo) or utilitarian (choosing the action that harms the fewest people). But that either/or approach to ethics can raise problems of its own.
“Current approaches to ethics and autonomous vehicles are a dangerous oversimplification – moral judgment is more complex than that,” says Veljko Dubljević, an assistant professor in the Science, Technology & Society (STS) program at North Carolina State University and author of a paper outlining this problem and a possible path forward. “For example, what if the five people in the car are terrorists? And what if they are deliberately taking advantage of the AI’s programming to kill the nearby pedestrian or hurt other people? Then you might want the autonomous vehicle to hit the car with five passengers.
“In other words, the simplistic approach currently being used to address ethical considerations in AI and autonomous vehicles doesn’t account for malicious intent. And it should.”
As an alternative, Dubljević proposes using the so-called Agent–Deed–Consequence (ADC) model as a framework that AIs could use to make moral judgments. The ADC model judges the morality of a decision based on three variables.
First, is the agent’s intent good or bad? Second, is the deed or action itself good or bad? Lastly, is the outcome or consequence good or bad? This approach allows for considerable nuance.
For example, most people would agree that running a red light is bad. But what if you run a red light in order to get out of the way of a speeding ambulance? And what if running the red light means that you avoided a collision with that ambulance?
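To make the three-variable structure concrete, here is a minimal sketch of how an ADC-style judgment might be represented in code. The class name, boolean encoding, and simple count-based score are illustrative assumptions for this article, not part of the published model, which would require far richer representations of intent, action, and outcome.

```python
from dataclasses import dataclass


@dataclass
class ADCJudgment:
    """A toy encoding of the Agent-Deed-Consequence model's three variables."""
    agent_good: bool        # Is the agent's intent good?
    deed_good: bool         # Is the action itself good?
    consequence_good: bool  # Is the outcome good?

    def moral_score(self) -> int:
        """Count how many of the three components are positive (0-3)."""
        return sum([self.agent_good, self.deed_good, self.consequence_good])


# Running a red light to dodge a speeding ambulance:
# the deed is bad, but the intent and the consequence are good.
judgment = ADCJudgment(agent_good=True, deed_good=False, consequence_good=True)
print(judgment.moral_score())  # prints 2
```

The point of the sketch is only that the three components vary independently: the same deed (running a red light) can land anywhere on the scale depending on intent and outcome, which is the nuance a single selfish-versus-utilitarian switch cannot capture.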
“The ADC model would allow us to get closer to the flexibility and stability that we see in human moral judgment, but that does not yet exist in AI,” says Dubljević. “Here’s what I mean by stable and flexible. Human moral judgment is stable because most people would agree that lying is morally bad. But it’s flexible because most people would also agree that people who lied to Nazis in order to protect Jews were doing something morally good.
“But while the ADC model gives us a path forward, more research is needed,” Dubljević says. “I have led experimental work on how both philosophers and lay people approach moral judgment, and the results were valuable. However, that work gave people information in writing. More studies of human moral judgment are needed that rely on more immediate means of communication, such as virtual reality, if we want to confirm our earlier findings and implement them in AVs. Also, vigorous testing with driving simulation studies should be done before any putatively ‘ethical’ AVs start sharing the road with humans on a regular basis. Vehicle terror attacks have, unfortunately, become more common, and we need to be sure that AV technology will not be misused for nefarious purposes.”