There has been much discussion of late of the ethics of artificial intelligence (AI), especially regarding robot weapons development and a related but more general discussion about AI as an existential threat to humanity.
If Skynet of the Terminator movies is going to exterminate us, then it seems pretty tame — if not pointless — to start discussing regulation and liability. But, as legal philosopher John Danaher has pointed out, if these areas are promptly and thoughtfully addressed, that could help to reduce existential risk over the longer term.
In relation to AI, regulation and liability are two sides of the same safety/public welfare coin. Regulation is about ensuring that AI systems are as safe as possible; liability is about establishing who we can blame — or, more accurately, get legal redress from — when something goes wrong.
The finger of blame
Taking liability first, let’s consider tort (civil wrong) liability. Imagine the following near-future scenario. A driverless tractor is instructed to drill seed in Farmer A’s field but actually does so in Farmer B’s field.
Let’s assume that Farmer A gave proper instructions. Let’s also assume that there was nothing extra that Farmer A should have done, such as placing radio beacons at field boundaries. Now suppose Farmer B wants to sue for negligence (for ease and speed, we’ll ignore nuisance and trespass).
Is Farmer A liable? Probably not. Is the tractor manufacturer liable? Possibly, but there would be complex arguments around duty and standard of care, such as what are the relevant industry standards, and are the manufacturer’s specifications appropriate in light of those standards? There would also be issues over whether the unwanted planting represented damage to property or pure economic loss.
So far, we have implicitly assumed the tractor manufacturer developed the system software. But what if a third party developed the AI system? What if there was code from more than one developer?
The further AI systems move away from classical, hand-coded algorithms, the more they will display behaviours that were not just unforeseen by their creators but wholly unforeseeable. This is significant because foreseeability is a key ingredient of liability in negligence.
To understand the foreseeability issue better, let’s take a scenario where, perhaps only a decade or two after the planting incident above, an advanced, fully autonomous AI-driven robot accidentally injures or kills a human and there have been no substantial changes to the law. In this scenario, the lack of foreseeability could result in nobody at all being liable in negligence.
Blame the AI robot
Why not deem the robot itself liable? After all, there has already been some discussion about AI personhood and possible criminal liability of AI systems.
But would that approach actually make a difference here? Probably not on its own: an AI system has no assets of its own from which to pay damages.
So, leaving aside whether AI systems can be sued at all, AI manufacturers and developers would probably have to be put back into the frame. This might involve replacing negligence with strict liability – liability applied without any need to prove fault.
Strict liability already applies to defective product claims in many jurisdictions. Alternatively, there could be a no-fault compensation scheme, with a claims pool funded by the AI industry.
Read more: Who’s to blame when artificial intelligence systems go wrong?