There has been much recent discussion of the ethics of artificial intelligence (AI), especially regarding the development of robot weapons and, relatedly, the more general question of AI as an existential threat to humanity.
If Skynet of the Terminator movies is going to exterminate us, then it seems pretty tame — if not pointless — to start discussing regulation and liability. But, as legal philosopher John Danaher has pointed out, addressing these areas promptly and thoughtfully could help to reduce existential risk over the longer term.
In relation to AI, regulation and liability are two sides of the same safety/public welfare coin. Regulation is about ensuring that AI systems are as safe as possible; liability is about establishing who we can blame — or, more accurately, get legal redress from — when something goes wrong.
The finger of blame
Taking liability first, let’s consider tort (civil wrong) liability. Imagine the following near-future scenario. A driverless tractor is instructed to drill seed in Farmer A’s field but actually does so in Farmer B’s field.
Let’s assume that Farmer A gave proper instructions. Let’s also assume that there was nothing extra that Farmer A should have done, such as placing radio beacons at field boundaries. Now suppose Farmer B wants to sue for negligence (for ease and speed, we’ll ignore nuisance and trespass).
Is Farmer A liable? Probably not. Is the tractor manufacturer liable? Possibly, but there would be complex arguments around duty and standard of care, such as what are the relevant industry standards, and are the manufacturer’s specifications appropriate in light of those standards? There would also be issues over whether the unwanted planting represented damage to property or pure economic loss.
So far, we have implicitly assumed the tractor manufacturer developed the system software. But what if a third party developed the AI system? What if there was code from more than one developer?
Over time, the further that AI systems move away from classical algorithms and coding, the more they will display behaviours that were not just unforeseen by their creators but were wholly unforeseeable. This is significant because foreseeability is a key ingredient for liability in negligence.
To understand the foreseeability issue better, let’s take a scenario where, perhaps only a decade or two after the planting incident above, an advanced, fully autonomous AI-driven robot accidentally injures or kills a human and there have been no substantial changes to the law. In this scenario, the lack of foreseeability could result in nobody at all being liable in negligence.
Blame the AI robot
But would blaming the AI robot itself actually make a difference here?
Leaving aside whether AI systems can be sued at all, AI manufacturers and developers will probably have to be put back into the frame. This might involve replacing negligence with strict liability — liability applied without any need to prove fault or negligence.
Strict liability already exists for defective product claims in many places. Alternatively, there could be a no-fault liability scheme with a claims pool contributed to by the AI industry.