There has been much discussion of late of the ethics of artificial intelligence (AI), especially regarding robot weapons development and a related but more general discussion about AI as an existential threat to humanity.
If Skynet of the Terminator movies is going to exterminate us, then it seems pretty tame — if not pointless — to start discussing regulation and liability. But, as legal philosopher John Danaher has pointed out, if these areas are promptly and thoughtfully addressed, that could help to reduce existential risk over the longer term.
In relation to AI, regulation and liability are two sides of the same safety/public welfare coin. Regulation is about ensuring that AI systems are as safe as possible; liability is about establishing who we can blame — or, more accurately, get legal redress from — when something goes wrong.
The finger of blame
Taking liability first, let’s consider tort (civil wrong) liability. Imagine the following near-future scenario. A driverless tractor is instructed to drill seed in Farmer A’s field but actually does so in Farmer B’s field.
Let’s assume that Farmer A gave proper instructions. Let’s also assume that there was nothing extra that Farmer A should have done, such as placing radio beacons at field boundaries. Now suppose Farmer B wants to sue for negligence (for ease and speed, we’ll ignore nuisance and trespass).
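The kind of precaution at issue here can be made concrete. One way a driverless tractor might avoid the mistake in the first place is a simple geofence check: verify that its current position falls inside the boundary of the field it was instructed to drill before planting. The sketch below is purely illustrative — the coordinates, field shape, and function names are hypothetical, not drawn from any real system.

```python
def point_in_field(point, boundary):
    """Ray-casting test: is (x, y) inside the polygon 'boundary'?"""
    x, y = point
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # A horizontal ray from (x, y) crosses this edge if the edge
        # straddles y; an odd number of crossings means we are inside.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Farmer A's field as a simple rectangle (hypothetical grid coordinates)
field_a = [(0, 0), (100, 0), (100, 50), (0, 50)]

print(point_in_field((40, 25), field_a))   # tractor is inside Farmer A's field
print(point_in_field((140, 25), field_a))  # tractor has strayed out of bounds
```

In a negligence analysis, whether such a check was an industry-standard precaution — and who was responsible for implementing it — is exactly the sort of question the standard-of-care arguments below would turn on.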
Is Farmer A liable? Probably not. Is the tractor manufacturer liable? Possibly, but there would be complex arguments around duty and standard of care, such as what are the relevant industry standards, and are the manufacturer’s specifications appropriate in light of those standards? There would also be issues over whether the unwanted planting represented damage to property or pure economic loss.
So far, we have implicitly assumed the tractor manufacturer developed the system software. But what if a third party developed the AI system? What if there was code from more than one developer?
Over time, the further that AI systems move away from classical algorithms and coding, the more they will display behaviours that were not just unforeseen by their creators but were wholly unforeseeable. This is significant because foreseeability is a key ingredient for liability in negligence.
To understand the foreseeability issue better, let’s take a scenario where, perhaps only a decade or two after the planting incident above, an advanced, fully autonomous AI-driven robot accidentally injures or kills a human and there have been no substantial changes to the law. In this scenario, the lack of foreseeability could result in nobody at all being liable in negligence.
Blame the AI robot
But would blaming the AI robot itself actually make a difference here?
Leaving aside whether AI systems can be sued, AI manufacturers and developers will probably have to be put back into the frame. This might involve replacing negligence with strict liability – liability applied without any need to prove fault or negligence.
Strict liability already exists for defective product claims in many places. Alternatively, there could be a no-fault liability scheme with a claims pool contributed to by the AI industry.
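To make the claims-pool idea concrete, one simple funding rule would be to split a target pool size pro rata across vendors by the number of autonomous units each has deployed. The sketch below is a hypothetical illustration of that arithmetic only — the vendor names, unit counts, and pool size are all invented, and real schemes would weigh risk far more carefully.

```python
def pool_contributions(deployed_units, pool_target):
    """Split a target pool size pro rata across vendors by units deployed."""
    total = sum(deployed_units.values())
    return {vendor: pool_target * units / total
            for vendor, units in deployed_units.items()}

# Hypothetical vendors and deployment counts
vendors = {"AgriBot Ltd": 5_000, "AutoTractor Inc": 3_000, "FieldAI GmbH": 2_000}
print(pool_contributions(vendors, pool_target=1_000_000))
# → {'AgriBot Ltd': 500000.0, 'AutoTractor Inc': 300000.0, 'FieldAI GmbH': 200000.0}
```

A claimant would then be compensated from the pool without having to prove fault against any particular developer — sidestepping the foreseeability problem entirely.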