
Participants rated from 1 to 5 stars the ethical decision of either a human or an AI driver.
(Image courtesy of Johann Caro-Burnett, Hiroshima University)
With the accelerating evolution of technology, artificial intelligence (AI) plays a growing role in decision-making processes. Humans are becoming increasingly dependent on algorithms to process information, recommend certain behaviors, and even take actions on their behalf. A research team has studied how humans react to the introduction of AI decision making. Specifically, they explored the question, “Is society ready for AI ethical decision making?” by studying human interaction with autonomous cars.
The team published their findings on May 6, 2022, in the Journal of Behavioral and Experimental Economics.
In the first of two experiments, the researchers presented 529 human subjects with an ethical dilemma a driver might face. In the scenario the researchers created, the car driver had to decide whether to crash the car into one group of people or another – the collision was unavoidable. The crash would cause severe harm to one group of people but would save the lives of the other group. The subjects in the study had to rate the driver’s decision twice: once when the driver was a human and once when the driver was an AI. This first experiment was designed to measure the bias people might have against AI ethical decision making.
In their second experiment, 563 human subjects responded to the researchers’ questions. The researchers determined how people react to the debate over AI ethical decisions once they become part of social and political discussions. In this experiment, there were two scenarios. One involved a hypothetical government that had already decided to allow autonomous cars to make ethical decisions. The other scenario allowed the subjects to “vote” on whether to allow the autonomous cars to make ethical decisions. In both cases, the subjects could choose to be in favor of or against the decisions made by the technology. This second experiment was designed to test the effect of two alternative ways of introducing AI into society.
The researchers observed that when the subjects were asked to evaluate the ethical decisions of either a human or AI driver, they did not have a definitive preference for either. However, when the subjects were asked their explicit opinion on whether a driver should be allowed to make ethical decisions on the road, the subjects had a stronger opinion against AI-operated cars. The researchers believe that the discrepancy between the two results is caused by a combination of two elements.
The first element is that individual people believe society as a whole does not want AI ethical decision making, and so they assign a positive weight to their beliefs when asked for their opinion on the matter. “Indeed, when participants are asked explicitly to separate their answers from those of society, the difference between the permissibility for AI and human drivers vanishes,” said Johann Caro-Burnett, an assistant professor in the Graduate School of Humanities and Social Sciences at Hiroshima University.
The second element is that when introducing this new technology into society, allowing discussion of the topic has mixed results depending on the country. “In regions where people trust their government and have strong political institutions, information and decision-making power improve how subjects evaluate the ethical decisions of AI. In contrast, in regions where people do not trust their government and have weak political institutions, decision-making capability deteriorates how subjects evaluate the ethical decisions of AI,” said Caro-Burnett.
“We find that there is a social fear of AI ethical decision-making. However, the source of this fear is not intrinsic to individuals. Indeed, this rejection of AI comes from what individuals believe is the society’s opinion,” said Shinji Kaneko, a professor in the Graduate School of Humanities and Social Sciences, Hiroshima University, and the Network for Education and Research on Peace and Sustainability. So when not being asked explicitly, people do not show any signs of bias against AI ethical decision-making. However, when asked explicitly, people show an aversion to AI. Furthermore, where there is added discussion and information on the topic, the acceptance of AI improves in developed countries and worsens in developing countries.
The researchers believe this rejection of a new technology, which is mostly due to incorporating individuals’ beliefs about society’s opinion, is likely to apply to other machines and robots. “Therefore, it will be important to determine how to aggregate individual preferences into one social preference. Moreover, this task will also have to be different across countries, as our results suggest,” said Kaneko.
Original Article: Researchers study society’s readiness for AI ethical decision making