via Cardiff University
Showing prejudice towards others does not require a high level of cognitive ability and could easily be exhibited by artificially intelligent machines, new research has suggested.
Computer science and psychology experts from Cardiff University and MIT have shown that groups of autonomous machines could demonstrate prejudice by simply identifying, copying and learning this behaviour from one another.
It may seem that prejudice is a uniquely human phenomenon, one that requires human cognition to form an opinion of, or to stereotype, a particular person or group.
Though some types of computer algorithms have already exhibited prejudice, such as racism and sexism, after learning from public records and other data generated by humans, this new work demonstrates the possibility of AI systems evolving prejudicial groups on their own.
The new findings, which have been published in the journal Scientific Reports, are based on computer simulations of how similarly prejudiced individuals, or virtual agents, can form a group and interact with each other.
In a game of give and take, each individual decides whether to donate to somebody inside their own group or in a different group, based on the recipient's reputation as well as their own donating strategy, which includes their level of prejudice towards outsiders.
As the game unfolds and a supercomputer racks up thousands of simulations, each individual begins to learn new strategies by copying others either within their own group or the entire population.
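The mechanism described above can be illustrated with a toy agent-based sketch. This is not the paper's model; all parameters (group counts, payoff values, round numbers) and the `Agent`, `play_round`, and `imitate` names are illustrative assumptions. Each agent holds a prejudice level (the probability of refusing to donate to an outsider) and, after each round, copies the prejudice level of a better-performing agent, mirroring the copy-the-higher-payoff social learning the study describes.

```python
import random

# Toy illustration of the donation game; parameters are assumed, not from the paper.
NUM_GROUPS = 4
AGENTS_PER_GROUP = 10
BENEFIT, COST = 1.0, 0.5   # payoff to the recipient vs. cost to the donor
ROUNDS = 200

class Agent:
    def __init__(self, group):
        self.group = group
        # prejudice in [0, 1]: probability of refusing to donate to an outsider
        self.prejudice = random.random()
        self.payoff = 0.0

def play_round(agents):
    # every agent gets one chance to donate to a random other agent
    for donor in agents:
        recipient = random.choice([a for a in agents if a is not donor])
        out_group = recipient.group != donor.group
        # donate unless prejudice against outsiders blocks the donation
        if not (out_group and random.random() < donor.prejudice):
            donor.payoff -= COST
            recipient.payoff += BENEFIT

def imitate(agents):
    # social learning: copy the prejudice level of a higher-payoff agent
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.prejudice = model.prejudice

agents = [Agent(g) for g in range(NUM_GROUPS) for _ in range(AGENTS_PER_GROUP)]
for _ in range(ROUNDS):
    for a in agents:
        a.payoff = 0.0
    play_round(agents)
    imitate(agents)

avg_prejudice = sum(a.prejudice for a in agents) / len(agents)
print(f"average prejudice after {ROUNDS} rounds: {avg_prejudice:.2f}")
```

Note that the imitation step is purely mechanical (compare payoffs, copy a number), which is the study's central point: no advanced cognition is needed for prejudice to spread.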
Co-author of the study Professor Roger Whitaker, from Cardiff University’s Crime and Security Research Institute and the School of Computer Science and Informatics, said: “By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it.”
The findings involve individuals updating their prejudice levels by preferentially copying those that gain a higher short-term payoff, meaning that these decisions do not necessarily require advanced cognitive abilities.
“It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population,” Professor Whitaker continued.
“Many of the AI developments that we are seeing involve autonomy and self-control, meaning that the behaviour of devices is also influenced by others around them. Vehicles and the Internet of Things are two recent examples. Our study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource.”
A further interesting finding from the study was that under particular conditions, which include more distinct subpopulations being present within a population, it was more difficult for prejudice to take hold.
“With a greater number of subpopulations, alliances of non-prejudicial groups can cooperate without being exploited. This also diminishes their status as a minority, reducing the susceptibility to prejudice taking hold. However, this also requires circumstances where agents have a higher disposition towards interacting outside of their group,” Professor Whitaker concluded.