
ASM experimental interface. The left-side monitor displays the lead Soldier’s point of view of the task environment. The right-side monitor displays the ASM’s communication interface.
What does it take for a human to trust a robot? That is what Army researchers are uncovering in a new study into how humans and robots work together.
Research into human-agent teaming, or HAT, has examined how the transparency of agents, such as robots, unmanned vehicles or software agents, influences human trust, task performance, workload and perceptions of the agent. Agent transparency refers to the agent’s ability to convey to humans its intent, reasoning process and future plans.
New Army-led research finds that human confidence in robots decreases after the robot makes a mistake, even when it is transparent about its reasoning process. The paper, “Agent Transparency and Reliability in Human-Robot Interaction: The Influence on User Confidence and Perceived Reliability,” has been published in the August issue of IEEE Transactions on Human-Machine Systems.
To date, research has largely focused on HAT with perfectly reliable intelligent agents, meaning agents that do not make mistakes, but this is one of the few studies to explore how agent transparency interacts with agent reliability. In this latest study, humans witnessed a robot making a mistake, and researchers examined whether the humans perceived the robot to be less reliable, even when they were given insight into the robot’s reasoning process.
“Understanding how the robot’s behavior influences their human teammates is crucial to the development of effective human-robot teams, as well as the design of interfaces and communication methods between team members,” said Dr. Julia Wright, principal investigator for this project and researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory, also known as ARL. “This research contributes to the Army’s Multi-Domain Operations efforts to ensure overmatch in artificial intelligence-enabled capabilities. But it is also interdisciplinary, as its findings will inform the work of psychologists, roboticists, engineers, and system designers who are working toward facilitating better understanding between humans and autonomous agents in the effort to make autonomous teammates rather than simply tools.”
This research was a joint effort between ARL and the University of Central Florida’s Institute for Simulation and Training, and is the third and final study in the Autonomous Squad Member, or ASM, project, sponsored by the Office of the Secretary of Defense’s Autonomy Research Pilot Initiative. The ASM is a small ground robot that interacts and communicates with an infantry squad.
Prior ASM studies investigated how a robot would communicate with a human teammate. Using the Situation awareness-based Agent Transparency, or SAT, model as a guide, researchers explored and tested various visualization methods for conveying the agent’s goals, intents, reasoning, constraints, and projected outcomes. Based on these early findings, an at-a-glance iconographic module was developed and then used in subsequent studies to explore the efficacy of agent transparency in HAT.
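To make the SAT levels concrete, here is a minimal illustrative sketch, in Python, of how an agent’s transparency message might be structured. The class and field names are assumptions made for this example and do not reflect the actual ASM interface.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SATMessage:
    """Hypothetical container for the three SAT transparency levels."""
    goal: str                                            # Level 1: the agent's current goal/intent
    action: str                                          # Level 1: the action being taken
    reasoning: List[str] = field(default_factory=list)   # Level 2: factors and constraints considered
    projected_outcome: str = ""                          # Level 3: expected result of the action
    uncertainty: float = 0.0                             # Level 3: how unsure the agent is (0 = certain)

# Example message that an at-a-glance display could render as icons
msg = SATMessage(
    goal="Rejoin the squad at the rally point",
    action="Take the northern route",
    reasoning=["Southern route is blocked", "Fuel is sufficient for the detour"],
    projected_outcome="Arrive in roughly six minutes",
    uncertainty=0.2,
)
print(msg)
```

A structure like this separates what the agent is doing (Level 1) from why (Level 2) and what it expects to happen (Level 3), which is the distinction the iconographic module was built to convey at a glance.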
Researchers conducted this study in a simulated environment, in which participants observed a human-agent Soldier team that included the ASM traversing a training course. The participants’ task was to monitor the team and evaluate the robot. The Soldier-robot team encountered various events along the course and responded accordingly. While the Soldiers always responded correctly to each event, the robot occasionally misunderstood the situation, leading to incorrect actions. The amount of information the robot shared also varied between trials: the robot always explained its actions, the reasons behind them and their expected outcomes, but in some trials it additionally shared the underlying logic of its decisions. Participants viewed multiple Soldier-robot teams, and their assessments of the robots were compared.
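As a rough sketch of how such a crossed design might be laid out in code, the snippet below pairs a reliability factor with a transparency factor; the condition labels, error rate and event counts are illustrative assumptions, not the study’s actual protocol.

```python
from itertools import product
import random

# Illustrative factor levels: whether the robot ever errs, and how much it explains.
RELIABILITY = ["reliable", "unreliable"]
TRANSPARENCY = ["actions+outcomes", "actions+outcomes+reasoning"]

def build_conditions(events_per_team: int = 8, error_rate: float = 0.25, seed: int = 0):
    """Cross reliability with transparency and mark which events the robot misreads."""
    rng = random.Random(seed)
    conditions = []
    for reliability, transparency in product(RELIABILITY, TRANSPARENCY):
        events = [
            {"event": i,
             "robot_correct": reliability == "reliable" or rng.random() > error_rate}
            for i in range(events_per_team)
        ]
        conditions.append({"reliability": reliability,
                           "transparency": transparency,
                           "events": events})
    return conditions

for c in build_conditions():
    errors = sum(not e["robot_correct"] for e in c["events"])
    print(f'{c["reliability"]:>10} | {c["transparency"]:<28} | errors: {errors}')
```

In a design organized this way, participants’ ratings of trust and perceived reliability can then be compared across the reliability and transparency cells.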
The study found that, regardless of how transparently the robot explained its reasoning, the robot’s reliability was the determining factor in participants’ projections of the robot’s future reliability, trust in the robot and perceptions of the robot. That is, after participants witnessed an error, they continued to rate the robot’s reliability lower, even when the robot made no subsequent errors. While these evaluations slowly improved over time as long as the robot committed no further errors, participants’ confidence in their own assessments of the robot’s reliability remained lower for the remainder of the trials than that of participants who never saw an error. Furthermore, participants who witnessed a robot error reported lower trust in the robot than those who never witnessed one.
Increasing agent transparency was found to improve participants’ trust in the robot, but only when the robot was collecting or filtering information. This could indicate that sharing in-depth information may mitigate some of the effects of unreliable automation for specific tasks, Wright said. Additionally, participants rated the unreliable robot as less animate, likable, intelligent, and safe than the reliable robot.
“Earlier studies suggest that context matters in determining the usefulness of transparency information,” Wright said. “We need to better understand which tasks require more in-depth understanding of the agent’s reasoning, and how to discern what that depth would entail. Future research should explore ways to deliver transparency information based on the tasking requirements.”