
How should intelligent machines be designed so as to “earn” the trust of humans? New models are informing these designs. (Purdue University photo/Marshall Farthing)
New “classification models” sense how well humans trust intelligent machines they collaborate with, a step toward improving the quality of interactions and teamwork.
The long-term goal of this field of research is to design intelligent machines capable of changing their behavior to enhance human trust in them. The new models were developed in research led by assistant professor Neera Jain and associate professor Tahira Reid in Purdue University’s School of Mechanical Engineering.
“Intelligent machines, and more broadly, intelligent systems are becoming increasingly common in the everyday lives of humans,” Jain said. “As humans are increasingly required to interact with intelligent systems, trust becomes an important factor for synergistic interactions.”
For example, aircraft pilots and industrial workers routinely interact with automated systems. Humans will sometimes override these intelligent machines unnecessarily if they think the system is faltering.
“It is well established that human trust is central to successful interactions between humans and machines,” Reid said.
The researchers have developed two types of “classifier-based empirical trust sensor models,” a step toward improving trust between humans and intelligent machines.
The work aligns with Purdue’s Giant Leaps celebration, acknowledging the university’s global advancements made in AI, algorithms and automation as part of Purdue’s 150th anniversary. This is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.
The models use two techniques that provide data to gauge trust: electroencephalography (EEG) and galvanic skin response (GSR). The first records brainwave patterns, and the second monitors changes in the electrical characteristics of the skin, providing psychophysiological “feature sets” correlated with trust.
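The paper’s full feature pipeline is not reproduced here, but a minimal sketch of this kind of feature extraction, assuming windowed EEG band powers and simple GSR summary statistics (the function names, parameters, and band choices below are illustrative assumptions, not the authors’ published pipeline), could look like this:

```python
import numpy as np
from scipy.signal import welch

def eeg_band_power(window, fs=256):
    """Mean power in standard EEG frequency bands for one window.

    window: array of shape (n_channels, n_samples) from the headset.
    Returns one mean band-power value per (band, channel) pair.
    """
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    freqs, psd = welch(window, fs=fs, nperseg=min(fs, window.shape[-1]))
    feats = []
    for lo, hi in bands.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))  # mean power per channel
    return np.concatenate(feats)

def gsr_features(window):
    """Simple summary statistics of a galvanic skin response window."""
    return np.array([window.mean(), window.std(), np.ptp(window)])

def feature_vector(eeg_window, gsr_window):
    """One psychophysiological feature vector for a single time window."""
    return np.concatenate([eeg_band_power(eeg_window), gsr_features(gsr_window)])
```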
Forty-five human subjects donned wireless EEG headsets and wore a device on one hand to measure galvanic skin response.
One of the new models, a “general trust sensor model,” uses the same set of psychophysiological features for all 45 participants. The other model is customized for each human subject, yielding higher mean accuracy at the expense of longer training time. The two models had mean accuracies of 71.22 percent and 78.55 percent, respectively.
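As a rough illustration of the difference between the two models (the classifier type and data layout below are assumptions, not the paper’s stated choices), the general model pools every participant’s labeled windows into one training set, while the customized model is fit separately per subject:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_general(X_by_subject, y_by_subject):
    """General model: one classifier fit on all subjects' pooled windows."""
    X = np.vstack(list(X_by_subject.values()))
    y = np.concatenate(list(y_by_subject.values()))
    return RandomForestClassifier(random_state=0).fit(X, y)

def train_customized(X_by_subject, y_by_subject):
    """Customized models: one classifier per subject.

    Higher mean accuracy than the general model, but training must be
    repeated for every new participant.
    """
    return {s: RandomForestClassifier(random_state=0).fit(X_by_subject[s],
                                                          y_by_subject[s])
            for s in X_by_subject}
```

The trade-off the researchers report follows directly from this structure: a customized model fits each individual’s physiology but must be retrained for every new participant.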
It is the first time EEG measurements have been used to gauge trust in real time, that is, without delay.
“We are using these data in a very new way,” Jain said. “We are looking at it in sort of a continuous stream as opposed to looking at brain waves after a specific trigger or event.”
Findings are detailed in a research paper appearing in a special issue of the Association for Computing Machinery’s Transactions on Interactive Intelligent Systems. The journal’s special issue is titled “Trust and Influence in Intelligent Human-Machine Interaction.” The paper was authored by mechanical engineering graduate student Kumar Akash; former graduate student Wan-Lin Hu, who is now a postdoctoral research associate at Stanford University; Jain and Reid.
“We are interested in using feedback-control principles to design machines that are capable of responding to changes in human trust level in real time to build and manage trust in the human-machine relationship,” Jain said. “In order to do this, we require a sensor for estimating human trust level, again in real time. The results presented in this paper show that psychophysiological measurements could be used to do this.”
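The feedback loop Jain describes might be sketched as follows; `sensor` and `machine` here are entirely hypothetical interfaces standing in for the trust estimator and the adaptive machine:

```python
def trust_aware_control(sensor, machine, target_trust=0.7):
    """Regulate machine behavior around a target human trust level.

    sensor.estimate() stands in for a real-time trust estimate in [0, 1]
    computed from psychophysiological features; machine.adapt() stands in
    for whatever behavior change the designer chooses (e.g., offering more
    explanation when trust is low, acting more autonomously when it is high).
    """
    while machine.is_running():
        trust = sensor.estimate()
        error = target_trust - trust  # feedback error driving the adaptation
        machine.adapt(error)
```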
The issue of human trust in machines is important for the efficient operation of “human-agent collectives.”
“The future will be built around human-agent collectives that will require efficient and successful coordination and collaboration between humans and machines,” Jain said. “Say there is a swarm of robots assisting a rescue team during a natural disaster. In our work we are dealing with just one human and one machine, but ultimately we hope to scale up to teams of humans and machines.”
Algorithms have been introduced to automate various processes.
“But we still have humans there who monitor what’s going on,” Jain said. “There is usually an override feature, where if they think something isn’t right they can take back control.”
Sometimes this action isn’t warranted.
“You have situations in which humans may not understand what is happening so they don’t trust the system to do the right thing,” Reid said. “So they take back control even when they really shouldn’t.”
In some cases, such as a pilot overriding the autopilot, taking back control can actually hinder the safe operation of the aircraft and cause accidents.
“A first step toward designing intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust level in real time,” Jain said.
To validate their method, the researchers asked 581 online participants to operate a driving simulation in which a computer identified road obstacles. In some scenarios, the computer correctly identified obstacles 100 percent of the time, whereas in other scenarios it incorrectly identified them 50 percent of the time.
“So, in some cases it would tell you there is an obstacle, so you hit the brakes and avoid an accident, but in other cases it would incorrectly tell you an obstacle exists when there was none, so you hit the brakes for no reason,” Reid said.
The testing allowed the researchers to identify psychophysiological features that are correlated to human trust in intelligent systems, and to build a trust sensor model accordingly. “We hypothesized that the trust level would be high in reliable trials and be low in faulty trials, and we validated this hypothesis using responses collected from 581 online participants,” she said.
The results validated that the method effectively induced trust and distrust in the intelligent machine.
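A sketch of the labeling scheme this hypothesis implies, assuming each measurement window simply inherits its trial’s reliability as a trust label:

```python
def label_windows(trials):
    """Build classifier training data from simulation trials.

    trials: iterable of (windows, reliable) pairs, where `reliable` is
    True for trials in which the computer always identified obstacles
    correctly and False for trials in which it was wrong half the time.
    """
    X, y = [], []
    for windows, reliable in trials:
        for w in windows:
            X.append(w)
            y.append(1 if reliable else 0)  # 1 = trusting, 0 = distrusting
    return X, y
```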
“In order to estimate trust in real time, we require the ability to continuously extract and evaluate key psychophysiological measurements,” Jain said. “This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor.”
The EEG headset records signals over nine channels, with each channel picking up signals from a different part of the brain.
“Everyone’s brainwaves are different, so you need to make sure you are building a classifier that works for all humans.”
For autonomous systems, human trust can be classified into three categories: dispositional, situational, and learned.
Dispositional trust refers to the component of trust that is dependent on demographics such as gender and culture, which carry potential biases.
“We know there are probably nuanced differences that should be taken into consideration,” Reid said. “Women trust differently than men, for example, and trust also may be affected by differences in age and nationality.”
Situational trust may be affected by a task’s level of risk or difficulty, while learned trust is based on the human’s past experience with autonomous systems.
The models Jain and Reid developed are classification algorithms.
“The idea is to be able to use these models to classify when someone is likely feeling trusting versus likely feeling distrusting,” she said.
Jain and Reid have also investigated dispositional trust, accounting for gender and cultural differences, as well as dynamic models that predict how trust will change over time based on the data.
Learn more: New models sense human trust in smart machines