Credit: B. Hayes/NIST
NIST scientists have proposed four principles for judging how explainable an artificial intelligence’s decisions are.
The federal measurement science agency proposes four fundamental principles for judging how well AI decisions can be explained.
It’s a question that many of us encounter in childhood: “Why did you do that?” As artificial intelligence (AI) begins making more consequential decisions that affect our lives, we also want these machines to be capable of answering that simple yet profound question. After all, why else would we trust AI’s decisions?
This desire for satisfactory explanations has spurred scientists at the National Institute of Standards and Technology (NIST) to propose a set of principles by which we can judge how explainable AI’s decisions are. Their draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), is intended to stimulate a conversation about what we should expect of our decision-making devices.
The report is part of a broader NIST effort to help develop trustworthy AI systems. NIST’s foundational research aims to build trust in these systems by understanding their theoretical capabilities and limitations and by improving their accuracy, reliability, security, robustness and explainability, which is the focus of this latest publication.
The authors are requesting feedback on the draft from the public — and because the subject is a broad one, touching upon fields ranging from engineering and computer science to psychology and legal studies, they are hoping for a wide-ranging discussion.
“AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why,” said NIST electronic engineer Jonathon Phillips, one of the report’s authors. “But an explanation that would satisfy an engineer might not work for someone with a different background. So, we want to refine the draft with a diversity of perspectives and opinions.”
An understanding of the reasons behind the output of an AI system can benefit everyone the output touches. If an AI contributes to a loan approval decision, for example, this understanding might help a software designer improve the system. But the applicant might want insight into the AI’s reasoning as well, either to understand why she was turned down, or, if she was approved, to help her continue acting in ways that maintain her good credit rating.
According to the authors, the four principles for explainable AI are:
- AI systems should deliver accompanying evidence or reasons for all their outputs.
- Systems should provide explanations that are meaningful or understandable to individual users.
- Explanations should correctly reflect the system’s process for generating the output.
- The system should operate only under conditions for which it was designed or when it reaches sufficient confidence in its output. (The idea is that if a system has insufficient confidence in its decision, it should not supply a decision to the user — see the sketch below.)
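To make the principles concrete, here is a minimal, hypothetical Python sketch of a classifier wrapper that pairs every output with supporting evidence (principles 1 and 3) and declines to answer when its confidence is too low (principle 4). The model, feature names, and 0.8 threshold are illustrative assumptions, not anything prescribed by the NIST draft.

```python
# Hypothetical sketch: pair each prediction with evidence and abstain when
# confidence is low ("knowledge limits"). Names and threshold are illustrative,
# not taken from the NIST draft.
from dataclasses import dataclass
from typing import Optional

import numpy as np
from sklearn.linear_model import LogisticRegression


@dataclass
class ExplainedDecision:
    label: Optional[int]   # None means the system abstains
    confidence: float
    evidence: dict         # per-feature contributions backing the output


class ExplainableClassifier:
    def __init__(self, feature_names, confidence_threshold=0.8):
        self.model = LogisticRegression()
        self.feature_names = feature_names
        self.confidence_threshold = confidence_threshold

    def fit(self, X, y):
        self.model.fit(X, y)
        return self

    def decide(self, x):
        proba = self.model.predict_proba([x])[0]
        confidence = float(proba.max())
        # Principle 4 (knowledge limits): refuse to answer below the threshold.
        if confidence < self.confidence_threshold:
            return ExplainedDecision(label=None, confidence=confidence, evidence={})
        # Principles 1 and 3: report the per-feature contributions
        # (coefficient * feature value) that actually produced this output.
        contributions = self.model.coef_[0] * np.asarray(x)
        evidence = dict(zip(self.feature_names, contributions.round(3)))
        return ExplainedDecision(label=int(proba.argmax()),
                                 confidence=confidence,
                                 evidence=evidence)


if __name__ == "__main__":
    # Tiny synthetic demo so the sketch runs end to end.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] - X[:, 1] > 0).astype(int)
    clf = ExplainableClassifier(["income", "debt_ratio", "years_employed"]).fit(X, y)
    print(clf.decide(X[0]))
```

How the evidence is rendered for a particular audience — a software designer versus a loan applicant — is a presentation question that code alone cannot settle, which is exactly what the second principle is about.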
While these principles are straightforward enough on the surface, Phillips said that individual users often have varied criteria for judging an AI’s success at meeting them. For instance, the second principle — how meaningful the explanation is — can imply different things to different people, depending on their role and connection to the job the AI is doing.
“Think about Kirk and Spock and how each one talks,” Phillips said, referencing the Star Trek characters. “A doctor using an AI to help diagnose disease may only need Spock’s explanation of why the machine recommends a particular treatment, while the patient might be OK with less technical detail but want Kirk’s background on how it relates to his life.”
Phillips and his co-authors align their concepts of explainable AI with relevant previous work in artificial intelligence, but they also compare the demands for explainability we place on our machines to those we place on our fellow humans. Do we measure up to the standards we are asking of AI? After exploring how human decisions hold up in light of the report’s four principles, the authors conclude that — spoiler alert — we don’t.
“Human-produced explanations for our own choices and conclusions are largely unreliable,” they write, citing several examples. “Without conscious awareness, people incorporate irrelevant information into a variety of decisions from personality trait judgments to jury decisions.”
However, our awareness of this apparent double standard could eventually help us better understand our own decisions and create a safer, more transparent world.
“As we make advances in explainable AI, we may find that certain parts of AI systems are better able to meet societal expectations and goals than humans are,” said Phillips, whose past research indicates that collaborations between humans and AI can produce greater accuracy than either one working alone. “Understanding the explainability of both the AI system and the human opens the door to pursue implementations that incorporate the strengths of each.”
For the moment, Phillips said, the authors hope the comments they receive advance the conversation.
“I don’t think we know yet what the right benchmarks are for explainability,” he said. “At the end of the day we’re not trying to answer all these questions. We’re trying to flesh out the field so that discussions can be fruitful.”