NIST scientists have proposed four principles for judging how explainable an artificial intelligence’s decisions are.
Technical agency proposes four fundamental principles for judging how well AI decisions can be explained
It’s a question that many of us encounter in childhood: “Why did you do that?” As artificial intelligence (AI) begins making more consequential decisions that affect our lives, we also want these machines to be capable of answering that simple yet profound question. After all, why else would we trust AI’s decisions?
This desire for satisfactory explanations has spurred scientists at the National Institute of Standards and Technology (NIST) to propose a set of principles by which we can judge how explainable AI’s decisions are. Their draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), is intended to stimulate a conversation about what we should expect of our decision-making devices.
The report is part of a broader NIST effort to help develop trustworthy AI systems. NIST’s foundational research aims to build trust in these systems by understanding their theoretical capabilities and limitations and by improving their accuracy, reliability, security, robustness and explainability, which is the focus of this latest publication.
The authors are requesting feedback on the draft from the public — and because the subject is a broad one, touching upon fields ranging from engineering and computer science to psychology and legal studies, they are hoping for a wide-ranging discussion.
“AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why,” said NIST electronic engineer Jonathon Phillips, one of the report’s authors. “But an explanation that would satisfy an engineer might not work for someone with a different background. So, we want to refine the draft with a diversity of perspectives and opinions.”
An understanding of the reasons behind the output of an AI system can benefit everyone the output touches. If an AI contributes to a loan approval decision, for example, this understanding might help a software designer improve the system. But the applicant might want insight into the AI’s reasoning as well, either to understand why she was turned down, or, if she was approved, to help her continue acting in ways that maintain her good credit rating.
According to the authors, the four principles for explainable AI are:
- AI systems should deliver accompanying evidence or reasons for all their outputs.
- Systems should provide explanations that are meaningful or understandable to individual users.
- Explanations should correctly reflect the system’s process for generating the output.
- Systems should operate only under conditions for which they were designed, or when they reach sufficient confidence in their output. (The idea is that if a system has insufficient confidence in its decision, it should not supply a decision to the user.)
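The fourth principle, often called knowledge limits, lends itself to a concrete illustration. The following is a minimal sketch, not drawn from the NIST draft, of a decision function that abstains rather than returning an answer when its confidence falls below a threshold; all names and the 0.9 cutoff are illustrative assumptions.

```python
# Sketch of the "knowledge limits" idea: withhold a decision when the
# system's confidence is too low to stand behind the output.
# The function name, labels, and threshold are hypothetical examples.

def gated_decision(probabilities, labels, threshold=0.9):
    """Return (label, explanation); label is None if the system abstains."""
    confidence = max(probabilities)
    label = labels[probabilities.index(confidence)]
    if confidence < threshold:
        # Insufficient confidence: decline to answer rather than supply
        # a decision the system cannot justify.
        return None, f"abstained: confidence {confidence:.2f} below {threshold}"
    return label, f"chose {label!r} with confidence {confidence:.2f}"

print(gated_decision([0.95, 0.05], ["approve", "deny"]))
print(gated_decision([0.55, 0.45], ["approve", "deny"]))
```

In this sketch the explanation string doubles as the "accompanying evidence" called for by the first principle: the user sees either the chosen label and its confidence, or the reason the system declined to decide.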
While these principles are straightforward enough on the surface, Phillips said that individual users often have varied criteria for judging an AI’s success at meeting them. For instance, the second principle — how meaningful the explanation is — can imply different things to different people, depending on their role and connection to the job the AI is doing.
“Think about Kirk and Spock and how each one talks,” Phillips said, referencing the Star Trek characters. “A doctor using an AI to help diagnose disease may only need Spock’s explanation of why the machine recommends a particular treatment, while the patient might be OK with less technical detail but want Kirk’s background on how it relates to his life.”
Phillips and his co-authors align their concepts of explainable AI to relevant previous work in artificial intelligence, but they also compare the demands for explainability we place on our machines to those we place on our fellow humans. Do we measure up to the standards we are asking of AI? After exploring how human decisions hold up in light of the report’s four principles, the authors conclude that — spoiler alert — we don’t.
“Human-produced explanations for our own choices and conclusions are largely unreliable,” they write, citing several examples. “Without conscious awareness, people incorporate irrelevant information into a variety of decisions from personality trait judgments to jury decisions.”
However, our awareness of this apparent double standard could eventually help us better understand our own decisions and create a safer, more transparent world.
“As we make advances in explainable AI, we may find that certain parts of AI systems are better able to meet societal expectations and goals than humans are,” said Phillips, whose past research indicates that collaborations between humans and AI can produce greater accuracy than either one working alone. “Understanding the explainability of both the AI system and the human opens the door to pursue implementations that incorporate the strengths of each.”
For the moment, Phillips said, the authors hope the comments they receive advance the conversation.
“I don’t think we know yet what the right benchmarks are for explainability,” he said. “At the end of the day we’re not trying to answer all these questions. We’re trying to flesh out the field so that discussions can be fruitful.”