Computing Reviews

DARPA’s explainable artificial intelligence (XAI) program
Gunning D.  IUI 2019 (Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Rey, California, Mar 17-20, 2019) ii-ii, 2019. Type: Proceedings
Date Reviewed: 07/21/21

This very interesting survey talk provides extensive, high-level insight into the midterm progress of this advanced research endeavor.

David Gunning first presents the motivation for establishing the four-year program and for coining the acronym XAI, which stands for “explainable artificial intelligence.” XAI is by now a well-recognized term, denoting an entire field of scientific research that attempts to foster better symbiosis between humans and machines by understanding how AI reaches its conclusions; the central concern is whether we should trust a machine’s decisions.

The program includes the research of 11 teams, led by outstanding US universities, that propose different techniques to achieve explainability for two use cases of AI: 1) to explain the system’s recommendations to human analysts, and 2) to explain the system’s decisions in autonomous systems.

The teams’ approaches range from mathematical and engineering methods to cognitive science. The former include: heat map analysis to understand how image algorithms recognize objects (UC Berkeley); examining the convolutional layers of a neural network to identify objects within them that are recognizable by humans (MIT); combining visual and textual explanations using generative adversarial networks (UT Austin); training a system to play a game in order to observe autonomous decision making, and deriving finite state machines from conventional deep learning systems (Oregon); inserting models trained on particular subject matter into larger deep learning networks (Carnegie Mellon); detecting instance novelty for the user (Brown University); generating text with causal expressions to exemplify an explanation (Berkeley); and model induction (Rutgers). Meanwhile, cognitive scientists and philosophers explore gender-related trust issues and mine the psychological literature for insights into cognition.
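The heat map idea mentioned above can be illustrated with a minimal occlusion-sensitivity sketch: slide a blanking patch across an image and record how much a classifier’s score drops at each position; large drops mark the regions the model relies on. This is a generic illustration of the technique, not the team’s actual method, and the toy scoring function standing in for a trained model is hypothetical.

```python
def toy_score(img):
    # Hypothetical stand-in "classifier": mean brightness of a fixed
    # region of interest (rows/cols 8-15). A real XAI study would
    # probe a trained image-recognition model instead.
    vals = [img[r][c] for r in range(8, 16) for c in range(8, 16)]
    return sum(vals) / len(vals)

def occlusion_heatmap(img, score_fn, patch=4, fill=0.0):
    """Slide a patch x patch occluder over the image and record the
    score drop at each position, producing an explanatory heat map."""
    h, w = len(img), len(img[0])
    base = score_fn(img)
    heat = [[0.0] * (w - patch + 1) for _ in range(h - patch + 1)]
    for r in range(h - patch + 1):
        for c in range(w - patch + 1):
            occluded = [row[:] for row in img]  # copy, then blank the patch
            for rr in range(r, r + patch):
                for cc in range(c, c + patch):
                    occluded[rr][cc] = fill
            heat[r][c] = base - score_fn(occluded)
    return heat

# A 24x24 image whose "object" is a bright square at rows/cols 8-15.
img = [[1.0 if 8 <= r < 16 and 8 <= c < 16 else 0.0 for c in range(24)]
       for r in range(24)]
heat = occlusion_heatmap(img, toy_score)
best = max((v, r, c) for r, row in enumerate(heat) for c, v in enumerate(row))
print(best)  # the largest drop occurs where the patch overlaps the object
```

The resulting map peaks exactly where occlusion hides the bright square, which is how such analyses localize the evidence behind an object-recognition decision.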

System evaluation is carried out via novel strategies, for example, measuring user satisfaction with a cognitive psychology framework, having users predict a system’s accuracy, and building ontologies to determine when a system is right or wrong. One conclusion from the evaluation concerns the impact of good and bad explanations: it turns out that a bad explanation harms the system’s outcomes much more than a good explanation improves its performance.

Although the program is only halfway complete, the research already shows exciting evidence and promise. A lively discussion of the selection of approaches and the expected results follows. Covering a very intriguing topic in a detailed and well-structured presentation, this talk is well suited for scholars, students, and professionals interested in the future of AI.

Reviewer: Mariana Damova | Review #: CR147314

Reproduction in whole or in part without permission is prohibited. Copyright © 2021