Computing Reviews
DARPA’s explainable artificial intelligence (XAI) program
Gunning D. IUI 2019 (Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Rey, California, Mar 17-20, 2019) ii-ii. 2019. Type: Proceedings
Date Reviewed: Jul 21 2021

This very interesting survey talk provides extensive, high-level insight into the midterm progress of this advanced research endeavor.

David Gunning first presents the motivation for establishing the four-year program and the creation of the acronym XAI, which stands for "explainable artificial intelligence." XAI is an already well-recognized term, denoting an entire field of scientific research that attempts to foster better symbiosis between humans and machines by understanding how AI reaches certain conclusions; the main concern is whether we should trust a machine's decisions.

The program includes the research of 11 teams, led by outstanding US universities, that propose different techniques to achieve explainability for two use cases of AI: 1) to explain the system’s recommendations to human analysts, and 2) to explain the system’s decisions in autonomous systems.

Beyond mathematical and engineering methods, cognitive scientists and philosophers explore trust issues and the psychological literature for insights into cognition. The technical approaches include: heat map analysis to understand object recognition in image algorithms (UC Berkeley); studying the convolutional layers of a neural network to identify objects in them that are recognizable by humans (MIT); combining visual and textual explanations with generative adversarial networks (UT Austin); training a system to play a game to observe autonomous decision making, and deriving finite-state machines from conventional deep learning systems (Oregon); inserting models trained on particular subject matter into larger deep learning networks (Carnegie Mellon); detecting instance novelty for the user (Brown University); adding text generation to exemplify an explanation with causal expressions (Berkeley); and model induction (Rutgers).
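The heat map idea mentioned above can be illustrated with a minimal occlusion-sensitivity sketch. This is a toy example, not any team's actual method: a hypothetical `model_score` function stands in for a real classifier, and sliding an occluding patch over the input shows which regions the score depends on.

```python
# Minimal sketch of occlusion-based heat map analysis (illustrative only;
# the XAI teams' actual techniques are more sophisticated).
# Idea: occlude each region of the input, re-score, and record the score
# drop. Large drops mark regions the model relies on.

def model_score(image):
    """Toy 'classifier': score is the sum of the top-left 2x2 region,
    i.e., the model only 'looks at' that corner."""
    return sum(image[r][c] for r in range(2) for c in range(2))

def occlusion_heatmap(image, score_fn, patch=1):
    """Return a per-pixel importance map: base score minus occluded score."""
    base = score_fn(image)
    h, w = len(image), len(image[0])
    heat = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            occluded = [row[:] for row in image]  # copy, then zero a patch
            for rr in range(r, min(r + patch, h)):
                for cc in range(c, min(c + patch, w)):
                    occluded[rr][cc] = 0.0
            heat[r][c] = base - score_fn(occluded)
    return heat

image = [[1.0, 1.0, 0.0],
         [1.0, 1.0, 0.0],
         [0.0, 0.0, 0.0]]
heat = occlusion_heatmap(image, model_score)
# heat[0][0] == 1.0 (occluding a top-left pixel hurts the score);
# heat[2][2] == 0.0 (the model ignores the bottom-right corner).
```

The resulting map explains the toy model's decision in exactly the human-inspectable form the reviewed talk describes: regions with high values are what the system "attended to."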

System evaluation is carried out via novel strategies, for example, measuring user satisfaction based on a cognitive psychology framework, asking users to predict a system's accuracy, and building ontologies to determine when a system is right or wrong. One conclusion from the evaluation pertains to the impact of bad and good explanations: a bad explanation degrades the system's outcomes far more than a good explanation improves its performance.

Only halfway through, the program already demonstrates exciting evidence and promise. A vivid discussion of the selection of approaches and the expected results follows. An intriguing topic presented in a detailed and well-structured way, this talk is very good for scholars, students, and professionals interested in the future of AI.

Reviewer:  Mariana Damova Review #: CR147314 (2111-0274)
General (I.0 )
Human Factors (H.1.2 ... )
General (I.2.0 )
Other reviews under "General":
Walling up backdoors in intrusion detection systems
Bachl M., Hartl A., Fabini J., Zseby T.  Big-DAMA 2019 (Proceedings of the 3rd ACM CoNEXT Workshop on Big DAta, Machine Learning and Artificial Intelligence for Data Communication Networks, Orlando, FL,  Dec 9, 2019) 8-13, 2019. Type: Proceedings
Feb 4 2021
Research on text location and recognition in natural images with deep learning
Zhang P., Shi Z., Gao H.  ICAAI 2018 (Proceedings of the 2nd International Conference on Advances in Artificial Intelligence, Barcelona, Spain,  Oct 6-8, 2018) 1-6, 2018. Type: Proceedings
Jan 14 2021
Linked open knowledge organization systems: definition of a method for reducing the traversing
Chicaiza J., Tapia-Leon M., Piedra N., Lopez-Vargas J., Tovar-Caro E.  APPIS 2019 (Proceedings of the 2nd International Conference on Applications of Intelligent Systems, Las Palmas de Gran Canaria, Spain,  Jan 7-9, 2019) 1-6, 2019. Type: Proceedings
Jan 5 2021
