Computing Reviews

The seven tools of causal inference, with reflections on machine learning
Pearl J. Communications of the ACM 62(3): 54–60, 2019. Type: Article
Date Reviewed: 07/02/19

This is one of the most influential and eye-opening articles I’ve read in the last two or three years. The author, an ACM Turing Award recipient, makes clear distinctions between machine learning (ML), artificial intelligence (AI), and data science. To do so, the article focuses on the limitations of ML when it is equated with AI, where AI is understood as human-level intelligence.

The article comprises three pillars. The first discusses the cognitive skills of human intelligence needed to handle cause-effect relations, skills not usually addressed by traditional ML approaches, which focus on associative learning, observing data correlations, and fitting curves. Here the author presents the “ladder of causation,” which captures cognitive skills innate to human intelligence, the scientific mindset, and the nature of inquiries such as intervention and counterfactuals (asking “what if” questions, both real and hypothetical).

The second pillar addresses the components an inference engine must have to serve as an integral part of any AI-based system. The discussion of using symbolic reasoning and logic to formalize cause-effect relations in specific environments, a difficult task even for trained mathematicians, is compelling.

The third pillar discusses the seven tools and cognitive tasks necessary to achieve a truly human-level intelligent system, such as encoding causal assumptions, algorithmizing counterfactuals, and dealing with missing data.

The proposed inference engine, built on the structural causal model (SCM), answers questions in a layered structure in which each layer subsumes the one below. The first layer is devoted to ML and associative learning, while the two layers above it answer interventional “what if” questions (“If I take aspirin, will my headache be cured?”) and counterfactual hypothetical questions (“Was it the aspirin that stopped my headache?”). The main presupposition is that questions at one layer can be answered only when questions at the layer below can be answered.
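To make the gap between the first two layers concrete, here is a minimal Python sketch of a toy SCM. The aspirin/headache story echoes the review’s example, but the structural equations, probabilities, and variable names below are illustrative assumptions of mine, not taken from Pearl’s article. The point is that observing who takes aspirin (association) and forcing everyone to take it (intervention) yield different answers, because a hidden severity variable confounds the observational data.

```python
import random

# Toy structural causal model (SCM). The structural equations and
# probabilities here are illustrative assumptions, not Pearl's model.
random.seed(0)
N = 100_000

def sample_unit(do_aspirin=None):
    """One unit of the toy SCM. Passing do_aspirin=0 or 1 simulates the
    intervention do(Aspirin = a), severing the arrow from the hidden
    cause U to Aspirin."""
    u = random.random()                           # hidden headache severity
    a = do_aspirin if do_aspirin is not None else int(u > 0.5)
    h = int(u > 0.9) if a == 1 else int(u > 0.6)  # aspirin partially relieves
    return a, h

# Layer 1 (association): estimate P(H = 1 | A = 1) by passive observation.
# Confounded: people with severe headaches (high u) self-select into aspirin.
obs = [sample_unit() for _ in range(N)]
takers = [h for a, h in obs if a == 1]
p_seeing = sum(takers) / len(takers)

# Layer 2 (intervention): estimate P(H = 1 | do(A = 1)) by forcing treatment.
p_doing = sum(h for _, h in (sample_unit(do_aspirin=1) for _ in range(N))) / N

print(f"P(H=1 | A=1)     ~ {p_seeing:.2f}")  # ~0.20: takers still look worse off
print(f"P(H=1 | do(A=1)) ~ {p_doing:.2f}")   # ~0.10: forcing aspirin shows benefit
```

Answering the third-layer counterfactual (“Was it the aspirin that stopped my headache?”) would additionally require abduction: inferring the hidden u for a specific individual from what was observed, and only then re-running the model under the alternative action.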

Given the current AI hype, combined with ML success stories in certain applications, this article is strongly recommended for all ML developers and practitioners, as well as for data analysts whose work must deliver cause-effect relations to businesses, planners, and other stakeholders. It will also be of use to computer, social, and cognitive scientists wrestling with big AI research questions and the obstacles facing ML/AI approaches. For example, the proposed three-layered causal model can inform lifelong learning, as well as the adaptability and explainability of actions via transparency and testability. Until these issues are addressed, AI-based systems will be of limited usefulness in healthcare and other critical areas.

This article proposes elevating ML toward human-level intelligence and promotes explainable AI (XAI) as a key to humans’ trust in and acceptance of AI-based systems. Since this proposal presupposes that AI is defined as human-level intelligence, it remains to be seen whether the article’s claims and implications will hold up over time.

Reviewer: Epaminondas Kapetanios | Review #: CR146613 (1909-0346)
