Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach


Publication History

Received: March 4, 2020
Revised: October 15, 2020; June 4, 2021; October 13, 2021
Accepted: November 23, 2021
Published Online as Articles in Advance: August 29, 2022
Published in Issue: September 1, 2022



We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system’s data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any proper subset of those inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general data-driven AI systems that can incorporate features with arbitrary data types and multiple predictive models, and (2) propose a heuristic procedure to find the most useful explanations depending on the context. We then contrast counterfactual explanations with methods that explain model predictions by weighting features according to their importance (e.g., Shapley additive explanations [SHAP], local interpretable model-agnostic explanations [LIME]) and present two fundamental reasons why we should carefully consider whether importance-weight explanations are well suited to explain system decisions. Specifically, we show that (1) features with a large importance weight for a model prediction may not affect the corresponding decision, and (2) importance weights are insufficient to communicate whether and how features influence decisions. We demonstrate this with several concise examples and three detailed case studies that compare the counterfactual approach with SHAP to illustrate conditions under which counterfactual explanations explain data-driven decisions better than importance weights.
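To make the definition concrete, the sketch below searches for a set of inputs whose change flips a binary decision and that is irreducible (no proper subset flips it). This is an illustrative brute-force search, not the heuristic procedure proposed in the paper; the decision function, feature names, and baseline replacement values are all hypothetical.

```python
from itertools import combinations

def flips_decision(decide, x, baseline, subset):
    """True if replacing the features in `subset` with their baseline
    values changes the system's decision for instance `x`."""
    x_cf = dict(x)
    for f in subset:
        x_cf[f] = baseline[f]
    return decide(x_cf) != decide(x)

def counterfactual_explanation(decide, x, baseline):
    """Search feature subsets from smallest to largest for one that flips
    the decision. Smallest-first search guarantees irreducibility: any
    proper subset that flipped the decision would have been found earlier."""
    features = list(x)
    for size in range(1, len(features) + 1):
        for subset in combinations(features, size):
            if flips_decision(decide, x, baseline, subset):
                return set(subset)
    return None  # no subset of input changes flips the decision

# Hypothetical example: a toy credit decision driven by two of three inputs.
decide = lambda x: x["income"] > 50 and x["debt"] < 20
x = {"income": 80, "debt": 10, "age": 30}
baseline = {"income": 40, "debt": 30, "age": 45}
print(counterfactual_explanation(decide, x, baseline))  # → {'income'}
```

Note that `age` never appears in an explanation here even if an importance-weighting method assigned it nonzero weight for the underlying prediction, which is the kind of divergence between prediction explanations and decision explanations the abstract describes.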

Additional Details
Authors: Carlos Fernández-Loría, Foster Provost, and Xintian Han
Year: 2022
Volume: 46
Issue: 3
Keywords: Explanations, system decisions, interpretable machine learning, explainable artificial intelligence
Page Numbers: 1635-1660
Copyright © 2023 MISQ. All rights reserved.