Explanations From Intelligent Systems: Theoretical Foundations and Implications for Practice

Abstract
Information systems with an “intelligent” or “knowledge” component are now prevalent and include knowledge-based systems, decision support systems, intelligent agents, and knowledge management systems. These systems are in principle capable of explaining their reasoning or justifying their behavior. There appears to be a lack of understanding, however, of the benefits that can flow from explanation use and of how an explanation function should be constructed. Work with newer types of intelligent systems and with help functions for everyday systems, such as word processors, appears in many cases to neglect lessons learned in the past. This paper attempts to rectify this situation by drawing together the considerable body of work on the nature and use of explanations. Empirical studies, mainly of knowledge-based systems, are reviewed and linked to a theoretical base that combines a cognitive effort perspective, cognitive learning theory, and Toulmin’s model of argumentation. Conclusions drawn from the review have both practical and theoretical significance. Explanations are important to users in a number of circumstances: when they perceive an anomaly, when they want to learn, or when they need a specific piece of knowledge to participate properly in problem solving. Explanations, when suitably designed, have been shown to improve performance and learning and to result in more positive user perceptions of a system. The design is important, however, because it appears that explanations will not be used if the user has to exert “too much” effort to obtain them. Explanations should be provided automatically if this can be done relatively unobtrusively, or via hypertext links, and should be context-specific rather than generic. Explanations that conform to Toulmin’s model of argumentation, in that they provide adequate justification for the knowledge offered, should be more persuasive and lead to greater trust, agreement, satisfaction, and acceptance of the explanation, and possibly of the system as a whole.
Additional Details
Authors: Shirley Gregor and Izak Benbasat
Year: 1999
Volume: 23
Issue: 4
Keywords: explanation use, explanations, intelligent systems, knowledge-based systems, expert systems, intelligent agents, decision support systems, cognitive effort, cognitive learning
Pages: 497-530