How AI-Based Systems can Induce Reflections: The Case of AI-Augmented Diagnostic Work

Publication History

Received: March 21, 2020
Revised: March 6, 2021; December 23, 2021; July 19, 2022; December 6, 2022
Accepted: December 9, 2022
Published Online as Articles in Advance: Forthcoming
Published Online in Issue: Forthcoming

https://doi.org/10.25300/MISQ/2022/16773

Abstract

This paper addresses a thus far neglected dimension of human–Artificial Intelligence (AI) augmentation: machine-induced reflection. By developing a grounded-theory-informed model of machine-induced reflection, we contribute to the ongoing discussion of AI in Information Systems (IS) and to research on theories of reflection. In our multistage study, physicians used a Machine Learning (ML)-based Clinical Decision Support System (CDSS) for an X-ray diagnosis task, allowing us to examine whether and how this interaction stimulates reflective practice. By analyzing verbal protocols, performance metrics, and survey data, we developed an integrative theoretical foundation that explains how ML-based systems can stimulate reflective practice. Depending on whether individuals perceive conflict or agreement with the CDSS, they engage in more critical or shallower modes of reasoning, which in turn leads to different depths of reflection. By uncovering the process of machine-induced reflection, we offer IS research a new perspective on how AI-based systems can help individuals become more reflective, and consequently more effective, professionals. This perspective stands in stark contrast to the traditional, efficiency-focused view of ML-based Decision Support Systems and enriches theories of human–AI augmentation.

Additional Details
Authors Benjamin M. Abdel-Karim, Nicolas Pfeuffer, K. Valerie Carl, and Oliver Hinz
Year
Volume
Issue
Keywords Machine Learning, reflective practice, grounded theory, health information technology, physicians, verbal protocols
Page Numbers
Copyright © 2023 MISQ. All rights reserved.