Failures of Fairness in Automation Require a Deeper Understanding of Human–ML Augmentation

Abstract
Machine learning (ML) tools reduce the costs of performing repetitive, time-consuming tasks yet risk introducing systematic unfairness into organizational processes. Automated approaches to achieving fairness often fail in complex situations, leading some researchers to suggest that human augmentation of ML tools is necessary. However, our current understanding of human–ML augmentation remains limited. In this paper, we argue that the Information Systems (IS) discipline needs a more sophisticated view of, and research into, human–ML augmentation. We introduce a typology of augmentation for fairness consisting of four quadrants: reactive oversight, proactive oversight, informed reliance, and supervised reliance. For each quadrant, we identify significant intersections with previous IS research and distinct managerial approaches to fairness. Several potential research questions emerge from fundamental differences between ML tools, which are trained on data, and traditional IS, which are built with code. IS researchers may discover that these differences undermine some of the fundamental assumptions on which classic IS theories and concepts rest. In light of these differences, ML may require massive rethinking of significant portions of the corpus of IS research, representing an exciting frontier for human–ML augmentation research that IS researchers should embrace in the years ahead.
Posted Online: June 24, 2021
Additional Details
Authors Mike H. M. Teodorescu, Lily Morse, Yazeed Awwad, and Gerald C. Kane
Year 2021
Volume 45
Issue 3b
Keywords Fairness, machine learning, augmentation, automation, artificial intelligence
Page Numbers 1483-1499
DOI 10.25300/MISQ/2021/16535