Avoiding an Oppressive Future of Machine Learning: A Design Theory for Emancipatory Assistants
Widespread use of machine learning (ML) systems could result in an oppressive future of ubiquitous monitoring and behavior control that, for dialogic purposes, we call “Informania.” This dystopian future stems from the inherent design of ML systems: they are trained on data rather than built with explicit code. To avoid this oppressive future, we develop the concept of an emancipatory assistant (EA), an ML system that engages with human users to help them understand and enact emancipatory outcomes amidst the oppressive environment of Informania. Using emancipatory pedagogy as a kernel theory, we develop two sets of design principles: one for the near-term future and one for the far-term future. Designers optimize the EA for emancipatory outcomes for an individual user; the EA protects the user from Informania’s oppression by engaging in an adversarial relationship with its oppressive ML platforms when necessary. These principles should encourage IS researchers to enlarge the range of possibilities for responding to the influx of ML systems. Given the fusion of social and technical expertise that IS research embodies, we encourage other IS researchers to theorize boldly about the long-term consequences of emerging technologies on society and to potentially change their trajectory.
|Authors|Gerald C. Kane, Amber G. Young, Ann Majchrzak, and Sam Ransbotham|
|Keywords|Machine learning, artificial intelligence, design theory, critical theory, next generation, oppression, emancipation, pedagogy, emerging technologies, socio-technical systems, affordances, future forecasting, freedom, social inclusion, algorithm, agency|
|Page Numbers|371-396; DOI: 10.25300/MISQ/2021/1578|