Eliane Bucher

Nordic Centre for Internet and Society

Eliane Bucher is an Assistant Professor with the Nordic Centre for Internet and Society at BI Norwegian Business School in Oslo, Norway. She completed her doctorate in management at the University of St. Gallen, where she is also a lecturer in media and communications management. Her current research centers on digital labor practices and platforms – including algorithmic labor – as well as modes of collaborative and access-based consumption.

Abstract:
The Alienating Effect of Being Managed by Artificial Intelligence
Ever more decisions in economic life, both online and offline, are significantly shaped, enabled and predicted by machine-learning algorithms (e.g. Agrawal, Gans & Goldfarb, 2018a, 2018b; Russell & Norvig, 2016; O’Neil, 2016; Domingos, 2015; Pasquale, 2015). Whether it is being accepted for a car loan, receiving a suggestion for a business contact on a social networking site or being allotted a task on a crowd working platform, algorithms are working in the background to aid, approve and allot.

However, there is also a dark side. While these algorithms certainly have potential upsides, for example in that they may render (managerial) decision-making more rational, this contribution aims to shed light on the potential downsides by scrutinizing whether individuals subjected to automated decision-making “without humans in the loop” may experience alienation. Consider that a person treated unfairly generally has the right to face their accuser and to understand how a specific verdict or decision came to pass.

This right is difficult to exercise in the case of artificial intelligence. AI-facilitated decision-making processes are often marked, at least for the person concerned by these decisions, by varying degrees of opacity in terms of (1) which specific data sources and data points were considered [opaque input variables], (2) which specific calculations, weights and probabilities were applied [opaque algorithm], and (3) which specific outputs were translated into which decisions according to which thresholds [opaque output conditions]. In current practice, ‘opaque and invisible models are the rule, and clear ones very much the exception’ (O’Neil, 2016). Against this backdrop, we posit that the opacity linked to predictive AI creates considerable potential for alienation vis-à-vis an anonymous, non-human “accuser” (i.e. the decision-making entity).
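To make these three loci of opacity more tangible, consider the following minimal sketch of a hypothetical loan-scoring pipeline (in Python). The feature names, weights and threshold are invented purely for illustration and are not drawn from any of the systems cited above; real-world models are typically far more complex, but the three layers at which opacity can arise are the same.

```python
import math

# (1) Opaque input variables: the applicant rarely knows which data points
#     feed the model (feature names here are purely hypothetical).
applicant = {"income": 42_000, "age": 29, "zip_risk": 0.7, "past_defaults": 1}

# (2) Opaque algorithm: the weights and the scoring function are invisible
#     to the person concerned (values invented for illustration).
weights = {"income": 0.00003, "age": 0.01, "zip_risk": -2.5, "past_defaults": -1.8}
bias = 0.5

score = bias + sum(weights[k] * applicant[k] for k in weights)
repayment_probability = 1 / (1 + math.exp(-score))  # logistic link

# (3) Opaque output conditions: the cut-off that turns a probability into a
#     decision is likewise unknown (and unannounced) to the applicant.
THRESHOLD = 0.6
decision = "approve" if repayment_probability >= THRESHOLD else "reject"

print(f"p(repay)={repayment_probability:.2f} -> {decision}")
```

From the applicant’s perspective, only the final “reject” is visible; the inputs, the weighting and the threshold remain hidden, which is precisely where the alienation potential discussed below is located.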

In this contribution, we thus intend to examine how individuals – users, employees, customers and citizens – are affected by the advent of algorithmic decision-making mechanisms. More to the point, we explore the potential for alienation inherent in settings where the human element is no longer ‘in the loop’: How do individuals feel when they struggle to understand a decision made by a predictive model, when they have trouble connecting their actions in real life to the decisions taken by the AI, and when they seemingly have limited or no means to influence or appeal a decision? (see Table 1 for potential key questions). Here, we follow the social-psychological tradition, which conceives of alienation as a conscious subjective experience marked by individual perceptions such as powerlessness, normlessness or isolation (Seeman, 1959; Dean, 1961; Clark, 1959). In particular, we are interested in (1) how individuals make sense of decisions derived on the basis of predictive AI and (2) how they perceive their own role and agency in the face of predictive AI.

Table 1: Key Questions pertaining to the alienating effect of predictive AI

| Possible dimension of alienation (based on Seeman, 1959) | Potentially alienating effect (e.g. of an AI-based rejection of a car loan, an AI-based lowering of a credit score, or an AI-based display of an offensive web ad) | Proposed key question |
| --- | --- | --- |
| Meaninglessness | The outcome does not make sense to me | How do individuals make sense of decisions which are derived on the basis of predictive AI? |
| Normlessness | It is unclear how the outcome came to be | (as above) |
| Estrangement | The outcome does not adequately reflect my situation, actions or performance | (as above) |
| Powerlessness | I don’t have any means to influence the outcome | How do individuals perceive their own role and agency in the face of predictive AI? |
| Isolation | I cannot compare the current outcome to alternative outcomes in similar/other situations | (as above) |

The proposed contribution thus seeks to complement the ongoing normative discourse on AI with an individual, human-centered perspective. We intend to shed light on individuals’ sensemaking and perceived agency in an environment marked by predictive AI. In particular, we are interested in the extent to which individuals feel alienated vis-à-vis non-human agents in the decision-making process. Here, we look at the ethics of AI not just from a deontological perspective (which outcome is fair?) but also from a subjective perspective (which outcome is perceived as fair?).

It is conceivable that certain AI-based decisions fulfill objective criteria of fairness but do not match individual notions of fairness. A performance evaluation may thus be objectively fair in the sense that it holds up in comparison with a large population of similar cases. However, owing to individual tendencies to overestimate one’s own skill and performance in certain areas (i.e. ‘inflated self-assessment’ of logical reasoning or specific knowledge), it may still be perceived as subjectively unfair (e.g. Kruger & Dunning, 1999).

The reverse is also possible: individuals may perceive AI-governed decisions as fair when in reality those decisions suffer from systematic bias (e.g. mistaking correlation for causation) that has not been accounted for. A particularly striking example is cited by O’Neil (2016), where the length of prison sentences was determined (among other factors) by probabilistic algorithms that relied on demographic factors such as family ties, friends and neighborhood, which were of no legal relevance and should not have factored into the sentencing decision.

The proposed contribution will first outline the central dimensions of alienation in the context of predictive AI, with particular emphasis on the opacity of algorithmic decision-making processes and how it relates to alienation. Furthermore, we will scrutinize two aspects within the alienation theme, namely (1) the question of sensemaking, where we will look into objective and subjective perceptions of fairness, and (2) individual conceptualizations of role and agency vis-à-vis AI. Derived from this, we propose a framework based on transparency and embeddedness as a basis for AI implementation and governance. While our contribution is set up in the social-psychological tradition (Seeman, 1959; Dean, 1961; Durkheim, 1893), we will draw on insights from the structural tradition (e.g. Marx, 1978; Kalekin-Fishman & Langman, 2015) to interpret our results and set them in a broader societal context.

 

References
Agrawal, A., Gans, J. S., & Goldfarb, A. (2018a). Exploring the impact of artificial intelligence: Prediction versus judgment (NBER Working Paper No. 24626). National Bureau of Economic Research.
Agrawal, A., Gans, J. S., & Goldfarb, A. (2018b). Prediction machines: The simple economics of artificial intelligence. Boston: Harvard Business Review Press.
Dean, D. G. (1961). Alienation: Its meaning and measurement. American Sociological Review, 26(5), 753–758.
Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning machine will remake our world. Basic Books.
Durkheim, E. (1984 [1893]). The division of labour in society. Basingstoke: Macmillan.
Kalekin-Fishman, D., & Langman, L. (2015). Alienation: The critique that refuses to disappear. Current Sociology, 63(6), 916–933.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134.
Marx, K. (1978 [1844]). Economic and philosophical manuscripts of 1844. In R. C. Tucker (Ed.), The Marx-Engels reader (pp. 66–125). London: W. W. Norton & Company.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson Education Limited.
Seeman, M. (1959). On the meaning of alienation. American Sociological Review, 24, 783–791.