Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
Authors: Nina Grgic-Hlaca, Elissa M. Redmiles, Krishna P. Gummadi & Adrian Weller (2018)
Article link: https://doi.org/10.1145/3178876.3186138
Abstract: As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior works on algorithmic fairness normatively prescribe how fair decisions ought to be made. In contrast, here, we descriptively survey users for how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose to understand why people perceive certain features as fair or unfair to be used in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality and reliability, as latent considerations that inform people's moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person's assessment of the eight latent properties of a feature in our exemplar scenario, we can accurately (> 85%) predict if the person will judge the use of the feature as fair. Our findings have important implications. At a high level, we show that people's unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low level, we find considerable disagreements in people's fairness judgments. We identify root causes of the disagreements, and note possible pathways to resolve them.
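To make the abstract's prediction claim concrete, below is a minimal sketch of the kind of classifier it describes: a model mapping one respondent's ratings of the eight latent properties to a binary fairness judgment. The Likert scale, the simulated data, and the choice of logistic regression are illustrative assumptions, not the authors' actual survey data or method.

```python
# Hypothetical sketch: predicting whether a respondent judges the use of a
# feature as fair, from their ratings of the eight latent properties
# (e.g., relevance, volitionality, reliability). All data here is simulated
# purely for illustration; it is NOT the paper's survey data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each row: one respondent's ratings of the eight properties for a feature,
# on an assumed 1-7 Likert scale. 576 respondents, matching the survey size.
n_respondents = 576
X = rng.integers(1, 8, size=(n_respondents, 8)).astype(float)

# Simulated binary label: a noisy linear function of the centered ratings,
# standing in for the respondent's actual fairness judgment.
w = rng.normal(size=8)
logits = (X - 4.0) @ w
y = (logits + rng.normal(scale=1.0, size=n_respondents) > 0).astype(int)

# One plausible classifier for this task; the paper reports > 85% accuracy
# on its real survey responses, which this toy setup does not reproduce.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```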
Presenter: Stratis Tsirtsis
Date: 2022-05-05 15:00 (CEST)
Online: https://bit.ly/ellis-hcml-rg