Lookism: The overlooked bias in computer vision

Authors: Gulati, A., Lepri, B., Oliver, N.

External link: https://doi.org/10.48550/arXiv.2408.11448
Publication: Fairness and ethics towards transparent AI: facing the chalLEnge through model Debiasing (FAILED) - Workshop at ECCV 2024, 2024
DOI: 10.48550/arXiv.2408.11448

In recent years, significant advances in computer vision have led to the widespread deployment of image recognition and generation systems in socially relevant applications, from hiring to security screening. However, the prevalence of biases within these systems has raised significant ethical and social concerns. The most extensively studied biases in this context relate to gender, race, and age. Yet other biases are equally pervasive and harmful, such as lookism, i.e., the preferential treatment of individuals based on their physical appearance. Lookism remains under-explored in computer vision, yet it can have profound implications, not only by perpetuating harmful societal stereotypes but also by undermining the fairness and inclusivity of AI technologies. This paper therefore advocates for the systematic study of lookism as a critical bias in computer vision models. Through a comprehensive review of the existing literature, we identify three areas of intersection between lookism and computer vision, which we illustrate by means of examples and a user study. We call for an interdisciplinary approach to address lookism, urging researchers, developers, and policymakers to prioritize the development of equitable computer vision systems that respect and reflect the diversity of human appearances.