Judging Books by Their Cover: The Impact of Facial Attractiveness on Humans and AI
Abstract: Human perception, memory, and decision-making are shaped by a wide range of cognitive biases and heuristics. Despite their pervasiveness, these biases are rarely accounted for in the design of human–AI systems. This thesis argues that the future of effective human–AI collaboration will require the computational modeling, systematic understanding, and, in some cases, the deliberate replication of cognitive biases. To support this vision, we first introduce a framework that organizes known cognitive biases into five categories, illustrated with representative examples and accompanied by open research questions concerning their role in human–AI interaction. The second part of the thesis focuses on one specific bias: the *Attractiveness Halo Effect*, i.e., the tendency to associate positive traits (e.g., intelligence or trustworthiness) with physically attractive individuals, even when attractiveness is an irrelevant cue. We examine the prevalence of this bias in the digital age by assessing how beauty filters influence human ratings of various traits. This large-scale study, the most extensive to date, yielded a dataset of 924 images with high-quality ground-truth ratings for attractiveness, intelligence, trustworthiness, sociability, happiness, and other variables. Building on these findings, we introduce the concept of *algorithmic lookism*, i.e., the tendency of algorithms, particularly AI systems, to exhibit attractiveness-based discrimination, an important yet underexplored phenomenon. We empirically investigate this effect in seven large open-source Multimodal Large Language Models (MLLMs), demonstrating that these systems employ attractiveness as a cue in decision-making. Further, we analyze synthetically generated faces and find that images produced using positive trait descriptors are significantly more likely to depict attractive individuals. We also assess the downstream impact of such AI-generated images on related tasks. Collectively, our results reveal critical pathways through which cognitive biases can propagate in AI systems, underscoring the need for ethical and equitable design principles in their real-world deployment.
Short bio: Aditya Gulati is a PhD student at ELLIS Alicante. His supervisors are Nuria Oliver (ELLIS Alicante), Bruno Lepri (Fondazione Bruno Kessler), and Miguel Angel Lozano (University of Alicante). He holds Bachelor's and Master's degrees in Computer Science Engineering from the International Institute of Information Technology Bangalore.
Presenter: Aditya Gulati
Date: 2025-09-12 11:00 (CEST)
Location: Sala Rafael Altamira, Planta 0, Sede Universitaria Ciudad de Alicante, Av. de Ramón y Cajal 4, 03001 Alicante, Spain
Online: https://si.ua.es/es/videostreaming/sede.html