Uncovering an Attractiveness Bias in Multimodal Large Language Models: A Case Study with LLaVA

Authors: Gulati, A., D'Incà, M., Sebe, N., Lepri, B., Oliver, N.

External link: https://arxiv.org/abs/2504.16104
Publication: arXiv preprint arXiv:2504.16104, 2025

Physical attractiveness matters. It has been shown to influence human perception and decision-making, often leading to biased judgments that favor those deemed attractive, a phenomenon known as the “attractiveness halo effect”. While this effect has been extensively studied in human judgments across a broad set of domains, including hiring, judicial sentencing, and credit granting, the role that attractiveness plays in the assessments and decisions made by multimodal large language models (MLLMs) remains unknown. To address this gap, we conduct an empirical study using 91 socially relevant scenarios and a diverse dataset of 924 face images, corresponding to 462 individuals, each shown both with and without a beauty filter applied, evaluated on LLaVA, a state-of-the-art, open-source MLLM. Our analysis reveals that attractiveness impacts the decisions made by the MLLM in over 80% of the scenarios, demonstrating a substantial bias in model behavior that we refer to as an attractiveness bias. As in humans, we find empirical evidence of the attractiveness halo effect: the MLLM is more likely to attribute positive traits, such as intelligence or confidence, to more attractive individuals. Furthermore, we uncover gender, age, and race biases in 83%, 73%, and 57% of the scenarios, respectively, each modulated by attractiveness, particularly in the case of gender, highlighting the intersectional nature of the attractiveness bias. Our findings suggest that societal stereotypes and cultural norms intersect with perceptions of attractiveness, amplifying or mitigating this bias in multimodal generative AI models in complex ways. Our work emphasizes the need to account for intersectionality in algorithmic bias detection and mitigation efforts and underscores the challenges of addressing bias in modern multimodal large language models.
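To make the evaluation setup concrete, below is a minimal sketch of how a single scenario query could be posed to LLaVA for a pair of filtered/unfiltered images, using the Hugging Face transformers library. The model checkpoint, scenario wording, and file names here are illustrative assumptions; the paper's exact prompts and pipeline are not reproduced.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Hypothetical choice of checkpoint; the paper evaluates LLaVA,
# but the specific variant is not stated in this abstract.
model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def ask(image_path: str, question: str) -> str:
    """Pose one socially relevant scenario question about one face image."""
    image = Image.open(image_path)
    prompt = f"USER: <image>\n{question} ASSISTANT:"
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    return processor.decode(output[0], skip_special_tokens=True)

# Compare the model's answer for the same individual with and without
# a beauty filter (file names are hypothetical).
scenario = "Would you hire this person for a managerial role? Answer yes or no."
for img in ["person_042.jpg", "person_042_filtered.jpg"]:
    print(img, "->", ask(img, scenario))
```

Repeating such paired queries across many scenarios and individuals, and comparing the answer distributions between the filtered and unfiltered conditions, is one way the attractiveness effect reported above could be quantified.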