Leveraging Large Language Models to Measure Gender Representation Bias in Gendered Language Corpora
Authors: Derner, E., Sansalvador de la Fuente, S., Gutiérrez, Y., Moreda, P., Oliver, N.
External link: https://arxiv.org/abs/2406.13677
Publication: Collaborative AI and Modeling of Humans (CAIHu) Bridge Program at AAAI 2025
DOI: https://doi.org/10.48550/arXiv.2406.13677
Language corpora are used in a variety of natural language processing (NLP) tasks, including the training of large language models (LLMs). Biases present in text corpora, reflecting sociolinguistic patterns, can lead to the perpetuation and amplification of societal inequalities. Gender bias is particularly pronounced in gendered languages like Spanish or French, where grammatical structures inherently encode gender, making bias analysis more challenging. A first step in quantifying gender bias in text is to compute biases in gender representation, i.e., differences in the prevalence of words referring to males vs. females. Existing methods to measure gender representation bias in text corpora have mainly been proposed for English and do not generalize to gendered languages due to intrinsic linguistic differences. This paper introduces a novel methodology that leverages the contextual understanding capabilities of LLMs to quantitatively measure gender representation bias in Spanish corpora. By using LLMs to identify gendered nouns and pronouns and classify them according to whether they refer to human entities, our approach provides a robust analysis of gender representation bias in gendered languages. We empirically validate our method on four widely used benchmark datasets, uncovering significant gender prevalence disparities with male-to-female ratios ranging from 4:1 to 6:1. These findings highlight the presence of gender biases in LLM training data, which can, in turn, adversely affect human-AI interactions. Our methodology contributes to the development of more equitable language technologies, aiming to reduce biases in LLMs and improve fairness in human-LLM collaboration.
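To make the approach concrete, the sketch below illustrates the kind of LLM-based classification step the abstract describes: asking a model to label Spanish words as masculine or feminine human referents and aggregating the counts into a male-to-female ratio. This is not the authors' pipeline; the prompt wording, the M/F/N label set, the model name, and the use of the `openai` client are illustrative assumptions, and the paper's actual prompts, corpus chunking, and aggregation may differ.

```python
# Illustrative sketch of an LLM-based gender representation measurement,
# NOT the authors' actual prompts or pipeline. The prompt, labels, and
# model name are assumptions made for this example.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "For each Spanish word below, answer with one label per line:\n"
    "M if it is a masculine noun or pronoun referring to a human,\n"
    "F if it is a feminine noun or pronoun referring to a human,\n"
    "N otherwise (not gendered, or not referring to a human).\n\n"
    "Words:\n{words}"
)


def classify_words(words: list[str], model: str = "gpt-4o-mini") -> list[str]:
    """Ask the LLM to label each word as M, F, or N (human referents only)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(words="\n".join(words))}],
        temperature=0,  # deterministic labels make runs reproducible
    )
    return response.choices[0].message.content.strip().splitlines()


def male_to_female_ratio(words: list[str]) -> float:
    """Aggregate labels into a male-to-female representation ratio."""
    counts = Counter(classify_words(words))
    return counts["M"] / max(counts["F"], 1)  # guard against division by zero


# Toy example; a real corpus would be tokenized and sent in batches.
print(male_to_female_ratio(["médico", "enfermera", "ellos", "mesa", "abuela"]))
```

Setting `temperature=0` is one plausible design choice here: classification should be deterministic so that repeated runs over the same corpus yield the same representation ratio.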