Leveraging Large Language Models to Measure Gender Bias in Gendered Languages
Authors: Derner, E., Sansalvador de la Fuente, S., Gutiérrez, Y., Moreda, P., Oliver, N.
External link: https://arxiv.org/abs/2406.13677
Publication: arXiv:2406.13677, 2024
DOI: https://doi.org/10.48550/arXiv.2406.13677
Gender bias in text corpora used in various natural language processing (NLP) contexts, such as for training large language models (LLMs), can lead to the perpetuation and amplification of societal inequalities. This is particularly pronounced in gendered languages like Spanish or French, where grammatical structures inherently encode gender, making bias analysis more challenging. Existing methods designed for English are inadequate for this task due to the intrinsic linguistic differences between English and gendered languages. This paper introduces a novel methodology that leverages the contextual understanding capabilities of LLMs to quantitatively analyze gender representation in Spanish corpora. By using LLMs to identify and classify gendered nouns and pronouns according to whether they refer to human entities, our approach provides a nuanced analysis of gender bias. We empirically validate our method on four widely used benchmark datasets, uncovering significant gender disparities with a male-to-female ratio ranging from 4:1 to 6:1. These findings demonstrate the value of our methodology for bias quantification in gendered languages and suggest its applicability across NLP tasks, contributing to the development of more equitable language technologies.
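The core of the approach described above is to have an LLM label each gendered noun or pronoun as referring to a male person, a female person, or no person at all, and then aggregate those labels into a male-to-female ratio. The sketch below illustrates only that aggregation step; the `STUB_LLM_LABELS` lookup table and all function names are hypothetical stand-ins for the paper's LLM classification call, not its actual implementation.

```python
from collections import Counter

# Hypothetical stand-in for the LLM classifier: in the paper's pipeline, an
# LLM labels each Spanish noun/pronoun in context. Here a tiny lookup table
# plays that role so the counting logic can run self-contained.
STUB_LLM_LABELS = {
    "profesor": "male", "profesora": "female",
    "él": "male", "ella": "female",
    "mesa": "non-human",  # grammatically feminine, but not a person
}

def classify_token(token: str) -> str:
    """Stand-in for the LLM call: label a token as male / female / non-human."""
    return STUB_LLM_LABELS.get(token.lower(), "non-human")

def gender_counts(tokens):
    """Tally human-referring masculine vs. feminine tokens in a corpus sample."""
    counts = Counter(classify_token(t) for t in tokens)
    male, female = counts["male"], counts["female"]
    ratio = male / female if female else float("inf")
    return male, female, ratio

male, female, ratio = gender_counts(
    ["El", "profesor", "y", "ella", "mesa", "él"]
)
# Only "profesor" and "él" count as male, "ella" as female;
# "mesa" is grammatically gendered but excluded as non-human.
```

The key design point, mirroring the paper's motivation, is that grammatical gender alone is not enough in Spanish: a word like "mesa" is feminine but must be excluded because it does not refer to a person, which is exactly the contextual judgment delegated to the LLM.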