Breaking Bad Bias: Gender Stereotypes in Generative Models

Abstract: Generating images from textual descriptions requires the generative model to make implicit assumptions about the output scene that are not explicitly specified in the input prompt. These assumptions can reinforce unfair stereotypes related to gender, race, or socioeconomic status. However, measuring and quantifying these social biases in generated images remains a major challenge. In this talk, we will explore methods for measuring gender bias in text-to-image models, particularly Stable Diffusion, and discuss how the generated images, when used to train future computer vision models, affect bias in downstream tasks.
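As background for the kind of measurement the abstract refers to, the sketch below illustrates one common way to probe gender associations in a text-to-image model: generate many images for a gender-neutral occupation prompt and use a zero-shot CLIP classifier as a (noisy) proxy for perceived gender. This is only an illustrative example, not the speaker's method; the checkpoint names, the prompt, and the reliance on the Hugging Face diffusers and transformers libraries are assumptions.

```python
# Illustrative sketch (not the speaker's method): estimate the gender
# distribution of images generated for a gender-neutral prompt, using
# zero-shot CLIP as a rough proxy for perceived gender.
import torch
from collections import Counter
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Text-to-image model under study (assumed checkpoint).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=dtype
).to(device)

# Zero-shot classifier used as a noisy proxy for perceived gender.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
labels = ["a photo of a man", "a photo of a woman"]

prompt = "a photo of a doctor"  # gender-neutral occupation prompt (assumed)
counts = Counter()
for seed in range(20):  # more samples give a more stable estimate
    generator = torch.Generator(device=device).manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    inputs = processor(
        text=labels, images=image, return_tensors="pt", padding=True
    ).to(device)
    with torch.no_grad():
        probs = clip(**inputs).logits_per_image.softmax(dim=-1)[0]
    counts[labels[int(probs.argmax())]] += 1

# A heavily skewed distribution for a neutral prompt suggests gender bias.
print(counts)
```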

Short bio: Noa Garcia is an Associate Professor at the Institute for Advanced Co-Creation Studies, Osaka University, Japan. Originally from Barcelona, she moved to Japan in 2018, first as a postdoctoral researcher and then as a specially-appointed assistant professor at the Institute for Datability Science. She completed her Ph.D. in multimodal retrieval and instance-level recognition at Aston University, United Kingdom, after earning her degree in Telecommunications Engineering from Universitat Politècnica de Catalunya, Barcelona. Her current research interests lie at the intersection of computer vision, natural language processing, fairness, and art. She is an active member of the computer vision community, having co-organized several workshops and international events, and regularly publishes at top conferences such as CVPR, ICCV, ECCV, and NeurIPS.

Presenter: Assoc. Prof. Noa García, Computer Vision, Osaka University (Osaka, Japan)

Date: 2024-09-27 11:30 (CEST)

Location: Distrito Digital 5, Muelle Pte., 5 – Edificio D, 03001 Alicante, Spain

Online: https://teams.microsoft.com/l/meetup-join/19%3Ameeting_M2I2NDg4YTEtNmNiMy00MjM5LWIxNDktZGY3NzBiZTQ4Zjk1%40thread.v2/0?context=%7B%22Tid%22%3A%22bb758050-7db8-403e-bffa-5643855efdb1%22%2C%22Oid%22%3A%22f63862bc-031d-4058-8533-000ceb056c4c%22%2C%22MessageId%22%3A%220%22%7D