Thesis Defense Presentations
The PhD students of ELLIS Alicante invite you to attend the final presentations of their work at our lab. The thesis defenses are streamed online and are also open to the public in person.
Events
-
A Sociotechnical Approach to Trustworthy AI: from Algorithms to Regulation
Abstract: This thesis presents a sociotechnical framework for implementing Trustworthy Artificial Intelligence (TAI), integrating technical, human, and regulatory aspects. It emphasizes the importance of aligning algorithmic development with societal needs and legal standards throughout the AI lifecycle. First, the thesis focuses on algorithmic fairness and proposes two novel methods to mitigate algorithmic discrimination in decision-making (FairShap) and in social networks (ERG). Next, it explores the challenge of provably optimal human-AI complementarity in a resource allocation task. Finally, it investigates the interplay between AI and Spanish labor legislation. The thesis concludes that trustworthiness in AI systems requires a holistic understanding of data, algorithms, institutions, and regulatory factors.
Short bio: Adrián Arnaiz Rodríguez is a PhD student at ELLIS Alicante. His PhD topic is Trustworthy AI and Graph Neural Networks. His supervisors are Nuria Oliver (ELLIS Alicante) and Miguel Angel Lozano (University of Alicante). He visited MPI-SWS, his secondary ELLIS location, advised by Manuel Gómez Rodríguez. He holds an AI MSc degree from the Open University of Catalonia and a BSc from the University of Burgos.
Presenter: Adrián Arnaiz Rodríguez
Date: 2025-09-26 10:30 (CEST)
Location: SALÓN DE GRADOS RECTOR RAMÓN MARTÍN MATEO - FACULTAD DE DERECHO - UNIVERSIDAD DE ALICANTE, Campus de la Universidad de Alicante, Alicante 03690, Alicante ES
Online: Meeting link
-
Human aesthetics under the representational power of Artificial Intelligence
Abstract: This thesis investigates how Artificial Intelligence (AI)-based technologies mediate human representation in contemporary visual culture. The work is grounded in Don Ihde's philosophical framework, which analyzes the distinct modalities (embodiment, hermeneutic, and alterity) through which humans relate to technologies. Through a combination of technical contributions and critical reflections on the socio-ethical and artistic dimensions of these systems, the thesis offers an interdisciplinary exploration of AI's role in visual culture. First, we examine augmented reality (AR) beauty filters as an embodiment relation, where the technology becomes transparent and modifies how individuals perceive and present their own faces. By introducing novel datasets (FairBeauty and B-LFW) and the OpenFilter tool, we demonstrate how these filters propagate Eurocentric beauty standards, subtly reshaping identity in ways that reinforce historical and racialized aesthetics. Then, we address the algorithmic censorship of artistic nudity as a hermeneutic relation, focusing on how moderation systems interpret and assess the obscenity of the human body. Through a mixed-methods approach that combines qualitative and quantitative contributions, this chapter reveals the limitations of current moderation technologies and advocates for greater transparency, cultural sensitivity, and accountability in content moderation governance. Finally, we explore text-to-image (T2I) generative systems through the lens of the alterity relation, highlighting how users interact with technologies that produce outputs perceived as novel and autonomous. By auditing leading T2I platforms and introducing ImageSet2Text, a new method for summarizing image sets via vision-language models, we uncover stylistic patterns and cultural biases embedded in AI-generated depictions of humans.
Short bio: Piera Riccio is an ELLIS PhD student. She holds a bachelor's degree in Cinema and Media Engineering (2018, Politecnico di Torino), a Master's degree in ICT for Smart Societies (2021, Politecnico di Torino), and a Master's degree in Data Science and Engineering (2021, Télécom Paris – EURECOM). In 2020, she was an affiliate at Metalab (at) Harvard. In 2021, she was a research assistant at Oslo Metropolitan University. In her research, she is interested in exploring the cultural, social, and artistic possibilities of AI. In her PhD, she focuses on the effects that social media have on the lives of women and on how women are perceived in the social media cultural ecosystem. Her supervisors are Nuria Oliver (ELLIS Alicante), Thomas Hofmann (ETH Zurich) and Francisco Escolano (Universidad de Alicante).
Presenter: Piera Riccio
Date: 2025-09-22 11:00 (CEST)
Location: Sala Rafael Altamira - Planta 0. Sede Universitaria Ciudad de Alicante, Av. de Ramón y Cajal, 4, Alicante 03001, Alicante ES
-
Relaxing Core Assumptions: the Impact of Data, Model and Participation Heterogeneity on Performance, Privacy and Fairness in Federated Learning
Abstract: Federated Learning (FL) enables decentralized training of machine learning models on distributed data while preserving privacy by design. An FL design consists of clients training models on private data and a central server aggregating a global model based on the consensus among clients. In an ideal scenario, the training data and computing resources are independently and identically distributed (i.i.d.) among clients; therefore, clients can work together in agreement to reach a global optimum. However, in realistic FL settings, heterogeneity arises between clients in terms of both data and resource availability. This research focuses on such scenarios, with a special interest in how the server can adapt the aggregation method from simple averaging to address the clients' diversity.
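For readers unfamiliar with the baseline aggregation mentioned above, the following is a minimal, illustrative sketch of FedAvg-style weighted averaging, where the server combines client parameters in proportion to local dataset size. It is not the thesis's own method, and all names and values in it are hypothetical.

# Minimal sketch of FedAvg-style aggregation (illustrative only):
# each client trains locally; the server averages the resulting
# parameters weighted by the size of each client's local dataset.
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of client parameter vectors.

    client_params: list of 1-D numpy arrays, one per client.
    client_sizes:  list of local dataset sizes (aggregation weights).
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                  # normalize weights to sum to 1
    stacked = np.stack(client_params)         # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Hypothetical usage: three clients with heterogeneous data volumes.
params = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.1, 1.2])]
sizes = [100, 300, 50]
global_params = fedavg_aggregate(params, sizes)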
The first research direction reviews existing client selection methods and proposes a novel taxonomy of FL methods in which the server actively manages client participation to achieve a global objective with respect to client heterogeneity. This research direction is presented in [NLQO22].
The next chapter focuses on model heterogeneity as an inclusion policy for low-resource clients. It investigates the privacy implications of reducing model complexity for resource-constrained clients. This work has been presented in [NLQO25].
The final area provides a solution to the data heterogeneity problem with distribution-aware client selection. Applying this solution can mitigate spurious correlations and improve algorithmic fairness in FL. This research line has been described in [NFN+25].
[NLQO22] Németh, G. D., Lozano, M. A., Quadrianto, N., and Oliver, N. (2022). A Snapshot of the Frontiers of Client Selection in Federated Learning. Transactions on Machine Learning Research.
[NLQO25] Németh, G. D., Lozano, M. A., Quadrianto, N., and Oliver, N. (2025). Privacy and Accuracy Implications of Model Complexity and Integration in Heterogeneous Federated Learning. IEEE Access, 13, 40258-40274.
[NFN+25] Németh, G. D., Fani, E., Ng, Y. J., Caputo, B., Lozano, M. A., Oliver, N., and Quadrianto, N. (2025). FedDiverse: Tackling Data Heterogeneity in Federated Learning with Diversity-Driven Client Selection. FLTA 2025.
Short bio: Gergely Dániel Németh is a PhD student at ELLIS Alicante. His PhD topic is Privacy and Fairness in Federated Learning. His supervisors are Nuria Oliver (ELLIS Alicante), Miguel Angel Lozano (University of Alicante) and Novi Quadrianto (University of Sussex). He holds a CS MSc degree from The University of Manchester and a BSc from the Budapest University of Technology and Economics. His university research focused on Natural Language Processing, and he also worked on Computer Vision at a Hungarian startup.
Presenter: Gergely D. Németh
Date: 2025-09-12 11:00 (CEST)
Location: Salón 0103PB001 - SF/CONFERENCIAS, EDIFICIO SAN FERNANDO, C. San Fernando, 40, Alicante 03001, Alicante ES
Online: Meeting link