A Sociotechnical Approach to Trustworthy AI: from Algorithms to Regulation
Abstract: This thesis proposes a sociotechnical framework for the effective implementation of Trustworthy Artificial Intelligence (TAI), addressing the technical, human, and regulatory dimensions of AI-induced harms. Rather than treating TAI as a purely technical goal, we emphasize its interdisciplinary nature, aligning algorithmic development with societal needs and legal norms throughout the AI system life-cycle. Our approach focuses particularly on harms and investigates how they can be mitigated through algorithmic design and effective development and implementation of regulations.
From a technical standpoint, we focus on harms derived from discrimination in algorithmic decisions. We first introduce FairShap, a novel data valuation method that quantifies the contribution of each individual training example to group fairness metrics in decision-making. This enables a more complete diagnosis and mitigation of discrimination in high-risk decision-making systems, in line with the auditing obligations of the EU AI Act.
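To illustrate the idea behind fairness-oriented data valuation (a toy sketch, not the FairShap implementation itself), the following example estimates, via Monte Carlo permutation sampling, the Shapley contribution of each training point to a demographic-parity gap. The synthetic dataset, the 1-NN model, and all variable names are hypothetical assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic training set: features X_tr, labels y_tr; validation set with
# a sensitive attribute a_va (all hypothetical, for illustration only).
X_tr = rng.normal(size=(20, 2))
y_tr = (X_tr[:, 0] + 0.3 * rng.normal(size=20) > 0).astype(int)
X_va = rng.normal(size=(40, 2))
a_va = (X_va[:, 1] > 0).astype(int)

def predict_1nn(train_idx, X):
    """1-nearest-neighbour predictions using only the training subset train_idx."""
    if len(train_idx) == 0:
        return np.zeros(len(X), dtype=int)      # default prediction for the empty coalition
    d = np.linalg.norm(X[:, None, :] - X_tr[train_idx][None, :, :], axis=2)
    return y_tr[train_idx][d.argmin(axis=1)]

def dp_gap(train_idx):
    """Demographic-parity gap |P(yhat=1|a=0) - P(yhat=1|a=1)| on the validation set."""
    yhat = predict_1nn(train_idx, X_va)
    return abs(yhat[a_va == 0].mean() - yhat[a_va == 1].mean())

# Monte Carlo Shapley values of each training point w.r.t. the DP gap:
# the average marginal change in the gap when the point joins a random coalition.
n, n_perms = len(X_tr), 200
phi = np.zeros(n)
for _ in range(n_perms):
    perm = rng.permutation(n)
    prev = dp_gap(np.array([], dtype=int))
    for k, i in enumerate(perm):
        cur = dp_gap(perm[: k + 1])
        phi[i] += cur - prev
        prev = cur
phi /= n_perms

# Negative phi[i]: point i tends to reduce the disparity; positive: it increases it.
print("Most disparity-increasing training points:", np.argsort(phi)[-3:])
```

By the efficiency property of Shapley values, the per-point contributions sum exactly to the disparity of the full training set, which is what makes this kind of decomposition useful for diagnosing which examples drive unfairness.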
Moving beyond fairness definitions for decision-making, we also propose ERG, a graph-based approach to measuring and mitigating structural disparities in social capital within social networks. This approach addresses emerging regulatory demands, such as those set out in the EU Digital Services Act, which require assessing and reducing systemic risks on online platforms.
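The structural quantity underlying this line of work, the effective resistance between nodes, can be sketched on a toy graph via the Laplacian pseudoinverse. The network, the group assignment, and the group-level average below are illustrative assumptions, not the ERG method itself.

```python
import numpy as np

# Toy social network: a dense community (nodes 0-3) and a sparse path community
# (nodes 4-7), joined by a single bridge edge (3, 4). Hypothetical group labels.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (4, 5), (5, 6), (6, 7), (3, 4)]
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
n = 8

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
Lp = np.linalg.pinv(L)                         # Moore-Penrose pseudoinverse

def effective_resistance(u, v):
    # R(u, v) = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
    return Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]

# Average effective resistance from each group to every node: a proxy for how
# well-connected (how much structural social capital) a group is.
R = np.array([[effective_resistance(u, v) for v in range(n)] for u in range(n)])
for g in (0, 1):
    print(f"group {g}: mean effective resistance = {R[group == g].mean():.3f}")
```

In this toy example the sparsely connected group shows a higher mean effective resistance, i.e. lower structural social capital; mitigation then amounts to adding edges that most reduce this gap.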
In the context of algorithm use, we design and evaluate a human-AI complementarity framework for collaborative decision-making in high-stakes resource allocation tasks. By combining human and algorithmic matching decisions and optimizing the hand-off between them with bandit-based strategies, we explore how semi-automated systems can be designed to outperform either humans or algorithms alone. This approach adheres to the TAI principles of technical robustness, human oversight, and harm prevention, as set out in the EU GDPR and the TAI guidelines.
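A minimal sketch of a bandit-based hand-off, assuming a simple epsilon-greedy policy and hypothetical success probabilities for the human and algorithmic decision-makers (the framework in the thesis is considerably more involved):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two "arms": route each decision to the human expert or to the algorithm.
# Hypothetical success probabilities; in practice these are unknown a priori.
p_success = {"human": 0.70, "algorithm": 0.85}
arms = list(p_success)

counts = {a: 0 for a in arms}
values = {a: 0.0 for a in arms}                # running mean reward per arm
eps, T = 0.1, 5000

for t in range(T):
    if rng.random() < eps:                     # explore a random decision-maker
        arm = arms[rng.integers(len(arms))]
    else:                                      # exploit the best estimate so far
        arm = max(arms, key=values.get)
    reward = float(rng.random() < p_success[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean update

print("estimated success rates:", {a: round(v, 3) for a, v in values.items()})
print("decisions routed to the algorithm:", counts["algorithm"])
```

Over time the policy learns to defer to the stronger decision-maker while still sampling the other; richer hand-off rules can condition on the instance, which is what enables genuine complementarity rather than simple selection.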
Finally, in the governance sphere, we examine the use of AI for worker management under Spanish labor law. We identify applicable legal frameworks across the AI system life-cycle; analyze the alignment between TAI principles, the EU AI Act, and labor duties; and highlight tensions such as the gap between correlation-based models and the legal requirement for causal justification in certain decisions.
Taken together, these contributions demonstrate that ensuring trustworthiness requires more than algorithmic improvements alone: trustworthy AI must be understood as a sociotechnical system, emerging from the interaction of data, algorithms, institutions, and regulatory constraints. This thesis provides practical insights for researchers, practitioners, and policymakers seeking to develop AI systems that are technically robust, socially aligned, and legally compliant.
Short bio: Adrián Arnaiz Rodríguez is a PhD student at ELLIS Alicante. His PhD topic is Trustworthy AI and Graph Neural Networks, supervised by Nuria Oliver (ELLIS Alicante) and Miguel Angel Lozano (University of Alicante). He visited MPI-SWS, his secondary ELLIS location, advised by Manuel Gómez Rodríguez. He holds an MSc in AI from the Open University of Catalonia and a BSc from the University of Burgos.
Presenter: Adrián Arnaiz Rodríguez
Date: 2025-09-26 10:30 (CEST)
Location: Salón de Grados Rector Ramón Martín Mateo, Facultad de Derecho, Universidad de Alicante, Campus de la Universidad de Alicante, 03690 Alicante, ES
Online: https://si.ua.es/es/videostreaming/derecho.html
[AAO24] Arnaiz-Rodriguez, A., and Oliver, N. (2024). Towards Algorithmic Fairness by means of Instance-level Data Re-weighting based on Shapley Values. ICLR Workshop on Data-centric Machine Learning Research (DMLR 2024). OpenReview
[ACRO25] Arnaiz-Rodriguez, A., Curto Rex, G., and Oliver, N. (2025). Structural Group Unfairness: Measurement and Mitigation by Means of the Effective Resistance. Proceedings of the International AAAI Conference on Web and Social Media (ICWSM 2025), Vol. 19(1), pp. 83–106. Also presented at IC2S2 2024 and TrustLOG@WWW 2024. DOI
[ACOTG25] Arnaiz-Rodriguez, A., Corvelo, N., Thejaswi, S., Oliver, N., and Gomez-Rodriguez, M. (2025). Towards Human-AI Complementarity in Matching Tasks. HLDM Workshop at ECML-PKDD 2025.
[ALC24a] Arnaiz-Rodriguez, A., and Losada Carreño, J. (2024). La intersección de la IA fiable y el Derecho del Trabajo. Un estudio jurídico y técnico desde una taxonomía tripartita (EN: The Intersection of Trustworthy AI and Labour Law. A Legal and Technical Study from a Tripartite Taxonomy). Revista General de Derecho del Trabajo y de la Seguridad Social, 69. Iustel
[ALC24b] Arnaiz-Rodriguez, A., and Losada Carreño, J. (2024). Estudio de la causalidad en la toma de decisiones algorítmicas: el impacto de la IA en el ámbito empresarial (EN: Studying Causality in Algorithmic Decision Making: the Impact of AI in the Business Domain). Revista Internacional y Comparada de Relaciones Laborales y Derecho del Empleo, 12(3). ADAPT
[AABO22] Arnaiz-Rodriguez, A., Begga, A., Escolano, F., and Oliver, N. (2022). DiffWire: Inductive Graph Rewiring via the Lovász Bound. Proceedings of the First Learning on Graphs Conference (LoG 2022), PMLR Vol. 198, pp. 15:1–15:27. PMLR
[AE25] Arnaiz-Rodriguez, A., and Errica, F. (2025). Oversmoothing, Oversquashing, Heterophily, Long-Range, and more: Demystifying Common Beliefs in Graph Machine Learning. arXiv preprint arXiv:2505.15547. Best Paper Award at MLG 2025 (ECML-PKDD 2025).
[AABOH22] Arnaiz-Rodriguez, A., Begga, A., Escolano, F., Oliver, N., and Hancock, E. (2022). Graph Rewiring: From Theory to Applications in Fairness. Tutorial at the Learning on Graphs Conference (LoG 2022). Link
[AAV24] Arnaiz-Rodriguez, A., and Velingker, A. (2024). Graph Learning: Principles, Challenges, and Open Directions. Tutorial at the 41st International Conference on Machine Learning (ICML 2024). ICML