Trustworthy AI and Compositionality in Multimodal Language Models
Abstract: This presentation will give an overview of two perspectives on trustworthy AI: one technical and one sociotechnical. The technical perspective focuses on visual reasoning experiments with multimodal LLMs, which show how these models fail to generalise compositionally and lack consistency in problem solving. The experiments are based on a novel benchmark with synthetic data from the CLEVR domain. The sociotechnical perspective is based on a critical analysis of Reinforcement Learning from Human Feedback (RLHF) as a method for AI alignment, highlighting ways in which RLHF is insufficient as a method and pointing to fundamental issues with the concept of alignment itself.
Short bio: Adam Dahlgren Lindström (PhD 2024) is a postdoctoral research fellow in the Department of Computing Science at Umeå University, Sweden, where he is part of the Responsible AI research group and the AI Policy Lab. His thesis, Learning, Reasoning, and Compositional Generalisation in Multimodal Language Models, investigated limitations in vision-language models and how to better measure phenomena related to reasoning tasks and compositionality. During his PhD, he was affiliated with Sweden's largest research programme, the Wallenberg AI, Autonomous Systems and Software Program (WASP), and served as one of the conference chairs of the International Conference on Hybrid Human-Artificial Intelligence (HHAI) 2024. He is currently funded by the ELIAS project, where he contributes to the tasks on trustworthy AI and hybrid AI methods. His current research focuses on benchmarking multimodal models, e.g. for reasoning capabilities and ontological knowledge, and on methodologies for hybrid human-AI collaboration and sustainable deployment of AI technologies.
Presenter: Adam Dahlgren Lindström (Umeå University, Sweden)
Date: 2025-03-26 10:00 (CET)
Location: ELLIS Alicante offices, Muelle Pte., 5 – Edificio A, 03001 Alicante, Spain