AI-powered social innovation at ELLIS Alicante
ELLIS Alicante receives the Social Innovation Prize awarded by the Spanish Association of Foundations. We speak with our youngest researchers at ELLIS Alicante about how their work on AI contributes to driving positive societal impact.
ELLIS Alicante, the only non-profit foundation in Spain dedicated to excellence in research on ethical, responsible and people-centered Artificial Intelligence (AI), has been recognized with the Social Innovation Award granted by the Spanish Association of Foundations (AEF – Asociación Española de Fundaciones). On the occasion of this award, we spoke with five predoctoral researchers from ELLIS Alicante, whose PhD studies are co-funded by the Valencian Government (Generalitat Valenciana) and the Banco Sabadell Foundation, about how their research contributes to generating positive social impact. They are part of the prestigious and highly competitive ELLIS PhD program.
Adrián Arnáiz graduated first in his class in Computer Science and went on to complete a Master’s degree in Data Science and Artificial Intelligence. In his PhD he explores algorithmic fairness, causality and graph theory, applied to improving the ethics, accountability and transparency of algorithmic decision-making.
“Machine learning models are becoming the main tools for addressing complex social problems thanks to their ability to analyze huge amounts of data, learn from them, make informed decisions and predict behavior. In this context, they are also increasingly used to make or support decisions about people in many important areas of their lives, from justice to employment to healthcare. It is therefore necessary to take into account the ethical implications of such decisions, including concepts such as privacy, transparency, accountability, reliability, autonomy and algorithmic fairness.
Today, there are algorithms that make discriminatory decisions about certain groups of people, either because they have learnt from historical data that contain these biases or because of an incorrect design or use of the algorithms. Algorithmic fairness aims to build decision-making algorithms that do not learn in a biased way and, therefore, do not make decisions with the same cultural biases that a human would. In this field, we study how to modify the data and the learning algorithm so that it learns to make decisions in a non-discriminatory way. That way, these machine learning models won’t repeat or amplify the biased decisions that humans have historically made.”
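As an illustration of the kind of intervention Adrián describes, the sketch below applies one simple pre-processing technique, reweighing (Kamiran & Calders), to synthetic data: training examples are weighted so that the sensitive attribute and the label become independent in the weighted training set before a classifier is fit. Everything here (the data, the toy logistic regression, the group labels) is a hypothetical, minimal example of the general idea, not Adrián’s actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, biased historical data: a sensitive attribute s (e.g. group 0/1)
# is correlated with the positive label y.
n = 5000
s = rng.integers(0, 2, size=n)                    # sensitive attribute
x = rng.normal(size=(n, 2)) + s[:, None] * 0.5    # features that leak group info
y = (rng.random(n) < np.where(s == 1, 0.7, 0.3)).astype(float)  # biased labels

# Reweighing: weight each example by P(s) * P(y) / P(s, y) so that group and
# label are independent in the weighted training distribution.
w = np.ones(n)
for sv in (0, 1):
    for yv in (0.0, 1.0):
        mask = (s == sv) & (y == yv)
        p_joint = mask.mean()
        if p_joint > 0:
            w[mask] = (s == sv).mean() * (y == yv).mean() / p_joint

# Weighted logistic regression trained by plain gradient descent.
X = np.column_stack([x, np.ones(n)])              # add a bias term
theta = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    grad = X.T @ (w * (p - y)) / n
    theta -= 0.5 * grad

# Per-group positive-prediction rates, a simple check for demographic parity.
pred = (1.0 / (1.0 + np.exp(-X @ theta)) > 0.5).astype(float)
for sv in (0, 1):
    print(f"group {sv}: positive prediction rate = {pred[s == sv].mean():.2f}")
```

Printing the per-group positive-prediction rates makes it easy to compare the weighted model against an unweighted baseline and see how far each one is from treating the two groups alike.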
Julien Colin graduated in Physics and Chemistry and holds a Master’s degree in Cognitive Science: Natural and Artificial Cognition. Before starting his PhD at ELLIS Alicante, he worked as a research assistant in France and the United States. The focus of his PhD is explainable AI: he is interested in developing methods to better understand how deep learning systems work.
“Because of its learning and decision-making abilities, AI has great potential to become a source of social innovation, as it can help us achieve more objectivity and fairness in the many processes needed to organise our daily lives. Companies have started relying more and more on AI systems to make more impartial decisions in their stead. However, after deployment it has been discovered that their decisions can actually be unreliable, if not downright discriminatory at times. This is a major problem, as AI cannot be useful for society if we cannot trust its decision-making processes. I work on making the decision process of those AI systems more transparent. With a better understanding of their decision process, we can identify and prevent the deployment of unsafe AI in the real world.”
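One common family of transparency methods in Julien’s field is attribution: scoring how much each input feature contributes to a model’s prediction. The sketch below shows a hypothetical, minimal occlusion-based attribution on a toy stand-in “model” (a fixed linear scorer); it is not Julien’s own method, only an illustration of the general idea that explainable-AI research applies to real deep networks and images.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Stand-in black-box classifier: a fixed linear scorer over 4 features.
    In practice this would be a trained deep network we want to inspect."""
    hidden_w = np.array([3.0, 0.1, -2.0, 0.0])
    return 1.0 / (1.0 + np.exp(-(x @ hidden_w)))

def occlusion_importance(model, x, baseline=0.0):
    """Score each feature by how much the prediction changes when that
    feature is replaced with a neutral baseline value."""
    base_pred = model(x)
    importances = np.zeros_like(x)
    for i in range(len(x)):
        x_occluded = x.copy()
        x_occluded[i] = baseline
        importances[i] = base_pred - model(x_occluded)
    return importances

x = rng.normal(size=4)
for i, imp in enumerate(occlusion_importance(model, x)):
    print(f"feature {i}: importance {imp:+.3f}")
```

The appeal of occlusion-style methods is that they treat the model as a black box: they only need to query its predictions, which is exactly the setting Julien describes when auditing systems whose internals we do not fully understand.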
Aditya Gulati holds a Bachelor’s and a Master’s degree in Computer Science from the International Institute of Information Technology in Bangalore (India) and is currently undertaking his PhD at ELLIS Alicante. His research interests are rooted in the computational modelling of human behavior via Artificial Intelligence methods.
“In today’s world, we interact with Artificial Intelligence systems on a regular basis. From small-scale decisions, such as which show to watch on Netflix, to more consequential ones, such as how long someone should spend in prison, these systems are everywhere. Despite their prevalence, today’s AI systems do not seem to understand some key aspects of human decision making, in particular our tendency to make irrational decisions arising from cognitive biases.
Social scientists have studied these cognitive biases and their impact on our decision making extensively, but AI systems are currently designed without accounting for that knowledge. This is problematic: if we want our AI systems to be helpful and complementary to humans, they need to understand how we make decisions. My research focuses on using what we know about cognitive biases to design AI systems that understand us better and help us make better decisions in teams composed of humans and AI.”
Gergely Németh is a graduate in Computer Science from the Budapest University of Technology and Economics (Hungary) and also holds a Master’s degree in Artificial Intelligence from the University of Manchester (UK). His doctoral research looks at privacy and fairness in federated learning systems.
“Deep learning enables data owners to change the world in an unprecedented manner. It is well known that large tech companies are the foremost experts at leveraging the data collected from their vast numbers of clients to serve their own interests. But client-users (as well as the regulatory agencies acting on their behalf) are increasingly reacting to this state of affairs in an effort to protect and take control of their data.
Federated Learning is a relatively new technique that lets a company’s clients keep full control over their data while working together collaboratively to achieve the same AI results as centralized data owners. For now, development in this field is focused on the company’s interest: to obtain the same performance as with other methods while limiting access to the clients’ sensitive data, mainly motivated by regulations imposed upon companies, such as the GDPR. However, I believe that there is another, client-centric way to approach this issue.
I want to research and demonstrate why it is worthwhile for users to participate in Federated Learning. I would like my research to improve clients’ trust in the system and motivate them to take part in more Federated Learning training, allowing them to better leverage their data for their own good. Enabling clients to use the data they produce for their own benefit has great potential to drive positive societal impact. The more we know about ourselves, the better we will be able to conduct ourselves as citizens.”
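To make the idea concrete, the sketch below simulates the basic federated averaging loop on made-up data: each client trains a small model on its own private shard, only the model parameters travel to the aggregator, and the aggregator averages them into a global model. The clients, data and model here are hypothetical placeholders; this is a didactic sketch of the general technique, not the privacy- or fairness-aware methods Gergely is developing.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client trains on its own data; the raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        preds = X @ w
        grad = X.T @ (preds - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

# Simulated clients, each holding a private shard of a linear-regression problem.
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(5):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + 0.1 * rng.normal(size=200)
    clients.append((X, y))

# Federated averaging: the aggregator only ever sees model updates, never data.
global_w = np.zeros(3)
for _ in range(20):
    local_models = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    global_w = np.average(local_models, axis=0, weights=sizes)

print("recovered weights:", np.round(global_w, 2))   # should approach [2.0, -1.0, 0.5]
```

The design choice that matters for Gergely’s argument is the separation of roles: the sensitive examples stay on each client, and only aggregated model parameters are shared, which is what makes a client-centric version of the scheme conceivable in the first place.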
Piera Riccio graduated in Multimedia Engineering and Cinema at the Polytechnic University of Turin (Italy) and went on to complete two Master’s degrees, one in Technology for Intelligent Societies at the Polytechnic University of Turin and the second in Data Engineering at Télécom Paris – EURECOM (France). She studies the impact that the Artificial Intelligence algorithms embedded in social media have on society, and particularly on women.
“Artificial Intelligence already has an undeniable cultural impact on humanity. Massively deployed on social media, it constantly influences our daily choices, provides automatic editing tools, and shapes our aesthetic taste. In addition, the intersection between AI and artistic practice is becoming more widespread, with clear implications for the artistic production of our era. In my work, I analyze the implications of these platforms for cultural minorities, women and artists.
Unfortunately, we cannot assume that AI is unbiased, neutral and apolitical. Creating socially responsible AI is both possible and necessary, and for that we need to be aware of the societal impact of our technological choices and ensure that they are ethical. In this regard, my research aims to uncover unfair impositions by the cultures that dominate the development of AI technologies today.”
At ELLIS Alicante we will continue to support and promote the work of these five young, talented scientists who aim to achieve positive societal impact through their research in Artificial Intelligence.
ELLIS Alicante awarded the Social Innovation Prize by the AEF