Human-Centric Machine Learning (HCML) Reading Group

A reading group focused on Human-Centric Machine Learning (HCML), organized by the PhD students of the ELLIS Unit Alicante.

The HCML reading group aims to bring together researchers and students interested both in gaining a broad overview of the area and in exploring it in depth. To clearly understand how algorithmic and human decisions influence each other, we will discuss papers on different topics within HCML and also delve into new problems, different approaches, and sources of bias.

Communication (in English):

  • Signal, where all group communication takes place. Important links, suggested papers, etc. are posted in this channel.
  • ELLIS PhD & Postdoc Slack channel: #rg-human-centric-ml

Meetings


  • Understanding and Creating Art with AI: Review and Outlook

    Authors: Eva Cetinic, James She (2022)

    Article link: https://dl.acm.org/doi/full/10.1145/3475799

    Technologies related to artificial intelligence (AI) have a strong impact on the changes of research and creative practices in visual arts. The growing number of research initiatives and creative applications that emerge in the intersection of AI and art motivates us to examine and discuss the creative and explorative potentials of AI technologies in the context of art. This article provides an integrated review of two facets of AI and art: (1) AI is used for art analysis and employed on digitized artwork collections, or (2) AI is used for creative purposes and generating novel artworks. In the context of AI-related research for art understanding, we present a comprehensive overview of artwork datasets and recent works that address a variety of tasks such as classification, object detection, similarity retrieval, multimodal representations, and computational aesthetics, among others. In relation to the role of AI in creating art, we address various practical and theoretical aspects of AI Art and consolidate related works that deal with those topics in detail. Finally, we provide a concise outlook on the future progression and potential impact of AI technologies on our understanding and creation of art.

    Presenter: Nuria Oliver

    Date: Thursday 14th of July at 15.00 CEST


  • Psychoanalyzing artificial intelligence: the case of Replika

    Author: Luca M. Possati (2022)

    Article link: https://link.springer.com/content/pdf/10.1007/s00146-021-01379-7.pdf

    The central thesis of this paper is that human unconscious processes influence the behavior and design of artificial intelligence (AI). This thesis is discussed through the case study of a chatbot called Replika, which intends to provide psychological assistance and friendship but has been accused of inciting murder and suicide. Replika originated from a trauma and a work of mourning lived by its creator. The traces of these unconscious dynamics can be detected in the design of the app and the narratives about it. Therefore, a process of de-psychologization and de-humanization of the unconscious takes place through AI. This psychosocial approach helps criticize and overcome the so-called “standard model of intelligence” shared by most AI researchers. It facilitates a new interpretation of some classic problems in AI, such as control and responsibility.

    Presenter: Erik Derner

    Date: Thursday 16th of June at 15.00 CEST


  • Performative Power

    Authors: Moritz Hardt, Meena Jagadeesan and Celestine Mendler-Dünner (2022)

    Article link: https://arxiv.org/pdf/2203.17232.pdf

    We introduce the notion of performative power, which measures the ability of a firm operating an algorithmic system, such as a digital content recommendation platform, to steer a population. We relate performative power to the economic theory of market power. Traditional economic concepts are well known to struggle with identifying anti-competitive patterns in digital platforms—a core challenge is the difficulty of defining the market, its participants, products, and prices. Performative power sidesteps the problem of market definition by focusing on a directly observable statistical measure instead. High performative power enables a platform to profit from steering participant behavior, whereas low performative power ensures that learning from historical data is close to optimal. Our first general result shows that under low performative power, a firm cannot do better than standard supervised learning on observed data. We draw an analogy with a firm being a price-taker, an economic condition that arises under perfect competition in classical market models. We then contrast this with a market where performative power is concentrated and show that the equilibrium state can differ significantly. We go on to study performative power in a concrete setting of strategic classification where participants can switch between competing firms. We show that monopolies maximize performative power and disutility for the participant, while competition and outside options decrease performative power. We end on a discussion of connections to measures of market power in economics and of the relationship with ongoing antitrust debates.

    Presenter: Miriam Rateike

    Date: Thursday 2nd of June at 15.00 CEST


  • An Introduction to AI Safety

    A discussion about why powerful AI systems can be very dangerous and why it is important for everyone in the Machine Learning community to at least understand the basic problems.

    Presenter: Marius Hobbhahn

    Date: Thursday 19th of May at 15.00 CEST


  • Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

    Authors: Nina Grgic-Hlaca, Elissa M. Redmiles, Krishna P. Gummadi & Adrian Weller (2018)

    Article link: https://doi.org/10.1145/3178876.3186138

    As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior works on algorithmic fairness normatively prescribe how fair decisions ought to be made. In contrast, here, we descriptively survey users for how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose to understand why people perceive certain features as fair or unfair to be used in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality and reliability, as latent considerations that inform people’s moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person’s assessment of the eight latent properties of a feature in our exemplar scenario, we can accurately (> 85%) predict if the person will judge the use of the feature as fair. Our findings have important implications. At a high-level, we show that people’s unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low-level, we find considerable disagreements in people’s fairness judgments. We identify root causes of the disagreements, and note possible pathways to resolve them.

    Presenter: Stratis Tsirtsis

    Date: Thursday 5th of May at 15.00 CEST


  • Lessons for artificial intelligence from the study of natural stupidity

    Authors: Alexander S. Rich & Todd M. Gureckis (2019)

    Article link: https://doi.org/10.1038/s42256-019-0038-z

    Artificial intelligence and machine learning systems are increasingly replacing human decision makers in commercial, healthcare, educational and government contexts. But rather than eliminate human errors and biases, these algorithms have in some cases been found to reproduce or amplify them. We argue that to better understand how and why these biases develop, and when they can be prevented, machine learning researchers should look to the decades-long literature on biases in human learning and decision-making. We examine three broad causes of bias—small and incomplete datasets, learning from the results of your decisions, and biased inference and evaluation processes. For each, findings from the psychology literature are introduced along with connections to the machine learning literature. We argue that rather than viewing machine systems as being universal improvements over human decision makers, policymakers and the public should acknowledge that these systems share many of the same limitations that frequently inhibit human judgement, for many of the same reasons.

    Presenter: Aditya Gulati

    Date: Thursday 31st of March at 15.00 CEST


  • An Introduction to Reliable and Robust AI

    Article link: https://doi.org/10.3390/app12083936

    An introduction to the design of reliable and robust AI systems in computer vision based on a review paper by Francesco Galati.

    Presenter: Francesco Galati

    Date: Thursday 17th of March at 15.00 CEST


  • Improving human decision-making with machine learning

    Authors: Hamsa Bastani, Osbert Bastani, Wichinpong Park Sinchaisri (2021)

    Article link: https://arxiv.org/abs/2108.08454

    A key aspect of human intelligence is people’s ability to convey their knowledge to others in succinct forms. However, despite their predictive power, current machine learning models are largely blackboxes, making it difficult for humans to extract useful insights. Focusing on sequential decision-making, we design a novel machine learning algorithm that conveys its insights to humans in the form of interpretable “tips”. Our algorithm selects the tip that best bridges the gap in performance between human users and the optimal policy. We evaluate our approach through a series of randomized controlled user studies where participants manage a virtual kitchen. Our experiments show that the tips generated by our algorithm can significantly improve human performance relative to intuitive baselines. In addition, we discuss a number of empirical insights that can help inform the design of algorithms intended for human-AI interfaces. For instance, we find evidence that participants do not simply blindly follow our tips; instead, they combine them with their own experience to discover additional strategies for improving performance.

    Presenter: Putra Manggala

    Date: Thursday 3rd of March at 15.00 CET


  • Guest talk by Qualcomm AI Research: Natural Graph Networks

    Authors: Pim de Haan, Taco Cohen, Max Welling (2020)

    Article link: https://arxiv.org/abs/2007.08349

    On 17th February at 15.00 CET, the ELLIS Human-Centric Machine Learning reading group will host its first guest session, welcoming distinguished researchers from Qualcomm AI Research and ELLIS Scholars. Pim de Haan, Research Associate at Qualcomm AI Research, will present his paper Natural Graph Networks, which explores how the local symmetries of graphs can be used to build more expressive graph networks. We will further explore the relationship between graph-structured data and human-centric problems and applications in a round table made up of Manuel Gómez Rodríguez (MPI-SWS), Carlos Castillo (UPF) and Efstratios Gavves (director of the Qualcomm-UvA Deep Vision Lab). This relationship mainly arises from the fact that much human interaction data is expressed as network-structured data. Additionally, many advantages of GNNs, such as capturing complex structures in data and information flow, could make GNNs an outstanding tool for addressing HCML problems. This session will be held online and is open to everyone interested.

    Presenter: Pim de Haan (Qualcomm AI Research)

    Round Table: Manuel Gómez Rodríguez (MPI-SWS), Carlos Castillo (UPF) and Efstratios Gavves (Qualcomm-UvA Deep Vision Lab)

    Date: Thursday 17th of February at 15.00 CET


  • CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities

    Authors: Mina Lee, Percy Liang, Qian Yang (2022)

    Article link: https://arxiv.org/abs/2201.06796

    Large language models (LMs) offer unprecedented language generation capabilities and exciting opportunities for interaction design. However, their highly context-dependent capabilities are difficult to grasp and are often subjectively interpreted. In this paper, we argue that by curating and analyzing large interaction datasets, the HCI community can foster more incisive examinations of LMs’ generative capabilities. Exemplifying this approach, we present CoAuthor, a dataset designed for revealing GPT-3’s capabilities in assisting creative and argumentative writing. CoAuthor captures rich interactions between 63 writers and four instances of GPT-3 across 1445 writing sessions. We demonstrate that CoAuthor can address questions about GPT-3’s language, ideation, and collaboration capabilities, and reveal its contribution as a writing “collaborator” under various definitions of good collaboration. Finally, we discuss how this work may facilitate a more principled discussion around LMs’ promises and pitfalls in relation to interaction design. The dataset and an interface for replaying the writing sessions are publicly available at this https URL.

    Presenter: Gergely D. Németh

    Date: Thursday 3rd of February at 15.00 CET


  • Towards a Theory of Justice for Artificial Intelligence

    Author: Iason Gabriel (2021)

    Article link: https://arxiv.org/abs/2110.14419

    This paper explores the relationship between artificial intelligence and principles of distributive justice. Drawing upon the political philosophy of John Rawls, it holds that the basic structure of society should be understood as a composite of socio-technical systems, and that the operation of these systems is increasingly shaped and influenced by AI. As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts. These norms entail that the relevant AI systems must meet a certain standard of public justification, support citizens’ rights, and promote substantively fair outcomes – something that requires specific attention be paid to the impact they have on the worst-off members of society.

    Presenter: Nazaal Ibrahim

    Date: Thursday 20th of January at 15.00 CET


  • Rethinking of Marxist perspectives on big data, artificial intelligence (AI) and capitalist economic development

    Authors: Nigel Walton, Bhabani Shankar Nayak (2021)

    Article link: https://www.sciencedirect.com/science/article/abs/pii/S0040162521000081

    AI and big data are not ideologically neutral scientific knowledge that drives economic development and social change. AI is a tool of capitalism which transforms our societies within an environment of technological singularity that helps in the expansion of the capitalist model of economic development. Such a development process ensures the precarity of labour. This article highlights the limits of traditional Marxist conceptualisation of labour, value, property and production relations. It argues for the rethinking of Marxist perspectives on AI-led economic development by focusing on a new conceptual interpretation of bourgeois and proletariat in the information-driven, data-based society. This is a conceptual paper which critically outlines different debates and challenges around AI-driven big data and its implications. It particularly focuses on the theoretical challenges faced by the labour theory of value and its social and economic implications from a critical perspective. It also offers alternatives by analysing future trends and developments for the sustainable use of AI. It argues for developing policies on the use of AI and big data to protect labour, advance human development and enhance social welfare by reducing risks.

    Presenter: Bhargav Srinivasa Desikan

    Open notes: shared document

    Date: Thursday 16th of December at 15.00 CET


  • Machine Learning for the Developing World

    Authors: De-Arteaga, M., Herlands, W., Neill, D. B., & Dubrawski, A. (2018)

    Article link: https://dl.acm.org/doi/abs/10.1145/3210548

    Researchers from across the social and computer sciences are increasingly using machine learning to study and address global development challenges. This article examines the burgeoning field of machine learning for the developing world (ML4D). First, we present a review of prominent literature. Next, we suggest best practices drawn from the literature for ensuring that ML4D projects are relevant to the advancement of development objectives. Finally, we discuss how developing world challenges can motivate the design of novel machine learning methodologies. This article provides insights into systematic differences between ML4D and more traditional machine learning applications. It also discusses how technical complications of ML4D can be treated as novel research questions, how ML4D can motivate new research directions, and where machine learning can be most useful.

    Presenter: Felix Grimberg

    Open notes: shared document

    Date: Thursday 2nd of December at 15.00 CET