A questionable pause in Artificial Intelligence

Nuria Oliver, PhD

Scientific Director of the ELLIS Foundation Alicante

May 3, 2023

Article published by EL PAÍS

«In the face of the vision of those who want to adapt humanity to technology, and not the other way around, it is time to act as a society, collectively defining the ethical and political horizons of AI.»

Artificial Intelligence (AI) is going through turbulent times. In recent months, we have witnessed unprecedented advances in generative AI techniques that allow for the easy creation of texts, music, images, voice, videos, or code that are nearly indistinguishable from human-created content. The fact that what we read, see, or hear is no longer exclusively created by humans raises profound social and ethical dilemmas that we must undoubtedly address.

Just over a month ago, the Future of Life Institute published a letter supported by thousands of people, including prominent figures such as the historian and philosopher Yuval Noah Harari, the AI expert and Turing Award winner Yoshua Bengio, and the entrepreneur Elon Musk. The letter proposed an “AI pause,” calling for a halt of at least six months in the development of AI systems more powerful than GPT-4 (the latest publicly known version of ChatGPT). It also called for the development of rigorous safety protocols and a robust AI governance system to ensure that the effects of these systems are positive and manageable.

Undoubtedly, deploying governance and regulatory systems that ensure positive social impact of AI is a priority and a necessity. However, the motivations of the institution promoting the letter, as well as its focus on avoiding an existential risk to humanity due to AI, are highly controversial and questionable. It is not surprising that the letter has generated both supporters and detractors. A few days later, a strong response to the letter was published by the authors of one of the articles cited in the letter, shedding light on the debate. What lies behind this debate? Are we truly facing an existential threat due to AI development? Are we on the verge of the end of humanity?

To understand this letter and its proposals, one must be familiar with the theories of the longtermist movement, which some consider a religion, and with the related concept of effective altruism, both of which have influenced its publication.

Beyond the vagueness of the letter (what does “systems more powerful than GPT-4” mean when we do not know GPT-4’s capabilities due to its complete opacity?) and its oversimplification of the complexity of the challenge (it is impossible to develop AI governance models within six months when Europe has been working on a European AI regulation, the AI Act, for over two years), the letter fails to focus on the real risks and negative consequences of the development and widespread deployment of artificial intelligence in society. Instead, it centers on the potential “existential risks” posed by AI that supposedly hinder human progress. According to the longtermist perspective, progress consists of creating trillions of digital post-humans living in a vast computational simulation and colonizing space with the help of an omnipresent, all-powerful, friendly AI.

This vision is evidently controversial and not necessarily shared by society as a whole. It is also a dangerous vision as it justifies ignoring the significant challenges faced by flesh-and-blood humans today, as long as these challenges do not represent an existential risk. It justifies, for example, not investing resources in mitigating inequality or poverty in the current world if they do not pose an existential risk to the development of post-humanity. In the context of AI, it diverts attention from imminent challenges and real risks posed by AI, focusing instead on the risk that a hypothetical superhuman and uncontrolled AI would entail.

These challenges include: the violation of privacy and the use of massive amounts of data without explicit consent, potentially infringing the intellectual property rights of the data’s creators; the exploitation of the workers who annotate, train, and correct AI systems, many of them in developing countries on meager wages; algorithmic biases and discrimination that not only perpetuate but even exacerbate stereotypes, patterns of discrimination, and systems of oppression; the lack of transparency in both the models and their uses; the significant carbon footprint of the large neural networks underlying these AI systems; the subliminal manipulation of human behavior by AI algorithms; the lack of truthfulness in generative AI systems, which invent all kinds of content (images, texts, audio, video…) with no correspondence to the real world; the fragility of these large models, which can make mistakes and be deceived; and the concentration of power in the hands of an oligopoly of companies and their billionaire owners and investors. All of these issues, important enough that they should be our priority, are conspicuously absent from the letter.

It is evident that dangerous longtermist theories have penetrated not only the circles of influence in the technology sector but also governmental institutions. These theories advocate for the necessary adaptation of humanity to technological development determined by a privileged group, instead of promoting the development of technology that adapts to people and their needs (rather than the other way around); technology that helps us face the immense challenges of the 21st century; technology, in short, that represents progress, understood as an improvement in the quality of life for people (for all, not just some), for the rest of living beings, and for our planet.

The “race for artificial intelligence” is not a race that we have collectively decided upon and agreed to. The immense social experiments that result from the massive deployment of artificial intelligence algorithms in our societies, without any kind of regulation or control, are not part of an inevitable future or of some false technological determinism. Rather, they are the result of decisions made by the companies responsible for these systems, driven by economic ambition and aspirations to power, and of the inability of societies and their institutions to react in time and regulate accordingly.

It is time to act as a society, collectively defining the ethical and political horizons of AI, because we are talking about science and technology, but also about rights, economy, democracy, equality, inclusion, citizenship, peace, and power. Gandhi said that “the power to question is the basis of human progress.” It is time not only to question but especially to find answers to the profound questions posed by artificial intelligence. There is no society more vulnerable and easily manipulable than an ignorant society. Therefore, it is time to educate, to learn, and not to fall into sensationalistic apocalypticism. It is time to take ownership of our destiny, to regulate AI intelligently, and to focus on stopping abusive practices and the social harm caused by the companies behind the advances in artificial intelligence, which in the last decade have accumulated unprecedented power and contributed to social inequality. It is time to invest in artificial intelligence that contributes to progress, without leaving anyone behind and without destroying the planet in the process. Let’s not allow others—humans or algorithms—to decide our future.

Original article published here: https://elpais.com/opinion/2023-05-03/una-pausa-cuestionable-en-la-inteligencia-artificial.html