Reading time: 12 minutes
Translation by AB – January 5, 2025
This is a translation of an article written in April 2023, in response to a “pause” publicly called for in March 2023 in the development of large-scale generative AIs. The article is therefore no longer really topical, but it nevertheless carries some “eternal” themes.
I think people should be happy that we are a little bit scared of this.
Sam Altman, CEO of OpenAI[1]
Morning program
Last Friday, March 31, France Inter’s morning program joined the worldwide stir generated by the worried open letter signed by numerous researchers and engineers calling for a six-month moratorium on the development of generative AI systems such as ChatGPT, Midjourney or DALL·E (“Pause Giant AI Experiments: An Open Letter”[2]).
AI has in fact achieved for the first time the goal dreamed of in the 1960s by its creators and foretold by science-fiction literature: to resemble humans and form a society with them. If a major national media outlet like Radio France gives voice to this concern, it is because these intelligent machines are finally reaching what we identify as “stage 3” of their deployment, the one requiring considerable technical and financial resources (PageRank, Parcoursup and other Moral Machines). They are now accessible to everyone and therefore, insofar as we grant them an existence of their own, they already form a society with us: they can in principle work, pass exams, produce “truth”, “art”, etc.
During this broadcast, Raja Chatila, roboticist, Professor Emeritus of artificial intelligence and technology ethics at Sorbonne University, gave the following justification for his signature[3]:
There, we had to show a red card. Something is happening, and we had to become collectively aware of it. It is not about scaring people; it is about stating a certain reality. A number of institutions are deploying systems that are based on machine learning over very large quantities of data, and which have reached a point in their development where they can produce texts that seem to be written by humans. But these texts do not carry truth; it is difficult to distinguish the true from the false. The second problem is the total absence of meaning. These systems do not understand what they write or what they say.
Nothing really new, then, but it must be recognized that, presented in this way, the situation activates a familiar anxiety: that of change and of entry into the unknown. This anxiety is usually allayed by positive injunctions to disruption, creative destruction and technological solutionism. But in the case of generative AI, those responsible for the change seem to react differently. Why are they themselves calling for a moratorium? Is the situation so worrying?
Without doubting for a second the sincerity of Mr. Chatila or that of most of the 2,500 signatories of this open letter (count as of April 1, 2023), they are playing, perhaps unwittingly, a very classic score for “stage 4” of all technical progress: the repackaging of “solutions” deployed at the cost of billions and gigawatt-hours into moral machines, the ultimate stage of their embedding in our belief systems. For, by introducing the question of good and evil without explanation, the morally charged demand for a moratorium paradoxically contributes to the collective acceptance of these generative AIs and to their effective advent into human society.
Let’s take a closer look at this, starting with a signatory whose motives we know.
Elon Musk
Elon Musk, the robust character examined in Elon Musk, special vassal, is one of the instigators of this open letter, and few people are fooled as to his motives.
ChatGPT is indeed the creation of OpenAI, a structure that he co-chaired from 2015 and then left in 2018 (for lack of results!). A little souvenir from 2016: arms crossed, piercing gaze, perfect[4].
Elon Musk’s somewhat tense reaction to the success of these generative AIs is perfectly explained in Matt Novak’s excellent article, which concludes[5]:
Musk was perfectly happy with developing artificial intelligence tools at a breakneck speed when he was funding OpenAI. But now that he’s left OpenAI and has seen it become the frontrunner in a race for the most cutting edge tech to change the world, he wants everything to pause for six months. If I were a betting man, I’d say Musk thinks he can push his engineers to release their own advanced AI on a six month timetable. It’s not any more complicated than that.
We are willing to take that bet with him and even double the stake. Elon Musk’s motivations are not limited to a simple tournament between peers in which he lost a round. It is much more than that. Musk is viscerally committed to the transformation of humanity (i.e. of himself…). For this radical transhumanist, planet Earth and the “natural” human are obsolete. AI as a simple technology in the service of humanity, in the same way as the car, electricity or digital technology, is therefore a deception and above all a challenge to his “speciesist” project with regard to machines. To resist the machine that is coming to terrorize us, humans must in some way “ingest” it. This is the very motive behind his Neuralink project, which poses ethical problems far more formidable than those of generative AIs, at least as long as the latter are regarded as tools and not as social actors or vehicles of good and evil.
A guest on France Inter’s morning program, Clément Delangue, co-founder of the French start-up “Hugging Face”, took the position that seems to us the soundest, in line with the “disclosive ethics” that is dear to us[6]:
I did not sign the petition. What we see here is that this petition looks like a marketing operation somewhat driven by Elon Musk. He has been overtaken by artificial intelligence. One of the problems with this petition is that the solutions it proposes are hardly practical or applicable. What is important, and what this debate shows, is that today we need more transparency and education about these systems.
But this much less anxiety-provoking position, which calls first and foremost for explanations, is also much less effective…
Stephen Hawking
Finally, let us note that this is not Elon Musk’s first attempt. Using his Future of Life Institute as a propaganda vehicle, he was already involved in 2015 in an open and concerned letter[7] signed, among others, by the late physicist Stephen Hawking, who had already declared to the BBC in 2014[8]:
Humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.
This somewhat “mathematical” prediction by a famous physicist may have contributed to the general concern (if Hawking says so…) but, like any prediction inspired by mathematics, it merely projects a present dynamic linked to the facts of the moment and their interpretation. No one can predict the future, but we can at least claim that, while AI may indeed become a destructive technology through sheer power of action (like the atomic bomb or the internal combustion engine, each powerful technology having its own mode of destruction), it will never “supersede” humans in the biological sense of a Darwinian competition between species, because AI is not a species. This anthropomorphization of technology ignores, whether sincerely or by calculation, the essence of technology.
While Hawking may be sincere, Musk’s concern, like that of many other signatories, is not related to the hypothetical civilizational dangers of AI but to the risk of missing out on (or coming second in) a considerable business, or, even worse, of allowing public authorities and civil society, both hated by libertarians, to understand the dangers of AI themselves and to hinder the freedom of business: the former by legislating, the latter by vociferating.
So, let’s see how we can pre-emptively take hold of good and evil to avoid this nightmare.
Good and Evil
Among the notable actors of the Future of Life Institute and signatories of these open letters, we recognize three characters already encountered here: Jaan Tallinn, Estonian billionaire and “effective altruist”[9] (mentioned here as a prototype for a later review of the moral current of “Effective Altruism” – see also the mention of Jaan Tallinn in A Future Without Us), Tristan Harris (Tristan Harris and the Swamp of Digital Ethics, where disclosive ethics is discussed among other things) and finally Stuart Russell (Being Stuart Russell – The comeback of Moral Philosophy). All endorse various forms of the consequentialism that inspires the sincerest signatories.
Traces of the comeback of this moral philosophy, that is to say of an axiology of “good” and “evil” (categories which can now be digitized), are perceptible in the petition. Thus we read:
Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
Of course not! All this is evil, obviously.
But all this already existed, sometimes for a long time, without AI, and there have always been particular humans at the helm, mysteriously merged into this vast collective “we”. So, who is flooding “our information channels with propaganda and untruth” (that is evil), if not someone who certainly benefits today from digital and smart media? Who is relentlessly seeking to “automate away all the jobs, including the fulfilling ones” (that is evil), if not, since the 19th century, industrial capitalism, which today certainly benefits from computing megamachines? Who is relentlessly seeking to “develop nonhuman minds” (that is evil), if not, at the very least, the military complex, the crucible of modern AI, let us remember, which today certainly benefits from intelligent weapons?… So, who can seriously think that we risk “loss of control of our civilization” (that is evil) because of AI, when the responsibility lies with particular humans sheltered behind this vast collective “we”, still benefiting, of course, from the technology of the moment?
AI as such should not be blamed or feared, but it must be recognized that it has a truly troublesome specificity, inherited from its computing matrix, on which we have often insisted: its opacity. We do not really know what it is about, or even what we are talking about. Anyone can therefore claim anything, and in particular can move the cursor of good and evil at will. “Effective” altruism has no precise meaning, but, speaking of good and evil, it speaks to everyone, presenting itself as the Trojan horse of unbridled techno-solutionism (“doing things better”). In this respect, the call for a moratorium reaches a rare level of hypocrisy, because the signatories, Elon Musk first and foremost, know perfectly well that such a moratorium is impossible for strategic reasons. In the opaque fog of AI, without any explanation, they themselves raise the specter of evil before “we” do.
Regulate “us”!
The open letter logically ends by calling for greater control of these systems:
AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
Oddly enough, contrary to the libertarian doxa, it is even a form of social control that is suggested, with more regulation and public funds for research into “technical AI safety”. Everyone can only subscribe, obviously, since it is a question of fighting evil and in particular of preserving our democratic model (good):
… and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Incidentally, what further appalling democratic evils is AI capable of when, no longer able to bear any risk, no longer able to bear the uncertainties of the future, we have already laid our freedom at the feet of powers equipped with instruments of control and preventive regulations, when, demanding to be rid of worry, we have already entrusted the digitization of our existences to the very people who, like Elon Musk, are calling for a moratorium? We, that is, civil society structured by public policy, are therefore already disqualified from exercising the social control suggested in bad faith.
Everyone knows that AI research (like research in medicine, biology, robotics, etc.) largely escapes the financial capacities of civil public authorities, and therefore escapes a certain “common good by design”, so to speak. Raja Chatila himself readily acknowledges this[10]:
Academic research does not have the means to compete with companies of this type [OpenAI, Google, etc.]. They have everything, and in universities we are “small players”.
Perhaps for the first time, most (democratic) states no longer have the means to carry out such research, or to allow citizens to participate in it a priori. All that is asked of this good-natured public power is a posteriori regulation (of which Europe is the world champion), and it is given a few crumbs, such as “robust public funding for technical AI safety research”.
This may sustain the illusion and reassure the most worried. But the required regulations can obviously only be drawn up by expert firms, consultancies and specialists capable of understanding the technologies at stake, so that regulation remains a business moving forward hand in hand with tech and sharing the same networks of influence. This is how we effectively move from “stage 3” to “stage 4”, pretending to join the fight against evil while the private digital powers hold the upper hand.
Conclusion
The real “danger” is that we become truly frightened and end up believing that AI and its avatars are much more than tools: “things” capable of provoking a dangerous civilizational break. This alarmist discourse pre-emptively makes any critical discussion of the necessity or inevitability of these technologies impossible, positing that an ethical or regulatory polish, on which the signatories invite us to focus, will take care of the remaining problems, even when those problems are “lethal”. Generative AI must find its place in society, and this open letter urges us to do our part by preparing its regulatory foundation.
Rather, following Clément Delangue, we invite readers to educate their gaze, both on AI techniques and on the powers that control them. Over the millennia of technical progress, humans have always been able to overcome the fear of their own replacement, that is, of their lack of singularity, by familiarizing themselves with the tools forged by craftsmen and by mastering their representations through language (which is what we are striving to do here), then through ironic and artistic forms. These are all representations that we must now hasten to bring about.
1. ↑ Victor Ordonez, Taylor Dunn, Eric Noll / ABC News – March 16, 2023 – OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: ‘A little bit scared of this’
2. ↑ Future of Life Institute – March 22, 2023 – Pause Giant AI Experiments: An Open Letter
3. ↑ (in French) Radio France / Matinale de France Inter – March 31, 2023 – Le patron de Twitter Elon Musk et des centaines d’experts réclament une pause dans l’intelligence artificielle. Ils réclament un moratoire jusqu’à la mise en place de systèmes de sécurité. – “Là, il fallait lever un carton rouge. Quelque chose est en train de se passer, et il fallait en prendre conscience collectivement. Il ne s’agit pas de faire peur, il s’agit de dire une certaine réalité. Un certain nombre d’institutions sont en train de déployer des systèmes qui sont basés sur l’apprentissage automatique de très grandes quantités de données, et qui sont arrivés à un point de développement de reproduire des textes qui semblent rédigés par des humains. Mais ces textes ne sont pas porteurs de vérité, il est difficile de distinguer le vrai du faux. Le deuxième problème, c’est l’absence totale de sens. Ces systèmes ne comprennent pas ce qu’ils écrivent ou ce qu’ils disent.”
4. ↑ Wired / Cade Metz – April 27, 2016 – Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free
5. ↑ Matt Novak / Forbes – March 29, 2023 – Elon Musk’s AI History May Be Behind His Call To Pause Development
6. ↑ Ibid. 3 – “Je n’ai pas signé la pétition. Ce que l’on voit ici, c’est que cette pétition ressemble à une opération marketing un peu dirigée par Elon Musk. Il a été dépassé par l’intelligence artificielle. L’un des problèmes avec cette pétition, c’est que les solutions proposées sont très peu pratiques ou applicables. Ce qui est important, et ce que montre ce débat, c’est qu’aujourd’hui il faut plus de transparence et d’éducation au sujet de ces systèmes.”
7. ↑ Future of Life Institute – Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter
8. ↑ Michael Sainato / Observer – August 19, 2015 – Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence
9. ↑ Wikipedia – Jaan Tallinn
10. ↑ Ibid. 3 – “La recherche académique n’a pas les moyens d’être en compétition avec des entreprises de ce type-là [OpenAI, Google, etc.]. Ils ont tout, et dans les universités on est « petits joueurs ».”