Translation by AB – January 15, 2022
AI on the attack
Artificial Intelligence (AI) is undoubtedly the technique of the century, and its ambition is limitless: to assist or even replace industrious humans, to discipline and to wage war, to seize cognitive power… The century thus opens onto the possibility of a real civilizational break. But few see it, because AI advances behind a triple mask: relegated behind our twin concerns for ecology and security, draped in its technical and philosophical opacity, and shielded from our scrutiny by digital, political and military powers. We are thus exposed to a radical technique that escapes any serious criticism, not so much of its technical applications, vaguely tempered by ethical and legal promises, as of the very conditions of possibility of a claim that remains intact: to reach, or even exceed, human intelligence.
However, AI has stumbled several times since its baptism by John McCarthy and his colleagues in 1956. It has gone through two “winters” (1974–1980 and 1987–1993) but has moved forward tirelessly, mostly by riding the growing power of computers. The algorithms of 2020 are not much smarter than those of 1960, but the wealth of data and the computing power are incomparable, and they now drape these algorithms in a semblance of intelligence (the movement of hair in computer-animated films has made a lot of progress, but it is still not real hair).
Those who, like me, started their career in AI between its two winters, in the great era of “expert systems” and the first operational artificial neural networks, had ambition and sincerely believed in the possibility of a genuine artificial intelligence. A touch of phenomenology (or even of common sense) could have tempered our enthusiasm for what was, in the end, only an algorithmic toolbox. “Intelligent” systems spread like wildfire, but at the cost of ever more bizarre conceptual and technical tinkering. This was not so much the fault of the engineers who built these systems as of the researchers themselves, who piled up theories without ever having to, or even being able to, account for the validity of their ideas (to give just one example that will undoubtedly speak to some, a pinnacle of conceptual strong-arming is Noam Chomsky’s famous “transformational-generative grammar”). America was giving lessons in cognition with the finesse of a privateer’s boarding party. In short, “cognitivism” held sway, and it owed its success to Moore’s law and, even today, to the camouflage of its tricks: algorithms mimic intelligence at the speed of millions of billions of operations per second.
This is more or less the reason why the most “sensitive” among us let go of AI and its great promises, at the end of its second winter, paradoxically just before the first “feat” of the discipline: the famous victory in 1997 of IBM’s Deep Blue supercomputer over the chess champion Garry Kasparov. But the staging of this event had somehow reinforced our decision (my emphasis)1 :
Deep Blue’s win was seen as symbolically significant, a sign that artificial intelligence was catching up to human intelligence, and could defeat one of humanity’s great intellectual champions.
It was enough to know the “springs” at work inside Deep Blue to understand that nothing resembling intelligence was stirring there, and that once again the buccaneering semantics that had armed the rise of AI were spreading. I was recently asked how we practitioners saw the future of AI in the 1990s. AI had already survived all its excesses while continuing to call itself, as John McCarthy intended, “artificial intelligence”. It was therefore bound to make its way into the third millennium under this banner. I thus perceived AI as the technique of a coming century dominated by power, but also as a technique of our “twilight”. My opinion has not changed since then, and I return to it in conclusion.
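Those “springs” were, at bottom, brute-force game-tree search. As a rough illustration only (a minimal sketch of minimax with alpha-beta pruning, not IBM’s actual code, which also relied on specialized hardware and handcrafted evaluation functions), the core mechanism fits in a few lines:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning over a toy game tree.

    A node is either a number (a leaf's static evaluation score)
    or a list of child nodes. Players alternate max/min levels.
    """
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: the opponent will never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A small three-ply tree; its minimax value is 6.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree))  # → 6
```

No understanding of chess as a human plays it is involved: the machine simply evaluates and discards millions of positions per second, which is exactly the point being made above.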
The intrusion of Hubert Lederer Dreyfus
One philosopher never ceased to criticize the discipline’s claim to encode human and animal intelligence, and especially its tendency to confuse, without much epistemological precaution, encoding and “decoding” (violating what the mathematician Reuben Hersh called the zeroth law of modeling: never confuse the phenomenon with its model). This is Hubert Lederer Dreyfus, an American philosopher who died in 2017. In 1965, while a professor at MIT, he published the first serious attack against two of the main proponents of AI, Allen Newell and Herbert Simon. In a document with the explicit title “Alchemy and Artificial Intelligence”2, Dreyfus vigorously disputed the main assumption behind which the entire discipline stood and still stands: intelligence must progress along a “continuum” whose starting point is already set to be the computer (I examine this continuist doxa anthropologically in A reading of Philippe Descola); reaching human intelligence is only a matter of time and means. But there is no serious argument to support this claim. One might as well claim to be able to reach the moon by climbing a tree, on the pretext that there is a spatial continuum to be crossed between the two. Thus, Dreyfus observes (my emphasis, additions in brackets)3:
Instead of trying to make use of the special capacities of computers, workers in artificial intelligence – blinded by their early success and hypnotized by the assumption that thinking is a continuum – will settle for nothing short of the moon [artificial human intelligence].
In support of this observation, Hubert Dreyfus quotes the introduction to a 1963 book, “Computers and Thought”, by two researchers, Edward Feigenbaum and Julian Feldman. Here is the excerpt, so representative of the atmosphere of the time and, more broadly, of the Western belief system:
In terms of the continuum of intelligence suggested by Armer, the computer programs we have been able to construct are still at the low end. What is important is that we continue to strike out in the direction of the milestone that represents the capabilities of human intelligence. Is there any reason to suppose that we shall never get there? None whatever. Not a single piece of evidence, no logical argument, no proof or theorem has ever been advanced which demonstrates an insurmountable hurdle along the continuum.
The researchers’ epistemological stance obviously cannot be faulted, since anything not demonstrated to be impossible can be considered true4. The “computationalist” credo of AI (the isomorphism of intelligence to a computer program) could bear no contradiction. The fierceness of the young Dreyfus’ onslaught matched that of the prevailing catechism. He drew a parallel between AI and alchemy, which was likewise based on a continuum of progress leading to the Great Work, and provoked the vindictiveness of researchers. Herbert Simon accused him of being “political” and dismissed his work outright as “garbage”. Dreyfus recalls that his MIT colleagues working in AI tried not to be seen in his presence at lunch. Seymour Papert organized a chess match between Dreyfus and a program developed by Richard Greenblatt; Dreyfus lost, and Papert gloated: “A Ten-Year-Old Can Beat the Machine—Dreyfus: But the Machine Can Beat Dreyfus”. Herbert Simon advised him to cool down and recover his sense of humor, and so on. All these nice people were our “heroes” of the time. A transatlantic, almost divine intellectual ray came to dazzle us. Reading Dreyfus and expressing doubts was not a matter of course, even here in France in the 1980s. After all, it was “only” philosophy, and philosophy had to cower before the probative power of the computer (nothing has really changed, but today the responsibility clearly lies with the philosophers). Dreyfus tells us5:
When I was teaching at MIT in the early sixties, students from the Artificial Intelligence Laboratory would come to my Heidegger course and say in effect: “You philosophers have been reflecting in your armchairs for over 2000 years and you still don’t understand how the mind works. We in the AI Lab have taken over and are succeeding where you philosophers have failed. We are now programming computers to exhibit human intelligence: to solve problems, to understand natural language, to perceive, and to learn”.
These outraged and patronizing reactions, sometimes virulent, can be explained in part, beyond the damage to the image of lords challenged in their own kingdoms, by the threat that any unbeliever poses to the funding of AI research, in particular, in the 1960s, funding from DARPA, the American agency responsible for research into new technologies for military use. Hubert Dreyfus could have been content to wait quietly for the inevitable failure of computationalist ambitions (that failure came quickly: the American and British governments cut research funding in 1974, triggering the first AI winter). But Dreyfus could not remain silent because, not content with criticizing, he was already proposing a path to be explored, a path he would develop throughout his life and had already outlined in his 1965 pamphlet.
Why I went out of the frame
In 1982, Japan and its FGCS (Fifth Generation Computer Systems) initiative largely contributed to the revival of interest in logic programming, so decried by Dreyfus. But this revival was more soberly about “knowledge” and “knowledge processing” by means of logic programming, rather than about intelligence in general. It was no longer a question of continuing the progression along the continuum leading to human intelligence, but more modestly of endowing the computer with logical reasoning capabilities in very specific areas: medical diagnosis, blast-furnace control, pension settlement, monitoring of secondary circuits in nuclear power plants, and so on. These “expert systems” were quite successful and gave AI its first industrial impetus, gradually establishing it in the economic landscape.
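The “logical reasoning” of these expert systems can be sketched very simply. The following is an illustrative toy (the rule names and the medical-triage facts are invented for the example, and this is forward chaining in the spirit of 1980s rule-based shells, not the code of any real product): if all the premises of a rule are known facts, its conclusion is asserted, and the process repeats until nothing new can be derived.

```python
# Each rule: (set of premises, conclusion). Hypothetical medical-triage rules.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new fact can be derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule if all premises are established and the
            # conclusion is not yet known.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "high_risk_patient"}, rules)
print(derived)  # includes "flu_suspected" and "refer_to_doctor"
```

The narrowness is the point: within its few dozen (or thousand) rules, such a system reasons impeccably, and outside them it knows nothing at all.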
But at the same time, research in pure or “general” artificial intelligence did not give up; it continued its progression like a discreet peat fire. At the time, any AI engineer was necessarily contaminated by its emanations, if only because most private companies were co-managed by researchers linked to the labs. Though the first winter had calmed the most fanatical, the field carried on its divine mission: to manufacture, eventually, the equivalent of an artificial human. Before being a science, AI is an ideology6.
At the time, I had read Dreyfus’ works, but a simple reading, however repeated, however convincing, is not enough to “get out of the frame”. It was the daily practice, over several years, of these logical reasoning systems that gradually undermined my faith in the continuum denounced by Dreyfus. I was becoming more aware of the conditions of possibility of a genuine (not necessarily human) intelligence, but they were so far off that the time of the engineers – mine, anyway – had obviously not come: going from alchemy to chemistry would not be the work of a single generation. In the end, I did not miss anything: the progress of algorithmic AI is only vaguely inspired by human intelligence, the intelligence that common sense understands and that philosophy thinks about.
By dint of nesting smoky theories to tinker with systems whose intelligence kept slipping away, it became apparent that what I was naively waiting for was a manifestation of genuine autonomy in these systems, a “birth” of sorts. But what reason would an artificial system have to escape me? Can such a motive be encoded? Can it emerge from an algorithmic device? And so on.
The Dreyfusian intuition
Today, Dreyfus’ work is widely read and considered an important philosophical contribution to AI. If one had to summarize the philosopher’s intuition, which I have come to share, in a single formula, it is that an organism is intelligent only if it has something to “worry” about (to my knowledge, Dreyfus does not use exactly these terms). Yet no computer program, however complex, obviously has the slightest worry. This somewhat summary statement constitutes the first thread of the Dreyfusian ball of yarn.
Pursuing this path of a concerned intelligence, thus embodied and in the world, Dreyfus read Maurice Merleau-Ponty but above all Martin Heidegger, of whom he was a renowned exegete in the United States. In 1990, he published a commentary on “Being and Time” entitled, precisely, “Being-in-the-World”, an English translation of the Heideggerian term “In-der-Welt-sein”7. Without confronting the harsh philosophical depths of this term, the title already announces Dreyfus’ critical stance: intelligence comes only to a being-in-the-world that knows the hassles of existence. This legitimate philosophical position on AI has its own banner, entitled – it is almost unbelievable – “Heideggerian AI” (HAI)! Let us specify, however, that if Dreyfus was the first conveyor of Heideggerian philosophy into American computationalist circles, we owe the more concrete inspiration of this current, at the end of the 1980s, to the American philosopher Beth Preston8.
Everyone can intuitively understand this Dreyfusian path. Man is not an animal endowed with a reason that drives his every step. The brain does not compute over a representation of the world like a navigator over a GPS map – a computation and a representation that we could continuously improve until computers obeying Moore’s law reach or even exceed human intelligence. Rather, there is a form of reciprocal and “self-interested” coupling between the world and the organism that causes the emergence of what we call “intelligence” (see also Francisco Varela the heterodox). But whatever thought, intelligence, consciousness and all these phenomena of the mind may be, the condition for representing their “essence”, even in the guise of this convincing coupling, is the existence of an ideological base. Indeed, intelligence is not a property of a planet or a virus but of ourselves. The only way to escape this reflexivity is a reification of thought that only language, and thus ideology, allows.
Would AI therefore fall on the side of the human sciences? It still escapes them, because today, far from the somewhat aggressive postures of its founders, and while persevering in its informational and computational doxa, AI systematically seeks its criteria of truth in the apparently less ideologized field of neuroscience. Hubert Dreyfus himself, like all the researchers and thinkers of the embodied intelligence movement (Michael Wheeler, Erik Rietveld and many others), followed this principle, seeking to validate their models of “being-in-the-world” in the behavior of assemblies of neurons. AI has thus passed on to neuroscience the torch of the continuity semantics of the 1960s, finally silencing all the Dreyfuses, that is, all the philosophers. Thus Stanislas Dehaene, a famous French neuropsychologist who now heads the Scientific Council of the French Ministry of Education, can quietly declare that our children are “supercomputers” to whom we must provide the “data” they need9, or that (I emphasize the continuist vocabulary)10:
We are all endowed with an extraordinary brain machine that surpasses computers, artificial intelligence… Because the human brain deploys algorithms, scientific theories, that have not yet been imitated by machines.
The Western obsession with the calculating mind always finds a way…
Heidegger, no but yes!
If Hubert Dreyfus fought all his life against computationalist ideology and its conception of intelligence as a combination of symbols, is his “passion” for Heideggerian interpretation ideologically any better? One cannot ignore, at least in France, the lively controversies over Heidegger and his historical and, above all, intellectual links with Nazism (incidentally, to my knowledge, the expression “Heideggerian artificial intelligence” does not exist on the European side of the Atlantic). Stormy debates were fueled in particular by the highly documented interpretations of the French philosopher Emmanuel Faye and were revived by the publication, from 2014 onward, of the famous “Black Notebooks”. I remain convinced, however, that to confront the hard question of the nature of intelligence and its possible mechanisms, one must sooner or later face the philosophical heritage of Martin Heidegger. Those in France who hold their noses miss another opportunity to engage us with the great questions of the century, such as AI and, on the horizon, transhumanism. At the same time, prudence requires facing the darkest interpretations of the philosophical concepts disseminated by Heidegger, which Emmanuel Faye holds to be at the very least equivocal, even a Trojan horse for Nazi ideology. In any case, at a time when we are concerned with ethics, AI can hardly ignore what it really brings into play, nor can it bypass Heidegger and his conceptual “syntax”, possibly decontaminated from its historical “semantics”.
The researchers themselves hardly explain, nor perhaps even perceive, the ideological basis of their work. At least, at the time of the first attacks led by and against Dreyfus, the battle was loud, flaming and waged in the open. Nowadays, “beliefs” are implemented directly, by sleepwalkers, in opaque algorithms. AI is thus effortlessly penetrating the blind spots of a twilight civilization, the technocratic and materialistic “Zivilisation” described by Oswald Spengler at the beginning of the last century. A contemporary sign of this twilight? Just one: the computer has already won the game. Hubert Lederer Dreyfus and all of us are already checkmated in a few moves.
1. ↑ Wikipedia – Deep Blue versus Garry Kasparov
2. ↑ This fascinating document is still available online, an important piece in the history of AI and more broadly of ideas – Hubert L. Dreyfus – December 1965 – Alchemy and Artificial Intelligence
3. ↑ Ibid.2 p.83
4. ↑ This is reminiscent of the “argumentative” modes used today on social networks, which are not so much aimed at the subject under discussion – which is of no real interest – as at the speaker’s own position in the space for debate: notoriety, defense of a “territory”, exclusion of heretics…
5. ↑ Hubert L. Dreyfus / Elsevier, Artificial Intelligence Volume 171, Issue 18 – December 2007 – Why Heideggerian AI Failed and how Fixing it would Require making it more Heideggerian
6. ↑ Note that an article published in the excellent Wired is entitled Alchemy and Artificial Intelligence. But the term is used in a different meaning. This article is about the ideological use of AI and not, as the title and ourselves suggest, about AI itself as an ideology.
7. ↑ Hubert L. Dreyfus / Bradford – 1990 – Being-in-the-World: A Commentary on Heidegger’s Being and Time, Division I
8. ↑ See on this subject the text of Terry Winograd (in French, translated by Jean Lassègue), famous professor of computer science at Stanford University: Terry Winograd / Intellectica, 1993/2, 17, pp. 51-78 – September 1989 – Heidegger et la conception des systèmes informatiques
9. ↑ (in French) Isabelle Boucq / La Tribune – October 22, 2018 – “Nos enfants sont des super-ordinateurs” Stanislas Dehaene, Collège de France
10. ↑ (in French) Delphine Bancaud / 20 minutes – September 5, 2018 – Stanislas Dehaene: «Nous sommes dotés d’une machine cérébrale qui dépasse les ordinateurs»