Reading time: 11 minutes
Translation by AB – April 15, 2020
Foreword (only for the English version of this post)
The Frenchman Bruno Latour, a sociologist of science, introduced the concept of “non-modernity” in 1991 in his work “Nous n’avons jamais été modernes. Essai d’anthropologie symétrique”. We are obviously nodding to his work here by qualifying the “zombies” we are going to talk about as “non-modern”. It would take too long to explain exactly why, but before starting, I will give a small glimpse of Bruno Latour’s thought by quoting an extract from the work of an American student, an extract that has the advantage of being written directly in English and available on the net (Justin T. Simpson, University of North Florida – 2015 – Quasi-Subjectivity and Ethics in Non-Modernity):
Given the general dissatisfaction with the modern metaphysical picture of the world, which analyzes the world in terms of the mutually exclusive and completely separate categories of nature/objects and society/subjects, I proceed from an alternative conceptual perspective, that of non-modernity, offered by Bruno Latour. By focusing on the actual practice of the sciences Latour develops one of his central concepts: mediation. From this understanding of the practices of mediation the world is revealed as an ontological continuum of hybrids – mixtures of human and nonhuman elements – that ranges from quasi-object to quasi-subject. Rather than being separate, nature and society are intimately interwoven and co-constituted, forming a nature-culture collective that is connected and defined by the network of relations between existing hybrids. Given this philosophical landscape of mediation, hybrids and networks, the question that I seek to address is how does this effect what it means to be human? What does it mean to be human living in a hybrid world?
The “zombies” we are going to speak about are, so to speak, the kind of “nonhuman elements” that all of us are already dealing with.
Numbers, symbols and information flow without friction through our networks, like a superfluid, and transform us1:
We now spend an average of 6 hours a day using devices connected to the Internet, that is, a third of our waking time! In total, more than a billion years were spent online in 2017.
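As a quick sanity check, the collective figure has the right order of magnitude. The number of Internet users below is our own assumption (the text does not give it):

```python
# Back-of-the-envelope check of "a billion years spent online in 2017".
users = 4e9            # assumed number of Internet users in 2017 (not in the text)
hours_per_day = 6      # average daily connected time quoted above
total_hours = users * hours_per_day * 365          # person-hours over the year
years_online = total_hours / (24 * 365)            # convert to person-years
print(f"{years_online:.1e} collective years spent online")  # ~1e9
```

Four billion people at six hours a day is a quarter of their days, hence roughly one billion collective years per calendar year.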
The share taken from human time (The value of e-things) is phenomenal but, above all, it has accelerated: 7% more Internet users and 13% more individuals active on social networks in a single year! More generally, it has become difficult to consider existence, even merely material existence at the bottom of Maslow’s pyramid, without digital appendages. The new human is “hyper-hybridized” in the sense that these appendages augment him but also begin to ignore him2. This unrestrained hybridization obviously raises endless adaptation problems3:
A smartphone in one’s field of vision decreases concentration and the ability to communicate with another person, even if you neither touch it nor look at it.
More worryingly, recent studies are multiplying warnings about the impact of this hyper-hybrid mode on the mental health of young people, especially in the United States4:
Rates of teen depression and suicide have skyrocketed since 2011. It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones. […] Boys’ depressive symptoms increased by 21 percent from 2012 to 2015, while girls’ increased by 50 percent […] Although the rate increased for both sexes, three times as many 12-to-14-year-old girls killed themselves in 2015 as in 2007, compared with twice as many boys.
Mark Deuze, a Dutch media specialist, recalls this obvious fact:
When we live in the media, we somehow become less aware of our environment, less in tune with our senses, more like lifeless automata.
“Zombie” generally stands for a person whose will has been abolished. In the absence of willpower, he wanders – sometimes dangerously – in an environment of which he does not seem to be aware: the zombie should not be there, in the sense that his “spirit” is not fit to control this body (his own body) in this environment. Something more must then keep him in existence, for example a battery of connected technical appendages.
The image of the hyper-hybrid zombie therefore seems valid if we avoid any fantastic connotation. It designates this mixed being whose state of consciousness is no longer clearly definable and who himself no longer knows whether he feels “good” or “bad”.
The Chalmers zombie, or the “philosophical zombie”
Presented in this way, it is not surprising that the zombie is also a figure in cognitive science relating to the “problem of consciousness”: what is consciousness? Is a conscious machine conceivable? etc. (About artificial consciousness). The downside of this direct and logical approach to consciousness is that it quickly ends up in inextricable knots tied to its foundational aporia: “I am aware that I am aware…”
No matter! As usual, the cognitive sciences plough their furrow… The “philosophical zombie”, a concept dating from the 1970s that was taken up and developed by the Australian philosopher David Chalmers5, is a pure logical construct aimed at showing that (the phenomenon of) consciousness cannot be reduced to the physico-chemical properties of the brain.
There is, however, every reason to believe that neuroscience intertwined with the digital sciences, with electromagnetic sensors and algorithmic verification, will gradually deliver causal explanations of memory, of the role of emotions, of learning and, in general, of all the physico-cognitive mechanisms of humans and animals. This is already happening, so to speak.
But consciousness, decidedly, seems bound to resist. How is it that a physical theory (the reduction of cognitive phenomena to the electrical movements of swarms of neurons) fails to explain a phenomenon that nevertheless takes place right before the eyes of neuroscientists? Like a movement of the hand (we can see it!):
Chalmers reasons in the following way: nothing in our “functioning” seems to require consciousness. For example, to keep our hand away from a flame, it is not logically necessary to be aware of the heat. A direct cerebral (or spinal…) algorithm between the electrochemical sensors of the hand and the muscles of the arm suffices. Algorithms, innate or acquired, or anything else that causally links our perceptions to our actions, would be enough for us to act as we usually do, yet without our being reflexively aware of what this does to us.
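This causal shortcut is easy to sketch. The snippet below is our own toy illustration (none of these names come from Chalmers): a stimulus-response loop that withdraws the hand from heat, in which no “awareness” variable appears anywhere:

```python
# A zombie reflex arc: the sensor reading goes straight to a motor command.
# Nothing in this causal chain requires, or represents, consciousness.

PAIN_THRESHOLD_C = 45.0  # skin temperature at which withdrawal is triggered

def thermal_sensor(skin_temperature_c: float) -> float:
    """Electrochemical sensor of the hand: just passes a measurement along."""
    return skin_temperature_c

def motor_command(temperature_c: float) -> str:
    """Muscles of the arm: withdraw if the input exceeds the threshold."""
    return "withdraw hand" if temperature_c > PAIN_THRESHOLD_C else "stay"

def reflex(skin_temperature_c: float) -> str:
    """Direct perception-to-action link, with no inner 'feeling' in between."""
    return motor_command(thermal_sensor(skin_temperature_c))

print(reflex(80.0))  # near a flame: withdraw hand
print(reflex(20.0))  # ambient air: stay
```

The program behaves exactly like us near a flame, which is the whole point: the behavior is fully explained without the phenomenal “what it is like” ever being needed.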
Moreover, though Chalmers did not know it at the time, brain imaging teaches us that (phenomenal) consciousness appears slightly after the action: I am aware of burning myself after having withdrawn my hand, not: I withdraw my hand because I am aware of being burned. Consciousness is not causally related to action; it seems to manifest “in addition”. It is thus possible to disregard consciousness logically and to conceive a zombie functional mode. The zombie is our complete twin, indistinguishable from us, who acts like us but without any consciousness. A zombie world is therefore conceivable: a perfect duplication, down to the atom, of our reality, but whose occupants are deprived of any phenomenal consciousness.
If consciousness thus appears as a contingent “surplus”, then it can be made the subject of a specific scientific approach: we can go in search of “psychophysical laws” that would bear on the phenomena of consciousness as such, irreducible to physical reality (and which could therefore, in passing, be articulated with other laws of nature than those we know, with other material or virtual structures). A new science to build!
But this beautiful elaboration relies entirely on the conceivability of the zombie. In other words, Chalmers draws his conclusions from premises which are not very stable and which invite objections that are themselves rather gaseous: without consciousness, how could one conceive of a being without consciousness? In the zombie world, can we imagine a (non-)zombie world? What is causation? etc.
This reasoning, which concludes that consciousness is something in itself and deserves a science of its own, is at the very least fragile, and so the principle of an artificial consciousness based on a logico-neuronal structure remains intact. This is the position of Chalmers’ detractors (and the interest of the industry).
Selmer Bringsjord, an American cognitive scientist, conducted in 2015 an experiment described in an article with an evocative title: “Real Robots that Pass Human Tests of Self-Consciousness”6 (after the aporias around Chalmers’ zombies, a doubt seizes us7…). Here is the experiment.
Three programmable Nao robots are each tapped on the head, after “they” have been told that this tap may “mute” them (the robot can then no longer emit any sound) or not. Each robot is then asked whether it has been muted or not (for those who know a little about this experiment, it is equivalent to the question “which pill did you receive?”) and to explain why:
The robot on the right, the only one that has not been muted, begins by saying “I don’t know”, since a) it is programmed to always respond, like a connected speaker (Google Home, Amazon Echo…), and b) it cannot logically make the deduction until it has spoken. But having “heard” its own answer, it can deduce that it has not been muted, and it corrects its answer, starting with “Sorry, I know now…” (an expression that is completely zombie). Magical, isn’t it?
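The deductive trick can be sketched in a few lines. The following toy simulation is entirely our own (the class, rule names and phrasing are assumptions, not Bringsjord’s actual code, which runs on a formal theorem prover):

```python
# A toy re-enactment of the "dumbing pill" test: two robots are muted by
# the tap on the head, one is not. All three follow the same two rules;
# only the unmuted one ends up "knowing" its own state.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted     # ground truth, unknown to the robot itself
        self.knows = None      # what the robot has managed to prove

    def try_to_say(self, sentence):
        """Attempt to speak; a muted robot emits nothing at all."""
        return None if self.muted else sentence

    def answer(self):
        # Rule 1: always try to respond, like a connected speaker.
        heard = self.try_to_say("I don't know")
        # Rule 2: hearing one's own voice proves one was not muted.
        if heard is not None:
            self.knows = "I was not muted"
            return "Sorry, I know now! " + self.knows
        return None  # the muted robots stay silent

robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]
for r in robots:
    print(r.name, "->", r.answer())
```

The “self-consciousness” here is nothing more than a feedback loop: the robot’s own output is fed back in as a new premise from which a deduction becomes possible.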
This experiment was suggested by a rather strange test of self-consciousness proposed by Luciano Floridi (From the infosphere to a gaseous ethics). This test is supposed to be able to distinguish zombie from non-zombie. Put very simply, it rests on the fact that a non-zombie (a self-conscious being) can draw information about the world not only from its environment but also from its own answers8.
Bringsjord’s article cautiously begins with a denial and a clarification:
He [Bringsjord] has explained repeatedly that genuine phenomenal consciousness is impossible for a mere machine to have, and true self-consciousness would require phenomenal consciousness. Nonetheless, the logico-mathematical structure and form of self-consciousness can be ascertained and specified, and these specifications can then be processed computationally in such a way as to meet clear tests of mental ability and skill.
And indeed, the Nao robot has a system of logical rules that allows it to develop the behavior one seeks to obtain. We show below the logico-mathematical structure (the “program”) on which the behavior of the robot “conscious of itself” is based:
Look at our red boxes. “Self-consciousness” is the manifestation of logical propositions manipulating a predefined symbol: “self”. Bringsjord and his colleagues implanted the symbol that allows the robot to reason about a “self” and to utter sentences beginning with “I”. Self-consciousness obviously does not emerge; the robot (logically) acts as if. This instance of Nao is therefore a pure philosophical zombie, which begins by answering in its mathematical language:
K(I, t4, happens(action(I∗, S(I∗, t4, “I don’t know”)), t4))
[ “I” don’t know ]
… then it corrects “itself” (“Sorry…”):
K(R3, t4, not(happens(action(R3, ingestDumbPill), t2)))
[ I did not take the pill that makes you dumb ]
It is very likely that we will live increasingly surrounded by such digital zombies, namely artifacts/artifices made with the objective that we can call them conscious thanks to a good imitation based on a “self”. Why? Because we must be able to trust technical objects whose behavior has become intrinsically unexplainable (AI), and even interact with them at the level at which we can understand them, that is to say the phenomenal level. A coherent and credible technical work of imitation is therefore necessary so that we, the “users”, can attribute a consciousness to these authentic zombies and thus grant them the possibility of communicating at the level of intentions, choices, responsibility… and, if necessary, bring these artifacts within the framework of ethical and legal standards.
In this sense, Bringsjord’s work is good engineering work. But it claims nothing more.
These zombies, which we will provisionally call “non-modern”, are therefore a proliferating species. They include our machines but also ourselves and our organizations. For it is not just about artifacts.
The children of the iGeneration mentioned above probably undergo a hyper-hybridization far too fast for a self-consciousness that is still all too human. Our engineers, like Bringsjord, seem here to lack “care” (or conscience?), since innovation requires quickly saturating the economic and social space. But there are other quasi-victims of the Number, such as “zombie” companies, which earn just enough to repay the interest on their loans and are kept artificially alive by low rates, or even zombie States, for the same reasons. Or the ecological zombies that we in France became on May 5.
Technics allows us to keep ourselves temporarily in existence outside any logic, any minimal accounting of our “bodies” within this world. Ethical thinking should question this massive production of zombies, these “a-conscious” creations claiming to be conscious and therefore to have the “right” to exist.
1. ↑ Flavien Chantrel on Blog du Modérateur – January 30, 2018 – État des lieux 2018 : l’usage d’Internet, des réseaux sociaux et du mobile en France
2. ↑ Les Numériques – January 20, 2014 – Être livré avant même d’avoir commandé ? Amazon a un brevet pour ça…
3. ↑ Radio Canada – March 10, 2018 – Le malaise de la solitude à l’ère des réseaux sociaux
4. ↑ Jean M. Twenge for The Atlantic – September 2017 – Have Smartphones Destroyed a Generation?
5. ↑ Wikipedia – Philosophical zombie
6. ↑ Selmer Bringsjord, John Licato, Naveen Sundar Govindarajulu, Rikhiya Ghosh, Atriya Sen – Rensselaer AI & Reasoning (RAIR) Lab – Real Robots that Pass Human Tests of Self-Consciousness
7. ↑ A doubt just barely caught up by the conditional… Libération, quoting the magazine Quartz – July 17, 2015 – Un robot pourrait avoir conscience de lui-même
8. ↑ L. Floridi, “Consciousness, Agents and the Knowledge Game” Minds and Machines, vol. 15, no. 3-4, pp. 415–444, 2005.