About Artificial Consciousness

Reading time: 8 minutes

Translation by AB – April 15, 2020


François N.

Nine years ago, on an old Yahoo forum, François N. answered the question “can a machine be conscious of itself?” as follows:

Whatever! Stop asking yourself such questions. […] A machine cannot be conscious of itself because, first of all, it has no consciousness, and then it has no life. It is a robot, like a computer or an automobile (an assembly of various materials which, combined together, constitute a system). […] So a machine is simply an inert and unconscious system, and it is impossible for this consciousness to arise one day!

It seems to us that François N.'s opinion remains widely shared: self-consciousness requires consciousness, which is a vital phenomenon that cannot arise from inert matter. But that was nine years ago. It is not certain that he has not changed his mind a little since then, especially if he reads the newspapers and has begun co-evolving with Google Home, Amazon Echo, or Buddy the little robot… As for neuroscientists, they long ago stopped sharing François N.'s frank conviction.

Giulio T.


In 2004, Giulio Tononi, a psychiatrist and neuroscientist known in particular for his research on sleep, proposed a scientific theory of consciousness, IIT (“Integrated Information Theory”), a theory still very much alive today. In 2016, the magazine Aeon interviewed Giulio Tononi and published an article entitled “Consciousness creep”, whose closing lines read¹:

We are already encountering systems that act as if they were conscious. Our reaction to them depends on whether we think they really are, so tools such as Integrated Information Theory will be our ethical lamplights. Tononi says: ‘The majority of people these days would still say, “Oh, no, no, it’s just a machine”, but they have just the wrong notion of a machine. They are still stuck with cold things sitting on the table or doing clunky things. They are not yet prepared for a machine that can really fool you. When that happens – and it shows emotion in a way that makes you cry and quotes poetry and this and that – I think there will be a gigantic switch. Everybody is going to say, “For God’s sake, how can we turn that thing off?”’

Turing might have dreamed of it (Turing’s Body). But where do Giulio Tononi and his colleagues get such a conviction?

Christof K., René D.

IIT is an axiomatic, mathematical theory of consciousness, which should already put on guard anyone who knows what an “axiomatic theory” is and where this kind of theory can come undone.

It starts like this: it seems impossible to deny that we are conscious, and in particular conscious of ourselves. As Christof Koch, an important proponent of Tononi’s work, put it bluntly²:

I find this view of some people that consciousness is an illusion to be ridiculous.

And he summons the famous Cartesian cogito ergo sum. With the existence of consciousness thus “demonstrated”, it is instituted as a legitimate scientific object and can be offered to the scalpel. The only problem: this object remains sealed inside the skull and has never appeared as such in any electroencephalography experiment. Tononi’s initial move is therefore the following: rather than trying to locate an object that seems unobservable, let us start from the supposed phenomenological characteristics of consciousness (an axiomatics) and deduce from them the structural and material principles of systems capable of producing them. Any system obeying these principles will then be considered conscious.

What then characterizes consciousness according to Tononi?

First, consciousness has something to do with the ability to integrate information. More precisely, consciousness results from the construction “on the fly” of a unique and indivisible mental state. This is how the mental state (we would add “preverbal”) “slug”, the awareness of a slug, emerges from the integration of the millions of pixels of the retinal image of its body and the sheen of its mucus. Yet today we have neural networks perfectly capable of achieving this kind of integration, of “recognizing” a face, a fingerprint, etc. We all agree: these networks (described as “intelligent”) are just computer programs, admittedly sophisticated but devoid of any form of consciousness. So what are they missing?

Tononi suggests a thought experiment:

You are placed in front of a screen […], and you are asked to say “on” when it turns on and “off” when it turns off. A photodiode – a very simple, light-sensitive component – is also placed in front of the screen; it beeps when the screen turns on and remains silent when it turns off. Here is the first problem of consciousness: when you tell the difference between the screen being on or off, you consciously experience “seeing” the light or the dark. The photodiode […] is unlikely to “consciously” see light and dark. Where does this key difference between you and the photodiode […] lie?

The key difference, according to Tononi, is that the photodiode can only discriminate between two situations, whereas the experience you are asked to have is one among an unimaginable number of experiences you are able to discriminate (for example, recognizing a switched-off screen, a painting by Jackson Pollock or a right triangle). This explains why our “intelligent” neural network is not conscious. Although it achieves the feat of recognizing slugs in every possible visual configuration, even the blurriest, its world is reduced to just that: the state “slug” (a real slug) or the state “no slug” (a painting by Jackson Pollock). Consciousness would thus emerge from a system capable of integrating and immediately discriminating (in a fairly short time) an incredible number of possible experiences. But what does “integration” really mean?
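As a rough, back-of-the-envelope illustration (ours, not Tononi's), the size of such a "repertoire" can be expressed in bits, i.e. the base-2 logarithm of the number of states a detector can discriminate. The figures below are purely illustrative assumptions:

```python
import math

# Repertoire sizes in bits: log2 of the number of distinguishable states.
# Purely illustrative figures, not quantities defined by IIT.
photodiode_bits = math.log2(2)        # on / off: 1 bit
sensor_pixels = 1_000_000             # a hypothetical 1-megapixel binary sensor
sensor_bits = sensor_pixels           # log2(2**1_000_000) = 1,000,000 bits

print(f"photodiode: {photodiode_bits:.0f} bit")
print(f"binary megapixel sensor: {sensor_bits} bits")

# The sensor's repertoire is astronomically larger than the photodiode's,
# yet each pixel discriminates its patch of light independently of all the
# others: it is a heap of a million photodiodes, not an integrated system.
# For IIT, only the repertoire discriminated by the system as a whole
# counts, which is what the Phi-like sketch further below tries to capture.
```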

The integration of information into conscious experience is phenomenologically obvious: when you consciously “see” an image, that image is experienced as a whole and cannot be broken down into sub-images experienced separately. For example, no matter how hard you try, you cannot experience colors on one side and shapes on the other, or the left half of the visual field independently of the right half.

Even if we can, after the fact, think of the front of the slug and the rear of the slug separately, the initial movement of consciousness integrates them as if they were causally linked. The recognition of the “front of the slug” causes and reinforces the recognition of the “rear of the slug” and vice versa, all in the same impulse. That, then, is roughly what a “conscious” system is according to IIT: a system that integrates and recognizes, in a single impulse, one state among a gigantic number of possible states.

It turns out that these principles can be turned into a mathematical axiomatics of “conscious” systems³. From there, by classical mathematical work, one deduces that there is a graded “capacity for consciousness”, which can be measured using an indicator denoted Φ. If a system’s Φ is non-zero, then the system is “conscious”. So if we read in our newspaper tomorrow that some machine or some system is “conscious”, it may simply mean that this system obeys Tononi’s axioms and has a measure Φ > 0.
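To give a concrete feel for what such an indicator measures, here is a minimal sketch in Python. It does not compute the official Φ of IIT (whose definition involves cause-effect repertoires and a search over all partitions of the system); it only computes a crude proxy of our own devising: the information a tiny two-unit system carries about its own previous state, over and above what its parts carry in isolation. The toy system and the name phi_proxy are assumptions made for the sake of illustration.

```python
import numpy as np
from itertools import product

def mutual_information(joint):
    """Mutual information I(X;Y) in bits, from a joint distribution p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Toy "integrated" system: two binary units, each copying the other's
# previous state at every step.
def step(state):
    a, b = state
    return (b, a)

states = list(product([0, 1], repeat=2))

# Joint distribution p(past state, present state) for the whole system,
# assuming a uniform distribution over past states.
joint_whole = np.zeros((4, 4))
for i, s in enumerate(states):
    joint_whole[i, states.index(step(s))] += 1 / len(states)

# Same thing for each unit taken in isolation (the other unit marginalised out).
def joint_part(unit):
    j = np.zeros((2, 2))
    for s in states:
        j[s[unit], step(s)[unit]] += 1 / len(states)
    return j

i_whole = mutual_information(joint_whole)
i_parts = sum(mutual_information(joint_part(u)) for u in range(2))

# Crude stand-in for Phi: what the whole "knows" about its past beyond
# what its parts know separately.
phi_proxy = i_whole - i_parts
print(f"whole: {i_whole:.1f} bits, parts: {i_parts:.1f} bits, proxy: {phi_proxy:.1f} bits")
```

In this toy loop each unit, taken alone, predicts nothing about its own next state (0 bits), while the whole predicts its next state perfectly (2 bits): those 2 “extra” bits are the kind of thing the integration intuition is after. The real Φ is far more elaborate, and computing it exactly becomes intractable for systems of realistic size.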

We readily admit that IIT may reveal necessary conditions for a “proto-consciousness”, and that as such Φ measures something interesting. But the theory says nothing about the sufficient conditions of “consciousness”, let alone its function. A philosophical touch is therefore needed.

Friedrich N., Anne B.

Consciousness, and in particular self-awareness, seems so obvious to us, so much “already there”, that it is difficult to hold it at a distance and grasp it as an object. The cogito ergo sum itself may seem dubious because of its reflexivity, and some of us have never managed to have the Cartesian experience (I am aware that I am) without language. From this point of view, the cogito is far from being a manifestation of pure solipsism: it is what remains of the Other when we are alone, the Other with whom we have mutually worked out a social “me” (inscribed in a body).

In a way, consciousness is what language does to us: it is not natural but cultural. Anyone can attempt the experiment of a consciousness without language, without words rising to the surface: we approach the kind of proto-consciousness revealed by IIT, but little more. This postulate is of course not new, and many philosophers have advanced it. In a short essay entitled “Nietzsche and Linguistic Consciousness”, Maurizio Scandella of the University of Bologna quotes Nietzsche⁴:

Man, like all living creatures, continually thinks, but does not know it; thought that becomes conscious is only the smallest part of it, let’s say: the most superficial, the worst part, because only this conscious thought occurs in the form of words, that is to say, signs of communication, which reveals the source of consciousness itself. To put it in a nutshell, the development of language and the development of consciousness (not of reason, but only of consciousness of reason) go hand in hand.

IIT thus misses the essential: grasping the dynamics of consciousness and its function. Consciousness is far from being merely the immediate grasp of a scene, a grasp that can be axiomatized. Nietzsche went even further, as Maurizio Scandella reminds us:

The intersubjective relationship is essentially “in debt”. The debt gives the subject a conscience (an “I” who remembers his promise) and a body, that is to say a “guarantee” of the debt.

Even our dear Anne Barratin (The value of e-things) weighs in:

Speak often to the child of his conscience, so that, as a man, he may know how to remain under its dominion.

Conscious machines?

Even without going that far, we doubt that Φ > 0 is enough for a system to be considered conscious. Unless consciousness one day becomes a locatable object, it is likely that an artifact will only be “conscious” if… we have granted it that capacity! And on this point we are somewhat vulnerable. As adults we no longer talk to our cuddly toy and imagine its reply, but we retain this learned disposition of relating to all things (our peers, animals, natural systems, Gaia, objects, artefacts…) by engaging our conscious self and attributing similar capacities to those things. Consciousness is therefore also a matter of convention: a conscious machine is a machine socially accepted as conscious, even desired, why not, as such. Nietzsche is not far off, but remains at a safe distance.

Finally, while the economic interest of an “intelligent” machine is easy to conceive, our interest in its being “conscious” is far from obvious. It would even be the opposite (an animal endowed with consciousness, for example, is a problem for the food industry). This is why research on consciousness does not aim to reproduce it, but to identify it in order to subjugate it.

From the point of view of the “Technological System” (Jacques Ellul and the Technological System), consciousness is an undesirable side effect that creeps in surreptitiously: “consciousness creep”, an unexpected and problematic interference in Tononi’s sense, and hence its sensationalist corollary: how can we turn that thing off?


1. George Musser on Aeon – February 25, 2016 – Consciousness creep
2. Antonio Regalado in MIT Technology Review – October 2, 2014 – What It Will Take for Computers to Be Conscious
3. For a scientific and mathematical introduction, see Giulio Tononi and Christof Koch in The Royal Society Publishing – May 19, 2015 – Consciousness: here, there and everywhere?
4. Maurizio Scandella – Nietzsche et la conscience linguistique

