The hypothesis of the “Robot Person”

Reading time: 22 minutes

Translation by AB – May 2, 2021

Translation note

This article discusses law and robots, relying mainly on French and European examples. The comments made here must therefore be qualified or adapted according to the regulatory and cultural contexts of each country. However, it seems to us that the underlying trend described here is more or less the same in all the countries where AI and robotics are developing. It remains for non-French readers to confirm this hypothesis…


The law barely keeps pace with digital technologies, “artificial intelligence” (AI) and robotics, but developing clear legal frameworks remains a pressing task since, put succinctly, these technologies now rule our fate. On this legislative construction site, technology and “society” meet, and our value systems play out their scenarios. The law is therefore also an advanced observation point, not so much on the techniques themselves as on our desires and beliefs about them. In this case, it signals a new and powerful thrust of an old atavism: the conviction that humans can (or are even called upon to) create the individual from scratch.

Responsibility

Whether in medicine, transport, human resources, agriculture, the military… or simply our personal lives, AI is gradually conquering the field of decision-making. The corollary of this conquest is obviously the question of the responsibility of these artefacts, to which we must at the same time concede, hesitating between apprehension and fascination, the possibility of making bad decisions. As victims of these decisions, we have the right to ask the responsible person(s) for compensation for the damage suffered, and some believe it will eventually become necessary to be able to incriminate the robot itself. In addition to the natural person and the legal person, a new subject of law would therefore have to be created: the “Robot Person”. But what is the basis for this third person? What does this proposal reveal about the relations we are preparing to have with robots?

We propose to start with a (simplified) technical detour that the champions of the Robot Person usually spare themselves, and for good reason, since it brings to light a simple and awkward objection: a robot is a technical object, and its “autonomy” is therefore a technical characteristic, not a moral aptitude. It follows that the “autonomous decision” must be technically defined and controlled, just like nuclear fission, the trajectory of a rocket or the baking of a cake… Since robots have many “degrees of freedom”, there is simply much more “work” to do.

Let’s open this technical detour with an example from France.

2013: Google vs. Lyonnaise de garantie

When you search Google, suggestions automatically appear. Technically, the algorithm works from statistical data collected constantly (frequent searches…) and, of course, while Google can guarantee the statistical quality of the results, the firm cannot predict the outcome of a particular query. This statistical technique is in principle analogous to the AI “learning” techniques that lead to statistically valid “decisions”, such as a correct move in chess or go, but which may, in particular cases, fail and cause “harm”. So Google’s suggestions sometimes have the misfortune of annoying someone…
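To make the mechanics concrete, here is a minimal sketch, in Python, of a frequency-based suggestion algorithm. It is our illustration under simple assumptions, not Google’s actual implementation: suggestions are simply the most frequent logged queries that extend the typed prefix.

from collections import Counter

# A sketch of frequency-based query suggestion (illustration only,
# not Google's actual algorithm): every search feeds the statistics,
# and suggestions are the most frequent queries extending the prefix.
query_log = Counter()

def record_search(query: str) -> None:
    query_log[query.lower()] += 1

def suggest(prefix: str, k: int = 3) -> list[str]:
    # The operator guarantees this statistical mechanism, but cannot
    # predict which strings it will surface for a given prefix.
    prefix = prefix.lower()
    matches = {q: n for q, n in query_log.items() if q.startswith(prefix)}
    return [q for q, _ in Counter(matches).most_common(k)]

for q in ("lyonnaise de garantie avis", "lyonnaise de garantie escroc",
          "lyonnaise de garantie escroc", "lyonnaise de garantie contact"):
    record_search(q)
print(suggest("lyonnaise de garantie"))  # the "thought" exists only once stated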

The Lyonnaise de Garantie, an insurance brokerage company, sued Google, whose algorithm proposed “Lyonnaise de garantie escroc” (“escroc” means “crook”) as the third most popular suggestion. Google was initially held liable as a supplier, but finally won the case before the Court of Cassation in a ruling handed down in June 2013 [1], which held that “Lyonnaise de garantie escroc” corresponds to “the statement of a thought made possible only by the implementation of the feature in question”. This means that this “thought” – i.e. a sentence awaiting human interpretation – was recorded in Google’s statistical tables but could be “stated” – returned to an interpreter – only by means of the suggestion algorithm. In other words, for Google this “thought” could not exist as such. Therefore (emphasis added):

[…] the functionality leading to the criticized suggestion is the result of a purely automatic process in its operation and random in its results, so that the display of the “keywords” which results from it is exclusive of any will of the operator of the search engine to emit the remarks in question or to confer to them an autonomous meaning beyond their simple juxtaposition and their only function of help to the search […]

Thus, the link of responsibility between Google and this “thought” is broken by a statistic without a statistician, one that keeps its aggregates secret until they are stated. Nevertheless, the prejudice suffered by the Lyonnaise de garantie was real: here is a harm, the consequence of a “digital decision”, with no responsible party! The same would probably apply today to a decision made by an “intelligent” artefact since, again, such a decision would only be statistically founded, without the supplier of this artefact having been able to voluntarily participate in its formulation. Here is a dramatic example.

2016: fatal accident in Florida

The Autopilot system of the Tesla Model S was activated when a heavy truck turned in front of the vehicle. This maneuver was not detected by Autopilot and the driver died in the collision. Tesla was quick to point out that Autopilot is only an assistance tool and that responsibility always lies with the driver. In this case, the driver does not seem to have seen the heavy vehicle (white, against a dazzling backlit sky) or to have initiated the slightest maneuver to regain control of the vehicle. Tesla also pointed out that this was the first fatality in 130 million total miles driven with Autopilot enabled, compared with an average of one fatal crash every 94 million miles driven in the U.S. (implying: Tesla + Autopilot is safer on average than Vehicle + Driver). Like Google’s suggestion algorithm or any other AI, Autopilot is statistically proficient but can fail dramatically in particular cases.
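Tesla’s argument is pure arithmetic on averages; restated as a quick sketch (with the figures quoted above):

autopilot_miles_per_fatality = 130_000_000  # 1 fatality in ~130M Autopilot miles
us_miles_per_fatality = 94_000_000          # ~1 fatal crash per 94M miles, all US driving

ratio = autopilot_miles_per_fatality / us_miles_per_fatality
print(f"Autopilot vs. US average (miles per fatality): {ratio:.2f}x")  # ~1.38x

# The argument is statistical through and through: a better average says
# nothing about the particular case (a white truck against a dazzling sky)
# in which the system fails.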

“Turn”, “brake”, “keep in lane”… are in a way so many “thoughts” whose “utterance” leads to decision effects which, as in the Google case, escape the “will” of the supplier. The seriousness of the harm, however, required a clear definition of responsibilities. Lacking a robot-specific mode of investigation, the investigators therefore studied the functioning of the Autopilot as a technical component of the car. But statistical AI is not a brake or a steering column and does not lend itself well to “physical” investigation. One can even note with some irony that this opacity protects the supplier, which can hardly be blamed for a design or calculation error. Thus, as one of the investigators put it, “in this crash, Tesla’s system worked as designed, but it was designed to perform limited tasks in a limited range of environments” [2]. Responsibility will therefore be sought in the context of use, implicating in principle either the supplier, for having poorly explained it, or the driver, for having violated it. In this case, the driver left the Autopilot at the controls for 37 of the 41 minutes the trip lasted, ignoring the visual and audible instructions from the Tesla Model S asking him to regularly take back control.

Calculation, attachment

Today’s robots do not make genuine decisions in the sense of exercising some kind of intimate deliberation followed by a free choice: they only perform calculations. The mathematical precision of these calculations and the explanation of their context of use are the responsibility of the supplier, and their proper use is entirely up to the user. In this case, not taking the vehicle back in hand amounted to asking the Autopilot to continue a “wrong calculation” (all things considered, the supplier of a tape measure is not responsible for the consequences of a poorly performed measurement either).

If the robot thus remains a technical object and therefore always a legal object, it has the peculiarity of being able, more than any other, to “lure” us. The supplier exaggerates the “intelligence” of its product and equips it with “tricks” that arouse our attachment in return (Attachment to Simulacra). This attachment can lead to a slackening of our attention, to a “misuse of the calculation” causing harm, thus engaging our responsibility. It is therefore up to the competent authorities to moderate the supplier’s “overstatement” and to warn users. This is not simple. To return to the example of Tesla, the term “Autopilot” does mean “automatic pilot”, not “driving assistant”, and the KBA (the German Federal Motor Transport Authority) tried, in vain, to have it banned.

Technical autonomy

“Autonomy” is gradually entering the domain of standards and degrees because it must be measurable in order to adjust the regulatory or insurance framework. Thus, ISO publishes a set of standards for robots such as collaborative robots (ISO 15066) or personal care robots (ISO 13482). In 2014, the Society of Automotive Engineers (SAE) developed a classification (J3016) which defines six levels of autonomy (0 to 5) for vehicle driving systems [3]:

[Figure: SAE levels of driving automation]

The Tesla vehicle involved in the Florida crash reaches level 2, for which the classification states “You are driving”, even if you do not touch the pedals or the steering wheel and even if the driving assistant is called “Autopilot”. At levels 3 to 5 the vehicle is technically autonomous, since “You are not driving” even when you are in the “driver’s seat”. In 2019, Audi marketed the first level 3 vehicle in the world (an Audi A8). At this level, the operating conditions of the machine are quite restricted (Audi pointedly calls its system “Traffic Jam Pilot”) and the driver must be able to regain control whenever the system requests it. The feature is, however, locked according to the legislation in force in each country, i.e. everywhere except… Japan, which amended its Road Traffic Act to authorize level 3 vehicles on public roads from May 2020. The Olympic Games planned for the summer of 2020 were obviously a unique opportunity for this pioneering country in robotics [4]. In 2020, Honda became the first supplier to be granted level 3 certification there [5], thus crossing the “You are not driving” Rubicon. In Japan, the driver is no longer necessarily human.
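The J3016 classification lends itself to a simple encoding; a minimal sketch of the six levels and of the “You are driving / You are not driving” watershed discussed here (level names summarized by us):

from enum import IntEnum

class SAELevel(IntEnum):
    # SAE J3016 levels of driving automation (summary for illustration)
    NO_AUTOMATION = 0           # warnings at most
    DRIVER_ASSISTANCE = 1       # steering OR speed support
    PARTIAL_AUTOMATION = 2      # steering AND speed support, driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives, human must take over on request
    HIGH_AUTOMATION = 4         # no takeover needed within the operating domain
    FULL_AUTOMATION = 5         # drives everywhere, in all conditions

def human_is_driving(level: SAELevel) -> bool:
    # The legal watershed: "You are driving" (0-2) vs. "You are not driving" (3-5)
    return level <= SAELevel.PARTIAL_AUTOMATION

assert human_is_driving(SAELevel.PARTIAL_AUTOMATION)          # the Florida Tesla
assert not human_is_driving(SAELevel.CONDITIONAL_AUTOMATION)  # Audi A8, Honda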

A technical autonomy scale thus makes it possible to control simultaneously the progress of autonomous machines and that of the environment, including the regulatory environment, in which these machines operate. We can already see that a highly autonomous Robot Person has no chance of being let loose in today’s environment, whether physical, legal or political.

Technical environment

SAE classification levels are beginning to inspire medical robotics too [6]. At level 5, the robot would no longer be a medical device, even a very sophisticated one, but an authentic practitioner. But whether in medicine or any other field (police, home help, army, vehicle driving, etc.), this level 5 is still a utopia and we shall not soon come across fully autonomous robots in a totally open environment. At the same time, we can watch the curious propensity of the technical system to generate objects that are more and more autonomous (but which, moreover, have no character of necessity). In our technical world, the unattainable level 5 thus presents itself as a powerful final cause, pushing us to technicize our environment so that the technical autonomy of objects can appear more authentic and prove more effective (see the example of agriculture – in French: L’AgTech ou les champs numériques). The “autonomy zone” and its degrees then unfold as follows:

The progress of the “environment” must be understood in a broad sense, as the extension of the domain of “decision” of autonomous objects. It can be technical progress (smart city, smart road, connected apartment…), as well as physical (dedicated operating areas…), regulatory or insurance-related progress…

Technical autonomy thus develops along with the robot/environment coupling. For example, the digital environment allows the gathering of enough valuable real data to improve the learning of the robots that circulate in it, and therefore the progressive widening of their perimeter of autonomy. “Technoking” Elon Musk [7] thus collects in real time the behavioral data of 1.4 million users, so many data points with which to improve the performance of the Autopilot. At the same time, its operating environment, the road infrastructure, must progress in such a way as to enclose the vehicle in a technical cocoon that can be viewed as a “digital road”, where risk is reduced to the mere technical failure of a piece of equipment rather than a legally problematic statistical bias. Nothing yet indicates that, along the way of this progressive coupling, a Robot Person will ever become legally necessary.

The collateral effect of this progress directed towards the “final cause” is the risk that human beings become progressively ill-suited to this environment undergoing technicization, an environment we share with autonomous machines: the human being finds himself engaged in a new field of responsibility. At the same time, therefore, “classic” positive law needs, to put it briefly, to be adapted.

The European Parliament hesitates

The European Parliament Resolution of 16 February 2017 contains recommendations for civil law rules on robotics [8]. This resolution begins with a few dozen considerations that sketch a panorama of a future world profoundly transformed by robotics, in which, to retain only the theme that interests us here, legal responsibility should be rethought. Indeed (emphasis added):

AF. whereas in the scenario where a robot can take autonomous decisions, the traditional rules will not suffice to give rise to legal liability for damage caused by a robot, since they would not make it possible to identify the party responsible for providing compensation and to require that party to make good the damage it has caused;

[…]

AI. whereas, notwithstanding the scope of the Directive 85/374/EEC, the current legal framework would not be sufficient to cover the damage caused by the new generation of robots, insofar as they can be equipped with adaptive and learning abilities entailing a certain degree of unpredictability in their behavior, since those robots would autonomously learn from their own variable experience and interact with their environment in a unique and unforeseeable manner;

These considerations unambiguously set the stage for the Robot Person on the basis of a rather ambiguous assessment of autonomy. It is stated earlier in the text (AA) that “a robot’s autonomy can be defined as the ability to take decisions and implement them in the outside world, independently of external control or influence; […] this autonomy is of a purely technological nature and its degree depends on how sophisticated a robot’s interaction with its environment has been designed to be”. Autonomy is technical in nature, we agree, and as noted above, the definitions and degrees of technical autonomy are gradually being put in place, paving the way for a correlative adjustment of the “environment”, in particular the legal one. But the rest of the statement is ambiguous. This “degree of unpredictability in their behavior” and this capacity to make decisions “independently of external control or influence” irresistibly evoke the dominant cognitivist posture of locking “thought” in one box and the “outside world” in another, and then treating the fit between the two boxes as the problem. This old fad of the brain (even an artificial one) closed in on its mysterious rumination, from which “decisions” occasionally spring, continues to sow confusion in our language. Let us remember that the robot is not a cryptic being we seek to understand but a technical object from head to toe. Let us remember that the robot, like the Tesla car, will of course always be controlled from the outside, technically from the cloud, by public or private authorities. Let us also remember that nothing obliges us (except this “level 5 teleology”…) to put potentially dangerous machines into circulation, and that it seems unreasonable to invent a new subject of law precisely in order to allow it.

To conclude with these parliamentary considerations, let us return to the expression “take autonomous decisions”. Google’s algorithm, while strictly non-robotic, “takes autonomous decisions” like any algorithm powered by contextual data. It is moreover this form of autonomy – in the sense of being able to “state a thought” that has no existence before the act of enunciation (that is to say, before the calculation of the enunciation) – which released Google from its responsibility as a supplier. This example shows that, if we want to continue using these technologies, the principle of responsibility must be rethought as such cases arise, but without ever giving in to the temptation of the Robot Person. Regulations must be clarified, standards must appear, insurance must be adapted and suppliers must set up safeguards, without conceding to objects any capacity to make decisions autonomously in the usual, human sense of the term. These machines, like all those we have created since the dawn of time, must simply be integrated technically, socially, politically…

The founding ambiguity of the AI field leads once again to the doors of fantasy (“Q. whereas ultimately there is a possibility that in the long-term, AI could surpass human intellectual capacity;”). One of the great challenges awaiting us will probably be to hold firmly the ontological frontier that still separates the human from the technical object. Yet speculations and “advances” in the law present themselves as so many signs of a certain slackening. The European Parliament is still cautious when it comes to the responsibility surrounding robotic activity and mainly proposes common-sense instruments (compulsory insurance scheme, compensation fund, individual registration…). But its premises inevitably lead it to ask the Commission to give final consideration to:

(59. f) creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause […]

Nathalie Nevejans, lecturer in private law at the University of Artois, specialist in the law and ethics of artificial intelligence and member of the CNRS ethics committee (COMETS), develops a clear critique of this new legal person. She rightly points out that “the feeling very quickly sets in [reading the European Parliament Resolution] that the desired developments are not simply motivated by legal arguments, but implicitly prepare an ontological upheaval of the place of humans in a technological world” [9]. The law here plays the role of the canary in the coal mine, and its convulsions announce something.

The robotic nature

Thanks to [ cybernetics ] the austere figures of registration, of accounting compensation, of a posteriori summation rediscover the freshness of what is born, of what “self-organizes” with all the vigor and innocence of a fauna or a flora […] but at the price of a degradation of politics into a theory of competition of aggregates […]

Gilles Châtelet [10]

However, the robot is not an object quite like the others. It is the only one that “does everything” to hide its natural state of technical object. From this point of view, it raises specific questions in terms of technical, social and political integration. For example, we recalled above that the “game” between manufacturer and user can lead to a “harm by attachment” that calls for at least a regulatory intervention (besides, all the “hard” attachment industries, such as the tobacco industry, are instructive resources for the robotics industry). But beyond this particularity, the digital robot has intrinsic technical characteristics which, for once, really are part of its nature. Let us recall the three main ones.

Tesla’s Autopilot does not work differently from Google’s algorithm. While the former has a physical envelope, both are products of IT (Information Technology). Yet IT is characterized by its opacity. This technology produces surface effects that anyone can observe, desire, fear… without our understanding their causes or even their occurrence. It is opaque as the sky is blue, and the manufacturer using IT as “raw material” inevitably produces opaque objects. We must defend ourselves against this opacity by all the means at our disposal, whether technical, legal or ethical (Tristan Harris and the Swamp of Digital Ethics).

Second, Autopilot’s algorithm differs from Google’s in its complexity. This complexity has levels, measurable for example in the number of lines of code, of functional modules, of possible algorithmic inputs and outputs, in the memory required, the computation time… The manufacturer who builds great complexity into its machines is exposed to having to set up proof, verification and certification procedures that are themselves complex. It also becomes difficult to regain control of a complex machine that runs into trouble. Remember the terrible accidents of the Boeing 737 MAX [11]. If this source is to be believed, this new aircraft, which succeeded the 737 NG, known for its high flight stability and reliability, was equipped with larger engines that significantly changed the aircraft’s stability parameters. Rather than physically redesigning the aircraft, Boeing opted for a cheaper and faster option [12]: automatic stabilization software. Without being specialists in aeronautics, we can understand that the airplane thus became more dangerous by design, because this complex software became an additional point of failure: “adding more software just adds complexity to an aircraft”. The “hyper-softwarized” robot, whose complexity makes it unpredictable, presents the same kind of difficulty, and it will take many more technical generations to “naturalize” its behavior, i.e. to make it “stable” in the broad sense, without complex additions. The search for simplicity, or naturalness, should be an exciting technical quest as well as an ethical requirement.
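Some of these crude complexity measures can be computed mechanically; a minimal, purely illustrative sketch (real certification metrics are of course far more sophisticated):

import ast

def crude_complexity(source: str) -> dict:
    # Crude metrics of the kind mentioned above: lines of code,
    # number of functions, number of branching points.
    tree = ast.parse(source)
    return {
        "lines_of_code": sum(1 for line in source.splitlines() if line.strip()),
        "functions": sum(isinstance(n, ast.FunctionDef) for n in ast.walk(tree)),
        "branches": sum(isinstance(n, (ast.If, ast.For, ast.While))
                        for n in ast.walk(tree)),
    }

print(crude_complexity(open(__file__).read()))  # measure this very file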

Finally, since we are talking about AI, the mysterious and unpredictable “freshness” of the robot is born, as the mathematician-philosopher Gilles Châtelet nicely puts it, from the “austere figures” of calculation. The extraordinary accumulation of data, at an exorbitant ecological cost, produces at most the possibility of capturing increasingly fine statistical aggregates that are a posteriori exposed as “real” things or “intelligent” decisions. The robot is in a way an advanced degree of this statistical zoom. It looks like a collection of hyper-fine statistical aggregates gathered in a physical envelope, an extended inventory of average gestures. The issue thus appears to be technical, but it also seems to us to be ethical in a sense that is still difficult to characterize: is the targeted operating environment, from which the data are taken for the fabrication of the robot’s “behavioral aggregates”, respected and respectful? (This last point deserves a later elaboration).

The digital robot is therefore opaque, complex and statistical. Rather than surrendering to these “existentials” by packing them into a legal person, we must “oppose” them on principle. The weapons must be regulatory and technical, of course, and there is an inexhaustible source of innovation there. But these weapons must also be ethical. We will thus oppose opacity with an ethic of disclosure, complexity with an ethic of simplicity and, later, statistics with an “environmental” ethic.

The hypothesis of the “Robot Person”

It doesn’t matter whether there’s blood in your veins or hydraulic fluid, it’s everyone’s right to be free.

W.H. Mitchell, The Arks of Andromeda

In France, the lawyer Alain Bensoussan, well known in the digital sector, was the first to advocate, back in 2013, the advent of this Robot Person, to whom we could grant rights and duties. The most singular aspect of his proposal is that it is based on natural rights, whose usual definition is as follows [13]:

Natural rights are rights granted to all people by nature or God that cannot be denied or restricted by any government or individual. Natural rights are often said to be granted to people by “natural law”.

The robot would thus acquire a set of rights by virtue of its membership in a “robohumanity in which men and robots will have to learn to cohabit” [14]. For the lawyer, there is no doubt: we are dealing with a “new species”, which would confer on the robot a “state of nature” from which would flow a “right to sovereignty” and in particular to the “freedom which resides in the fact of being able to decide ‘in his soul and conscience’ or rather ‘in his algorithms and conscience’, in any case more than a simple automaton”. Maître Bensoussan joins this “ontological upheaval” with an impetus that seems a little too daring to us. It simply bypasses technology and its inherent dignity to establish directly a fictitious robot species; but what we want to point out here is not so much the statement itself as the fact that it can exist at all. And he is not the only one…

Asimov, the return

Nobody seems ready to welcome a third subject of law yet. But, as we have seen, the political authorities are preparing the ground and the ethics committees are busy; the “canary” quivers. Thus, in January 2020, the French deputy Pierre-Alain Raphan drafted a constitutional bill relating to the Charter of artificial intelligence and algorithms [15]. This rather disconcerting proposal is inspired by the famous “Three Laws of Robotics” stated in 1942 by the science fiction writer Isaac Asimov [16]. While Article 1 prudently denies any legal responsibility to the robot or the AI algorithm, defined as “evolving in its structure, learning, with regard to its initial design”, Article 2 states that such a system:

« – cannot harm a being or a group of human beings, nor, by remaining passive, allow a being or a group of human beings to be exposed to danger;

« – must obey orders given to it by a human being, unless such orders conflict with the previous point;

« – must protect its existence as long as this protection does not conflict with the previous two points.

These are de facto duties incumbent on the natural and legal persons who “host or distribute the said system”. But no calculation can offer such guarantees, as the European Parliament, likewise drawing on science fiction, itself recalls:

U. whereas Asimov’s Laws must be regarded as being directed at the designers, producers and operators of robots, including robots assigned with built-in autonomy and self-learning, since those laws cannot be converted into machine code;

Consequently, these “laws” (or others) should be considered as classical rules of ethics applying to the whole human community that participates in integrating tools, whatever they may be (car, knife, sewing machine…), into our environment, from design to use. The famous “autonomy” of robots is just one more technical feature, one that requires a clear technical definition and control.

Anthropological break

I am not afraid of robots. I am afraid of people, people, people. I want them to remain human.

Ray Bradbury

We can admit that, as a technical object, the robot can belong to a “species”, as Alain Bensoussan suggests, but only in the sense of a particular mode of technical individuation that could call, why not, for a particular dignity and a specific care. There is a real field of reflection to be reactivated here, in particular for objects on which we confer autonomous modes. But at the same time as we “demand” of them, through ourselves, that they respect the human, the obligation should be reciprocal: what kind of “respect” do we owe these objects? This ethical posture, mostly neglected, is necessary for the production of artifacts that “respect” us. The attempt to delegate responsibility to the object is disrespectful to the object as well as to ourselves.

Finally, the most important thing here is to observe that the law is trying in its turn to express the zeitgeist, the one that exalts the universalism of technological species and correlatively undermines the principle of a natural and universal human right. About this very profound movement there would be so much to say. Let us begin with this: it is probably what the despisers of technical “progress” groaningly call an “anthropological break”. By dint of invoking it, it is coming.


1. French Court of Cassation – June 2013 – Arrêt n° 625 du 19 juin 2013 (12-17.591)
2. BBC News – September 12, 2017 – Tesla Autopilot ‘partly to blame’ for crash
3. SAE website.
4. Herbert Smith Freehills – January 28, 2019 – Japan advances driverless car ambitions with draft bill to amend Road Traffic Act
5. AFP / msn.com – November 11, 2020 – Honda wins world-first approval for Level 3 autonomous car
6. Guang-Zhong Yang, James Cambias, Kevin Cleary, Eric Daimler, James Drake, Pierre E. Dupont, Nobuhiko Hata, Peter Kazanzides, Sylvain Martel, Rajni V. Patel, Veronica J. Santos, Russell H. Taylor / Science Robotics – 2017 – Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy
7. Chris Isidore / CNN Business – March 15, 2021 – Elon Musk is now ‘Technoking’ of Tesla. Seriously
8. European Parliament – February 16, 2017 – Résolution du Parlement européen du 16 février 2017 contenant des recommandations à la Commission concernant des règles de droit civil sur la robotique
9. Nathalie Nevejans / La Jaune et la Rouge N° 750 – December 2019 – Le statut juridique du robot doit-il évoluer ?
10. Gilles Châtelet / Éditions rue d’Ulm – 2010 – L’enchantement du virtuel (« Mettre la main à quelle pâte ? ») – “Grâce à [ la cybernétique ] les figures austères de l’enregistrement, de la compensation comptable, de la sommation a posteriori retrouvent la fraîcheur de ce qui naît, de ce qui « s’auto-organise » avec toute la vigueur et l’innocence d’une faune ou d’une flore […] mais au prix d’une dégradation de la politique en théorie de compétition d’agrégats […]”
11. Padma Nagappan / San Diego State University – October 24, 2019 – What Caused Boeing’s 737 Max Crashes
12. Ian Snyder / Points with a crew – March 12, 2019 – Can Boeing fix a potentially faulty 737 MAX design with software?
13. ThoughtCo – What Are Natural Rights?
14. Alain Bensoussan / Figaro Blog – July 10, 2018 – Le droit naturel, fondement juridique de la personne-robot ?
15. French National Assembly – January 15, 2020 – Proposition de loi constitutionnelle relative à la Charte de l’intelligence artificielle et des algorithmes
16. First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law: a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Isaac Asimov, Runaround, 1942)


Notes

May 21, 2021 – Kratt Law

As complementary reading, we suggest this article on the thinking of Estonia, the “e-democracy”, on this subject: Estonia considers a ‘kratt law’ to legalise Artificial Intelligence (AI).
