PageRank, Parcoursup and other Moral Machines

Reading time: 15 minutes

Translation by AB – September 20, 2020


Information theories

We use the “solutions” of the Technological System, but we know almost nothing about how they work. Then again, is it useful to know that the GPS in our car is driven by a system of satellites, electromagnetic waves and the equations of general relativity? Should we know that our air conditioner is a thermodynamic machine running on refrigerants derived from petroleum chemistry? That the screen of our smartphone responds to a transfer of electric charge from our finger? That Google’s search algorithm rests on an astonishing mathematical fixed-point theorem? …

Most of us are obviously indifferent to these countless theories and techniques, however ingenious and remarkable they may be. In fact, it is impossible to know more than a tiny fraction of them. And yet, once disseminated in the Technological System, these abstract inventions have truly gigantic effects, be they social, economic or environmental.

Information theories play a major role here. They are concerned with how we (in the broadest sense, including artifacts) interpret the world and communicate. These theories have even acquired a performative power that is totally unprecedented in the history of science and technology because, being immediately digitized, they can be projected without difficulty into this “digital natural environment” that already surrounds us. In a certain way, mathematical theorems “speak” to us through objects.

“Performativity” is1:

… the fact that a linguistic sign (statement, sentence, verb, etc.) is performative, that is to say, that it itself brings about what it states. The use of one of these signs thereby creates a reality […]

The American philosopher of language John Searle, in “The Construction of Social Reality” (1995), generalizes this idea: acts of language produce social realities, distinct from natural realities (physical, etc.), on which institutions (religious, civil) and conventions (games) are based.

It is now quite possible to replace “linguistic sign” and “act of language” with “algorithm” or “mathematical model” (on the performativity of mathematics, see also Women and Mathematics). If information theories and techniques “produce social realities”, then we would be well advised not to ignore them and, above all, to understand the intentions of their creators.

Google PageRank

For example, let’s take apart the Google search engine, originally driven by an algorithm called “PageRank”2. The French sociologist Dominique Cardon published in 2013 an article entitled “In the Spirit of PageRank, An Investigation into Google’s Algorithm”, which opens with this statement3:

PageRank is a moral machine. It encloses a system of values, giving pre-eminence to those who have been judged deserving by others, and deploying a will: to make the web a space where the exchange of merits is neither hindered nor distorted.

The theme of “algorithmic morality” was then in the air. Consumers, civil society and even political powers were beginning to notice, and worry about, the power effects of algorithms, which exert real social control and vassalize entire sectors of the economy (small businesses, self-employed entrepreneurs, consumers…). Recall that the famous linguistic sign “Uberisation” was coined in 2014. It is in this moment of collective astonishment that Dominique Cardon develops his thesis.

Returning to PageRank: how could this moral purpose (“the exchange of merits should be neither hindered nor distorted”), formulated in passing at the birth of a web we all dreamed of as “ideal”, and underpinned by a simple technical principle (the initial patent filed by Larry Page, the exclusive property of Stanford University until 2011, is only a few pages long), be so quickly transformed into a powerful, universal Moral Machine? How did the PageRank algorithm become so performative? This is precisely Google’s real tour de force: its diffusion in the Technological System.
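The principle itself really does fit in a few lines. Here is a minimal sketch in Python (our own illustration, certainly not Google’s code, and ignoring refinements such as dangling pages): PageRank computes the fixed point of r = d·M·r + (1 − d)/n, where M redistributes each page’s rank over the pages it cites, which is where the fixed-point theorem mentioned above comes in.

```python
import numpy as np

def pagerank(out_links, damping=0.85, tol=1e-9):
    """Minimal PageRank sketch: iterate r = damping * M @ r + (1 - damping) / n
    until r stops moving, i.e. until the fixed point is reached.
    out_links[j] lists the pages cited by page j (no dangling pages here)."""
    n = len(out_links)
    M = np.zeros((n, n))
    for j, links in enumerate(out_links):
        for i in links:
            M[i, j] = 1.0 / len(links)   # page j spreads its "vote" over its links
    r = np.full(n, 1.0 / n)
    while True:
        r_next = damping * M @ r + (1.0 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Three pages: 0 and 1 cite each other, 2 cites 0. The "votes" put
# page 0 first and page 2 (cited by nobody) last.
print(pagerank([[1], [0], [0]]))   # approx. [0.49, 0.46, 0.05]
```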

This diffusion capacity depends on at least two conditions of equal importance.

Firstly, a technical condition: how do you efficiently execute an algorithm invoked 40,000 times per second to search among 30 trillion pages? For this purpose, Google’s engineers have designed technical architectures that are far more complex, far more decisive and far less well known than PageRank.

Then, a financial condition. None of these massive technical deployments would have been possible without a financial system that tolerates, at the very least, massive and durably loss-making investments. Without this “cavalry”, none of the web giants could have seen the light of day, however sound their theories and however ingenious their engineers.

A little method for understanding information theories and their effects

The theories themselves are not neutral. They are not developed in an ethereal world; they do not search for a Platonic truth; they come from a culture, a moral system, a set of beliefs about what the world is or should be. We find them somehow “embodied” as Moral Machines in the Technological System. Their use in turn modifies our beliefs. The spiral of progress unfolds:

[Circular diagram: the PageRank Machine — (1) Belief System → (2) theory → (3) technical and financial means → (4) Moral Machine → back to (1)]

(1) A Belief System

As Dominique Cardon notes, the idea of using the citation link to rank information dates back to Moreno’s sociometric revolution of the 1930s, when he sought to describe the structure of society from the links between individuals rather than from the categories used to identify and differentiate people. Here, then, is a first belief: the social link is a vote.

Here is a second, even more decisive belief: the web is a means of liberation. Recall this (somewhat naive) incantation by John Perry Barlow, to which the web’s founding generation subscribed4:

We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.

It is hard to believe how quickly this Belief System collapsed, even in the heart of Silicon Valley, after only about twenty years of using the first Moral Machines (Google, Facebook, Twitter, etc.).

(2) An “inventor” and a rational theory

In this case, it is an algorithm, PageRank, that coincides with the Belief System and is thus justified by a “private” ethic (Google’s), which acts as a corporate baseline.

(3) Technical and financial means

Let’s emphasize the extreme ingenuity and remarkable skill of the engineers who designed and built the machines and mechanisms capable of spreading the algorithm throughout the system. The point is to set a great “power” in motion, and quickly (in a completely different field, the example of Tesla’s Gigafactory is striking).

(4) A Moral Machine…

…whose origin, intention, principle and mechanism are generally matters of indifference to us, when they are not simply unknown. We settle for using it, without considering that it embodies all of the above. And this machine in turn modifies our Belief System.

This diagram suggests many good questions about any information technology application: what Belief System does it rely on? What is the underlying theory that performs? Who invented it? What are the technical and financial means of its diffusion in the Technological System? What are its Moral Machines? And possibly: how do these Machines modify our Belief System? Let us admit that answering these questions is not always simple. But it is always possible.

Moral Machines, objects, relations

What, basically, is a Moral Machine? Let’s attempt a summary answer: in a given situation, a Moral Machine must select a behavior relative to a norm. By “norm” we mean here something like a “rule for assessing conformity to a moral system”. In the case of PageRank, then, the “situation” is the query, the “behavior” is the answer, and the “norm” is the rule “social link ~ vote” endorsed by Larry Page.
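This definition can be written down almost literally. A minimal sketch, with names of our own invention: a norm scores (situation, behavior) pairs, and the machine selects the best-scored behavior. For PageRank, the norm would score an answer by the link “votes” its pages receive.

```python
from typing import Callable, Iterable, TypeVar

S = TypeVar("S")  # the situation (e.g. a query)
B = TypeVar("B")  # a behavior (e.g. an answer)

def select_behavior(situation: S, behaviors: Iterable[B],
                    norm: Callable[[S, B], float]) -> B:
    """A Moral Machine reduced to its skeleton: in a given situation,
    select the behavior that the norm rates highest."""
    return max(behaviors, key=lambda b: norm(situation, b))
```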

In our view, there are two essential kinds of norms in an information system, not incompatible with each other but with very different intentions. What are they?

Remember that, following Moreno’s work, the “value” of a web page does not depend on its content, on what it says, but on the citations (hyperlinks) it receives from other pages. What seems self-evident to us today is the residue of a general revolution in our representations that took place in the 20th century: the transfer of value and meaning from objects to the relations between objects. The object exists (and is defined) only as an element of a structure, and derives its value only from its relations to, and differences from, the other objects5.

This revolution solves, a priori, many ethical problems. Indeed, it is no longer necessary to assign value directly to things according to morally questionable criteria; one can instead let a structure organize itself, as if by an invisible and neutral hand. Google, as Dominique Cardon explains, strives (strived?) in this way not to be “seen”. Here, then, is the original philosophy of the web, à la Barlow: to be indifferent to beings and things, to refrain from judging them (emphasis ours):

Pure intertextuality, the graph of the web would consist only of associations between statements, without any need to qualify the people who produced them. The disappearance of the enunciator is at the heart of this idealized vision of a world of ideas in dialogue with one another, in a relationship of argumentation and reasoning free of the weight of the interests, personality or psychology of those who uttered them.

But, to put it briefly, this “moral” ambition has evaporated. The structuralist vision is now giving way to a “return to the object” (page content, author’s personality, aesthetics of the post…) in the Moral Machines of the decade: numbers of likes, Uber or TripAdvisor ratings, social credit scoring in China (China and AI: imperial!), manual moderation of violent content, etc. It is this “return to the object” that brings with it the return of forms of judgment in these machines, and therefore, necessarily, the intervention of man and his part-time assistant: artificial intelligence.

If, therefore, we identify a Moral Machine by the fact that it selects a behavior relative to a norm in a given context, then our investigation should at least answer this question: does the norm in question apply to a structure whose objects are somehow “free” (and therefore equal “citizens”) (Type 1), or does it refer the objects to a system of values in which each is judged / measured (Type 2)? Note in passing that artificial intelligence makes machines of the second type possible (Being Stuart Russell – The Return of Moral Philosophy): the comeback of AI is not unrelated to that of moral judgment about objects and persons in Mundus Numericus.

Self-Driving Cars, Moral Machines

Consider the example of the self-driving car. Situation: the road environment. Behaviors: braking, turning, etc. But what about the norm?

Self-driving cars embody the famous algorithmic dilemma and its many variations: who should be sacrificed in the event of an unavoidable accident? The driver or the pedestrian? Which behavior should be selected? Either the algorithm chooses according to an intrinsic “value” that a Type 2 norm attributes to the potential victims in the situation (apparent age – an “old” person being worth less than a “young” one –, the driver’s social credit score, etc.), or according to a Type 1 norm indifferent to the “objects” themselves (the potential number of victims or, if there is a choice, the rule that the occupants of the vehicle about to cause the accident are the ones to be sacrificed).
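As a purely schematic illustration (hypothetical values and names of our own, certainly not any manufacturer’s code), the divergence between the two kinds of norms can be shown in a few lines:

```python
def choose(options, norm_type):
    """options: each possible maneuver comes with the people it affects.
    A Type 1 norm ignores who they are; a Type 2 norm scores each of them."""
    if norm_type == 1:
        # Structural norm: only the number of affected "objects" counts.
        return min(options, key=lambda o: len(o["affected"]))
    # Value norm: each person carries a (morally loaded) score.
    return min(options, key=lambda o: sum(p["score"] for p in o["affected"]))

options = [
    {"maneuver": "swerve", "affected": [{"who": "driver", "score": 5}]},
    {"maneuver": "stay", "affected": [{"who": "pedestrian", "score": 2},
                                      {"who": "pedestrian", "score": 2}]},
]
print(choose(options, 1)["maneuver"])  # "swerve": fewest victims
print(choose(options, 2)["maneuver"])  # "stay": smallest total "value" lost
```

Same situation, same available behaviors, but two different norms, hence two different Moral Machines.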

This simple dilemma turns the self-driving car into a Moral Machine, subject to all of our questions about information theories. It therefore deserves that we go beyond the “wow effect” and, following our “little method for understanding information theories and their effects”, ask ourselves some good questions: who designed its driving algorithm? According to what Belief System? By what technical (Gigafactory…) and financial (giga-debt) means does it spread through the Technological System? According to what norms does it select its behavior? How does it impact our Belief System (and, in this case, our environment)?

Parcoursup: a textbook case!

Parcoursup is the French educational guidance platform set up in 2018. Its ambition is to enroll each new baccalauréat holder in an educational path “which leads him/her to success”. The case of Parcoursup is obviously different from that of Google, but it is now clear that this platform is also a Moral Machine: it selects a “behavior” relative to a norm. Unlike PageRank, which conforms to a “private” ethic, this norm is public and has therefore been carefully documented. A “Parcoursup Ethical and Scientific Committee” (PESC) thus issued a documented opinion prior to the implementation of the platform6.

Technically, academic orientation is a matching problem between young people on the one hand and educational paths on the other. Most countries that have set up this type of platform use the same algorithm, the “stable marriage” algorithm, developed in 1962 by the mathematicians and economists David Gale and Lloyd Shapley. The general problem is formulated as follows (exposing, in passing, a certain Belief System…): given n men and n women, and their preferences, find a “stable” way to pair them7. Instability occurs when a man and a woman both prefer to be with each other rather than remain with their respective partners. Gale and Shapley’s algorithm guarantees a solution to this problem.
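Here is a minimal sketch of the deferred-acceptance procedure in its one-to-one version (the real Parcoursup variant handles capacities, quotas and many other constraints):

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Gale-Shapley (1962): proposers propose in order of preference;
    reviewers hold on to the best proposal so far and reject the rest.
    Each prefs[i] is a list of the other side's indices, best first."""
    n = len(proposer_prefs)
    # rank[r][p]: how reviewer r ranks proposer p (lower = preferred)
    rank = [{p: i for i, p in enumerate(prefs)} for prefs in reviewer_prefs]
    next_choice = [0] * n      # next reviewer each proposer will try
    engaged_to = [None] * n    # engaged_to[r] = proposer currently held by r
    free = list(range(n))
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = engaged_to[r]
        if current is None:
            engaged_to[r] = p
        elif rank[r][p] < rank[r][current]:
            engaged_to[r] = p      # r trades up; its former partner is free again
            free.append(current)
        else:
            free.append(p)         # r rejects p; p will propose further down
    return engaged_to

# Both proposers prefer reviewer 0; reviewer 0 prefers proposer 1.
# The result [1, 0] is stable: no pair would rather be together than
# with their assigned partners.
print(gale_shapley([[0, 1], [0, 1]], [[1, 0], [0, 1]]))
```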

This algorithm, in all its mathematical purity, thus guarantees a “stable marriage” between each young person and each educational institution: it will never happen that an enrolled young person prefers another institution which, at the same time, would prefer him or her to one of its enrolled candidates. This obviously does not mean that the young person gets his or her “preferred” path, nor even that the school is “satisfied” to welcome this young person, but in any case:

No candidate may contest his assignment on the grounds that he was refused admission to a course which accepted a candidate ranked lower than him.

Can we really do without humans?

Gale and Shapley thus equip Moral Machines that select their behavior according to a Type 1 (“structuralist”) norm, neutral and indisputable by the grace of a preference relation. Note that the algorithm itself obeys a belief: that the initial preferences ensure stability forever (strictly true in the mathematical universe but, as we know, only vaguely applicable to the real world)8. In the case of Parcoursup, a young person’s preference is materialized by the order of his or her choices, and the educational paths’ preference for the young people by their pedagogical rankings.

Even if the proposed algorithmic framework is neutral, fair and mathematically guaranteed to be stable, these preferences still have to be produced. In particular, the school or university must establish a pedagogical ranking of the students. Yet this pedagogical ranking is, for now, carried out by hand. Why? Here is the PESC’s argument:

The use of algorithms leads us to question the place of humans in public decision-making. It is a fact that public decision-makers seem to be increasingly surrendering their decision-making power to these algorithms, adorned with the virtues of algorithmic rationality. From this point of view, Parcoursup is no substitute for humans. It leaves a large part to human intervention. Nor does it have anything to do with an artificial intelligence or predictive system that would automatically predetermine candidates’ chances of success in higher education, or even their professional destiny, according to their profile.

For any Type 2 Moral Machine, human and artificial intelligence now compete to evaluate the “objects” (e.g. us). We cannot always do without humans (Facebook moderators, voice assistants’ listeners…), but AI is often the most tempting choice because it is the cheapest and sometimes the most efficient. The TripAdvisor Moral Machine, for example, chose to judge reviews algorithmically (AI), with the residue entrusted to humans9:

In 2018, 2.7 million reviews, or 0.04% of published reviews, were […] examined […] by the platform’s moderators.

The “Type 2” feedback on the Belief System is severe: by moving from algorithmic transparency (Stable Marriages, PageRank…) to opacity in the construction of their judgments about things, most Moral Machines annihilate the very idea of the “ideal web” and reinforce suspicion. Consequence: the new belief is that the digital is not a means of emancipation but a means of control, if not of surveillance.

Explainability

The comeback of human or artificial judgment in Moral Machines raises a new problem. Ethicists claim that these machines will be able to do without man as soon as they can explain their decisions and, above all, their judgments. The aim is mainly that users understand the relationship the machine establishes between themselves, the situation, the norm and the selected behavior. Ethics indeed requires that the user be able to accept or challenge an algorithmic decision on the basis of a clear explanation. The French PESC has done its job by publishing the Parcoursup algorithm. Here is a short extract:

We consider the total number A of applicants who currently have a proposal for this training, the capacity C of the group, and the additional call rate factor f as indicated by the training. If A is below f.C, then a proposal is sent to the first f.C – A applicants in the order of call, among those with a pending wish in that group.
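Transcribed as code, the quoted rule is a simple top-up step. A sketch under our own naming (A, C and f as in the quote; the rounding of f·C is our assumption):

```python
import math

def proposals_to_send(A, C, f, call_order, has_pending_wish):
    """Sketch of the quoted Parcoursup rule: if the number of current
    proposals A is below f*C, propose to the next f*C - A applicants,
    in call order, among those with a pending wish in the group."""
    target = math.floor(f * C)
    if A >= target:
        return []
    eligible = [a for a in call_order if has_pending_wish(a)]
    return eligible[:target - A]

# Capacity C=20, call-rate factor f=1.5, A=25 proposals already out:
# 30 - 25 = 5 more proposals go out, in call order.
print(proposals_to_send(25, 20, 1.5, range(1, 101), lambda a: a % 2 == 0))
# -> [2, 4, 6, 8, 10]
```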

Such transparency! Yet in most cases, with a few exceptions, nobody is able, and/or has the time, to understand even a relatively simple algorithmic sequence (and with “artificial intelligence” inside, it is nowadays almost impossible). Of course, the PESC had to do this job. But we can see that the explanation aims first of all to prove neutrality and to ensure the de jure transparency of an algorithm that must stand above particular cases. It thus primarily outfits the machine’s authors with moral and legal armor against those who do have the capacity to contest, for example legal professionals assisted by technical experts.

Back to the method

[Circular diagram: Moral Machines — the cycle (1) Belief System → (2) theory → (3) technical and financial means → (4) Moral Machine → back to (1)]

In short, Information Theories (2), once mathematized and algorithmized, have become performative: they produce new social realities. This is possible because their diffusion in the Technological System is ensured by considerable technical and financial means (3). Thus, recycled into Moral Machines (4), they in turn transform our Belief System (1). In particular, we have observed the abandonment of the “dream” of a neutral, egalitarian digital world and the spread of generalized suspicion.

Finally, a Moral Machine deploys a “norm”, public or private, which tries to ignore the “objects” (us, our posts…) by focusing as much as possible on their relations alone (our friends, our likes…). Since this effacement is impossible, the return to the object itself, to its “value”, is carried out by humans (or by AI, scoring…), often in the greatest opacity.

In the final analysis, therefore, there remains the question of “explanation”. But it seems that a solution is coming: XAI, “Explainable Artificial Intelligence”, artificial intelligence that knows how to explain what it decides. Considerable resources are being deployed at this very moment to develop this technology. It promises us a new wave of Moral Machines… finally above suspicion?


Post-scriptum: Discourse on the Method

This circular diagram is a tool intended to “take apart” some Moral Machines in order to understand how they work. It is essentially complex, in the sense that it reveals relationships between disjointed, heterogeneous elements (mathematics, finance, law, beliefs…), exposes contradictions and leads to approximations. This tool does not allow for any synthesis. Let us take it rather as an observation instrument or, more precisely, as a dialectical measurement instrument10:

Dialectics is not a rock-ribbed triad of thesis-antithesis-synthesis that serves as an all-purpose explanation; nor does it provide a formula that enables us to prove or predict anything; nor is it the motor force of history. The dialectic, as such, explains nothing, proves nothing, predicts nothing and causes nothing to happen. Rather, dialectics is a way of thinking that brings into focus the full range of changes and interactions that occur in the world. As part of this, it includes how to organize a reality viewed in this manner for purposes of study and how to present the results of what one finds to others, most of whom do not think dialectically.

This way of “organizing a reality” is complex. But, for Mundus Numericus, it must be attempted in every possible way, even at the risk of confusion.


1. It’s rather rare, but we prefer here the French entry from which the quote has been translated: Wikipédia – Performativité
2. Google now uses a complex algorithm based on about 200 ranking parameters, of which PageRank is just one. In fact, little has been known about it since 2014. One of the characteristics of the Machines discussed here is their increasing opacity.
3. Dominique Cardon – 2013 – Dans l’esprit du PageRank – Une enquête sur l’algorithme de Google, Réseaux 2013/1 (n° 177), pages 63 to 95
4. John Perry Barlow – 1996 – A Declaration of the Independence of Cyberspace
5. We will thus see the emergence of structuralism in the social sciences and humanities, category theory in mathematics, graph theory in algorithmics, and so on.
6. French « Ministère de l’Enseignement Supérieur, de la Recherche et de l’Innovation » – Document de présentation des algorithmes de Parcoursup (broken link)
7. Wikipedia – Stable marriage problem
8. This belief in mathematical stability in everything perhaps feeds, without our knowledge, via the performativity of the associated theories, the “strange world”.
9. Elsa Dicharry / Les Echos – September 17, 2019 – Quand TripAdvisor s’attaque aux faux avis sur son site
10. Bertell Ollman – 2005 – La dialectique mise en œuvre : Le processus d’abstraction dans la méthode de Marx
