Tristan Harris and the Swamp of Digital Ethics

Reading time: 15 minutes

Translation by AB – February 16, 2021


The information technology power surge has been so sudden (20-30 years at most) that we remain flabbergasted, barely able to erect legislative and political firewalls against its most unwanted effects. There is, however, some good news: the emergence of an ethical concern among young engineers. We will take here the example of the American Tristan Harris, who has gained some notoriety by revealing the methods used by the major digital companies to capture our attention and our time. This first step was necessary to set a movement towards “ethics” in motion, but in our opinion his analyses call for a critical response. It must nevertheless be acknowledged that this “swamp” of digital ethics is treacherous…

We have chosen to respond to the ethical position of Tristan Harris, boldly oriented towards values (“good”, “bad”…), with an introduction to the more humble “disclosive ethics”. In our opinion, this long-established approach constitutes a solid pontoon on the shore of this swamp. To conclude, we will draw a first connection between this ethical question and the “Moral Machines” we discussed in 2019.

Circles

How do you ethically steer the thoughts and actions of two billion people’s minds every day?

Tristan Harris’s website

The message of Tristan Harris, briefly a “design ethicist” at Google, has always been simple: digital companies use every trick in the book to monopolize our time. We are literally losing our time navigating and interacting digitally. From there to moral judgment, the step is easily taken. This was in 2016 (emphasis ours)1:

What’s bad is that our screens, by “filling us up”, while falsely giving us the impression that we are choosing, threaten our fundamental freedom to live our lives the way we want, to spend our time the way we want, and replace the choices we would have made with the choices these companies want us to make.

Tristan Harris’s statement hits the mark because it is delivered at the right time and from the right place: Silicon Valley. Recall that in 2015-2016, a (small) wave of denunciations of abuses and scandals sparked awareness there and triggered a (small) wave of regret and contrition, as well as new vocations. Highly paid engineers, sought after and celebrated as stars, suddenly realized that Silicon Valley is no exception to the banality of humankind and that their work serves uncontrollable and sometimes “bad” power effects… We then watched, with some apprehension and a hint of hope, these same engineers take the problem in hand. If it is “bad”, then there is a problem to be solved and consequently a technical solution to be found (and, incidentally, a business). So, according to Tristan Harris, the solution…

… would be technologies that would give us back our freedom of choice. […] For technologies to help us meet these basic human needs instead of taking their place, they must change. It won’t be quantitative like having fewer notifications. All these screens would be radically different, they would create, by design, social connection.

A “screen” creating, “by design, social connection”, doesn’t that remind you of something? This somewhat circular and schematic reasoning is typical of a technological dynamic which seems to produce innovation but which today delivers, for the most part, only ever more powerful repetitions or echoes. Mundus Numericus is thus traversed by somewhat vicious technological circles in which “bad” effects develop (“threat to our fundamental freedom”), regulated on the following lap by “good” solutions (“giving us back our freedom of choice”). From this circularity was born the ethical awakening of Tristan Harris that we will now examine. Tristan Harris is obviously far from the only one to animate these circles. There are many other proponents of this doctrine, including here in Europe, who believe in the unlimited capacity of technology to do or restore the “good”.

Problems

The economic matrix requires it: every problem has, sooner or later, a technical solution. Nothing should be left to the mystery of guesswork, to the uncertainty of events, to the free flow of reality… In the same way that, as François Sureau (a French writer, lawyer and technocrat) put it, “we have already become accustomed to living without freedom”, we have also become accustomed to “solutionism” (two habits that can also be seen in our general behavior in the face of the pandemic). In any case, for the economy and for business, the issue is not so much to offer solutions as to get hold of the right problems.

Problems can be roughly ranked according to two criteria: “reach” (local → regional → global) and “artificiality” (natural → presumed → produced – voluntarily or involuntarily). What we call “progress” is measured, not by the number of problems solved, but rather by how far the problems have progressed in the direction “local / natural” → “global / produced”. We have thus developed, in line with our progress, the technical capacity to produce major problems (global warming, pandemics, democratic disorders…).

The problem identified by Tristan Harris falls into this latter category: since it concerns the “fundamental freedoms” of at least “two billion people”, it is global; since it is caused by platform technology, it is produced (i.e., artificial). The “theft” of our time is therefore the result of “great” progress, which in turn opens up a new economic field: ethics. All the intentions in the world, “good” as well as “bad”, rush in, a sign of this phenomenon, well identified here: the radical amorality of treating any matter as a problem to be solved.

By design

We will not dwell much on the consequences of the feats of attention design and captology (we refer to this 2017 article: Our second natures), highlighted, especially since 2016, by many works: addictions and psychological disorders, ready-made thinking, amplification of semantic salience (fake news, conspiracy theories…), “Apocalypse cognitive”2, “Crétins digitaux”3 (“digital morons”)… The problem is certainly large, and Tristan Harris sets out to solve it, thus launching the “ethics business” about which we will now say a few words.

Harris, 36, is a graduate of Stanford University with a career path similar to that of his brightest peers. He began as a development engineer at Apple. Then, in 2007, he co-founded Apture, a company offering a technology to improve web browsing4. This innovation met with some success, and Apture thereby entered Google’s “Innovation Kill Zone”. Following Google’s takeover of Apture, Tristan Harris continued his career at Google, where he became a “design ethicist”, a phrase which, as is now customary, dresses up the targeted solution in words. We can indeed take any problem of non-compliance with a value “X”, observed as the consequence of an operational digital solution, and go back up the chain of technical causes to the first one: design. A new “vein” then arises, called “X by design”, as in “security by design”, “privacy by design”, “ecology by design”, “fairness by design”… and therefore, of course, “ethics by design”.

The technical nobility thus concentrates its efforts on design, which of course has its authentic talents as well as its schools, its language, its circles, its pretenders… Everything that the activity of design monopolizes is thus recycled as a new source of technical power: security, privacy, ecology… and therefore ethics.

Dilemma

Tristan Harris’s ethical proposal initially targets the compulsion of the large digital platforms (Google and Facebook first and foremost) for capturing user attention, behavioral influence, nudges… But remember that this is an essential part of their business model, and we have always praised the technical prowess and excellent design work of these companies, which know our “horrible secrets” by heart, the first of which is that we are all more or less the same (Adam Curtis and the strange world). Addiction is therefore rather easy to arouse, in particular by maintaining, by means of big data, artificial “intelligence” and so on, the idea or hope that each of us is unique…

In 2013, Tristan Harris and Aza Raskin (“writer, entrepreneur, inventor, and interface designer”5) founded the non-profit association “Time Well Spent”, since renamed the “Center for Humane Technology”6, to foster the development of human-centered technology that would solve the problems listed in their “ledger of harms”: misinformation, conspiracy theories, fake news, loss of cognitive abilities, stress, loneliness, loss of empathy, racism, sexism, suicidal temptations, etc. What a program! In the same vein, Harris and Raskin appear in a docudrama broadcast in 2020 on Netflix (!), “The Social Dilemma”, which examines the use and exploitation of personal data, surveillance capitalism, addictions, political manipulation, etc.

This work obviously has the merit of raising the awareness of many people, but the simplicity of the statements, the apparent ignorance of the old “realm” of ethics, and the “solutionist” position seem to lead to a kind of emptiness. Let us read a passage from this very interesting review of “The Social Dilemma”7:

[ The docudrama ] treats any problem with the internet as a problem with the specific conditions that make Facebook or YouTube bad. Any large-scale solution boils down to tinkering with or shutting down those platforms. And any small-scale one involves cutting back on “screen time” or deleting your accounts, the main options that The Social Dilemma proposes8.

The global / produced problem identified by Harris crushes everything in its path and inevitably forces schematic solutions that are really of interest only to designers. But where are the users, who still see the value of their digital social interactions without being totally naive about their potential harms? Are they to be taken for fools by design ethicists? Are we really living through that historic moment when the user and the citizen, weakened, must be “protected” by those who know rather than educated and empowered?

Not so simple… Tristan Harris’s approach seems useful and well-founded, but it suffers in particular from what it leaves implicit and unspoken, and from an absence of reflexivity (values are never questioned), which calls into question its authentically ethical effectiveness. How can we better qualify what is at stake here?

Disclosive ethics

Indeed information technology seems to be a very cost-efficient way to solve many of the problems facing an increasingly complex society. One can almost say it has become a default technology for solving a whole raft of technical and social problems. It has become synonymous with society’s view of modernisation and progress9.

This statement by Lucas D. Introna, professor of technology and ethics at Lancaster University in the UK, dates from 2005. The dot-com crash was then being digested and the inevitable “progress” had started up again. Zuckerberg had just founded The Facebook, Google launched Google Maps, Amazon launched its Amazon Mechanical Turk micro-work platform, and Apple still had only $14 billion in revenue ($275 billion in 2020)… At that time, it was already clear (at least in the United States) that information technology (“IT”) would become the focus of most new global businesses. It was, however, necessary to “innovate”, “disrupt” and “creatively destroy” the existing order to make room for them. What was less clear, except to Introna and to information scientists since the 1980s, was that IT was going to diffuse its own “biases”. It would not be just another technology.

The first characteristic of IT, which is also the first premise of the reasoning that follows, is its opacity: IT is embedded (we cannot watch it operate the way we can a washing machine), autonomous (it does not require our participation or presence to run or to stop – Introna draws on the example of facial recognition systems), flexible (it is not intended to produce a result stabilized in advance – just think of the “theater” of social networks), obscure in its mode of operation (it is materially impossible to follow the computer code line by line as it executes, or to trace its effects layer by layer down to the electronic and physical impulses it triggers), and mobile (not tied to a given piece of hardware – our so-called “digital natural environment”). Let us simply retain from this astute list that IT is opaque, that is to say that it produces surface effects that anyone can observe, desire or fear… without our understanding their causes or even their occurrence.

The second premise is better known. As Introna points out, quoting the American political theorist Langdon Winner, “technology is political”. In a philosophically “purer” language, we can quote the French philosopher Gilbert Simondon, who saw in the machine in general the reproduction of a “human gesture fixed and crystallized in structures which operate”. In other words, until the eventual advent of an artificial life or an artificial intelligence that would no longer require our presence to operate, technology remains a human reality intended for the reproduction of a human “gesture”, and therefore remains under control.

Conclusion: opacity favors the underground development of the “political” (with a small “p”), understood as “the actual operation of power in serving or enclosing particular interests, and not others”. The proposed “ethical” action (with a small “e”, says Introna) therefore does not bear on the major questions of “good” and “bad” but, more concretely, on the identification of those “others” excluded from the process. This exclusion is a “closure” produced by the opaque activity specific to IT. Thus, it is a “disclosive ethics” that is called for at the beginning of any digital ethical practice. What is not ethically correct at the very beginning is the “concealment of the concealment”.

This concealment, which Introna considers in the two forms “closure” and “enclosure”, can be observed throughout the lifecycle of digital technical systems.

Introna vs. Harris

When you want to change things, you can’t please everyone. If you do please everyone, you aren’t making enough progress.

Mark Zuckerberg

Let us now draw some parallels between the words of Lucas D. Introna and the positions of Tristan Harris (emphasis added):

We can see it operating as already ‘closed’ from the start – where the voices (or interests) of some are shut out from the design process and use context from the start.

“Design thinking” (short introduction here in another context: Artificial Intelligence-Art in its infancy) obviously does not have the means to consider all the “stakeholders” on earth (“two billion people’s minds”). Designers therefore necessarily have choices to make, implicit or explicit, aimed at satisfying the greatest number by focusing on an “average” individual, real or assumed. For example, the “attention design” criticized by Tristan Harris excludes a large set of users who do not subscribe to the “captological” principle for themselves (advertising, manipulation…) or, worse, who cannot bear its effects for psychological reasons. It would therefore be necessary, ethically, to disclose the exclusion of these users, just as, in other circumstances, that of the minorities missing from the training samples of neuromimetic networks (AI), or of the visually impaired, dyslexic, or people with motor disabilities… But the operation of concealment does not end there:

We can also see it as an ongoing operation of ‘closing’ – where the possibility for suggesting or requesting alternatives are progressively excluded.

Indeed, for those who remain considered by the design, flexibility – one of the characteristics of opacity – requires in return that the multitude of possible options be restricted by design. Again, choices must be made that will lead the user down certain paths and not down others that would have been possible. This closure is, of course, “political” in the sense stated above of “serving or enclosing particular interests, and not others”. Captological technique thus dictates how the interface should behave to maximize attention time and direct the user towards the most “profitable” services – in the broad sense – or towards the most effective advertising footprints. Lastly:

We can also see it as an ongoing operation of ‘enclosing’ – where the design decisions become progressively ‘black-boxed’ so as to be inaccessible for further scrutiny.

Again, the “enclosing” that Introna mentions may or may not be intentional. It arises unintentionally from the intrinsic complexity of information technology (layered structure, etc.), from statistical decision-making techniques (artificial intelligence, etc.), or even from the dilution of algorithmic traces in gigantic global socio-technical networks. Some concealments can be lifted, though only very partially (legal access to data, history, publication of algorithms, etc.), but a complete disclosure, one that would bring to light all the political footprints of an implementation, remains essentially out of reach. We can only approach it (in the almost identical sense of a philosophical approach to the “being of beings”), and even this work of approach is imbued with politics, since it in turn requires choices of representation, method and language.

Disclosure

In the name of what values and what way of life is it possible to question technique if our judgment is based on the very values that it conveys?10

In the light of disclosive ethics, can there exist, as Tristan Harris suggests, something like “ethics by design”, that is to say a technical approach to the ethical problems posed by IT? Solutions that would allow us, for example, to spend our time “well”? Disclosive ethics does not answer, because it does not directly target the ultimate demand for moral sense and respect for the human being, nor the somewhat subjective demarcation between “good” and “bad”, which itself, as we have seen, is played out in concealment. It does indicate, however, that Tristan Harris’s proposals conceal another politics and another power game within the artefacts, carry out other exclusions, and therefore propose other solutions leading to another business.

Thus, in claiming that the individual (in general) demands a “fundamental freedom to live his/her life the way he/she wants”, is Tristan Harris not relying on a presumption of relatively good health and financial ease, of technical optimism and faith in progress? In that case, the demand is indeed one of freedom, autonomy and responsibility. But if we are among those excluded from this design and, say, struggling and less confident in the future, then our demand will essentially be for security and protection, certainly not for “time well spent”. This demand, too, can be met by design with a suitable technology, based on notifications, tracing, confinement in reassuring filter bubbles, etc. Is that “bad” for all that?

Disclosive ethics therefore goes in search of the “values” implicitly or explicitly enshrined in opaque systems and of the power games at stake there. This method of inquiry thus opens up a genuine space for ethics in general, one not reserved for engineers alone.

Moral Machines

This reflection unexpectedly led back to the article PageRank, Parcoursup and other Moral Machines, which dealt with the performative power of IT over our social structures and our belief systems. The final diagram can now be completed as follows:

We said: in short, our Belief Systems determine Information Theories (2) which, once mathematized and algorithmized, become performative: they produce new social realities. This is now possible because their diffusion in the Technological System is ensured by considerable technical and financial means (3). Thus, recycled into Moral Machines (4), they in turn transform our Belief Systems (1).

We add: technological “progress” travels the circle in the direction indicated, while ethical “progress” should today seek a new beginning by travelling it in the opposite direction, to disclose the accumulation of concealments made by designers, developers, companies, administrations, etc.11

Consequently, the central problem appears: can we eventually set technical progress and ethical progress moving in the same direction? It is basically this resynchronization that Tristan Harris tries to achieve but, one might say, by force: ethics becomes a problem to be solved, then a technical subject (by design), then a topic subjugated to information technology, and ends up contaminated by its opacity. How do we get out of this aporia? There is no good news: it is probably very hard! In principle, we should reverse the order and limit technique by ethics because, let us remember, according to Gabor’s famous law, everything that is technically feasible will be done. But the subordination of technique to ethics has become an illusion, since ethics itself is no longer indisputably subject to a higher and “necessary” (divine) order, and instead bobs gently in the waters of a dark swamp.


1. Alice Maruani / Rue 89 – November 21, 2016 – Tristan Harris : « Des millions d’heures sont juste volées à la vie des gens »
2. Gérald Bronner / PUF – 2020 – Apocalypse cognitive
3. Michel Desmurget / Éditions du Seuil – March 2020 – La fabrique du crétin digital
4. Rory O’Connor / February 28, 2009 – Apture: Web 3.0 Is Now
5. Wikipedia – Aza Raskin
6. Website of the association – Center for Humane Technology
7. Adi Robertson / The Verge – September 4, 2020 – Telling people to delete Facebook won’t fix the internet
8. Recommendations from the Center for Humane Technology, if that inspires you: Take Control
9. Lucas D. Introna / Ethics and Information Technology – 2005 – Disclosive Ethics and Information Technology: Disclosing Facial Recognition Systems
10. Guillaume Carron / PUF, Revue de métaphysique et de morale, 2013/3, No. 79, pp. 433–451 – 2013 – L’institution comme préalable à une éthique de la technique
11. If we evoke here a kind of “hermeneutics” of technical systems, we are led towards other paths, such as that of the “interrogative ethics” (“éthique interrogative”) proposed by the French philosopher Olivier Abel (the resonance of this term with “disclosive ethics” is striking). Following Abel briefly, let us remember here that disclosing is not only the beginning of an ethical process: it is also the opening of a common space “within which human beings can cohabit”. Nothing less!
