Reading time: 11 minutes
Translation by AB – April 15, 2020
Last month, a study on AI and its potentially malicious uses was published and widely reported in the press: « The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation ».
We build on this work to offer a commentary in two parts.
The first part draws on selected extracts from the study that shed light on some important aspects of AI in 2018: the feats this technology has accomplished in terms of “artificial senses” (vision, sound…), the distance it puts between us and the consequences of our actions, its ability to massively individualize services at low cost, and the ease with which it can be implemented.
The second part offers more “political” comments on the study. When tackling the subject of malicious uses of AI, you inevitably set foot in the field of morals and ethics, with your own culture and your own subjectivity. We therefore propose to examine the standpoint from which the authors give their opinions and recommendations. In our view, this ethical ground deserves to be occupied and approached from other angles and by other actors of public life, including ourselves, despite our occasional lack of “technicality”.
AI comes out of the shadows
Two American academics cited in the study, M.I. Jordan and T.M. Mitchell, recall why AITs (Artificial Intelligence Techniques) came out of a long “coma”:
Factors that help to explain these recent gains include the exponential growth of computing power, improved machine learning algorithms (especially in the area of deep neural networks), development of standard software frameworks for faster iteration and replication experiments, larger and more widely available datasets, and expanded commercial investments.
AITs were not born yesterday, and their thaw results almost exclusively from factors of scale (computing power, abundant data, massive investment…), organization and convergence (startups, frameworks, AIT/machine couplings…). No new theory, no Einstein of the artificial mind, but the result is there: AITs are now spreading through overpowered, miniaturized infrastructures, following our data, our images, our voices and all our digital traces.
But what are they capable of today?
In/Super-human artificial senses
The most spectacular advances concern image processing and recognition (identifying someone or something in a photo or video), where human performance has been exceeded since 2016.
Technically, it is already impossible to escape surveillance cameras. Even on the run, our face can be automatically recognized, making an Orwellian fantasy achievable: mass control and surveillance¹. AITs equip the digital world with millions of lynx eyes (or even better, since they can in theory cover the whole electromagnetic spectrum and, more generally, all physical signals beyond our senses). The next step is to interpret a scene in which the characters and elements of the environment have been identified. “Understanding” the scenes of real life, what is going on, is an essential skill for reinforcing the digital embrace².
AITs also make it possible to create absolutely breathtaking images and videos, with an “OK” side (in English)³:
But there is also the dark side of hackers already capable of the worst⁴. We will not dwell on it…
What is possible with images is just as possible with sounds or language… AITs therefore expose us to a flood of “fakes” that are humanly impossible to detect (unless, perhaps, our own senses are augmented by AI). Let us also mention speech recognition (“OK Google”…), language “understanding”, games (chess, Go, video games…), the ability of AITs to develop strategic behaviors, vehicle driving, etc.
We have to get used to it: AI already offers “digital senses” beyond all human capacity. AITs are now boarding the real world, equipping the infrastructures and platforms of digital players to rival the most capable creatures of the animal world.
AI is easy!
You might think all this is very complicated. Not really, and that is the problem. The authors of the study do not fail to emphasize that AI technologies present no barrier to entry:
In truth, many recent AI algorithms can be replicated in a matter of days or weeks. In addition, the culture of AI research is characterized by a high level of openness and sharing, with many articles accompanied by source code. […] One can easily find open-source face detection, navigation and planning algorithms, and multi-agent distributed intelligence frameworks⁵ that can be exploited for malicious purposes.
Technological complexity therefore does not put AITs out of reach of malicious users. Moreover, when it comes to cybercrime, we know that the value chains are perfectly organized and that everyone has their job: ransomware is bought and sold, just as tomorrow (or perhaps already today) configurable, fully documented malicious AI will be available on darknet stalls.
“Phishing” is a well-known attack technique aimed at extracting information or triggering a malicious action by deceiving its target with a trustworthy facade: a fake email, SMS, tweet… and now, thanks to AITs, a fake video, a fake voice, etc.
To reach a level of trust sufficient for the target to lower their guard, the attacker must collect information about them (name, friends, professional activity, interests, sites visited, GPS trails… all kinds of digital traces) and assemble a “credible mask” from it. The difficulty lies in the cost/benefit equation: collecting information and crafting a credible message to target a single person can be long, costly and unprofitable. AITs now allow phishing to be individualized en masse at low cost, from the collection of information (digital and physical sensors) to the production of the message (text, audio and/or video) in the recipient’s language.
Let us remember this: spear phishing (“intelligent” phishing, that is) is the shadow cast by AIT-driven mass individualization, which generates trust and empathy at low cost: a marketing dream in our society of the individual.
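Stripped of the AI gloss, the cost asymmetry described here comes down to automated template filling over harvested traces: the work of crafting one credible message is done once, then stamped onto any number of profiles. A deliberately benign, simplistic sketch (the profile fields and template are hypothetical; real attacks would substitute AIT-generated text, audio or video, but the economics are the same):

```python
# Mass individualization in its crudest form: one "credible mask"
# template, applied to many harvested profiles at near-zero marginal cost.
TEMPLATE = ("Hi {name}, great seeing you at {event}! "
            "Here is the document {colleague} mentioned: {link}")

# Hypothetical profiles, standing in for scraped digital traces
profiles = [
    {"name": "Ana", "event": "PyData", "colleague": "Marc", "link": "..."},
    {"name": "Ben", "event": "FOSDEM", "colleague": "Lea", "link": "..."},
]

# Each message is individualized, yet producing one more costs nothing
messages = [TEMPLATE.format(**profile) for profile in profiles]
for message in messages:
    print(message)
```

The point of the sketch is the cost curve, not the technique: once the collection and generation steps are automated, targeting ten thousand people is barely harder than targeting one.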
AI-assisted activities are multiplying. Our “intelligent appendages” (personal assistants, robots…) are being deployed and “augment” us with their superhuman sensors and actuators. As a consequence, the “psychological distance” between humans grows, and can stretch to the point where attacking (in the broad sense) no longer arouses guilt, where aggression is no longer restrained by our education, our culture or our history. The victim is simply virtual, even entirely “handled” by our near or distant digital appendages.
We would thus be overexposed to purely technical, amoral attacks. Perhaps, but if so, these distance-induced characteristics also apply on the “OK” side: economy, society, politics…
The phenomenon of “technological resonance” (Manifesto) reveals itself here as a combinatorics of possibilities. In other words, the number of potential kinds of attack explodes.
Take an autonomous “body”: a vehicle, a (delivery…) drone, a robot… Equip it with image recognition, sound recognition and scene interpretation tools… It becomes clear that a physical attack can be carried out without any remote control. Thanks to AITs, the autonomy of artefacts gradually extends in time (longer) and in space (further). Moreover, the interoperability of “bodies” and algorithms makes it possible to envisage fast, inexpensive assemblies and kits.
Let us quote the study again:
As far as cyber-physical systems are concerned, the Internet of Things (IoT) is often advertised as a great step forward in terms of efficiency and usefulness, but it is also recognized as very fragile and it represents another attack vector by which AI systems controlling key systems could be sabotaged, causing more damage than would have been possible had these systems been controlled by humans.
Thus, physical threats keep pace with the physical capacities of AITs, assembled into networks or coupled to bodies.
By mentioning these few characteristics of possible attacks, we understand a little better what AITs are capable of today. What can we do to protect ourselves from their malicious uses?
The study’s authors make some proposals, but first of all, who are they? Most are researchers from well-known Anglo-American organizations:
- the « Future of Humanity Institute » and the « Centre for the Study of Existential Risk », two institutions (already discussed here: A future without us) actively campaigning to take part in the political game;
- Oxford and Cambridge Universities, as well as the « Center for a New American Security» (CNAS);
- the « Electronic Frontier Foundation », an organization active in defending liberties on the internet, one of whose co-founders, John Perry Barlow, who died last month, is known for having written the famous 1996 “Declaration of the Independence of Cyberspace”;
- « OpenAI », a non-profit organization co-chaired by… Elon Musk.
The observations and recommendations therefore originate from a network of researchers and activists rather inclined, as we will see, to “take matters into their own hands”.
Here is the starting point:
AI promotes changes in the nature of communication between individuals, businesses and states, so that everyone is connected through automated devices that produce and deliver content.
The pervasiveness of the Internet allows a new information environment to develop, in which (many) effects are no longer attributable to clear causes. Classical reason becomes ineffective, our relationship to “truth” less direct. The entire production of intangible content is therefore now liable to be attacked, altered, manipulated. By whom? By what?
[…] Social media technologies are available to the authorities as well as to protesters: they allow military intelligence to monitor feelings and attitudes […] some claim that social networks have helped to polarize political discourse by allowing users, Westerners in particular, to self-select their own echo chambers, but others question this assumption. Machine learning algorithms running on these platforms prioritize the content that users are expected to like.
Therefore, AITs specially designed to adapt your news feed (and tomorrow, why not, to write tweets and messages for you) allow us to envisage […] mass manipulation (was this already the case in the last US elections?).
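The prioritization logic the study describes can be caricatured in a few lines: predict a “liking” score for each item given the user’s inferred interests, and serve the highest-scoring items first. A toy sketch (the interest weights, topics and scoring rule are entirely hypothetical; real platforms use learned models, but the self-selection dynamic is the same):

```python
# Toy engagement ranking: score each post by how well its topics match
# the user's inferred interests, then show the best matches first.
# Feeding users what they already like is exactly the mechanism behind
# the "echo chamber" self-selection discussed above.
user_interests = {"politics": 0.9, "sports": 0.1, "cats": 0.5}  # hypothetical

posts = [
    {"id": 1, "topics": ["sports"]},
    {"id": 2, "topics": ["politics", "cats"]},
    {"id": 3, "topics": ["politics"]},
]

def predicted_liking(post):
    # Sum the user's affinity for each topic the post touches
    return sum(user_interests.get(topic, 0.0) for topic in post["topics"])

feed = sorted(posts, key=predicted_liking, reverse=True)
print([p["id"] for p in feed])  # → [2, 3, 1]: the most "likeable" posts first
```

Even this crude scorer already narrows the feed toward the user’s existing preferences; a learned model trained on clicks only sharpens the effect.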
Authoritarian regimes in particular may benefit from an information landscape where objective truth becomes devalued and “truth” is whatever authorities claim it to be.
Here the ground is prepared for insinuating doubt about the “authorities” and driving in the final nail…
… with the hammer of pure logic:
[…] appropriate responses to these issues may be hampered by two self-reinforcing factors: first, a lack of deep technical understanding on the part of policymakers, potentially leading to poorly-designed or ill-informed regulatory, legislative, or other policy responses; second, reluctance on the part of technical researchers to engage with these topics, out of concern that association with malicious use would tarnish the reputation of the field and perhaps lead to reduced funding or premature regulation.
Here we face the recurrent argument, which we contest, that scientists alone should take charge of the ethics of their field:
[…] we believe it is important that researchers consider it their responsibility to take whatever steps they can to help promote beneficial uses of the technology and prevent harmful uses.
Who could say otherwise? It is not surprising to observe the beginnings of the same phenomenon in France (with neuroscience, for example). But this time, the argument that “politicians understand nothing about it, so scientists must step in” lands on a sore spot: security. Unstoppable!
Although this extreme point of view is not shared by all the authors of the report, it is worth recalling what John Perry Barlow wrote in 1996, which remains in the background:
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
Winners take even more…
Here is finally an important point drawn from the conclusion of the study:
In the absence of significant efforts, determining the origin of attacks and punishing the aggressors will probably be difficult, which could lead to a permanent succession of low- and medium-intensity attacks, eroding trust within our societies, between corporations and their governments, and between governments themselves.
If truth is a matter of intersubjectivity, then whatever is “vouched for as true” by the greatest number seems “truer”. So, when it comes to trust, “winners take all”, as they say. Therefore:
Tech and media giants can become safe havens for people: with their access to relevant data on a massive scale and their mastery of products and communication channels [and of the underlying technical infrastructure – see Trevor Paglen’s work along submarine cables], they are in the highly favorable position of being able to offer their users adequate protection.
Consequently, the GAFAM become even more indispensable! The question of the role of governments obviously remains central, and political options are sketched out:
Nations will be under pressure to protect their citizens and their own political stability in the face of malicious uses of AI. They may then want to directly control the digital infrastructures [datacenters…] and communication infrastructures [cables, satellites…], or else develop fruitful collaborations with the private entities which control these infrastructures, or even put in place judicious and binding legislation articulated with well-designed financial incentives.
In other words: we experts, researchers and entrepreneurs must take the lead and act alongside political power to avoid this “catastrophe”: that states should take it into their heads to control the infrastructures themselves (which, in a truly transparent democracy, would obviously be the best solution from the citizen’s point of view…)!
This point being rather delicate, the conclusion of the report will surprise no one:
There are still many disagreements among the co-authors of this report, not to mention the many communities of experts around the world.
We would really have liked to know what these disagreements were!
1. ↑ Angélique Forget on France Culture (short podcast, about 4 minutes) – January 9, 2018 – En Chine, la cybersurveillance par la reconnaissance faciale
2. ↑ Elsa Trujillo in Le Figaro – October 25, 2017 – Google entraîne son intelligence artificielle à interpréter les gestes humains
3. ↑ Incidentally, here is a good illustration of the difference between AI Techniques (AITs), which have nothing to do with intelligence, and “AI”, a term open to every interpretation. Doctoring images is nothing new (Photoshop…), but AITs, thanks to raw computing power, now allow this doctoring to be done on video, in real time and hyperrealistically. Undoubtedly technical progress, but nothing more.
4. ↑ Elodie in Journal du Geek – January 25, 2018 – Grâce à cette IA, tout le monde peut figurer dans un film porno
5. ↑ An approach based on a multitude of simple agents whose emergent global behavior has complex characteristics, like a colony of termites or ants.