DEEPFAKES & DISINFORMATION
Agnieszka M. Walorska
ANALYSIS

IMPRINT

Publisher
Friedrich Naumann Foundation for Freedom
Karl-Marx-Straße 2
14482 Potsdam
Germany

freiheit.org
/FriedrichNaumannStiftungFreiheit
/FNFreiheit

Author
Agnieszka M. Walorska

Editors
International Department
Global Themes Unit
Friedrich Naumann Foundation for Freedom

Concept and layout
TroNa GmbH

Contact
Phone: +49 (0)30 2201 2634
Fax: +49 (0)30 6908 8102
Email: [email protected]

As of
May 2020

Photo Credits
Photomontages © Unsplash.de, © freepik.de, P. 30 © AdobeStock
Screenshots P. 16 © https://youtu.be/mSaIrz8lM1U
P. 18 © deepnude.to / Agnieszka M. Walorska
P. 19 © thispersondoesnotexist.com
P. 19 © linkedin.com
P. 19 © talktotransformer.com
P. 25 © gltr.io
P. 26 © twitter.com
All other photos © Friedrich Naumann Foundation for Freedom (Germany)
P. 31 © Agnieszka M. Walorska

Notes on using this publication
This publication is an information service of the Friedrich Naumann Foundation for Freedom. It is available free of charge and not for sale. It may not be used by parties or by election workers for the purpose of election campaigning (Bundestag, regional and local elections, and elections to the European Parliament).
Licence
Creative Commons (CC BY-NC-ND 4.0)
https://creativecommons.org/licenses/by-nc-nd/4.0

CONTENTS

EXECUTIVE SUMMARY 6
GLOSSARY 8
1.0 STATE OF DEVELOPMENT: ARTIFICIAL INTELLIGENCE AND ITS ROLE IN DISINFORMATION 12
2.0 CHEAPFAKES & DEEPFAKES: TECHNOLOGICAL POSSIBILITIES FOR THE MANIPULATION OF TEXT, IMAGES, AUDIO AND VIDEO 14
2.1 DEEPFAKES VS CHEAPFAKES 15
2.2 EXAMPLES OF APPLICATION 16
    MANIPULATION OF MOVEMENT PATTERNS 16
    VOICE AND FACIAL EXPRESSIONS 17
    IMAGE MANIPULATION: DEEPNUDE AND ARTIFICIAL FACES 18
    AI-GENERATED TEXTS 19
3.0 DISSEMINATION & CONSEQUENCES: HOW DANGEROUS ARE DEEPFAKES IN REALITY? 20
3.1 DISSEMINATION 20
3.2 CONSEQUENCES 21
3.3 ARE THERE ANY EXAMPLES OF POSITIVE APPLICATIONS OF DEEPFAKES? 22
4.0 HOW CAN WE FACE THE CHALLENGES ASSOCIATED WITH DEEPFAKES? 24
4.1 TECHNOLOGICAL SOLUTIONS FOR IDENTIFYING AND COMBATING DEEPFAKES 24
4.2 SELF-REGULATION ATTEMPTS BY SOCIAL MEDIA PLATFORMS 26
4.3 REGULATION ATTEMPTS BY LEGISLATORS 28
4.4 THE RESPONSIBILITY OF THE INDIVIDUAL – CRITICAL THINKING AND MEDIA LITERACY 29
5.0 WHAT'S NEXT? 30

EXECUTIVE SUMMARY

Applications of Artificial Intelligence (AI) are playing an increasing role in our society – but the new possibilities of this technology come hand in hand with new risks. One such risk is misuse of the technology to deliberately disseminate false information. Although politically motivated dissemination of disinformation is certainly not a new phenomenon, technological progress has made the creation and distribution of manipulated content much easier and more efficient than ever before. With the use of AI algorithms, videos can now be falsified quickly and relatively cheaply ("deepfakes") without requiring any specialised knowledge.

The discourse on this topic has primarily focused on the potential use of deepfakes in election campaigns, but this type of video only makes up a small fraction of all such manipulations: in 96% of cases, deepfakes were used to create pornographic films featuring prominent women. Women from outside of the public sphere may also find themselves as the involuntary star of this kind of manipulated video (deepfake revenge pornography). Additionally, applications such as DeepNude allow static images to be converted into deceptively real nude images. Unsurprisingly, these applications only work with images of female bodies.

But visual content is not the only type of content that can be manipulated or produced algorithmically. AI-generated voices have already been successfully used to conduct fraud, resulting in high financial damages, and GPT-2 can generate texts that invent arbitrary facts and citations.

What is the best way to tackle these challenges? Companies and research institutes have already invested heavily in technological solutions to identify AI-generated videos. The benefit of these investments is typically short-lived: deepfake developers respond to technological identification solutions with more sophisticated methods – a classical example of an arms race. For this reason, platforms that distribute manipulated content must be held more accountable. Facebook and Twitter have now imposed their own rules for handling manipulated content, but these rules are not uniform, and it is not desirable to leave it to private companies to define what "freedom of expression" entails.

The German federal government is clearly unprepared for the topic of "applications of AI-manipulated content for purposes of disinformation", as shown by the brief parliamentary inquiry submitted by the FDP parliamentary group in December 2019. There is no clearly defined responsibility within the government for the issue and no specific legislation; so far, only "general and abstract rules" have been applied. The replies given by the federal government suggest neither a concrete strategy nor any intention of investing in order to be better equipped to deal with this issue. In general, the existing regulatory attempts at the German and European level do not appear sufficient to curb the problem of AI-based disinformation. But this does not necessarily have to be the case. Some US states have already passed laws against both non-consensual deepfake pornography and the use of this technology to influence voters.

Accordingly, legislators should create clear guidelines for digital platforms to handle deepfakes in particular, and disinformation in general, in a uniform manner. Measures can range from labelling manipulated content as such and limiting its distribution (excluding it from recommendation algorithms) to deleting it.

Promoting media literacy should also be made a priority for all citizens, regardless of age. It is important to raise awareness of the existence of deepfakes among the general public and develop the ability of individuals to analyse audiovisual content – even though it is becoming increasingly difficult to identify fakes. In this regard, it is well worth taking note of the approach taken by the Nordic countries, especially Finland, whose population was found to be the most resilient to disinformation.

Still, there is one thing that we should not do: give in to the temptation of banning deepfakes completely. Like any technology, deepfakes open up a wealth of interesting possibilities – including for education, film and satire – despite their risks.
GLOSSARY

Artificial General Intelligence / Strong AI
The concept of strong AI or AGI refers to a computer system that masters a wide range of different tasks and thereby achieves a human-like level of intelligence. Currently, no such AI application exists. For instance, no single system is currently able to recognise cancer, play chess and drive a car, even though there are specialised systems that can perform each task separately. Multiple research institutes and companies are currently working on strong AI, but there is no consensus on whether it can be achieved, and, if so, when.

Big Tech
The term "Big Tech" is used in the media to collectively refer to a group of dominant companies in the IT industry. It is often used interchangeably with "GAFA" or "the Big Four" for Google, Apple, Facebook, and Amazon (or "GAFAM" if Microsoft is included). For the Chinese big tech companies, the abbreviation BATX is used, for Baidu, Alibaba, Tencent, and Xiaomi.

Cheapfakes / Shallowfakes
In contrast to deepfakes, shallowfakes are image, audio or video manipulations created with relatively simple technologies. Examples include reducing the speed of an audio recording or displaying content in a modified context.

DARPA
The Defense Advanced Research Projects Agency is part of the US Department of Defense, entrusted with the task of researching and funding groundbreaking military technologies. In the past, projects funded by DARPA have resulted in major technologies that are also used in non-military applications, including the internet, machine translation and self-driving vehicles.

Deepfake
Deepfakes (a portmanteau of deep learning and fake) are the product of two AI algorithms working together in a so-called Generative Adversarial Network (GAN). GANs are best described as a way to algorithmically generate new types of data from existing datasets. For example, a GAN could analyse thousands of pictures of Donald Trump and then generate a new picture that is similar to the analysed images but not an exact copy of any of them. This technology can be applied to various types of content – images, moving images, sound, and text. The term deepfake is primarily used for audio and video content.

Deep Porn
Deep porn refers to the use of deep learning methods to generate artificial pornographic images.

Generative Adversarial Network
Generative adversarial networks are algorithmic architectures based on a pair of two neural networks, namely one generative network and one discriminatory network. The two networks compete against one another (the generative network generates data and the discriminatory network falsifies it).
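The adversarial setup described in the glossary – a generative network that produces candidates and a discriminatory network that tries to tell them apart from real data – can be illustrated with a deliberately tiny sketch. The example below is not taken from the report: it trains a two-parameter "generator" to imitate one-dimensional real data (samples from a normal distribution) against a logistic "discriminator", with all parameter names and values chosen purely for illustration. Real deepfake systems use deep neural networks for both roles, but the competitive training loop has the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy GAN on 1-D data. "Real" samples come from N(4.0, 0.5).
# Generator:      g(z) = a*z + b          (maps noise z to fake samples)
# Discriminator:  D(x) = sigmoid(w*x + c) (estimated probability x is real)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_gan(steps=3000, batch=64, lr=0.02):
    a, b = 1.0, 0.0   # generator parameters (illustrative starting values)
    w, c = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        # --- discriminator step: push D(real) towards 1, D(fake) towards 0 ---
        x_real = rng.normal(4.0, 0.5, batch)
        z = rng.normal(0.0, 1.0, batch)
        x_fake = a * z + b
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)
        # gradient ascent on  mean[log D(real)] + mean[log(1 - D(fake))]
        w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
        c += lr * np.mean((1 - d_real) - d_fake)
        # --- generator step: push D(fake) towards 1 (fool the discriminator) ---
        z = rng.normal(0.0, 1.0, batch)
        x_fake = a * z + b
        d_fake = sigmoid(w * x_fake + c)
        # gradient ascent on  mean[log D(fake)]
        a += lr * np.mean((1 - d_fake) * w * z)
        b += lr * np.mean((1 - d_fake) * w)
    return a, b

a, b = train_gan()
fake_mean = np.mean(a * rng.normal(0.0, 1.0, 10000) + b)
print(f"mean of generated samples: {fake_mean:.2f} (real data mean: 4.0)")
```

After training, the generated samples cluster near the real data's mean even though the generator never sees the real samples directly – it only receives feedback via the discriminator's judgements, which is the "arms race" dynamic the report describes at societal scale between deepfake creators and detectors.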