Artificial Intelligence as a Disinformation Engine

Revision #76
Dylan W. Wheeler
University of New Hampshire
Philosophy Thesis – In Review
May 13, 2020

ACKNOWLEDGMENTS

This research would not have been possible without the mentorship and direction given by Nicholas J. Smith, Ph.D. His undergraduate classes in topics of science, technology, and society originally inspired me to consider these pressing problems. Appreciation is due to Brandon Bryant for his expertise in cryptography and decentralized currencies. I am grateful for the wisdom of Tom Haines and Jaed Coffin, who each helped me crystallize the broad scope of disinformation and its implications. My heartfelt thanks to my parents for their unwavering support, which empowered me to enjoy so many opportunities.

Table of Contents

Introduction
Contemporary Disinformation
The Role of Artificial Intelligence
The Role of Social Media
Fuel for the Engine
Shutting Down the Engine
The Implications
    Social Media
    Governments
    Ideological Challenges
Recommendations
    Awareness Through Digital Literacy
    Transparency Through New Technologies
    Increase the Cost of Spam
Conclusion
Figures
Appendix A
Bibliography

Introduction

New forms of artificial intelligence (A.I.) can autonomously replace people in existing images or videos with the likeness of others. The resulting media are called deepfakes. These replacements can take any form, from basic face-swapping (think: Snapchat augmented-reality filters) to full-body mapping, where a person's body can be mapped onto videos of professional dancers to make it appear that they, too, are highly coordinated. People on the Internet are now using various kinds of A.I.
that are excellent at face-swapping to deepfake the actor Nicolas Cage into various movies that did not cast him. These algorithms allow anyone in the world to easily make it look like a target is doing or saying anything. In the wrong hands, these innocent algorithms enable the mass manufacture of disinformation (or "fake news"): false information spread deliberately and harmfully to deceive. If we are not careful, we could enter a post-truth future, where the truth about issues becomes overshadowed by emotionally charged headlines and discourse, making it even more challenging to identify truthful and genuine media. With few ways to determine which images and videos have been synthesized, the most sensationalist disinformation will go viral and attract the most attention, usually at the expense of a population or demographic. In this paper, I argue that AI-powered disinformation tools are becoming increasingly ubiquitous, and that the best way to address disinformation is to facilitate widespread digital literacy while building new technologies that provide increased context for news articles and social media posts. My central recommendations are to advocate for digital literacy education, to invest in technologies that provide ample context and transparency, and to pursue new technologies that deter inauthentic behavior.

Contemporary Disinformation

Disinformation exists in many forms but is generally characterized by a deliberate attempt to mislead and deceive people. American engineer and YouTuber Destin Sandlin interviewed Renée DiResta, a 2019 Mozilla Fellow in Media, Misinformation, and Trust, who said that perpetrators of disinformation can be both corporations and individuals and have two primary motivations: financial and ideological.[1] That is, they usually seek to extract advertising revenue from their manufactured content or to manipulate public opinion (usually for financial or political gain).
This reality is not surprising, given that money is power in our global economic system. Disinformation is not new, either; its earlier forms were often government-sponsored propaganda. Propaganda has taken various forms, with earlier examples being pamphlets posted in town squares. This model's cost per impression is high: governments would design and manufacture these pamphlets for the masses, but only people who were walking by and happened to stop would see them.

Computers and computer graphics have helped pave the way for the next generation of disinformation: synthetic media, or algorithmically created or modified media. Contemporary disinformation is far more dangerous because it can be created autonomously, using A.I. that synthesizes and spreads it without a single human interaction. No longer must curious people seek out and read propaganda; it is now delivered to people's inboxes and newsfeeds, without consent, and backed by social features that make engaging with it easier than ever before. Disinformation can now reach more people with comparatively little cost and effort.

Industry-grade tools like Adobe Photoshop (first released in 1990) and Adobe After Effects (released in 1993) gave humanity the ability to manipulate images and videos at large scale for the first time. No longer was it as expensive or time-consuming to cut and paste physical pieces of paper for mass production; we could now produce near-perfect computer-generated imagery (CGI) from the digital realm because a computer performed all the computationally intensive rendering. However, Adobe's robust software programs are still costly to use and offer an overwhelmingly vast selection of features. Only a team of talented graphic designers, given time and money, can learn how to doctor images and videos convincingly.

[1] Sandlin, Destin W. 2019. Manipulating the YouTube Algorithm.
This phenomenon is why CGI is typically used only by large-scale production companies with massive budgets, like Marvel Studios, whose budget for Avengers: Endgame was $356 million.[2]

As with all forms of nascent technology, the landscape is rapidly changing. With the advent of community-driven content-generating algorithms, the process of creating disinformation is automated. Developers from all over the world are researching, refining, and optimizing these algorithms and making them free to the public. Instead of needing an expensive computer graphics company to alter the visual data with proprietary tools, we can have free algorithms do the job just as well, if not better. FaceSwap, for example, is an open-source algorithmic face-swapping tool that anyone with an internet connection can utilize.[3] The software is maintained by over sixty contributors who have built an infrastructure that takes this highly technical process and simplifies it to a matter of (1) gathering image data for a subject and (2) metaphorically pressing "go." Real-Time Voice Cloning is another open-source tool that can not only clone a voice from mere minutes of sample audio but can also make that cloned voice speak whatever the author desires.[4] The software has received rave reviews[5] and spawned over 2,800 open-source variations.

[2] n.d. Avengers: Endgame.
[3] https://github.com/deepfakes/faceswap

Research and progress in these spaces are booming. For little-to-no cost, anyone can use these algorithms to produce deepfakes that look or sound utterly genuine to our senses. We are observing a shift in resources away from expensive creativity and talent toward inexpensive training data and processing power. Data and computational speed are resources that are becoming more accessible among individuals and corporations all