Deepfakes: a New Threat to Face Recognition? Assessment and Detection
Pavel Korshunov and Sébastien Marcel


arXiv:1812.08685v1 [cs.CV] 20 Dec 2018

Abstract—It is becoming increasingly easy to automatically replace the face of one person in a video with the face of another person by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help develop such methods, in this paper we present the first publicly available set of Deepfake videos generated from videos of the VidTIMIT database. We used open source software based on GANs to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulting videos. To demonstrate this impact, we generated videos with low and high visual quality (320 videos each) using differently tuned parameter sets. We show that state of the art face recognition systems based on VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates respectively (on the high quality versions), which means methods for detecting Deepfake videos are necessary. Considering several baseline approaches, we found that an audio-visual approach based on lip-sync inconsistency detection was not able to distinguish Deepfake videos. The best performing method, which is based on visual quality metrics and is often used in the presentation attack detection domain, resulted in an 8.97% equal error rate on high quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and further development of face swapping technology will make them even more so.

Index Terms—Deepfake videos, Face swapping, Video database, Tampering detection, Face recognition

I. INTRODUCTION

Recent advances in automated video and audio editing tools, generative adversarial networks (GANs), and social media allow the creation and fast dissemination of high quality tampered video content. Such content has already led to the appearance of deliberate misinformation, coined 'fake news', which is impacting the political landscapes of several countries [1]. A recent surge of videos, often obscene, in which a face is swapped with someone else's using neural networks, so called Deepfakes (open source: https://github.com/deepfakes/faceswap), is of great public concern (BBC report, Feb 3, 2018: http://www.bbc.com/news/technology-42912529). Accessible open source software and apps for such face swapping lead to large amounts of synthetically generated Deepfake videos appearing in social media and news, posing a significant technical challenge for the detection and filtering of such content. Therefore, the development of efficient tools that can automatically detect these videos with swapped faces is of paramount importance.

Until recently, most research focused on advancing the face swapping technology itself [2], [3], [4], [5]. However, responding to the public demand to detect face swapping, researchers are starting to work on databases and detection methods, including image and video data [6] generated with an older face swapping approach, Face2Face [7], or videos collected using the Snapchat application (https://www.snapchat.com/) [8].

In this paper, we present the first publicly available database of videos where faces are swapped using an open source GAN-based approach (https://github.com/shaoanlu/faceswap-GAN), which was developed from the original autoencoder-based Deepfake algorithm. We manually selected 16 similar looking pairs of people from the publicly available VidTIMIT database (http://conradsanderson.id.au/vidtimit/). For each of the 32 subjects, we trained two different models, referred to in this paper as low quality (LQ), with 64×64 input/output size, and high quality (HQ), with 128×128 size (see Figure 1 for examples). Since there are 10 videos per person in the VidTIMIT database, we generated 320 videos for each version, resulting in a total of 620 videos with swapped faces. For the audio, we kept the original audio track of each video, i.e., no manipulation was done to the audio channel.

It is also important to understand how much of a threat Deepfake videos are to face recognition systems, because if these systems are not fooled by Deepfakes, a separate system for detecting Deepfakes would not be necessary. To assess the vulnerability of face recognition to Deepfake videos, we evaluate two state of the art systems, based on the VGG [9] and Facenet (https://github.com/davidsandberg/facenet) [10] neural networks, on both untampered videos and videos with swapped faces.

For detection of the Deepfakes, we first used an audio-visual approach that detects inconsistency between visual lip movements and the speech in the audio [11]. It allows us to understand how well the generated Deepfakes can mimic mouth movement and whether the lips are synchronized with the speech. We also applied several baseline methods from the presentation attack detection domain, by treating Deepfake videos as digital presentation attacks [8], including simple principal component analysis (PCA) and linear discriminant analysis (LDA) approaches, and an approach based on image quality metrics (IQM) and a support vector machine (SVM) [12], [13].

To allow researchers to verify, reproduce, and extend our work, we provide the database of Deepfake videos (https://www.idiap.ch/dataset/deepfaketimit) and the face recognition and Deepfake detection systems with corresponding scores as an open source Python package (https://gitlab.idiap.ch/bob/bob.report.deepfakes).
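The vulnerability and detection results in this paper are reported as false acceptance rates (FAR) and equal error rates (EER) computed from comparison scores. As a rough illustration of how such numbers are obtained from score files, here is a minimal numpy sketch; the threshold-sweep EER is simplified compared to what a full evaluation toolkit (such as the bob framework the authors build on) does, and the score distributions in the demo are synthetic:

```python
import numpy as np

def far_at_threshold(attack_scores, threshold):
    """Fraction of attack (e.g., Deepfake) scores accepted at a threshold.

    Scores are assumed to be similarity scores: higher means "same person".
    """
    attack_scores = np.asarray(attack_scores, dtype=float)
    return float(np.mean(attack_scores >= threshold))

def equal_error_rate(genuine_scores, impostor_scores):
    """Return (eer, threshold) where false acceptance ~= false rejection.

    Simple sweep over all observed score values; production code would
    interpolate between thresholds instead of picking the closest one.
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_t, best_gap, best_eer = thresholds[0], np.inf, 1.0
    for t in thresholds:
        fa = np.mean(impostor >= t)   # false acceptance rate at t
        fr = np.mean(genuine < t)     # false rejection rate at t
        if abs(fa - fr) < best_gap:
            best_gap, best_t, best_eer = abs(fa - fr), t, (fa + fr) / 2
    return float(best_eer), float(best_t)

# Demo on synthetic score distributions (not the paper's actual scores).
rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 1000)    # same-person comparison scores
deepfake = rng.normal(0.0, 1.0, 1000)   # Deepfake-vs-target comparison scores
eer, t = equal_error_rate(genuine, deepfake)
print(f"EER = {eer:.3f} at threshold {t:.3f}")
```

A FAR of 95.00%, as reported for Facenet on HQ Deepfakes, would mean that 95% of Deepfake probes score above the system's acceptance threshold.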
Therefore, this paper has the following main contributions:
• a publicly available database of low and high quality sets of videos from the VidTIMIT database with faces swapped using a GAN-based approach;
• a vulnerability analysis of VGG and Facenet based face recognition systems;
• an evaluation of several methods for detecting Deepfakes, including a lip-syncing approach and an image quality metrics with SVM method.

P. Korshunov and S. Marcel are with the Idiap Research Institute, Martigny, Switzerland.

Fig. 1: Screenshots of the original videos from the VidTIMIT database and of the low quality (LQ) and high quality (HQ) Deepfake videos: (g) original 1, (h) original 2, (i) LQ swap 1→2, (j) HQ swap 1→2, (k) LQ swap 2→1, (l) HQ swap 2→1.

II. RELATED WORK

One of the first works on face swapping is by Bitouk et al. [14], where the authors searched a database for a face similar in appearance to the input face and then focused on perfecting the blending of the found face into the input image. The main motivation for this work was the de-identification of an input face and its privacy preservation; hence, the approach did not allow for seamless swapping of any two given faces. Until the latest era of neural networks, most techniques for face swapping or facial reenactment were based on similarity searches between faces or face patches in the target and source videos, combined with various blending techniques [15], [16], [17], [18], [4].

The first approach that used a generative adversarial network to train a model between two pre-selected faces was proposed by Korshunova et al. in 2017 [3]. Another related work, with an even more ambitious idea, used a long short-term memory (LSTM) based architecture to synthesize a mouth feature solely from audio speech [19]. Right after these publications became public, they attracted a lot of publicity. Open source approaches replicating these techniques started to appear, which resulted in the Deepfake phenomenon.

The rapid spread of Deepfakes and the ease of generating such videos call for a reliable detection method. So far, however, there are only a few publications focusing on detecting GAN-generated videos with swapped faces, and very little data for evaluation and benchmarking is publicly available. For instance, Zhang et al. [20] proposed a method based on speeded up robust features (SURF) descriptors and an SVM, evaluated on images where faces were swapped with techniques based on Gaussian blurring, which means the facial expressions of the input faces were not preserved. Another method, based on LBP-like features with an SVM classifier, was proposed by Agarwal et al. [8] and evaluated on videos collected by the authors with the Snapchat phone application. Snapchat uses an active 3D model to swap faces in real time, so the resulting videos are not really Deepfakes; still, it is a widely used tool, and a database of such videos, if it ever becomes public (the authors promised to release it but had not done so at the moment of publication), could be interesting to the research community.

Rössler et al. [6] presented the most comprehensive database of non-Deepfake swapped faces (500,000 images from more than 1000 videos) to date, and also benchmarked state of the art forgery classification and segmentation methods. The authors used the Face2Face [7] tool to generate the database, which is based on expression transformation using a 3D facial model and a pre-computed database of mouth interiors. One of the latest approaches [21] proposed to use blinking detection as a means to distinguish swapped faces in Deepfake videos. The authors generated 49 videos (not publicly available) and argued that the proposed eye blinking detection was effective in detecting Deepfake videos.

However, no public database of Deepfake videos generated with a GAN-based approach is available. Hence, it is unclear whether the above methods would be effective in detecting such faces. In fact, the Deepfakes that we have generated can effectively mimic facial expressions, mouth movements, and blinking, so the current detection approaches need to be evaluated on such videos. For instance, it is practically impossible to evaluate the methods proposed in [6] and [21], as their implementations are not yet available.

III. DEEPFAKE DATABASE

As original data, we took videos from the VidTIMIT database. The database contains 10 videos for each of 43 subjects, which were shot in a controlled environment with people facing the camera and reciting predetermined short phrases.
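The GAN-based software used to generate the database builds on the original Deepfake idea of an autoencoder with one shared encoder and one decoder per identity: at inference time, a face of subject A is encoded and then decoded with subject B's decoder, producing A's pose and expression with B's identity. The following is only a structural sketch of that wiring, with untrained toy linear layers and flattened 64×64 grayscale "faces"; the real faceswap-GAN uses convolutional networks, adversarial and perceptual losses, and blending of the output back into the frame:

```python
import numpy as np

rng = np.random.default_rng(42)
FACE_DIM, LATENT_DIM = 64 * 64, 256   # flattened 64x64 crop, as in the LQ model

def linear(in_dim, out_dim):
    """A toy dense layer: (weights, bias) with small random values."""
    return rng.normal(0.0, 0.01, (out_dim, in_dim)), np.zeros(out_dim)

# One encoder shared by both identities; one decoder per identity.
# Training (omitted here) would reconstruct A-faces through
# (encoder, decoder_a) and B-faces through (encoder, decoder_b),
# forcing both identities into a common latent space.
encoder = linear(FACE_DIM, LATENT_DIM)
decoder_a = linear(LATENT_DIM, FACE_DIM)
decoder_b = linear(LATENT_DIM, FACE_DIM)

def apply(layer, x):
    w, b = layer
    return np.tanh(w @ x + b)   # toy non-linearity

def swap_to_b(face_of_a):
    """Encode a face of subject A, decode with B's decoder."""
    latent = apply(encoder, face_of_a)
    return apply(decoder_b, latent)

frame_of_a = rng.random(FACE_DIM)   # stand-in for an aligned face crop
fake_frame = swap_to_b(frame_of_a)
print(fake_frame.shape)             # (4096,)
```

Because the encoder is shared, the latent code captures pose and expression while each decoder supplies identity-specific appearance, which is why the quality of the swap depends so strongly on training and blending parameters.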