DeepFakes: a New Threat to Face Recognition? Assessment and Detection

Pavel Korshunov and Sébastien Marcel

P. Korshunov and S. Marcel are with the Idiap Research Institute, Martigny, Switzerland; emails: pavel.korshunov@idiap.ch, [email protected]
Abstract—It is becoming increasingly easy to automatically replace the face of one person in a video with the face of another person by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help develop such methods, in this paper we present the first publicly available set of Deepfake videos generated from videos of the VidTIMIT database. We used open source software based on GANs to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulting videos. To demonstrate this impact, we generated videos of low and high visual quality (320 videos each) using differently tuned parameter sets. We show that state of the art face recognition systems based on the VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates (on the high quality versions) respectively, which means methods for detecting Deepfake videos are necessary. Considering several baseline approaches, we found that an audio-visual approach based on lip-sync inconsistency detection was not able to distinguish Deepfake videos. The best performing method, which is based on visual quality metrics and is often used in the presentation attack detection domain, resulted in an 8.97% equal error rate on high quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and the further development of face swapping technology will make them even more so.

Index Terms—Deepfake videos, Face swapping, Video database, Tampering detection, Face recognition

I. INTRODUCTION

Recent advances in automated video and audio editing tools, generative adversarial networks (GANs), and social media allow the creation and fast dissemination of high quality tampered video content. Such content has already led to the appearance of deliberate misinformation, coined 'fake news', which is impacting the political landscapes of several countries [1]. A recent surge of videos, often obscene, in which a face can be swapped with someone else's using neural networks, the so-called Deepfakes (open source: https://github.com/deepfakes/faceswap), is of great public concern (see, e.g., the BBC report of Feb 3, 2018: http://www.bbc.com/news/technology-42912529). Accessible open source software and apps for such face swapping lead to large amounts of synthetically generated Deepfake videos appearing in social media and news, posing a significant technical challenge for the detection and filtering of such content. The development of efficient tools that can automatically detect these videos with swapped faces is therefore of paramount importance.

Until recently, most research focused on advancing the face swapping technology itself [2], [3], [4], [5]. However, responding to the public demand to detect face swapping technology, researchers are starting to work on databases and detection methods, including image and video data [6] generated with an older face swapping approach, Face2Face [7], or videos collected using the Snapchat application (https://www.snapchat.com/) [8].

In this paper, we present the first publicly available database of videos where faces are swapped using an open source GAN-based approach (https://github.com/shaoanlu/faceswap-GAN), which was developed from the original autoencoder-based Deepfake algorithm mentioned above. We manually selected 16 similar looking pairs of people from the publicly available VidTIMIT database (http://conradsanderson.id.au/vidtimit/). For each of the 32 subjects, we trained two different models, referred to in the paper as low quality (LQ), with 64 × 64 input/output size, and high quality (HQ), with 128 × 128 size (see Figure 1 for examples). Since there are 10 videos per person in the VidTIMIT database, we generated 320 videos for each version, resulting in 640 videos with swapped faces in total. For the audio, we kept the original audio track of each video, i.e., no manipulation was done to the audio channel.

It is also important to understand how much of a threat Deepfake videos are to face recognition systems, because if these systems are not fooled by Deepfakes, creating a separate system for detecting Deepfakes would not be necessary. To assess the vulnerability of face recognition to Deepfake videos, we evaluate two state of the art systems, based on the VGG [9] and Facenet (https://github.com/davidsandberg/facenet) [10] neural networks, on both untampered videos and videos with swapped faces.
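To make this vulnerability analysis concrete, the following is a minimal sketch of how a false acceptance rate (FAR) can be computed once face embeddings have been extracted with a network such as VGG or Facenet. It is not the paper's actual evaluation protocol: the cosine threshold, the 128-D embedding size, and the random stand-in vectors are illustrative assumptions.

    import numpy as np

    def cosine_score(a, b):
        # Cosine similarity between two face embeddings.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def false_acceptance_rate(enrolled, deepfake_probes, threshold):
        # Fraction of Deepfake probes that the verification system
        # accepts as the enrolled (target) identity.
        scores = [cosine_score(enrolled, p) for p in deepfake_probes]
        return sum(s >= threshold for s in scores) / len(scores)

    # Random vectors stand in for real embeddings (hypothetical setup).
    rng = np.random.default_rng(0)
    enrolled = rng.normal(size=128)   # e.g., a Facenet-style 128-D embedding
    probes = [rng.normal(size=128) for _ in range(10)]
    print(false_acceptance_rate(enrolled, probes, threshold=0.5))

In this framing, a high FAR on Deepfake probes means the recognition system treats the swapped face as the genuine target identity, which is exactly the vulnerability the paper measures.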
For the detection of Deepfakes, we first used an audio-visual approach that detects inconsistencies between the visual lip movements and the speech in the audio [11]. It allows us to understand how well the generated Deepfakes can mimic mouth movement and whether the lips are synchronized with the speech. We also applied several baseline methods from the presentation attack detection domain, treating Deepfake videos as digital presentation attacks [8], including simple principal component analysis (PCA) and linear discriminant analysis (LDA) approaches, and an approach based on image quality metrics (IQM) and a support vector machine (SVM) [12], [13]; a sketch of such an IQM+SVM pipeline is given after the contribution list below.

To allow researchers to verify, reproduce, and extend our work, we provide the database of Deepfake videos (https://www.idiap.ch/dataset/deepfaketimit) and the face recognition and Deepfake detection systems with corresponding scores as an open source Python package (complete implementation: https://gitlab.idiap.ch/bob/bob.report.deepfakes).

Therefore, this paper has the following main contributions:

• A publicly available database of low and high quality sets of videos from the VidTIMIT database with faces swapped using a GAN-based approach;
• A vulnerability analysis of VGG and Facenet based face recognition systems;
• An evaluation of several detection methods for Deepfakes, including a lip-syncing approach and image quality metrics with an SVM.
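To illustrate the IQM+SVM baseline referenced above, here is a minimal sketch under simplifying assumptions: the iqm_features helper computes only three toy quality measures, whereas the actual IQM feature set of [12], [13] is considerably richer, and the random arrays stand in for real face crops.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def iqm_features(gray):
        # Toy quality features for one grayscale face crop: sharpness
        # (variance of a Laplacian-like response), brightness, contrast.
        lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
               np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
        return np.array([lap.var(), gray.mean(), gray.std()])

    # Random arrays stand in for face crops from genuine and Deepfake videos.
    rng = np.random.default_rng(0)
    real = [rng.random((128, 128)) for _ in range(50)]
    fake = [rng.random((128, 128)) for _ in range(50)]

    X = np.array([iqm_features(f) for f in real + fake])
    y = np.array([0] * len(real) + [1] * len(fake))  # 0 = genuine, 1 = Deepfake

    clf = make_pipeline(StandardScaler(), SVC())
    clf.fit(X, y)  # in practice: separate train/dev/test splits, EER-based tuning

In an evaluation such as the one summarized in the abstract, a classifier of this kind would be tuned on a development set and reported via its equal error rate.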
[Fig. 1: Screenshots of original videos from the VidTIMIT database and of the low quality (LQ) and high quality (HQ) Deepfake videos: (g) Original 1, (h) Original 2, (i) LQ swap 1→2, (j) HQ swap 1→2, (k) LQ swap 2→1, (l) HQ swap 2→1.]

II. RELATED WORK

One of the first works on face swapping is by Bitouk et al. [14], where the authors searched a database for a face similar in appearance to the input face and then focused on perfecting the blending of the found face into the input image. The main motivation for this work was the de-identification of an input face and its privacy preservation; hence, the approach did not allow for seamless swapping of any two given faces. Until the latest era of neural networks, most techniques for face swapping or facial reenactment were based on similarity searches between faces or face patches in the target and source videos, combined with various blending techniques [15], [16], [17], [18], [4].

The first approach that used a generative adversarial network to train a model between two pre-selected faces was proposed by Korshunova et al. in 2017 [3]. Another related work, with an even more ambitious idea, used a long short-term memory (LSTM) based architecture to synthesize a mouth feature solely from audio speech [19]. Right after these publications became public, they attracted a lot of publicity, and open source approaches replicating these techniques started to appear, which resulted in the Deepfake phenomenon.

The rapid spread of Deepfakes and the ease of generating such videos call for a reliable detection method. So far, however, there are only a few publications focusing on detecting GAN-generated videos with swapped faces, and very little data for evaluation and benchmarking is publicly available.
For instance, Zhang et al. [20] proposed a method based on speeded up robust features (SURF) descriptors and an SVM; it was, however, evaluated on face swaps produced with techniques based on Gaussian blurring, which means the facial expressions of the input faces were not preserved. Another method, based on LBP-like features with an SVM classifier, was proposed by Agarwal et al. [8] and evaluated on videos collected by the authors with the Snapchat phone application. Snapchat uses an active 3D model to swap faces in real time, so the resulting videos are not really Deepfakes, but it is still a widely used tool, and a database of such videos, if it ever becomes public (the authors promised to release it but had not done so at the moment of publication), could be of interest to the research community.

Rössler et al. [6] presented the most comprehensive database of non-Deepfake swapped faces to date (500,000 images from more than 1000 videos). The authors also benchmarked the state of the art forgery classification and segmentation methods. They used the Face2Face [7] tool to generate the database, which is based on expression transformation using a 3D facial model and a pre-computed database of mouth interiors. One of the latest approaches [21] proposed to use blinking detection as a means to distinguish swapped faces in Deepfake videos. The authors generated 49 videos (not publicly available) and argued that the proposed eye blinking detection was effective in detecting Deepfake videos.

However, no public Deepfake video database built with a GAN-based approach is available. Hence, it is unclear whether the above methods would be effective in detecting such faces. In fact, the Deepfakes that we have generated can effectively mimic facial expressions, mouth movements, and blinking, so the current detection approaches need to be evaluated on such videos. For instance, it is practically impossible to evaluate the methods proposed in [6] and [21], as their implementations are not yet available.

III. DEEPFAKE DATABASE

As original data, we took videos from the VidTIMIT database. The database contains 10 videos for each of 43 subjects, which were shot in a controlled environment with people facing the camera and reciting predetermined short phrases.
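As a quick sanity check on the counts given in the introduction, this purely illustrative snippet reproduces the arithmetic behind the database composition (the variable names are ours, not those of the released package):

    # 16 manually selected pairs, swapped in both directions -> 32 subjects;
    # VidTIMIT provides 10 videos per subject; two model qualities (LQ, HQ).
    n_pairs, videos_per_subject, n_qualities = 16, 10, 2

    per_quality = 2 * n_pairs * videos_per_subject
    print(per_quality)                # 320 swapped videos per quality set
    print(per_quality * n_qualities)  # 640 swapped videos in total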
