The State of Deepfakes: Landscape, Threats, and Impact, Henry Ajder, Giorgio Patrini, Francesco Cavalli, and Laurence Cullen, September 2019
Recommended publications
-
Cinema Recognizing Making Intimacy Vincent Tajiri Even Hotter!
PLAYBOY South Africa, FEBRUARY 2020 (WWW.PLAYBOY.CO.ZA, R45.00). Cover lines include: sex & cinema; making intimacy even hotter; candid with Orville Peck; Sterling K. Brown shows us the how; the scoop on sex (our sexpert shares intimate details); Dee Cobb.
The complete Playboy archive: instant access to every issue, article, story, and pictorial Playboy has ever published, only on iPlayboy.com.
SOUTH AFRICA: Editor-in-Chief Dirk Steenekamp; Associate Editor Jason Fleetwood; Graphic Designer Koketso Moganetsi; Fashion Editor Lexie Robb; Grooming Editor Greg Forbes; Gaming Editor Andre Coetzer; Tech Editor Peter Wolff; Illustrations Toon53 Productions; Motoring Editor John Page; Senior Photo Editor Luba V Nel. ADVERTISING SALES: [email protected]; PHONE: +27 10 006 0051; MAIL: PO Box 71450, Bryanston, Johannesburg, South Africa, 2021; WEB: www.playboy.co.za; FACEBOOK: facebook.com/playboysouthafrica; TWITTER: @PlayboyMagSA; INSTAGRAM: playboymagsa.
PLAYBOY ENTERPRISES, INTERNATIONAL: Hugh M. Hefner, Founder. U.S. PLAYBOY: Michael Phillips, SVP, Digital Products; James Rickman, Executive Editor. PLAYBOY INTERNATIONAL PUBLISHING: Hazel Thomson, Senior Director, International Licensing.
PLAYBOY South Africa is published by DHS Media House in South Africa for South Africa. Material in this publication, including text and images, is protected by copyright. It may not be copied, reproduced, republished, posted, broadcast, or transmitted in any way without written consent of DHS Media House. -
Artificial Intelligence in Health Care: the Hope, the Hype, the Promise, the Peril
Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril
Michael Matheny, Sonoo Thadaney Israni, Mahnoor Ahmed, and Danielle Whicher, Editors. Washington, DC. NAM.EDU. PREPUBLICATION COPY - Uncorrected Proofs.
NATIONAL ACADEMY OF MEDICINE • 500 Fifth Street, NW • Washington, DC 20001
NOTICE: This publication has undergone peer review according to procedures established by the National Academy of Medicine (NAM). Publication by the NAM signifies that it is the product of a carefully considered process and is a contribution worthy of public attention, but does not constitute endorsement of the conclusions and recommendations by the NAM. The views presented in this publication are those of individual contributors and do not represent formal consensus positions of the authors’ organizations; the NAM; or the National Academies of Sciences, Engineering, and Medicine.
Library of Congress Cataloging-in-Publication Data to Come.
Copyright 2019 by the National Academy of Sciences. All rights reserved. Printed in the United States of America.
Suggested citation: Matheny, M., S. Thadaney Israni, M. Ahmed, and D. Whicher, Editors. 2019. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. NAM Special Publication. Washington, DC: National Academy of Medicine.
“Knowing is not enough; we must apply. Willing is not enough; we must do.” --GOETHE
ABOUT THE NATIONAL ACADEMY OF MEDICINE: The National Academy of Medicine is one of three Academies constituting the National Academies of Sciences, Engineering, and Medicine (the National Academies). The National Academies provide independent, objective analysis and advice to the nation and conduct other activities to solve complex problems and inform public policy decisions. -
Real Vs Fake Faces: Deepfakes and Face Morphing
Graduate Theses, Dissertations, and Problem Reports. 2021. Real vs Fake Faces: DeepFakes and Face Morphing. Jacob L. Dameron, WVU, [email protected]. Follow this and additional works at: https://researchrepository.wvu.edu/etd. Part of the Signal Processing Commons.
Recommended Citation: Dameron, Jacob L., "Real vs Fake Faces: DeepFakes and Face Morphing" (2021). Graduate Theses, Dissertations, and Problem Reports. 8059. https://researchrepository.wvu.edu/etd/8059.
This Thesis is protected by copyright and/or related rights. It has been brought to you by The Research Repository @ WVU with permission from the rights-holder(s). You are free to use this Thesis in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you must obtain permission from the rights-holder(s) directly, unless additional rights are indicated by a Creative Commons license in the record and/or on the work itself. This Thesis has been accepted for inclusion in the WVU Graduate Theses, Dissertations, and Problem Reports collection by an authorized administrator of The Research Repository @ WVU. For more information, please contact [email protected].
Real vs Fake Faces: DeepFakes and Face Morphing. Jacob Dameron. Thesis submitted to the Benjamin M. Statler College of Engineering and Mineral Resources at West Virginia University in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering. Xin Li, Ph.D., Chair; Natalia Schmid, Ph.D.; Matthew Valenti, Ph.D. Lane Department of Computer Science and Electrical Engineering, Morgantown, West Virginia, 2021. Keywords: DeepFakes, Face Morphing, Face Recognition, Facial Action Units, Generative Adversarial Networks, Image Processing, Classification. -
Synthetic Video Generation
Synthetic Video Generation: Why Seeing Should Not Always Be Believing! Alex Adam. Image source: https://www.pocket-lint.com/apps/news/adobe/140252-30-famous-photoshopped-and-doctored-images-from-across-the-ages
Image Tampering: Historically, manipulated imagery has deceived people. Off-the-shelf software (e.g., Photoshop) now exists to do this, and manipulated images have become standard in tabloids and social media; the public have become somewhat numb to them, so they are no longer as impactful or shocking.
How does machine learning fit in? The advent of machine learning has made image manipulation even easier, and video manipulation is now also tractable with enough data and compute. Good synthetic videos can be made using a gaming computer in a bedroom, yet the public are largely unaware of this and the danger it poses!
Part I: Faceswap. In 2017, a reddit user (/u/deepfakes) posted Python code that uses machine learning to swap faces in images/video. 'Deepfake' content flooded reddit, YouTube, and adult websites. Reddit has since banned this content (but variants of the code are open source: https://github.com/deepfakes/faceswap). Autoencoder concepts underlie most 'Deepfake' methods.
Faceswap Algorithm (image source: https://medium.com/@jonathan_hui/how-deep-learning-fakes-videos-deepfakes-and-how-to-detect-it-c0b50fbf7cb9). Inference: the Faceswap model is an autoencoder. -
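The shared-encoder, two-decoder idea behind faceswap-style autoencoders can be sketched in a few lines. This is an illustrative toy with random, untrained weights and made-up dimensions, not the actual /u/deepfakes code: one encoder compresses any face, and each identity gets its own decoder, so decoding A's code with B's decoder performs the swap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 grayscale face patch and a small latent code.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder and two person-specific decoders. The weights here are
# random stand-ins; in a real faceswap model they are learned by minimizing
# reconstruction error on each person's face crops.
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, FACE_DIM))
W_dec_a = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))

def encode(face):
    # Shared encoder: compresses any face into an identity-agnostic code.
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    # Person-specific decoder: renders the code in one identity's appearance.
    return W_dec @ code

face_a = rng.normal(size=FACE_DIM)

# Training pairs the encoder with decoder A on A's faces and decoder B on B's.
recon_a = decode(encode(face_a), W_dec_a)

# The swap at inference time: encode A's face, decode with B's decoder.
swapped = decode(encode(face_a), W_dec_b)

print(recon_a.shape, swapped.shape)  # (64,) (64,)
```

Because both decoders read the same latent code, pose and expression carry over from the input while the decoder supplies the target identity's appearance.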
Image-Based Sexual Abuse Risk Factors and Motivators
RUNNING HEAD: IBSA FACTORS AND MOTIVATORS
iPredator: Image-based Sexual Abuse Risk Factors and Motivators. Vasileia Karasavva. A thesis submitted to the Faculty of Graduate and Postdoctoral Affairs in partial fulfillment of the requirements for the degree of Master of Arts in Psychology, Carleton University, Ottawa, Ontario. © 2020 Vasileia Karasavva.
Abstract: Image-based sexual abuse (IBSA) can be defined as the non-consensual sharing or threatening to share of nude or sexual images of another person. This is one of the first studies examining how demographic characteristics (gender, sexual orientation), personality traits (Dark Tetrad), and attitudes (aggrieved entitlement, sexual entitlement, sexual image abuse myth acceptance) predict the likelihood of engaging in IBSA perpetration and victimization. In a sample of 816 undergraduate students (72.7% female and 23.3% male), approximately 15% had, at some point in their life, distributed and/or threatened to distribute nude or sexual pictures of someone else without their consent, and 1 in 3 had experienced IBSA victimization. Higher psychopathy or narcissism scores were associated with an increased likelihood of having engaged in IBSA perpetration. Additionally, those with no history of victimization were 70% less likely to have engaged in IBSA perpetration compared to those who had themselves experienced someone disseminating their intimate images without consent. These results suggest that a cyclic relationship between IBSA victimization and perpetration exists, where victims of IBSA may turn to perpetration, and IBSA perpetrators may leave themselves vulnerable to future victimization. The challenges surrounding IBSA policy and legislation highlight the importance of understanding the factors and motivators associated with IBSA perpetration. -
CESE Guiding Values
EndExploitationMovement.com — Guiding Values
As members of the Coalition to End Sexual Exploitation (CESE) we believe that diverse and multi-disciplinary collaboration, activism, prevention, and education are key to eradicating the multi-faceted and wide-ranging web of sexual abuse and exploitation. Therefore, we embrace the following guiding values:
Dignity. Every human being, regardless of race, sex, socioeconomic status, age, ability, or other factors, possesses inherent human dignity and therefore has a right to live free from the harms of sexual abuse and exploitation.
Healthy Sexuality. Respect, intimacy, mutuality, responsibility, and love are integral aspects of healthy sexuality.
Intersectionality. Sexual abuse and exploitation thrive when responses to these harms fail to account for the interconnections between all forms of sexual abuse and exploitation. Accordingly, we actively foster multidisciplinary education and cross-collaboration between experts and organizations addressing all forms of sexual harm.
Unity in Diversity. The movement against sexual abuse and exploitation must be rooted in a spirit of collaboration and common purpose among all those who share a vision for a world free from sexual abuse and exploitation. Therefore, we do not limit membership in CESE on the basis of personal politics, religion, or worldview. We always strive to maintain a non-partisan, non-sectarian, and singular focus on sexual abuse and exploitation issues, recognizing that working together on our shared concerns creates the greatest amount of progress. This includes collaboration among efforts focused on prevention, education, therapeutic services, policy, grassroots activism, direct services, medical services, and more.
Abolitionism. We are committed to the elimination of all forms of sexual abuse and exploitation (e.g., sexual assault, intimate partner sexual violence, rape,
Authentic Consent. -
New York's Right to Publicity and Deepfakes Law Breaks New Ground
New York’s Right to Publicity and Deepfakes Law Breaks New Ground
December 17, 2020
On November 30, 2020, New York Governor Andrew Cuomo signed a path-breaking law addressing synthetic or digitally manipulated media. The law has two main components. First, the law establishes a postmortem right of publicity to protect performers’ likenesses—including digitally manipulated likenesses—from unauthorized commercial exploitation for 40 years after death. Second, the law bans nonconsensual computer-generated pornography (often called deepfake pornography)—highly realistic false images created by artificial intelligence (AI).1 With this law, New York becomes the first state in the nation to explicitly extend a person’s right of publicity to computer-generated likenesses, so-called digital replicas. Professional actors and the Screen Actors Guild (SAG-AFTRA) have pushed for this law for years to protect their likenesses from unauthorized postmortem use, especially as technology has advanced and actors have begun to appear in movies years after their deaths.2 New York also joins four other states that have outlawed certain kinds of deepfakes related either to pornography or to elections. Taken together, these laws, and the roughly 30 bills currently pending
1 Press Release, Governor Cuomo Signs Legislation Establishing a “Right to Publicity” for Deceased Individuals to Protect Against the Commercial Exploitation of Their Name or Likeness, OFFICE OF GOV. CUOMO (Nov. 30, 2020), https://www.governor.ny.gov/news/governor-cuomo-signs-legislation-establishing-right-publicity-deceased-individuals-protect; S5959D (New York), https://www.nysenate.gov/legislation/bills/2019/s5959.
2 See Dave McNary, SAG-AFTRA Commends Gov. -
Exposing Deepfake Videos by Detecting Face Warping Artifacts
Exposing DeepFake Videos By Detecting Face Warping Artifacts
Yuezun Li, Siwei Lyu. Computer Science Department, University at Albany, State University of New York, USA
Abstract: In this work, we describe a new deep learning based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. Our method is based on the observation that current DeepFake algorithms can only generate images of limited resolutions, which need to be further warped to match the original faces in the source video. Such transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that they can be effectively captured by convolutional neural networks (CNNs). Compared to previous methods, which use a large amount of real and DeepFake-generated images to train a CNN classifier, our method does not need DeepFake-generated images as negative training examples, since we target the artifacts in affine face warping as the distinctive feature to distinguish real and fake…
…sibility to large-volume training data and high-throughput computing power, but more to the growth of machine learning and computer vision techniques that eliminate the need for manual editing steps. In particular, a new vein of AI-based fake video generation methods known as DeepFake has attracted a lot of attention recently. It takes as input a video of a specific individual ('target'), and outputs another video with the target's faces replaced with those of another individual ('source'). The backbone of DeepFake are deep neural networks trained on face images to automatically map the facial expressions of the source to the target. With proper post-processing, the resulting videos can achieve a high level of realism. In this paper, we describe a new deep learning based method that can effectively distinguish DeepFake videos from the real ones. -
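The paper's core observation, that low-resolution generation followed by warping leaves measurable artifacts, can be illustrated without any neural network. A toy numpy sketch, under stated assumptions: nearest-neighbor upsampling stands in for the affine warp, and pixel-difference energy stands in for the high-frequency cue a CNN would learn; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_warped_face(patch, factor=4):
    """Mimic the DeepFake pipeline: render the face at low resolution,
    then upsample (here, nearest-neighbor) back to the original size."""
    h, w = patch.shape
    low = patch.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.kron(low, np.ones((factor, factor)))

def residual_energy(patch):
    """High-frequency content: energy of differences between neighboring
    pixels. Warped (effectively blurred) regions have less of it."""
    dx = np.diff(patch, axis=1)
    dy = np.diff(patch, axis=0)
    return float((dx ** 2).sum() + (dy ** 2).sum())

real = rng.normal(size=(32, 32))   # stand-in for a detailed real face patch
fake = simulate_warped_face(real)  # same patch after the low-res warp

print(residual_energy(real) > residual_energy(fake))  # True
```

A real detector compares the warped face region against its untouched surroundings in the same frame, which is why no DeepFake-generated negatives are needed for training.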
JCS Deepfake
Don’t Believe Your Eyes (Or Ears): The Weaponization of Artificial Intelligence, Machine Learning, and Deepfakes. Joe Littell.
Agenda: Introduction; What is A.I.?; What is a DeepFake?; How is a DeepFake created?; Visual Manipulation; Audio Manipulation; Forgery; Data Poisoning; Conclusion; Questions.
Introduction: Deniss Metsavas, an Estonian soldier convicted of spying for Russia’s military intelligence service after being framed for a rape in Russia. (Picture from Daniel Lombroso / The Atlantic)
What is A.I.? …and what is it not? General Artificial Intelligence (AI). Machine (or Statistical) Learning (ML) is a subset of AI. ML works through the probability of a new event happening based on previously gained knowledge (scalable pattern recognition). ML can be supervised, meaning it requires human input into the data, or unsupervised, requiring no input to the raw data.
What is a Deepfake? Deepfake is a mash-up of the words for deep learning, meaning machine learning using a neural network, and fake images/video/audio. Taken from a Reddit user name who utilized the faceswap app for his own ‘productions.’ Created by the use of two machine learning algorithms: Generative Adversarial Networks and Auto-Encoders. Became known for its use in underground pornography placing celebrity faces in highly explicit videos.
How is a Deepfake created? Deepfakes are generated using Generative Adversarial Networks and Auto-Encoders. These algorithms work through the use of competing systems, where one creates a fake piece of data and the other is trained to determine whether that data is fake or not. Think of it like a counterfeiter and a police officer. -
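The counterfeiter-versus-police-officer game can be made concrete with a toy one-dimensional GAN, where the "generator" is a single learnable mean and the "discriminator" is logistic regression. This is an illustrative sketch of the adversarial updates only (all parameters and learning rates are made up), not a production GAN:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D GAN: real data ~ N(3, 1); the generator is just a learnable mean.
def discriminator(x, w, b):
    # Probability that x is real (logistic regression, the "police officer").
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

w, b = 0.1, 0.0   # discriminator parameters
g_mean = 0.0      # generator parameter (the "counterfeiter")
lr = 0.05

for step in range(500):
    real = rng.normal(3.0, 1.0, size=32)
    fake = rng.normal(g_mean, 1.0, size=32)

    # Discriminator ascent on log D(real) + log(1 - D(fake)):
    # push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = discriminator(real, w, b), discriminator(fake, w, b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascent on log D(fake): shift samples toward what D calls real.
    d_fake = discriminator(fake, w, b)
    g_mean += lr * np.mean((1 - d_fake) * w)

print(g_mean)  # should have drifted from 0 toward the real mean of 3
```

Each side improves by exploiting the other's current weakness, which is exactly the competing-systems dynamic the slide describes; in real deepfake GANs the scalar mean is replaced by a deep image generator and the logistic unit by a CNN.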
Deepfake Videos in the Wild: Analysis and Detection
Deepfake Videos in the Wild: Analysis and Detection
Jiameng Pu† (Virginia Tech, [email protected]), Neal Mangaokar† (University of Michigan, [email protected]), Lauren Kelly (Virginia Tech, [email protected]), Parantapa Bhattacharya (University of Virginia, [email protected]), Kavya Sundaram (Virginia Tech, [email protected]), Mobin Javed (LUMS Pakistan, [email protected]), Bolun Wang (Facebook, [email protected]), Bimal Viswanath (Virginia Tech, [email protected])
ABSTRACT: AI-manipulated videos, commonly known as deepfakes, are an emerging problem. Recently, researchers in academia and industry have contributed several (self-created) benchmark deepfake datasets, and deepfake detection algorithms. However, little effort has gone towards understanding deepfake videos in the wild, leading to a limited understanding of the real-world applicability of research contributions in this space. Even if detection schemes are shown to perform well on existing datasets, it is unclear how well the…
…25, 29, 34, 40, 42, 52] to evaluate the proposed detection schemes. In addition, governments are also realizing the seriousness of this issue. In the USA, the first federal legislation on deepfakes was signed into law in December 2019 [14], with the government encouraging research on deepfake detection technologies. However, most existing research efforts from academia and industry have been conducted with limited or no knowledge of actual deepfake videos in the wild. Therefore, we have a limited understanding of the real-world applicability of existing research contributions in this… -
Law’s Current Inability to Properly Address Deepfake Pornography
NOTES
“The New Weapon of Choice”:1 Law’s Current Inability to Properly Address Deepfake Pornography
Deepfake technology uses artificial intelligence to realistically manipulate videos by splicing one person’s face onto another’s. While this technology has innocuous usages, some perpetrators have instead used it to create deepfake pornography. These creators use images ripped from social media sites to construct—or request the generation of—a pornographic video showcasing any woman who has shared images of herself online. And while this technology sounds complex enough to be relegated to Hollywood production studios, it is rapidly becoming free and easy-to-use. The implications of deepfake pornography seep into all facets of victims’ lives. Not only does deepfake pornography shatter these victims’ sexual privacy, its online permanency also inhibits their ability to use the internet and find a job. Although much of the scholarship and media attention on deepfakes has been devoted to the implications of deepfakes in the political arena and the attendant erosion of our trust in the government, the implications of deepfake pornography are equally devastating. This Note analyzes the legal remedies available to victims, concludes that none are sufficient, and proposes a new statutory and regulatory framework to provide adequate redress.
THE NEXT ITERATION OF REVENGE PORNOGRAPHY ...................... 1480
I. WHAT IS DEEPFAKE PORNOGRAPHY? ................................. 1482
A. The Devastating Consequences of Deepfake Pornography .......... 1482
B. Origins of Deepfake Pornography and Where Deepfake Technology Is Going ... 1484
C. How Deepfakes Work .............................................. 1487
1.
Makena Kelly, Congress Grapples with How to Regulate Deepfakes, VERGE (June 13, 2019, 1:30 PM), https://www.theverge.com/2019/6/13/18677847/deep-fakes-regulation-facebook-adam- schiff-congress-artificial-intelligence [https://perma.cc/5AKJ-GKGZ] (“As Rep. -
Deepfakes 2020 the Tipping Point, Sentinel
SENTINEL
DEEPFAKES 2020: THE TIPPING POINT
The Current Threat Landscape, its Impact on the U.S. 2020 Elections, and the Coming of AI-Generated Events at Scale. Sentinel - 2020
About Sentinel. Sentinel works with governments, international media outlets and defense agencies to help protect democracies from disinformation campaigns, synthetic media and information operations by developing a state-of-the-art AI detection platform. Headquartered in Tallinn, Estonia, the company was founded by ex-NATO AI and cybersecurity experts, and is backed by world-class investors including Jaan Tallinn (Co-Founder of Skype & early investor in DeepMind) and Taavet Hinrikus (Co-Founder of TransferWise). Our vision is to become the trust layer for the Internet by verifying the entire critical information supply chain and safeguarding 1 billion people from information warfare.
Acknowledgements. We would like to thank our investors, partners, and advisors who have helped us throughout our journey and share our vision to build a trust layer for the internet. Special thanks to Mikk Vainik of Republic of Estonia’s Ministry of Economic Affairs and Communications, Elis Tootsman of Accelerate Estonia, and Dr. Adrian Venables of TalTech for your feedback and support, as well as to Jaan Tallinn, Taavet Hinrikus, Ragnar Sass, United Angels VC, Martin Henk, and everyone else who has made this report possible.
Johannes Tammekänd, CEO & Co-Founder. © 2020 Sentinel. Contact: [email protected]
Authors: Johannes Tammekänd, John Thomas, and Kristjan Peterson
Cite: Deepfakes 2020: The Tipping Point, Johannes Tammekänd, John Thomas, and Kristjan Peterson, October 2020
Executive Summary. “There are but two powers in the world, the sword and the mind.