Video Manipulation Detection Using Stream Descriptors
16 pages · PDF · 1,020 KB
Recommended publications
Proposed Framework for Digital Video Authentication
PROPOSED FRAMEWORK FOR DIGITAL VIDEO AUTHENTICATION by Gregory Scott Wales (A.S., Community College of the Air Force, 1990; B.S., Champlain College, 2012; M.S., Champlain College, 2015). A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Master of Science, Recording Arts, 2019. Approved for the Recording Arts Program by Catalin Grigoras (Chair), Jeffrey M. Smith, and Marcus Rogers on May 18, 2019. Thesis directed by Associate Professor Catalin Grigoras.

ABSTRACT: One simply has to look at news outlets or social media to see that our society records video of events from births to deaths and everything in between. A trial court's acceptance of videos supporting administrative hearings, civil litigation, and criminal cases rests on the foundation that the videos offered into evidence are authentic; however, technological advancements in video editing capabilities provide an easy means of editing digital videos. The proposed framework offers a structured approach to evaluating and incorporating authentication methods, existing and new, that come from scientific research and publication. The thesis gives a brief overview of the digital video file creation chain (including the factors that influence the final file), a general description of the digital video file container, and a description of camera sensor noises. It then addresses the development and proposed use of the framework, previous research on analysis methods and techniques, the testing of those methods and techniques, and an overview of the testing results.
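The container-level analysis described above (and the stream-descriptor approach named in this page's title) can be illustrated with a minimal sketch: comparing the stream descriptors extracted from a questioned file against a reference profile for the camera it allegedly came from. The device name, field names, and profile values below are hypothetical examples, not values from the thesis.

```python
# Minimal sketch of metadata-based video authentication: compare stream
# descriptors from a questioned file against a reference profile for the
# claimed recording device. A mismatch is a flag for further examination,
# not proof of manipulation by itself. All names/values are hypothetical.

REFERENCE_PROFILES = {
    "ExampleCam-X1": {
        "container": "mp4",
        "video_codec": "h264",
        "audio_codec": "aac",
        "major_brand": "mp42",
        "encoder_tag": "ExampleCam firmware 2.1",
    },
}

def check_descriptors(descriptors: dict, claimed_device: str) -> list:
    """Return a list of (field, expected, observed) inconsistencies."""
    profile = REFERENCE_PROFILES[claimed_device]
    flags = []
    for field, expected in profile.items():
        observed = descriptors.get(field)
        if observed != expected:
            flags.append((field, expected, observed))
    return flags

# A file re-saved by an editing tool typically rewrites the encoder tag:
questioned = {
    "container": "mp4",
    "video_codec": "h264",
    "audio_codec": "aac",
    "major_brand": "mp42",
    "encoder_tag": "Lavf58.29.100",  # an FFmpeg-style muxer signature
}
print(check_descriptors(questioned, "ExampleCam-X1"))
```

In practice the descriptor dictionary would be populated from a metadata extraction tool rather than written by hand, and reference profiles would be built from exemplar recordings of the same device model and firmware.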
Information Warfare
Memes That Kill: The Future of Information Warfare

I

Memes and social networks have become weaponized, while many governments seem ill-equipped to understand the new reality of information warfare. How will we fight state-sponsored disinformation and propaganda in the future? In 2011, a university professor with a background in robotics presented an idea that seemed radical at the time. After conducting research backed by DARPA (the same defense agency that helped spawn the internet), Dr. Robert Finkelstein proposed the creation of a brand-new arm of the US military: a "Meme Control Center." In internet-speak, the word "meme" often refers to an amusing picture that goes viral on social media. More broadly, however, a meme is any idea that spreads, whether that idea is true or false. It is this broader definition of meme that Finkelstein had in mind when he proposed the Meme Control Center and his idea of "memetic warfare."

II

[Figure: From "Tutorial: Military Memetics," by Dr. Robert Finkelstein, presented at the Social Media for Defense Summit, 2011]

Basically, Dr. Finkelstein's Meme Control Center would pump the internet full of "memes" that would benefit the national security of the United States. Finkelstein saw a future in which guns and bombs are replaced by rumor, digital fakery, and social engineering. Fast-forward seven years, and Dr. Finkelstein's ideas don't seem radical at all.
Tackling Deepfakes in European Policy
Tackling deepfakes in European policy. Study by the Panel for the Future of Science and Technology, EPRS | European Parliamentary Research Service, Scientific Foresight Unit (STOA), PE 690.039, July 2021.

The emergence of a new generation of digitally manipulated media capable of generating highly realistic videos – also known as deepfakes – has generated substantial concerns about possible misuse. In response to these concerns, this report assesses the technical, societal and regulatory aspects of deepfakes. The assessment of the underlying technologies for deepfake video, audio and text synthesis shows that they are developing rapidly, and are becoming cheaper and more accessible by the day. The rapid development and spread of deepfakes is taking place within the wider context of a changing media system. An assessment of the risks associated with deepfakes shows that they can be psychological, financial and societal in nature, and their impacts can range from the individual to the societal level. The report identifies five dimensions of the deepfake lifecycle that policy-makers could take into account to prevent and address the adverse impacts of deepfakes. The legislative framework on artificial intelligence (AI) proposed by the European Commission presents an opportunity to mitigate some of these risks, although regulation should not focus on the technological dimension of deepfakes alone. The report includes policy options under each of the five dimensions, which could be incorporated into the AI legislative framework, the proposed European Union digital services act package and beyond. A combination of measures will likely be necessary to limit the risks of deepfakes, while harnessing their potential.
Deepfakes Generation and Detection: State-Of-The-Art, Open Challenges, Countermeasures, and Way Forward
Deepfakes Generation and Detection: State-of-the-art, open challenges, countermeasures, and way forward. Momina Masood, Mariam Nawaz (Department of Computer Science, University of Engineering and Technology-Taxila, Pakistan); Khalid Mahmood Malik, Ali Javed (Department of Computer Science and Engineering, Oakland University, Rochester, MI, USA); Aun Irtaza (Electrical and Computer Engineering Department, University of Michigan-Dearborn, MI, USA).

Abstract: Easy access to audio-visual content on social media, combined with the availability of modern tools such as TensorFlow and Keras, open-source trained models, economical computing infrastructure, and the rapid evolution of deep-learning (DL) methods, especially Generative Adversarial Networks (GANs), has made it possible to generate deepfakes used to spread disinformation, revenge porn, financial fraud, and hoaxes, and to disrupt government functioning. Existing surveys have focused mainly on deepfake video detection; no attempt has been made to review approaches for the detection and generation of both audio and video deepfakes. This paper provides a comprehensive review and detailed analysis of existing tools and machine learning (ML) based approaches for the generation of both audio and video deepfakes, and of the methodologies used to detect such manipulations. For each category of deepfake, we discuss manipulation approaches, current public datasets, and key standards for the performance evaluation of deepfake detection techniques, along with their reported results. Additionally, we discuss open challenges and enumerate future directions to guide researchers on issues that need to be considered to improve both deepfake generation and detection. This work is expected to assist readers in understanding the creation and detection mechanisms of deepfakes, along with their current limitations and future directions.
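One of the key performance-evaluation standards the abstract refers to is the area under the ROC curve (AUC) of a detector's scores. A minimal, library-free sketch follows; the detector scores are made-up illustrations, not results from the survey.

```python
# Minimal sketch of a standard deepfake-detection metric: ROC AUC,
# computed directly as the probability that a randomly chosen fake
# sample scores higher than a randomly chosen real one (the
# Mann-Whitney formulation). Scores are hypothetical detector outputs
# in [0, 1], where higher means "more likely fake".

def roc_auc(real_scores, fake_scores):
    """AUC = P(fake > real) + 0.5 * P(fake == real) over all pairs."""
    wins = ties = 0
    for f in fake_scores:
        for r in real_scores:
            if f > r:
                wins += 1
            elif f == r:
                ties += 1
    return (wins + 0.5 * ties) / (len(real_scores) * len(fake_scores))

real = [0.1, 0.2, 0.3, 0.4]   # detector scores on genuine videos
fake = [0.6, 0.7, 0.8, 0.35]  # detector scores on deepfakes
print(roc_auc(real, fake))    # -> 0.9375; 1.0 would be a perfect detector
```

The pairwise formulation is O(n·m) but makes the metric's meaning explicit; production evaluation code typically uses a sort-based implementation or a library routine instead.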