Label-Conditioned Next-Frame Video Generation with Neural Flows

arXiv:1910.11106v1 [cs.CV] 16 Oct 2019

David Donahue
University of Massachusetts Lowell, Lowell, USA
david [email protected]

Abstract—Recent state-of-the-art video generation systems employ Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs) to produce novel videos. However, VAE models typically produce blurry outputs when faced with sub-optimal conditioning of the input, and GANs are known to be unstable for large output sizes. In addition, the output videos of these models are difficult to evaluate, partly because the GAN loss function is not an accurate measure of convergence. In this work, we propose using a state-of-the-art neural flow generator called Glow to generate videos conditioned on a textual label, one frame at a time. Neural flow models are more stable than standard GANs, as they only optimize a single cross entropy loss function, which is monotonic and avoids the circular convergence issues of the GAN minimax objective. In addition, we also show how to condition Glow on external context, while still preserving the invertible nature of each "flow" layer. Finally, we evaluate the proposed Glow model by calculating cross entropy on a held-out validation set of videos, in order to compare multiple versions of the proposed model via an ablation study. We show generated videos and discuss future improvements.

Index Terms—neural flow, glow, video generation, evaluation

I. INTRODUCTION

Text-to-video generation is the process by which a model conditions on text, and produces a video based on that text description. This is the exact opposite of video captioning, which aims to produce a caption that would describe a given video [6]. It may be argued that text-to-video is a harder task, as there are many more degrees of freedom in pixel space. In addition, video generation is more complex than text-to-image generation [8], as a video can be viewed as a collection of images and thus is a superset of the image generation problem.

Video generation has many applications, including the automatic generation of animation in educational settings, the realistic generation of sprite movement in video games, as well as other auto-generated entertainment. Reliable video frame generation could play a significant role in the media of tomorrow.

Recently, Generative Adversarial Networks (GANs) have demonstrated promising performance in the domain of real-valued output generation [7]. Some works are beginning to apply this generative architecture to conditional video generation, with promising results [1]. However, these GAN systems typically produce all frames of a video at once, in order to ensure coherence across all frames and sufficient error propagation between frames. This is difficult because all video frames, as well as all the intermediate hidden activations needed to produce them, must be kept in memory until the final backpropagation step.

In this work, we explore the use of neural flows as a method of video generation in order to solve the following issues:

1) Existing Generative Adversarial Networks must generate the entire video at once. We propose to alleviate this problem by introducing a neural flow trained to maximize cross entropy. This model generates only the next frame, conditioned on all previous frames, and therefore only requires a context encoder over previous frames, with significantly less memory usage.
2) Existing models are difficult to evaluate, as currently GAN-generated instances must be judged by their quality (qualitative). Neural flows instead allow for direct estimation of the probability density of samples, allowing for a quantitative evaluation scheme by measuring the log-probability of examples in a held-out validation set (sketched in code at the end of this section).
3) Generative Adversarial Networks, as well as variational autoencoders, are unstable without heavy hyper-parameter tuning. We demonstrate the stability of neural flow architectures over previous methods, and in turn show examples of instability and how to improve them.

The outline of this paper is as follows.¹ We introduce recent work in video generation, followed by a description of the system architecture to be implemented. Then we include a quantitative cross entropy evaluation of the system, and conduct an ablation study to conclude the effectiveness of each proposed feature. Finally, we support our results with visual frames of generated videos as well as plotted internal model values to justify model behavior. We conclude with future improvements and necessary next steps. Our contributions are as follows:

• We utilize the existing Glow neural flow architecture for video generation
• We demonstrate how to condition Glow on label features and on a previous frame state
• We perform an ablation study of improvements to the architecture
• We propose potential future improvements to help convergence and sample quality

¹ Computer vision class project
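The held-out evaluation described in point 2 above can be sketched as follows. This is a minimal illustration written for this summary, not the paper's code: the `model.log_prob(...)`, `context_encoder(...)`, and `val_loader` interfaces are assumed stand-ins for a conditional flow that exposes exact per-frame log-densities.

```python
# Sketch: average held-out cross entropy (negative log-likelihood) per frame,
# reported in bits per dimension. All interfaces below are hypothetical stand-ins.
import math
import torch

@torch.no_grad()
def validation_bits_per_dim(model, context_encoder, val_loader, device="cpu"):
    total_nll, total_dims = 0.0, 0
    for video, label in val_loader:                 # video: (B, T, C, H, W)
        video = video.to(device)
        for t in range(1, video.shape[1]):
            # encode all previous frames (and the label) into a context vector
            context = context_encoder(video[:, :t], label)
            # exact log q_theta(frame_t | context), shape (B,)
            log_prob = model.log_prob(video[:, t], context)
            total_nll += -log_prob.sum().item()
            total_dims += video[:, t].numel()
    # convert from nats per example to bits per dimension
    return total_nll / (total_dims * math.log(2))
```

Normalizing to bits per dimension makes scores comparable across frame resolutions and video lengths.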
Fig. 1: Proposed Glow video generation model. Converts dataset image samples into points in Gaussian space, then maximizes the probability of images in that space. During inference, a Gaussian point is sampled and converted back to a realistic image. In this work, we replace images with video frames and add context dependence on previous frames in the video.

II. RECENT WORK

Compared to other fields of computer vision, such as object classification and detection [19], image generation has seen slower progress. An older choice for video generation was the variational autoencoder (VAE), which utilized an autoencoder with a KL loss component to learn and sample from a continuous latent space. Training points in the latent space are tasked with maximizing a lower bound of the generated data [15]. This KL loss compactifies the latent space by forcing all training points toward the origin. During inference, a point can be sampled from a Gaussian centered on the origin and decoded to a new training point. However, the compactification of the latent space introduces a loss of information that produces blurry reconstructions. As such, the VAE struggles with the generation of high-quality samples.

Recently, Generative Adversarial Networks have made great strides in a variety of areas [7]. They work by instantiating two models: a generator network and a discriminator network. The generator attempts to produce samples which are then judged by the discriminator as being real (from the dataset) or fake (from the generator). Error propagation from the discriminator is then used to train the generator to produce more realistic samples. Most work on GANs focuses on image synthesis, but GANs can theoretically generate any continuous-valued output [10]. In the context of video generation, the generator would produce video frames, which the discriminator would then take as input and judge as real or fake. GAN networks can also be conditioned on input, as in the case of text-to-image generation [8].

A generative architecture which has received less attention when compared to the Generative Adversarial Network is the neural flow [16]. Referred to in the paper as RealNVP, this neural flow allows for exact estimation of the probability density of image examples, and allows for direct generation of new images from a Gaussian latent space. This model is trained via a cross entropy loss function similar to that of recent language generation models [17], potentially providing more stable training when compared to the minimax objective of Generative Adversarial Networks.

More recently, the Glow architecture was proposed as a modification of the RealNVP neural flow. The Glow model introduces several improvements, such as additive coupling layers and 1x1 invertible convolutions, to speed up training time and increase model complexity for better generation [18]. They apply their model to the task of celebrity image generation. While this model introduces a number of features that the GAN does not possess, such as exact estimation of sample densities, it enforces strict constraints that layers inside the model (flow layers) be invertible and have easily calculable log-determinants. Still, the Glow architecture shows promising results in sample quality.
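As a concrete illustration of the invertibility and tractable log-determinant constraints mentioned above, the sketch below implements one of the components Glow introduces, an invertible 1x1 convolution, in PyTorch. It is a minimal version written for this summary; the Glow paper also describes an LU-decomposed parameterization that makes the determinant cheaper to compute.

```python
# Sketch of an invertible 1x1 convolution: channels are mixed by a learned C x C
# matrix W applied at every spatial location, so log|det(Jacobian)| = H * W_spatial * log|det W|.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Invertible1x1Conv(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        # initialize with a random rotation so the layer starts volume-preserving
        w_init = torch.linalg.qr(torch.randn(num_channels, num_channels))[0]
        self.weight = nn.Parameter(w_init)

    def forward(self, x):                       # x: (B, C, H, W)
        _, c, h, w = x.shape
        y = F.conv2d(x, self.weight.view(c, c, 1, 1))
        log_det = h * w * torch.slogdet(self.weight)[1]   # identical for every example
        return y, log_det

    def inverse(self, y):
        _, c, _, _ = y.shape
        w_inv = torch.inverse(self.weight)
        return F.conv2d(y, w_inv.view(c, c, 1, 1))
```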
III. THE NEURAL FLOW

Here we give a brief background of neural flows [16]. For a given dataset of images x ∈ D assumed to come from some distribution p(x), we initialize a neural flow to approximate that distribution as qθ(x), then maximize the log-likelihood of N dataset examples under this model using the cross entropy objective function:

L = -\frac{1}{N} \sum_{i=1}^{N} \log q_\theta(x_i)    (1)

This minimization has been shown to be equivalent to minimizing the KL-divergence between p(x) and qθ(x). It is identical to the objective of existing language generation models, with the exception that the image domain is continuous. A neural flow is able to model a continuous distribution by transforming a simple Gaussian distribution into a complex continuous distribution of any kind. A neural flow model approximates p(x) as follows. It is composed of a series of invertible transformations:

x \overset{f_1}{\longleftrightarrow} h_1 \overset{f_2}{\longleftrightarrow} h_2 \longleftrightarrow \cdots \overset{f_n}{\longleftrightarrow} z    (2)

We refer to the combined transformation as fθ(x), and the inverse transformation as x = fθ⁻¹(z). We assign a Gaussian distribution to all points in z-space, and calculate the probability p(x) of any image point x as follows. We restrict each invertible flow layer such that the log-determinant is easily calculable; then, by the change of variables formula, we calculate the corresponding z as z = fθ(x) and its log-probability.

This architecture choice is motivated by the fact that the first frame is the most difficult to generate, and sets up the scene. Subsequent frames work to animate the first frame and continue the plot. One problem with the proposed tail model is that it must condition on previous frame information in order to generate the next frame. This is a problem, as the original Glow model does not specify a mechanism for conditioning on context information, but rather is utilized for unconditional celebrity image generation [18].
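One way such conditioning can be made compatible with an invertible flow is to let only the coupling networks see the context: the mapping from x to z stays exactly invertible because the context enters as an ordinary input to the network that predicts scale and shift. The sketch below is an assumption-level illustration of that idea, and of the change of variables computation behind Eq. (1); it is not the paper's exact mechanism, and the layer structure, sizes, and `context` encoding are placeholders.

```python
# Sketch: an affine coupling layer conditioned on an external context vector,
# plus exact log q_theta(x | context) via the change of variables formula.
import math
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Splits x in half; transforms one half with scale/shift predicted from the other half and the context."""
    def __init__(self, num_channels, context_dim, hidden=512):
        super().__init__()
        half = num_channels // 2
        self.net = nn.Sequential(
            nn.Linear(half + context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * half),
        )

    def forward(self, x, context):              # x: (B, C), context: (B, context_dim)
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([x1, context], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)               # bounded scales help stability
        z2 = x2 * log_s.exp() + t
        return torch.cat([x1, z2], dim=1), log_s.sum(dim=1)   # output, log|det|

    def inverse(self, z, context):
        z1, z2 = z.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([z1, context], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        return torch.cat([z1, (z2 - t) * (-log_s).exp()], dim=1)

def log_prob(layers, x, context):
    """Exact log q_theta(x | context): Gaussian prior on z plus the summed layer log-determinants."""
    z, log_det = x, x.new_zeros(x.shape[0])
    for layer in layers:
        z, ld = layer(z, context)
        log_det = log_det + ld
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    return log_pz + log_det
```

In practice the two halves would be alternated or permuted between layers so that every dimension is eventually transformed.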