Emotional Voice Conversion with Cycle-Consistent Adversarial Network


EMOTIONAL VOICE CONVERSION WITH CYCLE-CONSISTENT ADVERSARIAL NETWORK

arXiv:2004.03781v1 [eess.AS] 8 Apr 2020

Songxiang Liu, Yuewen Cao, Helen Meng
Human-Computer Communications Laboratory
Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China
{sxliu, ywcao, [email protected]}

ABSTRACT

Emotional Voice Conversion, or emotional VC, is a technique of converting speech from one emotion state into another one, keeping the basic linguistic information and speaker identity. Previous approaches for emotional VC need parallel data and use a dynamic time warping (DTW) method to temporally align the source-target speech parameters. These approaches often define a minimum generation loss as the objective function, such as L1 or L2 loss, to learn model parameters. Recently, cycle-consistent generative adversarial networks (CycleGAN) have been used successfully for non-parallel VC. This paper investigates the efficacy of using CycleGAN for emotional VC tasks. Rather than attempting to learn a mapping between parallel training data using a frame-to-frame minimum generation loss, the CycleGAN uses two discriminators and one classifier to guide the learning process, where the discriminators aim to differentiate between the natural and converted speech and the classifier aims to classify the underlying emotion from the natural and converted speech. The training process of the CycleGAN models randomly pairs source-target speech parameters, without any temporal alignment operation. The objective and subjective evaluation results confirm the effectiveness of using CycleGAN models for emotional VC. The non-parallel training for a CycleGAN indicates its potential for non-parallel emotional VC.

Index Terms— Emotional voice conversion, generative adversarial networks, CycleGAN

1. INTRODUCTION

Human speech is a complex signal that contains rich information, which includes linguistic information, para- and non-linguistic information. While linguistic information and para-linguistic information controlled by the speaker help make the expression convey information precisely, non-linguistic information such as emotion accompanied with speech plays an important role in human social interaction. Compared with Voice Conversion (VC) [1, 2, 3, 4], which is a technique to convert one speaker's voice to sound like that of another, emotional Voice Conversion, or emotional VC, is a technique of converting speech from one emotion state into another one, keeping the basic linguistic information and speaker identity.

Most of the VC approaches focus more on the conversion of short-time spectral features, but less on the conversion of the prosody features such as F0 [5, 6, 7, 8, 3]. In these works, F0 features are usually converted by a simple logarithmic Gaussian (LG) normalized transform. For emotional VC, however, parametrically modeling the prosody features is important, since the prosody plays an important role in conveying various types of non-linguistic information, such as intention, attitude and mood, which represent the emotions of the speaker [9]. Recently, there has been tremendous active research in modeling prosodic features of speech for emotional VC, most of which involves modeling two prosodic elements, namely the F0 contour and energy contour. A Gaussian mixture model (GMM) and a classification regression tree model were adopted to model the F0 contour conversion from neutral speech to emotional speech in [10]. A system for transforming the emotion in speech was built in [11], where the F0 contour was modeled and generated by context-sensitive syllable-based HMMs, the duration was transformed using phone-based relative decision trees, and the spectrum was converted using a GMM-based or a codebook selection approach.
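The logarithmic Gaussian (LG) normalized transform mentioned above maps the source log-F0 statistics onto the target statistics frame by frame. A minimal numpy sketch of this standard transform (not code from the paper; the statistics are assumed to be estimated over voiced frames of each emotion):

    import numpy as np

    def lg_normalized_f0(f0_src, mu_src, sigma_src, mu_tgt, sigma_tgt):
        # f0_src: source F0 contour in Hz, with unvoiced frames marked as 0.
        # mu/sigma: mean and std of log-F0, estimated separately for the
        # source and target emotions over voiced frames only.
        f0_conv = np.zeros_like(f0_src, dtype=float)
        voiced = f0_src > 0
        log_f0 = np.log(f0_src[voiced])
        f0_conv[voiced] = np.exp((log_f0 - mu_src) / sigma_src * sigma_tgt + mu_tgt)
        return f0_conv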
Prosody is inherently supra-segmental and hierarchical in nature, of which the conversion is affected by both short- and long-term dependencies, such as the sequence of segments, syllables, words within an utterance as well as lexical and syntactic systems of a language [12, 13, 14, 15, 16]. There have been many attempts to model prosodic characteristics in multiple temporal levels, such as the phone, syllable and phrase levels [17, 18, 19, 20]. Continuous wavelet transform (CWT) can effectively model F0 in different temporal scales and significantly improve speech synthesis performance [21]. The CWT methods were also adopted for emotional VC. CWT was adopted for F0 modeling within the non-negative matrix factorization (NMF) model [22], and for F0 and energy contour modeling within a deep bidirectional LSTM (DBLSTM) model [23]. Using the CWT method to decompose the F0 in different scales has also been explored in [9, 24], where neural networks (NNs) or deep belief networks (DBNs) were adopted.
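A multi-scale CWT decomposition of an F0 contour can be sketched with PyWavelets; the wavelet choice and scales below are illustrative only, not the configuration used in the paper or the cited works:

    import numpy as np
    import pywt

    # Interpolated, standardized log-F0 contour (one value per frame); a toy contour here.
    log_f0 = np.log(200 + 20 * np.sin(np.linspace(0, 6 * np.pi, 500)))
    log_f0 = (log_f0 - log_f0.mean()) / log_f0.std()    # zero mean, unit variance

    # Analyze the contour at several temporal scales with a continuous wavelet transform.
    scales = 2.0 ** np.arange(1, 9)                     # dyadic scales, illustrative only
    coefs, freqs = pywt.cwt(log_f0, scales, "mexh")     # coefs: (n_scales, n_frames)
    print(coefs.shape)                                  # one decomposed contour per scale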
While previous work has shown the efficacy of using GMMs, DBLSTMs, NNs and DBNs to model the feature mapping for the spectral and the prosodic features, they all need parallel data and parallel training, which means the source and target data should have parallel scripts and a dynamic time warping (DTW) method is used to temporally align the source and target features before training the models. Parallel training data is more difficult to collect than non-parallel data in many cases. Besides, the use of DTW may introduce alignment errors, which degrades VC performance. Moreover, previous emotional VC approaches often define a minimum generation loss as the objective function, such as L1 or L2 loss. One of the issues of using a minimum generation loss is an over-smoothing effect often observed in the generated speech parameters. Since this loss may also be inconsistent with human perception of speech, directly optimizing the model parameters using a minimum generation loss may not generate speech that sounds natural to humans. Generative adversarial networks (GANs) [25] have been incorporated into TTS and VC systems [26], where it is found that GAN models are capable of generating more natural spectral parameters and F0 than a conventional minimum generation error training algorithm regardless of its hyper-parameter settings. Since any utterance spoken by a speaker with the source or target emotional state can be used as a training sample, if a non-parallel emotional VC model can achieve comparable performance with its parallel counterparts, it will be more flexible, more practical and more valuable than parallel emotional VC systems. The recently emerged cycle-consistent generative adversarial network (CycleGAN) [27], which belongs to the large family of GAN models, provides a potential way to achieve non-parallel emotional VC.

The CycleGAN was originally designed to transform styles in images, where the styles of the images are translated while the textures remain unchanged. CycleGAN models have been used successfully for developing non-parallel VC systems [28, 29].

In this paper, we investigate the efficacy of using CycleGAN models for emotional VC tasks. Emotional VC is similar to image style transformation, where we can regard the underlying linguistic information as analogous to image content and the accompanying emotion as analogous to image style. Rather than attempting to learn a mapping between parallel training data using a frame-to-frame minimum generation loss, in this paper the CycleGAN uses two discriminators and one classifier to guide the learning process: the discriminators aim to differentiate between natural and converted speech, and the classifier aims to classify the underlying emotion from the natural and converted speech. The spectral features, F0 contour and energy contour are simultaneously converted by the CycleGAN model. We utilize CWT or logarithmic representations of the F0 and energy features. Although the training data we use is parallel, a non-parallel training process is adopted to learn the CycleGAN model, which means that source and target features are randomly paired during training, without any temporal alignment process. The objective and subjective evaluation results confirm the effectiveness of using CycleGAN models for emotional VC. The advantages offered by the CycleGAN model include (i) utilizing a GAN loss instead of a minimum generation loss, (ii) getting rid of source-target alignment errors and (iii) flexible non-parallel training. The non-parallel training for a CycleGAN indicates its potential for non-parallel emotional VC.

Fig. 1. CycleGAN structure for emotional voice conversion. GAB and GBA refer to generators. DA and DB refer to discriminators. C refers to the emotion classifier. S denotes genuine or converted spectral and prosodic features.

Adversarial Loss: Discriminator DA distinguishes between SA and SBA, and discriminator DB distinguishes between SB and SAB. To this end, an adversarial loss, which measures how distinguishable the converted features SAB are from the genuine target-domain features SB, is defined as

L_adv(G_AB, D_B) = E_{S_B ~ p_data(S_B)}[log D_B(S_B)] + E_{S_A ~ p_data(S_A)}[log(1 - D_B(G_AB(S_A)))]    (1)

The adversarial loss distinguishing the converted features SBA from the genuine source-domain features SA has a similar formulation.

Cycle Consistency Loss: The adversarial loss makes SA and SBA, or SB and SAB, as similar as possible, while the cycle consistency loss guarantees that an input SA can retain its original form after passing through the two generators GAB and GBA. Using the notation in Figure 1, SABA, which equals GBA(GAB(SA)), should remain close to SA.
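Taken together, the adversarial loss of Eq. (1) and the cycle consistency loss can be written compactly. The following PyTorch sketch uses toy feed-forward generators and a single discriminator purely for illustration; it is not the authors' implementation, and the emotion classifier C is omitted:

    import torch
    import torch.nn as nn

    dim = 24                                    # per-frame feature dimension (illustrative)
    G_AB = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    G_BA = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    D_B = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1), nn.Sigmoid())

    S_A = torch.randn(8, dim)                   # randomly paired batches: no temporal alignment
    S_B = torch.randn(8, dim)
    eps = 1e-7

    # Adversarial loss of Eq. (1): D_B scores genuine S_B high and converted S_AB low.
    S_AB = G_AB(S_A)
    L_adv = torch.log(D_B(S_B) + eps).mean() + torch.log(1 - D_B(S_AB) + eps).mean()

    # Cycle consistency: S_A -> G_AB -> G_BA should reproduce S_A (likewise for S_B); L1 here.
    S_ABA = G_BA(S_AB)
    S_BAB = G_AB(G_BA(S_B))
    L_cyc = (S_ABA - S_A).abs().mean() + (S_BAB - S_B).abs().mean()

    # The discriminator ascends L_adv while the generators minimize -L_adv plus a weighted L_cyc.
    print(float(L_adv), float(L_cyc))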
Recommended publications
  • Randomized Lempel-Ziv Compression for Anti-Compression Side-Channel Attacks
    Randomized Lempel-Ziv Compression for Anti-Compression Side-Channel Attacks, by Meng Yang. A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Applied Science in Electrical and Computer Engineering. Waterloo, Ontario, Canada, 2018. © Meng Yang 2018.
    I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public.
    Abstract: Security experts confront new attacks on TLS/SSL every year. Ever since the compression side-channel attacks CRIME and BREACH were presented at security conferences in 2012 and 2013, online users connecting to HTTP servers that run TLS version 1.2 are susceptible to being impersonated. We set up three Randomized Lempel-Ziv models, built on Lempel-Ziv77, to confront this attack. Our three models change the deterministic characteristic of the compression algorithm: each compression with the same input gives output of different lengths. We implemented the SSL/TLS protocol and the Lempel-Ziv77 compression algorithm, and used them as a base for our simulations of the compression side-channel attack. After performing the simulations, all three models successfully prevented the attack. However, we demonstrate that our randomized models can still be broken by a stronger version of the compression side-channel attack that we created. This latter attack has a greater time complexity and is easily detectable. Finally, from the results, we conclude that our models could not compress as well as Lempel-Ziv77, but they can be used against compression side-channel attacks.
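    The key property described above, that repeated compressions of the same input should not yield the same output length, can be mimicked outside the thesis's models by appending random-length padding to an ordinary compressor's output; a toy Python sketch (illustrative only, not one of the thesis's three Randomized Lempel-Ziv models):

        import os
        import zlib

        def compress_with_random_padding(data: bytes, max_pad: int = 32) -> bytes:
            # Compress, then append 1..max_pad random bytes so identical inputs
            # produce outputs of varying length (illustration only).
            body = zlib.compress(data)
            pad = os.urandom(1 + os.urandom(1)[0] % max_pad)
            return len(body).to_bytes(4, "big") + body + pad

        def decompress(blob: bytes) -> bytes:
            n = int.from_bytes(blob[:4], "big")
            return zlib.decompress(blob[4:4 + n])

        secret = b"sessionid=abc123"
        lengths = {len(compress_with_random_padding(secret)) for _ in range(10)}
        print(lengths)   # typically several distinct lengths for the same input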
  • Conversion Principles and Circuits
    Interfacing Analog and Digital Worlds: Conversion Principles and Circuits. Nimal Skandhakumar, Faculty of Technology, University of Sri Jayewardenepura, 2019.
    From Analog to Digital to Analog
    Analog Signal
    ● A continuous signal that contains time-varying quantities, such as temperature or speed, with infinite possible values in between: 1. Continuous 2. Infinite range of values 3. More exact values, but more difficult to work with.
    ● Can be used to measure changes in some physical phenomena, such as light, sound, pressure, or temperature.
    ● Advantages: 1. A major advantage of the analog signal is the infinite amount of data. 2. Density is much higher. 3. Easy processing.
    ● Disadvantages: 1. Unwanted noise in recording. 2. If we transmit data over a long distance, unwanted disturbance is there. 3. Generation loss is also a big con of analog signals.
    Digital Signal
    ● A type of signal that can take on a set of discrete values (a quantized signal): 1. Discrete 2. Finite range of values 3. Not as exact as analog, but easier to work with.
    ● Can represent a discrete set of values using any discrete set of waveforms; we can represent it like (0 or 1), (on or off).
    Difference between analog and digital signals:
    Signalling: Analog is a continuous signal; Digital is a discrete-time signal.
    Data: Analog is subject to deterioration by noise during transmission and the write/read cycle; Digital can be noise-immune, without deterioration during transmission and the write/read cycle.
    Bandwidth: Analog signal processing can be done in real time and consumes less bandwidth; there is no guarantee that digital signal processing can be done in real time, and it consumes more bandwidth to carry the same information.
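    The sampling-and-quantization step that turns such an analog quantity into a digital one can be sketched with numpy; the signal and converter parameters below are illustrative:

        import numpy as np

        fs = 8000                                   # sampling rate in Hz (illustrative)
        t = np.arange(0, 0.01, 1 / fs)              # 10 ms worth of sample instants
        analog = np.sin(2 * np.pi * 440 * t)        # "analog" 440 Hz tone, values in [-1, 1]

        bits = 4                                    # 4-bit converter -> 16 discrete levels
        levels = 2 ** bits
        step = 2.0 / levels
        codes = np.clip(np.round(analog / step), -levels // 2, levels // 2 - 1)  # integer codes
        digital = codes * step                      # reconstructed discrete-valued signal
        print(int(np.unique(codes).size), "distinct levels used out of", levels)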
  • 8 Video Compression Codecs
    ARTICLE: Video Compression Codecs: A Survival Guide. Iain E. Richardson, Vcodex Ltd., UK.
    1. Introduction. Not another video codec! Since the first commercially viable video codec formats appeared in the early 1990s, we have seen the emergence of a plethora of compressed digital video formats, from MPEG-1 and MPEG-2 to recent codecs such as HEVC and VP9. Each new format offers certain advantages over its predecessors. However, the increasing variety of codec formats poses many questions for anyone involved in collecting, archiving and delivering digital video content, such as:
    • Which codec format (if any) is best?
    • What is a suitable acquisition protocol for digital video?
    • Is it possible to ensure that early 'born digital' material will still be playable in future decades?
    • What are the advantages and disadvantages of converting (transcoding) older formats into newer standards?
    • What is the best way to deliver video content to end-users?
    In this article I explain how a video compression codec works and consider some of the practical concerns relating to choosing and controlling a codec. I discuss the motivations behind the continued development of new codec standards and suggest practical measures to help deal with the questions listed above.
    2. Codecs and compression. 2.1 What is a video codec? 'Codec' is a contraction of 'encoder and decoder'. A video encoder converts 'raw' or uncompressed digital video data into a compressed form which is suitable for storage or transmission. A video decoder extracts digital video data from a compressed file, converting it into a displayable, uncompressed form.
  • Nevion Virtuoso
    Datasheet: Virtuoso Media Function, Nevion Virtuoso, JPEG 2000 Encoding and Decoding.
    The Virtuoso JPEG 2000 Media Function provides visually lossless compressed transport of SD and HD signals with ultra-low latency, using only 10% of the bandwidth required for uncompressed video. Nevion Virtuoso can run multiple instances of the JPEG 2000 Media Function on a single platform with built-in network aggregation to 10G Ethernet. The JPEG 2000 Media Function requires the HBR Accelerator, which supports SD/HD/3G-SDI input, for encoding or decoding of up to four JPEG 2000 channels per Accelerator. The VSF TR-01 compliant TS-over-IP encapsulation ensures perfectly synchronized transport of video, audio and ancillary data, as well as interoperability with 3rd-party equipment. The Nevion Virtuoso JPEG 2000 Media Function, combined with Nevion's advanced protection mechanisms, enables broadcasters to utilize cost-efficient IP links for the real-time transport of professional media with low bandwidth utilization, combined with extremely high quality and availability.
    Applications: • Professional broadcast contribution • Live sports and event contribution • Studio-to-studio media exchange • Managed video services over IP
    Key features: • Multi-channel JPEG 2000 encoding or decoding • Visually lossless VQ and low multi-generation loss • Transport of SD, HD and 3G-SDI over IP/GigE • Interoperability with 3rd party through VSF TR-01 • Very low end-to-end latency, 3-4 frames with TR-01 • Option for ultra-low latency (sub-frame) • Supports FEC, SIPS / SMPTE 2022-7 and Launch Delay Offset (LDO) IP protection mechanisms • Integrated frame synchronization on decoder • User-friendly web GUI for monitoring and control • Thumbnails for input/output confidence monitoring • Built-in TS monitoring (ETSI TR 101 290 Priority 1) of encoder output and decoder input, with option for Pri 2 and Pri 3 monitoring including PCR validation • Software license approach ensures easy and …
  • Creating and Archiving Born Digital Video
    Federal Agencies Digitization Guidelines Initiative: Creating and Archiving Born Digital Video, Part III. High Level Recommended Practices. The FADGI Audio-Visual Working Group, http://www.digitizationguidelines.gov/audio-visual/. Version 1.1, December 2, 2014.
    TABLE OF CONTENTS: Introduction; What is this Document?; Key to Case History References; Understanding the Recommended Practices; Part 1. Advice for File Creators; Plan for High Quality Video Files and Metadata; RP 1.1 Select a camera and other recording equipment with the capability to capture at high quality …
  • 1 Introduction
    HELSINKI UNIVERSITY OF TECHNOLOGY, Faculty of Electronics, Communications, and Automation, Department of Communications and Networking. Le Wang: Evaluation of Compression for Energy-aware Communication in Wireless Networks. Master's Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Technology. Espoo, May 11, 2009. Supervisor: Professor Jukka Manner. Instructor: Sebastian Siikavirta.
    ABSTRACT OF MASTER'S THESIS. Author: Le Wang. Title: Evaluation of Compression for Energy-aware Communication in Wireless Networks. Number of pages: 75. Date: 11th May 2009. Faculty: Faculty of Electronics, Communications, and Automation. Department: Department of Communications and Networks. Code: S-38. Supervisor: Professor Jukka Manner. Instructor: Sebastian Siikavirta.
    Abstract: In accordance with the development of ICT-based communication, energy-efficient communication in wireless networks is required for reducing energy consumption, cutting down greenhouse emissions and improving business competitiveness. Due to the significant energy consumption of transmitting data over wireless networks, data compression techniques can be used to trade the overhead of compression/decompression for less communication energy. Careless and blind compression in wireless networks not only causes an expansion of file sizes, but also wastes energy. This study aims to investigate the use of data compression to reduce the energy consumption of a hand-held device. By conducting experiments as the methodology, the impact of transmission on energy consumption is explored for wireless interfaces. Then, 9 lossless compression algorithms are examined on popular Internet traffic in view of compression ratio, speed and consumed energy. Additionally, the energy consumption of uplink, downlink and the overall system is investigated to achieve a comprehensive understanding of compression in wireless networks.
  • Data Compression in Solid State Storage
    Data Compression in Solid State Storage. John Fryar, [email protected]. Santa Clara, CA, August 2013.
    Acknowledgements: This presentation would not have been possible without the counsel, hard work and graciousness of the following individuals and/or organizations: Raymond Savarda, Sandgate Technologies.
    Disclaimers: The opinions expressed herein are those of the author and do not necessarily represent those of any other organization or individual unless specifically cited. A thorough attempt to acknowledge all sources has been made. That said, we're all human…
    Learning Objectives: At the conclusion of this tutorial the audience will have been exposed to: • The different types of Data Compression • Common Data Compression Algorithms • The Deflate/Inflate (GZIP/GUNZIP) algorithms in detail • Implementation Options (Software/Hardware) • Impacts of design parameters in Performance • SSD benefits and challenges • Resources for Further Study
    Agenda: • Background, Definitions, & Context • Data Compression Overview • Data Compression Algorithm Survey • Deflate/Inflate (GZIP/GUNZIP) in depth • Software Implementations • HW Implementations • Tradeoffs & Advanced Topics • SSD Benefits and Challenges • Conclusions
    Definitions:
    Open System: A system which will compress data for use by other entities, i.e. the compressed data will exit the system. Comments: must strictly adhere to standards on compress/decompress algorithms; interoperability among vendors mandated for Open Systems.
    Closed System: A system which utilizes compressed data internally but does not expose compressed data to the outside world. Comments: can support a limited, optimized subset of standard; also allows custom algorithms; no interoperability req'd.
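    The Deflate/Inflate (GZIP) algorithms covered in the agenda can be exercised directly from Python's standard zlib module; a minimal lossless round trip, for illustration only (not from the original slides):

        import zlib

        data = b"ABABABABABABABAB" * 64            # highly repetitive input compresses well
        compressed = zlib.compress(data, level=6)  # DEFLATE, as used by gzip/zlib
        restored = zlib.decompress(compressed)     # INFLATE

        assert restored == data                    # lossless round trip
        print(len(data), "->", len(compressed), "bytes")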
  • A Guide to MPEG Fundamentals and Protocol Analysis
    MPEG Tutorial: A Guide to MPEG Fundamentals and Protocol Analysis (Including DVB and ATSC). www.tektronix.com/video_audio/
    Contents:
    Section 1 – Introduction to MPEG: 1.1 Convergence; 1.2 Why Compression is Needed; 1.3 Applications of Compression; 1.4 Introduction to Video Compression; 1.5 Introduction to Audio Compression; 1.6 MPEG Signals; 1.7 Need for Monitoring and Analysis; 1.8 Pitfalls of Compression.
    Section 2 – Compression in Video: 2.1 Spatial or Temporal Coding? …
  • Introduction to Dolby Digital Plus, an Enhancement to the Dolby Digital Coding System
    Audio Engineering Society Convention Paper 6196. Presented at the 117th Convention, 2004 October 28–31, San Francisco, CA, USA. This convention paper has been reproduced from the author's advance manuscript, without editing, corrections, or consideration by the Review Board. The AES takes no responsibility for the contents. Additional papers may be obtained by sending request and remittance to Audio Engineering Society, 60 East 42nd Street, New York, New York 10165-2520, USA; also see www.aes.org. All rights reserved. Reproduction of this paper, or any portion thereof, is not permitted without direct permission from the Journal of the Audio Engineering Society.
    Introduction to Dolby Digital Plus, an Enhancement to the Dolby Digital Coding System. Louis D. Fielder, Robert L. Andersen, Brett G. Crockett, Grant A. Davidson, Mark F. Davis, Stephen C. Turner, Mark S. Vinton, and Phillip A. Williams (Dolby Laboratories, San Francisco, CA 94103, USA).
    ABSTRACT: An extension to the existing Dolby Digital (AC-3) multichannel audio coding standard is described and its new capabilities explored. This coding system is designed with extensive compatibility with the existing standard by retaining most of its metadata and data-framing structure to preserve and extend functionality in existing applications. New features include simultaneous support for multiple program streams, carriage of multichannel signals beyond 5.1 channels, and fine-grained control and support for data rates up to 6.144 Mbps. New coding "tools" including spectral extension, enhanced channel coupling, transient pre-noise processing, and improved filterbank/quantization enhance the performance of earlier AC-3 technology.
  • Pilot SMPTE SP Working Document
    SMPTE ER 2121-2:2018 ENGINEERING REPORT: Interoperable Master Format — Business Requirements for DPP Technical Specifications. Page 1 of 10.
    Every attempt has been made to ensure that the information contained in this document is accurate. Errors in this document should be reported to the SMPTE Business Requirements proponent(s) identified below with a copy to [email protected]. All other inquiries in respect of this document, including inquiries as to intellectual property requirements, should be addressed to the SMPTE Business Requirements proponents identified below.
    Business Requirements Owner(s) contact information: Digital Production Partnership, 4th Floor, Fat Side, ITV, 200 Gray's Inn Road, London, WC1X 8HF, UK. Email: [email protected]. Digital Production Partnership is a UK Registered Company, Company Registration Number: 09478697. North American Broadcasters Association, 205 Wellington St. W., Suite 9C200, Toronto, ON M5V 3G7, Canada. Email: [email protected] or [email protected]. Telephone: +1-416-205-3363.
    Copyright © 2018 by THE SOCIETY OF MOTION PICTURE AND TELEVISION ENGINEERS, 446 Hamilton Avenue, White Plains, NY 10601, (914) 761-1100.
    Table of Contents: 1 Scope; 2 Conformance Notation; 3 Business Requirements; Annex A Business Requirements Overview; Annex B Business Requirements - Additional Notes; Annex C Glossary of Terms.
    Foreword: SMPTE (the Society of Motion Picture and Television Engineers) is an internationally recognized standards developing organization. Headquartered and incorporated in the United States of America, SMPTE has members in over 80 countries on six continents. SMPTE's Engineering Documents, including Standards, Recommended Practices, Engineering Guidelines and Technical Specifications, are prepared by SMPTE's Technology Committees.
  • Lovely Professional University Term Paper of Digital Communication System (Data Compression)
    LOVELY PROFESSIONAL UNIVERSITY, TERM PAPER OF DIGITAL COMMUNICATION SYSTEM (DATA COMPRESSION). By: Joginder Singh, Roll: 34, Sec: B6803.
    ABSTRACT: This term paper deals with the term data compression: how it is done and lossy compression, and it also explains the limitations and future aspects of data compression.
    About the Author: Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electronic engineer, and cryptographer known as "the father of information theory". Shannon is famous for having founded information theory with one landmark paper published in 1948. But he is also credited with founding both digital computer and digital circuit design theory in 1937, when, as a 21-year-old master's student at MIT, he wrote a thesis demonstrating that electrical application of Boolean algebra could construct and resolve any logical, numerical relationship. It has been claimed that this was the most important master's thesis of all time.
    History: In his 1948 paper, "A Mathematical Theory of Communication," Claude E. Shannon formulated the theory of data compression. Shannon established that there is a fundamental limit to lossless data compression. This limit, called the entropy rate, is denoted by H. The exact value of H depends on the information source, more specifically, the statistical nature of the source. It is possible to compress the source, in a lossless manner, with a compression rate close to H; it is mathematically impossible to do better than H. Shannon also developed the theory of lossy data compression, better known as rate-distortion theory. In lossy data compression, the decompressed data does not have to be exactly the same as the original data.
    WHAT IS DATA COMPRESSION?
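    The entropy limit H described above can be estimated empirically for a simple memoryless source from its symbol frequencies; a short Python illustration (not part of the original term paper):

        import math
        from collections import Counter

        def entropy_bits_per_symbol(message):
            # Empirical Shannon entropy H = -sum(p * log2 p) over observed symbol frequencies.
            counts = Counter(message)
            total = len(message)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        print(entropy_bits_per_symbol("aaaabbbbcccc"))  # ~1.585 bits/symbol: 3 equally likely symbols
        print(entropy_bits_per_symbol("aaaaaaaa"))      # 0.0 bits/symbol: fully predictable source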
  • Reconstruct and Represent Video Contents for Captioning Via Reinforcement Learning
    Reconstruct and Represent Video Contents for Captioning via Reinforcement Learning. Wei Zhang, Bairui Wang, Lin Ma, Wei Liu.
    Abstract: In this paper, the problem of describing visual contents of a video sequence with natural language is addressed. Unlike previous video captioning work mainly exploiting the cues of video contents to make a language description, we propose a reconstruction network (RecNet) in a novel encoder-decoder-reconstructor architecture, which leverages both forward (video to sentence) and backward (sentence to video) flows for video captioning. Specifically, the encoder-decoder component makes use of the forward flow to produce a sentence description based on the encoded video semantic features. Two types of reconstructors are subsequently proposed to employ the backward flow and reproduce the video features from local and global perspectives, respectively, capitalizing on the hidden state sequence generated by the decoder. Moreover, in order to make a comprehensive reconstruction of the video features, we propose to fuse the two types of reconstructors together. The generation loss yielded by the encoder-decoder component and the reconstruction loss introduced by the reconstructor are jointly cast into training the proposed RecNet in an end-to-end fashion. Furthermore, the RecNet is fine-tuned by CIDEr optimization via reinforcement learning, which significantly boosts the captioning performance. Experimental results on benchmark datasets demonstrate that the proposed reconstructor can boost the performance of video captioning consistently.
    Index Terms: Video Captioning, Reconstruction Network (RecNet), Backward Information.
    1 INTRODUCTION
    Describing visual contents with natural language automatically has received increasing attention in both the computer vision and natural language processing communities.