Modification of Adaptive Huffman Coding for Use in Encoding Large Alphabets

ITM Web of Conferences 15, 01004 (2017), DOI: 10.1051/itmconf/20171501004, CMES'17

Mikhail Tokovarov
Lublin University of Technology, Electrical Engineering and Computer Science Faculty, Institute of Computer Science, Nadbystrzycka 36B, 20-618 Lublin, Poland

Abstract. The paper presents a modification of the Adaptive Huffman Coding method, a lossless data compression technique used in data transmission. The modification concerns the process of adding a new character to the coding tree: the author proposes to introduce two special nodes instead of the single NYT (not yet transmitted) node used in the classic method. One of the nodes is responsible for indicating the place in the tree where a new node is attached; the other is used for sending the signal indicating the appearance of a character which is not yet present in the tree. The modified method was compared with existing coding methods in terms of overall data compression ratio and performance. The proposed method may be used for large alphabets, i.e. for encoding whole words instead of separate characters, when new elements are added to the tree comparatively frequently.

1 Introduction

Efficiency and speed are the two issues that the current world of technology is centred on, and information technology (IT) is no exception. Such an area of IT as social media has become extremely popular and widely used, so high transmission speed has gained great importance. One way of obtaining high communication performance is to develop more efficient hardware. The other is to develop software that compresses the data so that its size is reduced but its information content is not affected, in other words, to encode the data by a method called lossless data compression. This term means that methods of this type allow the original data to be perfectly reconstructed from the encoded message.

Binary, or entropy, encoding is the most popular branch among lossless data compression methods. The term entropy encoding means that the length of a code is approximately proportional to the negative logarithm of the probability of occurrence of the character encoded with that code. Simplifying, it may be said: the higher the probability of the character, the shorter its code [1].

Several methods of entropy encoding exist; the most frequently used methods of this branch are:
- arithmetic coding,
- range coding,
- Huffman coding,
- asymmetric numeral systems.

Arithmetic and range coding are quite similar, with some differences [2], but arithmetic coding is covered by patent; that is why, due to its lack of patent coverage, Huffman coding is frequently chosen for implementing open source projects [3]. The present paper contains the description of a modification that may help to improve the algorithm of adaptive Huffman coding in terms of data savings.
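The relation between character probability and code length can be illustrated with a short calculation. The following snippet is an illustrative sketch, not part of the paper; it computes the ideal entropy code length, the negative base-2 logarithm of the probability, for each character of a toy message (the message string is an arbitrary example).

import math
from collections import Counter

message = "this is an example of a huffman tree"
counts = Counter(message)
total = len(message)

# Ideal entropy code length for each character: -log2(p).
# The more probable a character is, the shorter its ideal code.
for ch, cnt in counts.most_common():
    p = cnt / total
    print(f"{ch!r}: p = {p:.3f}, ideal length = {-math.log2(p):.2f} bits")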
2 Study of related works

Huffman coding was developed in 1952 by David Huffman. He developed the method during his Sc.D. study at MIT and published it in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes" [1]. Huffman coding is optimal among methods that encode symbols separately, but it is not always optimal compared to some other compression methods, such as arithmetic and Lempel-Ziv-Welch coding [4]. However, the last two methods, as has been said, are patent-covered, so developers often tend to use Huffman coding [3]. Comparative simplicity and high speed, due to the lack of arithmetic calculations, are advantages of this method as well. For these reasons Huffman coding is often used for developing encoding engines for many applications in various areas [5].

The fact that billions of people exchange gigantic amounts of data every day has stimulated the development of compression technologies. This topic constantly presents significant interest for researchers; the following works may be given as examples:
- Jarosław Duda et al. worked out the method called Asymmetric Numeral Systems. The method was developed on the basis of the two encoding methods, arithmetic and Huffman coding, and combined the advantages of both: the near-accurate symbol probabilities (and hence better compression ratio) of arithmetic coding, and the encoding and decoding speed of Huffman coding [6];
- Facebook's Zstandard algorithm is based on LZ77, a dictionary coder, and an effective entropy coding based on the Huffman method [7];
- the Brotli coding algorithm, used in most modern Internet browsers such as Chrome and Opera, is, similarly to Zstandard, based on the combination of LZ77 and modified Huffman coding [8].

So it may be clearly seen that despite its long history the Huffman coding algorithm still presents great interest for applications.

3 Huffman coding explained

3.1 Static Huffman coding algorithm

The concept of Huffman coding is based on a binary tree. The tree consists of two kinds of nodes: intermediate nodes, i.e. nodes having descendant nodes, and nodes which have no descendants, called leaves. A character may be stored only in a leaf node; this condition ensures that the character codes are prefix-free [1]. It means that no character has a code that is an initial segment of another character's code. As an example: the code 110 is an initial segment of the code 110101, so these two codes cannot be decoded unambiguously. The tree is organized according to the following principles [4]:
a) no node of the tree may have a single descendant: it has either two or none;
b) each node in the tree has a number assigned to it, called its weight. Depending on the type of the node, the weight has the following meaning:
- if the node is a leaf, its weight is equal to the number of times the character stored in the leaf occurs in the message being sent;
- if the node is an intermediate node, its weight is equal to the sum of its descendants' weights;
c) the weight of the right descendant should not be less than the weight of the left descendant.

The code for every character is defined by the path in the binary tree from the root to the leaf containing the character (Figure 1): e.g. the blank space character, the most frequent character in the tree, has the code 111, while the 'p' character, which occurs only once, has a longer code [4].

Fig. 1. Huffman tree generated based on the phrase "this is an example of a Huffman tree".

Figure 1 presents a tree constructed in accordance with the static Huffman coding method. Its main feature is that the tree is constructed before the transmission starts, on the basis of an analysis of the probabilities of the separate characters in the whole message [1]. The data compression ratio (DCR) is defined as the ratio of the number of bits in the coded message to the number of bits in the original message (1). But the compression ratio does not show the actual space saving: besides the coded message, the table with the code-character pairs has to be transmitted. For this reason another index should be introduced: the sent-to-original-bits ratio (SOBR), defined as the ratio of the total number of bits sent, i.e. the coded message together with the code table, to the number of bits in the original message (2). The only way to make SOBR equal to DCR is to use one coding tree for every message, but such a tree will not be optimal, since the character probabilities may differ between messages.
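A minimal sketch of the static procedure and of the two ratios is given below. It is not code from the paper: the exact forms of formulas (1) and (2) are not fully legible in the source, so the sketch assumes DCR = coded bits / original bits and SOBR = (coded bits + table bits) / original bits, with 8 bits per original character and a rough estimate of the table size; the heap-based construction is the textbook static Huffman procedure and does not enforce the paper's left/right weight convention.

import heapq
from collections import Counter

def huffman_codes(message: str) -> dict[str, str]:
    """Build a static Huffman code table for the characters of `message`."""
    freq = Counter(message)
    # Heap entries: (weight, tie_breaker, subtree); a subtree is a character
    # (leaf) or a (left, right) pair (intermediate node).
    heap = [(w, i, ch) for i, (ch, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-character message
        return {heap[0][2]: "0"}
    tie = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, tie, (left, right)))
        tie += 1
    codes: dict[str, str] = {}
    def walk(node, prefix: str) -> None:
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

message = "this is an example of a huffman tree"
codes = huffman_codes(message)
original_bits = 8 * len(message)                        # 8 bits per character
coded_bits = sum(len(codes[ch]) for ch in message)
table_bits = sum(8 + len(code) for code in codes.values())  # rough code-table size
dcr = coded_bits / original_bits
sobr = (coded_bits + table_bits) / original_bits
print(f"DCR = {dcr:.3f}, SOBR = {sobr:.3f}")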
3.2 Adaptive Huffman coding algorithm

The method of Adaptive Huffman Coding (AHC) reviewed in this paper was proposed by Jeffrey Vitter and published in [9]. This method is based on the same principles as the static method, plus the following extensions [9]:
a) every node in the tree has its key number; the key numbers are arranged in the following way:
- the maximum value of the key number in the tree may be calculated from the size of the alphabet (3);
- the root has the largest key number;
- an ancestor has a larger key number than any of its descendants;
- the right descendant has a larger key number than the left descendant;
b) after the input of any character the tree is updated;
c) a special node called NYT (not yet transmitted) is used both for indicating the place where a new character is attached and for signalling that a new character has been obtained; its key has the lowest value in the tree and its weight is always equal to zero;
d) a set of nodes with equal weight values is called a block.

The way the algorithm works while creating the tree may be observed in Figure 2. The encoding algorithm is described in Listing 1; the listing contains only the process of updating the tree and does not consider communication.

Fig. 2. The algorithm of AHC, example for encoding the word "abb".

Listing 1. Pseudocode presenting the updating of the tree in AHC.

UpdateTree(character ch)
1 if ch is not found in the tree
2   make a newLeafNode as the right descendant of the oldNYTNode;
3   newLeafNode's keyValue := oldNYTNode's keyValue - 1;
4   make a newNYTNode as the left descendant
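The first-occurrence branch of Listing 1 can be sketched in code as follows. This is a simplified illustration under assumptions of my own (node structure, field names, and the key value given to the new NYT node, which the surviving lines of the listing do not specify); it shows only how the old NYT node turns into an intermediate node with the new leaf as its right descendant and a fresh NYT node as its left descendant, and it omits the weight-increment and block-swapping pass that a complete AHC implementation performs afterwards. The paper's proposed modification replaces this single NYT node with two special nodes, one marking the attachment place and one signalling a new character, which is not reflected in this sketch.

class Node:
    """Minimal node of an adaptive Huffman tree (hypothetical structure)."""
    def __init__(self, key, weight=0, symbol=None, parent=None):
        self.key = key          # key number: larger keys sit closer to the root
        self.weight = weight    # occurrence count; always 0 for the NYT node
        self.symbol = symbol    # stored character, None for intermediate/NYT nodes
        self.parent = parent
        self.left = None
        self.right = None

def split_nyt(old_nyt: Node, ch: str) -> tuple[Node, Node]:
    """First-occurrence step of UpdateTree (Listing 1, lines 1-4): the old NYT
    node becomes an intermediate node whose right descendant is the new leaf
    for `ch` and whose left descendant is the new NYT node."""
    new_leaf = Node(key=old_nyt.key - 1, weight=1, symbol=ch, parent=old_nyt)
    new_nyt = Node(key=old_nyt.key - 2, weight=0, parent=old_nyt)  # key value assumed
    old_nyt.symbol = None       # the old NYT is now an ordinary intermediate node
    old_nyt.right = new_leaf
    old_nyt.left = new_nyt
    old_nyt.weight = new_leaf.weight + new_nyt.weight   # sum of descendants' weights
    # A full implementation would now walk up towards the root, incrementing
    # weights and swapping nodes between blocks to keep the tree valid.
    return new_leaf, new_nyt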