Corpus for Comparing Compression Methods and an Extension of the ExCom Library

Total Pages: 16

File Type: PDF, Size: 1020 KB

Czech Technical University in Prague
Faculty of Electrical Engineering
Department of Computer Science and Engineering

Master's Thesis
Corpus for comparing compression methods and an extension of the ExCom library

Bc. Jakub Řezníček
Supervisor: doc. Ing. Jan Holub, Ph.D.
Study Programme: Electrical Engineering and Information Technology
Field of Study: Computer Science and Engineering
May 2010

Acknowledgements

I would especially like to thank my supervisor, doc. Ing. Jan Holub, Ph.D., for the idea and for his invaluable contributions to this thesis. I would also like to thank my family and my beloved girlfriend Lucie for their support and encouragement, and Bc. Ondřej Skalička for his quality comments.

Declaration

I hereby declare that I have completed this thesis independently and that I have listed all the literature and publications used. I have no objection to the usage of this work in compliance with § 60 of Act No. 121/2000 Coll. (the Copyright Act), and with the rights connected with the Copyright Act, including the changes in the act.

In Litvínov on May 14, 2010

Abstract

This thesis deals with the development of a new corpus whose purpose is to evaluate lossless compression algorithms. We also propose a methodology for keeping the corpus up to date, conceived as a list of steps necessary for updating it. The efficiency and validity of the corpus, called the Prague Corpus, are verified by thorough experiments using various compression techniques, including representatives of the statistical and dictionary-based methods, which were implemented and subsequently adapted for the universal library of compression algorithms called ExCom. The second part of the thesis deals with an efficient implementation of all the methods, which have been researched and described in detail.

Abstrakt

Tato práce se zabývá vývojem nového korpusu, jehož účelem je porovnání bezeztrátových kompresních algoritmů. Současně předkládáme návrh metodiky pro zachování jeho aktuálnosti. Tato metodika je koncipována jako seznam kroků, které jsou pro aktualizaci nezbytné. Účinnost a platnost korpusu, který byl pojmenován jako Pražský Korpus, je ověřena pomocí podrobných experimentů za využití různých kompresních technik. Mezi tyto techniky patří zástupci statistických a slovníkových metod, které byly implementovány a následně upraveny pro univerzální knihovnu kompresních algoritmů ExCom. Druhá část práce je věnována efektivní implementaci všech metod, které byly detailně prozkoumány a popsány.
Contents

List of Figures
List of Tables

1 Introduction
  1.1 History
  1.2 Motivation
  1.3 Main goals
  1.4 ExCom library
    1.4.1 Purpose
    1.4.2 Description
  1.5 Thesis organisation

2 Data compression
  2.1 Introduction
  2.2 Basic concepts and definitions
  2.3 Entropy and redundancy
  2.4 A classification of methods
    2.4.1 Statistical methods
    2.4.2 Dictionary-based methods
    2.4.3 Other methods
  2.5 A classification of adaptivity
  2.6 Compression methods implemented in this work
    2.6.1 Shannon-Fano coding
    2.6.2 Static Huffman coding
      2.6.2.1 The sibling property
      2.6.2.2 Comparison of Static Huffman coding and the Shannon-Fano approach
    2.6.3 Dynamic Huffman coding
      2.6.3.1 FGK algorithm
      2.6.3.2 Vitter's method
    2.6.4 LZ77
      2.6.4.1 Decompression algorithm
    2.6.5 LZSS
      2.6.5.1 Techniques to improve the speed performance
    2.6.6 LZ78
    2.6.7 LZW
  2.7 Summary

3 Implementation
  3.1 Implementation details
  3.2 Supporting classes
    3.2.1 Universal codes
    3.2.2 Sliding window
  3.3 Statistical methods
    3.3.1 Shannon-Fano coding
    3.3.2 Static Huffman coding
    3.3.3 Dynamic Huffman coding
  3.4 Dictionary methods
    3.4.1 LZ77 & LZSS
    3.4.2 LZ78 & LZW
      3.4.2.1 Dictionary search using a hash function
      3.4.2.2 Dictionary search using a binary tree
  3.5 Additional notes

4 A new corpus
  4.1 General purpose
  4.2 Existing corpora
    4.2.1 Calgary Corpus
    4.2.2 Canterbury Corpus
    4.2.3 Silesia Corpus
    4.2.4 Other corpora
  4.3 Data file classes analysis
    4.3.1 Text and binary files
    4.3.2 Commonly used data files
  4.4 The design of methodology
    4.4.1 Obtaining data and further preprocessing
    4.4.2 Post-processing of results
  4.5 Corpus files
  4.6 Summary

5 Experimental measurements
  5.1 Experimental details
  5.2 Performance experiments
    5.2.1 Canterbury Corpus
    5.2.2 Prague Corpus
  5.3 Parameter experiments
  5.4 Existing methods
  5.5 Summary

6 Conclusion
  6.1 Future work

References

A The Corpus methodology
B The Corpus report
C Prague corpus files
D Public domain data sources
E Detailed results of experiments
F User manual
G List of abbreviations
H Contents of DVD

List of Figures

2.1 Static model
2.2 Semi-adaptive model
2.3 Adaptive model
2.4 Shannon-Fano algorithm in action
2.5 Pseudocode of the Shannon-Fano algorithm
2.6 The particular steps of building a Huffman tree
2.7 The sibling property
2.8 Pseudocode of the Huffman algorithm
2.9 Pseudocode of adaptive Huffman coding
2.10 Example of the LZ77 sliding window
2.11 Pseudocode of the LZ77 encoding process
2.12 Pseudocode of the LZSS encoding process
2.13 Pseudocode of the LZ78 encoding process
2.14 The first four steps of building the LZ78 dictionary
2.15 The final LZ78 dictionary tree
2.16 Pseudocode of the LZW compression method
2.17 The LZW dictionary after its initialization
2.18 The final form of the LZW dictionary tree with an additional table
3.1 Preorder traversal of the Huffman tree
3.2 The usage of sBufferIndices
3.3 Example of the LZW dictionary
3.4 Illustration of tree nodes with the corresponding relations
4.1 Scatter plot for CPP source codes
5.1 Compression and decompression times of the statistical methods
5.2 Compression times of the dictionary methods
5.3 Decompression times of the dictionary methods
5.4 Compression times of all methods on the files from the Prague Corpus (set A)
5.5 Compression ratios of all the implemented methods on the files from the Prague Corpus (set A)
5.6 Compression ratio of LZ78 depending on the dictionary size
5.7 Compression time and ratio depending on the logarithm of the LZSS search buffer size
5.8 Compression times of the context methods on the files from the Prague Corpus (set B)
E.1 Compression ratio of the dictionary methods on the files from the Canterbury Corpus
E.2 Compression times of the statistical methods on the files from the Prague Corpus (set B)
E.3 Compression and decompression times of the dictionary methods on the files from the Prague Corpus (set
Recommended publications
  • Data Compression for Remote Laboratories
Data Compression for Remote Laboratories
https://doi.org/10.3991/ijim.v11i4.6743
Olawale B. Akinwale, Obafemi Awolowo University, Ile-Ife, Nigeria, [email protected]
Lawrence O. Kehinde, Obafemi Awolowo University, Ile-Ife, Nigeria, [email protected]

Abstract—Remote laboratories on mobile phones have been around for a few years now. This has greatly improved accessibility of these remote labs to students who cannot afford computers but have mobile phones. When money is a factor, however (as is often the case with those who cannot afford a computer), the cost of use of these remote laboratories should be minimized. This work addressed the issue of minimizing the cost of use of the remote lab by applying data compression to the data sent between the user and the remote lab.

Keywords—Data compression; Lempel-Ziv-Welch; remote laboratory; WebSockets

1 Introduction. The mobile phone is now the de facto computational device of choice. Mobile phones are quite powerful, connected devices which are almost always with us, so much so that about every important online service has a mobile version or at least a version compatible with mobile phones and tablets (for this paper we adopt the phrase "mobile device" to represent mobile phones and tablets). This has caused online laboratory developers to develop online laboratory solutions which are mobile device-friendly [1, 2, 3, 4, 5, 6, 7, 8]. One of the main benefits of online laboratories is the possibility of using fewer resources to serve more people i.e.
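The keywords name Lempel-Ziv-Welch (LZW) as the compression scheme, the same dictionary method covered in the thesis above. A minimal LZW round trip in Python, offered purely as an illustration of the dictionary-building idea and not as the authors' implementation:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Minimal LZW encoder: emits dictionary indices for byte strings."""
    dictionary = {bytes([i]): i for i in range(256)}  # start with all single bytes
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                       # keep extending the current match
        else:
            out.append(dictionary[w])    # emit the code of the longest known prefix
            dictionary[wc] = next_code   # learn the new string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Inverse transform: rebuilds the dictionary while decoding."""
    if not codes:
        return b""
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    w = dictionary[codes[0]]
    out = [w]
    for k in codes[1:]:
        if k in dictionary:
            entry = dictionary[k]
        elif k == next_code:             # the one code that may not be in the dictionary yet
            entry = w + w[:1]
        else:
            raise ValueError("bad LZW code")
        out.append(entry)
        dictionary[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return b"".join(out)
```

For example, lzw_decompress(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")) returns the original bytes while emitting fewer codes than there are input symbols once repeated phrases appear.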
  • California State University, Northridge: Lossless Compression of Satellite Telemetry Data for a Narrow-Band Downlink (a Graduate Project)
California State University, Northridge
Lossless Compression of Satellite Telemetry Data for a Narrow-Band Downlink
A graduate project submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering
By Gor Beglaryan, May 2014

Copyright (c) 2014, Gor Beglaryan. Permission to use, copy, modify, and/or distribute the software developed for this project for any purpose with or without fee is hereby granted. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

Signature Page. The graduate project of Gor Beglaryan is approved:
Prof. James A. Flynn (Date)
Dr. Deborah K. Van Alphen (Date)
Dr. Sharlene Katz, Chair (Date)
California State University, Northridge
  • Incompressibility and Lossless Data Compression: An Approach by Pattern Discovery
Incompressibility and Lossless Data Compression: An Approach by Pattern Discovery (Incompresibilidad y compresión de datos sin pérdidas: Un acercamiento con descubrimiento de patrones)
Oscar Herrera Alcántara and Francisco Javier Zaragoza Martínez
Universidad Autónoma Metropolitana, Unidad Azcapotzalco, Departamento de Sistemas, Av. San Pablo No. 180, Col. Reynosa Tamaulipas, Del. Azcapotzalco, 02200, Mexico City, Mexico. Tel. 53 18 95 32, Fax 53 94 45 34. [email protected], [email protected]
Article received on July 14, 2008; accepted on April 03, 2009

Abstract: We present a novel method for lossless data compression that aims to achieve a performance different from the methods proposed in the last decades to tackle the underlying volume of data of the Information and Multimedia Ages. These latter methods are called entropic or classic because they are based on the classic information theory of Claude E. Shannon and include the Huffman [8], Arithmetic [14], Lempel-Ziv [15], Burrows-Wheeler (BWT) [4], Move To Front (MTF) [3] and Prediction by Partial Matching (PPM) [5] techniques. We review the Incompressibility Theorem and its relation with the classic methods and with our method, which is based on discovering symbol patterns called metasymbols. Experimental results allow us to propose metasymbolic compression as a tool for multimedia compression, sequence analysis and unsupervised clustering.

Keywords: Incompressibility, Data Compression, Information Theory, Pattern Discovery, Clustering.

Resumen: Presentamos un método novedoso para compresión de datos sin pérdidas que tiene por objetivo principal lograr un desempeño distinto a los propuestos en las últimas décadas para tratar con los volúmenes de datos propios de la Era de la Información y la Era Multimedia.
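The Incompressibility Theorem cited in the abstract rests on a standard counting (pigeonhole) argument; a one-line sketch of that argument:

```latex
% There are 2^n binary strings of length n, but only 2^n - 1 strings of
% length strictly less than n.  Hence no injective (lossless) code can map
% every length-n string to a shorter one: for every n, at least one string
% is incompressible.
\[
\bigl|\{0,1\}^{n}\bigr| = 2^{n} \;>\; \sum_{k=0}^{n-1} 2^{k} \;=\; 2^{n}-1
  \;=\; \bigl|\{0,1\}^{<n}\bigr| .
\]
```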
  • Modification of Adaptive Huffman Coding for Use in Encoding Large Alphabets
ITM Web of Conferences 15, 01004 (2017), DOI: 10.1051/itmconf/20171501004, CMES'17
Modification of Adaptive Huffman Coding for use in encoding large alphabets
Mikhail Tokovarov¹
¹Lublin University of Technology, Electrical Engineering and Computer Science Faculty, Institute of Computer Science, Nadbystrzycka 36B, 20-618 Lublin, Poland

Abstract. The paper presents a modification of the Adaptive Huffman Coding method, a lossless data compression technique used in data transmission. The modification relates to the process of adding a new character to the coding tree: the author proposes to introduce two special nodes instead of the single NYT (not yet transmitted) node of the classic method. One of the nodes is responsible for indicating the place in the tree where a new node is attached; the other node is used for sending the signal indicating the appearance of a character which is not yet present in the tree. The modified method was compared with existing coding methods in terms of overall data compression ratio and performance. The proposed method may be used for large alphabets, i.e. for encoding whole words instead of separate characters, when new elements are added to the tree comparatively frequently.

1 Introduction. Efficiency and speed are the two issues that the current world of technology is centred on. Information technology (IT) is no exception in this matter. Such an area of IT as social media has become extremely popular and widely used, so high transmission speed has gained great importance. Huffman coding is frequently chosen for implementing open source projects [3]. The present paper contains the description of the modification that may help to improve the algorithm of adaptive Huffman coding in terms of data savings.

2 Study of related works
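The modification concerns the NYT handling of adaptive (FGK/Vitter-style) Huffman coding. For orientation, a minimal sketch of the classic single-NYT step that the paper replaces with two special nodes; the names are illustrative, and the sibling-property rebalancing is omitted:

```python
class Node:
    def __init__(self, symbol=None, weight=0, parent=None):
        self.symbol, self.weight, self.parent = symbol, weight, parent
        self.left = self.right = None

def split_nyt(nyt: Node, symbol) -> Node:
    """Classic FGK step on the first occurrence of `symbol`: the NYT
    (not-yet-transmitted) leaf spawns a new NYT child and a leaf for the
    symbol.  Weight increments along the path to the root and the
    sibling-property swaps are intentionally left out of this sketch."""
    new_nyt = Node(parent=nyt)                 # stays the 0-weight escape node
    leaf = Node(symbol=symbol, weight=1, parent=nyt)
    nyt.left, nyt.right = new_nyt, leaf        # the old NYT becomes an internal node
    nyt.symbol, nyt.weight = None, 1
    return new_nyt                             # the caller tracks the new NYT position
```

In the classic scheme this single node plays both roles the paper separates: marking where the next leaf will be attached and signalling that an unseen character follows.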
  • Greedy Algorithm Implementation in Huffman Coding Theory
iJournals: International Journal of Software & Hardware Research in Engineering (IJSHRE), ISSN 2347-4890, Volume 8, Issue 9, September 2020
Greedy Algorithm Implementation in Huffman Coding Theory
Author: Sunmin Lee, Seoul International School, E-mail: [email protected], DOI: 10.26821/IJSHRE.8.9.2020.8905

ABSTRACT. All aspects of modern society depend heavily on data collection and transmission. As society grows more dependent on data, the ability to store and transmit it efficiently has become more important than ever before. The Huffman coding theory has been one of the best coding methods for data compression without loss of information. It relies heavily on a technique called a greedy algorithm, a process that "greedily" tries to find an optimal global solution by solving for the optimal local choice at each step of a problem. Although there is a disadvantage that it fails to consider the problem as a whole, it is definitely

In the late 1900s and early 2000s, creating the code itself was a major challenge. However, now that the basic platform has been established, the efficiency that can be achieved through data compression has become the most valuable quality current technology deeply desires to attain. Data compression, which is used to efficiently store, transmit and process big data such as satellite imagery, medical data, wireless telephony and database design, is a method of encoding any information (image, text, video etc.) into a format that consumes fewer bits than the original data [8]. Data compression can be of either of two types, i.e. lossy or lossless compression.
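The greedy step described in the abstract, repeatedly committing to the locally optimal choice, is exactly the classic Huffman construction: always merge the two least frequent subtrees. A short, self-contained sketch (illustrative only, not the paper's code):

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Greedy Huffman construction: each heap pop-pop-push merges the two
    lightest subtrees, the locally optimal choice at every step."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    if not heap:
        return {}
    if len(heap) == 1:                          # degenerate single-symbol input
        return {heap[0][1][0]: "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]             # prefix codes in the lighter subtree with 0
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]             # and the heavier subtree with 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}
```

Frequent symbols end up with short codes, e.g. huffman_codes("aaaabbc") assigns a one-bit code to "a" and two-bit codes to "b" and "c".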
  • PPM Performance with BWT Complexity: A New Method for Lossless Data Compression
PPM Performance with BWT Complexity: A New Method for Lossless Data Compression
Michelle Effros, California Institute of Technology, [email protected]

Abstract. This work combines a new fast context-search algorithm with the lossless source coding models of PPM to achieve a lossless data compression algorithm with the linear context-search complexity and memory of BWT and Ziv-Lempel codes and the compression performance of PPM-based algorithms. Both sequential and nonsequential encoding are considered. The proposed algorithm yields an average rate of 2.27 bits per character (bpc) on the Calgary corpus, comparing favorably to the 2.33 and 2.34 bpc of PPM5 and PPM* and the 2.43 bpc of BW94, but not matching the 2.12 bpc of PPMZ9, which, at the time of this publication, gives the greatest compression of all algorithms reported on the Calgary corpus results page. The proposed algorithm gives an average rate of 2.14 bpc on the Canterbury corpus. The Canterbury corpus web page gives average rates of 1.99 bpc for PPMZ9, 2.11 bpc for PPM5, 2.15 bpc for PPM7, and 2.23 bpc for BZIP2 (a BWT-based code) on the same data set.

I Introduction. The Burrows-Wheeler Transform (BWT) [1] is a reversible sequence transformation that is becoming increasingly popular for lossless data compression. The BWT rearranges the symbols of a data sequence in order to group together all symbols that share the same unbounded history or "context." Intuitively, this operation is achieved by forming a table in which each row is a distinct cyclic shift of the original data string.
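The cyclic-shift table described in the last sentence translates directly into a naive BWT. A sketch assuming a unique sentinel character; real implementations use suffix arrays for linear-time construction:

```python
def bwt(s: str, sentinel: str = "\x00") -> str:
    """Naive Burrows-Wheeler transform straight from the description above:
    build the table of all cyclic shifts, sort the rows, keep the last column.
    O(n^2 log n), fine for illustration but not for a real compressor."""
    s += sentinel                                # unique end marker makes the BWT invertible
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)
```

For instance, bwt("banana") returns "annb\x00aa": sorting the rotations clusters rows that start with the same context, so the last column groups the two n's and brings the a's together, which is what makes the transformed string easier to compress.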
  • DC5m United States Software in English, created at 2016-12-25 16:00: Google's Brotli Compression Algorithm Lands to Windows Edge
DC5m United States software in English, 1 article, created at 2016-12-25 16:00

Google's Brotli Compression Algorithm Lands to Windows Edge (2016-12-25, www.infoq.com)

Microsoft has announced that its Edge browser has started using Brotli, the compression algorithm that Google open-sourced last year. Brotli is on by default in the latest Edge build and can be previewed via the Windows Insider Program. It will reach stable status early next year, says Microsoft. Microsoft touts 20% higher compression ratios over comparable compression algorithms, which would benefit page load times without impacting client-side CPU costs. According to Google, Brotli uses a whole new data format, which makes it incompatible with Deflate but ensures higher compression ratios. In particular, Google says, Brotli is roughly as fast as zlib when decompressing and provides a better compression ratio than LZMA and bzip2 on the Canterbury Corpus. Brotli appears to be especially tuned for the web, that is, for offline encoding and online decoding of web assets or Android APKs. Google claims a compression ratio improvement of 20-26% over its own Zopfli algorithm, which still provides the best compression ratio of any deflate algorithm.
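The ratio claims are easy to spot-check locally. A hedged comparison using the zlib standard module and the third-party brotli bindings (pip install brotli); the file name is a placeholder, and results vary strongly with the input:

```python
import zlib
import brotli  # third-party bindings: pip install brotli

def compare(path: str) -> None:
    """Rough check of the article's ratio claims: compress one file with
    zlib (DEFLATE) and with Brotli and print the resulting sizes.  This is
    only an illustration; web assets typically compress much better than
    already-compressed binaries."""
    data = open(path, "rb").read()
    deflated = zlib.compress(data, 9)
    brotlied = brotli.compress(data, quality=11)
    print(f"original: {len(data)}  zlib: {len(deflated)}  brotli: {len(brotlied)}")

# compare("some_web_asset.html")  # hypothetical input file
```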
  • A Survey on Different Compression Techniques Algorithm for Data Compression (Hardik Jani, Jeegar Trivedi)
International Journal of Advanced Research in Computer Science & Technology (IJARCST 2014), Vol. 2, Issue 3 (July-Sept. 2014), ISSN: 2347-8446 (Online), ISSN: 2347-9817 (Print)
A Survey on Different Compression Techniques Algorithm for Data Compression
¹Hardik Jani, ²Jeegar Trivedi; ¹C. U. Shah University, India; ²S. P. University, India

Abstract: Compression is useful because it helps us reduce resource usage, such as data storage space or transmission capacity. Data compression is the technique of representing information in a compacted form. The actual aim of data compression is to reduce redundancy in stored or communicated data and to increase effective data density. Data compression is an important tool in the areas of file storage and distributed systems. Storage space on disks is expensive, so a file which occupies less disk space is "cheaper" than an uncompressed file. The main purpose of data compression is asymptotically optimal data storage for all resources. The field of data compression algorithms can be divided in different ways: lossless data compression and optimum lossy data compression, as well as by storage area. There are many compression methods available, which form a long list. In this paper, different basic lossless and lossy compression algorithms are reviewed. On the basis of these techniques the researchers have tried to propose a bit reduction algorithm for data compression which is based on a number theory system and a file differential technique. Among the statistical coding techniques, algorithms such as Shannon-Fano coding, Huffman coding, Adaptive Huffman coding, Run-Length Encoding and Arithmetic coding are considered.
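Of the classic techniques listed at the end of the abstract, run-length encoding is the simplest to show concretely; a byte-oriented sketch, purely illustrative and not taken from the survey:

```python
def rle_encode(data: bytes) -> bytes:
    """Byte-oriented run-length encoding: every run becomes a (count, value)
    pair, with runs capped at 255 so the count fits in a single byte."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(enc: bytes) -> bytes:
    """Inverse: expand each (count, value) pair back into a run."""
    out = bytearray()
    for count, value in zip(enc[0::2], enc[1::2]):
        out += bytes([value]) * count
    return bytes(out)
```

RLE only pays off on data with long runs (bitmaps, sensor readings); on ordinary text the encoded form is typically larger than the input, which is why the survey treats it alongside, rather than instead of, the statistical coders.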
  • Implementing Associative Coder of Buyanovsky (ACB)
Implementing Associative Coder of Buyanovsky (ACB) data compression, by Sean Michael Lambert. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, Montana State University-Bozeman, Bozeman, Montana, April 1999. © Copyright by Sean Michael Lambert 1999. All Rights Reserved.

Abstract: In 1994 George Mechislavovich Buyanovsky published a basic description of a new data compression algorithm he called the "Associative Coder of Buyanovsky," or ACB. The archive program using this idea, which he released in 1996 and updated in 1997, is still one of the best general compression utilities available. Despite this, the ACB algorithm is still barely understood by data compression experts, primarily because Buyanovsky never published a detailed description of it. ACB is a new idea in data compression, merging concepts from existing statistical and dictionary-based algorithms with entirely original ideas. This document presents several variations of the ACB algorithm and the details required to implement a basic version of ACB.

Approval: This thesis has been read by each member of the thesis committee and has been found to be satisfactory regarding content, English usage, format, citations, bibliographic style, and consistency, and is ready for submission to the College of Graduate Studies. Brendan Mumey (Signature, Date). Approved for the Department of Computer Science, J.
  • An Analysis of XML Compression Efficiency
An Analysis of XML Compression Efficiency
Christopher J. Augeri¹, Barry E. Mullins¹, Leemon C. Baird III, Dursun A. Bulutoglu², Rusty O. Baldwin¹
¹Department of Electrical and Computer Engineering, ²Department of Mathematics and Statistics, Air Force Institute of Technology (AFIT), Wright-Patterson Air Force Base, Dayton, OH; {chris.augeri, barry.mullins}@afit.edu, {dursun.bulutoglu, rusty.baldwin}@afit.edu
Department of Computer Science, United States Air Force Academy (USAFA), Colorado Springs, CO; [email protected]

ABSTRACT. XML simplifies data exchange among heterogeneous computers, but it is notoriously verbose and has spawned the development of many XML-specific compressors and binary formats. We present an XML test corpus and a combined efficiency metric integrating compression ratio and execution speed. We use this corpus and linear regression to assess 14 general-purpose and XML-specific compressors relative to the proposed metric. We also identify key factors when selecting a compressor. Our results show XMill or WBXML may be useful in some instances, but a general-purpose compressor is often the best choice.

We expand previous XML compression studies [9, 26, 34, 47] by proposing the XML file corpus and a combined efficiency metric. The corpus was assembled using guidelines given by developers of the Canterbury corpus [3], files often used to assess compressor performance. The efficiency metric combines execution speed and compression ratio, enabling simultaneous assessment of these metrics, versus prioritizing one metric over the other. We analyze collected metrics using linear regression models (ANOVA) versus a simple comparison of means, e.g., X is 20% better than Y.

2. XML OVERVIEW. XML has gained much acceptance since first proposed in 1998 by the World-Wide Web Consortium (W3C).
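The excerpt does not quote the exact formula of the combined efficiency metric, but the two quantities it integrates are straightforward to measure. A sketch using zlib purely as a stand-in compressor; the function name and choices below are assumptions made for illustration, not the paper's method:

```python
import time
import zlib

def measure(data: bytes) -> tuple[float, float]:
    """Measure the two quantities the paper's combined metric integrates,
    compression ratio and execution speed, for one general-purpose
    compressor (zlib, chosen only as an example).  How the paper combines
    them into a single score is not reproduced in this excerpt."""
    t0 = time.perf_counter()
    compressed = zlib.compress(data, 9)
    elapsed = time.perf_counter() - t0
    ratio = len(data) / len(compressed)        # higher means better compression
    speed = len(data) / max(elapsed, 1e-9)     # throughput in bytes per second
    return ratio, speed
```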
  • Chapter 16: Digital Modes
Contents

16.1 Digital "Modes"
  16.1.1 Symbols, Baud, Bits and Bandwidth
  16.1.2 Error Detection and Correction
  16.1.3 Data Representations
  16.1.4 Compression Techniques
  16.1.5 Compression vs. Encryption
16.2 Unstructured Digital Modes
  16.2.1 Radioteletype (RTTY)
  16.2.2 PSK31
  16.2.3 MFSK16
  16.2.4 DominoEX
  16.2.5 THROB
  16.2.6 MT63
  16.2.7 Olivia
16.3 Fuzzy Modes
  16.3.1 Facsimile (fax)
  16.3.2 Slow-Scan TV (SSTV)
  16.3.3 Hellschreiber, Feld-Hell or Hell
16.4 Structured Digital Modes
  16.4.1 FSK441
  16.4.2 JT6M
  16.4.3 JT65
  16.4.4 WSPR
  16.4.5 HF Digital Voice
  16.4.6 ALE
16.5 Networking Modes
  16.5.1 OSI Networking Model
  16.5.2 Connected and Connectionless Protocols
  16.5.3 The Terminal Node Controller (TNC)
  16.5.4 PACTOR-I
  16.5.5 PACTOR-II
  16.5.6 PACTOR-III
  16.5.7 G-TOR
  16.5.8 CLOVER-II
  16.5.9 CLOVER-2000
  16.5.10 WINMOR
  16.5.11 Packet Radio
  16.5.12 APRS
  16.5.13 Winlink 2000
  16.5.14 D-STAR
  16.5.15 P25
16.6 Digital Mode Table
16.7 Glossary
16.8 References and Bibliography

Chapter 16 — CD-ROM Content, Supplemental Files:
• Table of digital mode characteristics (section 16.6)
• ASCII and ITA2 code tables
• Varicode tables for PSK31, MFSK16 and DominoEX
• Tips for using FreeDV HF digital voice software by Mel Whitten, KØPFX

Chapter 16, Digital Modes: There is a broad array of digital modes to service various needs, with more coming.
  • On the Coding Gain of Dynamic Huffman Coding Applied to a Wavelet-Based Perceptual Audio Coder
ON THE CODING GAIN OF DYNAMIC HUFFMAN CODING APPLIED TO A WAVELET-BASED PERCEPTUAL AUDIO CODER
N. Ruiz Reyes¹, M. Rosa Zurera², F. López Ferreras², P. Jarabo Amores², and P. Vera Candeas¹
¹Departamento de Electrónica, Universidad de Jaén, Escuela Universitaria Politécnica, 23700 Linares, Jaén, SPAIN, e-mail: [email protected]
²Dpto. de Teoría de la Señal y Comunicaciones, Universidad de Alcalá, Escuela Politécnica, 28871 Alcalá de Henares, Madrid, SPAIN, e-mail: [email protected]

ABSTRACT. This paper evaluates the coding gain of using a dynamic Huffman entropy coder in an audio coder that uses a wavelet-packet decomposition that is close to the sub-band decomposition made by the human ear. The sub-band audio signals are modeled as samples of a stationary random process with a Laplacian probability density function, because experimental results indicate that the highest coding efficiency is obtained in that case. We have also studied how the entropy coding gain varies with the band index.

The last one, the ISO/MPEG-4 standard, is composed of several speech and audio coders supporting bit rates from 2 to 64 kbps per channel. ISO/MPEG-4 includes the already proposed AAC standard, which provides high quality audio coding at bit rates of 64 kbps per channel. Parallel to the definition of the ISO/MPEG standards, several audio coding algorithms have been proposed that use the wavelet transform as the tool to decompose the audio signal. The most promising results correspond to adapted wavelet-based audio coders. Probably, the most cited is the one designed by Sinha and Tewfik [2], a high
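The abstract's Laplacian model suggests a quick way to see where an entropy-coding gain comes from. A rough, illustrative estimate (not the paper's experiment) comparing the empirical entropy of quantized Laplacian samples with a fixed-length code for the same symbol set:

```python
import numpy as np

def entropy_coding_gain(scale: float = 1.0, step: float = 0.5, n: int = 100_000) -> float:
    """Illustration only: quantize samples of a Laplacian source, as the
    abstract models sub-band audio samples, and compare the empirical
    zeroth-order entropy with a fixed-length code over the same symbols.
    The difference approximates the gain an entropy coder such as dynamic
    Huffman coding can deliver on that band."""
    rng = np.random.default_rng(0)
    samples = rng.laplace(loc=0.0, scale=scale, size=n)
    symbols = np.round(samples / step).astype(int)         # uniform quantizer
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    entropy = -(p * np.log2(p)).sum()                       # bits per symbol, H(X)
    fixed = np.ceil(np.log2(len(p)))                        # bits per symbol, flat code
    return float(fixed - entropy)                           # gain in bits per symbol
```

Sharply peaked (small-scale) Laplacian bands show the largest gain, which is consistent with the abstract's observation that the gain varies with the band index.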