Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning

Total pages: 16

File type: PDF; size: 1020 KB

Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning

Item Type: Conference Paper
Authors: Gajjala, Rishikesh R.; Banchhor, Shashwat; Abdelmoniem, Ahmed M.; Dutta, Aritra; Canini, Marco; Kalnis, Panos
Citation: Gajjala, R. R., Banchhor, S., Abdelmoniem, A. M., Dutta, A., Canini, M., & Kalnis, P. (2020). Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning. Proceedings of the 1st Workshop on Distributed Machine Learning. doi:10.1145/3426745.3431334
Eprint version: Post-print
DOI: 10.1145/3426745.3431334
Publisher: Association for Computing Machinery (ACM)
Rights: Archived with thanks to ACM
Download date: 27/09/2021 06:37:00
Link to Item: http://hdl.handle.net/10754/666175

Rishikesh R. Gajjala¹, Shashwat Banchhor¹, Ahmed M. Abdelmoniem*, Aritra Dutta, Marco Canini, Panos Kalnis
KAUST

*Corresponding author: Ahmed M. Abdelmoniem ([email protected]).
¹Equal contribution. Rishikesh and Shashwat were with Indian Institute of Technology, Delhi. A significant part of the work was done while they were interns at KAUST.

ABSTRACT

Distributed stochastic algorithms, equipped with gradient compression techniques such as codebook quantization, are becoming increasingly popular and are considered state-of-the-art in training large deep neural network (DNN) models. However, communicating the quantized gradients over a network requires efficient encoding techniques. For this, practitioners generally use Elias encoding-based techniques without considering their computational overhead or data-volume. In this paper, based on Huffman coding, we propose several lossless encoding techniques that exploit different characteristics of the quantized gradients during distributed DNN training. We then show their effectiveness on 5 different DNN models across three different datasets, and compare them with classic state-of-the-art Elias-based encoding techniques. Our results show that the proposed Huffman-based encoders (i.e., RLH, SH, and SHS) can reduce the encoded data-volume by up to 5.1x, 4.32x, and 3.8x, respectively, compared to the Elias-based encoders.

KEYWORDS

Distributed training, Gradient compression, Quantization, Huffman coding, Elias and Run-length Encoding.

ACM Reference Format:
Rishikesh R. Gajjala, Shashwat Banchhor, Ahmed M. Abdelmoniem, Aritra Dutta, Marco Canini, and Panos Kalnis. 2020. Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning. In 1st Workshop on Distributed Machine Learning (DistributedML '20), December 1, 2020, Barcelona, Spain. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3426745.3431334

1 INTRODUCTION

As DNN models become more complex, one of the fundamental challenges in training them is their increasing size. To efficiently train these models, practitioners generally employ distributed parallel training over multiple computing nodes/workers [5, 11]. In this work, we focus on the data-parallel paradigm, as it is the most widely adopted in practice. In this setting, at each iteration, each worker maintains a local copy of the same DNN model, accesses one of the non-intersecting partitions of the data, and calculates its local gradient. The gradient information is exchanged synchronously through the network for aggregation, and the aggregated global gradient is sent back to the workers. The workers jointly update the model parameters using this global gradient. However, the network latency during gradient transmission creates a communication bottleneck and, as a result, training becomes slow. To remedy this, gradient compression techniques such as quantization [2, 21, 49, 51], sparsification [1, 13, 42, 45], hybrid compressors [4, 43], and low-rank methods [47, 48] are used. In this paper, we focus on gradient quantization techniques.

We are interested in quantization operators, Q(.): R^d -> R^d, that produce a lower-precision quantized vector Q(x) from the original vector x. In general, depending on the quantization technique, the quantized gradient Q(g_t^i) resulting from the i-th worker is further encoded using one of the existing encoding techniques. For instance, random dithering [2] and ternary quantization [49] use Elias encoding, Sattler et al. [37] use Golomb encoding [15], and C_nat [21] uses a fixed-length 8-bit encoding. In distributed settings, the quality of the trained model may only be impacted due to compression. Moreover, if the encoding is lossless, it can help reduce the communicated volume without further impact on model quality. To elaborate more, let the total communication time, T, be the sum of the time taken for compression, transmission, and decompression. The main goal of encoding techniques is to encode the compressed gradients resulting from quantization in a compact form, which can further reduce T. To reduce T, in this work, we focus on the lossless encoding component used in gradient quantization and evaluate its effectiveness across a wide range of compression methods. That is, we decouple quantization from encoding and explore possible combinations of quantization techniques and lossless encodings.¹ If the effective transfer speed S (including any reductions of transfer speed as a result of compression overhead), which accounts for network transmission and the computation necessary for compression and encoding, is constant for homogeneous training environments such as data centers or private clusters, then reducing the time T translates primarily to reducing the communicated data-volume V, as T = V/S. Moreover, existing work on quantization exploits this assumption and employs arbitrary encoding techniques without carefully considering their complexity and the resulting data-volume. To this end, we try to fill this gap and make the following contributions:

(i) Based on the classic Huffman coding, we propose three encoding techniques: (a) run-length Huffman (RLH) encoding, (b) sample Huffman (SH) encoding, and (c) sample Huffman with sparsity (SHS), to encode the quantized gradients generated from codebook quantization. For each of these encoders, we calculate their entropy, average code-length, and computational complexity, and compare against the state-of-the-art encoding techniques used in gradient compression: Elias and run-length encoding (RLE).

(ii) We analyze the performance of our proposed encoders on a wide spectrum of quantization techniques [2, 21, 49] and on a variety of DNN models (ResNet-20, VGG-16, ResNet-50, GoogLeNet, LSTM) performing different tasks across diverse datasets (CIFAR-10, ImageNet, Penn Tree Bank (PTB)), and report our findings. The results show that RLH, SHS, and SH can reduce the data-volume by up to 5.1x, 4.32x, and 3.8x over the Elias-based encoders, respectively. Moreover, the sampling-based techniques (i.e., SH and SHS) achieve encoding times up to 2.5x faster than the Elias-based encoders.

To the best of our knowledge, this is the first work that theoretically and empirically dissects the efficiency of different encoders with respect to the communicated data-volume and complexity. Our proposed encoders can also be used to encode the composition of sparsification and quantization, as in Q-sparse local SGD [4]. Since compression is worker-independent and involves no inter-worker communication, the encoding methods also apply to compression in asynchronous settings [34].

¹As long as the encoding is lossless, the convergence of a distributed optimizer (e.g., SGD and its variants) with a quantization strategy Q(.) remains unaffected.

Notations. We denote the i-th component of a vector x by x[i]. By x_t^i, we denote a vector arising from the t-th iteration at the i-th node. A string of p consecutive zeros is denoted by 0^p.

Algorithm 1: Abstraction for distributed quantized SGD, from the perspective of the i-th worker.
Input: Local data D_i, model parameter x_t, learning rate eta_t > 0
Output: Trained model x in R^d
1  for t = 1, 2, ... do
2      On Worker: compute a local stochastic gradient g_t^i at the t-th iteration;
3      quantize g_t^i to Q(g_t^i);
4      encode Q(g_t^i) to C(Q(g_t^i));
5      On Master: decode C(Q(g_t^i)) to Q(g_t^i);
6      do all-to-all reduction g~_t = (1/n) * sum_{i=1}^{n} Q(g_t^i);
7      encode g~_t to C(g~_t) and send back to the workers;
8      On Worker: decode C(g~_t);
9      update model parameters locally via x_{t+1} = x_t - eta_t * g~_t;
   end

... uses only binary bits, e.g., the sign compression in [6]. However, in more sophisticated cases, the gradient components are projected onto a codebook of fixed length (set by the user), such as in random dithering. The user can change the codebook length by varying the number of quantization states, s, and the inclusion probabilities of each component will vary. For a bit-width b and a scaling factor delta > 0, Yu et al. [51] quantize g[i] in [eps, eps + delta], with eps in dom(delta, b) := { -2^(b-1) * delta, ..., -delta, 0, delta, ..., (2^(b-1) - 1) * delta }, as:

    g~[i] = eps            with probability p_i = (eps + delta - g[i]) / delta,
            eps + delta    otherwise (probability 1 - p_i).                      (1)
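To make the codebook quantization of equation (1) concrete, here is a minimal NumPy sketch; the function name and the use of NumPy are our own illustration under the paper's definitions, not the authors' code:

```python
import numpy as np

def codebook_quantize(g, b, delta, rng=None):
    """Minimal sketch of the stochastic codebook quantizer in Eq. (1):
    each g[i] in [eps, eps + delta] is rounded down to eps with
    probability (eps + delta - g[i]) / delta, and up to eps + delta
    otherwise, where eps ranges over the codebook
    {-2^(b-1)*delta, ..., -delta, 0, delta, ..., (2^(b-1) - 1)*delta}."""
    rng = rng or np.random.default_rng()
    lo, hi = -2 ** (b - 1) * delta, (2 ** (b - 1) - 1) * delta
    g = np.clip(g, lo, hi)                  # keep values inside the codebook range
    eps = np.clip(np.floor(g / delta) * delta, lo, hi - delta)  # lower neighbour
    p = (eps + delta - g) / delta           # probability of rounding down
    return np.where(rng.random(g.shape) < p, eps, eps + delta)

# The quantized vector takes at most 2^b distinct values (multiples of delta),
# so each component maps to a small integer symbol eps/delta that can be handed
# to a lossless encoder such as RLE, Elias, or the Huffman variants above.
q = codebook_quantize(np.random.randn(1000).astype(np.float32), b=3, delta=0.25)
```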
Recommended publications
  • XAPP616 "Huffman Coding" V1.0
    Application Note: Virtex Series. Huffman Coding. Author: Latha Pillai. XAPP616 (v1.0), April 22, 2003. Summary: Huffman coding is used to code values statistically according to their probability of occurrence. Short code words are assigned to highly probable values and long code words to less probable values. Huffman coding is used in MPEG-2 to further compress the bitstream. This application note describes how Huffman coding is done in MPEG-2 and its implementation. Introduction: The output symbols from RLE are assigned binary code words depending on the statistics of the symbol. Frequently occurring symbols are assigned short code words, whereas rarely occurring symbols are assigned long code words. The resulting code string can be uniquely decoded to recover the original output of the run-length encoder. The code-assignment procedure developed by Huffman is used to obtain the optimum code word assignment for a set of input symbols. The procedure involves the pairing of symbols: the input symbols are written out in order of decreasing probability, with the most probable symbol at the top and the least probable last. The two least probabilities are then paired and added, and a new probability list is formed with one entry for the previously added pair. The two smallest entries in the new list are then paired, and this process continues until the list consists of a single probability value. The values "0" and "1" are arbitrarily assigned to the two members of each pair. Figure 1 shows the following symbols listed with their probabilities of occurrence: A is 30%, B is 25%, C is 20%, D is 15%, and E is 10%.
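    The pairing procedure just described maps directly onto a priority queue. Below is a small Python sketch, our own illustration rather than code from the application note, that builds code words for the Figure 1 probabilities:

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build Huffman code words by repeatedly pairing the two least
    probable entries, as the pairing procedure above describes."""
    tiebreak = count()  # keeps heap entries comparable when probabilities tie
    heap = [(p, next(tiebreak), sym) for sym, p in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)   # the two smallest probabilities...
        p2, _, right = heapq.heappop(heap)  # ...are paired and added
        heapq.heappush(heap, (p1 + p2, next(tiebreak), (left, right)))
    codes = {}
    def assign(node, prefix):
        if isinstance(node, tuple):          # internal pair: append 0 / 1
            assign(node[0], prefix + "0")    # the 0/1 choice is arbitrary,
            assign(node[1], prefix + "1")    # exactly as the note says
        else:
            codes[node] = prefix or "0"
    assign(heap[0][2], "")
    return codes

print(huffman_codes({"A": 0.30, "B": 0.25, "C": 0.20, "D": 0.15, "E": 0.10}))
# Code lengths come out as A:2, B:2, C:2, D:3, E:3 (average 2.25 bits);
# the exact bit patterns depend on the arbitrary 0/1 assignments.
```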
  • Image Compression Through DCT and Huffman Coding Technique
    International Journal of Current Engineering and Technology, Vol. 5, No. 3 (June 2015). E-ISSN 2277-4106, P-ISSN 2347-5161. Research Article. Available at http://inpressco.com/category/ijcet
    Image Compression through DCT and Huffman Coding Technique, by Rahul Shukla and Narender Kumar Gupta, Department of Computer Science and Engineering, SHIATS, Allahabad, India. Accepted 31 May 2015, available online 06 June 2015.
    Abstract: Image compression is an art used to reduce the size of a particular image. The goal of image compression is to eliminate the redundancy in a file's code in order to reduce its size. It is useful in reducing the image storage space and in reducing the time needed to transmit the image. Image compression is more significant for reducing data redundancy, saving more memory and transmission bandwidth. An efficient compression technique has been proposed which combines the DCT and Huffman coding techniques. This technique is proposed due to its lossless property, meaning the probability of losing information is lowest. Results show that high compression rates are achieved with visually negligible difference between compressed and original images.
    Keywords: Huffman coding, Huffman decoding, JPEG, TIFF, DCT, PSNR, MSE
    1. Introduction: Image compression is a technique in which a large amount of disk space is required for the raw images, which seems to be a very big disadvantage during ...
    ... that can compress almost any kind of data. These are the lossless methods: they retain all the information of the compressed data. However, they do not take advantage of the 2-dimensional nature of the image data.
  • The Strengths and Weaknesses of Different Image Compression Methods, by Samuel Teare and Brady Jacobson
    Lossy vs Lossless: Lossy compression reduces a file size by permanently removing parts of the data that may be redundant or not as noticeable. Lossless compression guarantees the original data can be recovered or decompressed from the compressed file.
    PNG Compression: PNG compression consists of three parts: 1. Filtering; 2. LZ77 compression; 3. Huffman coding (parts 2 and 3 together form Deflate compression).
    Filtering: Five types of filters:
    1. None: no filter.
    2. Sub: the difference between this byte and the byte to its left. Sub(x) = Original(x) - Original(x - bpp)
    3. Up: the difference between this byte and the byte above it. Up(x) = Original(x) - Above(x)
    4. Average: the difference between this byte and the average of the byte to the left and the byte above. Avg(x) = Original(x) - (Original(x - bpp) + Above(x)) / 2
    5. Paeth: uses the bytes to the left, above, and above-left. The nearest of the left, above, or above-left byte to the estimate is the Paeth predictor. Paeth(x) = Original(x) - PaethPredictor(x)
    Paeth algorithm: estimate = left + above - above left; distance to left = |estimate - left|; distance to above = |estimate - above|; distance to above left = |estimate - above left|. The byte with the smallest distance is the Paeth predictor.
    LZ77 Compression: LZ77 compression looks for sequences in the data that are repeated. LZ77 uses a sliding window to keep track of previous bytes; this is then used to compress a group of bytes that exhibit the same sequence as previous bytes.
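    As a concrete reading of the Paeth rules above, here is a short Python sketch; the helper names are our own, and real PNG encoders apply this per byte over raw scanlines rather than through per-pixel calls:

```python
def paeth_predictor(left, above, above_left):
    """PNG Paeth predictor: pick whichever neighbour is closest to the
    estimate left + above - above_left (ties prefer left, then above)."""
    estimate = left + above - above_left
    d_left = abs(estimate - left)
    d_above = abs(estimate - above)
    d_above_left = abs(estimate - above_left)
    if d_left <= d_above and d_left <= d_above_left:
        return left
    if d_above <= d_above_left:
        return above
    return above_left

def paeth_filter(original, left, above, above_left):
    """Filtered value: difference between the original byte and the
    predictor, kept in byte range (mod 256) as PNG filtering does."""
    return (original - paeth_predictor(left, above, above_left)) % 256
```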
  • An Optimized Huffman's Coding by the Method of Grouping
    An Optimized Huffman's Coding by the Method of Grouping, by Gautam R. (Department of Electronics and Communication Engineering, Maharaja Institute of Technology, Mysore, [email protected]) and Dr. S. Murali (Professor, Department of Computer Science Engineering, Maharaja Institute of Technology, Mysore, [email protected])
    Abstract: Data compression has become a necessity not only in the field of communication but also in various scientific experiments. The data being received keeps growing, and so does the processing time required. A significant change in the algorithms will help to optimize the processing speed. With the invention of technologies like IoT, and in technologies like machine learning, there is a need to compress data. For example, training an artificial neural network requires a lot of data that should be processed and trained in a small interval of time, for which compression will be very helpful. There is a need to process the data faster and quicker. In this paper we present a method that reduces the data size.
    ... the 3 depending on the size of the data. Huffman's coding basically works on the principle of frequency of occurrence for each symbol or character in the input. For example, we can know the number of times a letter has appeared in a text document by processing that particular document, after which we assign a variable string to the letter that will represent the character. Here the encoding takes place in the form of a tree structure, which will be explained in detail in the following paragraph, where encoding takes place in the form of a binary tree.
  • Arithmetic Coding
    Arithmetic coding is the most efficient method to code symbols according to the probability of their occurrence. The average code length corresponds exactly to the possible minimum given by information theory. Deviations caused by the bit-resolution of binary code trees do not exist. In contrast to a binary Huffman code tree, arithmetic coding offers a clearly better compression rate; its implementation is more complex, on the other hand. In arithmetic coding, a message is encoded as a real number in the interval from zero to one. Arithmetic coding typically has a better compression ratio than Huffman coding, as it produces a single fractional number rather than several separate codewords. Arithmetic coding differs from other forms of entropy encoding such as Huffman coding in that, rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, a fraction n where 0.0 <= n < 1.0. Arithmetic coding is a lossless coding technique. There are a few disadvantages of arithmetic coding. One is that the whole codeword must be received before decoding of the symbols can start, and if there is a corrupt bit in the codeword, the entire message could become corrupt. Another is that there is a limit to the precision of the number which can be encoded, thus limiting the number of symbols that can be encoded within a codeword. There also exist many patents on arithmetic coding, so the use of some of the algorithms calls for royalty fees. Arithmetic coding is part of the JPEG data format.
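    The interval-narrowing idea can be shown in a few lines of Python. This toy encoder, our own sketch, deliberately ignores the precision and renormalization issues the passage mentions and simply shrinks [0, 1) once per symbol:

```python
def arithmetic_encode(message, probs):
    """Narrow [0, 1) once per symbol; any number in the final interval
    encodes the whole message as a single fraction, as described above.
    Pure-float toy: real coders renormalize to sidestep precision limits."""
    cum, c = {}, 0.0
    for sym, p in probs.items():        # cumulative sub-interval per symbol
        cum[sym] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_lo, s_hi = cum[sym]
        low, high = low + span * s_lo, low + span * s_hi
    return (low + high) / 2             # any value in [low, high) works

x = arithmetic_encode("ABBA", {"A": 0.5, "B": 0.5})  # -> 0.40625
```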
  • Image Compression Using Discrete Cosine Transform Method
    Qusay Kanaan Kadhim, International Journal of Computer Science and Mobile Computing, Vol. 5, Issue 9, September 2016, pp. 186-192. ISSN 2320-088X. Available online at www.ijcsmc.com
    Image Compression Using Discrete Cosine Transform Method, by Qusay Kanaan Kadhim, Al-Yarmook University College / Computer Science Department, Iraq. [email protected]
    Abstract: The processing of digital images has taken on wide importance in recent decades due to the rapid development of communication techniques and the need to find and develop methods that assist in enhancing and exploiting image information. The field of digital image compression has become an important field of digital image processing due to the need to exploit the available storage space as much as possible and to reduce the time required to transmit the image. The baseline JPEG standard technique is used to compress images with 8-bit color depth. Basically, this scheme consists of several operations: sampling, partitioning, transform, quantization, entropy coding, and Huffman coding. First, the sampling process is used to reduce the size of the image and the number of bits required to represent it. Next, the partitioning process is applied to the image to obtain 8x8 image blocks. Then, the discrete cosine transform is used to transform the image block data from the spatial domain to the frequency domain to make the data easy to process.
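    The transform step of the pipeline described above can be sketched directly from the 2-D DCT-II definition. The following Python is an illustrative naive version, not the paper's implementation; production codecs use fast factorizations instead of this O(N^4) loop:

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II of one 8x8 block (the 'transform' step above)."""
    N = 8
    def alpha(k):                       # orthonormal scale factors
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A constant block concentrates all energy in the DC coefficient out[0][0];
# after this step, coefficients are quantized and then entropy-coded
# (e.g., with the Huffman stage the abstract mentions).
coeffs = dct2_8x8([[128] * 8 for _ in range(8)])   # coeffs[0][0] == 1024.0
```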
  • Hybrid Compression Using DWT-DCT and Huffman Encoding Techniques for Biomedical Image and Video Applications
    International Journal of Computer Science and Mobile Computing, Vol. 2, Issue 5, May 2013, pp. 255-261. ISSN 2320-088X. Research Article. Available online at www.ijcsmc.com
    Hybrid Compression Using DWT-DCT and Huffman Encoding Techniques for Biomedical Image and Video Applications, by K.N. Bharath, G. Padmajadevi (Associate Professor), and Kiran, Department of E&C Engineering, Malnad College of Engineering, VTU, India. [email protected]; [email protected]; [email protected]
    Abstract: Digital image and video in their raw form require an enormous amount of storage capacity. Considering the important role played by digital imaging and video, it is necessary to develop a system that produces a high degree of compression while preserving critical image/video information. There are various transformation techniques used for data compression. Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) are the most commonly used transformations. DCT has a high energy-compaction property and requires fewer computational resources. On the other hand, DWT is a multi-resolution transformation. In this work, we propose a hybrid DWT-DCT, Huffman algorithm for image and video compression and reconstruction, taking benefit from the advantages of both algorithms. The algorithm performs the Discrete Cosine Transform (DCT) on the Discrete Wavelet Transform (DWT) coefficients. Simulations have been conducted on several natural, benchmark, medical, and endoscopic images. Several high-definition and endoscopic videos have also been used to demonstrate the advantage of the proposed scheme.
  • Implementation of a Fast Mpeg-2 Compliant Huffman Decoder
    Implementation of a Fast MPEG-2 Compliant Huffman Decoder, by Mikael Karlsson Rudberg ([email protected]) and Lars Wanhammar ([email protected]), Department of Electrical Engineering, Linköping University, S-581 83 Linköping, Sweden. Tel: +46 13 284059; fax: +46 13 139282
    Abstract: In this paper a 100 Mbit/s Huffman decoder implementation is presented. A novel approach is used in which parallel decoding of data is mixed with a serial input. The critical path has been reduced, and a significant increase in throughput is achieved. The decoder is aimed at the MPEG-2 video decoding standard and has therefore been designed to meet the required performance.
    1. Introduction: Huffman coding is a lossless compression technique often used in combination with other lossy compression methods, for instance in digital video and audio applications. The Huffman coding method uses codes with different lengths, where symbols with high probability are assigned shorter codes than symbols with lower probability.
    2. Huffman Decoder: Huffman decoding can be performed in numerous ways. One common principle is to decode the incoming bit stream in parallel [3, 4]. The simplified decoding process is described below:
    1. Feed a symbol decoder and a length decoder with M bits, where M is the length of the longest code word.
    2. The symbol decoder maps the input vector to the corresponding symbol. A length decoder will at the same time find the length of the input vector.
    3. The information from the length decoder is used by the input buffer to fill up the buffer again (with between one and M bits, Figure 1).
    The problem with this solution is the long critical path ...
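    The three-step decode loop above can be mimicked in software with a lookup table keyed by code-word prefixes; a hardware decoder performs the symbol/length lookup for all prefixes in parallel, while this toy Python version, our own sketch assuming the code table is known in advance, scans them sequentially:

```python
def decode(bits, table, max_len):
    """Table-driven Huffman decode mirroring the loop above: look at up to
    max_len (M) bits, map the unique matching prefix to its symbol and
    length, then advance by that length ('refill the buffer')."""
    out, pos = [], 0
    while pos < len(bits):
        window = bits[pos:pos + max_len]
        for length in range(1, len(window) + 1):   # symbol + length decoders
            sym = table.get(window[:length])
            if sym is not None:
                out.append(sym)
                pos += length                      # consume between 1 and M bits
                break
        else:
            raise ValueError("invalid code word at bit %d" % pos)
    return out

# Toy prefix code: A=0, B=10, C=11, so the longest code word is M=2 bits.
print(decode("0101100", {"0": "A", "10": "B", "11": "C"}, 2))
# ['A', 'B', 'C', 'A', 'A']
```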
  • Modification of Adaptive Huffman Coding for Use in Encoding Large Alphabets
    ITM Web of Conferences 15, 01004 (2017). DOI: 10.1051/itmconf/20171501004. CMES'17
    Modification of Adaptive Huffman Coding for use in encoding large alphabets, by Mikhail Tokovarov, Lublin University of Technology, Electrical Engineering and Computer Science Faculty, Institute of Computer Science, Nadbystrzycka 36B, 20-618 Lublin, Poland
    Abstract: The paper presents a modification of the Adaptive Huffman Coding method, a lossless data compression technique used in data transmission. The modification relates to the process of adding a new character to the coding tree: namely, the author proposes to introduce two special nodes instead of the single NYT (not yet transmitted) node of the classic method. One of the nodes is responsible for indicating the place in the tree where a new node is attached; the other node is used for sending the signal indicating the appearance of a character not yet present in the tree. The modified method was compared with existing coding methods in terms of overall data compression ratio and performance. The proposed method may be used for large alphabets, i.e., for encoding whole words instead of separate characters, when new elements are added to the tree comparatively frequently.
    1 Introduction: Efficiency and speed – the two issues that the current world of technology is centred on. Information technology (IT) is no exception in this matter. Such an area of IT as social media has become extremely popular and widely used, so that high transmission speed has gained great importance. ... Huffman coding is frequently chosen for implementing open source projects [3]. The present paper contains the description of a modification that may help to improve the algorithm of adaptive Huffman coding in terms of data savings.
    2 Study of related works
  • Huffman Based LZW Lossless Image Compression Using Retinex Algorithm
    ISSN (Print): 2319-5940; ISSN (Online): 2278-1021. International Journal of Advanced Research in Computer and Communication Engineering, Vol. 2, Issue 8, August 2013
    Huffman Based LZW Lossless Image Compression Using Retinex Algorithm, by Dalvir Kaur (Master of Technology in Computer Science & Engineering, Sri Guru Granth Sahib World University, Fatehgarh Sahib, Punjab, India) and Kamaljit Kaur (Assistant Professor, Department of Computer Science & Engineering, Sri Guru Granth Sahib World University, Fatehgarh Sahib, Punjab, India)
    Abstract: Image compression is an application of data compression that encodes the original image with few bits. The objective of image compression is to reduce the irrelevance and redundancy of the image data in order to be able to store or transmit data in an efficient form; image compression can thus reduce the transmission time over the network and increase the speed of transmission. In lossless image compression, no data is lost when compression is performed. In this research, a new lossless compression scheme is presented, named Huffman Based LZW Lossless Image Compression using Retinex Algorithm, which consists of three stages: in the first stage, Huffman coding is used to compress the image; in the second stage, all Huffman code words are concatenated together and then compressed with LZW coding and decoding; in the third stage, the Retinex algorithm is applied to the compressed image to enhance its contrast and improve image quality. The proposed technique is evaluated in terms of compression ratio (CR), peak signal-to-noise ratio (PSNR), and mean square error (MSE) in MATLAB.
    Keywords: Compression, Encode, Decode, Huffman, LZW.
  • 16.1 Digital “Modes”
    Contents
    16.1 Digital "Modes"
      16.1.1 Symbols, Baud, Bits and Bandwidth
      16.1.2 Error Detection and Correction
      16.1.3 Data Representations
      16.1.4 Compression Techniques
      16.1.5 Compression vs. Encryption
    16.2 Unstructured Digital Modes
      16.2.1 Radioteletype (RTTY)
      16.2.2 PSK31
      16.2.3 MFSK16
      16.2.4 DominoEX
      16.2.5 THROB
      16.2.6 MT63
      16.2.7 Olivia
    16.3 Fuzzy Modes
      16.3.1 Facsimile (fax)
      16.3.2 Slow-Scan TV (SSTV)
      16.3.3 Hellschreiber, Feld-Hell or Hell
    16.4 Structured Digital Modes
      16.4.1 FSK441
      16.4.2 JT6M
      16.4.3 JT65
      16.4.4 WSPR
      16.4.5 HF Digital Voice
      16.4.6 ALE
    16.5 Networking Modes
      16.5.1 OSI Networking Model
      16.5.2 Connected and Connectionless Protocols
      16.5.3 The Terminal Node Controller (TNC)
      16.5.4 PACTOR-I
      16.5.5 PACTOR-II
      16.5.6 PACTOR-III
      16.5.7 G-TOR
      16.5.8 CLOVER-II
      16.5.9 CLOVER-2000
      16.5.10 WINMOR
      16.5.11 Packet Radio
      16.5.12 APRS
      16.5.13 Winlink 2000
      16.5.14 D-STAR
      16.5.15 P25
    16.6 Digital Mode Table
    16.7 Glossary
    16.8 References and Bibliography
    Chapter 16 CD-ROM Content (Supplemental Files):
      • Table of digital mode characteristics (section 16.6)
      • ASCII and ITA2 code tables
      • Varicode tables for PSK31, MFSK16 and DominoEX
      • Tips for using FreeDV HF digital voice software by Mel Whitten, KØPFX
    Chapter 16, Digital Modes: There is a broad array of digital modes to service various needs, with more coming.
  • Real-Time Video Compression Techniques and Algorithms
    Real-Time Video Compression: Techniques and Algorithms, by Raymond Westwater and Borko Furht, Florida Atlantic University. Kluwer Academic Publishers, copyright © 1997.
    The Kluwer International Series in Engineering and Computer Science: Multimedia Systems and Applications. Consulting editor: Borko Furht, Florida Atlantic University. Recently published titles:
      • Video and Image Processing in Multimedia Systems, by Borko Furht, Stephen W. Smoliar, HongJiang Zhang. ISBN 0-7923-9604-9
      • Multimedia Systems and Techniques, edited by Borko Furht. ISBN 0-7923-9683-9
      • Multimedia Tools and Applications, edited by Borko Furht. ISBN 0-7923-9721-5
      • Multimedia Database Management Systems, by B. Prabhakaran. ISBN 0-7923-9784-3
    Contents: Preface; 1. The Problem of Video Compression (1.1 Overview of Video Compression Techniques; 1.2 Applications of Compressed Video; 1.3 Image and Video Formats; 1.4 Overview of the Book); 2. ...