Source Encoding and Compression
Jukka Teuhola
Computer Science, Department of Information Technology, University of Turku
Spring 2016
Lecture notes

Table of Contents
1. Introduction
2. Coding-theoretic foundations
3. Information-theoretic foundations
4. Source coding methods
  4.1. Shannon-Fano coding
  4.2. Huffman coding
    4.2.1. Extended Huffman code
    4.2.2. Adaptive Huffman coding
    4.2.3. Canonical Huffman code
  4.3. Tunstall code
  4.4. Arithmetic coding
    4.4.1. Adaptive arithmetic coding
    4.4.2. Adaptive arithmetic coding for a binary alphabet
    4.4.3. Viewpoints to arithmetic coding
5. Predictive models for text compression
  5.1. Predictive coding based on fixed-length contexts
  5.2. Dynamic-context predictive compression
  5.3. Prediction by partial match
  5.4. Burrows-Wheeler Transform
6. Dictionary models for text compression
  6.1. LZ77 family of adaptive dictionary methods
  6.2. LZ78 family of adaptive dictionary methods
  6.3. Performance comparison
7. Introduction to Image Compression
  7.1. Lossless compression of bi-level images
  7.2. Lossless compression of grey-scale images
  7.3. Lossy image compression: JPEG

Literature (optional)
· T. C. Bell, J. G. Cleary, I. H. Witten: Text Compression, 1990.
· R. W. Hamming: Coding and Information Theory, 2nd ed., Prentice-Hall, 1986.
· K. Sayood: Introduction to Data Compression, 3rd ed., Morgan Kaufmann, 2006.
· K. Sayood: Lossless Compression Handbook, Academic Press, 2003.
· I. H. Witten, A. Moffat, T. C. Bell: Managing Gigabytes: Compressing and Indexing Documents and Images, 2nd ed., Morgan Kaufmann, 1999.
· Miscellaneous articles (mentioned in footnotes)

1. Introduction

This course is about data compression, which means reducing the size of the source data representation by decreasing the redundancy occurring in it. In practice this means choosing a suitable coding scheme for the source symbols. There are two practical motivations for compression: reduction of storage requirements, and increase of transmission speed in data communication. Actually, the former can be regarded as transmission of data ‘from now to then’.

We shall concentrate on lossless compression, where the source data must be recovered in the decompression phase exactly in its original form. The other alternative is lossy compression, where it is sufficient to recover the original data approximately, within specified error bounds. Lossless compression is typical for alphabetic and other symbolic source data types, whereas lossy compression is most often applied to numerical data resulting from digitally sampled continuous phenomena, such as sound and images.

There are two main fields of coding theory, namely
1. Source coding, which tries to represent the source symbols in minimal form for storage or transmission efficiency.
2. Channel coding, the purpose of which is to enhance detection and correction of transmission errors, by choosing symbol representations which are far apart from each other.

Data compression can be considered an extension of source coding. It can be divided into two phases:
1. Modelling of the information source means defining suitable units for coding, such as characters or words, and estimating the probability distribution of these units.
2. Source coding (also called statistical or entropy coding) is applied to the units, using their probabilities.

The theory of the latter phase is nowadays quite mature, and optimal methods are known. Modelling, instead, still offers challenges, because it is most often approximate and can be made more and more precise. On the other hand, there are also practical considerations in addition to minimizing the data size, such as compression and decompression speed, and also the size of the model itself.
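As a simplified illustration of this two-phase structure (not taken from these notes), the following Python sketch builds an order-0 character model by counting symbol frequencies, and then only estimates the output size that an ideal entropy coder would achieve with that model, at about -log2 p(s) bits per occurrence of symbol s. The function names and the example string are invented for the illustration.

```python
import math
from collections import Counter

def order0_model(text):
    """Modelling phase: estimate a probability for each source symbol
    from its relative frequency in the message (an order-0 model)."""
    counts = Counter(text)
    total = len(text)
    return {sym: c / total for sym, c in counts.items()}

def code_length_estimate(text, probs):
    """Coding phase (estimate only): an ideal entropy coder spends
    about -log2 p(s) bits per occurrence of symbol s."""
    return sum(-math.log2(probs[s]) for s in text)

message = "abracadabra"
probs = order0_model(message)
bits = code_length_estimate(message, probs)
print(f"{len(message)} symbols, about {bits:.1f} bits "
      f"({bits / len(message):.2f} bits/symbol) vs. 8 bits/symbol in ASCII")
```

The point of the sketch is only the separation of concerns: the model supplies probabilities, and the coder (here replaced by a length estimate) uses them; later chapters show how actual codes reach, or approach, this bound.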
In data transmission, the different steps can be thought to be performed in sequence, as follows:

  Source -> [Source encoding] -> [Channel encoding] -> Communication channel (errors) -> [Channel decoding] -> [Source decoding] -> Sink
  (The model is shared by the source encoder and the source decoder.)

We consider source and channel coding to be independent, and concentrate on the former. Both phases can also be performed either by hardware or software. We shall discuss only the software solutions, since they are more flexible.

In compression, we usually assume two alphabets, one for the source and the other for the target. The former could be, for example, ASCII or Unicode, and the latter is most often binary.

There are many ways to classify the huge number of suggested compression methods. One is based on the grouping of source and target symbols in compression:
1. Fixed-to-fixed: A fixed number of source symbols are grouped and represented by a fixed number of target symbols. This is seldom applicable; an example could be reducing the ASCII codes of the digits 0-9 to 4 bits, if we know (from our model) that only digits occur in the source (see the sketch after this list).
2. Variable-to-fixed: A variable number of source symbols are grouped and represented by fixed-length codes. These methods are quite popular, and many of the commercial compression packages belong to this class. A manual example is the Braille code, developed for the blind, where 2×3 arrays of dots represent characters, but also some combinations of characters.
3. Fixed-to-variable: The source symbols are represented by a variable number of target symbols. A well-known example of this class is the Morse code, where each character is represented by a variable number of dots and dashes. The most frequent characters are assigned the shortest codes, for compression. The target alphabet of Morse code is not strictly binary, because there are also inter-character and inter-word spaces.
4. Variable-to-variable: A variable-size group of source symbols is represented by a variable-size code. We could, for example, make an index of all words occurring in a source text and assign them variable-length binary codes, with the shortest codes of course for the most common words.
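The following small Python sketch shows the fixed-to-fixed example from category 1: two decimal digits, normally one ASCII byte each, are packed into a single byte of two 4-bit nibbles. It is not from the notes; the helper names and the padding convention are invented for the example.

```python
def pack_digits(s):
    """Fixed-to-fixed coding: every source digit maps to exactly 4 bits,
    so two digits fit into one output byte."""
    if len(s) % 2:           # pad to an even number of digits
        s += "0"             # (a real scheme would also record the length)
    return bytes(int(s[i]) << 4 | int(s[i + 1]) for i in range(0, len(s), 2))

def unpack_digits(packed, n_digits):
    """Decompression: split each byte back into its two 4-bit digits."""
    digits = []
    for b in packed:
        digits.append(str(b >> 4))
        digits.append(str(b & 0x0F))
    return "".join(digits[:n_digits])

msg = "20230415"
packed = pack_digits(msg)
print(len(msg), "ASCII bytes ->", len(packed), "bytes")   # 8 -> 4
assert unpack_digits(packed, len(msg)) == msg
```

The gain is exactly a factor of two, and it relies entirely on the model's guarantee that only digits occur in the source.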
The first two categories are often called block coding methods, where the block refers to the fixed-size result units. The best methods today (with respect to compressed size) belong to category 3, where extremely precise models of the source result in very high gains. For example, English text can typically be compressed to about 2 bits per source character. Category 4 is, of course, the most general, but it seems that it does not offer a notable improvement over category 3; instead, modelling of the source becomes more complicated, with respect to both space and time. Thus, in this course we shall mainly take example methods from categories 2 and 3. The former are often also called dictionary methods, and the latter statistical methods.

Another classification of compression methods is based on the availability of the source during compression:
1. Off-line methods assume that all the source data (the whole message¹) is available at the start of the compression process. Thus, the model of the data can be built before the actual encoding. This is typical of storage compression, which tries to reduce the consumed disk space.
2. On-line methods can start the coding without seeing the whole message at once. It is possible that the tail of the message is not even generated yet. This is the normal situation in data transmission, where the sender generates source symbols one-by-one and