
Silesian University of Technology
Faculty of Automatic Control, Electronics and Computer Science
Institute of Computer Science

Doctor of Philosophy Dissertation

Universal lossless data compression algorithms

Sebastian Deorowicz

Supervisor: Prof. dr hab. inż. Zbigniew J. Czech

Gliwice, 2003

To My Parents

Contents

1 Preface
2 Introduction to data compression
  2.1 Preliminaries
  2.2 What is data compression?
  2.3 Lossy and lossless compression
    2.3.1 Lossy compression
    2.3.2 Lossless compression
  2.4 Definitions
  2.5 Modelling and coding
    2.5.1 Modern paradigm of data compression
    2.5.2 Modelling
    2.5.3 Entropy coding
  2.6 Classes of sources
    2.6.1 Types of data
    2.6.2 Memoryless source
    2.6.3 Piecewise stationary memoryless source
    2.6.4 Finite-state machine sources
    2.6.5 Context tree sources
  2.7 Families of universal algorithms for lossless data compression
    2.7.1 Universal compression
    2.7.2 Ziv–Lempel algorithms
    2.7.3 Prediction by partial matching algorithms
    2.7.4 Dynamic Markov coding algorithm
    2.7.5 Context tree weighting algorithm
    2.7.6 Switching method
  2.8 Specialised compression algorithms
3 Algorithms based on the Burrows–Wheeler transform
  3.1 Description of the algorithm
    3.1.1 Compression algorithm
    3.1.2 Decompression algorithm
  3.2 Discussion of the algorithm stages
    3.2.1 Original algorithm
    3.2.2 Burrows–Wheeler transform
    3.2.3 Run length encoding
    3.2.4 Second stage transforms
    3.2.5 Entropy coding
    3.2.6 Preprocessing the input sequence
4 Improved compression algorithm based on the Burrows–Wheeler transform
  4.1 Modifications of the basic version of the compression algorithm
    4.1.1 General structure of the algorithm
    4.1.2 Computing the Burrows–Wheeler transform
    4.1.3 Analysis of the output sequence of the Burrows–Wheeler transform
    4.1.4 Probability estimation for the piecewise stationary memoryless source
    4.1.5 Weighted frequency count as the algorithm's second stage
    4.1.6 Efficient probability estimation in the last stage
  4.2 How to compare data compression algorithms?
    4.2.1 Data sets
    4.2.2 Multi criteria optimisation in compression
  4.3 Experiments with the algorithm stages
    4.3.1 Burrows–Wheeler transform computation
    4.3.2 Weight functions in the weighted frequency count transform
    4.3.3 Approaches to the second stage
    4.3.4 Probability estimation
  4.4 Experimental comparison of the improved algorithm and the other algorithms
    4.4.1 Choosing the algorithms for comparison
    4.4.2 Examined algorithms
    4.4.3 Comparison procedure
    4.4.4 Experiments on the Calgary corpus
    4.4.5 Experiments on the Silesia corpus
    4.4.6 Experiments on files of different sizes and similar contents
    4.4.7 Summary of comparison results
5 Conclusions
Acknowledgements
Bibliography
Appendices
A Silesia corpus
B Implementation details
C Detailed options of examined compression programs
D Illustration of the properties of the weight functions
E Detailed compression results for files of different sizes and similar contents
List of Symbols and Abbreviations
List of Figures
List of Tables
Index

Chapter 1

Preface

I am now going to begin my story (said the old man), so please attend.
—ANDREW LANG, The Arabian Nights Entertainments (1898)

Contemporary computers process and store huge amounts of data. Some parts of these data are redundant. Data compression is a process that reduces the data size by removing the redundant information. Why is a shorter data sequence often more suitable? The answer is simple: it reduces costs. A full-length movie of high quality could occupy a vast part of a hard disk, while the compressed movie can be stored on a single CD-ROM. Large amounts of data are transmitted by telecommunication satellites. Without compression we would have to launch many more satellites than we do to transmit the same number of television programs. The capacity of Internet links is also limited, and several methods are used to reduce the immense amount of transmitted data. Some of them, such as mirror or proxy servers, minimise the number of long-distance transmissions. Other methods reduce the size of the data by compressing them. Multimedia is a field in which data of vast sizes are processed. The sizes of text documents and application files also grow rapidly. Another type of data for which compression is useful is database tables. Nowadays, the amount of information stored in databases grows fast, while their contents often exhibit much redundancy.

Data compression methods can be classified in several ways. One of the most important criteria of classification is whether the compression algorithm removes some parts of data which cannot be recovered during the decompression. The algorithms that irreversibly remove some parts of the data are called lossy, while the others are called lossless. The lossy algorithms are usually used when a perfect consistency with the original data is not necessary after the decompression. Such a situation occurs, for example, in compression of video or picture data. If the recipient of the video is a human, then small changes of the colours of some pixels introduced during the compression could be imperceptible. The lossy compression methods typically yield much better compression ratios than lossless algorithms, so if we can accept some distortions of the data, these methods can be used. There are, however, situations in which the lossy methods must not be used to compress picture data. In many countries, medical images can be compressed only by lossless algorithms, because of legal regulations.

One of the main strategies in developing compression methods is to prepare a specialised compression algorithm for the data we are going to transmit or store. One of many examples where this approach is useful comes from astronomy. The distance to a spacecraft exploring the universe is huge, which causes serious communication problems. A critical situation took place during the Jupiter mission of the Galileo spacecraft. After two years of flight, Galileo's high-gain antenna did not open.
There was a way to get the collected data through a supporting antenna, but the data transmission speed through it was low. The supporting antenna was designed to work at a speed of 16 bits per second at the Jupiter distance. The Galileo team improved this speed to 120 bits per second, but the transmission time was still quite long. Another way to improve the transmission speed was to apply a highly efficient compression algorithm. The compression algorithm that runs on the Galileo spacecraft reduces the data size about 10 times before sending. The data have been transmitted continuously since 1995. Let us imagine the situation without compression: to receive the same amount of data, we would have to wait about 80 years.

The situation described above is of course specific: we have good knowledge of what kind of information is transmitted, reducing the size of the data is crucial, and the cost of developing a compression method is of lower importance. In general, however, it is not possible to prepare a specialised compression method for each type of data. The main reasons are that it would result in a vast number of algorithms, and that the cost of developing a new compression method could surpass the gain obtained by the reduction of the data size. On the other hand, if we assume nothing about the data, we have no way of finding the redundant information. Thus a compromise is needed. The standard approach in compression is to define classes of sources producing different types of data. We assume that the data are produced by a source of some class and apply a compression method designed for this particular class. The algorithms that work well on data which can be approximated as the output of some general source class are called universal.

Before we turn to the families of universal lossless data compression algorithms, we have to mention the entropy coders. An entropy coder is a method that assigns to every symbol from the alphabet a code depending on the probability of the symbol's occurrence. The symbols that are more probable to occur get shorter codes than the less probable ones. The codes are assigned to the symbols in such a way that the expected length of the compressed sequence is minimal. The most popular entropy coders are the Huffman coder and the arithmetic coder. Both methods are optimal, so one cannot assign codes for which the expected compressed sequence length would be shorter. The Huffman coder is optimal in the class of methods that assign codes of integer length, while the arithmetic coder is free from this limitation. Therefore it usually leads to a shorter expected code length.

A number of universal lossless data compression algorithms have been proposed, and nowadays they are widely used. Historically, the first ones were introduced by Ziv and Lempel [202, 203] in 1977–78. The authors proposed to search the data to compress for identical parts and to replace the repetitions with the information on where the identical subsequences appeared before. This task can be accomplished in several ways.
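To make the repetition-replacement idea concrete, the following Python fragment is a minimal sketch in the spirit of the 1977 Ziv–Lempel scheme. It is only an illustration, not the coder analysed in this dissertation; the names and the values of WINDOW and MIN_MATCH are arbitrary choices for the example, and the match search is deliberately naive.

WINDOW = 4096      # how far back we look for an earlier occurrence (illustrative value)
MIN_MATCH = 3      # repetitions shorter than this are kept as literals

def lz77_compress(data: bytes):
    """Return tokens: ('L', byte) for literals, ('M', offset, length) for repetitions."""
    tokens, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        # Naive search for the longest earlier occurrence of the upcoming bytes.
        for j in range(max(0, i - WINDOW), i):
            length = 0
            while (i + length < len(data)
                   and j + length < i
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= MIN_MATCH:
            tokens.append(('M', best_off, best_len))   # reference to an earlier occurrence
            i += best_len
        else:
            tokens.append(('L', data[i]))              # no useful repetition: keep the symbol itself
            i += 1
    return tokens

def lz77_decompress(tokens) -> bytes:
    out = bytearray()
    for t in tokens:
        if t[0] == 'L':
            out.append(t[1])
        else:
            _, off, length = t
            for _ in range(length):
                out.append(out[-off])                  # copy from the already decoded text
    return bytes(out)

# Example: the second occurrence of "abracadabra" is replaced by a single reference.
text = b"abracadabra abracadabra"
assert lz77_decompress(lz77_compress(text)) == text

In a practical coder the emitted tokens would themselves be entropy coded and the match search would use an index such as a hash table or a suffix structure; the sketch above only shows the principle of replacing repetitions with references to earlier occurrences.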
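Returning to the entropy coders mentioned above, the next sketch, again in Python and again purely illustrative, shows how a Huffman code can be built so that more frequent symbols receive shorter codewords. It demonstrates the general principle only and is not the entropy coder used in the algorithms discussed later.

import heapq
from collections import Counter

def huffman_codes(frequencies: dict) -> dict:
    """Build a prefix code in which higher-frequency symbols get shorter codewords."""
    # Each heap entry: (total frequency, tie breaker, {symbol: code so far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                        # degenerate one-symbol alphabet
        return {sym: "0" for sym in frequencies}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)       # the two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

freqs = Counter("abracadabra")                # a:5, b:2, r:2, c:1, d:1
codes = huffman_codes(freqs)
# The most frequent symbol 'a' receives a one-bit code, while the rare
# symbols 'c' and 'd' receive three-bit codes.

Because the codeword lengths are whole numbers of bits, such a code can only approximate the symbol probabilities; this is exactly the limitation from which the arithmetic coder, mentioned above, is free.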