A Survey of Various Data Compression Techniques

Charles Alexander Smith

April 1, 2010

1 Introduction

This paper is a survey of various methods of data compression. When the computer age came about in the 1940s, storage space became an issue, and data compression was the answer to that problem. The compression process takes an original data set and reduces its size by removing unnecessary data. There are two main types of compression: lossy and lossless. This paper deals exclusively with lossy compression, in which the compression/decompression algorithm deletes some data from the original file that is never recovered. There are many methods of compression; this paper will go into depth about compression using Fourier transforms and wavelet compression. Each of these methods has strengths and weaknesses with regard to different types of files, and the paper will also cover those aspects of the two methods.

2 Background

2.1 History

The use of data compression began as early as 1838 in Morse code.[1] Letters that were common in English were given shorter codes to save time as messages were being transmitted.[1] Over 100 years later, as the computer age was on the rise, this simple Morse code idea was built upon and grew into the field now known as information theory. Even before information theory, however, the mathematician Joseph Fourier observed that "any sufficiently smooth function could be decomposed into sums of sine waves with frequencies corresponding to successive integers."[2] Fourier's method was applied to sound: after breaking a function into its frequencies, the highest and lowest frequencies could be dropped while the rest were kept. The change could not be heard by the human ear, and space was saved during recordings. This methodology is still used in compression algorithms today. The same method was applied to images in the 1950s in an attempt to decrease the amount of data that needed to be sent to televisions; however, no practical solution emerged at that time. In fact, it took another 30 years for image compression to become prevalent in mainstream technology.[2]

The area of wavelet compression was founded by Alfred Haar, a Hungarian mathematician, in the early 20th century.[3] Haar created the first known wavelet, which is now known as the Haar wavelet. This area of compression has grown rapidly since the 1970s, when interest in wavelets and their uses began to spread through the mathematical community.

Fractal compression is a lossy image compression method that has proven more efficient than image compression using Fourier transforms. In 1987, Michael Barnsley was at the forefront of the development of fractal compression, and he currently holds many patents on its algorithms.[4] Fractal compression achieves very high compression ratios, which makes it attractive for commercial use, especially for companies that transfer large amounts of data over the internet. Many companies, from Microsoft to Netflix, now use this type of algorithm; without it, streaming information over the internet would take many times longer.
Although compressive sensing has been studied for over 40 years, it did not see significant strides until 2004.[5] At that time, the mathematician Emmanuel J. Candes was performing research with magnetic resonance imaging.[5] He found that an image could be reconstructed even when the data seemed insufficient by the Nyquist-Shannon criterion.[5] This criterion states that any bandlimited signal can be reconstructed if the sampling rate, in samples per second, is greater than twice the highest frequency in the original signal.[6] With this finding, the area of compressive sensing exploded, and it is a current hot spot in information theory.

2.2 Mathematical Background

In order to understand the theory behind compression and decompression, it is necessary to recall several key ideas from linear algebra. These underlying ideas make the compression process possible.

Let $\beta = \{b_1, b_2, \ldots, b_k\}$ be a set of vectors in $\mathbb{C}^n$. A linear combination of these vectors is any expression of the form
$$c_1 b_1 + c_2 b_2 + \cdots + c_k b_k,$$
where $c_1, c_2, \ldots, c_k$ are scalars. A set of vectors is linearly independent if
$$\sum_{j=1}^{k} c_j b_j = 0 \quad \text{implies} \quad c_j = 0 \text{ for all } j = 1, 2, \ldots, k.$$
Additionally, the set of all linear combinations of the vectors in $\beta$ is called the span of $\beta$. A basis is a set of vectors that is both linearly independent and spanning.

The conjugate of a complex number $z = a + bi$ will be denoted by $\bar{z} = a - bi$. If $B$ is an $n \times k$ matrix with complex entries $b_{j,l}$, for $j = 1, 2, \ldots, n$ and $l = 1, 2, \ldots, k$, then the conjugate transpose of $B$ is the $k \times n$ matrix $B^*$ whose $(l, j)$ entry is $\overline{b_{j,l}}$, for $l = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, n$.

The inner product of two vectors $b_j$ and $b_l$ is defined by
$$(b_j, b_l) = b_l^* b_j.$$
Note that the inner product of two vectors is a single complex number. Two vectors are said to be orthogonal if their inner product is equal to zero. A set of non-zero vectors is said to be orthogonal if every pair of distinct vectors from it is orthogonal. The length, or norm, of a vector $b$ is defined by
$$\|b\| = \sqrt{b^* b}.$$
A set $\beta$ of non-zero vectors is said to be orthonormal if $\beta$ is orthogonal and the length of each vector in $\beta$ is one.

Theorem 1. If $\{b_1, \ldots, b_k\}$ is orthogonal, then $\{b_1, \ldots, b_k\}$ is linearly independent.

Proof. First, assume that $c_1 b_1 + \cdots + c_k b_k = 0$. To show linear independence, we must show that $c_1 = c_2 = \cdots = c_k = 0$. To do this, fix $j \in \{1, 2, \ldots, k\}$ and multiply both sides of the first equation on the left by $b_j^*$:
$$b_j^* (c_1 b_1 + \cdots + c_j b_j + \cdots + c_k b_k) = b_j^* 0.$$
Using the distributive law, we obtain the simplified equation
$$c_1 b_j^* b_1 + \cdots + c_j b_j^* b_j + \cdots + c_k b_j^* b_k = 0. \quad (1)$$
By orthogonality,
$$b_j^* b_l = \begin{cases} 0 & \text{if } j \neq l, \\ \|b_j\|^2 & \text{if } j = l. \end{cases}$$
So, using this information in conjunction with (1),
$$c_1 (0) + \cdots + c_j \|b_j\|^2 + \cdots + c_k (0) = 0.$$
This shows that $c_j \|b_j\|^2 = 0$; however, since $b_j$ is non-zero, it must be true that $c_j = 0$. Since $j$ was arbitrary in $\{1, 2, \ldots, k\}$, the proof is complete.

Corollary 1. If $\{b_1, \ldots, b_n\}$ is an orthogonal set in $\mathbb{C}^n$, then it is a basis for $\mathbb{C}^n$.

Proof. By Theorem 1, the $n$ vectors $b_1, b_2, \ldots, b_n$ are linearly independent. Since the dimension of $\mathbb{C}^n$ is $n$, it follows that the vectors must form a basis for $\mathbb{C}^n$.
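As a concrete illustration of Theorem 1 and Corollary 1, the following NumPy sketch (an illustrative addition, not code from the paper) checks that an arbitrary example of an orthogonal set in $\mathbb{C}^3$ has vanishing pairwise inner products and is linearly independent; the specific vectors are chosen only for demonstration.

```python
import numpy as np

# An example orthogonal set in C^3 (arbitrary vectors chosen for illustration).
b1 = np.array([1.0, 1.0j, 0.0])
b2 = np.array([1.0j, 1.0, 0.0])
b3 = np.array([0.0, 0.0, 2.0])

vectors = [b1, b2, b3]

def inner(bj, bl):
    """Inner product as defined above: (b_j, b_l) = b_l^* b_j."""
    return np.vdot(bl, bj)  # np.vdot conjugates its first argument

# Every pair of distinct vectors is orthogonal (inner product zero).
for j in range(3):
    for l in range(3):
        if j != l:
            assert abs(inner(vectors[j], vectors[l])) < 1e-12

# Theorem 1 / Corollary 1: an orthogonal set of non-zero vectors is linearly
# independent, so the matrix with these vectors as columns has full rank and
# the set is a basis for C^3.
B = np.column_stack(vectors)
print(np.linalg.matrix_rank(B))  # prints 3
```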
Given a basis $\beta = \{b_1, \ldots, b_n\}$ in $\mathbb{C}^n$, we define the basis matrix $B$ as the $n \times n$ matrix whose columns are the basis vectors $b_i$, for $i = 1, 2, \ldots, n$.

Theorem 2. If $\beta$ is a basis in $\mathbb{C}^n$, then the associated basis matrix $B$ is invertible.

Proof. From basic linear algebra, we know that $B$ is invertible if and only if $Bx = s$ has a unique solution for every $s$ in $\mathbb{C}^n$. Let $s \in \mathbb{C}^n$. Since $\beta$ is a basis for $\mathbb{C}^n$, there exist scalars $c_1, c_2, \ldots, c_n$ such that
$$s = c_1 b_1 + c_2 b_2 + \cdots + c_n b_n.$$
So $Bx = s$ has the solution $x = c$, where $c$ is the vector of these scalars. To show uniqueness, suppose there exists a vector $y$ such that $y \neq c$ and $By = s$. Then $B(c - y) = Bc - By = s - s = 0$, which yields
$$(c_1 - y_1) b_1 + \cdots + (c_n - y_n) b_n = 0.$$
Since the basis vectors are linearly independent, $c_i - y_i = 0$ for all $i = 1, \ldots, n$, so $y = c$, contradicting our assumption. This shows uniqueness, which proves that $B$ is invertible.

Theorem 3. Let $\beta$ be an orthogonal basis in $\mathbb{C}^n$ with associated basis matrix $B$. Then $B^* B$ is a diagonal matrix.

Proof. The $(j, l)$ entry of $B^* B$ is $b_j^* b_l$. However, because the vectors are orthogonal, we have
$$b_j^* b_l = \begin{cases} 0 & \text{if } j \neq l, \\ \|b_j\|^2 & \text{if } j = l. \end{cases}$$
Hence,
$$B^* B = \begin{pmatrix} \|b_1\|^2 & 0 & \cdots & 0 \\ 0 & \|b_2\|^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \|b_n\|^2 \end{pmatrix}.$$

Corollary 2. If $\{b_1, \ldots, b_n\}$ is an orthonormal basis in $\mathbb{C}^n$, then the associated basis matrix $B$ is orthonormal.

Proof. We need to show that $B^* B = I$, where $I$ is the $n \times n$ identity matrix. This follows directly from Theorem 3 and the property of orthonormality, since each diagonal entry $\|b_j\|^2$ equals one. Note the implication: if the columns of $B$ are orthonormal, then $B^{-1} = B^*$.

3 Overview of the Compression/Decompression Process

For our purposes, a signal is a vector $s$ in $\mathbb{C}^n$. Usually, $n$ will be quite large. The individual entries of a signal $s$ are samples that represent the values of some continuous waveform $s(t)$ at discrete times. We will always assume that the samples are evenly spaced in time; in other words, we assume that they were obtained at a constant sampling frequency, which we denote by $f_s$. For example, if we sample an audio signal $s(t)$ at the standard rate of $f_s = 44{,}100$ hertz for $T = 5$ seconds, the result is a signal vector $s$ with $n = f_s T = 220{,}500$ samples.

Both the Fourier and wavelet compression algorithms studied in this project have the same overall structure. The original signal $s$ is written in terms of some basis $\beta$ in $\mathbb{C}^n$:
$$s = Bc = c_1 b_1 + c_2 b_2 + \cdots + c_n b_n. \quad (2)$$
The vector $c = (c_1, c_2, \ldots, c_n)$ is called the coefficient vector of $s$ with respect to the basis $\beta$.
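To make the orthonormal case concrete, here is a minimal NumPy sketch (an illustrative addition, not code from the paper): it builds an orthonormal basis of $\mathbb{C}^n$ from the QR factorization of a random complex matrix, verifies $B^* B = I$ as in Corollary 2, and recovers the coefficient vector of a signal via $c = B^* s$, so that $s = Bc$ as in equation (2). The dimension $n = 8$ and the random signal are arbitrary choices made only to keep the demo small.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # small for the demo; a 5-second clip at f_s = 44,100 Hz would have n = 220,500

# Build an orthonormal basis of C^n: the Q factor of a QR factorization of a
# random complex matrix has orthonormal columns, so it serves as a basis matrix B.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B, _ = np.linalg.qr(A)

# Corollary 2: B*B = I, hence B^{-1} = B^*.
assert np.allclose(B.conj().T @ B, np.eye(n))

# A "signal" s in C^n (random values here; in practice these are waveform samples).
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Coefficient vector of s with respect to the basis. Because B is orthonormal,
# solving s = Bc requires no general linear solve: c = B^* s.
c = B.conj().T @ s

# Equation (2): the signal is recovered exactly from its coefficient vector.
assert np.allclose(B @ c, s)
```

Because $B$ is orthonormal, computing $c$ costs only a matrix-vector product rather than solving a linear system, which is one reason orthonormal bases are preferred in compression algorithms.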
