LOVELY PROFESSIONAL UNIVERSITY
TERM PAPER OF DIGITAL COMMUNICATION SYSTEM (DATA COMPRESSION)
BY: Joginder Singh, Roll: 34, Sec: B6803

ABSTRACT:
This term paper deals with data compression: how it is done and lossy compression. It also explains the limitations and future aspects of data compression.

About the Author:-
Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electronic engineer, and cryptographer known as "the father of information theory". Shannon is famous for having founded information theory with one landmark paper published in 1948. But he is also credited with founding both digital computer and digital circuit design theory in 1937, when, as a 21-year-old master's student at MIT, he wrote a thesis demonstrating that electrical applications of Boolean algebra could construct and resolve any logical, numerical relationship. It has been claimed that this was the most important master's thesis of all time.

History:-
In his 1948 paper, "A Mathematical Theory of Communication," Claude E. Shannon formulated the theory of data compression. Shannon established that there is a fundamental limit to lossless data compression. This limit, called the entropy rate, is denoted by H. The exact value of H depends on the information source, more specifically, on the statistical nature of the source. It is possible to compress the source, in a lossless manner, with a compression rate close to H. It is mathematically impossible to do better than H.

Shannon also developed the theory of lossy data compression, better known as rate-distortion theory. In lossy data compression, the decompressed data does not have to be exactly the same as the original data. Instead, some amount of distortion, D, is tolerated. Shannon showed that, for a given source (with all its statistical properties known) and a given distortion measure, there is a function, R(D), called the rate-distortion function. The theory says that if D is the tolerable amount of distortion, then R(D) is the best possible compression rate.

When the compression is lossless (i.e., no distortion, or D=0), the best possible compression rate is R(0)=H (for a finite-alphabet source). In other words, the best possible lossless compression rate is the entropy rate. In this sense, rate-distortion theory is a generalization of lossless data compression theory, where we go from no distortion (D=0) to some distortion (D>0).

Lossless data compression theory and rate-distortion theory are known collectively as source coding theory. Source coding theory sets fundamental limits on the performance of all data compression algorithms. The theory, in itself, does not specify exactly how to design and implement these algorithms. It does, however, provide some hints and guidelines on how to achieve optimal performance.

In the following sections, we will describe how Shannon modeled an information source in terms of a random process, present Shannon's lossless source coding theorem, and discuss Shannon's rate-distortion theory. A background in probability theory is recommended.
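To make the entropy-rate limit concrete, the following small sketch (in Python; the sample string and the assumption of a memoryless source whose statistics are simply its symbol frequencies are illustrative choices, not part of this paper) estimates H for a short message and compares Shannon's lossless bound with a plain 8-bit character encoding:

    import math
    from collections import Counter

    def entropy_rate(text):
        # Estimate H in bits per symbol for a memoryless source,
        # using the symbol frequencies of the sample as its statistics.
        counts = Counter(text)
        total = len(text)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    message = "ask not what your country can do for you"
    H = entropy_rate(message)
    print(f"estimated entropy rate H = {H:.2f} bits/symbol")
    print(f"Shannon bound: about {H * len(message):.0f} bits for this message,")
    print(f"versus {8 * len(message)} bits as plain 8-bit characters")

No lossless scheme can, on average, beat the first figure for a source with these statistics, which is exactly the sense in which H is a fundamental limit.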
WHAT IS DATA COMPRESSION?

IN GENERAL:-
In computer science and information theory, data compression, source coding or bit-rate reduction is the process of encoding information using fewer bits than the original representation would use.

In terms of Digital:-
Compression is useful because it helps reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed (the option of decompressing the video in full before watching it may be inconvenient, and requires storage space for the decompressed video). The design of data compression schemes therefore involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (if using a lossy compression scheme), and the computational resources required to compress and uncompress the data.

Types of data compression:-
1. Lossy data compression
2. Lossless data compression

How file compression works:-
If you download many programs and files off the Internet, you've probably encountered ZIP files before. This compression system is a very handy invention, especially for Web users, because it lets you reduce the overall number of bits and bytes in a file so it can be transmitted faster over slower Internet connections, or take up less space on a disk. Once you download the file, your computer uses a program such as WinZip or StuffIt to expand the file back to its original size. If everything works correctly, the expanded file is identical to the original file before it was compressed.

At first glance, this seems very mysterious. How can you reduce the number of bits and bytes and then add those exact bits and bytes back later? As it turns out, the basic idea behind the process is fairly straightforward. In this article, we'll examine this simple method as we take a very small file through the basic process of compression.

Most types of computer files are fairly redundant -- they have the same information listed over and over again. File-compression programs simply get rid of the redundancy. Instead of listing a piece of information over and over again, a file-compression program lists that information once and then refers back to it whenever it appears in the original file.

Example to show compression:-
As an example, let's look at a type of information we're all familiar with: words. In John F. Kennedy's 1961 inaugural address, he delivered this famous line:

"Ask not what your country can do for you -- ask what you can do for your country."

The quote has 17 words, made up of 61 letters, 16 spaces, one dash and one period. If each letter, space or punctuation mark takes up one unit of memory, we get a total file size of 79 units. To get the file size down, we need to look for redundancies. Immediately, we notice that:

• "ask" appears two times
• "what" appears two times
• "your" appears two times
• "country" appears two times
• "can" appears two times
• "do" appears two times
• "for" appears two times
• "you" appears two times

Ignoring the difference between capital and lower-case letters, roughly half of the phrase is redundant. Nine words -- ask, not, what, your, country, can, do, for, you -- give us almost everything we need for the entire quote. To construct the second half of the phrase, we just point to the words in the first half and fill in the spaces and punctuation.
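This "list the information once and point back to it" idea can be sketched in a few lines of code. The following sketch (in Python; the simplified unit counting and token handling are illustrative choices, not the exact scheme used by ZIP) builds a small dictionary of distinct words from the quote and rewrites the quote as pointers into that dictionary:

    # Sketch of dictionary-style compression of the Kennedy quote.
    quote = ("Ask not what your country can do for you -- "
             "ask what you can do for your country.")

    tokens = quote.lower().replace(".", "").split()
    dictionary = []       # each distinct word stored once
    pointers = []         # the quote rewritten as references into the dictionary
    for word in tokens:
        if word not in dictionary:
            dictionary.append(word)
        pointers.append(dictionary.index(word))

    print("dictionary:", dictionary)
    print("pointers:  ", pointers)

    original_units = len(quote)                                   # one unit per character
    compressed_units = sum(len(w) for w in dictionary) + len(pointers)
    print(f"original: {original_units} units, compressed: about {compressed_units} units")

Storing each distinct word once plus a short pointer for every occurrence already cuts the size roughly in half, which is the same saving the word counts above suggest.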
Lossy compression:-
In information technology, "lossy" compression is a data encoding method which compresses data by discarding (losing) some of it. The procedure aims to minimise the amount of data that needs to be held, handled, and/or transmitted by a computer. Typically, a substantial amount of data can be discarded before the result is sufficiently degraded to be noticed by the user.

Lossy compression is most commonly used to compress multimedia data (audio, video, still images), especially in applications such as streaming media and internet telephony. By contrast, lossless compression is required for text and data files, such as bank records, text articles, etc. In many cases it is advantageous to make a master lossless file which can then be used to produce compressed files for different purposes; for example, a multi-megabyte file can be used at full size to produce a full-page advertisement in a glossy magazine, and a 10-kilobyte lossy copy can be made for a small image on a web page.

The compression achievable for video is nearly always far superior to that of the audio and still-image equivalents:

• Video can be compressed immensely (e.g. 100:1) with little visible quality loss;
• Audio can often be compressed at 10:1 with imperceptible loss of quality;
• Still images are often lossily compressed at 10:1, as with audio, but the quality loss is more noticeable, especially on closer inspection.

In the case of lossy compression the compressed file is about 5 to 6% of the original size, while in the case of lossless compression it is about 50 to 60% of the actual file.

Transform coding:-
More generally, lossy compression can be thought of as an application of transform coding – in the case of multimedia data, perceptual coding: it transforms the raw data to a domain that more accurately reflects the information content. For example, rather than expressing a sound file as the amplitude levels over time, one may express it as the frequency spectrum over time, which corresponds more accurately to human audio perception (a small numerical sketch of this idea is given at the end of this section).

While data reduction (compression, be it lossy or lossless) is a main goal of transform coding, it also allows other goals: one may represent data more accurately for the original amount of space [1] – for example, in principle, if one starts with an analog or high-resolution digital master, an MP3 file of a given bitrate (e.g. 320 kbit/s) should provide a better representation than raw uncompressed audio in a WAV or AIFF file of the same bitrate. (Uncompressed audio can get a lower bitrate only by lowering the sampling frequency and/or sampling resolution.) Further, transform coding may provide a better domain for manipulating or otherwise editing the data – for example, equalization of audio is most naturally expressed in the frequency domain (boost the bass, for instance) rather than in the raw time domain.

From this point of view, perceptual encoding is not essentially about discarding data, but rather about a better representation of data.

Another use is for backward compatibility and graceful degradation: in color television, encoding color via a luminance-chrominance transform domain (such as YUV) means that black-and-white sets display the luminance, while ignoring the color information.

Lowering resolution:-
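The transform-coding idea discussed above can be illustrated numerically. The following sketch (in Python with NumPy; the two-tone test signal and the choice of keeping only eight coefficients are illustrative assumptions, not taken from this paper) moves a signal from the time domain to the frequency domain, discards all but the strongest coefficients, and reconstructs a close approximation:

    import numpy as np

    # A two-tone test signal sampled at 1024 points over one second.
    t = np.linspace(0.0, 1.0, 1024, endpoint=False)
    signal = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 250 * t)

    # Transform: time domain -> frequency domain.
    spectrum = np.fft.rfft(signal)

    # Lossy step: keep only the strongest coefficients, discard the rest.
    keep = 8
    threshold = np.sort(np.abs(spectrum))[-keep]
    compressed = np.where(np.abs(spectrum) >= threshold, spectrum, 0)

    # Inverse transform reconstructs a close approximation of the original.
    reconstructed = np.fft.irfft(compressed, n=signal.size)
    error = np.max(np.abs(signal - reconstructed))
    print(f"kept {keep} of {spectrum.size} coefficients, "
          f"max reconstruction error = {error:.6f}")

Because most of the signal's information sits in a handful of frequency components, almost everything else can be discarded with very little visible (or audible) damage, which is why the frequency domain is a better place to decide what to throw away.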