Parallel Algorithms for Entropy-Coding Techniques ∗


ABDOU YOUSSEF
Department of EE & CS, The George Washington University, Washington, DC 20052, USA
email: [email protected]

∗ This work was performed in part at the National Institute of Standards and Technology.

Abstract: With the explosion of imaging applications, and due to the massive amounts of imagery data, data compression is essential. Lossless compression, also called entropy coding, is of special importance because not only does it serve as a stand-alone system for certain applications such as medical imaging, it is also an inherent part of lossy compression. Therefore, fast entropy coding/decoding algorithms are desirable. In this paper we develop parallel algorithms for several widely used entropy-coding techniques, namely, arithmetic coding, run-length encoding (RLE), and Huffman coding. Our parallel arithmetic coding algorithm takes O(log^2 N) time on an N-processor hypercube, where N is the input size. For RLE, our parallel coding and decoding algorithms take O(log N) time on N processors. Finally, in the case of Huffman coding, the parallel coding algorithm takes O(log^2 N + n log n) time, where n is the alphabet size, n << N. As for decoding, both arithmetic and Huffman decoding are hard to parallelize; however, special provisions can be made in many applications to make arithmetic decoding and Huffman decoding fairly parallel.

1 Introduction

With the explosion of imaging applications and due to the massive amounts of imagery data, data compression is essential to reduce the storage and transmission requirements of images and videos [5, 11, 14]. Compression can be lossless or lossy. Lossless compression, also called entropy coding, allows for perfect reconstruction of the data, whereas lossy compression does not.
Even in lossy compression, which is by far more prevalent in image and video compression, entropy coding is needed as a last stage after the data has been transformed and quantized [14]. Therefore, fast entropy coding algorithms are of prime importance, especially in online or real-time applications such as video teleconferencing.

Parallel algorithms are an obvious choice for fast processing. Therefore, in this paper we will develop parallel algorithms for several widely used entropy-coding techniques, namely, arithmetic coding [13], run-length encoding (RLE) [16], and Huffman coding [4]. Our parallel arithmetic coding algorithm takes O(log^2 N) time on an N-processor hypercube, where N is the input size. Unfortunately, arithmetic decoding seems to be hard to parallelize because it is a sequential process of essentially logical computations. In practice, however, files are broken down into many substrings before being arithmetic-coded, for precision reasons that will become clear later on. Accordingly, the coded streams of those substrings can be decoded in parallel.

For RLE, we design parallel algorithms for both encoding and decoding, each taking O(log N) time. Finally, in the case of Huffman coding, the coding algorithm is easily data-parallel. The statistics gathering for computing symbol probabilities before constructing the Huffman tree is parallelized to take O(log^2 N) time. Like arithmetic coding, Huffman decoding is highly sequential. However, in certain applications where the data is inherently broken into many blocks that are processed independently, as in JPEG/MPEG [5, 11], simple provisions can be made to have the bitstreams easily separable into many independent substreams that can be decoded independently in parallel.

It must be noted that other lossless compression techniques are also in use, such as Lempel-Ziv, bit-plane coding, and differential pulse-code modulation (DPCM) [14]. The first two will be considered in future work. The last technique, DPCM, is the subject matter of another paper appearing in this conference [?].

The paper is organized as follows. The next section gives a brief description of the various standard parallel operations that will be used in our algorithms. Section 3 develops a parallel algorithm for arithmetic coding. Section 4 develops parallel encoding and decoding algorithms for RLE. Section 5 addresses the parallelization of Huffman coding and decoding. Conclusions and future directions are given in Section 6.

2 Preliminaries

The parallel algorithms designed in this paper use several standard parallel operations. A list of those operations, along with a brief description of each, follows.

• Parsort(Y[0 : N−1], Z[0 : N−1], π[0 : N−1]): This operation sorts in parallel the input array Y into the output array Z, and records the permutation π that orders Y to Z: Z[k] = Y[π[k]]. Of the many parallel sorting algorithms, we use the fastest practical one, namely, Batcher's bitonic sorting [2], which takes O(log^2 N) parallel time on an N-processor hypercube.

• C = Parmult(A[0 : N−1]): This operation multiplies the N elements of the array A, yielding the product C. In this paper, the elements of A are 2×2 matrices. It clearly takes O(log N) time on O(N) processors connected as a hypercube.

• A[0 : N−1] = Parprefix(a[0 : N−1]): This is the well-known parallel prefix operation [7]. It computes from the input array a the array A where A[i] = a[0] + a[1] + ... + a[i], for all i = 0, 1, ..., N−1. Parallel prefix takes O(log N) time on N processors connected in a variety of ways, including the hypercube.

• A[0 : N−1] = Barrier-Parprefix(a[0 : N−1]): This operation assumes that the input array a is divided into groups of consecutive elements; every group has a left-barrier at its start and a right-barrier at its end. Barrier-Parprefix performs a parallel prefix within each group independently of the other groups. It takes O(log N) time on an N-processor hypercube. To see this, let f[0 : N−1] be a flag array where f[k] = 0 if k is a right-barrier, and f[k] = 1 otherwise. Clearly, A[i] = f[i−1]A[i−1] + a[i] for all i = 0, 1, ..., N−1. The latter is a linear recurrence relation, which can be solved in O(log N) time on an N-processor hypercube [6].
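For concreteness, the following Python sketch models what Parprefix and Barrier-Parprefix compute; it is a sequential reference under stated assumptions (the function names, the list-based interface, and the example data are illustrative), and it does not model the O(log N) hypercube schedule itself.

# Sequential reference sketch of the Parprefix and Barrier-Parprefix
# operations described above. A real implementation would solve the
# recurrence in O(log N) time on N processors; here we only model the
# result the parallel operations produce.

def parprefix(a):
    """Prefix sums: A[i] = a[0] + a[1] + ... + a[i]."""
    A = []
    total = 0
    for x in a:
        total += x
        A.append(total)
    return A

def barrier_parprefix(a, f):
    """Prefix sums that restart at group boundaries.

    f[k] = 0 if position k is a right-barrier (its group ends at k),
    f[k] = 1 otherwise, exactly as in the flag-array argument above.
    Implements the linear recurrence A[i] = f[i-1]*A[i-1] + a[i].
    """
    A = [0] * len(a)
    for i in range(len(a)):
        A[i] = (f[i - 1] * A[i - 1] if i > 0 else 0) + a[i]
    return A

# Example: two groups {3, 1, 4} and {1, 5, 9}; the prefix sums restart
# after the right-barrier at index 2.
print(barrier_parprefix([3, 1, 4, 1, 5, 9], [1, 1, 0, 1, 1, 0]))
# prints [3, 4, 8, 1, 6, 15]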
3 Parallel Arithmetic Coding

Arithmetic coding [13] relies heavily on the probability distributions of the input files to be coded. Essentially, arithmetic coding maps each input file to a subinterval [L, R] of the unit interval [0, 1] such that the probability of the input file is R − L. Afterwards, it represents the fraction value L in n-ary using r = ⌈−log_n(R − L)⌉ n-ary digits, where n is the size of the alphabet. The stream of those r digits is taken to be the code of the input file.

The mapping of an input file into a subinterval [L, R] is done progressively by reading the file and updating the [L, R] subinterval to always be the corresponding subinterval of the input substring scanned so far. The update rule works as follows. Assume that the input file is the string x[0 : N−1], where every symbol is in the alphabet {a_0, a_1, ..., a_{n−1}}, and that the substring x[0 : k−1] has been processed, i.e., mapped to the interval [L, R]. Let P_ki be the probability that the next symbol is a_i given that the previous symbols are x[0 : k−1]. If the next symbol x[k] is a_i, the interval is refined to [L + (R−L)P_k, L + (R−L)(P_k + P_ki)], where P_k = P_k0 + P_k1 + ... + P_k,i−1 is the cumulative probability of the symbols preceding a_i.

If the alphabet is not already of the form {0, 1, ..., n−1}, a one-to-one mapping of the symbols onto these integers is applied at the outset before arithmetic coding starts, and the inverse mapping is applied after arithmetic decoding is completed. Henceforth, we will assume the alphabet to be {0, 1, ..., n−1}.

The conditional probabilities {P_ki} are either computed statistically from the input file or derived from an assumed theoretical probabilistic model of the input files. Naturally, the statistical method is the one used most often, and it will be assumed here. The structure of the probabilistic model is, however, still useful in knowing what statistical data should be gathered. The model often used is the Markov model of a certain order m, where m tends to be fairly small, on the order of 1–5. That is, the probability that the next symbol is of some value a depends only on the values of the previous m symbols. Therefore, to determine statistically the probability that the next symbol is a given that the previous m symbols are some b_1 b_2 ... b_m, it suffices to compute the frequency of occurrences of the substring b_1 b_2 ... b_m a in the input string, and normalize that frequency by N, which is the total number of substrings of length m+1 symbols in the zero-padded input string. The padding of m 0's to the left of x is taken to simplify the statistics gathering at the left boundary of x: assume that the imaginary symbols x[−m : −1] are all 0.

To summarize, the sequential algorithm for computing the statistical probabilities and performing arithmetic coding is presented next.

Algorithm Arithmetic-coding(in: x[0 : N−1]; out: B)
begin
/* The alphabet is assumed to be {0, 1, ..., n−1} */
Phase I: Statistics Gathering
  for k = 0 to N−1 do
    /* compute the {P_ki}'s, which are initialized to 0 */
    compute the frequency f_k of the substring x[k−m : k] in the whole string x;
    set Q_k = f_k / N;
    let i = x[k], and set P_ki = Q_k;
  endfor
  for k = 0 to N−1 do
    let i = x[k]; set P_k = P_k0 + P_k1 + ... + P_k,i−1;
  endfor
Phase II: finding the interval [L, R] corresponding to the string x
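Since the excerpt breaks off at Phase II, the following minimal sequential Python sketch illustrates both phases end to end. It is an illustration under stated assumptions, not the paper's parallel algorithm: the function name arithmetic_interval is invented, exact fractions stand in for the finite-precision arithmetic the paper alludes to, the (m+1)-gram frequencies are normalized within each context so the conditional probabilities sum to 1 (a detail the listing leaves implicit), and extraction of the final r-digit codeword from L is omitted.

# Minimal sequential sketch of the statistics gathering (Phase I) and
# interval refinement (Phase II) described above -- illustrative only.
# Assumes an order-m Markov model over the alphabet {0, 1, ..., n-1}.
from fractions import Fraction
from collections import Counter

def arithmetic_interval(x, n, m):
    """Return the subinterval [L, R) of [0, 1) that codes the string x."""
    N = len(x)
    padded = [0] * m + list(x)          # pad m zeros on the left, as above

    # Phase I: frequency of every (m+1)-symbol substring, normalized by N.
    counts = Counter(tuple(padded[k:k + m + 1]) for k in range(N))

    # Phase II: refine [L, R) one symbol at a time.
    L, R = Fraction(0), Fraction(1)
    for k in range(N):
        context = tuple(padded[k:k + m])
        # Frequencies of each candidate next symbol in this context,
        # normalized within the context so the subintervals tile [L, R).
        probs = [Fraction(counts[context + (i,)], N) for i in range(n)]
        total = sum(probs)               # > 0: position k has this context
        probs = [p / total for p in probs]
        i = x[k]
        P_k = sum(probs[:i])             # cumulative prob. of smaller symbols
        width = R - L
        L, R = L + width * P_k, L + width * (P_k + probs[i])
    return L, R

# Example: code the string 2 0 1 1 0 over the alphabet {0, 1, 2}
# with an order-1 Markov model.
L, R = arithmetic_interval([2, 0, 1, 1, 0], n=3, m=1)
print(L, R, R - L)   # R - L is the modeled probability of the string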