Source Coding and Channel Coding for Mobile Multimedia Communication
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
-
Large Alphabet Source Coding Using Independent Component Analysis Amichai Painsky, Member, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE
Abstract: Large alphabet source coding is a basic and well-studied problem in data compression. It has many applications, such as compression of natural language text, speech and images. The classic perception of most commonly used methods is that a source is best described over an alphabet which is at least as large as the observed alphabet. In this work we challenge this approach and introduce a conceptual framework in which a large alphabet source is decomposed into "as statistically independent as possible" components. This decomposition allows us to apply entropy encoding to each component separately, while benefiting from their reduced alphabet size. We show that in many cases, such decomposition results in a sum of marginal entropies which is only slightly greater than the entropy of the source. Our suggested algorithm, based on a generalization of Binary Independent Component Analysis, is applicable for a variety of large alphabet source coding setups. This includes classical lossless compression, universal compression and high-dimensional vector quantization. In each of these setups, our suggested approach outperforms most commonly used methods. Moreover, our proposed framework is significantly easier to implement in most of these cases.
I. INTRODUCTION
Assume a source over an alphabet of size m, from which a sequence of n independent samples is drawn. The classical source coding problem is concerned with finding a sample-to-codeword mapping such that the average codeword length is minimal and the samples are uniquely decodable.
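The core idea above, recoding a large-alphabet symbol as a tuple of nearly independent components and entropy-coding each component over its smaller alphabet, can be illustrated with a toy calculation. The sketch below is not the paper's generalized binary ICA algorithm; the 256-symbol source, its split into two 16-ary components, and the sample count are invented for illustration. It only compares the source entropy H(X) with the sum of marginal entropies H(C1) + H(C2).

```python
import math
import random
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits/symbol) of an empirical distribution given as a Counter."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical 256-symbol source: each symbol is a pair (hi, lo) of 16-ary
# components that are only weakly dependent on each other.
random.seed(0)
samples = []
for _ in range(100_000):
    hi = random.randrange(16)
    # lo copies hi only rarely, so the two components are close to independent
    lo = hi if random.random() < 0.05 else random.randrange(16)
    samples.append((hi, lo))

joint = entropy(Counter(samples))                  # H(X) over 256 symbols
h_hi = entropy(Counter(hi for hi, _ in samples))   # H(C1) over 16 symbols
h_lo = entropy(Counter(lo for _, lo in samples))   # H(C2) over 16 symbols

print(f"H(X)            = {joint:.3f} bits")
print(f"H(C1) + H(C2)   = {h_hi + h_lo:.3f} bits")
print(f"redundancy cost = {h_hi + h_lo - joint:.3f} bits/symbol")
```

When the components are close to independent, the per-symbol penalty H(C1) + H(C2) - H(X) stays small, while each component can be entropy-coded over an alphabet of 16 rather than 256.
-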
Design and Implementation of a Decompression Engine for a Huffman-Based Compressed Data Cache Master’S Thesis in Embedded Electronic System Design
Chapter 1 Introduction
Design and implementation of a decompression engine for a Huffman-based compressed data cache. Master's Thesis in Embedded Electronic System Design.
LI KANG, Department of Computer Science and Engineering, CHALMERS UNIVERSITY OF TECHNOLOGY, Gothenburg, Sweden, 2014.
The Author grants to Chalmers University of Technology and University of Gothenburg the non-exclusive right to publish the Work electronically and in a non-commercial purpose make it accessible on the Internet. The Author warrants that he/she is the author of the Work, and warrants that the Work does not contain text, pictures or other material that violates copyright law. The Author shall, when transferring the rights of the Work to a third party (for example a publisher or a company), acknowledge the third party about this agreement. If the Author has signed a copyright agreement with a third party regarding the Work, the Author warrants hereby that he/she has obtained any necessary permission from this third party to let Chalmers University of Technology and University of Gothenburg store the Work electronically and make it accessible on the Internet.
Design and implementation of a decompression engine for a Huffman-based compressed data cache. © Li Kang, January 2014. Supervisor & Examiner: Angelos Arelakis, Per Stenström.
Chalmers University of Technology, Department of Computer Science and Engineering, SE-412 96 Göteborg, Sweden. Telephone +46 (0)31-772 1000.
[Cover: Pipelined Huffman-based decompression engine, page 8. Source: A. Arelakis and P. Stenström, "A Case for a Value-Aware Cache", IEEE Computer Architecture Letters, September 2012.]
Abstract: This master's thesis studies the implementation of a decompression engine for a Huffman-based compressed data cache.
-
Greedy Algorithm Implementation in Huffman Coding Theory
iJournals: International Journal of Software & Hardware Research in Engineering (IJSHRE), ISSN 2347-4890, Volume 8, Issue 9, September 2020.
Author: Sunmin Lee. Affiliation: Seoul International School. E-mail: [email protected]. DOI: 10.26821/IJSHRE.8.9.2020.8905
ABSTRACT: All aspects of modern society depend heavily on data collection and transmission. As society grows more dependent on data, the ability to store and transmit it efficiently has become more important than ever before. Huffman coding theory has been one of the best coding methods for data compression without loss of information. It relies heavily on a technique called a greedy algorithm, a process that "greedily" tries to find a globally optimal solution by making the optimal local choice at each step of a problem. Although there is a disadvantage that it fails to consider the problem as a whole, it is definitely … In the late 1900s and early 2000s, creating the code itself was a major challenge. However, now that the basic platform has been established, the efficiency that can be achieved through data compression has become the most valuable quality current technology deeply desires to attain. Data compression, which is used to efficiently store, transmit and process big data such as satellite imagery, medical data, wireless telephony and database design, is a method of encoding any information (image, text, video, etc.) into a format that consumes fewer bits than the original data. [8] Data compression can be of either of two types, i.e. lossy or lossless compression.
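As a concrete illustration of the greedy step described above, the sketch below builds a Huffman code with Python's heapq module: at every iteration the two least-frequent subtrees are merged, which is the locally optimal ("greedy") choice. This is a generic textbook construction, not the paper's implementation, and the example string "abracadabra" is arbitrary.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Return a {symbol: bitstring} prefix code built by the greedy Huffman procedure."""
    freq = Counter(text)
    if len(freq) == 1:                        # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Each heap entry: (weight, tie_breaker, {symbol: code_so_far})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)       # greedy: take the two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("abracadabra")
encoded = "".join(code[ch] for ch in "abracadabra")
print(code)
print(f"{len(encoded)} bits vs {8 * len('abracadabra')} bits uncompressed")
```

Because the merge order is driven purely by subtree weights, the algorithm never revisits an earlier decision, yet the resulting code is provably optimal among prefix codes for the given symbol frequencies.
-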
Revisiting Huffman Coding: Toward Extreme Performance on Modern GPU Architectures
Jiannan Tian, Cody Rivera, Sheng Di, Jieyang Chen, Xin Liang, Dingwen Tao, and Franck Cappello
School of Electrical Engineering and Computer Science, Washington State University, WA, USA; Department of Computer Science, The University of Alabama, AL, USA; Mathematics and Computer Science Division, Argonne National Laboratory, IL, USA; Oak Ridge National Laboratory, TN, USA; University of Illinois at Urbana-Champaign, IL, USA
Abstract: Today's high-performance computing (HPC) applications are producing vast volumes of data, which are challenging to store and transfer efficiently during the execution, such that data compression is becoming a critical technique to mitigate the storage burden and data movement cost. Huffman coding is arguably the most efficient entropy coding algorithm in information theory, such that it could be found as a fundamental step in many modern compression algorithms such as DEFLATE. On the other hand, today's HPC applications are more and more relying on accelerators such as GPUs on supercomputers, while Huffman encoding suffers from low throughput on GPUs, resulting in a significant bottleneck in the entire data processing.
…much more slowly than computing power, causing intra-/inter-node communication cost and I/O bottlenecks to become a more serious issue in fast stream processing [6]. Compressing the raw simulation data at runtime and decompressing them before post-analysis can significantly reduce communication and I/O overheads and hence improve working efficiency. Huffman coding is a widely-used variable-length encoding method that has been around for over 60 years [17]. It is arguably the most cost-effective entropy encoding algorithm…
-
Lecture 2: Variable-Length Codes Continued
Data Compression Techniques, Part 1: Entropy Coding. Juha Kärkkäinen, 01.11.2017.
Kraft's Inequality
When constructing a variable-length code, we are not really interested in what the individual codewords are as long as they satisfy two conditions:
- The code is a prefix code (or at least a uniquely decodable code).
- The codeword lengths are chosen to minimize the average codeword length.
Kraft's inequality gives an exact condition for the existence of a prefix code in terms of the codeword lengths.
Theorem (Kraft's Inequality). There exists a binary prefix code with codeword lengths ℓ_1, ℓ_2, …, ℓ_σ if and only if ∑_{i=1}^{σ} 2^(-ℓ_i) ≤ 1.
Proof of Kraft's Inequality
Consider a binary search on the real interval [0, 1). In each step, the current interval is split into two halves and one of the halves is chosen as the new interval. We can associate a search of ℓ steps with a binary string of length ℓ:
- Zero corresponds to choosing the left half.
- One corresponds to choosing the right half.
For any binary string w, let I(w) be the final interval of the associated search.
Example: 1011 corresponds to the search sequence [0, 1), [1/2, 2/2), [2/4, 3/4), [5/8, 6/8), [11/16, 12/16), and I(1011) = [11/16, 12/16).
Consider the set { I(w) : w ∈ {0, 1}^ℓ } of all intervals corresponding to binary strings of length ℓ.
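The theorem above is also constructive: whenever the lengths satisfy the inequality, codewords can be assigned in order of increasing length (the canonical construction). A minimal sketch, assuming nothing beyond the statement quoted above; the example lengths [1, 2, 3, 3] are chosen so that the Kraft sum is exactly 1.

```python
def kraft_sum(lengths):
    """Left-hand side of Kraft's inequality for a binary code."""
    return sum(2.0 ** -l for l in lengths)

def canonical_prefix_code(lengths):
    """Build binary codewords with the given lengths, if Kraft's inequality holds."""
    if kraft_sum(lengths) > 1 + 1e-12:
        raise ValueError("no prefix code exists for these lengths")
    codewords = []
    code = 0
    prev_len = 0
    for l in sorted(lengths):
        code <<= (l - prev_len)        # extend the running codeword to the new length
        codewords.append(format(code, f"0{l}b"))
        code += 1                      # next codeword at this length
        prev_len = l
    return codewords

lengths = [1, 2, 3, 3]                 # 1/2 + 1/4 + 1/8 + 1/8 = 1
print(kraft_sum(lengths))              # 1.0 -> a prefix code exists
print(canonical_prefix_code(lengths))  # ['0', '10', '110', '111']
```

Each codeword corresponds to one of the disjoint intervals I(w) from the proof sketch above, which is why no codeword can be a prefix of another.
-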
The Existence of Fibonacci Numbers in the Algorithmic Generator for Combinatoric Pascal Triangle
British Journal of Science, September 2014, Vol. 11 (2), ISSN 2047-3745.
BY Amannah, Constance Izuchukwu; [email protected]; +234 8037720614; Department of Computer Science, Faculty of Natural and Applied Sciences, Ignatius Ajuru University of Education, P.M.B. 5047, Port Harcourt, Rivers State, Nigeria; & Nanwin, Nuka Domaka, Department of Computer Science, Faculty of Natural and Applied Sciences, Ignatius Ajuru University of Education, P.M.B. 5047, Port Harcourt, Rivers State, Nigeria.
ABSTRACT: The discoveries of Leonardo of Pisa, better known as Fibonacci, are revolutionary contributions to the mathematical world. His best-known work is the Fibonacci sequence, in which each new number is the sum of the two numbers preceding it. When various operations and manipulations are performed on the numbers of this sequence, beautiful and incredible patterns begin to emerge. The numbers from this sequence are manifested throughout nature in the forms and designs of many plants and animals, and have also been reproduced in various manners in art, architecture, and music. This work simulated the Pascal triangle generator to produce the Fibonacci numbers or sequence. The Fibonacci numbers are generated by simply taking the sums of the "shallow" diagonals of Pascal's triangle; the Fibonacci numbers occur in these diagonal sums. The Pascal triangle generator is a combinatoric algorithm that outlines the steps necessary for generating the elements and their positions in the rows of a Pascal triangle.
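A minimal sketch of the property described above (not the authors' full generator): build Pascal's triangle row by row and sum its "shallow" diagonals; the sums reproduce the Fibonacci sequence.

```python
def pascal_triangle(n_rows):
    """First n_rows of Pascal's triangle as lists of binomial coefficients."""
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

def shallow_diagonal_sums(rows):
    """Sum C(n - k, k) along each shallow diagonal; the sums are Fibonacci numbers."""
    sums = []
    for n in range(len(rows)):
        total = 0
        for k in range(n // 2 + 1):
            total += rows[n - k][k]     # element k of row n - k
        sums.append(total)
    return sums

rows = pascal_triangle(10)
print(shallow_diagonal_sums(rows))      # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```
-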
The Deep Learning Solutions on Lossless Compression Methods for Alleviating Data Load on Iot Nodes in Smart Cities
Sensors (MDPI), Article. Ammar Nasif *, Zulaiha Ali Othman and Nor Samsiah Sani; Center for Artificial Intelligence Technology (CAIT), Faculty of Information Science & Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia; [email protected] (Z.A.O.); [email protected] (N.S.S.); * Correspondence: [email protected]
Abstract: Networking is crucial for smart city projects nowadays, as it offers an environment where people and things are connected. This paper presents a chronology of factors on the development of smart cities, including IoT technologies as network infrastructure. Increasing IoT nodes leads to increasing data flow, which is a potential source of failure for IoT networks. The biggest challenge of IoT networks is that the IoT may have insufficient memory to handle all transaction data within the IoT network. We aim in this paper to propose a potential compression method for reducing IoT network data traffic. Therefore, we investigate various lossless compression algorithms, such as entropy- or dictionary-based algorithms, and general compression methods to determine which algorithm or method adheres to the IoT specifications. Furthermore, this study conducts compression experiments using entropy coders (Huffman, adaptive Huffman) and dictionary coders (LZ77, LZ78) on five different types of IoT data-traffic datasets. Though the above algorithms can alleviate the IoT data traffic, adaptive Huffman gave the best compression. Therefore, in this paper, we aim to propose a conceptual compression method for IoT data traffic by improving an adaptive Huffman…
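The kind of comparison described above can be reproduced in spirit with Python's standard-library codecs. The sketch below is not the paper's experiment: it uses zlib, bz2 and lzma (DEFLATE-, BWT- and LZMA-based codecs) rather than plain Huffman, adaptive Huffman, LZ77 and LZ78, and a made-up JSON payload rather than the paper's IoT datasets. It only shows how compression ratios for small sensor-style records can be measured.

```python
import bz2
import json
import lzma
import random
import zlib

# Hypothetical IoT-style payload: a batch of JSON sensor readings.
random.seed(1)
readings = [
    {"node": f"n{idx % 50:03d}",
     "temp": round(20 + random.gauss(0, 2), 2),
     "hum": round(55 + random.gauss(0, 5), 1)}
    for idx in range(2_000)
]
raw = json.dumps(readings).encode("utf-8")

codecs = {
    "zlib (DEFLATE)": lambda b: zlib.compress(b, 9),
    "bz2":            lambda b: bz2.compress(b, 9),
    "lzma":           lambda b: lzma.compress(b),
}

print(f"raw size: {len(raw)} bytes")
for name, compress in codecs.items():
    out = compress(raw)
    print(f"{name:15s} {len(out):7d} bytes  ratio {len(raw) / len(out):5.2f}x")
```

Batching many readings before compressing, as done here, is what makes entropy and dictionary coders effective; compressing each tiny record on its own can easily expand it, which is the "careless compression" risk such studies warn about.
-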
The Generalized Principle of the Golden Section and Its Applications in Mathematics, Science, and Engineering
Chaos, Solitons and Fractals 26 (2005) 263–289, www.elsevier.com/locate/chaos
A.P. Stakhov, International Club of the Golden Section, 6 McCreary Trail, Bolton, ON, Canada L7E 2C8. Accepted 14 January 2005.
Abstract: The "Dichotomy Principle" and the classical "Golden Section Principle" are two of the most important principles of Nature, Science and also Art. The Generalized Principle of the Golden Section that follows from studying the diagonal sums of the Pascal triangle is a sweeping generalization of these important principles. This underlies the foundation of "Harmony Mathematics", a new proposed mathematical direction. Harmony Mathematics includes a number of new mathematical theories: an algorithmic measurement theory, a new number theory, a new theory of hyperbolic functions based on Fibonacci and Lucas numbers, and a theory of the Fibonacci and "Golden" matrices. These mathematical theories are the source of many new ideas in mathematics, philosophy, botany and biology, electrical and computer science and engineering, communication systems, mathematical education, as well as theoretical physics and the physics of high-energy particles. © 2005 Elsevier Ltd. All rights reserved.
Algebra and Geometry have one and the same fate. The rather slow successes followed after the fast ones at the beginning. They left science at such step where it was still far from perfect. It happened, probably, because Mathematicians paid attention to the higher parts of the Analysis. They neglected the beginnings and did not wish to work on such field, which they finished with one time and left it behind.
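For reference, the relation behind the "diagonal sums of the Pascal triangle" remark above can be stated compactly. The equations below are the standard formulation of the Fibonacci p-numbers and golden p-proportions used in Stakhov's work; they are recalled here as background and are not quoted from this excerpt.

```latex
% Fibonacci p-numbers from the p-th "shallow" diagonals of Pascal's triangle
F_p(n) = F_p(n-1) + F_p(n-p-1), \qquad F_p(1) = \dots = F_p(p+1) = 1
% Their limiting ratio, the golden p-proportion, is the positive root of
\tau_p^{\,p+1} = \tau_p^{\,p} + 1
% p = 1 recovers the classical golden section:
\tau_1^{2} = \tau_1 + 1 \;\Rightarrow\; \tau_1 = \tfrac{1+\sqrt{5}}{2} \approx 1.618
```

In this formulation, p = 0 gives the dichotomy principle (the ratio 2, i.e. powers of two) and p = 1 gives the classical golden section, which is exactly the sense in which the generalized principle bridges the two.
-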
Evaluation of Compression for Energy-aware Communication in Wireless Networks
HELSINKI UNIVERSITY OF TECHNOLOGY, Faculty of Electronics, Communications, and Automation, Department of Communications and Networking.
Le Wang: Evaluation of Compression for Energy-aware Communication in Wireless Networks. Master's Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Technology. Espoo, May 11, 2009. Supervisor: Professor Jukka Manner. Instructor: Sebastian Siikavirta.
ABSTRACT OF MASTER'S THESIS. Author: Le Wang. Title: Evaluation of Compression for Energy-aware Communication in Wireless Networks. Number of pages: 75. Date: 11th May 2009. Faculty: Faculty of Electronics, Communications, and Automation. Department: Department of Communications and Networks. Code: S-38. Supervisor: Professor Jukka Manner. Instructor: Sebastian Siikavirta.
Abstract: In accordance with the development of ICT-based communication, energy-efficient communication in wireless networks is required for reducing energy consumption, cutting down greenhouse emissions and improving business competitiveness. Due to the significant energy consumption of transmitting data over wireless networks, data compression techniques can be used to trade the overhead of compression/decompression for less communication energy. Careless and blind compression in wireless networks not only causes an expansion of file sizes, but also wastes energy. This study aims to investigate the use of data compression to reduce the energy consumption of a hand-held device. Experiments are conducted to explore the impact of transmission on energy consumption over wireless interfaces. Then, 9 lossless compression algorithms are examined on popular Internet traffic in terms of compression ratio, speed and consumed energy. Additionally, the energy consumption of uplink, downlink and the overall system is investigated to achieve a comprehensive understanding of compression in wireless networks.
-
Survey on Inverted Index Compression Over Structured Data
International Journal of Advanced Research in Computer Science, Volume 6, No. 3, May 2015 (Special Issue), ISSN No. 0976-5697. Research paper, available online at www.ijarcs.info
B. Usharani and M. Tanooj Kumar, Dept. of Computer Science and Engineering, Andhra Loyola Institute of Engineering and Technology, India. E-mail: [email protected], [email protected]
Abstract: A user can retrieve information by providing a few keywords to a search engine. In keyword search engines, the query is specified in textual form. Keyword search allows casual users to access database information. In keyword search, the system has to provide a search over billions of documents stored on millions of computers. The index stores a summary of the information and guides the user to search for more detailed information. A major concept in information retrieval (IR) is the inverted index. The inverted index is one of the key design factors of index data structures; it is used to access documents according to the keyword search. Since an inverted index is normally large, many compression techniques have been proposed to minimize its storage space. In this paper we propose the Huffman coding technique to compress the inverted index. Experiments on the performance of inverted index compression using Huffman coding show that this technique requires minimal storage space, increases keyword search performance and reduces the time to evaluate the query.
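A minimal sketch of the idea in this paper, under stated assumptions: the toy documents, the gap-encoding step and the 32-bit fixed-width baseline are illustrative choices, and the Huffman coder is a generic greedy construction rather than the authors' implementation. It builds an inverted index, gap-encodes each posting list, and Huffman-codes the gap values.

```python
import heapq
from collections import Counter, defaultdict

docs = {
    1: "huffman coding compresses the inverted index",
    2: "the inverted index maps keywords to documents",
    3: "keyword search uses the index to find documents",
}

# 1) Build the inverted index: term -> sorted list of document ids.
index = defaultdict(list)
for doc_id, text in docs.items():
    for term in set(text.split()):
        index[term].append(doc_id)

# 2) Gap-encode each posting list (store differences between successive doc ids).
gaps = []
for postings in index.values():
    postings.sort()
    gaps.extend([postings[0]] + [b - a for a, b in zip(postings, postings[1:])])

# 3) Huffman codeword lengths for the gap values (greedy merge of lightest subtrees).
def huffman_lengths(symbols):
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

lengths = huffman_lengths(gaps)
coded_bits = sum(lengths[g] for g in gaps)
baseline_bits = 32 * len(gaps)            # naive fixed-width 32-bit integers
print(f"{len(gaps)} gaps: {coded_bits} bits Huffman-coded vs {baseline_bits} bits fixed-width")
```

Gap encoding skews the value distribution toward small integers, which is precisely what makes a Huffman (or similar entropy) code effective on posting lists.
-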
Image Compression Using Extended Golomb Code Vector Quantization and Block Truncation Coding
International Journal of Innovative Research in Science, Engineering and Technology (IJIRSET), ISSN (Online) 2319-8753, ISSN (Print) 2347-6710, Vol. 8, Issue 11, November 2019, www.ijirset.com
Vinay Kumar (M. Tech. Scholar) and Prof. Neha Malviya (Assistant Professor), Department of Electronics and Communication, Swami Vivekanand College of Science & Technology, Bhopal, India.
ABSTRACT: Text and image data are important elements for information processing in almost all computer applications. Uncompressed image or text data require high transmission bandwidth and significant storage capacity. Designing an efficient compression scheme has become more critical with the recent growth of computer applications. An extended Golomb code for integer representation is proposed and used as part of a Burrows-Wheeler compression algorithm to compress text data. The other four schemes are designed for image compression. Out of the four, three schemes are based on the Vector Quantization (VQ) method and the fourth scheme is based on Absolute Moment Block Truncation Coding (AMBTC). The proposed scheme is block prediction-residual block coding AMBTC (BP-RBCAMBTC). In this scheme an image is divided into non-overlapping image blocks of small size (4x4 pixels each). Each image block is encoded using neighboring image blocks which have high correlation among them. BP-RBCAMBTC obtains better image quality and a lower bit rate than a recently proposed method based on BTC.
KEYWORDS: Block Truncation Code (BTC), Vector Quantization, Golomb Code…
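As background for the scheme above, the sketch below shows standard AMBTC (Lema-Mitchell absolute moment block truncation coding) on a single 4x4 block: the block is reduced to a bitmap plus two reconstruction levels. It is not the proposed BP-RBCAMBTC and omits the block-prediction, extended Golomb and vector-quantization stages; the pixel values are made up and the 8-bit storage accounting is an assumption.

```python
def ambtc_encode_block(block):
    """Encode one flat list of 16 pixel values as (low, high, bitmap) per standard AMBTC."""
    mean = sum(block) / len(block)
    hi_pixels = [p for p in block if p >= mean]
    lo_pixels = [p for p in block if p < mean]
    high = round(sum(hi_pixels) / len(hi_pixels))
    low = round(sum(lo_pixels) / len(lo_pixels)) if lo_pixels else high
    bitmap = [1 if p >= mean else 0 for p in block]
    return low, high, bitmap

def ambtc_decode_block(low, high, bitmap):
    """Rebuild the block: each pixel becomes the high or low level chosen by its bit."""
    return [high if b else low for b in bitmap]

block = [12, 14, 200, 210, 13, 15, 205, 198,
         11, 16, 202, 199, 12, 14, 201, 203]
low, high, bitmap = ambtc_encode_block(block)
print(low, high, bitmap)
print(ambtc_decode_block(low, high, bitmap))
# 16 pixels * 8 bits = 128 bits become 8 + 8 + 16 = 32 bits, i.e. 2 bits per pixel.
```

Schemes like the one in the paper then spend further effort on the (low, high) pairs and bitmaps, e.g. predicting them from neighboring blocks and entropy-coding the residuals, to push the rate below the basic 2 bits per pixel.
-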
Lelewer and Hirschberg, "Data Compression"
Data Compression
DEBRA A. LELEWER and DANIEL S. HIRSCHBERG, Department of Information and Computer Science, University of California, Irvine, California 92717
This paper surveys a variety of data compression methods spanning almost 40 years of research, from the work of Shannon, Fano, and Huffman in the late 1940s to a technique developed in 1986. The aim of data compression is to reduce redundancy in stored or communicated data, thus increasing effective data density. Data compression has important application in the areas of file storage and distributed systems. Concepts from information theory as they relate to the goals and evaluation of data compression methods are discussed briefly. A framework for evaluation and comparison of methods is constructed and applied to the algorithms presented. Comparisons of both theoretical and empirical natures are reported, and possibilities for future research are suggested.
Categories and Subject Descriptors: E.4 [Data]: Coding and Information Theory - data compaction and compression
General Terms: Algorithms, Theory
Additional Key Words and Phrases: Adaptive coding, adaptive Huffman codes, coding, coding theory, file compression, Huffman codes, minimum-redundancy codes, optimal codes, prefix codes, text compression
INTRODUCTION
Data compression is often referred to as coding, where coding is a general term encompassing any special representation of data that satisfies a given need. Information theory is defined as the study of efficient coding and its consequences in the form of speed of transmission and …
… information but whose length is as small as possible. Data compression has important application in the areas of data transmission and data storage. Many data processing applications require storage of large volumes of data, and the number of such applications is constantly increasing as the use of computers extends to new disciplines.