Nonuniform Probability Modulation for Reducing Energy Consumption of Remote Sensors


Jarek Duda
Jagiellonian University, Golebia 24, 31-007 Krakow, Poland. Email: [email protected]

Abstract—One of the main goals of 5G wireless telecommunication technology is improving energy efficiency, especially of remote sensors, which should be able, for example, to transmit on average 1 bit/s for 10 years from a single AAA battery. We discuss modulation with a nonuniform probability distribution of symbols as a way to improve the energy efficiency of transmission at the cost of reduced throughput. While the zero-signal (silence) costs no energy to emit, it can carry information if used alongside other symbols. If it is used more frequently than the others, for example for the majority of time slots or OFDM subcarriers, the number of bits transmitted per energy unit can be increased significantly. For example, for hexagonal modulation and zero noise, the number of bits per energy unit can be doubled by reducing throughput 2.7 times, thanks to using the zero-signal with probability ≈ 0.84. We discuss models and methods for such nonuniform probability modulation (NPM).

The symbol sequence is commonly used as coefficients of an orthogonal family of functions (subcarriers), for example in OFDM. These constellations are often QAM lattices of size up to 64 in LTE, with uniform probability modulation (UPM) assumed: every symbol is used with the same frequency. Generally, a stream of symbols with probability distribution p_s (∑_s p_s = 1) asymptotically contains Shannon entropy h = ∑_{s=1}^m p_s lg(1/p_s) bits/symbol (lg ≡ log₂), where m is the size of the alphabet. Entropy is indeed maximized by the uniform distribution p_s = 1/m, giving h = lg(m) bits/symbol, which is lg(64) = 6 bits/symbol for QAM-64.

However, this natural choice of uniform probability distribution is not always the optimal one. For
example, when the channel has constraints, like forbidding two successive ones ('11') as in Fibonacci coding, choosing Pr(x_{t+1} = 0 | x_t = 0) = Pr(x_{t+1} = 1 | x_t = 0) = 1/2 is not optimal. Instead, we should choose the '0' symbol more often, optimally with probability φ = (√5 − 1)/2, as this symbol allows producing more entropy (information) in the successive step. For general constraints the optimal probabilities can be found using Maximal Entropy Random Walk [1].

Another example of the usefulness of a nonuniform probability distribution of symbols are various steganography and watermarking problems, where we want the encoded sequence to resemble some common data, for example a picture resembling a QR code. Surprisingly, a generalization of the Kuznetsov-Tsybakov problem allows such encoding without the decoder knowing the probability distribution used (e.g. the picture to resemble) ([2], [3]). However, this lack of knowledge makes encoding more expensive.

arXiv:1608.04271v1 [cs.IT] 15 Aug 2016

I. INTRODUCTION

The currently developed 5th generation mobile network (5G) has many ambitious goals, like 10 Gbps peak data rates, 1 ms latency and ultra-reliability. Another high priority is reducing energy consumption, especially to improve the battery life of mobile and IoT devices. This goal is crucial for the expected omnipresent fleet of remote sensors monitoring all aspects of our world. Such a sensor should be compact and inexpensive, and have a battery life on the order of 10 years, as battery replacement is economically infeasible in many applications. Hence this is an asymmetric task: the main priority is to reduce the energy requirements of the sender, tolerating increased cost at the receiver side. A crucial part of the cost of sending information to the base station is establishing connections - their number, and so their energy cost, can be reduced by buffering information, or nearly eliminated if the sensor transmits only in time periods precisely scheduled with the base station.
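As a numerical sanity check of the '11'-forbidding example above (a sketch, not taken from the paper): for a binary Markov source with p = Pr(x_{t+1} = 0 | x_t = 0), a '1' must always be followed by '0', so the stationary probability of state 0 is 1/(2 − p) and the entropy rate is H(p)/(2 − p) bits/symbol, where H is the binary entropy. Scanning p confirms the maximum at φ = (√5 − 1)/2, where the rate equals lg((1 + √5)/2) ≈ 0.694 bits/symbol, the capacity of this constrained channel:

```python
import math

def entropy_rate(p):
    """Bits/symbol of the binary Markov source forbidding '11':
    from state 0 emit '0' w.p. p or '1' w.p. 1-p; state 1 must emit '0'."""
    H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    pi0 = 1.0 / (2.0 - p)        # stationary probability of state 0
    return pi0 * H               # state 1 contributes no entropy (forced '0')

# scan p = Pr(0|0) on a fine grid for the maximal entropy rate
best_p = max((i / 10000 for i in range(1, 10000)), key=entropy_rate)
phi = (math.sqrt(5) - 1) / 2                   # ≈ 0.618
capacity = math.log2((1 + math.sqrt(5)) / 2)   # ≈ 0.694 bits/symbol
print(best_p, entropy_rate(best_p))
```

The maximizer agrees with the MERW prediction Pr(0|0) = φ to within the grid resolution.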
We will discuss an approach to reducing the energy needed for the actual transmission of such buffered data, preferably compressed earlier to reduce its size. The information is encoded in a sequence of symbols: points from a chosen constellation, a discrete set of points in the complex (I-Q) plane. This symbol sequence can be used as a time sequence of impulses, or as coefficients of orthogonal subcarriers.

In this paper we focus on a more basic reason to use a nonuniform probability distribution of symbols: the cost of using various symbols does not have to be the same. If E_s is the cost of using symbol s, then the entropy for a fixed average energy E = ∑_s p_s E_s is maximized by the Boltzmann probability distribution of symbols, Pr(s) ∝ e^{−βE_s}. For example, in Morse code a dash lasts much longer than a dot, which comes with a higher time and energy cost; a coding with more frequent use of dots (Pr(dot) > Pr(dash)) would lower the average cost per bit. Another example of nonuniform cost is sending the symbol '1' as electric current through a wire and the symbol '0' as the lack of this current - '1' is more energy costly, hence it should be used less frequently.

II. CAPACITY AND ENERGY EFFICIENCY OF NONUNIFORM PROBABILITY MODULATION (NPM)

In this section we first recall why the Boltzmann distribution is the optimal choice from the perspective of energy efficiency; then three modulations are analyzed, first without and then with noise. For better intuition, Shannon entropy is measured in bits: h = ∑_{s=1}^m p_s lg(1/p_s) bits/symbol (lg ≡ log₂).

A. Probability distribution maximizing entropy

Assume E_s is the cost (energy) of using symbol s. In wireless modulation this cost is usually assumed to be proportional to the square
of its amplitude: E_x ∝ |x|², hence we could improve energy efficiency by more frequent use of low-amplitude symbols.

Basic theoretical considerations will be recalled first, then used to analyze potential improvements for wireless modulation: reducing the required energy per bit, in particular to improve the battery life of remote sensors. The average number of bits/symbol (entropy) is maximized by the uniform probability distribution (UPM), hence using a nonuniform distribution (NPM) means that more symbols are required to write the same message; the price of improved energy efficiency (bits per energy unit) is reduced throughput (bits per symbol).

A nonuniform probability distribution of symbols also requires a more complex coding scheme, especially from the perspective of error correction (channel coding). Entropy coders handle a kind of reversed task: encoding a sequence of symbols with some assumed probability distribution into a bit sequence. Switching encoder and decoder, we can encode a message (a bit sequence) into a sequence of symbols with some chosen probability distribution. Due to its low cost, a natural approach would be a prefix code, for example 0 → a, 10 → b, 11 → c.

We want to choose the optimal probability distribution {p_s}_s for a fixed average energy E:

∑_s p_s E_s = E,   ∑_s p_s = 1   (1)

such that the Shannon entropy is maximized: h ln(2) = −∑_s p_s ln(p_s). Using the Lagrange multiplier method with parameters λ and β:

L = −∑_s p_s ln(p_s) + λ(∑_s p_s − 1) − β(∑_s p_s E_s − E)

0 = ∂L/∂p_s = −ln(p_s) − 1 + λ − βE_s

p_s = e^{−βE_s} / e^{1−λ} = e^{−βE_s} / Z   (2)

where Z = e^{1−λ} = ∑_s e^{−βE_s} is the normalization factor (called the partition function). The parameter β can be determined from the average energy:

E = ∑_s E_s e^{−βE_s} / ∑_s e^{−βE_s}

As expected, the Boltzmann distribution is the optimal way to choose the probability distribution of symbols: p_s ∝ e^{−βE_s}.
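A small numeric check of (2) (an illustrative sketch; the three symbol energies are made up): starting from the Boltzmann distribution, any perturbation that preserves both constraints of (1), i.e. moves along a direction v with ∑_s v_s = 0 and ∑_s v_s E_s = 0, can only decrease the entropy:

```python
import math

def entropy_bits(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

energies = [0.0, 1.0, 2.0]   # made-up symbol energies E_s
beta = 1.0
w = [math.exp(-beta * e) for e in energies]
Z = sum(w)                   # partition function
p = [x / Z for x in w]       # Boltzmann: p_s = exp(-beta*E_s) / Z

# direction v keeps both constraints: sum(v) = 0 and sum(v_s * E_s) = 0
v = [1.0, -2.0, 1.0]
for t in (0.01, -0.01):
    q = [pi + t * vi for pi, vi in zip(p, v)]
    q_energy = sum(qi * e for qi, e in zip(q, energies))
    p_energy = sum(pi * e for pi, e in zip(p, energies))
    assert abs(sum(q) - 1.0) < 1e-12       # still a distribution
    assert abs(q_energy - p_energy) < 1e-12  # same average energy
    assert entropy_bits(q) < entropy_bits(p)  # Boltzmann maximizes entropy
print(p, entropy_bits(p))
```

The check passes for perturbations in both directions, as the maximum is interior and strict.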
However, a prefix code approximates probabilities by powers of 1/2 and cannot use probabilities 1/2 < p < 1, which turn out to be crucial in the discussed situations. Additionally, its error correction would require an additional protection layer. Hence a more appropriate recent entropy coding will be discussed for this purpose: tANS coding ([4], [5]). While having a cost similar to prefix codes (a finite state automaton, no multiplication), it operates on nearly accurate probabilities, including 1/2 < p < 1. Additionally, its processing has an internal state, which can be exploited like the state of convolutional codes [6] for error correction - thanks to it, the encoder does not need to apply another coding layer, saving the energy required for this purpose.

The standard way of evaluating the cost of a signal in wireless telecommunication is the square of its amplitude: E_s = |x|². Hence for x ∈ ℝ the optimal probability distribution is Gaussian, with variance σ² = E:

ρ_G(x) = (1/√(2πE)) e^{−x²/(2E)}

H_G := −∫_{−∞}^{∞} ρ_G(x) lg(ρ_G(x)) dx = (1/2) lg(2πeE)   (3)

Let us compare it with the uniform distribution, which is usually used in practical modulation schemes. Take a rectangular density on some range [−a, a], with height 1/(2a) so that it integrates to 1. Its average energy is E = ∫_{−a}^{a} x²/(2a) dx = a²/3, giving a = √(3E) for a chosen average energy E. Now

H_u := ∫_{−a}^{a} (1/(2a)) lg(2a) dx = lg(2a) = (1/2) lg(12E)

So the gain from using the Gaussian distribution is

H_G − H_u = (1/2) lg(πe/6) ≈ 0.2546 bits.   (4)

We have used differential entropy here (with integrals), which gets a natural intuition when approximated by a Riemann sum for some quantization step q:

H = −∫_{−∞}^{∞} ρ(x) lg(ρ(x)) dx ≈ −∑_{k∈ℤ} q ρ(kq) lg(ρ(kq)) = −∑_{k∈ℤ} q ρ(kq) lg(q ρ(kq)) + ∑_{k∈ℤ} q ρ(kq) lg(q)

The first term is the standard Shannon entropy of the quantized distribution {q ρ(kq)}_k with step q; since ∑_k q ρ(kq) ≈ 1, the second term is approximately lg(q) = −lg(1/q). Hence the entropy of the quantized variable is approximately H + lg(1/q).
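The ≈ 0.2546-bit gap of (4) is independent of E. A quick numeric check (a sketch; E = 2.5 is an arbitrary choice), including a Riemann-sum approximation of the differential entropy (3):

```python
import math

E = 2.5   # arbitrary average energy; the entropy gap does not depend on it
H_G = 0.5 * math.log2(2 * math.pi * math.e * E)  # Gaussian entropy, eq. (3)
H_u = 0.5 * math.log2(12 * E)                    # uniform on [-a, a], a = sqrt(3E)
gap = H_G - H_u
print(gap)   # ≈ 0.2546 bits, i.e. (1/2) lg(pi*e/6)

# cross-check (3) by a Riemann sum with quantization step q
q, s, x = 1e-3, 0.0, -60.0
while x < 60.0:
    rho = math.exp(-x * x / (2 * E)) / math.sqrt(2 * math.pi * E)
    if rho > 0.0:
        s -= q * rho * math.log2(rho)
    x += q
print(s)     # ≈ H_G
```

The Riemann sum converges to the closed form as q → 0, matching the quantization intuition above.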