Experiment 7 IMAGE COMPRESSION


I Introduction

A digital image obtained by sampling and quantizing a continuous tone picture requires an enormous amount of storage. For instance, a 24 bit color image with 512x512 pixels occupies 768 Kbytes on disk, and a picture twice this size will not fit on a single floppy disk. Transmitting such an image over a 28.8 Kbps modem would take almost 4 minutes. The purpose of image compression is to reduce the amount of data required to represent sampled digital images and therefore to reduce the cost of storage and transmission. Image compression plays a key role in many important applications, including image databases, image communications, remote sensing (the use of satellite imagery for weather and other earth-resource applications), document and medical imaging, facsimile transmission (FAX), and the control of remotely piloted vehicles in military, space, and hazardous waste control applications. In short, an ever-expanding number of applications depend on the efficient manipulation, storage, and transmission of binary, gray-scale, or color images.

An important development in image compression is the establishment of the JPEG standard for compression of color pictures. Using the JPEG method, a 24 bit/pixel color image can be reduced to between 1 and 2 bits/pixel without obvious visual artifacts. Such reduction makes it possible to store and transmit digital imagery at reasonable cost. It also makes it possible to download a color photograph almost instantly, making electronic publishing/advertising on the Web a reality. Prior to this, the G3 and G4 standards were developed for compression of facsimile documents, reducing the time for transmitting one page of text from about 6 minutes to 1 minute.

In this experiment, we introduce the basics of image compression, covering both binary images and continuous tone images (gray-scale and color). Video compression will be covered in the next experiment.

II Theories and Techniques for Image Compression

In general, coding methods can be classified as lossless or lossy. With lossless coding, the original sample values are retained exactly, and compression is achieved by exploiting the statistical redundancies in the signal. With lossy coding, the original signal is altered to some extent to achieve a higher compression ratio.

II.1 Lossless Coding

II.1.1 Variable Length Coding [1, Chapter 6.4]

In variable length coding (VLC), the more probable symbols are represented with fewer bits (using shorter codewords). Shannon's first theorem [3] states that the average length per symbol, l, is bounded by the entropy of the source, H, i.e.,

    H = -\sum_n p_n \log_2 p_n \;\le\; l = \sum_n p_n l_n \;\le\; -\sum_n p_n (\log_2 p_n - 1) = H + 1        (10.1)

where p_n is the probability of the n-th symbol, H is the entropy of the source (which represents the average information), l_n is the length of the codeword for symbol n, and l is the average codeword length.

II.1.2 Huffman Coding

The Shannon theorem only gives the bound, not an actual way of constructing a code that achieves it. One way to accomplish the latter task is the method known as Huffman coding.

Example: Consider an image that is quantized to 4 levels: 0, 1, 2, and 3. Suppose the probabilities of these levels are 1/49, 4/49, 36/49, and 8/49, respectively. The design of a Huffman code is illustrated in Figure 1.

    Symbol   Prob    Codeword   Length
    "2"      36/49   "1"        1
    "3"      8/49    "01"       2
    "1"      4/49    "001"      3
    "0"      1/49    "000"      3

    Figure 1 An Example of Huffman Coding (at each step the two least probable nodes are merged and their branches labeled 1 and 0: 1/49 + 4/49 = 5/49, 5/49 + 8/49 = 13/49, 13/49 + 36/49 = 1)

In this example, we have

    Average length  l = 1 \cdot \frac{36}{49} + 2 \cdot \frac{8}{49} + 3 \cdot \left( \frac{4}{49} + \frac{1}{49} \right) = \frac{67}{49} \approx 1.4

    Entropy of the source  H = -\sum_k p_k \log_2 p_k \approx 1.16

    \therefore\; H < l < H + 1
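As a concrete check of this example (a minimal sketch, not part of the original handout; the function name build_huffman and the use of Python's heapq module are our own choices), the following reproduces the code of Figure 1 and verifies the bound in Eq. (10.1):

```python
import heapq
import math
from fractions import Fraction

def build_huffman(probs):
    """Build a binary Huffman code for a dict {symbol: probability}.
    Returns a dict {symbol: codeword string}."""
    # Each heap entry: (subtree probability, unique tie-breaker, {symbol: partial codeword})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)                 # two least probable subtrees
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}    # prepend one bit at each merge
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next_id, merged))
        next_id += 1
    return heap[0][2]

# The 4-level source of Figure 1
probs = {0: Fraction(1, 49), 1: Fraction(4, 49),
         2: Fraction(36, 49), 3: Fraction(8, 49)}
code = build_huffman(probs)

avg_len = sum(probs[s] * len(code[s]) for s in probs)                    # 67/49
entropy = -sum(float(p) * math.log2(float(p)) for p in probs.values())   # about 1.16

print(code)                      # codeword lengths 1, 2, 3, 3, as in Figure 1
print(float(avg_len), entropy)   # about 1.37 and 1.16, so H <= l < H + 1
```

The printed average length, 67/49, and entropy, about 1.16, match the computation above.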
II.1.3 Other Variable Length Coding Methods

LZW coding (Lempel, Ziv, and Welch) [2] is the algorithm used in several public domain programs for lossless data compression, such as gzip (UNIX) and pkzip (DOS). One of the most famous graphics file formats, GIF, also incorporates the LZW coding scheme. Another method, known as arithmetic coding [2], is more powerful than both Huffman coding and LZW coding, but it also requires more computation.

II.1.4 Runlength Coding (RLC) of Bilevel Images [1, Chapter 6.6]

In one dimensional runlength coding of bilevel images, one scans the pixels from left to right along each scan line. Assuming that a line always starts and ends with white pixels, one counts the number of consecutive white pixels and the number of consecutive black pixels (each count is referred to as a runlength) alternately. The last run of white pixels is replaced with a special symbol "EOL" (end of line). The runlengths of the white and black runs are coded using separate codebooks. The codebook for, say, the white runlengths is designed using the Huffman coding method by treating each possible runlength (including EOL) as a symbol. An example of runlength coding is illustrated in Fig. 2.

    Fig. 2 An example of runlength coding. (The figure shows a small bilevel image; each scan line is represented by its alternating white and black runlengths terminated by EOL, e.g. 2 4 2 1 5 4 2 4 2 4 EOL for one line and 2 1 5 1 5 1 5 1 2 1 2 1 EOL for another.)
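To make the 1-D scheme concrete, here is a minimal Python sketch (not part of the handout; the function name, the EOL string, and the use of 0 for white and 1 for black are assumptions) that produces the alternating runlengths for one scan line:

```python
def runlength_encode_line(line, eol="EOL"):
    """1-D runlength coding of one scan line of a bilevel image.

    line: sequence of 0 (white) and 1 (black) pixels.
    Returns alternating white/black run counts, starting with a white run,
    with the final white run replaced by the EOL marker.
    """
    runs = []
    current, count = 0, 0          # start by counting a white run
    for pixel in line:
        if pixel == current:
            count += 1
        else:
            runs.append(count)
            current, count = pixel, 1
    if current == 0:
        runs.append(eol)           # trailing white run becomes EOL
    else:
        runs.append(count)         # close the last black run, then EOL
        runs.append(eol)
    return runs

# A scan line whose runs match the first runlength sequence listed under Fig. 2.
line = ([0, 0] + [1, 1, 1, 1] + [0, 0] + [1] + [0, 0, 0, 0, 0] + [1, 1, 1, 1] +
        [0, 0] + [1, 1, 1, 1] + [0, 0] + [1, 1, 1, 1] + [0, 0])
print(runlength_encode_line(line))   # [2, 4, 2, 1, 5, 4, 2, 4, 2, 4, 'EOL']
```

In a complete coder, each white runlength and each black runlength would then be mapped to its Huffman codeword from the corresponding codebook.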
II.1.5 Two Dimensional Runlength Coding [1, Chapter 6.6]

The one dimensional runlength coding method only exploits the correlation among pixels within the same line. In two dimensional runlength coding, or relative address coding, the correlation among pixels in the current line as well as the previous line is exploited. With this method, when a transition in color occurs, the distance from this pixel to the closest transition pixel (both before and after this pixel) in the previous line, as well as to the last transition pixel in the same line, is calculated, and the shortest of these distances is coded, along with an index indicating which type of distance it is. See Fig. 6.17 in [1].

II.1.6 CCITT Group 3 and Group 4 Facsimile Coding Standards - The READ Code [1, Chapter 6.6]

In the Group 3 method, the first line in every K lines is coded using 1-D runlength coding, and the following (K-1) lines are coded using a 2-D runlength coding method known as Relative Element Address Designate (READ). For details of this method and the actual code tables, see [1], Sec. 6.6.1.

The reason that 1-D RLC is used for every K-th line is to suppress the propagation of transmission errors: if the READ method were used continuously, a single bit error occurring during transmission would affect the entire page. The Group 4 method is designed for more secure transmission media, such as leased data lines where the bit error rate is very low. The algorithm is basically a streamlined version of the Group 3 method, with the 1-D RLC eliminated.

II.1.7 Lossless Predictive Coding

Motivation: the value of the current pixel usually does not change rapidly from those of adjacent pixels. Thus it can be predicted quite accurately from the previous samples. The prediction error has a non-uniform distribution, concentrated mainly near zero, and therefore a lower entropy than the original samples, which usually have a roughly uniform distribution. For details see [2], Sec. 9.4. With entropy coding (e.g. Huffman coding), the error values can be specified with fewer bits than are required for the original sample values.

II.2 Transform Coding (Lossy Coding) [1, Chapter 6.5]

Lossless coding can achieve a compression ratio of about 2 to 3 for most images. To further reduce the data amount, lossy coding methods apply quantization to the original samples or to the parameters of some transformation of the original signal (e.g. prediction or a unitary transform). The transformation exploits the statistical correlation among the original samples; popular methods include linear prediction and unitary transforms. We discussed linear predictive coding and its application in speech and audio coding in the previous experiment, where you also learned about and experimented with uniform and non-uniform quantization. In this section, we focus on transform coding, which is more effective for images.

One of the most popular lossy coding schemes for images is transform coding. In block-based transform coding, one divides an image into non-overlapping blocks. For each block, one first transforms the original pixel values into a set of transform coefficients using a unitary transform. The transform coefficients are then quantized and coded. In the decoder, one reconstructs the block from the quantized coefficients through an inverse transform. The transform is designed to compact the energy of the original signal into only a few coefficients and to reduce the correlation among the variables to be coded; both effects contribute to the reduction of the bit rate.

II.2.1 The Discrete Cosine Transform (DCT)

The DCT is popular for image signals because it matches well with the statistics of common image signals. The basis vectors of the one dimensional N-point DCT are defined by

    h_k(n) = \alpha(k) \cos\!\left( \frac{(2n+1) k \pi}{2N} \right), \qquad \alpha(k) = \begin{cases} \sqrt{1/N}, & k = 0 \\ \sqrt{2/N}, & k = 1, 2, \ldots, N-1 \end{cases}

for n = 0, 1, ..., N-1.
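To illustrate this definition and the energy-compaction property described above, the following sketch (our own illustration, not code supplied with the lab; the 8x8 test block is made up) builds the N-point DCT matrix from h_k(n) and applies the separable 2-D transform to a smooth block:

```python
import numpy as np

def dct_matrix(N):
    """N-point DCT basis: row k holds h_k(n) = alpha(k) * cos((2n+1) k pi / (2N))."""
    n = np.arange(N)
    D = np.zeros((N, N))
    for k in range(N):
        alpha = np.sqrt(1.0 / N) if k == 0 else np.sqrt(2.0 / N)
        D[k, :] = alpha * np.cos((2 * n + 1) * k * np.pi / (2 * N))
    return D

N = 8
D = dct_matrix(N)
print(np.allclose(D @ D.T, np.eye(N)))      # True: the basis is orthonormal (unitary transform)

# A smooth, slowly varying 8x8 block, typical of natural images.
x, y = np.meshgrid(np.arange(N), np.arange(N))
block = 128 + 10 * x + 5 * y

coeff = D @ block @ D.T                     # separable 2-D DCT of the block
energy = coeff ** 2
print(energy[:2, :2].sum() / energy.sum())  # close to 1: the energy is compacted into a few low-frequency coefficients
```

In a block-based transform coder such as JPEG, the coefficient array produced this way is then quantized and entropy coded, and the decoder applies the inverse transform D.T @ coeff_q @ D to reconstruct the block.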