An Efficient Coding Method for Teleconferencing Video and Confocal Microscopic Image Sequences


An Efficient Coding Method for Teleconferencing Video and Confocal Microscopic Image Sequences

Journal of Computing and Information Technology - CIT 16, 2008, 3, 145–156. doi:10.2498/cit.1000892

Vinay Arya¹, Ankush Mittal¹, Amit Pande² and R. C. Joshi¹
¹Dept. of Electronics and Computer Engineering, IIT Roorkee, Uttaranchal, India
²Dept. of Electrical and Computer Engineering, Iowa State University, Ames, USA

In this paper we propose a three-dimensional vector quantization-based video coding scheme. The algorithm uses a 3D vector quantization pyramidal codebook-based model with an adaptive pyramidal codebook for compression. The pyramidal codebook-based model helps in achieving high compression in the case of modest motion. The adaptive vector quantization algorithm is used to train the codebook for optimal performance over time. Some of the distinguishing features of our algorithm are its excellent performance, due to its adaptive behavior to the video composition, and its excellent compression, due to the codebook approach. We also propose an efficient codebook-based post-processing technique which enables the vector quantizer to possess a higher correlation preservation property. Based on the special pattern of the codebook imposed by the post-processing technique, a window-based fast search (WBFS) algorithm is proposed. The WBFS algorithm not only accelerates the vector quantization processing, but also results in better rate-distortion performance. The proposed approach can be used both for teleconferencing videos and to compress images obtained from confocal laser scanning microscopy (CLSM). The results show that the proposed method gave higher subjective and objective image quality of reconstructed images at a better compression ratio, and presented more acceptable results when applying image processing filters such as edge detection on reconstructed images. The experimental results demonstrate that the proposed method outperforms the teleconferencing compression standard H.261 and LBG-based vector quantization techniques.

Keywords: confocal microscopic images, codebook, codebook-preprocessing, rate distortion, vector quantization, window-based fast search

1. Introduction

Vector quantization has been widely used in image and video compression because of its superior rate-distortion performance over traditional scalar quantization schemes and its simple decoder structure. This method of compression has gained interest over conventional means of compression (i.e., DPCM and transform coding [1]) because the conventional methods usually employ some form of scalar quantization. Scalar quantization is not optimal, as successive samples in a digital signal are usually correlated or dependent. A vector can be used to describe almost any type of pattern, such as the patterns that arise in digital images.

Different approaches may be considered for video compression. Since lossless compression approaches result in small compression ratios, we have considered lossy compression techniques. Several studies dealing with vector quantization-based lossy compression of images have been proposed and applied to different images and videos [2-8].

Some of the main approaches developed are discussed in this section. The localized model-based approach to teleconferencing using three-dimensional vector quantization [2] uses the LBG algorithm for creating a codebook. This codebook creation method does not vary with time. Thus the codebook needs to be trained for optimal performance with varying time, and hence varying input vector characteristics. To overcome this shortcoming, we have proposed an adaptive codebook approach.

Besides a good R-D tradeoff, a good encoder should also possess an acceptable level of complexity. The Predictive Vector Quantizer [3] and the Finite State Vector Quantizer [4] alleviate the computational burden of a vector quantizer with memory, but at the same time they utilize the correlation among neighboring vectors. In [5], the inter-index correlation is exploited to code the indices more efficiently. Nevertheless, the easiest way to exploit the inter-vector correlation is to use the DPCM technique in the index domain, i.e., to precede the entropy coding of indices with a DPCM stage. Motivated by these considerations, a simple but efficient post-processing technique for the adaptive codebook is proposed. The post-processing is done to reorder the codebook according to the potential of the code words. The reordering is done once for all and is applicable to all existing codebook design methods. The post-processed codebook possesses a special structure which in turn leads to decreased first-order differential entropy, i.e., increased correlation between successive indices. This is beneficial to the subsequent DPCM and entropy coding, and it improves the performance of the entire coding system. Another merit of the proposed post-processing technique is that it imposes a certain structure on the codebook. This structure can be utilized to accelerate the searching process, and thus it relieves the computational requirement of the encoder.

For microscopic medical images, Burt [6] and Adelson [7] introduced quantization into the image pyramid structure and proposed a multi-resolution lossy compression technique using a scalar quantizer. In 2003, Cockshott proposed a vector quantization-based 3D pyramid approach [8]. This method has 3D codebooks of size 2 × 2 × 4. By using multiple-level localized codebooks with variable size, it is possible to get high compression even if there is some motion in the microscopic image sequence.

Confocal Microscopic Image Sequence

Confocal laser scanning microscopy (CLSM), which is single-photon microscopy, has been available to biomedical scientists for almost 20 years [9]. However, it is only in very recent times that affordable computer power has enabled biologists to fully exploit the data contained within the large (>100 Mb) image volumes. CLSM is used to collect 3D volumetric data describing the cellular organization and receptor protein distribution through the vascular wall of small segments of human and animal arteries. Briefly, arterial segments are stained with fluorescent markers for the cell nucleus or beta-adrenergic receptors. Tissues are then slide-mounted on the stage of either a NORAN (nuclear work) or Leica (receptor work) CLSM. Serial optical sections (x, y plane) are collected at intervals of 1 μm down through the axial plane (z-axis) to produce a 'stack' of optical planes which can be processed as a 3D volume. The processing, analysis and transfer of the resulting data volumes are time-inefficient. A robust non-lossy or low-distortion lossy compression routine would be of great value for biomedical purposes, e.g. in studying vascular structure [10]. Furthermore, the emergence of multi-photon microscopy as a practical laboratory tool now enables even greater depth penetration within thick biological samples. This, coupled with studies involving multiple fluorophores imaged over a time period, results in the collection of data sets approaching 1 GB per experiment. Without efficient compression algorithms, management of such enormous amounts of data will be very difficult.

Three-dimensional image compression methods outperform their two-dimensional counterparts in the sense of higher rate-distortion performance for compressing volumetric image data. The state-of-the-art transform-based 3D compressors, such as 3D-SPIHT and 3D-DCT, are characterized by their rate control ability, where the qualities of the image are adjustable with respect to the rates, but are not explicitly controllable. In [8], the authors discuss the differences between 3D microscopy raw data and the movie sequence. In microscopy the data corresponds to tracer densities at sub-volumes in a 3D grid, with the size of the sub-volumes constrained by the microscopy optics. This contrasts with the data in a movie sequence, which has time-varying 3D generating surfaces. In [8], the authors hypothesize that the higher-order statistics of 3D microscopy data will differ from those of film, and that the optimal compression strategy will differ from that used in film and video applications.

In [8], the author presents the use of 3D pyramid data structures to compress microscopy data. In [11], the author presents a vector quantization-based method in an enhanced image pyramid with error feedback, so that the quality of the decompressed image depends only on the encoding of coefficients from the finest band. This achieves some effective reduction of artifacts.

In 3D microscopy, the raw data corresponds to tracer densities at sub-volumes in a 3D grid, with the size of the sub-volumes constrained by the microscopy optics. The data is typically digitized as a sequence of 2D images, i.e., a three-dimensional array with time, x and y axes as the three dimensions [8].

The paper is organized as follows. The vector quantization and adaptive codebook procedures are explained in Section 2. 3D vector quantization used for compression and encoding of teleconferencing videos and confocal microscopic image sequences is described in Section 3. An efficient codebook post-processing technique and a window-based fast search algorithm for vector quantization are explained in Section 4. The experimental results and analysis are given in Section 5.

Each cell has an index. The encoder gives an index to the given vector. On the other end, the decoder generates an output vector from a given index. The task of the encoder is to identify the location of the input vector in one of the N geometrically specified regions of k-space. Decoding is simply a table-lookup task, fully determined by specifying the codebook. A decoder does not need to know the geometry of the partition to perform its job. Vector quantization has been widely used in image-coding systems because of its superior rate-distortion (R-D) performance over traditional scalar quantization schemes and simple decoder structure.
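As a concrete illustration of these encoder and decoder roles, the sketch below (ours, not the paper's adaptive pyramidal coder) implements a plain full-search vector quantizer in Python and then differences the resulting indices, the index-domain DPCM step discussed in the introduction. All function names are our own.

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Full-search VQ: map each k-dim input vector to the index of the
    nearest code word (squared Euclidean distance)."""
    # (M, N) matrix of squared distances between M inputs and N code words
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def vq_decode(indices, codebook):
    """Decoding is a table lookup, fully determined by the codebook."""
    return codebook[indices]

def dpcm_indices(indices):
    """Difference successive indices; with a well-ordered codebook the
    residuals cluster near zero and entropy-code cheaply."""
    return np.diff(indices, prepend=indices[:1])

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 16))    # N = 256 code words, k = 16
blocks = rng.normal(size=(1000, 16))     # e.g. flattened image blocks
idx = vq_encode(blocks, codebook)
recon = vq_decode(idx, codebook)
print("MSE:", float(((blocks - recon) ** 2).mean()))
print("index residual range (unordered codebook):",
      dpcm_indices(idx).min(), dpcm_indices(idx).max())
```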
Recommended publications
  • Music 422 Project Report
    Music 422 Project Report Jatin Chowdhury, Arda Sahiner, and Abhipray Sahoo October 10, 2020 1 Introduction We present an audio coder that seeks to improve upon traditional coding methods by implementing block switching, spectral band replication, and gain-shape quantization. Block switching improves the coding of transient sounds. Spectral band replication exploits the low sensitivity of human hearing towards high frequencies. It fills in any missing coded high-frequency content with information from the low frequencies. We chose to do gain-shape quantization in order to explicitly code the energy of each band, preserving the perceptually important spectral envelope more accurately. 2 Implementation 2.1 Block Switching Block switching, as introduced by Edler [1], allows the audio coder to adaptively change the size of the block being coded. Modern audio coders typically encode frequency lines obtained using the Modified Discrete Cosine Transform (MDCT). Using longer blocks for the MDCT allows for better spectral resolution, while shorter blocks have better time resolution, due to the time-frequency trade-off known as the Fourier uncertainty principle [2]. Due to their poor time resolution, using long MDCT blocks for an audio coder can result in "pre-echo" when the coder encodes transient sounds. Block switching allows the coder to use long MDCT blocks for normal signals being encoded and use shorter blocks for transient parts of the signal. In our coder, we implement the block switching algorithm introduced in [1]. This algorithm consists of four types of block windows: long, short, start, and stop. Long and short block windows are simple sine windows, as defined in [3]:

        w[n] = sin((π/N)(n + 1/2))    (1)

    where N is the length of the block.
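As a quick sanity check on Eq. (1), the following Python sketch (our addition, not part of the project report) evaluates the sine window and verifies the Princen-Bradley condition w[n]² + w[n + N/2]² = 1, which MDCT windows must satisfy for perfect reconstruction.

```python
import numpy as np

N = 1024                                  # block length
n = np.arange(N)
w = np.sin((np.pi / N) * (n + 0.5))       # sine window, Eq. (1)

# Princen-Bradley condition for perfect MDCT reconstruction:
# the squares of the two overlapped window halves must sum to 1.
half = N // 2
pb = w[:half] ** 2 + w[half:] ** 2
assert np.allclose(pb, 1.0)
print("Princen-Bradley condition holds:", bool(np.allclose(pb, 1.0)))
```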
  • (A/V Codecs) REDCODE RAW (.R3D) ARRIRAW
    What is a Codec? Codec is a portmanteau of either "Compressor-Decompressor" or "Coder-Decoder," which describes a device or program capable of performing transformations on a data stream or signal. Codecs encode a stream or signal for transmission, storage or encryption and decode it for viewing or editing. Codecs are often used in videoconferencing and streaming media solutions. A video codec converts analog video signals from a video camera into digital signals for transmission. It then converts the digital signals back to analog for display. An audio codec converts analog audio signals from a microphone into digital signals for transmission. It then converts the digital signals back to analog for playing. The raw encoded form of audio and video data is often called essence, to distinguish it from the metadata information that together make up the information content of the stream and any "wrapper" data that is then added to aid access to or improve the robustness of the stream. Most codecs are lossy, in order to get a reasonably small file size. There are lossless codecs as well, but for most purposes the almost imperceptible increase in quality is not worth the considerable increase in data size. The main exception is if the data will undergo more processing in the future, in which case the repeated lossy encoding would damage the eventual quality too much. Many multimedia data streams need to contain both audio and video data, and often some form of metadata that permits synchronization of the audio and video. Each of these three streams may be handled by different programs, processes, or hardware; but for the multimedia data stream to be useful in stored or transmitted form, they must be encapsulated together in a container format.
  • Center for Signal Processing
    3. LOSSLESS/LOSSY COMPRESSION Overview of Lossless Compression: It is also known as: • Noiseless coding • Lossless coding • Invertible coding • Entropy coding • Data compaction codes. They can perfectly recover original data (if no storage or transmission bit errors, i.e., noiseless channel). 1. They normally have variable-length binary codewords to produce variable numbers of bits per symbol. 2. They only work for digital sources. 3. They can produce a superior rate/distortion (noise) trade-off. For instance, in image compression we can use more bits for edges, fewer for flat areas; similarly, more bits can be assigned to plosives, fewer to vowels, in speech compression. Properties of variable-rate (length) systems: • They can more efficiently trade off quality/distortion (noise) and rate. • They are generally more complex and costly to build. • They require buffering to match fixed-rate transmission system requirements. • They tend to suffer catastrophic transmission error propagation; in other words, once in error, the error grows quickly. General idea: Code highly probable symbols into short binary sequences, low-probability symbols into long binary sequences, so that the average is minimized. Most famous examples: • Morse code: Consider dots and dashes as binary levels "0" and "1". Assign codeword length inversely proportional to letter relative frequencies. In other words, the most frequent letter (message) is assigned the smallest number of bits, whereas the least likely message has the longest codeword, such as "e" and "z" in the English alphabet. The total number of bits per second would be much less than if we had used a fixed-rate ASCII representation for alphanumerical data. • Huffman code (1952), employed in the UNIX compact utility and many standards, known to be optimal under specific constraints.
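To make the "short codes for frequent symbols" idea concrete, here is a minimal Huffman construction in Python (our sketch, using only the standard library); it prints the codeword table for a toy message and compares the coded length against a fixed 8-bit representation.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman prefix code from {symbol: frequency}.
    Frequent symbols get short codewords, rare ones long codewords."""
    # Heap entries: (frequency, tie-breaker, {symbol: partial codeword})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # merge the two least likely subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

text = "this is an example of a huffman tree"
code = huffman_code(Counter(text))
for sym, word in sorted(code.items(), key=lambda kv: len(kv[1])):
    print(repr(sym), word)
print("coded length:", sum(len(code[s]) for s in text), "bits",
      "vs fixed 8-bit:", 8 * len(text), "bits")
```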
  • Coding and Compression
    Chapter 3 Multimedia Systems Technology: Coding and Compression 3.1 Introduction In multimedia system design, storage and transport of information play a significant role. Multimedia information is inherently voluminous and therefore requires very high storage capacity and very high bandwidth transmission capacity. For instance, the storage for a video frame with 640 × 480 pixel resolution is 7.3728 million bits, if we assume that 24 bits are used to encode the luminance and chrominance components of each pixel. Assuming a frame rate of 30 frames per second, the entire 7.3728 million bits should be transferred in 33.3 milliseconds, which is equivalent to a bandwidth of 221.184 million bits per second. That is, the transport of such a large number of bits of information in such a short time requires high bandwidth. There are two approaches that are possible - one, to develop technologies to provide higher bandwidth of the order of Gigabits per second or more, and the other, to find ways and means by which the number of bits to be transferred can be reduced, without compromising the information content. It amounts to saying that we need a transformation of a string of characters in some representation (such as ASCII) into a new string (e.g., of bits) that contains the same information but whose length must be as small as possible; i.e., data compression. Data compression is often referred to as coding, whereas coding is a general term encompassing any special representation of data that achieves a given goal.
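The numbers in this passage can be verified with a few lines of arithmetic; the following Python snippet (ours) recomputes the per-frame size and the resulting bandwidth.

```python
width, height = 640, 480
bits_per_pixel = 24       # luminance and chrominance components per pixel
fps = 30

frame_bits = width * height * bits_per_pixel
print(f"bits per frame: {frame_bits:,}")                 # 7,372,800 = 7.3728 million
print(f"frame period: {1000 / fps:.1f} ms")              # 33.3 ms
print(f"bit rate: {frame_bits * fps / 1e6:.3f} Mbit/s")  # 221.184 Mbit/s
```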
  • Experiment 7 IMAGE COMPRESSION I Introduction
    Experiment 7 IMAGE COMPRESSION I Introduction A digital image obtained by sampling and quantizing a continuous-tone picture requires enormous storage. For instance, a 24-bit color image with 512x512 pixels will occupy 768 Kbyte of storage on a disk, and a picture twice this size will not fit on a single floppy disk. To transmit such an image over a 28.8 Kbps modem would take almost 4 minutes. The purpose of image compression is to reduce the amount of data required for representing sampled digital images and therefore reduce the cost of storage and transmission. Image compression plays a key role in many important applications, including image databases, image communications, remote sensing (the use of satellite imagery for weather and other earth-resource applications), document and medical imaging, facsimile transmission (FAX), and the control of remotely piloted vehicles in military, space, and hazardous waste control applications. In short, an ever-expanding number of applications depend on the efficient manipulation, storage, and transmission of binary, gray-scale, or color images. An important development in image compression is the establishment of the JPEG standard for compression of color pictures. Using the JPEG method, a 24 bit/pixel color image can be reduced to between 1 and 2 bits/pixel, without obvious visual artifacts. Such reduction makes it possible to store and transmit digital imagery with reasonable cost. It also makes it possible to download a color photograph almost in an instant, making electronic publishing/advertising on the Web a reality. Prior to this, the G3 and G4 standards had been developed for compression of facsimile documents, reducing the time for transmitting one page of text from about 6 minutes to 1 minute.
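Likewise, the storage and modem figures quoted here can be reproduced directly; the short computation below (our addition) also shows the effect of a JPEG-style reduction to 1-2 bits/pixel.

```python
pixels = 512 * 512
raw_bytes = pixels * 3                    # 24-bit color, 3 bytes per pixel
print(f"raw size: {raw_bytes / 1024:.0f} Kbyte")                   # 768 Kbyte

modem_bps = 28_800
print(f"transmit time: {raw_bytes * 8 / modem_bps / 60:.1f} min")  # ~3.6 min

for bpp in (1, 2):                        # JPEG-compressed equivalents
    seconds = pixels * bpp / modem_bps
    print(f"{bpp} bit/pixel -> {pixels * bpp / 8 / 1024:.0f} Kbyte, {seconds:.1f} s")
```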
  • Ts 103 491 V1.1.1 (2017-04)
    ETSI TS 103 491 V1.1.1 (2017-04) TECHNICAL SPECIFICATION DTS-UHD Audio Format; Delivery of Channels, Objects and Ambisonic Sound Fields Reference DTS/JTC-DTS-UHD Keywords audio, codec, object audio ETSI 650 Route des Lucioles F-06921 Sophia Antipolis Cedex - FRANCE Tel.: +33 4 92 94 42 00 Fax: +33 4 93 65 47 16 Siret N° 348 623 562 00017 - NAF 742 C Non-profit association registered at the Sous-Préfecture de Grasse (06) No. 7803/88 Important notice The present document can be downloaded from: http://www.etsi.org/standards-search The present document may be made available in electronic versions and/or in print. The content of any electronic and/or print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any existing or perceived difference in contents between such versions and/or in print, the only prevailing document is the print of the Portable Document Format (PDF) version kept on a specific network drive within ETSI Secretariat. Users of the present document should be aware that the document may be subject to revision or change of status. Information on the current status of this and other ETSI documents is available at https://portal.etsi.org/TB/ETSIDeliverableStatus.aspx If you find errors in the present document, please send your comment to one of the following services: https://portal.etsi.org/People/CommiteeSupportStaff.aspx Copyright Notification No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and microfilm except as authorized by written permission of ETSI.
  • (LSF) Vector Quantization in Speech Codec
    electronics Article An Efficient Codebook Search Algorithm for Line Spectrum Frequency (LSF) Vector Quantization in Speech Codec Yuqun Xue 1,2, Yongsen Wang 1,2, Jianhua Jiang 1,2, Zenghui Yu 1, Yi Zhan 1, Xiaohua Fan 1 and Shushan Qiao 1,2,* 1 Smart Sensing R&D Center, Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China; [email protected] (Y.X.); [email protected] (Y.W.); [email protected] (J.J.); [email protected] (Z.Y.); [email protected] (Y.Z.); [email protected] (X.F.) 2 Department of Microelectronics, University of Chinese Academy of Sciences, Beijing 100049, China * Correspondence: [email protected] Abstract: A high-performance vector quantization (VQ) codebook search algorithm is proposed in this paper. VQ is an important data compression technique that has been widely applied to speech, image, and video compression. However, the process of the codebook search demands a high computational load. To solve this issue, a novel algorithm that consists of training and encoding procedures is proposed. In the training procedure, a training speech dataset was used to build the squared-error distortion look-up table for each subspace. In the encoding procedure, firstly, an input vector was quickly assigned to a search subspace. Secondly, the candidate code word group was obtained by employing the triangular inequality elimination (TIE) equation. Finally, a partial distortion elimination technique was employed to reduce the number of multiplications. The proposed method reduced the number of searches and computation load significantly, especially when the input vectors were uncorrelated.
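Partial distortion elimination, one of the techniques used in this paper, is straightforward to sketch: accumulate the squared error one dimension at a time and abandon a code word as soon as the partial sum exceeds the best distortion found so far. Below is a minimal Python version (ours; the TIE pre-screening and look-up-table stages are omitted).

```python
import numpy as np

def pde_search(x, codebook):
    """Nearest code word via partial distortion elimination (PDE):
    drop a candidate as soon as its partial squared error exceeds
    the best full distortion seen so far."""
    best_idx, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:          # early exit: cannot beat the best
                break
        else:
            best_idx, best_d = i, d  # inner loop finished: new best
    return best_idx, best_d

rng = np.random.default_rng(1)
codebook = rng.normal(size=(256, 10))    # e.g. 10-dim LSF-like vectors
x = rng.normal(size=10)
idx, d = pde_search(x, codebook)
# Sanity check against plain full search
assert idx == int(((codebook - x) ** 2).sum(axis=1).argmin())
print("nearest code word:", idx, "distortion:", round(d, 4))
```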
  • Multi-Image Classification and Compression Using Vector Quantization
    W&M ScholarWorks Dissertations, Theses, and Masters Projects Theses, Dissertations, & Master Projects 1999 Multi-image classification and compression using vector quantization Beverly J. Thompson College of William & Mary - Arts & Sciences Follow this and additional works at: https://scholarworks.wm.edu/etd Part of the Computer Sciences Commons Recommended Citation Thompson, Beverly J., "Multi-image classification and compression using vector quantization" (1999). Dissertations, Theses, and Masters Projects. Paper 1539623956. https://dx.doi.org/doi:10.21220/s2-qz52-5502 This Dissertation is brought to you for free and open access by the Theses, Dissertations, & Master Projects at W&M ScholarWorks. It has been accepted for inclusion in Dissertations, Theses, and Masters Projects by an authorized administrator of W&M ScholarWorks. For more information, please contact [email protected]. INFORMATION TO USERS This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer. The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction. In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion. Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps.
  • Algorithms for Fast Vector Quantization∗
    Algorithms for Fast Vector Quantization∗ Sunil Arya, Department of Computer Science, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong; David M. Mount, Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, Maryland, USA Abstract Nearest neighbor searching is an important geometric subproblem in vector quantization. Existing studies have shown that the difficulty of solving this problem efficiently grows rapidly with dimension. Indeed, existing approaches on unstructured codebooks in dimension 16 are little better than brute-force search. We show that if one is willing to relax the requirement of finding the true nearest neighbor then dramatic improvements in running time are possible, with negligible degradation in the quality of the result. We present an empirical study of three nearest neighbor algorithms on a number of data distributions, and in dimensions varying from 8 to 16. The first algorithm is the standard k-d tree algorithm which has been enhanced to use incremental distance calculation, the second is a further improvement that orders search by the proximity of the k-d cell to the query point, and the third is based on a simple greedy search in a structure called a neighborhood graph. Key words: Nearest neighbor searching, closest-point queries, data compression, vector quantization, k-d trees. 1 Introduction The nearest neighbor problem is to find the point closest to a query point among a set of n points in d-dimensional space. We assume that the distances are measured in the Euclidean metric. Finding the nearest neighbor is a problem of significant importance in many applications.
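The paper's central point, that relaxing exactness buys large speedups with little quality loss, can be tried with an off-the-shelf k-d tree: SciPy's cKDTree exposes an eps parameter for (1+eps)-approximate nearest neighbor search. A small demonstration (ours, not the authors' code):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
codebook = rng.uniform(size=(4096, 16))   # unstructured codebook, dimension 16
queries = rng.uniform(size=(1000, 16))

tree = cKDTree(codebook)
d_exact, i_exact = tree.query(queries)            # exact nearest neighbor
d_apx, i_apx = tree.query(queries, eps=0.5)       # (1+eps)-approximate search

# The returned neighbor is guaranteed to be within (1+eps) of the true
# nearest distance, and in practice most answers are still exactly right.
assert np.all(d_apx <= (1 + 0.5) * d_exact)
print("fraction of queries answered exactly:", (i_exact == i_apx).mean())
```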
  • Vq-Wav2vec:Self-Supervised Learningof Discrete Speech Representations
    Published as a conference paper at ICLR 2020 VQ-WAV2VEC: SELF-SUPERVISED LEARNING OF DISCRETE SPEECH REPRESENTATIONS Alexei Baevski∗⁴, Steffen Schneider∗⁵, Michael Auli⁴ ⁴Facebook AI Research, Menlo Park, CA, USA ⁵University of Tübingen, Germany ABSTRACT We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task. The algorithm uses either a Gumbel-Softmax or online k-means clustering to quantize the dense representations. Discretization enables the direct application of algorithms from the NLP community which require discrete inputs. Experiments show that BERT pre-training achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition. 1 INTRODUCTION Learning discrete representations of speech has gathered much recent interest (Versteegh et al., 2016; Dunbar et al., 2019). A popular approach to discover discrete units is via autoencoding (Tjandra et al., 2019; Eloff et al., 2019; Chorowski et al., 2019), sometimes coupled with an autoregressive model (Chung et al., 2019). Another line of research is to learn continuous speech representations in a self-supervised way via predicting context information (Chung & Glass, 2018; van den Oord et al., 2018; Schneider et al., 2019). In this paper, we combine these two lines of research by learning discrete representations of speech via a context prediction task instead of reconstructing the input. This enables us to directly apply well-performing NLP algorithms to speech data (Figure 1a). Figure 1: (a) The vq-wav2vec encoder maps raw audio (X) to a dense representation (Z) which is quantized (q) to Ẑ and aggregated into context representations (C); training requires future time step prediction. (b) Discretized speech training pipeline.
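The Gumbel-Softmax route to picking discrete codebook entries while keeping gradients can be sketched in a few lines of PyTorch. The toy module below is our own simplification, not the paper's implementation (which, among other things, uses multiple codebook groups and anneals the temperature during training).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelVectorQuantizer(nn.Module):
    """Toy Gumbel-Softmax quantizer: dense features -> one codebook entry
    per time step, differentiable via the straight-through estimator."""
    def __init__(self, dim=512, num_codes=320, tau=2.0):
        super().__init__()
        self.to_logits = nn.Linear(dim, num_codes)
        self.codebook = nn.Embedding(num_codes, dim)
        self.tau = tau   # temperature; fixed here for simplicity

    def forward(self, z):                      # z: (batch, time, dim)
        logits = self.to_logits(z)
        # hard=True returns one-hot samples but backpropagates through softmax
        one_hot = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        z_q = one_hot @ self.codebook.weight   # look up the selected code words
        return z_q, one_hot.argmax(dim=-1)

quantizer = GumbelVectorQuantizer()
z = torch.randn(2, 100, 512)                  # stand-in for dense encoder output
z_q, indices = quantizer(z)
print(z_q.shape, indices.shape)               # (2, 100, 512), (2, 100)
```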
  • Video Compression: MPEG-4 and Beyond Video Compression: MPEG-4 and Beyond
    Video Compression: MPEG-4 and Beyond Ali Saman Tosun, [email protected] Abstract Technological developments in networking technology and computers have made the use of video possible. Storing and transmitting uncompressed raw video is not a good idea: it requires large storage space and bandwidth. Special algorithms which take the characteristics of the video into account can compress the video with a high compression ratio. In this paper I will give an overview of the standardization efforts on video compression: MPEG-1, MPEG-2, MPEG-4, MPEG-7, and I will briefly explain the current video compression trends. See also: Multimedia Networking Products | Multimedia Over IP: RSVP, RTP, RTCP, RTSP | Multimedia Networking References | Books on Multimedia | Protocols for Multimedia on the Internet (Class Lecture) | Video over ATM networks | Multimedia networks: An introduction (Talk by Prof. Jain) | Other Reports on Recent Advances in Networking Back to Raj Jain's Home Page Table of Contents: ● 1. Introduction ● 2. H.261 ● 3. H.263 ● 4. H.263+ ● 5. MPEG ❍ 5.1 MPEG-1 ❍ 5.2 MPEG-2 ❍ 5.3 MPEG-3 ❍ 5.4 MPEG-4 ❍ 5.5 MPEG-7 ● 6. J.81 ● 7. Fractal-Based Coding ● 8. Model-based Video Coding ● 9. Scalable Video Coding ● 10. Wavelet-based Coding ● Summary ● References ● List of Acronyms 1. Introduction Over the last couple of years there has been a great increase in the use of video in digital form due to the popularity of the Internet.
  • A Novel and Efficient Vector Quantization Based CPRI
    A Novel and Efficient Vector Quantization Based CPRI Compression Algorithm Hongbo Si, Boon Loong Ng, Md. Saifur Rahman, and Jianzhong (Charlie) Zhang arXiv:1510.04940v1 [cs.IT] 16 Oct 2015 Abstract The future wireless network, such as Centralized Radio Access Network (C-RAN), will need to deliver data rate about 100 to 1000 times the current 4G technology. For C-RAN based network architecture, there is a pressing need for tremendous enhancement of the effective data rate of the Common Public Radio Interface (CPRI). Compression of CPRI data is one of the potential enhancements. In this paper, we introduce a vector quantization based compression algorithm for CPRI links, utilizing the Lloyd algorithm. Methods to vectorize the I/Q samples and enhanced initialization of the Lloyd algorithm for codebook training are investigated for improved performance. Multi-stage vector quantization and unequally protected multi-group quantization are considered to reduce codebook search complexity and codebook size. Simulation results show that our solution can achieve compression of 4 times for uplink and 4.5 times for downlink, within 2% Error Vector Magnitude (EVM) distortion. Remarkably, the vector quantization codebook proves to be quite robust against data modulation mismatch, fading, signal-to-noise ratio (SNR) and Doppler spread. I. INTRODUCTION The amount of wireless IP data traffic is projected to grow by well over 100 times within a decade (from under 3 exabytes in 2010 to more than 500 exabytes by 2020) [1]. To address such wireless data traffic demand, there has been increasing effort to define the 5G network in recent years. It is widely recognized that the 5G network will be required to deliver data rate about (The material in this paper was submitted in part to the 2015 IEEE Global Communications Conference (Globecom 2015), San Diego, CA, USA, Dec.)
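The Lloyd algorithm at the heart of this scheme alternates nearest-code-word assignment with centroid updates. The NumPy sketch below (ours, trained on synthetic data standing in for vectorized I/Q samples) shows the basic iteration; the paper's enhanced initialization and multi-stage structure are omitted.

```python
import numpy as np

def lloyd_codebook(train, num_codes=64, iters=20, seed=0):
    """Train a VQ codebook with the Lloyd algorithm (generalized k-means):
    alternate nearest-code-word assignment and centroid update."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), num_codes, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest code word for every training vector
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # Update step: each code word moves to the centroid of its cell
        for k in range(num_codes):
            members = train[assign == k]
            if len(members):                 # keep the old word if cell is empty
                codebook[k] = members.mean(axis=0)
    return codebook

# Synthetic stand-in for vectorized I/Q samples: 4 consecutive values per vector
rng = np.random.default_rng(3)
iq = rng.normal(size=(5000, 4))
cb = lloyd_codebook(iq)
d2 = ((iq[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2)
print("mean training distortion:", float(d2.min(axis=1).mean()))
```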