A Survey on Lossless and Lossy Data Compression Methods
Recommended publications
Information and Compression: Introduction and Summary of Course
Lecture slides by Paul Cockshott (Faraday Partnership).
Introduction: what information is; methods of measuring it; the idea of compression; two distinct ways of compressing images.
Summary of course / agenda: encoding systems; BCD versus decimal; information efficiency; Shannon's measure; Chaitin and Kolmogorov complexity theory; the relation to Gödel's theorem. At the end of the lecture you should have an idea of what information is, its relation to order, and why data compression is possible.
Overview: data communications channels are bounded; images take a great deal of information in raw form; efficient transmission requires denser encoding; this requires that we understand what information actually is.
Connections: a simple encoding example shows that different densities are possible; Shannon's information theory deals with transmission; Chaitin relates this to Turing machines and computability; Gödel's theorem limits what we can know about compression.
Vocabulary: information, entropy, compression, redundancy, lossless encoding, lossy encoding.
What is information: information is measured in bits, a bit being equivalent to a binary digit. Why use the binary system rather than decimal, which would seem more natural, or base e, the base of natural logarithms, instead of 2 or 10?
Voltage encoding and noise immunity: digital information is encoded as ranges of continuous variables; binary voltage encoding uses 2 values (0, 1) at 5 V with an isolation band, whereas decimal would need 10 values (0..9).
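As a minimal illustration of Shannon's measure referenced in the agenda, the entropy of a discrete source can be computed as follows (a generic sketch, not part of the original slides):

```python
import math

def shannon_entropy(probabilities):
    """Average information content in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))  # a fair coin carries 1.0 bit per toss
print(shannon_entropy([0.9, 0.1]))  # a biased coin carries about 0.47 bits per toss
```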
A Survey Paper on Different Speech Compression Techniques
A Survey Paper on Different Speech Compression Techniques. Kanawade Pramila R. (M.E. Electronics) and Prof. Gundal Shital S. (Head of the Electronics Department), Department of Electronics Engineering, Amrutvahini College of Engineering, Sangamner, Maharashtra, India. IJARIIE, Vol. 2, Issue 5, 2016, ISSN (O) 2395-4396.
ABSTRACT: This paper describes the different types of speech compression techniques. Speech compression can be divided into two main types: lossless and lossy compression. The survey covers waveform-based, parametric-based, and hybrid speech compression techniques. Compression is the reduction of data size with storage requirements in mind. Speech compression means compressing the voice signal for applications such as high-quality speech signal databases, multimedia applications, music databases, and Internet applications; it is widely useful today. The main aim of speech compression is to compress any audio that is transferred over a communication channel, because channel bandwidth and data storage capacity are limited and low bit rates are required. Both lossless and lossy techniques reduce the number of bits needed to represent the original information: with lossless data compression no information is lost, while lossy data compression discards some bits.
Keywords: bit rate, compression, waveform-based speech compression, parametric-based speech compression, hybrid speech compression.
1. INTRODUCTION: Speech compression is used in the encoding system.
An Improved Fast Fractal Image Compression Coding Method
An Improved Fast Fractal Image Compression Coding Method. Haibo Gao, Wenjuan Zeng, Jifeng Chen, and Cheng Zhang, College of Information Science and Engineering, Hunan International Economics University, Changsha 410205, Hunan, China. Rev. Téc. Ing. Univ. Zulia, Vol. 39, No. 9, pp. 241-247, 2016, doi:10.21311/001.39.9.32.
Abstract: The main purpose of image compression is to use as few bits as possible to represent the source image, by eliminating image redundancy while keeping acceptable restoration quality. Fractal image compression coding has achieved good results in the field of image compression due to its high compression ratio, fast decoding speed, and the independence of the decoded image from resolution. However, the method has one big problem: long coding time, because the encoder must search a large number of fractal blocks. Based on the theoretical foundation of fractal coding and the fractal compression algorithm, this paper examines every step of the compression method, including block partition, block search, and the representation of the affine transformation coefficients, in order to propose optimization and improvement strategies. Simulation results show that the proposed algorithm achieves excellent coding time and compression ratio while ensuring good image quality. The work on fractal coding in this paper makes a definite contribution to the research on fractal coding.
Key words: image compression, fractal theory, coding and decoding.
1. INTRODUCTION: Image compression aims to use as few bytes as possible to represent and transmit the original large image while retaining good quality in the restored image.
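The block search step mentioned in the abstract can be sketched as follows: for each range block, the encoder scans candidate domain blocks (already shrunk to the range-block size) and fits a grey-level affine map s*D + o by least squares, keeping the best match. This is a minimal illustration of plain fractal coding, not the paper's improved algorithm; the data layout and exhaustive search strategy are assumptions.

```python
import numpy as np

def best_domain_match(range_block, domain_blocks):
    """Find the domain block and affine grey-level map s*D + o that best
    approximate one range block (least-squares fit, exhaustive search)."""
    r = range_block.astype(float).ravel()
    best = None
    for idx, dom in enumerate(domain_blocks):
        d = dom.astype(float).ravel()
        var_d = d.var()
        s = ((d - d.mean()) * (r - r.mean())).mean() / var_d if var_d > 0 else 0.0
        o = r.mean() - s * d.mean()
        err = float(((s * d + o - r) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best  # (error, domain index, scale, offset)
```

The long coding time the abstract refers to comes from running this exhaustive loop over domain blocks for every range block; pruning or accelerating that search is the natural target for improvement.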
ROOT I/O Compression Improvements for HEP Analysis (arXiv:2004.10531v1 [cs.OH], 8 Apr 2020)
ROOT I/O compression improvements for HEP analysis. Oksana Shadura¹, Brian Paul Bockelman², Philippe Canal³, Danilo Piparo⁴, and Zhe Zhang¹. ¹University of Nebraska-Lincoln, 1400 R St, Lincoln, NE 68588, United States; ²Morgridge Institute for Research, 330 N Orchard St, Madison, WI 53715, United States; ³Fermilab, Kirk Road and Pine St, Batavia, IL 60510, United States; ⁴CERN, Meyrin 1211, Geneva, Switzerland.
Abstract: We give an overview of recent changes in the ROOT I/O system that increase its performance and improve its interaction with other data analysis ecosystems. The newly introduced compression algorithms, the much faster bulk I/O data path, and a few additional techniques all have the potential to significantly improve experiments' software performance. The need for efficient lossless data compression has grown significantly as the amount of HEP data collected, transmitted, and stored has dramatically increased during the LHC era. While compression reduces storage space and, potentially, I/O bandwidth usage, it should not be applied blindly: there are significant trade-offs between the increased CPU cost for reading and writing files and the reduced storage space.
1 Introduction: In the past years the LHC experiments have been commissioned and now manage about an exabyte of storage for analysis purposes, approximately half of which is used for archival and half for traditional disk storage. Meanwhile, for the HL-LHC, storage requirements per year are expected to increase by a factor of 10 [1]. Given these predictions, storage will remain one of the major cost drivers and at the same time one of the bottlenecks for HEP computing.
The Discrete Cosine Transform (DCT): Theory and Application
The Discrete Cosine Transform (DCT): Theory and Application. Syed Ali Khayam, Department of Electrical & Computer Engineering, Michigan State University, March 10th, 2003. (This document is intended to be tutorial in nature; no prior knowledge of image processing concepts is assumed. Interested readers should follow the references for advanced material on the DCT. ECE 802-602: Information Theory and Coding, Seminar 1.)
1. Introduction: Transform coding constitutes an integral component of contemporary image/video processing applications. Transform coding relies on the premise that pixels in an image exhibit a certain level of correlation with their neighboring pixels. Similarly, in a video transmission system, adjacent pixels in consecutive frames show very high correlation. Consequently, these correlations can be exploited to predict the value of a pixel from its respective neighbors. A transformation is therefore defined to map this spatial (correlated) data into transformed (uncorrelated) coefficients. Clearly, the transformation should exploit the fact that the information content of an individual pixel is relatively small, i.e., to a large extent the visual contribution of a pixel can be predicted from its neighbors. A typical image/video transmission system is outlined in Figure 1. The objective of the source encoder is to exploit the redundancies in image data to provide compression; in other words, the source encoder reduces the entropy, which in our case means a decrease in the average number of bits required to represent the image. In contrast, the channel encoder adds redundancy to the output of the source encoder in order to enhance the reliability of the transmission.
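For reference, the transform this tutorial develops is the DCT-II; its standard orthonormal one-dimensional form (a textbook definition, not quoted from the tutorial) is:

```latex
X_k = \alpha_k \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right)k\right],
\qquad
\alpha_k =
\begin{cases}
\sqrt{1/N}, & k = 0,\\
\sqrt{2/N}, & 1 \le k \le N-1.
\end{cases}
```

The two-dimensional DCT used on image blocks (for example the 8x8 blocks of JPEG) is obtained by applying this transform separately along the rows and then the columns of each block.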
Hardware Architecture for Elliptic Curve Cryptography and Lossless Data Compression
Hardware architecture for elliptic curve cryptography and lossless data compression, by Miguel Morales-Sandoval. A thesis presented to the National Institute for Astrophysics, Optics and Electronics in partial fulfilment of the requirements for the degree of Master of Computer Science. Thesis advisor: PhD. Claudia Feregrino-Uribe, Computer Science Department, National Institute for Astrophysics, Optics and Electronics, Tonantzintla, Puebla, México, December 2004.
Abstract: Data compression and cryptography play an important role when transmitting data across a public computer network. While compression reduces the amount of data to be transferred or stored, cryptography ensures that data is transmitted with reliability and integrity. Compression and encryption have to be applied in the correct order: data are compressed before they are encrypted. In the opposite order, the output of the cryptographic operation would be data with no patterns or redundancy, leading to very poor or no compression. In this research work, a hardware architecture that joins lossless compression and public-key cryptography for secure data transmission applications is discussed. The architecture consists of a dictionary-based lossless data compressor to compress the incoming data, and an elliptic curve cryptographic module that performs two EC (elliptic curve) cryptographic schemes: encryption and digital signature. For lossless data compression, the dictionary-based LZ77 algorithm is implemented using a systolic array approach. The elliptic curve cryptosystem is defined over the binary field F_{2^m}, using polynomial basis, affine coordinates, and the binary method to compute a scalar multiplication. While the model of the complete system was implemented in software, the hardware architecture was described in the Very High Speed Integrated Circuit Hardware Description Language (VHDL).
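A minimal software sketch of the dictionary-based LZ77 scheme named in the abstract, emitting (offset, length, next byte) triples via a greedy longest-match over a sliding window. The window size and match-length cap are arbitrary illustrative choices, and this plain sequential version is not the thesis's systolic-array design:

```python
def lz77_compress(data: bytes, window: int = 4096, max_len: int = 15):
    """Greedy LZ77: emit (offset, length, next_byte) triples."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        start = max(0, i - window)
        for j in range(start, i):               # scan the sliding window
            length = 0
            while (length < max_len and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        nxt = data[i + best_len] if i + best_len < len(data) else 0
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

print(lz77_compress(b"abcabcabcd"))  # [(0, 0, 97), (0, 0, 98), (0, 0, 99), (3, 6, 100)]
```

The thesis maps this kind of match search onto a systolic array in hardware; the code above only shows the data flow of the algorithm itself.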
A Review of Data Compression Technique
A Review of Data Compression Technique. Ritu¹ and Puneet Sharma², Department of Computer Science and Engineering, Hindu College of Engineering, Sonipat, Deenbandhu Chhotu Ram University, Murthal, Haryana, India. International Journal of Computer Science Trends and Technology (IJCST), Volume 2, Issue 4, Jul-Aug 2014.
ABSTRACT: With the growth of technology and the entrance into the Digital Age, the world has found itself amid a vast amount of information. Dealing with such an enormous amount of information can often present difficulties; digital information must be stored and retrieved in an efficient manner in order for it to be put to practical use. Compression is one way to deal with this problem. Images require substantial storage and transmission resources, so image compression is advantageous for reducing these requirements. Image compression is a key technology in the transmission and storage of digital images because of the vast amount of data associated with them. This paper addresses various image compression techniques. There are two types of image compression: lossless and lossy.
Keywords: image compression, JPEG, discrete wavelet transform.
I. INTRODUCTION: Image compression is important for many applications that involve huge data storage, transmission, and retrieval, such as multimedia, documents, videoconferencing, and medical imaging. Uncompressed images require considerable storage capacity and transmission bandwidth. The objective of an image compression technique is to reduce [...] redundancies. An inverse process called decompression (decoding) is applied to the compressed data to get the reconstructed image. The objective of compression is to reduce the number of bits as much as possible, while keeping the resolution and the visual quality of the reconstructed image as close to the original image as possible.
Lossless Compression of Audio Data
CHAPTER 12: Lossless Compression of Audio Data, by Robert C. Maher.
OVERVIEW: Lossless data compression of digital audio signals is useful when it is necessary to minimize the storage space or transmission bandwidth of audio data while still maintaining archival quality. Available techniques for lossless audio compression, or lossless audio packing, generally employ an adaptive waveform predictor with variable-rate entropy coding of the residual, such as Huffman or Golomb-Rice coding. The amount of data compression can vary considerably from one audio waveform to another, but ratios of less than 3 are typical. Several freeware, shareware, and proprietary commercial lossless audio packing programs are available.
12.1 INTRODUCTION: The Internet is increasingly being used as a means to deliver audio content to end-users for entertainment, education, and commerce. It is clearly advantageous to minimize the time required to download an audio data file and the storage capacity required to hold it. Moreover, the expectations of end-users with regard to signal quality, number of audio channels, metadata such as song lyrics, and similar additional features provide incentives to compress the audio data.
12.1.1 Background: In the past decade there have been significant breakthroughs in audio data compression using lossy perceptual coding [1]. These techniques lower the bit rate required to represent the signal by establishing perceptual error criteria, meaning that a model of human hearing perception is used to guide the elimination of excess bits that can be either reconstructed (redundancy in the signal) or ignored (inaudible components in the signal).
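A minimal sketch of the predictor-plus-Rice-coding idea described in the overview, using a first-order predictor and a fixed Rice parameter; real lossless packers adapt both the predictor and the parameter, so this is illustrative only:

```python
def rice_encode(value: int, k: int) -> str:
    """Golomb-Rice code for a non-negative integer: unary quotient, then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def encode_samples(samples, k: int = 3) -> str:
    """First-order prediction: Rice-code each residual (sample minus previous sample)."""
    bits, prev = [], 0
    for s in samples:
        residual, prev = s - prev, s
        mapped = 2 * residual if residual >= 0 else -2 * residual - 1  # fold signed to non-negative
        bits.append(rice_encode(mapped, k))
    return "".join(bits)

print(encode_samples([64, 65, 67, 66, 66]))
```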
The H.264 Advanced Video Coding (AVC) Standard
Whitepaper: The H.264 Advanced Video Coding (AVC) Standard. What It Means to Web Camera Performance.
Introduction: A new generation of webcams is hitting the market that makes video conferencing a more lifelike experience for users, thanks to adoption of the breakthrough H.264 standard. This white paper explains some of the key benefits of H.264 encoding and why cameras with this technology should be on the shopping list of every business.
The Need for Compression: Today, Internet connection rates average in the range of a few megabits per second. While VGA video requires 147 megabits per second (Mbps) of data, full high definition (HD) 1080p video requires almost one gigabit per second of data, as illustrated in Table 1.
Table 1. Display Resolution Format Comparison
Format   Horizontal Pixels   Vertical Lines   Pixels      Megabits per second (Mbps)
QVGA     320                 240               76,800      37
VGA      640                 480               307,200     147
720p     1280                720               921,600     442
1080p    1920                1080              2,073,600   995
Video Compression Techniques: Digital video streams, especially at high definition (HD) resolution, represent huge amounts of data. In order to achieve real-time HD resolution over typical Internet connection bandwidths, video compression is required. The amount of compression required to transmit 1080p video over a three-megabits-per-second link is 332:1! Video compression techniques use mathematical algorithms to reduce the amount of data needed to transmit or store video.
Lossless Compression: Lossless compression changes how data is stored without resulting in any loss of information. Zip files are losslessly compressed, so that when they are unzipped the original files are recovered.
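The table's bit-rate column is consistent with uncompressed video at 16 bits per pixel and 30 frames per second; those two parameters are an inference, not stated in the whitepaper. A quick check:

```python
def raw_bitrate_mbps(width: int, height: int, bits_per_pixel: int = 16, fps: int = 30) -> float:
    """Uncompressed video bit rate in megabits per second."""
    return width * height * bits_per_pixel * fps / 1_000_000

for name, w, h in [("QVGA", 320, 240), ("VGA", 640, 480),
                   ("720p", 1280, 720), ("1080p", 1920, 1080)]:
    print(name, round(raw_bitrate_mbps(w, h)))  # 37, 147, 442, 995 -- matching Table 1
```

Dividing the 1080p figure by a 3 Mbps link (995 / 3, roughly 332) reproduces the 332:1 compression ratio quoted above.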
A Review on Different Lossless Image Compression Techniques
A Review on Different Lossless Image Compression Techniques. Ilam Parithi T. (Research Scholar, CSE) and Balasubramanian R. (Professor, CSE), Manonmaniam Sundaranar University, Tirunelveli, India. International Journal of Modern Sciences and Engineering Technology (IJMSET), ISSN 2349-3755, available at https://www.ijmset.com, Volume 2, Issue 4, 2015, pp. 86-94.
Abstract: In the modern world, sharing and storing data in the form of images is a great challenge for social networks. People share and store millions of images every second. Although data compression reduces the amount of space required to store data, there is a need for a perfect image compression algorithm that will reduce the size of data for both sharing and storing.
Keywords: Huffman coding, run-length encoding (RLE), arithmetic coding, Lempel-Ziv-Welch coding (LZW), differential pulse code modulation (DPCM).
1. INTRODUCTION: Image compression is a technique for effectively coding a digital image by minimizing the number of bits needed to represent it. The aim is to reduce storage space and transmission cost while maintaining good quality. Compression is achieved by removing redundancies such as coding redundancy, inter-pixel redundancy, and psychovisual redundancy. Generally, compression techniques are divided into two types: (i) lossy compression techniques and (ii) lossless compression techniques.
[Figure: classification of compression techniques into lossless (run-length coding, Huffman coding, arithmetic coding, LZW coding) and lossy (wavelet-based compression, fractal compression, vector quantization, block truncation coding).]
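As a tiny illustration of one of the lossless techniques the figure lists, here is a run-length encoder (a generic sketch, not code from the paper):

```python
def run_length_encode(s: str):
    """Collapse runs of repeated symbols into (symbol, count) pairs."""
    runs, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        runs.append((s[i], j - i))
        i = j
    return runs

print(run_length_encode("AAAABBBCCD"))  # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
```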
Understanding Compression of Geospatial Raster Imagery
Understanding Compression of Geospatial Raster Imagery
Document Overview: This document was created for the North Carolina Geographic Information and Coordinating Council (GICC), http://ncgicc.com, by the GIS Technical Advisory Committee (TAC). Its purpose is to serve as a best practice or guidance document for GIS professionals who are compressing raster images. It only addresses compressing geospatial raster data, specifically aerial or orthorectified imagery; it does not address compressing LiDAR data.
Compression Overview: Compression is the process of making data more compact so it occupies less disk storage space. The primary benefit of compressing raster data is reduction in file size. An added benefit is greatly improved performance over a network, because the user is transferring less data from a server to an application; however, compressed data must be decompressed to display in GIS software, so raster display may be slower than with uncompressed data. Compressed data can also increase CPU requirements on the server or desktop.
Glossary of Common Terms: Raster is a spatial data model made of rows and columns of cells; each cell contains an attribute value identifying its color and location coordinate. Geospatial raster data such as satellite images and aerial photographs are typically larger on average than vector data (predominantly points, lines, or polygons). Compression is the process of making a (raster) file smaller while preserving all or most of the data it contains. Imagery compression enables storage of more data (image files) on a disk than if they were uncompressed. Compression ratio is the amount or degree of reduction of an image's file size.
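The glossary's last definition is commonly made precise as the ratio of uncompressed to compressed file size (a standard convention, not stated in this document):

```latex
\text{compression ratio} = \frac{\text{uncompressed file size}}{\text{compressed file size}}
```

For example, an orthoimage tile that shrinks from 400 MB to 80 MB has a 5:1 compression ratio.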
Block Transforms in Progressive Image Coding
Block Transforms in Progressive Image Coding, by Trac D. Tran and Truong Q. Nguyen.
1 Introduction: Block transform coding and subband coding have been two dominant techniques in existing image compression standards and implementations. Both methods actually exhibit many similarities: relying on a certain transform to convert the input image to a more decorrelated representation, then utilizing the same basic building blocks such as bit allocator, quantizer, and entropy coder to achieve compression. Block transform coders enjoyed success first due to their low complexity in implementation and their reasonable performance. The most popular block transform coder leads to the current image compression standard JPEG [1], which utilizes the 8x8 Discrete Cosine Transform (DCT) at its transformation stage. At high bit rates (1 bpp and up), JPEG offers almost visually lossless reconstructed image quality. However, when more compression is needed, i.e., at lower bit rates, annoying blocking artifacts show up for two reasons: (i) the DCT bases are short, non-overlapped, and have discontinuities at the ends; (ii) JPEG processes each image block independently, so inter-block correlation has been completely abandoned. The development of the lapped orthogonal transform [2] and its generalized version GenLOT [3, 4] helps solve the blocking problem to a certain extent by borrowing pixels from the adjacent blocks to produce the transform coefficients of the current block.