<<

Ilam Parithi.T et. al. /International Journal of Modern Sciences and Engineering Technology (IJMSET) ISSN 2349-3755; Available at https://www.ijmset.com Volume 2, Issue 4, 2015, pp.86-94

A Review on Different Lossless Techniques

1 Ilam Parithi.T*, Research Scholar/CSE, Manonmaniam Sundaranar University, Tirunelveli, India, [email protected]
2 Balasubramanian.R, Professor/CSE, Manonmaniam Sundaranar University, Tirunelveli, India, [email protected]

Abstract: In the modern world, sharing and storing data in the form of images is a great challenge for social networks. People share and store millions of images every second. Though compression is done to reduce the amount of space required to store data, there is a need for a perfect image compression scheme that will reduce the size of data for both sharing and storing.

Keywords: Run Length Encoding (RLE), Lempel-Ziv-Welch Coding (LZW), Differential Pulse Code Modulation (DPCM).

1. INTRODUCTION:

Image compression is a technique for effectively coding an image by minimizing the number of bits needed to represent it. The aim is to reduce storage space and transmission cost while maintaining good quality. Compression is achieved by removing redundancies such as coding redundancy, inter-pixel redundancy, and psychovisual redundancy. Generally, compression techniques are divided into two types: i) lossless techniques and ii) lossy techniques.

[Fig. 1 Various Image Compression Techniques: a tree in which Compression Techniques branch into Lossless Compression (Run Length Coding, Huffman Coding, Arithmetic Coding, LZW Coding) and Lossy Compression (including Block Truncation Coding based compression).]

© IJMSET-Advanced Scientific Research Forum (ASRF), All Rights Reserved “IJMSET promotes research nature, Research nature enriches the world’s future”


LOSSY COMPRESSION.[14]

This approach is based on the concept of compromising the accuracy of the reconstructed image in order to increase the compression ratio. If the resulting distortion can be tolerated, the increase in compression can be significant.

LOSSLESS COMPRESSION.[3]

The aim of lossless image compression is to compress the image without any loss of data; it minimizes the space required for storing the data and speeds up transmission. In lossless compression approaches the original image can be completely recovered from the compressed image. These approaches are also called noiseless, as they do not add noise to the signal, and lossless compression is also known as entropy coding. The image after compression and decompression is identical to the original, and every bit of information is preserved during the decompression process, so the reconstructed image is an exact replica of the original. Lossless compression schemes are used in applications where no loss of image data can be tolerated.

[Fig. 2 Image Compression Framework: the compression block takes the input image through a Mapper, Quantizer and Symbol Encoder to produce the compressed image; the decompression block takes the compressed image through a Symbol Decoder and Inverse Mapper to produce the decompressed (original) image.]

2. LOSSLESS COMPRESSION TECHNIQUES.

The following are some of the lossless image compression techniques.

1. Arithmetic Coding
2. Huffman Coding
3. LZW Coding
4. Run Length Encoding (RLE)
5. DPCM
6. Lossless JPEG


2.1 Arithmetic coding. [14]

Arithmetic coding is a complex technique that codes a message into the shortest possible form using a binary fractional number. It assigns short codes to more probable events and longer codes to less probable events, and the resulting code length is close to minimal. It is one of the most efficient techniques for statistically lossless encoding. The aim of arithmetic coding is to define a method that provides a near-ideal code word for the data, and incremental transmission of bits is possible. This coding takes a stream of input symbols and replaces it with a single floating point number.
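The interval-narrowing idea can be sketched as follows. This is an illustrative floating-point version only: a practical coder uses integer arithmetic with incremental bit output, and the symbol probabilities below are invented for the example.

```python
# Simplified (floating-point) arithmetic coder: each symbol narrows the
# current interval [low, high) in proportion to its probability.
# Float precision limits this sketch to short messages.

def build_ranges(probs):
    """Map each symbol to a cumulative sub-interval of [0, 1)."""
    ranges, low = {}, 0.0
    for sym, p in probs.items():
        ranges[sym] = (low, low + p)
        low += p
    return ranges

def encode(message, probs):
    ranges = build_ranges(probs)
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_low, s_high = ranges[sym]
        high = low + span * s_high     # shrink the interval to the
        low = low + span * s_low       # symbol's sub-range
    return (low + high) / 2            # any number inside the final interval

def decode(code, length, probs):
    ranges = build_ranges(probs)
    out = []
    for _ in range(length):
        for sym, (s_low, s_high) in ranges.items():
            if s_low <= code < s_high:
                out.append(sym)
                code = (code - s_low) / (s_high - s_low)  # rescale to [0,1)
                break
    return "".join(out)

probs = {"A": 0.5, "B": 0.3, "C": 0.2}
code = encode("ABAC", probs)
assert decode(code, 4, probs) == "ABAC"
```

Note that a single number encodes the whole message, which is why arithmetic coding can spend a fractional number of bits per symbol.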

2.2 Huffman Coding. [10]

Huffman coding is an entropy coding technique used for lossless data compression. Here, coding redundancy is eliminated by choosing a better way of assigning the codes. The pixel values in the image are treated as symbols. Symbols that occur more frequently are assigned a small number of bits; symbols that occur rarely are assigned a larger number of bits. The binary code of any symbol is never the prefix of the code of any other symbol. Generally the Huffman algorithm comes in two forms: static and dynamic. The static Huffman algorithm calculates the frequency of each symbol in a first phase and constructs the Huffman tree in a second phase. The dynamic Huffman algorithm extends this to construct the tree in one pass, but it takes more space than the static version. An example of Huffman coding follows.

Step 1. Give the input data: T = 9, E = 2, L = 35, A = 28.

Step 2. Arrange the input data in ascending order: E = 2, T = 9, A = 28, L = 35.

Step 3. Select the two least-frequent symbols: E = 2 and T = 9.

Step 4. Add them together (2 + 9 = 11) and update the data.



Step 5. The final Huffman tree: the root (74) splits into L (35) and an internal node (39); the node 39 splits into A (28) and an internal node (11); the node 11 splits into T (9) and E (2).

[Fig 3 Final Huffman Tree]
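The two-phase static construction described above can be sketched with a priority queue. The heap-based implementation below is a common way to build the tree; it is a sketch, not taken from the paper.

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code table from a symbol -> frequency mapping."""
    # Heap entries: (frequency, tiebreaker, {symbol: code-so-far}).
    # The tiebreaker keeps tuple comparison away from the dicts.
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        # Merging prepends a bit: 0 for the first subtree, 1 for the second.
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

# The paper's example: T = 9, E = 2, L = 35, A = 28
codes = huffman_codes({"T": 9, "E": 2, "L": 35, "A": 28})
# Frequent symbols (L, A) get short codes; rare ones (E, T) get long codes.
assert len(codes["L"]) == 1 and len(codes["E"]) == 3
```

Matching the tree in Fig. 3, L sits at depth 1, A at depth 2, and E and T at depth 3.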

2.3 LZW Coding.[15]

LZW (Lempel-Ziv-Welch) uses a dictionary to store string patterns that have already been encountered. Repeated patterns are encoded using indices into this dictionary. The encoder reads the input string and looks up repeated patterns in the dictionary; when a new word is encountered, it is sent to the output in uncompressed form. LZW is used in the Graphics Interchange Format (GIF). This technique works well for large data sets that contain repeated information.

[Fig. 4 Example of LZW coding: an input stream of codes (7 8 8 1 4 12 12 36 8 8 8) in which repeated patterns are replaced by unique dictionary codes A1-A4.]
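The dictionary growth described above can be sketched as follows. This byte-oriented version is a minimal illustration; GIF implementations additionally use variable-width codes and clear codes.

```python
def lzw_encode(data):
    """LZW: grow a dictionary of seen strings; emit dictionary indices."""
    # Start with single-character entries (all 256 byte values).
    table = {chr(i): i for i in range(256)}
    current, out = "", []
    for ch in data:
        if current + ch in table:
            current += ch                      # extend the current match
        else:
            out.append(table[current])         # emit longest known match
            table[current + ch] = len(table)   # register the new pattern
            current = ch
    if current:
        out.append(table[current])
    return out

def lzw_decode(codes):
    table = {i: chr(i) for i in range(256)}
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        # Special case: the code may refer to the entry being built right now.
        entry = table[code] if code in table else prev + prev[0]
        out.append(entry)
        table[len(table)] = prev + entry[0]
        prev = entry
    return "".join(out)

msg = "ABABABA"
assert lzw_decode(lzw_encode(msg)) == msg
```

Note that the decoder rebuilds the same dictionary from the code stream, so the table itself never needs to be transmitted.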

2.4 RLE.[9]

Run length encoding is the simplest dictionary-based data compression technique. Image files frequently contain the same value repeated many times in a row. RLE identifies runs of identical pixel values and encodes the image as a sequence of runs; each row of the image is written as a sequence, with the length of each run of (for example) black or white pixels recorded.

Input: E E E L L L L L A A

Encoded: E 3 L 5 A 2

[Fig 5 Example of Run Length Encoding]
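The run extraction shown in Fig. 5 can be sketched directly:

```python
def rle_encode(pixels):
    """Run-length encode a sequence as (value, run length) pairs."""
    if not pixels:
        return []
    runs, current, count = [], pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1                 # same value: extend the run
        else:
            runs.append((current, count))
            current, count = p, 1      # new value: start a new run
    runs.append((current, count))
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

row = list("EEELLLLLAA")
encoded = rle_encode(row)
assert encoded == [("E", 3), ("L", 5), ("A", 2)]
assert rle_decode(encoded) == row
```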

© IJMSET-Advanced Scientific Research Forum (ASRF), All Rights Reserved “IJMSET promotes research nature, Research nature enriches the world’s future”

89 Ilam Parithi.T et. al. /International Journal of Modern Sciences and Engineering Technology (IJMSET) ISSN 2349-3755; Available at https://www.ijmset.com Volume 2, Issue 4, 2015, pp.86-94

2.5 DPCM.[14]

DPCM stands for Differential Pulse Code Modulation. It is a predictive compression technique in which a prediction of the pixel being encoded is subtracted from the actual pixel to create a difference value, which is then transmitted. DPCM can operate with or without a quantizer. In DPCM without a quantizer the reconstructed signal exactly matches the original signal, so it is a lossless technique. The main drawback of this variant is that the compression ratio is poor.

[Fig. 6 DPCM without Quantizer: on the transmitting side, the predictor's output is subtracted from the input S to form the error e, which is passed through the entropy encoder onto the channel; on the receiving side, the entropy decoder recovers e, which is added to the predictor's output to reconstruct the output R.]

Algorithm.

Step 1: Initialize the predictor.

Step 2: Select the first element in the matrix.

Step 3: Compute the error in transmission side.

Step 4: Transfer the error signal.

Step 5: Reconstruct the first value in the receiver side.

Step 6: Repeat step 2 to 5 for remaining pixels of the image.
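The steps above can be sketched with the simplest possible predictor, the previous pixel; real DPCM systems use more elaborate predictors, but the lossless round trip is the same.

```python
def dpcm_encode(pixels):
    """Lossless DPCM: transmit the first pixel, then prediction errors.
    Predictor used here (an illustrative choice): the previous pixel."""
    errors, prev = [pixels[0]], pixels[0]
    for p in pixels[1:]:
        errors.append(p - prev)   # error = actual - predicted
        prev = p
    return errors

def dpcm_decode(errors):
    out = [errors[0]]
    for e in errors[1:]:
        out.append(out[-1] + e)   # reconstruct = predicted + error
    return out

row = [52, 55, 61, 66, 70, 61, 64]
assert dpcm_decode(dpcm_encode(row)) == row
```

Because no quantizer touches the errors, reconstruction is exact; the compression gain comes from the errors being small and therefore cheap to entropy code.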


2.6 Lossless JPEG [13].

Lossless JPEG was developed as a late addition to JPEG in 1993, using a completely different technique from the lossy JPEG standard. It can perform lossless compression as long as the image size is a multiple of the MCU (Minimum Coded Unit). It uses a predictive scheme based on the three nearest neighbors (upper, left and upper-left), and entropy coding is applied to the prediction error. It uses a simple predictive algorithm and Huffman coding to encode the prediction difference. This technique is rarely used, since its compression ratio is very low compared to the lossy modes, but it is very useful for medical image compression, where loss of information cannot be tolerated.[15] The main steps of the lossless operation mode are depicted in Fig. 7.

[Fig. 7 Block diagram of lossless JPEG compression: source image data passes through a Predictor, and the prediction residual is passed to an Entropy Encoder (lossless encoder with table specification) to produce the compressed image data.]
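The three-neighbor prediction can be sketched as follows. This uses one of the standard's seven predictors (a + b - c, with a = left, b = above, c = upper-left), treats out-of-image neighbors as zero for simplicity, and omits the entropy coding stage.

```python
def predict(img, r, c):
    """Predict pixel (r, c) from its causal neighbors."""
    a = img[r][c - 1] if c > 0 else 0                  # left
    b = img[r - 1][c] if r > 0 else 0                  # above
    d = img[r - 1][c - 1] if r > 0 and c > 0 else 0    # upper-left
    return a + b - d

def residuals(img):
    """Prediction errors; these would go to the entropy encoder."""
    return [[img[r][c] - predict(img, r, c)
             for c in range(len(img[0]))] for r in range(len(img))]

def reconstruct(res):
    """Decoder side: refill the image in raster order, so every
    neighbor used by predict() is already reconstructed."""
    h, w = len(res), len(res[0])
    img = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            img[r][c] = res[r][c] + predict(img, r, c)
    return img

img = [[10, 12, 13], [11, 14, 15], [12, 15, 18]]
assert reconstruct(residuals(img)) == img
```

In smooth image regions the residuals cluster near zero, which is what makes the subsequent Huffman stage effective.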

2.6.1 JPEG-LS [13].

JPEG-LS is a lossless/near-lossless compression standard for continuous-tone images. Its official designation is ISO-14495-1/ITU-T.87. It is a simple and efficient baseline algorithm consisting of two independent and distinct stages, modeling and encoding. JPEG-LS was developed with the aim of providing a low-complexity lossless and near-lossless image compression standard that could offer better compression efficiency than lossless JPEG, and it is especially suited for low-complexity hardware implementations. It was developed because, at the time, the Huffman-coding-based JPEG lossless standard and other standards were limited in their compression performance. The core of JPEG-LS is based on the LOCO-I algorithm, which relies on prediction, residual modeling and context-based coding of the residuals. Most of the low complexity of this technique comes from the assumption that prediction residuals follow a two-sided geometric distribution and from the use of Golomb-like codes, which are known to be approximately optimal for geometric distributions.
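As an illustration of the Golomb-like codes mentioned above, here is a Golomb-Rice coder (a Golomb code whose divisor is a power of two). JPEG-LS itself additionally maps signed residuals to non-negative integers and adapts the parameter k per context; this sketch shows only the code itself.

```python
def rice_encode(n, k):
    """Golomb-Rice code for n >= 0: unary quotient (n >> k) of 1s,
    a 0 terminator, then the k-bit binary remainder. Small values get
    short codes, which suits geometrically distributed residuals."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, f"0{k}b") if k else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    q = bits.index("0")                            # count leading 1s
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0  # k-bit remainder
    return (q << k) | r

# With k = 2: 5 -> quotient 1, remainder 1 -> "1" + "0" + "01"
assert rice_encode(5, 2) == "1001"
assert rice_decode("1001", 2) == 5
```

The choice of k trades off the unary and binary parts: larger k shortens codes for large residuals at the cost of a longer fixed part, which is why JPEG-LS tunes k to the local context.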



Table 1. Merits and Demerits of different Lossless Compression techniques.

1. Arithmetic Coding
Merits: It keeps the coding and the modeller separate [12]. It works with fractional values. Code trees need not be transmitted to the receiver [12].
Demerits: Its computation is complex because it involves multiple operations [12]. Precision is a big issue [12]. It is much slower than Huffman coding.

2. Huffman Encoding
Merits: Simple coding of characters. It gives fewer bits to more frequently appearing characters/symbols.
Demerits: The coding tree is not recreated by the decoder on its own. Performance depends on a good estimate of the symbol frequencies; if the estimate is not good, performance is poor.

3. LZW Coding
Merits: A codeword table is built for each file, and the codeword table is recreated during decompression. The LZW technique is used in WinZip and many UNIX-based applications.
Demerits: It is very hard to predict the achievable compression. The LZW algorithm works well only when the input data is sufficiently large and there is sufficient redundancy in the data.

4. Run Length Encoding
Merits: It is a very easy technique to implement and does not require much CPU horsepower.
Demerits: RLE compression is only efficient with files that contain lots of repetitive data.

5. JPEG-2000
Merits: Compatibility with legacy systems [7]. Reduced cost for storage and maintenance. It was developed mainly for use on the Internet and is better for handling larger images.
Demerits: Compressing data requires lots of CPU horsepower [7].



3. CONCLUSION.

This paper provides a review of different lossless image compression algorithms. Each lossless image compression algorithm excels in some area, but every algorithm also has drawbacks. These drawbacks may be overcome in the future by applying soft computing algorithms to image compression, which could provide better results than the existing algorithms.

REFERENCES

[1] Bhammar M.B., Mehta K., "A survey of various image compression techniques", IJDI-ERET - International Journal of Darashan Institute on Engineering Research & Emerging Technology, Vol. 1, No. 1, 2012.

[2] Melwin Y., Solomon A.S., Nachiappa M.N., "A survey of compression techniques", International Journal of Recent Technology and Engineering (IJRTE), Volume 2, Issue 1, March 2013.

[3] Kaimal A.B., Manimurugan S., Devadass C.S.C., "Image compression techniques: a survey", International Journal of Engineering Inventions, Volume 2, Issue 4, February 2013.

[4] Kaur M., Kaur G., "A survey of lossless and lossy image compression techniques", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 2, February 2013.

[5] Somasundaram K., Domnic S., "Modified vector quantization method for image compression", Proceedings of World Academy of Science, Engineering and Technology, Volume 13, May 2006.

[6] Kashyap N., Singh S.N., "Review of image compression and comparison of its algorithms", International Journal of Application or Innovation in Engineering & Management (IJAIEM), Volume 2, Issue 12, December 2013.

[7] JPEG-2000 compression, available at http://www.prepressure.com/library/compressionalgorithm/jpeg-2000

[8] Khobragade P.B., Thakare S.S., "Image compression techniques: a review", International Journal of Computer Science and Information Technologies, Vol. 5(1), 2014.

[9] RLE compression, available at http://www.prepressure.com/library/compressionalgorithm/rle

[10] Huffman coding, available at http://cs.gettysburg.edu/~skim/cs216/lectures/huffman.pdf

[11] Chapter 7, Lossless Compression Algorithms, available at http://www.course.sdu.edu.cn/download/12c34ecb-6cbf-46d7-af99-982aaf6bf620.pdf

[12] Iombo C., "Predictive data compression using adaptive arithmetic coding", Thesis, available at http://etd.lsu.edu/docs/available/etd-07032007-100117/unrestricted/Iombo_thesis.pdf

[13] Lossless JPEG, available at http://en.wikipedia.org/wiki/Lossless_JPEG

[14] Annadurai S., Shanmugalakshmi R., Fundamentals of Digital Image Processing, Pearson Education, 2011, ISBN: 978-81-775-8479-0.

[15] Jayaraman S., Esakkirajan S., Veerakumar T., Digital Image Processing, Tata McGraw Hill, 2009, ISBN: 978-0-07-014479-8.


AUTHOR'S BRIEF BIOGRAPHY:

T.Ilam Parithi: He is currently pursuing his research in the Department of Computer Science & Engineering, Manonmaniam Sundaranar University, Tirunelveli. He completed his M.Phil in Computer Science from Manonmaniam Sundaranar University, and MCA from D.G Vaishnav College, Madras University, Chennai. His research interests include Image Compression, Cryptography and Neural Networks.


Dr.R.Balasubramanian: He is currently working as a Professor in the Department of Computer Science & Engineering, Manonmaniam Sundaranar University, Tirunelveli. He received his B.E [Hons] degree in Computer Science & Engineering from Bharathidhasan University and his M.E degree in Computer Science & Engineering from Regional Engineering College, Trichy, Bharathidhasan University. He received his Doctorate in Computer Science & Engineering from Manonmaniam Sundaranar University, Tirunelveli. He has published papers in many national and international journals and conferences. His research interests are in the fields of Digital Image Processing, Data Mining and Wireless Networks.

