IJSTE - International Journal of Science Technology & Engineering | Volume 3 | Issue 12 | June 2017 | ISSN (online): 2349-784X

Analysis of Image Compression Algorithm using DCT

Aman George
Student
Department of Information Technology
SSTC, SSGI, FET Junwani, Bhilai - 490020, Chhattisgarh, India

Ashish Sahu
Assistant Professor
Department of Information Technology
Swami Vivekananda Technical University, Durg, C.G., India

Abstract

Image compression means reducing the size of a graphics file without compromising its quality. Depending on whether the reconstructed image must be exactly the same as the original or whether some loss may be tolerated, two classes of compression techniques exist: lossless and lossy. This paper presents a DCT implementation, which belongs to the lossy class. Image compression is the application of data compression to digital images. The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components. The image compression algorithm was implemented in MATLAB code and modified to perform better when implemented in a hardware description language. The IMAP and IMAQ blocks of MATLAB were used to analyse and study the results of image compression using the DCT, and varying coefficient subsets were used to show the resulting image and the error image relative to the original. The original image is transformed in 8-by-8 blocks and then inverse transformed in 8-by-8 blocks to create the reconstructed image. Image compression is especially used where tolerable degradation is acceptable. This paper is a survey of lossy image compression using the Discrete Cosine Transform; it covers the JPEG compression algorithm, which is used for full-colour still image applications, and describes all of its components.
Keywords: DCT (Discrete Cosine Transform), Image Compression, JPEG, IMAP, IMAQ

I. INTRODUCTION

By entering the digital age, the world has faced a vast amount of information, and dealing with it can often result in many difficulties. We must store, retrieve, analyse and process digital information in an efficient way for it to be put to practical use. In the past decade many aspects of digital technology have been developed, specifically in the fields of image acquisition, data storage and bitmap printing. Compressing an image is significantly different from compressing raw binary data. Images have certain statistical properties which can be exploited by encoders specifically designed for them, so the result is less than optimal when general-purpose compression programs are used on images. One of many techniques under image processing is image compression. Image compression has many applications and plays an important role in the efficient transmission and storage of images. It aims at reducing redundancy in image data so as to store or transmit only a minimal number of samples, from which a good approximation of the original image can be reconstructed in accordance with human visual perception. An image is essentially a 2-D signal processed by the human visual system. The signal representing an image is usually in analog form; however, for processing, storage and transmission by computer applications, it is converted to digital form. A digital image is basically a 2-D array of pixels. An image is an artefact that depicts or records visual perception. The objective is to achieve a reasonable compression ratio as well as good quality of reproduction of the image with low power consumption. Image compression addresses the problem of reducing the amount of data required to represent a digital image. It is a process intended to yield a compact representation of an image, thereby reducing the image storage/transmission requirements. Compression is achieved by the removal of one or more of the three basic data redundancies:
- Coding redundancy
- Inter-pixel redundancy
- Psycho-visual redundancy

II. SOME BASIC COMPRESSION METHODS

Principles behind Compression
Image compression addresses the problem of reducing the amount of data required to represent a digital image. Compression is achieved by removing one or more of three basic data redundancies:
- Spatial redundancy, due to correlation between neighbouring pixels.
- Spectral redundancy, due to correlation between different colour planes or spectral bands.
- Psycho-visual redundancy, due to properties of the human visual system.



We find spatial and spectral redundancies when certain spatial and spectral patterns between the pixels and the colour components are common to each other, while psycho-visual redundancy arises from the fact that the human eye is insensitive to certain spatial frequencies. Various techniques can be used to compress images to reduce their storage size. Compression techniques can be categorized in two ways: by whether they are lossy or lossless, and by the coding approach used:
- Lossy compression system - Lossy techniques are used where some of the finer details in the image can be sacrificed to save bandwidth or storage space.
- Lossless compression system - Lossless compression aims at reducing the size of the compressed output without any distortion of the image; the bit-stream after decompression is identical to the original bit-stream.
- Predictive coding - A lossless coding method in which the value of every element in the decoded image is identical to that of the original image; Differential Pulse Code Modulation (DPCM) is an example.
- Transform coding - Transform coding forms an integral part of compression techniques. A reversible linear transform maps the image into a set of coefficients, and the resulting coefficients are then quantized and coded. Much of the early work in this area was done in the discrete cosine transform (DCT) domain, whose energy-compaction behaviour is illustrated in the sketch below.
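To illustrate the energy-compaction property that makes DCT-based transform coding effective, here is a minimal MATLAB sketch (assuming the Image Processing Toolbox for dct2; cameraman.tif is a test image shipped with the toolbox). It transforms one 8 x 8 block and reports how much of the block's energy falls in the 4 x 4 low-frequency corner:

    % Energy compaction of the 2-D DCT on one 8x8 image block.
    % Requires the Image Processing Toolbox (dct2).
    I = im2double(imread('cameraman.tif'));   % built-in test image
    block = I(1:8, 1:8);                      % one 8x8 spatial block
    C = dct2(block);                          % 2-D DCT coefficients

    totalEnergy   = sum(C(:).^2);
    lowFreqEnergy = sum(sum(C(1:4, 1:4).^2)); % 16 of 64 coefficients
    fprintf('Low-frequency energy fraction: %.4f\n', lowFreqEnergy / totalEnergy);

For natural images this fraction is typically close to 1, which is why discarding high-frequency coefficients costs little visual quality.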

Typical Image Coder
Three closely connected components form a typical lossy image compression system: a source encoder, a quantizer, and an entropy encoder.
- Source encoder (or linear transformer) - Aims at decorrelating the input signal by transforming it into a representation in which the set of significant data values is sparse, thereby compacting the information content of the signal into a smaller number of coefficients. A variety of linear transforms have been developed, such as the Discrete Cosine Transform (DCT) and the Discrete Fourier Transform (DFT).
- Quantizer - Reduces the number of bits needed to store the transformed coefficients by reducing their precision. Quantization is performed either on each individual coefficient, i.e. Scalar Quantization (SQ), or on a group of coefficients together, i.e. Vector Quantization (VQ); a worked quantization sketch follows Fig. 1.
- Entropy coder - Removes redundancy by removing repeated bit patterns in the output of the quantizer. The most common entropy coders are Huffman coding, arithmetic coding, Run-Length Encoding (RLE) and the Lempel-Ziv (LZ) algorithm.

Fig. 1: The encoding of image compression system
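As a concrete illustration of the quantizer stage, the following MATLAB sketch quantizes the DCT coefficients of one 8 x 8 block JPEG-style, by dividing by a quantization table and rounding. The table Q is the standard JPEG luminance quantization table, and the level shift by 128 follows JPEG practice; this is an illustrative sketch, not the full JPEG encoder:

    % JPEG-style scalar quantization of 8x8 DCT coefficients.
    % Requires the Image Processing Toolbox (dct2, idct2).
    I = im2double(imread('cameraman.tif'));
    B = 255 * I(1:8, 1:8) - 128;     % one block, level-shifted to [-128, 127]

    Q = [16 11 10 16  24  40  51  61;    % standard JPEG luminance table
         12 12 14 19  26  58  60  55;
         14 13 16 24  40  57  69  56;
         14 17 22 29  51  87  80  62;
         18 22 37 56  68 109 103  77;
         24 35 55 64  81 104 113  92;
         49 64 78 87 103 121 120 101;
         72 92 95 98 112 100 103  99];

    C  = dct2(B);
    Cq = round(C ./ Q);              % quantize: most high-frequency entries become 0
    Br = idct2(Cq .* Q);             % dequantize and inverse transform
    blockRec = (Br + 128) / 255;     % reconstructed block, back in [0, 1]

The integer matrix Cq, dominated by zeros, is what the entropy coder then compresses.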

Performance Criteria in Image Compression
We can estimate performance by applying two essential criteria: the compression ratio (CR) and a quality measurement of the reconstructed image (PSNR).
Compression Ratio
The compression ratio (CR) is the ratio between the original image size n1 and the compressed image size n2:
CR = n1 / n2 (1)
Distortion Measure
Mean Square Error (MSE) is a measure of the distortion in the reconstructed image. For an M x N original image f and its reconstruction f':
MSE = (1 / (M*N)) * ΣΣ [f(i,j) - f'(i,j)]^2 (2)
PSNR
PSNR has been accepted as a widely used quality measurement in the field of image compression. For 8-bit images with peak value 255:
PSNR = 10 * log10 (255^2 / MSE) (3)

Image Compression Processes
Compression of an image is achieved by the removal of one or more of the three basic data redundancies.
Coding Redundancy
Coding redundancy is present when less than optimal code words are used. Exploiting it consists in using variable-length code words selected to match the statistics of the original source, in this case the image itself or a processed version of its pixel values. This type of


coding is always reversible and usually implemented using lookup tables. Examples of image coding schemes that exploit coding redundancy are Huffman codes and the arithmetic coding technique.
Inter-Pixel Redundancy
Inter-pixel redundancy results from correlations between the pixels of an image: neighbouring pixels are not statistically independent. This type of redundancy is sometimes also called spatial redundancy. It can be exploited in several ways, one of which is by predicting a pixel value based on the values of its neighbouring pixels. In order to do so, the original 2-D array of pixels is usually mapped into a different format, e.g., an array of differences between adjacent pixels. If the original image pixels can be reconstructed from the transformed data set, the mapping is said to be reversible.
Psycho-Visual Redundancy
Psycho-visual redundancy is due to data that is ignored by the human visual system (i.e. visually non-essential information). Many experiments on the psycho-physical aspects of human vision have shown that the human eye does not respond with equal sensitivity to all incoming visual information; some pieces of information are more important than others. Most of the image coding algorithms in use today exploit this type of redundancy, such as the Discrete Cosine Transform based algorithm at the heart of the JPEG encoding standard.
Additional Steps used in Image Compression
- Error metrics - Two of the error metrics used to compare the various image compression techniques are the Mean Square Error (MSE) and the Peak Signal to Noise Ratio (PSNR). The MSE is the cumulative squared error between the compressed and the original image, whereas PSNR is a measure of the peak error. A higher value of PSNR is better because it means that the ratio of signal to noise is higher; here the 'signal' is the original image, and the 'noise' is the error in reconstruction.
- Data compression ratio - Data compression ratio, also known as compression power, quantifies the reduction in data-representation size produced by data compression. It is analogous to the physical compression ratio used to measure physical compression of substances, and is defined in the same way, as the ratio between the uncompressed size and the compressed size.
- Mean Square Error (MSE) - Mean square error is a criterion for an estimator: the chosen estimator is the one that minimizes the sum of squared errors due to bias and due to variance. It is the average of the square of the difference between the desired response and the actual system output:

MSE = (1 / (H*W)) * ΣΣ [X(i,j) - Y(i,j)]^2 (4)
where X(i,j) is the original image, Y(i,j) is the approximated version, and H and W are the dimensions of the image.
- Peak Signal to Noise Ratio (PSNR) - The ratio between the maximum possible power of a signal and the power of the corrupting noise. The PSNR is most commonly used as a measure of the quality of reconstruction in image compression. It is most easily defined via the mean squared error between two m x n monochrome images I and K, one of which is considered a noisy approximation of the other; for 8-bit images:
PSNR = 20 * log10 (255 / sqrt (MSE))
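A minimal MATLAB sketch of equations (1)-(4); the noisy image Y here is a synthetic stand-in for a real reconstruction, and the byte counts in the compression-ratio line are hypothetical placeholders (the Image Processing Toolbox functions immse and psnr compute the same two metrics):

    % MSE, PSNR and compression ratio for an original X and reconstruction Y.
    X = double(imread('cameraman.tif'));   % original image, values in [0, 255]
    Y = X + 5 * randn(size(X));            % synthetic stand-in reconstruction

    mseVal  = mean((X(:) - Y(:)).^2);             % equation (4)
    psnrVal = 20 * log10(255 / sqrt(mseVal));     % equation (3), in dB
    cr      = 65536 / 16384;                      % equation (1): hypothetical n1/n2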

Run-Length Encoding (RLE)
RLE stands for Run-Length Encoding. It is a lossless algorithm that only furnishes decent compression ratios for specific types of data. It is a form of data compression in which runs (sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and a count. This is most useful on data that contains many such runs, for example simple graphic images such as icons, line drawings, and animations. On files that do not contain many runs it is not useful and may even increase the file size. RLE compression can be used in the following file formats:
- TIFF files
- PDF files
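A minimal run-length encoder in MATLAB (rle_encode is a hypothetical helper name; it operates on a 1-D vector, e.g. one image row):

    function [vals, counts] = rle_encode(x)
    % RLE_ENCODE Run-length encode a vector: each run of equal values
    % becomes one (value, count) pair. Illustrative sketch, not a built-in.
        x = x(:).';                                % force a row vector
        runEnds = [find(diff(x) ~= 0), numel(x)];  % last index of each run
        vals    = x(runEnds);                      % one value per run
        counts  = diff([0, runEnds]);              % length of each run
    end

    % Example: [v, c] = rle_encode([5 5 5 0 0 7]) returns v = [5 0 7], c = [3 2 1].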

Huffman Coding
The Huffman compression algorithm was invented by David Huffman, formerly a professor at MIT. Huffman compression is a lossless compression algorithm that is well suited to compressing text or program files, which probably explains why it is used so often in compression programs like ZIP or ARJ. Huffman encoding can be further optimized in two different ways:
- Adaptive Huffman coding dynamically changes the code words according to the changing probabilities of the symbols.
- Extended Huffman compression can encode groups of symbols rather than single symbols, and this is crucial for many image applications.
Lossy image compression techniques play a major role in systems having limited transmission bandwidth and storage capacity.
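A small MATLAB sketch of static Huffman coding (this assumes the Communications Toolbox, which provides huffmandict, huffmanenco and huffmandeco; the alphabet and probabilities are made-up illustrative values):

    % Huffman-code a stream of quantized symbols.
    % Requires the Communications Toolbox.
    symbols = [0 1 2 3];                  % hypothetical symbol alphabet
    prob    = [0.5 0.25 0.15 0.1];        % illustrative probabilities (sum to 1)
    dict    = huffmandict(symbols, prob); % build the prefix-free code table

    sig = [0 0 1 0 3 2 0 1];              % example source sequence
    enc = huffmanenco(sig, dict);         % encode to a bit vector
    dec = huffmandeco(enc, dict);         % decode; dec equals sig

Frequent symbols receive short code words, so the encoded length approaches the source entropy.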



III. PROBLEM FORMULATION

The aim is to analyse an image compression algorithm based on the 2-dimensional DCT. According to the properties of the DCT, a constant (DC) signal is transformed into a discrete delta function at zero frequency, so the transform of a uniform block contains only the DC component. The image is transformed in 8 x 8 blocks by applying the DCT in 2 dimensions. A subset of the DCT coefficients is then retained in order to perform the inverse DCT and obtain the reconstructed image. The work to be done is to perform the inverse transform of the transformed image and to generate the error image, giving the results in terms of MSE (Mean Square Error): as MSE increases the image quality degrades, and as MSE decreases the image quality improves, which is controlled by changing the coefficient subset retained for the DCT blocks.
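A minimal MATLAB sketch of this experiment (assuming the Image Processing Toolbox for blockproc, dct2 and idct2; the 4 x 4 low-frequency mask is one illustrative choice of retained coefficient subset):

    % Block-wise DCT experiment: transform in 8x8 blocks, keep a
    % low-frequency coefficient subset, inverse transform, measure error.
    I = im2double(imread('cameraman.tif'));          % test image

    mask = zeros(8);
    mask(1:4, 1:4) = 1;                              % keep 16 of 64 coefficients

    T  = blockproc(I,  [8 8], @(b) dct2(b.data));    % forward 2-D DCT per block
    Tm = blockproc(T,  [8 8], @(b) b.data .* mask);  % discard high frequencies
    R  = blockproc(Tm, [8 8], @(b) idct2(b.data));   % reconstructed image

    E   = I - R;                                     % error image
    mse = mean(E(:).^2);
    fprintf('MSE with a 4x4 coefficient subset: %g\n', mse);

Enlarging or shrinking the mask trades reconstruction quality against the number of coefficients that must be stored.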

IV. WORK IMPLEMENTATION

The proposed method uses the ROI (region of interest) part of the image together with the Discrete Cosine Transform. It is a major challenge to compress a medical image so as to reduce storage space, bandwidth utilization, blocking effects and other disadvantages, so the existing techniques need to be modified to obtain the advantages of these methods. The DCT is used with the ROI technique to compress the medical image while reducing the blocking effect.

Research Plan
First the medical image is segmented into parts, and then the ROI and non-ROI parts of the image are defined, based on which the compression is done. The image is then transmitted to the receiver, where the decompression process is carried out to check the quality of the image with ROI. The plan includes the following steps:
- A medical image of any type (MRI/CT/DICOM) is taken as input for the process.
- The image is segmented into different parts.
- The classification of the image is described.
- Each part is classified as ROI or non-ROI.
- The ROI part is compressed with lossless compression; otherwise lossy compression is used.
- Compression of the image is done.
- The image is transmitted to the desired system.
- The image is received at the receiver side.
- Image decompression is done.
- The correct image is received, with the ROI compressed losslessly.

ROI Selection
Region of interest coding is important in medical applications, where certain parts of the image are of higher diagnostic significance than others. Figure 2 demonstrates the implementation design for selection of the ROI part. The image is first selected from the image folder, and then the interested or required portion is selected. The on-screen cursor is used to select the ROI part of the image: the region of the medical image where the disease occurs is selected by moving the cursor, and this is the part that experts would select in order to examine the affected region. The cursor can be moved in any of the four directions to select the interesting part. A sketch of this step appears after Fig. 2.

Fig. 2: Selection of ROI part of image
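A minimal MATLAB sketch of ROI selection and the ROI/non-ROI split (assuming the Image Processing Toolbox; roipoly lets the user outline a region with the cursor, the input filename is a placeholder, and compressLossy stands in for whichever lossy coder, e.g. the block-DCT pipeline above, is applied to the background):

    % Interactive ROI selection and ROI/non-ROI split.
    I = im2double(imread('mri_slice.png'));  % placeholder medical image file
    figure, imshow(I);
    mask = roipoly;                          % user outlines the ROI with the cursor

    roiPart    = I .* mask;                  % diagnostically important: keep lossless
    background = I .* ~mask;                 % candidate for lossy DCT compression

    % backgroundRec = compressLossy(background);  % hypothetical lossy coder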



Output of ROI
The output of ROI is the selected part of the medical image. Figure 3 shows that the region selected with the moving cursor is displayed as the ROI image. The output shows the original image, and alongside it the ROI part, i.e. the region of interest that was selected for the implementation.

Fig. 3: Output of the ROI part of image

Haar Wavelet Image Processing
The Haar wavelet can be used on both the ROI and non-ROI parts of the image. Figure 4 shows the output of processing the image with the Haar wavelet transform method, reported as total compression and gain compression. The output is given in the form of peak signal to noise ratio and mean square error. The entropies of the input and output images are also shown when the image is compressed or decompressed with the Haar wavelet method. As the PSNR value increases the image quality increases, and as the MSE value increases the image quality decreases. A one-level decomposition sketch follows Fig. 4.

Fig. 4: Haar Wavelet image processing
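A one-level Haar decomposition and reconstruction sketch in MATLAB (assuming the Wavelet Toolbox for dwt2 and idwt2; thresholding the detail subbands is one simple, illustrative way to obtain compression):

    % One-level Haar wavelet analysis/synthesis with detail thresholding.
    % Requires the Wavelet Toolbox.
    I = im2double(imread('cameraman.tif'));
    [cA, cH, cV, cD] = dwt2(I, 'haar');   % approximation + 3 detail subbands

    thr = 0.05;                           % illustrative threshold
    cH(abs(cH) < thr) = 0;                % zero small horizontal details
    cV(abs(cV) < thr) = 0;                % zero small vertical details
    cD(abs(cD) < thr) = 0;                % zero small diagonal details

    R   = idwt2(cA, cH, cV, cD, 'haar');  % reconstructed image
    mse = mean((I(:) - R(:)).^2);         % distortion introduced by thresholding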



Graphical Representation of Output Values

Fig. 5: Graphical representation of output values

Figure 5 shows a graphical representation of the output values of the DCT and Haar wavelet methods. The graph shows that the output values of the DCT method are much better than those of the Haar wavelet method, and that the image quality is also higher in the case of the discrete cosine transform. The DCT has higher values than the other methods, as shown by the values given in the graph. The graph shows four series in different colors, one for each set of values; the first point of each series gives the output of the first column, and so on. A series with higher values indicates higher quality at the selected point, i.e. higher quality of the compressed and decompressed images.

V. CONCLUSION AND FUTURE SCOPE

Image compression is used for managing images in digital format. This survey paper has focused on fast and efficient lossy coding using the JPEG algorithm for image compression/decompression based on the Discrete Cosine Transform. We also briefly introduced the principles behind digital image compression and various image compression methodologies, together with the process steps including DCT, quantization and entropy encoding. Medical imaging has a great impact on the diagnosis of disease and on surgical planning. Imaging devices continue to generate more data per patient, often as large image sets, and these data need long-term storage and efficient transmission, so there is a need to compress medical images. Various methods are used to compress such images; the techniques studied in the literature have several pros and cons, which stem from the methods used to compress the medical images. Imaging greatly helps to represent internal problems of the body in a visual manner, and various medical diagnostic techniques use digital images of the human body as deciding factors for the next medical treatment. New techniques are being developed to compress medical images so that the problems encountered in previous work can be solved. As the usage of images in medical science increases day by day, it remains a major challenge to compress medical images so as to reduce storage space, bandwidth utilization, blocking effects and other disadvantages, and existing techniques need to be modified to obtain these advantages. In future work, image compression will be performed on medical color images with the help of different techniques and morphological operators to obtain better results.
