Adaptive Block-Size Transform Coding for Image Compression
Published in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP'97), vol. 4, pp. 2721-2724, 1997, which should be used for any reference to this work.

Javier Bracamonte*, Michael Ansorge and Fausto Pellandini
Institute of Microtechnology, University of Neuchâtel
Rue A.-L. Breguet 2, CH-2000 Neuchâtel, Switzerland
Email: [email protected]

* His work was supported in part by the Laboratory of Microtechnology (LMT EPFL), common to the Swiss Federal Institute of Technology, Lausanne, and the Institute of Microtechnology, University of Neuchâtel.

ABSTRACT

In this paper we report the results of an adaptive block-size transform coding scheme based on the sequential JPEG algorithm. This minimum-information-overhead method relies on a transform coding technique with two different block sizes: N×N and 2N×2N pixels. The input image is divided into blocks of 2N×2N pixels, and each of these blocks is classified according to its image activity. Depending on this classification, either four N-point or a single 2N-point 2-D DCT is applied to the block. The purpose of the algorithm is to take advantage of large uniform regions, which can be coded as a single large unit instead of four small units, as a fixed block-size scheme would do. For the same reconstruction quality, the results of the adaptive algorithm show a significant improvement of the compression ratio with respect to the non-adaptive scheme.

1. INTRODUCTION

Transform-based image coding algorithms have been the object of intense research during the last twenty years, and they have eventually been selected as the main data compression mechanism in the definition of digital image and video coding standards. For example, JPEG, MPEG-1, MPEG-2, H.261 and H.263 all rely on an algorithm based on the Discrete Cosine Transform (DCT).

A transform-based image coding method involves subdividing the original image into smaller N×N blocks of pixels and applying a unitary transform, such as the DCT, to each of these blocks. A plethora of methods have been proposed regarding the different kinds of processing executed after the transform operation. In general, once the value of N has been selected for a particular algorithm, it remains fixed. In JPEG, for instance, the value of N is 8, and thus the input image is divided into blocks of 8×8 pixels exclusively.

Several adaptive transform-based methods are described in [1, 2, 3]. Most of these methods are fixed block-size schemes in which an adaptive quantization of the transform coefficients is performed. A variable block-size scheme has been reported in [4], which its authors claim to be one of the first departures from traditional fixed block-size schemes toward variable block sizes; in their approach the transform coefficients are processed by vector quantization.

In this paper an adaptive block-size transform coding scheme based on the JPEG algorithm is reported, and results are presented showing the improvement of the compression efficiency with respect to the non-adaptive scheme.

The remainder of this paper is organized as follows. The adaptive scheme is described in Section 2. The selection of the classification parameters is discussed in Section 3. Computational complexity issues are analyzed in Section 4. The performance of the algorithm is reported in Section 5, and finally the conclusions are stated in Section 6.

2. ADAPTIVE BLOCK-SIZE SCHEME

A block diagram of the adaptive block-size scheme is shown in Figure 1. It is based on the sequential JPEG algorithm [5] and is described in the following paragraphs.

[Figure 1: Adaptive JPEG-based algorithm. Pipeline: block classification of 16×16 B blocks; adaptive 8- or 16-point 2-D DCT; quantizer; DC-DPCM and run-length coding; Huffman coder producing header and compressed image data.]

The input image is divided into blocks of 16×16 pixels (referred to as B blocks in this paper). On each of these blocks a metric is evaluated in order to determine its degree of image activity. If the resulting measure is below a predefined threshold, the block is classified as a 0 block; otherwise, it is classified as a 1 block.

An adaptive DCT module receives the B blocks along with their classification bit. The DCT module executes a 16-point 2-D DCT on those B blocks that have been classified as 0 blocks. The B blocks classified as 1 blocks are further divided into four subblocks of 8×8 pixels each, and an 8-point 2-D DCT is applied to each of the four subblocks. The 2-D DCT coefficients are quantized and then zigzag reordered in order to lengthen the runs of zero-valued coefficients. The DC coefficients are coded with a DPCM scheme, whereas the AC coefficients are run-length coded. Finally, the resulting symbols are Huffman coded.

The purpose of the adaptive scheme is to take advantage of large uniform regions within the image to be coded: instead of coding such a 16×16-pixel region as four blocks of 8×8 pixels (as a fixed block-size scheme would do), the algorithm codes it as a single block of 16×16 pixels.

Before describing the selection of the classification parameters, it is useful to remark that two different types of applications are possible for the adaptive scheme: either it is used to code natural still images, or it is embedded in a video coding system in order to code prediction error frames. The need for an application-oriented definition comes from the fact that natural still images and prediction error frames are image data of different structure. The classification algorithm and its corresponding parameters can then be better selected to match the input data, in search of a higher coding efficiency. In this paper, we report the results of addressing the video application.
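The classification-plus-transform stage can be summarized in a few lines of code. The following is a minimal sketch, not the authors' implementation: the standard-deviation metric and the threshold of 3.5 anticipate Section 3, and the use of scipy.fft.dctn for the 2-D DCT is an assumption made for illustration.

```python
import numpy as np
from scipy.fft import dctn  # separable 2-D DCT-II with orthonormal scaling

THRESHOLD = 3.5  # standard-deviation threshold (see Section 3)

def classify_block(block: np.ndarray) -> int:
    """Return 0 for a low-activity (uniform) block, 1 for a high-activity one."""
    return 0 if np.std(block) < THRESHOLD else 1

def adaptive_dct(image: np.ndarray):
    """Split the image into 16x16 B blocks and transform each one as either
    a single 16-point 2-D DCT (0 block) or four 8-point 2-D DCTs (1 block)."""
    h, w = image.shape
    assert h % 16 == 0 and w % 16 == 0  # toy sketch: no border handling
    coded = []
    for y in range(0, h, 16):
        for x in range(0, w, 16):
            b = image[y:y + 16, x:x + 16].astype(np.float64)
            bit = classify_block(b)
            if bit == 0:
                coeffs = [dctn(b, norm='ortho')]               # one 16x16 DCT
            else:
                coeffs = [dctn(b[i:i + 8, j:j + 8], norm='ortho')
                          for i in (0, 8) for j in (0, 8)]     # four 8x8 DCTs
            coded.append((bit, coeffs))  # one classification bit per B block
    return coded
```

For each B block, the only side information that must reach the decoder is the single classification bit.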
3. SELECTION OF THE CLASSIFICATION PARAMETERS

One sensitive drawback of most adaptive image coding algorithms is the generation of overhead information. In general, the higher the degree of adaptability of a coding algorithm, the larger the amount of overhead data that needs to be sent to the decoder. In an adaptive block-size scheme, the overhead information is closely related to the size of the blocks and to the number and segmentation structure of the different classes.

For our adaptive scheme, it was found that for coding prediction error frames, a two-class scheme with two different transform block sizes and a largest block size of 16×16 pixels was a good compromise between minimum information overhead (one classification bit per 16×16 block, i.e., 1/256 ≈ 0.004 bit/pixel) and good compression efficiency.

The standard deviation was used as the metric to classify the B blocks. Experimentally, it was determined that setting the classification threshold at a value of 3.5 leads to decoded frames of the same quality as those obtained by the non-adaptive scheme.

4. COMPUTATIONAL COMPLEXITY

By analyzing the spectrum of the transform of the 0 blocks, it was noticed that only the first four low-frequency coefficients have a significant magnitude. The heavy computational load of a 16-point 2-D DCT is then reduced considerably, since only these four coefficients need to be calculated. Thus, for each classified 0 block, a saving of about 85% of the operations is achieved with respect to the non-adaptive scheme. Since the proportion of selected 0 blocks is typically high (e.g., an average of 56% and 78% for the first 150 prediction error frames of Miss America and Claire, respectively), the global reduction of operations, taking into account the computational overhead of the classification stage, is 38% for Miss America and 55% for Claire. This important saving of operations is even higher at the decoder, where the classification overhead is not present.
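Since only the four lowest-frequency coefficients of a 0 block are retained, the 16-point 2-D DCT reduces to four inner products against separable basis functions. The paper does not give an implementation; the direct basis-projection formulation below is a sketch written under that assumption.

```python
import numpy as np

N = 16  # block size of a 0 block

def dct_basis(u: int, n: int = N) -> np.ndarray:
    """Orthonormal 1-D DCT-II basis vector of frequency u and length n."""
    k = np.arange(n)
    alpha = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
    return alpha * np.cos((2 * k + 1) * u * np.pi / (2 * n))

def pruned_dct_16(block: np.ndarray) -> np.ndarray:
    """2-D DCT of a 16x16 0 block, computing only the four low-frequency
    coefficients (u, v) in {0, 1} x {0, 1}; all others are left at zero."""
    coeffs = np.zeros((N, N))
    for u in (0, 1):
        bu = dct_basis(u)
        for v in (0, 1):
            bv = dct_basis(v)
            # C(u, v) is the projection of the block onto the basis b_u b_v^T
            coeffs[u, v] = bu @ block @ bv
    return coeffs
```

Computing four projections instead of all 256 coefficients is what makes the roughly 85% per-block saving quoted above plausible; the exact figure depends on the fast DCT algorithm used in the non-adaptive baseline.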
5. RESULTS

The success of the adaptive algorithm in improving the compression ratio depends on the number of 0 blocks that are selected during the classification stage: the higher the number of selected 0 blocks, the higher the improvement of the compression ratio with respect to the non-adaptive scheme.

Coding the image Lena (for high reconstruction quality) with the proposed scheme gives only a modest improvement of the compression ratio. This is due to the small number of 0 blocks selected by the classification algorithm, as shown in Figure 2. This was nevertheless the expected result, since the classification parameters were not chosen to match the statistics of natural still images.

[Figure 2: Lena: block classification.]

[...] It is expected that by using a motion compensation technique the compression ratio will be further improved, since a better prediction would increase the percentage of selected 0 blocks in the prediction error frames.

6. CONCLUSIONS

An adaptive block-size transform coding scheme for image compression was presented. The method features a minimum information overhead, a significant reduction of the computational complexity, and a coding efficiency that largely outperforms its non-adaptive counterpart. A combination of all these advantages may contribute to the development of effective solutions in various application fields, in particular for low-power portable video communication systems.

7. ACKNOWLEDGEMENTS

This work was supported by the Swiss National Science Foundation.