Adaptive Colour Decorrelation for Predictive Image Codecs
François Pasteau, Clément Strauss, Marie Babel, Olivier Déforges, Laurent Bédat


To cite this version: François Pasteau, Clément Strauss, Marie Babel, Olivier Déforges, Laurent Bédat. Adaptive colour decorrelation for predictive image codecs. European Signal Processing Conference, EUSIPCO, Aug 2011, Barcelona, Spain. pp. 1-5. HAL Id: hal-00600354, https://hal.archives-ouvertes.fr/hal-00600354, submitted on 14 Jun 2011.

IETR/Image group Lab, CNRS UMR 6164 / INSA Rennes
20, avenue des Buttes de Coësmes, 35043 RENNES Cedex, France
{fpasteau, cstrauss, mbabel, odeforge, lbedat}@[...]

ABSTRACT

When considering colour images and, more generally, multi-component images, state-of-the-art image codecs usually achieve component decorrelation through static colour transforms such as YUV or YCoCg. This approach leads to suboptimal results, as the statistics of the image are not taken into account. The new approach proposed here removes the correlation of one component with respect to another adaptively, during the prediction process of an image codec. Through two jointly used processes, one choosing the best predictor for a component and the other improving that predictor's effectiveness, the new approach improves both image quality and compression ratio. The technique has been applied to the LAR codec and shows an improvement over previous studies of up to 20% in rate and 0.5 dB in PSNR at low bit rates.

1. INTRODUCTION

Despite the general use of colour images, development of image codecs such as JPEG, JPEG 2000 or the newly standardized JPEG XR has been primarily focused on giving the best performance on one-component images. To handle colour images, state-of-the-art codecs usually rely on colour transforms such as YUV or YCgCo and/or on subsampling to achieve both a good compression ratio and good visual quality. However, after the colour transform and/or subsampling, each component is encoded independently, without using the residual correlation that still exists between components. In a previous study [1], we showed that using the RGB colour space with adaptive decorrelation and classification leads to better results than using a static colour transform such as YUV or YCgCo.

When considering lossless coding, coding techniques rely on statistical analyses of the image to perform compression. As subsampling would cause losses, only reversible colour transforms can be used. However, even after applying static colour transforms, residual correlation still exists between components [1]. This underlying correlation results in a suboptimal compression rate, as decorrelation has been performed statically, without consideration of the image itself. A critical application of lossless coding of colour images concerns cultural digital libraries [2]. Museums are trying to digitize their holdings safely and thus produce large quantities of lossless colour pictures. Digital cameras are widespread and generate high-resolution colour images, and professional photographers tend to prefer lossless compression of their pictures to avoid compression artifacts.

In this paper we present a new method for predictive codecs such as LAR, JPEG-LS or H.264 that improves both the decorrelation and the prediction processes for colour images. Being completely reversible, the new method can be applied to both lossless and lossy coding.

The method can be decomposed into two processes. First, adaptive decorrelation performed during the prediction process aims at removing the residual correlation. A previous study [1] showed a good improvement in compression rate when applying such decorrelation after the prediction process; performing the adaptive decorrelation during the prediction process instead improves the effectiveness of the predictor and therefore leads to both a better compression rate and better quality. Second, inter-component prediction aims at choosing the best-fitted predictor for a component using information from another, previously encoded component of the image.

The paper is organised as follows. The following section introduces the concept of inter-component prediction. In Section 3, adaptive decorrelation performed during prediction is explained. In Section 4, inter-component prediction and adaptive decorrelation are used jointly. Finally, an application of these two processes to the LAR codec is shown.
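For context on the static baseline the paper argues against: reversible colour transforms of this kind are usually implemented as integer lifting steps, so that the original RGB values can be recovered exactly, which is what makes them usable for lossless coding. Below is a minimal background sketch of one such transform, the YCoCg-R lifting transform known from the literature; it is not part of the method proposed in this paper, and the function names are illustrative.

```python
def rgb_to_ycocg_r(r: int, g: int, b: int) -> tuple[int, int, int]:
    """Forward YCoCg-R lifting steps (integer, exactly reversible)."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg


def ycocg_r_to_rgb(y: int, co: int, cg: int) -> tuple[int, int, int]:
    """Inverse lifting steps, applied in reverse order."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b


# Round trip on one pixel: the lifting structure makes the transform lossless.
assert ycocg_r_to_rgb(*rgb_to_ycocg_r(120, 200, 35)) == (120, 200, 35)
```

Whatever its average coding gain, such a transform is fixed in advance; the residual inter-component correlation it leaves behind depends on the image, which is what motivates the adaptive scheme of the following sections.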
2. INTER-COMPONENT PREDICTION

In this section we present an approach aiming at improving the prediction of a component using the best predictor of another, previously encoded component. In order to present the algorithms and figures, the following notations are defined:

let Y be the first component of the image
let C be one of the other components of the image
let Y_i be the i-th value of the Y component of the image
let Ŷ_i^j be the prediction of Y_i through predictor j
let Ỹ_i^j be the reconstruction of Y_i through predictor j
let a_C be the decorrelation factor for the C component
let x′ be x with adaptive decorrelation applied
let modeprediction(Y_i, j) be the process returning Ŷ_i^j
let entropycoder(x) be the entropy coding of x
let Q(x) return the value of x after quantization
let Q⁻¹(x) return the value of x after inverse quantization
let [x]_Q be Q⁻¹(Q(x))
let sign(x) = −1 if x < 0, 1 otherwise
let |x| be the absolute value of x

By applying different predictors from a predictor set to Y, the predictor best fitted to Y can be selected; this predictor can then be used on component C to obtain a better prediction. Procedure 1 describes the search for the best predictor. The process corresponds to a minimization of the distance between Ŷ_i^j and Ỹ_i^0 according to the L1 norm.

Procedure 1 BestPrediction
Input: Y_i, Ỹ_i^0
Output: bp
 1: best = +inf
 2: bp = 0
 3: for all j do
 4:   Ŷ_i^j = modeprediction(Y_i, j)
 5:   if |Ŷ_i^j − Ỹ_i^0| < best then
 6:     best = |Ŷ_i^j − Ỹ_i^0|
 7:     bp = j
 8:   end if
 9: end for
10: {return the best predictor according to Y}
11: return bp

Predictor 0 corresponds to the predictor used on component Y. The effectiveness of this method depends strongly on the number of predictors and on their individual effectiveness; to keep the computational cost low, the number of predictors should be kept small. Such a process already exists in the literature, for example in JPEG-LS [3][4]. The main advantage here is that the selection of the best predictor does not need to be transmitted to the decoder, since the decoder can execute the same process; inter-component prediction is therefore cost free. As it only involves a modification of the predictors, inter-component prediction is also lossless.

As inter-component prediction aims at choosing the right predictor for a component, another approach is to improve the effectiveness of the predictor itself by removing the residual correlation between Y and C. This approach is described in the next section.
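A minimal Python sketch of Procedure 1, under the assumption that the codec exposes the candidate predictions Ŷ_i^j of the current sample as a simple sequence; the function name and interface are illustrative, not taken from the LAR code base.

```python
def best_prediction(y_hat_candidates, y_rec_0):
    """Procedure 1 (BestPrediction): return the index bp of the predictor
    minimizing |Y_hat_i^j - Y_tilde_i^0| (L1 distance) for the current sample.

    y_hat_candidates -- candidate predictions Y_hat_i^j of Y_i, indexed by predictor j
    y_rec_0          -- reconstruction Y_tilde_i^0 of Y_i obtained with predictor 0
    """
    best = float("inf")
    bp = 0
    for j, y_hat_j in enumerate(y_hat_candidates):
        dist = abs(y_hat_j - y_rec_0)
        if dist < best:
            best = dist
            bp = j
    return bp  # best predictor according to Y, reused to predict C_i
```

Because Ỹ_i^0 and every Ŷ_i^j are computed from data already available at the decoder, both sides can run this loop and obtain the same bp, which is why no predictor index has to be signalled.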
3. ADAPTIVE DECORRELATION DURING PREDICTION PROCESS

In a previous study devoted to inter-component decorrelation [1], we performed decorrelation as a process independent of the prediction scheme. By doing so, only the compression ratio is improved, as the extra information given by the decorrelation process is not used to improve the prediction itself. By performing adaptive decorrelation during the prediction process instead, both compression ratio and quality can be improved in a single pass. Figure 1 represents the functional implementation of such a technique. The algorithm used to [...]

Algorithm 2 Adaptive Decorrelation Algorithm
 1: {Initialization}
 2: cpt = 0
 3: N_C = 0
 4: D_C = 0
 5: a_C = 0
 6: for all i do
 7:   Ŷ′_i = modeprediction(Y_i, 0)
 8:   Ĉ′_i = modeprediction(C_i, 0)
 9:   Ỹ′_i = Ŷ′_i + [Ŷ′_i − Y_i]_Q
10:
11:   {Adaptive decorrelation on predictor}
12:   Ĉ″_i = Ĉ′_i + a_C × [Ŷ′_i − Ỹ′_i]_Q
13:   C̃″_i = Ĉ″_i + [Ĉ″_i − C_i]_Q
14:
15:   {a_C update}
16:   if D_C > 0 or [Ŷ′_i − Ỹ′_i]_Q ≠ 0 then
17:     N_C = N_C + (C̃″_i − Ĉ′_i) × sign([Ŷ′_i − Ỹ′_i]_Q)
18:     D_C = D_C + |[Ŷ′_i − Ỹ′_i]_Q|
19:     a_C = N_C / D_C
20:     cpt = cpt + 1
21:
22:     {Ensure local adaptation}
23:     if cpt > 1000 then
24:       N_C = N_C / 4
25:       D_C = D_C / 4
26:       cpt = 0
27:     end if
28:   end if
29:   entropycoder(Q(Ŷ′_i − Ỹ′_i))
30:   entropycoder(Q(Ĉ″_i − C̃″_i))
31: end for

[...] predictions after decorrelation Ĉ″_i, to ensure an accurate update of a_C. To avoid D_C = 0, we perform a point reflection of the points of coordinates {(x, y) | y < 0} around the origin (0, 0); therefore only the absolute value of [Ŷ′_i − Y_i]_Q is taken into consideration in D_C. To ensure a locally adaptive decorrelation, we divide the numerator and denominator of a_C after cpt iterations; the number of iterations and the divisor applied to the numerator and denominator have been evaluated empirically. As adaptive decorrelation can be realised at both the encoder and the decoder side, the process itself is cost free.
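A minimal Python sketch of the a_C update loop of Algorithm 2, kept per component C. The excerpt does not specify the quantizer or the codec interface, so the uniform quantizer and the class interface below are illustrative assumptions, not the LAR implementation.

```python
def q_round_trip(x: float, q: int = 1) -> float:
    """[x]_Q = Q^-1(Q(x)) for an assumed uniform quantizer of step q."""
    return int(x / q) * q  # Q(x) truncates toward zero, Q^-1 rescales


class AdaptiveDecorrelator:
    """Running estimate of the decorrelation factor a_C (Algorithm 2, lines 15-28)."""

    def __init__(self) -> None:
        self.n_c = 0.0   # N_C, numerator
        self.d_c = 0.0   # D_C, denominator
        self.a_c = 0.0   # a_C, current decorrelation factor
        self.cpt = 0     # samples since the last rescaling

    def decorrelate(self, c_hat: float, y_res_q: float) -> float:
        """Line 12: C_hat''_i = C_hat'_i + a_C * [Y_hat'_i - Y_tilde'_i]_Q."""
        return c_hat + self.a_c * y_res_q

    def update(self, c_rec: float, c_hat: float, y_res_q: float) -> None:
        """Lines 16-28: update N_C, D_C and a_C from the reconstructed C value
        (c_rec = C_tilde''_i), the plain C predictor (c_hat = C_hat'_i) and
        the quantized Y prediction error (y_res_q = [Y_hat'_i - Y_tilde'_i]_Q)."""
        if self.d_c > 0 or y_res_q != 0:
            sign = -1 if y_res_q < 0 else 1
            self.n_c += (c_rec - c_hat) * sign
            self.d_c += abs(y_res_q)
            self.a_c = self.n_c / self.d_c
            self.cpt += 1
            if self.cpt > 1000:      # rescale to stay locally adaptive
                self.n_c /= 4
                self.d_c /= 4
                self.cpt = 0
```

Driving the same object with the same decoded values on both sides keeps encoder and decoder in sync, which is why the adaptation itself requires no side information.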