Aware's JPEG 2000 Implementation


WHITE PAPER: JPEG 2000 for Medical Imaging (05.13)

Copyright ©2013 Aware, Inc. All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of Aware, Inc. This document is for information purposes only and is subject to change without notice. Aware, Inc. assumes no responsibility for the accuracy of the information. AWARE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS DOCUMENT. "Aware" is a registered trademark of Aware, Inc. Other company and brand, product and service names are trademarks, service marks, registered trademarks or registered service marks of their respective holders. (WP_JPEG2000Med_0513_v01)

AWARE'S JPEG 2000 IMPLEMENTATION

The healthcare industry requires modern, open, standards-based image compression techniques optimized for medical data and for the workflows implemented by the complex medical systems that acquire and manage digital medical imagery. These requirements led to the adoption of JPEG 2000, a wavelet compression standard (ISO/IEC 15444-1), and to its inclusion as a valid compression method in DICOM.

Reference implementations of the JPEG 2000 ISO standard, or baseline implementations, are designed to address a sampled cross section of encoding and decoding options and to provide a reference for how the standard can be used. However, these reference implementations are not optimized, and their feature sets are not flexible enough for specialized medical applications.

Aware, Inc. was the first commercial organization to provide wavelet-based compression algorithms as commercial off-the-shelf libraries designed for integration into larger systems. Twenty years of R&D combined with a strong commercial focus have kept Aware at the forefront of the wavelet-based image and data compression market. Today, wavelet compression techniques are ubiquitous across the spectrum of medical imaging systems that require compression and streaming.

Aware's initial implementation of the ISO/IEC 15444-1 JPEG 2000 standard was designed and implemented by a team of specialists with more than a decade of experience providing compression/decompression, image viewing and image processing software for medical OEMs. Incorporating almost twenty years of experience, the API brings unprecedented reliability and expertise to users. The API is clean, easy to understand, and easy to implement. Both the encoder and decoder are optimized through a combination of algorithmic and coding techniques that efficiently exploit the design of modern CPUs.

This paper discusses some of the major features of the Aware JPEG 2000 implementation:

- Direct support for DICOM objects
- Compress to quality target
- Fully compliant fixed-point decompression

The paper also introduces features of the JPEG 2000 standard, such as 3D volumetric compression and JPIP, that are applicable to medical applications.

Figure 1 - The Aware JPEG 2000 library includes native support for DICOM input/output. (The diagram shows raw images and JPEG 2000 codestreams being converted to and from single-frame and multi-frame DICOM objects.)

DIRECT SUPPORT FOR DICOM

The Aware JPEG 2000 API fully supports the input and output of single-frame and multi-frame DICOM-formatted data. The API will parse, encode, decode and reformat DICOM data. JPEG 2000 support was added to the DICOM standard in November 2001 with the publication of "Digital Imaging and Communications in Medicine (DICOM) Supplement 61: JPEG 2000 Transfer Syntaxes." As described in Supplement 61, WG 4's motivation was that "…there are both real and perceived limitations" with traditional JPEG, and "…WG 4 began to investigate alternatives, particularly those based on wavelet transformation, multi-resolution analysis and more sophisticated entropy coders than Huffman coding." The adoption of JPEG 2000 as an ISO standard provided DICOM with a timely solution, and both the JPEG 2000 lossless and lossy compression algorithms have been accepted for use.

Aware supports the following transfer syntaxes, enables transcoding between them, and can extract the image data as separate files:

- DICOM with embedded raw image
- DICOM with embedded JPEG 2000
- DICOM with embedded JPEG or Lossless JPEG

Aware's JPEG 2000 library provides full DICOM file parsing, access to the most commonly used tags, and a data dictionary that covers those tags. The software also supports multi-frame DICOM files and all of the image color spaces used within DICOM.
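To make the DICOM handling above concrete, here is a minimal sketch using the open-source pydicom library rather than the Aware API (whose function names are not given in this paper). The file name is a hypothetical placeholder, and decoding JPEG 2000 pixel data assumes a codec plugin such as pylibjpeg or GDCM is installed alongside pydicom.

```python
# Hedged illustration with pydicom (not the Aware API): report a DICOM file's
# transfer syntax and decode embedded JPEG 2000 pixel data into a NumPy array.
import pydicom
from pydicom.uid import JPEG2000, JPEG2000Lossless

ds = pydicom.dcmread("ct_slice.dcm")             # hypothetical input file
ts = ds.file_meta.TransferSyntaxUID
print("Transfer syntax:", ts, "-", ts.name)      # e.g. "JPEG 2000 Image Compression"

if ts in (JPEG2000, JPEG2000Lossless):
    # Requires a JPEG 2000 decoder plugin (e.g. pylibjpeg-openjpeg or GDCM).
    pixels = ds.pixel_array                      # (rows, cols) or (frames, rows, cols)
    print("Decoded pixel array with shape", pixels.shape, "and dtype", pixels.dtype)
```

The same inspection applies to the other embedded encodings listed above (raw, JPEG, Lossless JPEG); only the decoder plugin requirements differ.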
COMPRESS TO TARGET QUALITY

The process of selecting a target compression ratio (e.g. 10:1) or a target file size (e.g. 64 kbytes) does not always correspond to an accurate measure of image quality. A compression ratio that is acceptable for one image type (e.g. CT) may introduce an unacceptable degree of image degradation for another image type (e.g. MR). A useful feature of the Aware JPEG 2000 API is therefore the ability to encode an image to a specified target image quality.

The JPEG 2000 standard includes both lossy and lossless compression techniques. Lossy compression yields smaller data sizes but, depending on the amount of compression, may degrade the image. Lossless compression maintains 100% image integrity but is limited to compression ratios of about 2:1 to 3:1 on average.

The use of lossy compression with medical images therefore requires controls to maintain image quality. A common practice is to select a target compression ratio or target file size that has been predetermined to provide diagnostic quality. The optimal compression ratio is chosen empirically and is influenced by the characteristics of the image: its size, resolution, dynamic range, bit depth, noise content and frequency components all affect its compressibility. This per-modality selection can be further refined to reflect how the images will be used (fast browsing, reference viewing or diagnostic reading), who the user is (radiologist, clinician, insurance provider), and the available system resources such as bandwidth, memory and processing power.

Research attempts to correlate expert human judgments of image quality with quantities a machine can measure. Image processing and compression typically use methods that compare each pixel of the original, uncompressed image with the corresponding pixel of the compressed image: an image with large differences across many pixels is considered more degraded than one with small differences across fewer pixels. A common quantitative measure built on this idea is the mean square criterion; in many image-coding applications the mean square error is expressed as a peak signal-to-noise ratio (pSNR). The Aware API functions implement this pSNR measurement.

Aware's JPEG 2000 API includes functions that compress an image to a target pSNR value, a machine-measurable quantity of image degradation. The entire image can be compressed to a target pSNR, and each quality layer can be coded to its own pSNR value. Encoding each quality layer to a separate quality level can be combined with the progressive display capabilities of JPEG 2000 to set a minimum image quality for each layer that is transmitted and displayed.
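The paper does not name the Aware API functions involved, so the following is only a hedged sketch of the compress-to-target-pSNR idea using the open-source glymur bindings to OpenJPEG: encode at a trial compression ratio, decode, measure pSNR against the original, and bisect the ratio until the target is met. The output path, the 16-bit peak value, and the search bounds are assumptions for illustration.

```python
# Illustrative sketch only (glymur/OpenJPEG, not the Aware API): reach a target
# pSNR by bisecting the JPEG 2000 compression ratio. Assumes 16-bit image data.
import numpy as np
import glymur

def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 65535.0) -> float:
    """Peak signal-to-noise ratio in dB, derived from the mean square error."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)

def compress_to_target_psnr(image: np.ndarray, target_db: float,
                            out_path: str = "slice.jp2",   # hypothetical output path
                            lo: float = 2.0, hi: float = 200.0,
                            iterations: int = 12) -> float:
    """Approximate the largest compression ratio whose decoded pSNR >= target_db."""
    best = lo
    for _ in range(iterations):
        ratio = (lo + hi) / 2.0
        glymur.Jp2k(out_path, data=image, cratios=[ratio])   # lossy encode at trial ratio
        decoded = glymur.Jp2k(out_path)[:]                   # decode for measurement
        if psnr(image, decoded) >= target_db:
            best, lo = ratio, ratio                          # target met: compress harder
        else:
            hi = ratio                                       # target missed: back off
    glymur.Jp2k(out_path, data=image, cratios=[best])        # keep the best encoding
    return best
```

Because pSNR falls roughly monotonically as the compression ratio rises, this search converges quickly, but it re-encodes the image several times; an encoder with built-in distortion targeting can typically meet the target during its rate-allocation step without repeated re-encoding, which is why an integrated compress-to-target-pSNR function is preferable to an external search loop.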
Figure 2 - Compression to Target File Size: constant compression results in different levels of image distortion for each image in a CT series. (The plot shows file size in bytes and pSNR in dB against slice number.)

Figure 2 shows a graph of file size versus pSNR for a series of 38 CT images from one study. All of the images were compressed with Aware JPEG 2000 using a target compression ratio of 30:1. The original images have dimensions of 512 x 512 pixels and a bit depth of 16 bits per pixel. The data shows that while each image was compressed to the same target file size, the resulting pSNR, or distortion, varies from image to image.

Figure 3 shows the same series of images compressed to a target pSNR of 64 dB. The resulting compressed images were within a 0.3 dB range of the specified target pSNR, while the resulting file sizes varied from -20% to +20% around the mean file size. Setting the image distortion level to a constant pSNR results in different levels of compression for each image in the CT series.

Figure 3 - Compression to Target Quality. (The plot shows file size in bytes and pSNR in dB against slice number for the same CT series encoded to a constant target pSNR.)

Aware's technology also provides a method to specify a target pSNR for each quality layer within the JPEG 2000 file. Figure 4 shows a plot of the actual pSNR and the target pSNR for a JPEG 2000 file with 10 quality layers.

Figure 4 - The Aware J2K encoder enables very accurate pSNR distortion control for each quality layer of an image; the graph shows the specified and actual pSNR values for each layer.

Baseline or reference implementations of JPEG 2000 may use one of two methods to specify the amount of compression applied to an image: a compression ratio or a target file size. The Aware implementation provides an additional method: compression to a target pSNR. The Aware compress-to-target-pSNR functionality provides a robust and repeatable way to compress medical image data against a machine-measurable quality metric.

FULLY COMPLIANT FIXED-POINT DECOMPRESSION FOR OPTIMUM PRECISION AND SPEED

Medical imaging systems and JPEG 2000 require precise decoding of image data, such that an ISO-compliant JPEG 2000 decoder passes precision testing.
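The fixed-point decompression discussion is truncated at this point, but the precision testing it refers to can be sketched in a hedged way: decoded samples are compared against reference output and must stay within specified error bounds. The check below is a generic illustration with placeholder tolerances; it is not the conformance procedure defined in ISO/IEC 15444-4, nor Aware's own test suite.

```python
# Generic illustration of a decoder precision check (placeholder tolerances):
# compare decoded samples with reference samples and verify that the peak
# absolute error and the mean square error stay within the given bounds.
import numpy as np

def passes_precision_test(decoded: np.ndarray, reference: np.ndarray,
                          max_peak_error: int = 0, max_mse: float = 0.0) -> bool:
    diff = decoded.astype(np.int64) - reference.astype(np.int64)
    peak_error = int(np.max(np.abs(diff)))
    mse = float(np.mean(diff.astype(np.float64) ** 2))
    return peak_error <= max_peak_error and mse <= max_mse
```

The appeal of a fully compliant fixed-point pipeline, as the section heading suggests, is that integer arithmetic runs fast on general-purpose CPUs while still meeting precision bounds of this kind.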