Tim Bruylants

Total pages: 16

File type: PDF, size: 1020 kB

Tim Bruylants
FACULTY OF ENGINEERING
Department of Electronics and Informatics

Advanced Coding Technologies for Medical and Holographic Imaging: Algorithms, Implementations and Standardization

Thesis submitted in fulfilment of the requirements for the award of the degree of Doctor in de ingenieurswetenschappen (Doctor in Engineering) by Tim Bruylants, July 2015.

Advisors:
Prof. Dr. Peter Schelkens
Prof. Dr. Adrian Munteanu

Examining Committee:
Prof. Dr. ir. Peter Schelkens – Vrije Universiteit Brussel – Promoter
Prof. Dr. ir. Adrian Munteanu – Vrije Universiteit Brussel – Promoter
Prof. Dr. ir. Leo Van Biesen – Vrije Universiteit Brussel – Committee chair
Prof. Dr. ir. Johan Deconinck – Vrije Universiteit Brussel – Committee vice-chair
Prof. Dr. Bart Jansen – Vrije Universiteit Brussel – Committee secretary
Prof. Dr. ir. Søren Forchhammer – Technical University of Denmark – Member
Prof. Dr. ir. Aleksandra Pizurica – Universiteit Gent – Member
Prof. Dr. Johan De Mey – Faculteit Geneeskunde, UZ Brussel – Member

In memory of Bruno and Pam.

"An expert is a man¹ who has made all the mistakes that can be made in a very narrow field." – Niels Bohr
¹ Women are experts by default.

Table of contents

Acknowledgments
Synopsis
Acronyms

1 Introduction
  1.1 Motivation
    1.1.1 Volumetric medical image coding
    1.1.2 Digital holographic image coding
    1.1.3 JPEG Standardization
  1.2 Outline and major contributions

2 Image coding overview
  2.1 Introduction
  2.2 Concepts and definitions
    2.2.1 Digital images
    2.2.2 Entropy and mutual information
    2.2.3 Quantization
    2.2.4 Lossless and lossy compression
    2.2.5 Objective quality metrics
    2.2.6 Subjective quality metrics
  2.3 Discrete Wavelet Transform (DWT)
    2.3.1 Classical or non-directional DWT
    2.3.2 Directional DWT
    2.3.3 Intermediate conclusions
  2.4 JPEG
    2.4.1 Pre-processing
    2.4.2 Discrete Cosine Transform (DCT)
    2.4.3 Quantization
    2.4.4 Entropy coding
  2.5 JPEG-LS
    2.5.1 Context-based prediction
    2.5.2 Context modeling
    2.5.3 Golomb-Rice coding
    2.5.4 Arithmetic coding with Part 2
  2.6 JPEG 2000
    2.6.1 Pre-processing
    2.6.2 Discrete Wavelet Transform (DWT)
    2.6.3 Quantization
    2.6.4 Tier-1: EBCOT
    2.6.5 Code-stream organization
    2.6.6 Rate control
    2.6.7 Region of Interest (ROI) coding
    2.6.8 Intermediate conclusions
  2.7 HEVC
    2.7.1 High-level syntax
    2.7.2 Intra-picture prediction
    2.7.3 Inter-picture prediction
    2.7.4 Transform and quantization
    2.7.5 In-loop filters
    2.7.6 Entropy coding
  2.8 Conclusions

3 JP3D for volumetric image coding
  3.1 Introduction
  3.2 Why JP3D?
  3.3 JP3D: going volumetric
    3.3.1 Preprocessing
    3.3.2 The 3D discrete wavelet transform (3D-DWT)
    3.3.3 Quantization
    3.3.4 Bit-modeling and entropy coding
  3.4 Bit-stream organization
    3.4.1 The three-dimensional canvas coordinate system
    3.4.2 Code-stream
    3.4.3 Rate control
  3.5 Additional features of JP3D
    3.5.1 Region-of-Interest
  3.6 Implications for other parts of JPEG 2000
    3.6.1 Volumetric extension to JPIP
  3.7 JP3D Verification Model Software
  3.8 Volumetric context modeling (not part of JP3D)
    3.8.1 Introduction
    3.8.2 3D context models
    3.8.3 Mutual information based context modeling
    3.8.4 Near-optimal context classification
    3.8.5 Context model experiments
    3.8.6 Context modeling conclusions
  3.9 Conclusions

4 Volumetric image coding extensions
  4.1 Introduction
  4.2 Investigated Extensions for JPEG 2000
    4.2.1 Alternative wavelet filters
    4.2.2 Block-based volumetric Direction-Adaptive DWT (DA-DWT)
    4.2.3 Generic segmentation based on SD-DA-DWT
    4.2.4 Volumetric coding with block-based intra-band prediction
  4.3 Volumetric compression framework, JP3D+DA
  4.4 Experimental results
    4.4.1 Image data set
    4.4.2 Volumetric decomposition structures
    4.4.3 Block-based intra-band prediction
    4.4.4 Enabling DA-DWT
    4.4.5 Comparison with H.265/MPEG-H HEVC and JPEG-LS
    4.4.6 Computational complexity
    4.4.7 Visual comparisons
    4.4.8 Summarizing rate-distortion performance tables
  4.5 Conclusions

5 Coding of holographic data
  5.1 Introduction
  5.2 Representation of off-axis holograms
  5.3 Full Packet Decomposition with JPEG 2000
  5.4 Proposed extensions for JPEG 2000
    5.4.1 Truly Arbitrary Packet Decompositions
    5.4.2 Directional Adaptive Discrete Wavelet Transform (DA-DWT)
  5.5 Experiments
    5.5.1 Test data
    5.5.2 Test setup and settings
    5.5.3 Results
  5.6 Conclusions

6 JPEG standardization
  6.1 Introduction
  6.2 JPEG Committee
  6.3 JBIG, JPEG-LS and JPSearch
  6.4 JPEG 2000
  6.5 JPEG XR
    6.5.1 Introduction
    6.5.2 Technical overview
    6.5.3 Subjective testing methodology
  6.6 JPEG Systems
    6.6.1 Introduction
    6.6.2 JPEG Systems, a layered architecture
    6.6.3 Conclusions on JPEG Systems
  6.7 Advanced image coding and evaluations (AIC)
  6.8 JPEG XT
    6.8.1 Introduction
    6.8.2 Suite of standards
    6.8.3 Forward compatibility and new functionality
    6.8.4 JPEG XT Boxes
    6.8.5 High dynamic range coding
    6.8.6 Lossless and near-lossless coding
    6.8.7 Alpha channel coding
    6.8.8 Conclusions on JPEG XT
  6.9 Conclusions

7 Epilogue
  7.1 Conclusions
  7.2 Future research potential

List of publications
References
Index

Acknowledgments

Here I am, finally arriving at the end of a long but rewarding period. Like any PhD, this work too is the result of years of labor, often in cooperation with various people. For this reason, I want to express my appreciation to everyone who contributed to this work in some way. Before I continue with the personal acknowledgments, however, I would like to state that my memory sometimes fails me on simple things, such as remembering names. The fact is, I would like to thank so many people, but there is always the risk of forgetting someone. So, to mitigate this risk, I will keep these acknowledgments rather short, and if you are that person who also deserved credit, please don't feel offended.

After obtaining my Master of Science degree, I started working for a relatively small but nice company. After some years, however, I decided to make a career switch into the field of academic research. Yet going from a company to the academic world is not as straightforward as it may seem, as I was to find out. Therefore, and foremost, I thank Peter for hiring me as a researcher and for providing me with the opportunity and support not only to do a PhD, but also to become part of the wonderful world of standardization. I also genuinely thank Adrian, my co-advisor, for his great help and valuable input to this research. Along with Peter and Adrian, I also thank the jury members for their participation and effort regarding this PhD. Of all my friends, I would specifically like to
Recommended publications
  • PDF/A for Scanned Documents
    Webinar (www.pdfa.org): PDF/A for Scanned Documents – Paper Becomes Digital. Mark McKinney, President, LuraTech, Inc.; Armin Ortmann, CTO, LuraTech. © 2009 PDF/A Competence Center.
    Existing solutions for scanned documents: black-and-white scans are usually stored as TIFF G4; color scans mostly as JPEG, but sometimes PNG, BMP and other raster graphics formats, often in special variant formats like "JPEG in TIFF". Disadvantages: several formats are already in use for scanned documents, and even more for born-digital documents; loss of information (e.g. with TIFF G4); bad image quality and huge file sizes (e.g. with JPEG); no standardized metadata across all formats; files are not full-text searchable (OCR). Bad image quality vs. file size: TIFF/BMP 23.8 MB, JPEG 180 kB, TIFF G4 60 kB.
    Alternative solution: PDF. PDF is already widely used to unify file formats (image → PDF, "Office" documents → PDF, other sources → PDF), to create full-text searchable files, to apply modern compression technology (e.g. the JPEG 2000 file format family) and to harmonize metadata. Conclusion: PDF avoids the disadvantages of the legacy formats. "So if you are already using PDF as archival format, why not use PDF/A with its many advantages?"
    What is PDF/A? ISO 19005-1, Document Management: an electronic document file format for long-term preservation. Goals of PDF/A: maintain the static visual representation of documents; consistent handling of metadata; the option to maintain structure and semantic meaning of content; transparency to guarantee access; limit the number of restrictions.
  • Chapter 9 Image Compression Standards
    Fundamentals of Multimedia, Chapter 9: Image Compression Standards. 9.1 The JPEG Standard; 9.2 The JPEG2000 Standard; 9.3 The JPEG-LS Standard; 9.4 Bi-level Image Compression Standards; 9.5 Further Exploration. (Li & Drew, © Prentice Hall 2003)
    9.1 The JPEG Standard. JPEG is an image compression standard that was developed by the "Joint Photographic Experts Group". JPEG was formally accepted as an international standard in 1992. JPEG is a lossy image compression method. It employs a transform coding method using the DCT (Discrete Cosine Transform). An image is a function of i and j (or conventionally x and y) in the spatial domain. The 2D DCT is used as one step in JPEG in order to yield a frequency response which is a function F(u, v) in the spatial frequency domain, indexed by two integers u and v.
    Observations for JPEG image compression: the effectiveness of the DCT transform coding method in JPEG relies on 3 major observations. Observation 1: useful image contents change relatively slowly across the image, i.e., it is unusual for intensity values to vary widely several times in a small area, for example within an 8 × 8 image block; much of the information in an image is repeated, hence "spatial redundancy". Observation 2: psychophysical experiments suggest that humans are much less likely to notice the loss of very high spatial frequency components than the loss of lower frequency components.
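    As a rough illustration of the DCT step described above, the following sketch (assuming NumPy and SciPy are available; the helper names forward_dct and inverse_dct are mine, not part of the JPEG standard) transforms one 8 × 8 block and shows that a smooth block concentrates its energy in the low-frequency F(u, v) coefficients. Quantization and entropy coding, the remaining JPEG steps, are omitted.

        # Minimal sketch (not the JPEG reference codec) of the 8x8 forward/inverse
        # 2D DCT step, using SciPy. The level shift by -128 and the orthonormal
        # DCT-II follow common JPEG practice for 8-bit samples.
        import numpy as np
        from scipy.fft import dctn, idctn

        def forward_dct(block):
            """Forward 2D DCT-II of one 8x8 block of 8-bit samples -> F(u, v)."""
            return dctn(block.astype(np.float64) - 128.0, norm="ortho")

        def inverse_dct(coeffs):
            """Inverse 2D DCT, returning samples back in the 0..255 range."""
            return idctn(coeffs, norm="ortho") + 128.0

        if __name__ == "__main__":
            # A smooth 8x8 block: per Observation 1, its energy concentrates in
            # the low-frequency coefficients (small u, v).
            j = np.arange(8)
            block = 128.0 + 50.0 * np.cos(np.pi * j / 8.0)[None, :] * np.ones((8, 1))
            F = forward_dct(block)
            low = np.abs(F[:2, :2]).sum()
            total = np.abs(F).sum()
            print(f"share of |F(u,v)| in the 2x2 low-frequency corner: {low / total:.3f}")
            print("max reconstruction error:", np.abs(inverse_dct(F) - block).max())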
  • Electronics Engineering
    INTERNATIONAL JOURNAL OF ELECTRONICS ENGINEERING, ISSN 0973-7383, Volume 11, Number 1, 2019. Study of Different Image File Formats for Raster Images. Prof. S. S. Thakare (Assistant Professor, GCOEA, Amravati, India, [email protected]) and Prof. Dr. S. N. Kale (Assistant Professor, SGBAU, Amravati, India, [email protected]).
    Abstract: In the current digital world, the usage of images is very high. The development of multimedia and digital imaging requires very large disk space for storage and very high network bandwidth for transmission. As both are relatively expensive, image compression is required to yield a compact representation of a digital image without affecting its essential information, while reducing transmission time. This paper examines compression for some of the image representation formats, and experimental results for several image file formats are shown. Keywords: image file formats, JPEG, PNG, TIFF, BITMAP, GIF, compression techniques, compressed image processing.
    1. INTRODUCTION. Digital images generally occupy a large amount of storage space and therefore take a long time to transmit and download (Sayood 2012; Salomon et al. 2010; Miano 1999). To reduce this time, image compression is necessary. Image compression is a technique used to identify internal data redundancy and then develop a compact representation that takes up less storage space than the original image; the reverse process is called decompression (Javed 2016; Kia 1997). There are two types of image compression (Gonzalez and Woods 2009): 1. lossy image compression and 2. lossless image compression. Lossy compression techniques remove some part of the data, so they are used when perfect consistency with the original data is not necessary after decompression.
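    To make the lossy/lossless distinction concrete, here is a small sketch (assuming NumPy and Pillow are installed; the synthetic test image is mine, not from the paper) that saves the same grayscale image as PNG (lossless) and as JPEG at two quality settings (lossy) and compares file sizes and reconstruction exactness.

        # Illustrative only: compare file size and exactness of PNG vs. JPEG.
        import io
        import numpy as np
        from PIL import Image

        rng = np.random.default_rng(1)
        ramp = np.linspace(0, 255, 256)
        # Synthetic photo-like image: smooth gradient plus mild noise.
        img = np.clip(ramp[None, :] + ramp[:, None] / 2 + rng.normal(0, 8, (256, 256)),
                      0, 255).astype(np.uint8)
        original = Image.fromarray(img)   # mode "L" grayscale image

        for fmt, kwargs in [("PNG", {}), ("JPEG", {"quality": 90}), ("JPEG", {"quality": 30})]:
            buf = io.BytesIO()
            original.save(buf, format=fmt, **kwargs)
            decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
            exact = np.array_equal(decoded, img)   # True only for lossless formats
            print(f"{fmt} {kwargs}: {len(buf.getvalue())} bytes, exact reconstruction: {exact}")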
  • Document Size Converter Jpg
    Processing of JPEG photos online: main page, resize, convert, compress, EXIF editor, effects, improve, and other tools; compress a JPG file to a specified size. JPEGmini: reduce file size, not quality. The JPEG algorithm can compress images both lossily and losslessly. Adjust the size of images by using the selection handles.
  • JPEG and JPEG 2000
    JPEG and JPEG 2000: past, present, and future. Richard Clark, Elysium Ltd, Crowborough, UK ([email protected]).
    Planned presentation: brief introduction; JPEG – 25 years of standards; shortfalls and issues; why JPEG 2000?; JPEG 2000 – imaging architecture; JPEG 2000 – what it is (should be!); current activities; new and continuing work.
    Introductions: Richard Clark – working in technical standardisation since the early 70s: fax, email, character coding (8859-1 is the basis of HTML), image coding, multimedia. Elysium, set up in '91 as an SME innovator on the Web. Currently looks after the JPEG web site, historical archive, some PR, some standards as editor (extensions to JPEG, JPEG-LS, the MIME type RFC and software reference for JPEG 2000), HD Photo in JPEG, and the UK MPEG and JPEG committees – plus some work that is actually funded.
    Elysium in Europe: ACTS project SPEAR – advanced JPEG tools; ESPRIT project Eurostill – consensus building on JPEG 2000; IST – Migrator 2000 – tool migration and feature exploitation of JPEG 2000; 2KAN – JPEG 2000 advanced networking. Plus some other involvement through CEN in cultural heritage and medical imaging, Interreg and others.
    25 years of standards: JPEG – Joint Photographic Experts Group, a joint venture between ISO and CCITT (now ITU-T). Evolved from photo-videotex and character coding. First meeting March 83 – JPEG proper started in July 86. 42nd meeting in Lausanne, next week… Attendance through national
  • Analysis and Comparison of Compression Algorithm for Light Field Mask
    International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 12, Number 12 (2017), pp. 3553-3556. © Research India Publications. http://www.ripublication.com
    Analysis and Comparison of Compression Algorithms for Light Field Mask. Hyunji Cho (Department of Computer Science, SangMyung University, Korea) and Hoon Yoo (Associate Professor, Department of Media Software, SangMyung University, Korea; corresponding author).
    Abstract: This paper describes a comparison and analysis of state-of-the-art lossless image compression algorithms for light field mask data, which are very useful in transmitting and refocusing light field images. Recently, light field cameras have received wide attention in that they provide 3D information, and there has been wide interest in studying light field data compression due to the huge size of light field data. However, most existing light field compression methods ignore the mask information, which is one of the important features of light field images. In this paper, we report on compression algorithms and use them to achieve binary image compression through analysis and comparison of standard compression methods such as the JBIG, JBIG2 and PNG algorithms. The results seem to confirm that the PNG method provides better results than the state-of-the-art JBIG and JBIG2 methods for binary image compression. Keywords: lossless compression, image compression, light field compression, plenoptic coding.
    (Figure 2: basic architecture from raw images to RGB and mask images.)
    INTRODUCTION: The LF camera provides a raw image captured from a photosensor with a microlens, as depicted in Fig. 1. The raw data consists of 10 bits per pixel precision in little-endian format.
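    In the spirit of the comparison above, the following hedged sketch compresses a synthetic bi-level mask with a few generally available lossless methods. JBIG and JBIG2 codecs are not assumed to be installed, so raw bit-packing, zlib and 1-bit PNG (via Pillow) stand in for the standard methods; the numbers only illustrate the methodology, not the paper's results.

        # Hedged sketch: size comparison for a synthetic bi-level (mask) image.
        import io
        import zlib
        import numpy as np
        from PIL import Image

        # Synthetic light-field-style mask: a regular grid of circular apertures.
        h, w = 512, 512
        yy, xx = np.mgrid[0:h, 0:w]
        mask = (((yy % 32) - 16) ** 2 + ((xx % 32) - 16) ** 2) < 12 ** 2   # boolean

        packed = np.packbits(mask)                       # 1 bit/pixel, uncompressed
        png_buf = io.BytesIO()
        Image.fromarray(mask.astype(np.uint8) * 255).convert("1").save(png_buf, "PNG")

        print("raw packed bits :", packed.nbytes, "bytes")
        print("zlib (level 9)  :", len(zlib.compress(packed.tobytes(), 9)), "bytes")
        print("1-bit PNG       :", len(png_buf.getvalue()), "bytes")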
  • PDF Image JBIG2 Compression and Decompression with JBIG2 Encoding and Decoding SDK Library
    JBIG2 is an image compression standard for bi-level images developed by the Joint Bi-level Image Experts Group. It is suitable for both lossless and lossy compression. According to the group's press release, in its lossless mode JBIG2 typically generates files one-third to one-fifth the size of Fax Group 4 files and about half the size of files produced by JBIG, the bi-level compression standard previously released by the group. JBIG2 was published as an international standard by the ITU in 2000.
    JBIG2 compression: JBIG2 is an international standard for bi-level image compression. By segmenting the image into overlapping and/or non-overlapping regions of text, halftones and generic content, compression techniques optimized for each content type are used.
    Text regions: a text region is composed of characters that are well suited to symbol-based encoding methods. Usually each symbol corresponds to a character bitmap, a sub-image representing a character or piece of text. For each uppercase and lowercase character used in the font there is usually only one character bitmap (or sub-image) in the symbol dictionary; for example, the dictionary will have an "a" bitmap, an "A" bitmap, a "b" bitmap, and so on.
    Halftone regions: halftone regions are similar to text regions because they consist of patterns arranged in a regular grid.
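    The symbol-dictionary idea for text regions can be sketched in a few lines. This is an illustration of the concept only, not the JBIG2 coder: scipy.ndimage is assumed to be available, the function names are mine, and only exact bitmap matches are pooled, whereas JBIG2 also supports refinement and soft pattern matching.

        # Concept sketch: connected components -> dictionary of unique bitmaps
        # -> page described as (symbol index, x, y) placements.
        import numpy as np
        from scipy import ndimage

        def symbolize(page):
            """page: 2D boolean array (True = black). Returns (dictionary, placements)."""
            labels, n = ndimage.label(page)
            dictionary, index_of, placements = [], {}, []
            for obj_id, sl in enumerate(ndimage.find_objects(labels), start=1):
                bitmap = (labels[sl] == obj_id)
                key = (bitmap.shape, bitmap.tobytes())      # exact-match pooling only
                if key not in index_of:
                    index_of[key] = len(dictionary)
                    dictionary.append(bitmap)
                placements.append((index_of[key], sl[1].start, sl[0].start))
            return dictionary, placements

        if __name__ == "__main__":
            page = np.zeros((40, 120), dtype=bool)
            glyph = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=bool)  # a fake "O"
            for x in (5, 25, 45, 65):                        # the same glyph repeated
                page[10:13, x:x + 3] = glyph
            d, p = symbolize(page)
            print(len(d), "dictionary symbol(s) for", len(p), "placements")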
  • The JPEG2000 Still Image Coding System: An Overview
    Publication page: https://www.researchgate.net/publication/3180415
    The JPEG2000 still image coding system: An overview. Article in IEEE Transactions on Consumer Electronics, Vol. 46, No. 4, pp. 1103-1127, November 2000. DOI: 10.1109/30.920468. Source: IEEE Xplore.
    Charilaos Christopoulos, Senior Member, IEEE (Media Lab, Ericsson Research Corporate Unit, Ericsson Radio Systems AB, S-16480 Stockholm, Sweden; Email: [email protected]), Athanassios Skodras, Senior Member, IEEE (Electronics Laboratory, University of Patras, GR-26110 Patras, Greece; Email: [email protected]), and Touradj Ebrahimi, Member, IEEE (Signal Processing Laboratory, EPFL, CH-1015 Lausanne, Switzerland; Email: [email protected]).
    Abstract: With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard is currently being developed, the JPEG2000. [...] ... international standard for the compression of grayscale and color still images. This effort has been known as JPEG, the Joint Photographic Experts Group; the "joint" in JPEG refers to the collaboration ...
  • Lossless Image Compression
    Lossless Image Compression. C.M. Liu, Perceptual Signal Processing Lab, College of Computer Science, National Chiao-Tung University. http://www.csie.nctu.edu.tw/~cmliu/Courses/Compression/ Office: EC538, (03)5731877, [email protected]
    Lossless JPEG (1992), ITU Recommendation T.81 (09/92). Compression is based on 8 predictive modes ("selection values"), using the causal neighbors of the current pixel x (W to its left, N above, NW above-left):
      0: P(x) = x (no prediction)
      1: P(x) = W
      2: P(x) = N
      3: P(x) = NW
      4: P(x) = W + N - NW
      5: P(x) = W + ⌊(N - NW)/2⌋
      6: P(x) = N + ⌊(W - NW)/2⌋
      7: P(x) = ⌊(W + N)/2⌋
    The prediction-error sequence is then entropy-coded (Huffman or arithmetic coding).
    Selection value 0 is used for differential coding only, in hierarchical mode. Values 1, 2 and 3 are one-dimensional predictors; values 4, 5, 6 and 7 are two-dimensional. Value 1 (W) is used in the first line of samples and at the beginning of each restart; the selected predictor is used for the other lines. Value 2 (N) is used at the start of each line, except the first. The default predictor value 2^(P-1) is used at the start of the first line and at the beginning of each restart.
    Lossless JPEG performance: JPEG prediction mode comparisons; JPEG vs. GIF vs. PNG.
    Context-Adaptive Lossless Image Compression (CALIC) [Wu 95/96]. Two modes: gray-scale and bi-level (the lossy scheme is skipped for now). Basic ideas: find the best context from the information available to encoder and decoder, and estimate the presence or lack of horizontal/vertical features.
    CALIC initial prediction:
      if dh − dv > 80          // sharp horizontal edge
          X* = N
      else if dv − dh > 80     // sharp vertical edge
          X* = W
      else {                   // assume smoothness first
          X* = (N + W)/2 + (NE − NW)/4
          if dh − dv > 32      // horizontal edge
              X* = (X* + N)/2
    …
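    The eight predictors listed above translate directly into code. The sketch below is illustrative only, not the ITU-T T.81 reference implementation: integer floor division stands in for the floor operator, and borders simply wrap, whereas the standard treats the first row/column specially as described above.

        # Lossless-JPEG predictors for selection values 0..7 (illustrative only).
        import numpy as np

        def predict(img, sv):
            """Prediction image for selection value sv in 0..7."""
            img = img.astype(np.int32)
            W = np.roll(img, 1, axis=1)    # left neighbor
            N = np.roll(img, 1, axis=0)    # top neighbor
            NW = np.roll(N, 1, axis=1)     # top-left neighbor
            table = {
                0: img,                    # no prediction (hierarchical mode only)
                1: W,
                2: N,
                3: NW,
                4: W + N - NW,
                5: W + (N - NW) // 2,
                6: N + (W - NW) // 2,
                7: (W + N) // 2,
            }
            return table[sv]

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            # Smooth synthetic image: predictors should leave small residuals.
            ramp = np.arange(64, dtype=np.int32)
            img = (ramp[None, :] + ramp[:, None] + rng.integers(0, 4, (64, 64))).astype(np.int32)
            for sv in range(1, 8):
                residual = img - predict(img, sv)
                print(f"selection value {sv}: mean |residual| = {np.abs(residual).mean():.2f}")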
  • Development of Standard Data Format for 2-Dimensional and 3-Dimensional (2D/3D) Pavement Image Data used to determine Pavement Surface Condition and Profiles
    Development of Standard Data Format for 2-Dimensional and 3-Dimensional (2D/3D) Pavement Image Data used to determine Pavement Surface Condition and Profiles. Task 4 – Develop Metadata and Proposed Standards. Office of Technical Services, FHWA Resource Center, Pavement & Materials Technical Services Team, December 2016.
    Notice: This document is disseminated under the sponsorship of the U.S. Department of Transportation in the interest of information exchange. The U.S. Government assumes no liability for the use of the information contained in this document. This report does not constitute a standard, specification, or regulation. The U.S. Government does not endorse products or manufacturers. Trademarks or manufacturers' names appear in this report only because they are considered essential to the objective of the document to transfer technical information.
    Quality Assurance Statement: The Federal Highway Administration (FHWA) provides high-quality information to serve Government, industry, and the public in a manner that promotes public understanding. Standards and policies are used to ensure and maximize the quality, objectivity, utility, and integrity of its information. The FHWA periodically reviews quality issues and adjusts its programs and processes to ensure continuous quality improvement.
    Technical Report Documentation Page:
      1. Report No.:
      2. Government Accession No.:
      3. Recipient's Catalog No.:
      4. Title and Subtitle: Development of Standard Data Format for 2-Dimensional and 3-Dimensional (2D/3D) Pavement Image Data that is used to determine Pavement Surface Condition and Profiles
      5. Report Date: 12-20-2016
      6. Performing Organization Code:
      7. Author(s): Wang, Kelvin C. P., Qiang "Joshua" Li, and Cheng Chen
      8. Performing Organization Report No.:
      9. …
  • High Dynamic Range Image Compression Based on Visual Saliency – Jin Wang, Shenda Li and Qing Zhu
    SIP (2020), vol. 9, e16, page 1 of 15. © The Author(s), 2020, published by Cambridge University Press in association with the Asia Pacific Signal and Information Processing Association. This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is included and the original work is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use. doi:10.1017/ATSIP.2020.15
    Original paper: High dynamic range image compression based on visual saliency. Jin Wang, Shenda Li and Qing Zhu.
    With a wider luminance range than conventional low dynamic range (LDR) images, high dynamic range (HDR) images are more consistent with the human visual system (HVS). Recently, the JPEG committee released a new HDR image compression standard, JPEG XT. It decomposes an input HDR image into a base layer and an extension layer. The base layer code stream provides JPEG (ISO/IEC 10918) backward compatibility, while the extension layer code stream helps to reconstruct the original HDR image. However, this method does not make full use of the HVS, causing a waste of bits on regions that are imperceptible to human eyes. In this paper, a visual saliency-based HDR image compression scheme is proposed. The saliency map of the tone-mapped HDR image is first extracted, then it is used to guide the encoding of the extension layer. The compression quality is adaptive to the saliency of the coding region of the image. Extensive experimental results show that our method outperforms JPEG XT profiles A, B and C and other state-of-the-art methods.
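    The layered idea described in the abstract can be sketched with NumPy alone: a tone-mapped base layer, a log-ratio extension layer, and coarser quantization of the extension layer where saliency is low. The global tone mapping and the gradient-based "saliency" below are simple stand-ins of my own, not the authors' model, and no actual JPEG or JPEG XT coding is performed.

        # Hedged, NumPy-only sketch of saliency-guided two-layer HDR coding.
        import numpy as np

        rng = np.random.default_rng(3)
        hdr = np.exp(rng.normal(0.0, 2.0, (128, 128)))           # synthetic HDR radiance

        # Base layer: simple global tone mapping to the 1..255 range (JPEG stand-in).
        base = np.clip(np.round(255.0 * hdr / (1.0 + hdr)), 1.0, 255.0)

        # Extension layer: log ratio needed to recover HDR values from the base layer.
        ratio = np.log(hdr / (base / 255.0))

        # Illustrative saliency map: normalized local contrast of the base layer.
        gy, gx = np.gradient(base)
        saliency = np.hypot(gx, gy)
        saliency /= saliency.max()

        # Spend fewer bits (coarser quantization) where saliency is low.
        step = np.where(saliency > 0.25, 0.05, 0.40)
        ratio_q = np.round(ratio / step) * step

        reconstructed = (base / 255.0) * np.exp(ratio_q)
        log_err = np.abs(np.log(reconstructed) - np.log(hdr))
        print("mean log-domain error, salient regions    :", log_err[step == 0.05].mean())
        print("mean log-domain error, non-salient regions:", log_err[step == 0.40].mean())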
  • Kompresja Statycznego Obrazu (Still Image Compression)
    Damian Karwowski, "Zrozumieć Kompresję Obrazu" (Understanding Image Compression): Fundamentals of Lossy and Lossless Image Coding Techniques. First edition, version 1.2, Poznań 2019. ISBN 978-83-953420-0-4. © Copyright by Damian Karwowski. All rights reserved. The book is protected by copyright and related rights; a copy of the book has been deposited at a notary's office. The book is available at www.zrozumieckompresje.pl.
    The cover design and the website were created by Marek Piskulski; many thanks for the professional work. The painting "Miś Panda" (Panda Bear), which is part of the cover, was painted on canvas by Natalka Karwowska; Natalka, thank you for your effort.
    The author has made every effort to ensure that the information in the book is accurate and reliably presented. Nevertheless, the reader uses the book entirely at his or her own risk, and the author accepts no liability for any damages resulting from the use of the knowledge contained in it.
    About the author: Dr. Eng. Damian Karwowski is a graduate of Poznań University of Technology, where he received his M.Sc. and Ph.D. degrees in technical sciences in 2003 and 2008, respectively. He is currently a member of the research and teaching staff at that university. He has worked professionally on image compression since 2003 and has authored more than 50 scientific publications on image compression and image processing. He has taken part in numerous research projects on the efficient representation of multimedia data and has been a member of many research teams carrying out research and development work on image and audio compression for industry. His interests include data compression, entropy coding techniques, and the implementation of image and audio codecs on x86 and DSP processors.
    Table of contents: A "Word" of Introduction; Chapter 1 – The image and its spatial representation; 1.1.