Entropy Coding of Adaptive Sparse Data Representations by Context Mixing

PDF, 16 pages, 1020 KB

Entropy Coding of Adaptive Sparse Data Representations by Context Mixing

by Thomas Wenzel, matriculation no. 5960073, born 23 July 1987
Master's thesis in Mathematics, Universität Hamburg, 2013
First examiner: Prof. Dr. Armin Iske
Second examiner: Dr. Laurent Demaret

Declaration

I have written this thesis independently and have used no aids other than those stated, in particular no internet sources that are not named in the list of references. I have not previously submitted this thesis in another examination procedure. The submitted written version corresponds exactly to the version on the electronic storage medium.

Abstract

In this thesis, improvements on the video compression scheme AT3D, which relies on linear spline interpolation over anisotropic Delaunay tetrahedralizations, are presented by introducing a more sophisticated entropy coding stage and by optimizing several intermediate computations to reduce compression time. A detailed literature summary of information theory basics and a detailed description of the AT3D implementation are given. The original entropy coding stage is replaced by the core of the general-purpose data compression scheme PAQ by Matt Mahoney, which employs a geometric context-mixing technique based on online convex programming for coding cost minimization. Novel models for pixel-position and grayscale-value encoding are incorporated into the new AT3D_PAQ implementation and investigated numerically. A significant average improvement was achieved on a wide variety of test sequences, and in a comparison with H.264 a class of sequences is presented on which AT3D_PAQ outperforms H.264. Theoretical and numerical results additionally demonstrate the higher computational efficiency of AT3D_PAQ and show significantly reduced compression times in a variety of test environments.

Contents

1 Introduction 3
1.1 Thesis organization and contributions 6
2 Fundamentals 8
2.1 Notational conventions 8
2.2 A brief introduction to video compression 8
2.2.1 Summary and outlook 13
2.3 MPEG4-AVC/H.264 14
2.3.1 Overview 14
2.3.2 Coding units 14
2.3.3 Intra- and inter-frame prediction 16
2.3.4 Transform coding 17
2.3.5 Scan, quantization and entropy coding 18
2.3.6 In-loop deblocking filter 18
2.3.7 Rate control 19
2.3.8 Summary 19
2.4 Introduction to information theory 20
2.4.1 Basic definitions 20
2.4.2 Entropy 21
2.4.3 Linking entropy to data compression 25
2.4.4 Arithmetic coding 29
2.4.5 Separation of modeling and encoding 34
2.4.6 Summary 35
2.5 AT3D 36
2.5.1 Delaunay tetrahedralizations 37
2.5.2 Linear splines over Delaunay tetrahedralizations 38
2.5.3 Pre-processing 40
2.5.4 Adaptive thinning 41
2.5.5 Post-processing: Pixel exchange 43
2.5.6 Post-processing: Best approximation 45
2.5.7 Quantization 46
2.5.8 Implementation of arithmetic coding 47
2.5.9 Entropy coding: Pixel positions 47
2.5.10 Entropy coding: Grayscale values 51
2.5.11 Numerical results regarding stationarity 54
3 Development of AT3D_PAQ 55
3.1 PAQ 56
3.1.1 PAQ8 - Building blocks 57
3.1.2 Context mixing in PAQ1-3 60
3.1.3 Context mixing in PAQ4-6 61
3.1.4 Context mixing in PAQ7-8: Logistic/geometric mixing 64
3.1.5 PAQ8 - Modeling tools 67
3.1.6 PAQ8 - Adaptive probability maps 68
3.2 Integration of PAQ into AT3D 69
3.3 Modifications of PAQ after the integration 70
3.3.1 Modeling pixel positions 71
3.3.2 Modeling grayscale values 75
3.4 Numerical results: AT3D_PAQ vs. AT3D 79
3.4.1 Pixel position encoding results 81
3.4.2 Grayscale value encoding results 85
3.4.3 Total compression results for selected sequences 85
3.4.4 Summary and analysis of overall results 90
3.4.5 Compression time comparison: AT3D_PAQ vs. AT3D 90
3.5 Numerical results: AT3D_PAQ vs. H.264 91
3.5.1 Summary and analysis of results 98
3.6 Conclusion 98
3.7 Outlook 99
4 Runtime optimization 100
4.1 Improvements of the AT3D implementation 101
4.1.1 Assignment of pixels to tetrahedra 101
4.1.2 Custom queue implementation 117
4.1.3 Further improvements 119
4.2 Numerical results 120
4.3 Conclusion 127
4.4 Outlook 127
4.5 Further code improvements 128
Appendices 129
A List of used hard- and software 129
B Test video sequences 130
C Detailed compression results 133
D Detailed H.264 results 137
E Detailed runtime results 139

1 Introduction

Given a sequence S of symbols from an alphabet A that is to be stored on a medium or transmitted via a channel, it is natural to ask what its shortest representation is that still allows an exact reconstruction of S, and how to obtain it. This requires the process of lossless data compression or entropy coding, a transformation of an input sequence into a preferably shorter output sequence. The first answers were found by Shannon in 1948, when he created the field of information theory. Shannon introduced the concept of the entropy of a sequence S. Entropy may be measured in bits per symbol and refers to the expected information content per symbol of a sequence with respect to a probability distribution P of the symbols in S. One of the most important results of his work was to show that the entropy is a lower bound on the minimal length of an encoded sequence that still allows exact reconstruction. We will present results from the information theory literature extending Shannon’s results to sources S, which are defined as random processes generating sequences Si, and to their entropy rates, which describe the expected information content per symbol of a sequence emitted by S.
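In the notation of the information theory literature, the quantities referred to here take the following standard form (the symbols H, P and the base-2 logarithm follow common conventions and are not quoted from the thesis itself):

```latex
% Entropy of a random variable X with distribution P over the alphabet A,
% measured in bits per symbol:
H(X) = -\sum_{a \in A} P(a)\,\log_2 P(a)

% For a source S = (X_1, X_2, \ldots): per-symbol joint entropy of a block
% of length n, and the entropy rate of S (when the limit exists):
\frac{1}{n}\,H(X_1,\ldots,X_n) = -\frac{1}{n}\sum_{x_1,\ldots,x_n} P(x_1,\ldots,x_n)\,\log_2 P(x_1,\ldots,x_n),
\qquad
H(S) = \lim_{n\to\infty} \frac{1}{n}\,H(X_1,\ldots,X_n)
```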
Those results show that a lower bound on the expected code length per symbol of such a sequence Si exists and that it is given by the entropy rate of S, provided S is stationary. The latter notion refers to the independence of the joint probability distributions of arbitrary symbols from their time of emission. Even if S is not stationary, which is the case in our application, a lower bound on the expected code length per symbol is given by the joint entropy, normalized by its length, of a finite consecutive subsequence of the random variables that S consists of. It is calculated based on the joint probability distribution of those random variables. Unfortunately, in practical applications none of these lower bounds is computable. The above concepts and results will be introduced in detail later; they provide a theoretical background for data compression and lead to a better understanding of our later goals. We will focus on a specific kind of data to compress: video data, a digital representation of a sequence of images. The need for efficient video compression schemes is due to the large size of typical video sequences and the variety of fields in which video sequences have to be stored or transmitted. Most video compression schemes, and in particular the ones considered in this thesis, are lossy schemes, which introduce an error in the reconstructed video data. In 1988 the scheme H.261 was introduced, which created the basis of today’s state-of-the-art general-purpose compression scheme H.264/MPEG4-AVC. Both rely on intra-frame compression via a discrete cosine transform (or a closely related transform, respectively) and translational inter-frame motion compensation of pixel blocks. Besides a vast number of small improvements, H.264 additionally introduced an in-loop deblocking filter, which improved the visual quality of its output significantly by reducing the tendency to produce visible block artifacts. A detailed introduction to the specification of H.264 is given in a dedicated chapter. In this thesis we focus on the improvement of a different compression scheme, yet we will compare it to H.264 later: AT3D. The latter was developed by Demaret, Iske, and Khachabi, based on the two-dimensional image compression scheme AT (adaptive thinning). The given data is interpreted as a 3D scalar field with domain X, and the scheme then relies on linear spline interpolation over anisotropic Delaunay tetrahedralizations. The linear spline space is indirectly determined by the greedy iterative adaptive thinning algorithm, which selects a sparse set Xn of significant pixels from the initial grid by removing one least significant pixel or edge in each iteration.
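To make the greedy selection loop just described more concrete, the following sketch mimics its control flow. It is a simplified illustration only: the real AT3D algorithm measures significance via the linear-spline approximation error over the affected Delaunay tetrahedra and also removes edges, while the function `significance` and the data layout below are toy assumptions, not code from the AT3D implementation.

```cpp
// Minimal sketch of the greedy adaptive thinning loop (illustration only).
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Pixel { int x, y, t; double value; };

// Hypothetical significance: anticipated error increase if 'p' is removed.
// In AT3D this requires re-interpolating the affected tetrahedra.
static double significance(const Pixel& p) {
    return std::abs(p.value);   // toy stand-in
}

// Keep the n most significant pixels of the initial grid X.
std::vector<Pixel> adaptiveThinning(std::vector<Pixel> X, std::size_t n) {
    while (X.size() > n) {
        // Find the currently least significant pixel ...
        std::size_t worst = 0;
        for (std::size_t i = 1; i < X.size(); ++i)
            if (significance(X[i]) < significance(X[worst])) worst = i;
        // ... and remove it.  AT3D additionally updates the Delaunay
        // tetrahedralization and the significances of the neighbours here.
        X.erase(X.begin() + worst);
    }
    return X;   // sparse set Xn of significant pixels
}

int main() {
    std::vector<Pixel> grid;
    for (int i = 0; i < 16; ++i) grid.push_back({i % 4, i / 4, 0, double(i % 5)});
    std::vector<Pixel> sparse = adaptiveThinning(grid, 4);
    std::printf("kept %zu pixels\n", sparse.size());
}
```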
Recommended publications
  • Lossless Data Compression with Transformer
    Under review as a conference paper at ICLR 2020. LOSSLESS DATA COMPRESSION WITH TRANSFORMER. Anonymous authors, paper under double-blind review. ABSTRACT: Transformers have replaced long short-term memory and other recurrent neural network variants in sequence modeling. They achieve state-of-the-art performance on a wide range of tasks related to natural language processing, including language modeling, machine translation, and sentence representation. Lossless compression is another problem that can benefit from better sequence models. It is closely related to the problem of online learning of language models. But, despite this resemblance, it is an area where purely neural network based methods have not yet reached the compression ratio of state-of-the-art algorithms. In this paper, we propose a Transformer-based lossless compression method that matches the best compression ratio for text. Our approach is purely based on neural networks and does not rely on hand-crafted features as other lossless compression algorithms do. We also provide a thorough study of the impact of the different components of the Transformer and its training on the compression ratio. 1 INTRODUCTION: Lossless compression is a class of compression algorithms that allows for the perfect reconstruction of the original data. In the last decades, statistical methods for lossless compression have been dominated by PAQ-type approaches (Mahoney, 2005). The structure of these approaches is similar to the Prediction by Partial Matching (PPM) of Cleary & Witten (1984) and is composed of two separate parts: a predictor and an entropy encoder. Entropy coding schemes like arithmetic coding (Rissanen & Langdon, 1979) are optimal and most of the compression gains come from improving the predictor.
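The split into a predictor and an entropy coder mentioned in this abstract is the same split used throughout this thesis. As an illustration of the entropy-coding half, the sketch below shows a carry-less binary arithmetic encoder of the kind used in PAQ-style compressors; it is a generic sketch rather than code from the paper above or from PAQ, the 12-bit probability scale is an assumed convention, and the matching decoder is omitted.

```cpp
// Carry-less binary arithmetic encoder sketch: encodes one bit at a time
// given a probability p (of the bit being 1) scaled to 0..4095.
#include <cstdint>
#include <cstdio>
#include <vector>

class BinaryArithmeticEncoder {
    uint32_t x1 = 0, x2 = 0xFFFFFFFF;   // current coding interval [x1, x2]
    std::vector<uint8_t> out;
public:
    void encode(int bit, uint32_t p /* P(bit==1) in [0,4096) */) {
        // Split the interval proportionally to p.
        uint32_t xmid = x1 + (uint32_t)(((uint64_t)(x2 - x1) * p) >> 12);
        if (bit) x2 = xmid; else x1 = xmid + 1;
        // Emit leading bytes as soon as x1 and x2 agree on them.
        while ((x1 ^ x2) < (1u << 24)) {
            out.push_back((uint8_t)(x2 >> 24));
            x1 <<= 8;
            x2 = (x2 << 8) | 0xFF;
        }
    }
    void flush() {                      // terminate the interval
        for (int i = 0; i < 4; ++i) { out.push_back((uint8_t)(x2 >> 24)); x2 <<= 8; }
    }
    std::size_t size() const { return out.size(); }
};

int main() {
    BinaryArithmeticEncoder enc;
    for (int i = 0; i < 1000; ++i) enc.encode(i % 8 == 0, 512);  // predictor says P(1)=1/8
    enc.flush();
    std::printf("encoded 1000 bits into %zu bytes\n", enc.size());
}
```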
  • Chapter 2 HISTORY and DEVELOPMENT of MILITARY LASERS
    History and Development of Military Lasers. Chapter 2: HISTORY AND DEVELOPMENT OF MILITARY LASERS. JACK B. KELLER, JR*. Contents: INTRODUCTION; INVENTING THE LASER; MILITARIZING THE LASER; SEARCHING FOR HIGH-ENERGY LASER WEAPONS; SEARCHING FOR LOW-ENERGY LASER WEAPONS; RETURNING TO HIGHER ENERGIES; SUMMARY. *Lieutenant Colonel, US Army (Retired); formerly, Foreign Science Information Officer, US Army Medical Research Detachment-Walter Reed Army Institute of Research, 7965 Dave Erwin Drive, Brooks City-Base, Texas 78235. Biomedical Implications of Military Laser Exposure. INTRODUCTION: This chapter will examine the history of the laser, from theory to demonstration, for its impact upon the US military. In the field of military science, there was early recognition that lasers can be visually and cutaneously hazardous to military personnel—hazards documented in detail elsewhere in this volume—and that such hazards must be mitigated to ensure military personnel safety and mission success. At odds with this recognition was the desire to harness the laser’s potential application to a wide spectrum of military tasks. This chapter focuses on the history and development of laser systems that, when used, necessitate highly specialized biomedical research as described throughout this volume. Military advantage is greatest when details are concealed from real or potential adversaries (eg, through classification). Classification can remain in place long after a program is aborted, if warranted to conceal technological details or pathways not obvious or easily deduced but that may be relevant to future developments. Thus, many details regarding developmental military laser systems cannot be made public; their descriptions here are necessarily vague. Once fielded, system details usually, but not always, become public. Laser systems identified here represent various evolutionary states of the art in laser technology.
  • Mcgraw-Hill New York Chicago San Francisco Lisbon London Madrid Mexico City Milan New Delhi San Juan Seoul Singapore Sydney Toronto Mcgraw-Hill Abc
    Streaming Media Demystified. Michael Topic. McGraw-Hill: New York Chicago San Francisco Lisbon London Madrid Mexico City Milan New Delhi San Juan Seoul Singapore Sydney Toronto. Copyright © 2002 by The McGraw-Hill Companies, Inc. All rights reserved. Manufactured in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher. 0-07-140962-9 The material in this eBook also appears in the print version of this title: 0-07-138877-X. All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after every occurrence of a trademarked name, we use names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of infringement of the trademark. Where such designations appear in this book, they have been printed with initial caps. McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales promotions, or for use in corporate training programs. For more information, please contact George Hoare, Special Sales, at [email protected] or (212) 904-4069. TERMS OF USE This is a copyrighted work and The McGraw-Hill Companies, Inc. (“McGraw-Hill”) and its licensors reserve all rights in and to the work. Use of this work is subject to these terms.
  • Adaptive Weighing of Context Models for Lossless Data Compression
    Adaptive Weighing of Context Models for Lossless Data Compression. Matthew V. Mahoney, Florida Institute of Technology CS Dept., 150 W. University Blvd., Melbourne FL 32901, [email protected]. Technical Report CS-2005-16. Abstract: Until recently the state of the art in lossless data compression was prediction by partial match (PPM). A PPM model estimates the next-symbol probability distribution by combining statistics from the longest matching contiguous contexts in which each symbol value is found. We introduce a context mixing model which improves on PPM by allowing contexts which are arbitrary functions of the history. Each model independently estimates a probability and confidence that the next bit of data will be 0 or 1. Predictions are combined by weighted averaging. After a bit is arithmetic coded, the weights are adjusted along the cost gradient in weight space to favor the most accurate models. Context mixing compressors, as implemented by the open source PAQ project, are now top ranked on several independent benchmarks. 1. Introduction: ... optimal codes (within one bit) are known and can be generated efficiently, for example Huffman codes (Huffman 1952) and arithmetic codes (Howard and Vitter 1992). 1.1. Text Compression and Natural Language Processing: An important subproblem of machine learning is natural language processing. Humans apply complex language rules and vast real-world knowledge to implicitly model natural language. For example, most English speaking people will recognize that p(recognize speech) > p(reckon eyes peach). Unfortunately we do not know any algorithm that estimates these probabilities as accurately as a human. Shannon (1950) estimated that the entropy or information content of written English is about one bit per character
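The weighted combination of bit predictions and the gradient-based weight update summarized in this abstract are the core of the context-mixing idea the thesis imports from PAQ. The sketch below illustrates it using the logistic mixing variant of later PAQ versions (covered in Section 3.1.4) rather than the confidence-weighted averaging of the 2005 report; the learning rate and the two constant toy models are arbitrary assumptions.

```cpp
// Logistic mixing sketch: several models each predict P(next bit = 1);
// the mixer combines them in the stretched (logit) domain and updates its
// weights by online gradient descent on the coding cost after each bit.
#include <cmath>
#include <cstdio>
#include <vector>

static double stretch(double p) { return std::log(p / (1.0 - p)); }
static double squash(double x)  { return 1.0 / (1.0 + std::exp(-x)); }

class Mixer {
    std::vector<double> w;          // one weight per model
    std::vector<double> st;         // stretched inputs of the current step
    double lr;                      // learning rate (assumed value)
public:
    explicit Mixer(std::size_t n, double learningRate = 0.02)
        : w(n, 0.0), st(n, 0.0), lr(learningRate) {}

    // Combine the model predictions p_i (each in (0,1)) into one prediction.
    double mix(const std::vector<double>& p) {
        double dot = 0.0;
        for (std::size_t i = 0; i < p.size(); ++i) {
            st[i] = stretch(p[i]);
            dot += w[i] * st[i];
        }
        return squash(dot);
    }

    // After the bit is known (and arithmetic coded), move the weights along
    // the negative gradient of the coding cost -log P(bit).
    void update(int bit, double mixed) {
        double err = bit - mixed;
        for (std::size_t i = 0; i < w.size(); ++i) w[i] += lr * err * st[i];
    }
};

int main() {
    Mixer mixer(2);
    // Two toy models: one always predicts 0.9, one always predicts 0.3.
    for (int i = 0; i < 10000; ++i) {
        int bit = (i % 10) != 0;                    // data is mostly ones
        double p = mixer.mix({0.9, 0.3});
        mixer.update(bit, p);
        if (i % 2500 == 0) std::printf("step %d: P(1) = %.3f\n", i, p);
    }
}
```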
  • A Machine Learning Perspective on Predictive Coding with PAQ
    A Machine Learning Perspective on Predictive Coding with PAQ. Byron Knoll & Nando de Freitas, University of British Columbia, Vancouver, Canada, fknoll,[email protected]. August 17, 2011 (arXiv:1108.3298v1 [cs.LG]). Abstract: PAQ8 is an open source lossless data compression algorithm that currently achieves the best compression rates on many benchmarks. This report presents a detailed description of PAQ8 from a statistical machine learning perspective. It shows that it is possible to understand some of the modules of PAQ8 and use this understanding to improve the method. However, intuitive statistical explanations of the behavior of other modules remain elusive. We hope the description in this report will be a starting point for discussions that will increase our understanding, lead to improvements to PAQ8, and facilitate a transfer of knowledge from PAQ8 to other machine learning methods, such as recurrent neural networks and stochastic memoizers. Finally, the report presents a broad range of new applications of PAQ to machine learning tasks including language modeling and adaptive text prediction, adaptive game playing, classification, and compression using features from the field of deep learning. 1 Introduction: Detecting temporal patterns and predicting into the future is a fundamental problem in machine learning. It has gained great interest recently in the areas of nonparametric Bayesian statistics (Wood et al., 2009) and deep learning (Sutskever et al., 2011), with applications to several domains including language modeling and unsupervised learning of audio and video sequences. Some researchers have argued that sequence prediction is key to understanding human intelligence (Hawkins and Blakeslee, 2005). The close connections between sequence prediction and data compression are perhaps underappreciated within the machine learning community.
  • Arxiv:2011.13544V1 [Cs.CV] 27 Nov 2020
    Patch-VQ: ‘Patching Up’ the Video Quality Problem. Zhenqiang Ying1*, Maniratnam Mandal1*, Deepti Ghadiyaram2†, Alan Bovik1†. 1University of Texas at Austin, 2Facebook AI. fzqying, [email protected], [email protected], [email protected]. Abstract: No-reference (NR) perceptual video quality assessment (VQA) is a complex, unsolved, and important problem to social and streaming media applications. Efficient and accurate video quality predictors are needed to monitor and guide the processing of billions of shared, often imperfect, user-generated content (UGC). Unfortunately, current NR models are limited in their prediction capabilities on real-world, “in-the-wild” UGC video data. To advance progress on this problem, we created the largest (by far) subjective video quality dataset, containing 39,000 real-world distorted videos and 117,000 space-time localized video patches (‘v-patches’), and 5.5M human perceptual quality annotations. Using this, we created two unique NR-VQA models: (a) a local-to-global region-based NR VQA architecture (called PVQ) that learns to predict global video quality and achieves state-of-the-art performance on 3 UGC datasets, and (b) a first-of-a-kind space-time video quality mapping engine (called PVQ Mapper) that helps localize and visualize perceptual distortions in space and time. Fig. 1: Modeling local to global perceptual quality: From each video, we extract three spatio-temporal video patches (Sec. 3.1), which along with their subjective scores, are fed to the proposed video quality model. By integrating spatial (2D) and spatio-temporal (3D) quality-sensitive features, our model learns spatial and temporal distortions, and can robustly predict both global and local quality, a temporal quality series, as well as space-time quality maps (Sec. 5.2). Best viewed in color. ... transform the processing and interpretation of videos on smartphones, social media, telemedicine, surveillance, and vision-guided robotics, in ways that FR models are un-
  • Gendered Violence and Safety: a Contextual Approach to Improving Security in Women’S Facilities
    The author(s) shown below used Federal funds provided by the U.S. Department of Justice and prepared the following final report: Document Title: Gendered Violence and Safety: A Contextual Approach to Improving Security in Women’s Facilities. Author: Barbara Owen, Ph.D., James Wells, Ph.D., Joycelyn Pollock, Ph.D., J.D., Bernadette Muscat, Ph.D., Stephanie Torres, M.S. Document No.: 225338. Date Received: December 2008. Award Number: 2006-RP-BX-0016. This report has not been published by the U.S. Department of Justice. To provide better customer service, NCJRS has made this Federally-funded grant final report available electronically in addition to traditional paper copies. Opinions or points of view expressed are those of the author(s) and do not necessarily reflect the official position or policies of the U.S. Department of Justice. Final Report: GENDERED VIOLENCE AND SAFETY: A CONTEXTUAL APPROACH TO IMPROVING SECURITY IN WOMEN’S FACILITIES, November 2008. A contextual approach to improving security in women’s facilities. Part I of III. Gendered Violence and Safety: Improving security in women’s facilities. Barbara Owen, Ph.D., California State University, Fresno; James Wells, Ph.D., Commonwealth Research Consulting, Inc.; Joycelyn Pollock, Ph.D., J.D., Texas State University-San Marcos; Bernadette Muscat, Ph.D.
  • Point Cloud Compression and Low Latency Streaming
    POINT CLOUD COMPRESSION AND LOW LATENCY STREAMING A Thesis in Electrical Engineering Presented to the Faculty of the University of Missouri-Kansas City in partial fulfillment of the requirements for the degree MASTER OF SCIENCE by KARTHIK AINALA Bachelor of Technology (B.Tech) in Electronics and Communication Engineering, Amrita University, India 2012 Kansas City, Missouri 2017 © 2017 KARTHIK AINALA ALL RIGHTS RESERVED POINT CLOUD COMPRESSION AND LOW LATENCY STREAMING Karthik Ainala, Candidate for the Master of Science Degree University of Missouri–Kansas City, 2017 ABSTRACT With the commoditization of the 3D depth sensors, we can now very easily model real objects and scenes into digital domain which then can be used for variety of application in gaming, animation, virtual reality, immersive communication etc. Modern sensors are capable of capturing objects with very high detail and scene of large area and thus might include millions of points. These point data usually occupy large storage space or require high bandwidth in case of real-time transmission. Thus, an efficient compression of these huge point cloud data points becomes necessary. Point clouds are often organized and compressed with octree based structures. The octree subdivision sequence is often serialized in a sequence of bytes that are subsequently entropy encoded using range coding, arithmetic coding or other methods. Such octree based algorithms are efficient only up to a certain level of detail as they have an exponential run-time in the number of subdivision levels. In addition, the compression efficiency diminishes when the number of subdivision levels increases. In this work we present an alternative way to partition the point cloud data.
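The octree serialization mentioned in this abstract, in which every internal node is written as one occupancy byte that is subsequently entropy coded, can be sketched as follows. This is a generic illustration of that common approach, not code from the thesis summarized above; the unit bounding cube and the fixed recursion depth are assumptions.

```cpp
// Octree occupancy-byte serialization sketch: each node emits one byte whose
// bits mark which of its eight child octants contain points; occupied
// children are recursed into until a maximum depth is reached.  The byte
// stream is what a range or arithmetic coder would then compress.
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Point { float x, y, z; };

void serialize(const std::vector<Point>& pts, float cx, float cy, float cz,
               float half, int depth, std::vector<uint8_t>& out) {
    if (pts.empty() || depth == 0) return;
    std::array<std::vector<Point>, 8> child;
    for (const Point& p : pts) {
        int idx = (p.x >= cx) | ((p.y >= cy) << 1) | ((p.z >= cz) << 2);
        child[idx].push_back(p);
    }
    uint8_t occupancy = 0;
    for (int i = 0; i < 8; ++i) if (!child[i].empty()) occupancy |= uint8_t(1u << i);
    out.push_back(occupancy);
    float q = half / 2.0f;
    for (int i = 0; i < 8; ++i)
        serialize(child[i],
                  cx + ((i & 1) ? q : -q),
                  cy + ((i & 2) ? q : -q),
                  cz + ((i & 4) ? q : -q),
                  q, depth - 1, out);
}

int main() {
    std::vector<Point> cloud = {{0.1f, 0.2f, 0.3f}, {-0.4f, 0.5f, -0.6f}, {0.7f, -0.8f, 0.9f}};
    std::vector<uint8_t> bytes;
    serialize(cloud, 0.0f, 0.0f, 0.0f, 1.0f, 6, bytes);   // unit cube, 6 levels
    std::printf("%zu occupancy bytes for %zu points\n", bytes.size(), cloud.size());
}
```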
  • Building Multimedia Security Applications in the MPEG Reconfigurable Video Coding (RVC) Framework
    Building Multimedia Security Applications in the MPEG Reconfigurable Video Coding (RVC) Framework. Junaid Jameel Ahmad, Ihab Amer, Marco Mattavelli, Shujun Li. Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland; German University in Cairo (GUC), Egypt; University of Konstanz, Germany. ABSTRACT: Although used by most of system developers, imperative languages are known for not being able to provide easily reconfigurable, platform independent and strictly modular applications. ISO/IEC has recently developed a new video coding standard called Reconfigurable Video Coding (RVC), with the objective of providing modular and concurrent specifications of complex video codecs that constitute a better starting point for implementation of applications using video compression. Multimedia security applications are traditionally developed in imperative languages mainly because the required multimedia codecs were only available in specification and implementations based on imperative languages. Therefore, aside from the technical challenges inherited from multimedia codecs, multimedia security applications also face a number of other challenges which are only specific to them. Keywords: Reconfigurable Video Coding (RVC), video tool library (VTL), Crypto Tools Library (CTL), joint multimedia encryption-encoding (JMEE), digital watermarking, MPEG, H.264/AVC, JPEG. 1. INTRODUCTION: In recent years, the security of multimedia applications has become an important requirement that creators, producers, distributors and consumers of multimedia products cannot ignore anymore. This is because nowadays it is much easier for attackers to make pirate copies, to crack commercial multimedia systems, to attack online multimedia services, and so forth. So as to ensure the protection of multimedia content, a number of multimedia security schemes
  • Affect-Based Indexing and Retrieval of Multimedia Data Ching Hau Chan
    Affect-Based Indexing and Retrieval of Multimedia Data Ching Hau Chan A dissertation submitted in fulfillment of the requirements for the award of Doctor of Philosophy (Ph.D.) Supervisor: Dr. Gareth J.F. Jones School of Computing October 2006 Declaration I hereby certify that this material, which I now submit for assessment in the programme of study leading to the award of Doctor of Philosophy, is entirely my own work and has not been taken from the work of others save and to the extent that such work has been cited and acknowledged within the text of my own work. Title: Affect-based Indexing and Retrieval System of Multimedia Data Abstract Digital multimedia systems are creating many new opportunities for rapid access to content archives. In order to explore these collections using search, the content must be annotated with significant features. An important and often overlooked aspect of human interpretation of multimedia data is the affective dimension. The hypothesis of this thesis is that affective labels of content can be extracted automatically from within multimedia data streams, and that these can then be used for content-based retrieval and browsing. A novel system is presented for extracting affective features from video content and mapping it onto a set of keywords with predetermined emotional interpretations. These labels are then used to demonstrate affect-based retrieval on a range of feature films. Because of the subjective nature of the words people use to describe emotions, an approach towards an open vocabulary query system utilizing the electronic lexical database WordNet is also presented.
  • Hp I-Paq Cell Phone Users Guide.Pdf
    iPAQ Data Messenger Product Guide © Copyright 2008 Hewlett-Packard Development Company, L.P. HP iPAQ products are powered by Microsoft® Windows Mobile® 6.1 Professional with Messaging and Security Feature Pack. Microsoft Windows, the Windows logo, Outlook, Windows Mobile Device Center, and ActiveSync are trademarks of Microsoft Corporation in the U.S. and other countries. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. SD Logo is a trademark of its proprietor. Bluetooth® is a trademark owned by its proprietor and used by Hewlett-Packard Development Company, L.P. under license. Google and Google Maps™ are trademarks of Google Inc. All other product names mentioned herein may be trademarks of their respective companies. Hewlett-Packard Company shall not be liable for technical or editorial errors or omissions contained herein. The information is provided “as is” without warranty of any kind and is subject to change without notice. The warranties for Hewlett-Packard products are set forth in the express limited warranty statements accompanying such products. Nothing herein should be construed as an additional warranty. This document contains proprietary information that is protected by copyright. No part of this document may be photocopied, reproduced, or translated to another language without the prior written consent of Hewlett-Packard Development Company, L.P. First Edition December 2008 Document Part Number: 500161-001. Table of contents: 1 Welcome to your HP iPAQ; 2 Box contents; 3 Components: Front panel components 3; Top and bottom panel components 4; Left and right side components
  • From Optimised Inpainting with Linear Pdes Towards Competitive Image Compression Codecs
    In T. Bräunl, B. McCane, M. Rivera, X. Yu (Eds.): Image and Video Technology. Lecture Notes in Computer Science, Vol. 9431, 63-74, Springer, Cham, 2016. The final publication is available at link.springer.com. From Optimised Inpainting with Linear PDEs towards Competitive Image Compression Codecs. Pascal Peter1, Sebastian Hoffmann1, Frank Nedwed1, Laurent Hoeltgen2, and Joachim Weickert1. 1 Mathematical Image Analysis Group, Faculty of Mathematics and Computer Science, Campus E1.7, Saarland University, 66041 Saarbrücken, Germany. {peter,hoffmann,nedwed,weickert}@mia.uni-saarland.de 2 Applied Mathematics and Computer Vision Group, Brandenburg University of Technology, Platz der Deutschen Einheit 1, 03046 Cottbus, Germany [email protected] Abstract. For inpainting with linear partial differential equations (PDEs) such as homogeneous or biharmonic diffusion, sophisticated data optimisation strategies have been found recently. These allow high-quality reconstructions from sparse known data. While they have been explicitly developed with compression in mind, they have not entered actual codecs so far: Storing these optimised data efficiently is a nontrivial task. Since this step is essential for any competitive codec, we propose two new compression frameworks for linear PDEs: Efficient storage of pixel locations obtained from an optimal control approach, and a stochastic strategy for a locally adaptive, tree-based grid. Surprisingly, our experiments show that homogeneous diffusion inpainting can surpass its often favoured biharmonic counterpart in compression. Last but not least, we demonstrate that our linear approach is able to beat both JPEG2000 and the nonlinear state-of-the-art in PDE-based image compression. Keywords: linear diffusion inpainting, homogeneous, biharmonic, image compression, probabilistic tree-densification 1 Introduction For a given image, the core task of image compression is to store a small amount of data from which the original can be reconstructed with high accuracy.
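The linear PDE inpainting that this work builds on can be stated compactly. The formulation below is the standard one for homogeneous and biharmonic diffusion inpainting; the symbols u (reconstruction), f (image), K (stored pixel set) and Omega (image domain) are shorthand chosen here, not notation quoted from the paper.

```latex
% Homogeneous diffusion inpainting: reconstruct u from the stored data f|_K
% on the known pixel set K, keeping u harmonic elsewhere:
\Delta u = 0 \ \text{ on } \Omega \setminus K, \qquad u = f \ \text{ on } K
% Biharmonic diffusion inpainting replaces the Laplacian by its square:
\Delta^2 u = 0 \ \text{ on } \Omega \setminus K, \qquad u = f \ \text{ on } K
% (Reflecting boundary conditions on the image boundary are typically imposed.)
```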