Inpainting-Based Compression of Visual Data


2018 Data Compression Conference, Snowbird, UT, USA, March 27–30, 2018

Joachim Weickert
Faculty of Mathematics and Computer Science
Saarland University, Saarbrücken, Germany

Long-term research funded by the Gottfried Wilhelm Leibniz Prize and the ERC Advanced Grant INCOVID.
© 2018 Joachim Weickert

Collaborators

Joint work with Naoufal Amrani, Sarah Andris, Egil Bae, Zakaria Belhachmi, Matthias Berg, Pinak Bheed, Andrés Bruhn, Dorin Bucur, Irena Galić, Sebastian Hoffmann, Lena Karos, Markus Mainberger, Pascal Peter, Gerlind Plonka, Christian Schmaltz, Joan Serra-Sagristà, Ching Hoo Tang, and Martin Welk.

Introduction (1): Transform-Based Codecs for Lossy Image and Video Compression

• Widely used lossy codecs such as JPEG, JPEG 2000, and HEVC are transform-based.
• They rely on a sparse representation in a transform domain (DCT, wavelets).
• Sparse approximation in terms of an orthogonal basis offers many benefits:
  – clean theory
  – fast algorithms
• However, quality deteriorates as the representation becomes sparser.

Introduction (2): Quality Deterioration under JPEG Compression

[Figure: the same test image shown as the original and at compression rates of 10:1, 40:1, and 80:1.]

Introduction (3): Further Improvements through Transform-Based Codecs

• carefully engineered by many experts over many years
• the approaches have become highly sophisticated
• it is increasingly difficult to achieve further fundamental improvements by staying within the universe of transform-based ideas

Things that Transform-Based Codecs Ignore

• a more semantic, higher-level representation of image structures:
  – the need for features (e.g. edges) rather than pixels
  – the filling-in effect of our visual system (Werner 1935)
• approximation quality beyond pixelwise quality measures (MSE, PSNR), e.g. respecting perceptual similarities of stochastic textures

Introduction (4): An Emerging Alternative: Inpainting-Based Compression

• Store only a very small, carefully selected subset of the data.
• Reconstruct the missing data with a suitable inpainting process.

What is Inpainting?

• an interpolation strategy for missing or degraded data
• so far mainly used for repairing smaller parts of an image
• two classes are relevant for us:
  – PDE-based inpainting (Masnou/Morel 1998, Bertalmío et al. 2000)
  – exemplar-based inpainting (Efros/Leung 1999, Criminisi et al. 2004)

Introduction (5): PDE-Based Inpainting

[Figure. Left: original image with severe degradations by overlaid text, from Bertalmío et al. (2000). Right: result of inpainting with an anisotropic diffusion process.]

• fills in with a partial differential equation (PDE), e.g. a diffusion process
• the uncorrupted data serve as fixed boundary conditions
• the PDE propagates this information into the inpainting domain
• a local approach that can be justified from the statistics of natural images
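To make the filling-in mechanism concrete, here is a minimal NumPy sketch of inpainting with the simplest such PDE, homogeneous diffusion (the operator also used on the edge-based reconstruction slide below). It is only an illustration under stated assumptions, not the solvers used in the actual codecs: unknown pixels are repeatedly replaced by the average of their four neighbours (a Jacobi iteration), while the stored pixels stay clamped as Dirichlet data. The function name and the toy test image are hypothetical.

```python
import numpy as np

def diffusion_inpaint(f, mask, iterations=5000):
    """Toy homogeneous diffusion inpainting.

    f    : 2D float array; its values are only trusted where mask is True
    mask : boolean array, True at the stored (known) pixels

    Each sweep replaces every pixel by the average of its four
    neighbours (Jacobi iteration); the known pixels are clamped, so the
    fixed point solves the discrete Laplace equation on the unknown
    pixels with the stored data as Dirichlet boundary conditions.
    """
    u = f.astype(float).copy()
    for _ in range(iterations):
        p = np.pad(u, 1, mode="edge")  # replicate border pixels (no-flux image boundary)
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        u = np.where(mask, f, avg)     # stored pixels stay fixed
    return u

# toy usage: reconstruct a smooth image from 2 percent random pixels
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
img = np.sin(xx / 8.0) + np.cos(yy / 11.0)
rng = np.random.default_rng(0)
mask = rng.random((h, w)) < 0.02
reconstruction = diffusion_inpaint(img * mask, mask)
```

The fixed point of this iteration is exactly the "propagation from boundary data" behaviour described above; production codecs replace the slow Jacobi sweep with fast multigrid-type solvers and often with anisotropic diffusion operators.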
Introduction (6): Exemplar-Based Inpainting

[Figure. Left: original image, from Criminisi et al. (2004). Right: inpainting result with Criminisi's exemplar-based approach.]

• clever copy-and-paste from similar patches within the image
• a nonlocal approach that exploits the intrinsic statistical properties of the image
• good for texture-dominated images
• computationally more expensive than PDE-based inpainting
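For contrast, below is a toy version of exemplar-based inpainting in the spirit of Efros/Leung (1999). It is not the algorithm of Criminisi et al. (2004) or the fast space-filling-curve variant of Dahmen et al. (2017); the function name, default parameters, and the margin assumption are illustrative choices. Each unknown pixel on the fill front is synthesised by searching a sample of fully known patches for the best masked match and copying its centre.

```python
import numpy as np

def exemplar_inpaint(f, mask, patch=7, n_sources=500, seed=0):
    """Toy exemplar-based inpainting in the spirit of Efros/Leung (1999).

    f    : 2D float array, trusted only where mask is True
    mask : boolean array, True at known pixels; this toy assumes the
           unknown region keeps a margin of patch//2 to the image border
    """
    rng = np.random.default_rng(seed)
    u, known = f.astype(float).copy(), mask.copy()
    r = patch // 2
    h, w = u.shape

    # candidate sources: a random sample of fully known patch centres
    cand = [(i, j) for i in range(r, h - r) for j in range(r, w - r)
            if known[i - r:i + r + 1, j - r:j + r + 1].all()]
    pick = rng.choice(len(cand), size=min(n_sources, len(cand)), replace=False)
    cand = [cand[k] for k in pick]

    while not known.all():
        # fill-front pixel: the unknown pixel with the most known neighbours
        ys, xs = np.where(~known)
        scores = [known[y - 1:y + 2, x - 1:x + 2].sum() for y, x in zip(ys, xs)]
        k = int(np.argmax(scores))
        y, x = int(ys[k]), int(xs[k])

        tgt = u[y - r:y + r + 1, x - r:x + r + 1]
        msk = known[y - r:y + r + 1, x - r:x + r + 1]

        def cost(c):
            # masked sum of squared differences to the target patch
            src = u[c[0] - r:c[0] + r + 1, c[1] - r:c[1] + r + 1]
            return ((src - tgt)[msk] ** 2).sum()

        by, bx = min(cand, key=cost)
        u[y, x] = u[by, bx]            # copy the centre of the best match
        known[y, x] = True
    return u

# toy usage: fill a square hole in a synthetic texture
h, w = 48, 48
yy, xx = np.mgrid[0:h, 0:w]
texture = np.sin(0.9 * xx) * np.sin(0.7 * yy)
mask = np.ones((h, w), dtype=bool)
mask[18:30, 18:30] = False
filled = exemplar_inpaint(texture * mask, mask)
```

Even this crude version makes the last bullet above plausible: every synthesised pixel triggers a search over many candidate patches, which is why exemplar-based inpainting is considerably more expensive than local PDE propagation.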
Introduction (7): How Sparse can the Data be for PDE-Based Inpainting?

[Figure: an image in which only 2 percent of all pixels are known. Can you recognise what is depicted?]

Introduction (8): Is Exemplar-Based Inpainting also Useful for Sparse Representations?

[Figure: texture synthesis with the exemplar-based approach of Criminisi et al. (2004), using our fast algorithm with space-filling curves (Dahmen et al. 2017).]

Introduction (9): What Happens if We Select Semantically More Important Data?

[Figure: image reconstruction from edges. Left: original image, 237 × 316 pixels. Middle: edge set; only the left and right neighbours of each Canny edge are stored. Right: inpainting with the homogeneous diffusion equation.]

Introduction (10): Chances for Inpainting-Based Compression

• a spatial representation is more intuitive than transform-based coding
• respects more properties of the human visual system:
  – can use semantically important features such as edges
  – PDE-based inpainting resembles biological filling-in mechanisms
  – exemplar-based inpainting captures the statistics of textures, while renouncing pixelwise accuracy
• may go beyond the limits of sparse approximations in orthogonal bases
• a generic framework for various data types:
  – 1D, 2D, and 3D signals
  – still images and videos
  – scalar-, vector-, and matrix-valued data
  – surface data and graphs
  – dedicated codecs for specific applications

Introduction (11): Key Challenges for Inpainting-Based Compression

1. Which data should be kept?
2. What are the most suitable inpainting processes?
3. How can the selected pixels be encoded in an efficient way?
4. Can one design fast algorithms for encoding and decoding?

These Problems are Highly Interrelated:

• The optimality of the data depends on the inpainting process.
• Suboptimal pixels can pay off if they are cheaper to encode.
• There is a natural tradeoff between high efficiency and high quality.

Introduction (12): How Does this Differ from Related Approaches?

Classical Inpainting Methods:

• PDE-based inpainting: Masnou/Morel 1998, Bertalmío et al. 2000, …