Misalignment Correction and Lossless Compression of Pixelated Images


MISALIGNMENT CORRECTION AND LOSSLESS COMPRESSION OF PIXELATED IMAGES

By
Md. Ahasan Kabir

MASTER OF SCIENCE IN INFORMATION AND COMMUNICATION TECHNOLOGY
INSTITUTE OF INFORMATION AND COMMUNICATION TECHNOLOGY
BANGLADESH UNIVERSITY OF ENGINEERING AND TECHNOLOGY

Declaration

It is hereby declared that this thesis or any part of it has not been submitted elsewhere for the award of any degree or diploma.

Signature of the Candidate
Md. Ahasan Kabir
1014312013
IICT, BUET

Dedication

THIS THESIS IS DEDICATED TO MY FAMILY

Table of Contents

Declaration
Dedication
Table of Contents
List of Figures
List of Tables
List of Abbreviations
Acknowledgement
Abstract
1. Introduction
   1.1. Overview
   1.2. Pixelated Images
   1.3. Image Compression
   1.4. Motivation of the Thesis
   1.5. Objectives of the Thesis
   1.6. Contribution of this Thesis
   1.7. Thesis Organization
2. Literature Review
   2.1. Overview
   2.2. Data Redundancy
        2.2.1. Coding Redundancy
        2.2.2. Inter-Pixel Redundancy
        2.2.3. Psychovisual Redundancy
   2.3. Types of Image Compression
        2.3.1. Lossless Compression
        2.3.2. Lossy Compression
   2.4. Image Compression Formats
   2.5. Literature Review on Image Compression
   2.6. Summary
3. Pixelated System and Misalignment Correction
   3.1. Overview
   3.2. Pixelated Communication System
   3.3. Misalignment Effect on Pixelated System
   3.4. Proposed Misalignment Correction Method
        3.4.1. Capture Image
        3.4.2. Noise Removal
        3.4.3. Finding Edge
        3.4.4. Finding Border Line
        3.4.5. Finding Corner Point
        3.4.6. Correcting Shape and Orientation
   3.5. Result and Discussion
   3.6. Summary
4. Existing Image Compression Techniques
   4.1. Overview
   4.2. JPEG-LS
   4.3. SPIHT
   4.4. Wavelet Transforms
   4.5. SPIHT Coding Algorithm
   4.6. DPCM
   4.7. Arithmetic Coding
   4.8. Huffman Coding
   4.9. Run-Length Coding
   4.10. Summary
5. Proposed ETEC Algorithm
   5.1. Overview
   5.2. Proposed ETEC Algorithm
   5.3. Result and Discussion
   5.4. Summary
6. Proposed PTEC Algorithm
   6.1. Overview
   6.2. Proposed PTEC Algorithm
   6.3. Result and Discussion
   6.4. Summary
7. Practical Demonstration
   7.1. Overview
   7.2. Implementation
        7.2.1. System Description
        7.2.2. Transmitter
        7.2.3. Receiver
   7.3. Summary
8. Conclusion
   8.1. Conclusion
   8.2. Limitation
   8.3. Future Work
References

List of Figures

2.1 Two-stage near-lossless wavelet coder.
2.2 Causal template.
2.3 Prediction model of DPCM.
2.4 Block diagram of discrete color image compression.
3.1 Block diagram of the pixelated wireless optical channel.
3.2 Comparison between different types of edge detection methods and the proposed method.
3.3 Received image at the receiver: (a) square, (b) rectangle, or (c) rhombic.
3.4 Trapezoidal-shaped desired image to rectangular shape.
3.5 Trapezium-shaped desired image to rectangular shape.
3.6 Reconstruction of misalignment error of the captured image.
4.1 Block diagram of JPEG-LS.
4.2 An illustration of image pixels.
4.3 Flow chart of the SPIHT method.
4.4 Block diagram of the sorting-pass algorithm.
4.5 Block diagram of the refinement-pass algorithm.
4.6 Block diagram of DPCM encoder and decoder.
4.7 Example of the arithmetic coding process.
4.8 Huffman encoding technique.
5.1 Example of a pixelated image block.
5.2 Edges in the pixelated image block.
5.3 Modified J-bit encoding process.
5.4 Block diagram of the proposed ETEC algorithm.
5.5 Tested pixelated images.
5.6 Effect of threshold on compression ratio in the ETEC (with arithmetic coding) technique.
5.7 Effect of threshold on compression ratio in the ETEC (with Huffman coding) technique.
5.8 Comparison of compression ratio for pixelated images.
5.9 Comparison of bits per pixel for pixelated images.
5.10 Comparison of percentage saving of storage for pixelated images.
5.11 Standard test images: (a) Lena, (b) Peppers, (c) Ankle, (d) Brain, (e) Mri_top, (f) Boat, (g) Barbara, (h) House.
5.12 Comparison of bits per pixel for non-pixelated images.
5.13 Comparison of compression ratio for non-pixelated images.
5.14 Comparison of percentage saving of storage for non-pixelated images.
6.1 Illustration of hierarchical decomposition.
6.2 Input image and its decomposition image.
6.3 Ordering of the causal neighbors.
6.4 Ordering of the causal neighbors to predict X_oe and X_oo, respectively.
6.5 Block diagram of the proposed PTEC algorithm.
6.6 Comparison of bits per pixel for pixelated images.
6.7 Comparison of simulation time for pixelated images.
6.8 Comparison of bits per pixel for non-pixelated images.
6.9 Comparison of compression ratio for non-pixelated images.
7.1 A schematic of a pixelated system.
7.2 Transmitter block diagram of a pixelated system.
7.3 Transmitted image frame.
7.4 Receiver block diagram to compress pixelated images.
7.5 Illustration of received images.

List of Tables

2.1 Technical scenarios of a few existing lossless and near-lossless image compression algorithms.
4.1 Probabilities and the initial subintervals of symbols.
5.1 Effect of threshold on compression ratio in ETEC with arithmetic coding.
5.2 Effect of threshold on compression ratio in ETEC with Huffman coding.
5.3 Effect of threshold on simulation time in ETEC with arithmetic coding.
5.4 Effect of threshold on simulation time in ETEC with Huffman coding.
5.5 Comparison of bits/pixel and compression ratio (CR).
5.6 Comparison of storage saving and computation time.
5.7 Comparison of compression ratio and simulation time for non-pixelated images.
5.8 Comparison of percentage saving and simulation time for non-pixelated images.
6.1 Comparison of bits per pixel and compression ratio for pixelated images.
6.2 Comparison of percentage saving and simulation time for pixelated images.
6.3 Comparison of bits per pixel and compression ratio for non-pixelated images.
6.4 Comparison of percentage saving and simulation time for non-pixelated images.

List of Abbreviations

HDTV      High-Definition Television
LCD       Liquid Crystal Display
FOV       Field of View
JPEG      Joint Photographic Experts Group
MPEG      Moving Picture Experts Group
ZIP       Zone Improvement Plan
JPEG-LS   Joint Photographic Experts Group Lossless
SPIHT     Set Partitioning in Hierarchical Trees
AQC       Adaptive Quantization Coding
LZW       Lempel–Ziv–Welch
ETEC      Edge-based Transformation and Entropy Coding
PTEC      Prediction-based Transformation and Entropy Coding
DPCM      Differential Pulse Code Modulation
BMP       Bitmap
PNG       Portable Network Graphics
TIFF      Tagged Image File Format
JPEG2000  Joint Photographic Experts Group 2000
DCT       Discrete Cosine Transform
EZW       Embedded Zerotree Wavelet
IBAQC     Intensity-Based Adaptive Quantizer Coding
DWT       Discrete Wavelet Transform
VT        Visual Threshold
GAP       Gradient Adjustment Predictor
AVIRIS    Airborne Visible Infrared Imaging Spectrometer
LOCO      Low Complexity Lossless Compression
MED       Median Edge Detection
LIS       List of Insignificant Sets
LSP       List of Significant Pixels
LIP       List of Insignificant Pixels

Acknowledgement

All praises are for the almighty Allah for giving me the strength without which I could not have attempted this research work. I would like to express my sense of gratitude towards my honorable thesis supervisor Dr. Md. Rubaiyat Hossain Mondal, Associate Professor, Institute of Information and Communication