Image Resampling

Total pages: 16

File type: PDF, size: 1020 KB

UCAM-CL-TR-261
Technical Report
ISSN 1476-2986
Number 261

Computer Laboratory

Image resampling

Neil Anthony Dodgson

August 1992

15 JJ Thomson Avenue, Cambridge CB3 0FD, United Kingdom
phone +44 1223 763500
http://www.cl.cam.ac.uk/

© 1992 Neil Anthony Dodgson

This technical report is based on a dissertation submitted by the author for the degree of Doctor of Philosophy to the University of Cambridge, Wolfson College.

Technical reports published by the University of Cambridge Computer Laboratory are freely available via the Internet: http://www.cl.cam.ac.uk/TechReports/

Summary

Image resampling is the process of geometrically transforming digital images. This dissertation considers several aspects of the process.

We begin by decomposing the resampling process into three simpler sub-processes: reconstruction of a continuous intensity surface from a discrete image, transformation of that continuous surface, and sampling of the transformed surface to produce a new discrete image.

We then consider the sampling process, and the subsidiary problem of intensity quantisation. Both these are well understood, and we present a summary of existing work, laying a foundation for the central body of the dissertation where the sub-process of reconstruction is studied.

The work on reconstruction divides into four parts, two general and two specific:

1. Piecewise local polynomials: the most studied group of reconstructors. We examine these, and the criteria used in their design. One new derivation is of two piecewise local quadratic reconstructors.

2. Infinite extent reconstructors: we consider these and their local approximations, the problem of finite image size, the resulting edge effects, and the solutions to these problems. Amongst the reconstructors discussed are the interpolating cubic B-spline and the interpolating Bézier cubic. We derive the filter kernels for both of these, and prove that they are the same. Given this kernel we demonstrate how the interpolating cubic B-spline can be extended from a one-dimensional to a two-dimensional reconstructor, providing a considerable speed improvement over the existing method of extension.

3. Fast Fourier transform reconstruction: it has long been known that the fast Fourier transform (FFT) can be used to generate an approximation to perfect scaling of a sample set. Donald Fraser (in 1987) took this result and generated a hybrid FFT reconstructor which can be used for general transformations, not just scaling. We modify Fraser's method to tackle two major problems: its large time and storage requirements, and the edge effects it causes in the reconstructed intensity surface.

4. A priori knowledge reconstruction: first considering what can be done if we know how the original image was sampled, and then considering what can be done with one particular class of image coupled with one particular type of sampling. In this latter case we find that exact reconstruction of the image is possible. This is a surprising result, as this class of images cannot be exactly reconstructed using classical sampling theory.

The final section of the dissertation draws all of the strands together to discuss transformations and the resampling process as a whole. Of particular note here is work on how the quality of different reconstruction and resampling methods can be assessed.

About this document

This report is, apart from some cosmetic changes, the dissertation which I submitted for my PhD degree.
I performed the research for this degree over the period January 1989 through May 1992. The page numbering in this version differs from that of versions prior to 2005 because I have been able to incorporate many of the photographs which were previously unavailable in the online version. Figures 1.4, 2.1, 2.2, 4.34, 7.9, and 7.10 are either incomplete or missing, owing to the way in which the original manuscript was prepared. Appendix C is also missing for the same reason.

Acknowledgements

I would like to thank my supervisor, Neil Wiseman, for his help, advice, and listening ear over the course of this research. I am especially grateful for his confidence in me, even when I had little of my own, and for reading this dissertation and making many helpful comments. I would also like to thank David Wheeler and Paul Heckbert for useful discussions about image resampling and related topics, which clarified and stimulated my research.

Over the last three and a third years, several bodies have given me financial assistance. The Cambridge Commonwealth Trust paid my fees and maintenance for the first three years, for which I am grateful. Wolfson College, through the kind offices of Dr Rudolph Hanka, and the Computer Laboratory have both contributed towards travel, conference, and typing expenses, and I am grateful for these too. Finally, I am extremely grateful to my wife, Catherine Gibbs, for her financial support, without which that last third of a year would have been especially difficult.

This dissertation was typed in by four people, not a pleasant task. Many thanks go to Lynda Murray, Eileen Murray, Annemarie Gibbs, and Catherine Gibbs for taking this on, and winning!

The Rainbow graphics research group deserve many thanks for their support and friendship over my time in Cambridge; they have made the research much more enjoyable. Thanks go to Olivia, Stuart, Jeremy, Heng, Alex, Adrian, Dan, Tim, Rynson, Nicko, Oliver, Martin, Jane, and Neil. Equal, if not more, thanks go to Margaret Levitt and Eileen Murray, who have helped to keep me sane; and also to the other inmates of Austin 4 and 5 for the interesting working environment they provide.

Final heartfelt thanks go to my parents and to Catherine, to whom this dissertation is dedicated. Their support and encouragement have been invaluable in getting me to this point.

Contents

Notation 8
Preface 9

1 Resampling 11
1.1 Introduction 11
1.2 Images 11
1.3 Sources of discrete images 13
1.4 Resampling: the geometric transformation of discrete images 16
1.5 The practical uses of resampling 16
1.6 Optical image warping 18
1.7 Decomposition 18
1.8 How it really works 27
1.9 The literature 29
1.10 Summary 31

2 Sampling 33
2.1 Converting the continuous into the discrete 33
2.2 The problem of discrete intensities 33
2.3 Producing a discrete image 37
2.4 The sampling process 39
2.5 Area vs point sampling 43
2.6 Practical considerations 46
2.7 Anti-aliasing 48
2.8 Sampling which considers the reconstruction method 49
2.9 Summary 50

3 Reconstruction 52
3.1 Classification 52
3.2 Skeleton reconstruction 55
3.3 Perfect interpolation 58
3.4 Piecewise local polynomials 59
3.5 What should we look for in a NAPK reconstructor? 72
3.6 Cubic functions 87
3.7 Higher orders 95
3.8 Summary 96

4 Infinite Extent Reconstruction 98
4.1 Introduction to infinite reconstruction 98
4.2 Edge effects 98
4.3 Sinc and other perfect interpolants 117
4.4 Gaussian reconstructors 123
4.5 Interpolating cubic B-spline 125
4.6 Local approximations to infinite reconstructors 142
4.7 Summary 144

5 Fast Fourier Transform Reconstruction 145
5.1 The reconstructed intensity surface 145
5.2 DFT sample rate changing 148
5.3 FFT intensity surface reconstruction 154
5.4 Problems with the FFT reconstructor 156
5.5 Summary 169

6 A priori knowledge reconstruction 171
6.1 Knowledge about the sampler 171
6.2 Knowledge about the image content 174
6.3 One-dimensional sharp edge reconstruction 175
6.4 Two-dimensional sharp edge reconstruction 191
6.5 Summary 210

7 Transforms & Resamplers 211
7.1 Practical resamplers 211
7.2 Transform dimensionality 213
7.3 Evaluation of reconstructors and resamplers 215
7.4 Summary 227

8 Summary & Conclusions 228
8.1 Major results 228
8.2 Other results 229
8.3 Future work 229

A Literature Analysis 230

B Proofs 238
B.1 The inverse Fourier transform of a box function 238
B.2 The Fourier transform of the sinc function 238
B.3 Derivation of a = √
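The summary above decomposes resampling into three sub-processes: reconstruction of a continuous intensity surface, transformation of that surface, and sampling of the result. As a concrete illustration of that decomposition — a minimal sketch, not code from the dissertation — the following Python fragment resamples by inverse mapping with bilinear reconstruction; all names are illustrative and NumPy is assumed.

```python
# Sketch of the reconstruct -> transform -> sample decomposition.
# Bilinear reconstruction is the simplest reconstructor; the report
# studies far better ones (piecewise cubics, cubic B-splines, sinc).
import numpy as np

def bilinear(img, x, y):
    """Reconstruct a continuous intensity at (x, y) from discrete samples."""
    h, w = img.shape
    x0, y0 = max(int(np.floor(x)), 0), max(int(np.floor(y)), 0)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def resample(img, inverse_map, out_shape):
    """Sample the transformed surface on a new grid.
    inverse_map takes a destination (x, y) to a source (x, y)."""
    out = np.zeros(out_shape)
    for j in range(out_shape[0]):
        for i in range(out_shape[1]):
            sx, sy = inverse_map(i, j)
            if 0 <= sx < img.shape[1] and 0 <= sy < img.shape[0]:
                out[j, i] = bilinear(img, sx, sy)
    return out

# Example: enlarge an image by a factor of two.
src = np.random.rand(64, 64)
dst = resample(src, lambda x, y: (x / 2.0, y / 2.0), (128, 128))
```

Swapping in a higher-quality reconstructor changes only the inner evaluation; the three-stage structure of the resampler stays the same.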
Recommended publications
  • Information and Compression
    Lecture slides (Paul Cockshott, Faraday partnership). Introduction: what information is, methods of measuring it, the idea of compression, and two distinct ways of compressing images. Summary of course: by the end of this lecture you should have an idea of what information is, its relation to order, and why data compression is possible. Agenda: encoding systems; BCD versus decimal; information efficiency; Shannon's measure; Chaitin and Kolmogorov complexity theory; the relation to Gödel's theorem. Overview: data communications channels are bounded, and images in raw form take a great deal of information, so efficient transmission requires denser encoding, which in turn requires that we understand what information actually is. Connections: a simple encoding example shows that different densities are possible; Shannon's information theory deals with transmission; Chaitin relates this to Turing machines and computability; Gödel's theorem limits what we can know about compression. Vocabulary: information, entropy, compression, redundancy, lossless encoding, lossy encoding. What is information: information is measured in bits, a bit being equivalent to a binary digit; why use the binary system rather than decimal, which would seem more natural, or e, the base of natural logarithms? Voltage encoding and noise immunity: digital information is encoded as ranges of continuous variables, e.g. binary voltage with two values 0 and 1 (decimal would need ten values 0..9), separated by an isolation band.
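As a concrete note on Shannon's measure listed in the agenda above, the sketch below (not part of the course materials) computes the entropy of a symbol stream in bits per symbol; NumPy is assumed.

```python
# Shannon entropy H = -sum p_i log2 p_i, in bits per symbol.
import numpy as np

def entropy_bits(data):
    _, counts = np.unique(data, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# A uniform 8-bit source approaches 8 bits/symbol; a constant one is 0.
print(entropy_bits(np.random.randint(0, 256, 10000)))  # ~8.0
print(entropy_bits(np.zeros(10000, dtype=np.uint8)))   # 0.0
```

The gap between a source's entropy and the bits actually spent encoding it is the redundancy that lossless compression removes.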
  • An Improved Fast Fractal Image Compression Coding Method
    Rev. Téc. Ing. Univ. Zulia, Vol. 39, Nº 9, 241-247, 2016. doi:10.21311/001.39.9.32. An Improved Fast Fractal Image Compression Coding Method. Haibo Gao, Wenjuan Zeng, Jifeng Chen, Cheng Zhang. College of Information Science and Engineering, Hunan International Economics University, Changsha 410205, Hunan, China. Abstract: The main purpose of image compression is to use as few bits as possible to represent the source image, by eliminating the image redundancy with acceptable restoration. Fractal image compression coding has achieved good results in the field of image compression due to its high compression ratio, fast decoding speed, and the independence of the decoded image from resolution. However, there is a big problem in this method: long coding time, because the coding requires a considerable search over fractal blocks. Based on the theoretical foundation of fractal coding and the fractal compression algorithm, this paper examines every step of this compression method, including block partition, block search, and the representation of affine transformation coefficients, in order to come up with optimization and improvement strategies. The simulation results show that the algorithm of this paper achieves excellent coding time and compression ratio while ensuring better image quality. The work on fractal coding in this paper makes a certain contribution to the research of fractal coding. Key words: Image Compression, Fractal Theory, Coding and Decoding. 1. INTRODUCTION: Image compression aims to use as few bytes as possible to represent and transmit the original large image and to have a better quality in the restored image.
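To make the coding-time problem above concrete, here is a toy sketch of the exhaustive range-domain search that dominates fractal encoding time; it is not the paper's improved algorithm. Each range block is matched against every downsampled domain block under a least-squares grey-level map s·D + o. Block sizes, the exhaustive search, and all names are illustrative; NumPy is assumed.

```python
# Toy fractal encoder: exhaustive range-domain block search.
import numpy as np

def encode(img, rb=8):
    h, w = img.shape
    db = 2 * rb
    # Downsample each candidate domain block to range-block size (2x2 mean).
    domains = []
    for y in range(0, h - db + 1, db):
        for x in range(0, w - db + 1, db):
            d = img[y:y+db, x:x+db]
            domains.append(((y, x), d.reshape(rb, 2, rb, 2).mean(axis=(1, 3))))
    code = []
    for y in range(0, h, rb):
        for x in range(0, w, rb):
            r = img[y:y+rb, x:x+rb].astype(float)
            best = None
            for pos, d in domains:
                # Least-squares contrast s and brightness o for r ~ s*d + o.
                s, o = np.polyfit(d.ravel(), r.ravel(), 1)
                err = np.sum((s * d + o - r) ** 2)
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
            code.append(((y, x), best[1], best[2], best[3]))
    return code

code = encode(np.random.rand(32, 32))
print(len(code), "range blocks coded")
```

The search is quadratic in the number of blocks, which is exactly why block partition and block search are the steps the paper targets for optimization.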
  • A Novel Edge-Preserving Mesh-Based Method for Image Scaling
    A Novel Edge-Preserving Mesh-Based Method for Image Scaling. Seyedali Mostafavian and Michael D. Adams, Dept. of Electrical and Computer Engineering, University of Victoria, Victoria, BC V8P 5C2, Canada. Abstract: In this paper, we present a novel image scaling method that employs a mesh model that explicitly represents discontinuities in the image. Our method effectively addresses the problem of preserving the sharpness of edges, which has always been a challenge, during image enlargement. We use a constrained Delaunay triangulation to generate the model and an approximating function that is continuous everywhere except across the image edges (i.e., discontinuities). The model is then rasterized using a subdivision-based technique. Visual comparisons and quantitative measures show that our method can greatly reduce the blurring artifacts that can arise during image enlargement and produce images that look more pleasant to human observers, compared to the well-known bilinear and bicubic methods.

    … no loss of image quality. One main problem in vector-based interpolation methods, however, is how to create a vector model which faithfully represents the raster image data and its important features such as edges. Among the many techniques to generate a vector image from a raster image, triangle mesh models have become quite popular. With a triangle-mesh model, the image domain is partitioned into a set of non-overlapping triangles called a triangulation. Then, the image intensity function is approximated over each of the triangles. A mesh-generation method is required to choose a good subset of sample points and to collect any critical data from the input …
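For readers unfamiliar with the triangle-mesh model described above, the sketch below builds a plain (unconstrained) Delaunay triangulation over the pixel grid and evaluates a piecewise-linear surface on a finer grid. It illustrates only the generic mesh-based interpolation baseline; the paper's contribution is the constrained, discontinuity-preserving construction, which this does not reproduce. NumPy and SciPy are assumed.

```python
# Generic mesh-based image enlargement: triangulate sample points and
# approximate the intensity function linearly over each triangle.
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import Delaunay

img = np.random.rand(32, 32)
ys, xs = np.mgrid[0:32, 0:32]
pts = np.column_stack([xs.ravel(), ys.ravel()])
tri = Delaunay(pts)                         # the triangulation (mesh model)
f = LinearNDInterpolator(tri, img.ravel())  # piecewise-linear surface

# Enlarge 4x by evaluating the surface on a finer grid.
gy, gx = np.mgrid[0:31:128j, 0:31:128j]
big = f(gx, gy)
print(big.shape)  # (128, 128)
```

Because the surface here is continuous everywhere, edges blur on enlargement, much as with bilinear interpolation; the paper avoids this by forcing triangle edges to coincide with image discontinuities.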
  • Color Page Effects Chapter 116 Davinci Resolve Control Panels
    PART 9: Color Page Effects. Chapter 116: DaVinci Resolve Control Panels. The DaVinci Resolve control panels make it easier to make more adjustments in the same amount of time than using a mouse, pen, or trackpad with the on-screen interface. Additionally, using a DaVinci Resolve control panel to control the Color page provides vastly superior ergonomic comfort to clutching a mouse or pen all day, which is important when you're potentially grading a thousand shots per day. This chapter covers details about the three DaVinci Resolve control panels that are available and how they work with DaVinci Resolve. Contents: About the DaVinci Resolve Control Panels (2260); DaVinci Resolve Micro Panel (2261) — Trackballs (2261), Control Knobs (2262), Control Buttons (2263); DaVinci Resolve Mini Panel (2265) — Palette Selection Buttons (2265), Quick Selection Buttons (2266); DaVinci Resolve Advanced Control Panel (2268) — Menus, Soft Keys, and Soft Knob Controls (2268), Trackball Panel (2269), T-bar Panel (2270), Transport Panel (2276); Copying Grades Using the Advanced Control Panel (2280) — Copy Forward Keys (2280), Scroll (2280); Rippling Changes Using the Advanced Control Panel (2281). About the DaVinci Resolve Control Panels: There are three DaVinci Resolve control panel options available, and each is designed to meet modern workflow ergonomics and ease of use so colorists can quickly and accurately construct both simple and complex creative grades with minimal fatigue. This chapter provides details of each of the panel functions and should be read in conjunction with the previous grading chapters to get the best from your panel.
  • Hardware Architecture for Elliptic Curve Cryptography and Lossless Data Compression
    Hardware architecture for elliptic curve cryptography and lossless data compression, by Miguel Morales-Sandoval. A thesis presented to the National Institute for Astrophysics, Optics and Electronics in partial fulfilment of the requirement for the degree of Master of Computer Science. Thesis advisor: PhD. Claudia Feregrino-Uribe, Computer Science Department, National Institute for Astrophysics, Optics and Electronics, Tonantzintla, Puebla, México, December 2004. Abstract: Data compression and cryptography play an important role when transmitting data across a public computer network. While compression reduces the amount of data to be transferred or stored, cryptography ensures that data is transmitted with reliability and integrity. Compression and encryption have to be applied in the correct order: data are compressed before they are encrypted. If it were the opposite case, the result of the cryptographic operation would be illegible data, and no patterns or redundancy would be present, leading to very poor or no compression. In this research work, a hardware architecture that joins lossless compression and public-key cryptography for secure data transmission applications is discussed. The architecture consists of a dictionary-based lossless data compressor to compress the incoming data, and an elliptic curve cryptographic module that performs two EC (elliptic curve) cryptographic schemes: encryption and digital signature. For lossless data compression, the dictionary-based LZ77 algorithm is implemented using a systolic array approach. The elliptic curve cryptosystem is defined over the binary field F(2^m), using polynomial basis, affine coordinates, and the binary method to compute a scalar multiplication. While the model of the complete system was implemented in software, the hardware architecture was described in the Very High Speed Integrated Circuit Hardware Description Language (VHDL).
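The compression half of the architecture is the dictionary-based LZ77 algorithm; the thesis realizes it in hardware as a systolic array, but the algorithmic idea can be sketched in a few lines of Python. This toy encoder (illustrative window and lookahead sizes, not the thesis design) emits (offset, length, next-symbol) triples over a sliding window:

```python
# Toy LZ77: for each position, find the longest match in the window
# behind it and emit (offset, length, next symbol).
def lz77_encode(data, window=255, lookahead=15):
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            while (k < lookahead and i + k < len(data) - 1
                   and data[j + k] == data[i + k]):
                k += 1
            if k > best_len:
                best_off, best_len = i - j, k
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

print(lz77_encode(b"abcabcabcabd"))
# [(0, 0, 97), (0, 0, 98), (0, 0, 99), (3, 8, 100)]
```

Compress-then-encrypt ordering matters for exactly the reason the abstract gives: LZ77 exploits the patterns and redundancy that a good cipher's output no longer has.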
  • A Review of Data Compression Technique
    International Journal of Computer Science Trends and Technology (IJCST), Volume 2, Issue 4, Jul-Aug 2014. Research Article, Open Access. A Review of Data Compression Technique. Ritu, Puneet Sharma. Department of Computer Science and Engineering, Hindu College of Engineering, Sonipat, Deenbandhu Chhotu Ram University, Murthal, Haryana, India. Abstract: With the growth of technology and the entrance into the Digital Age, the world has found itself amid a vast amount of information. Dealing with such an enormous amount of information can often present difficulties. Digital information must be stored and retrieved in an efficient manner in order for it to be put to practical use. Compression is one way to deal with this problem. Images require substantial storage and transmission resources, thus image compression is advantageous to reduce these requirements. Image compression is a key technology in the transmission and storage of digital images because of the vast data associated with them. This paper addresses various image compression techniques. There are two types of image compression: lossless and lossy. Keywords: Image Compression, JPEG, Discrete Wavelet Transform. I. INTRODUCTION: Image compression is important for many applications that involve huge data storage, transmission, and retrieval, such as multimedia, documents, videoconferencing, and medical imaging. Uncompressed images require considerable storage capacity and transmission bandwidth. The objective of image compression techniques is to reduce redundancies. An inverse process called decompression (decoding) is applied to the compressed data to get the reconstructed image. The objective of compression is to reduce the number of bits as much as possible, while keeping the resolution and the visual quality of the reconstructed image as close to the original image as possible.
  • A Review on Different Lossless Image Compression Techniques
    Ilam Parithi.T et al. / International Journal of Modern Sciences and Engineering Technology (IJMSET), ISSN 2349-3755; available at https://www.ijmset.com. Volume 2, Issue 4, 2015, pp. 86-94. A Review on Different Lossless Image Compression Techniques. Ilam Parithi.T, Research Scholar/CSE, Manonmaniam Sundaranar University, Tirunelveli, India; Balasubramanian.R, Professor/CSE, Manonmaniam Sundaranar University, Tirunelveli, India. Abstract: In the modern world, sharing and storing data in the form of images is a great challenge for social networks. People are sharing and storing millions of images every second. Though data compression is done to reduce the amount of space required to store data, there is a need for a perfect image compression algorithm which will reduce the size of data for both sharing and storing. Keywords: Huffman Coding, Run Length Encoding (RLE), Arithmetic Coding, Lempel-Ziv-Welch Coding (LZW), Differential Pulse Code Modulation (DPCM). 1. INTRODUCTION: Image compression is a technique for effectively coding a digital image by minimizing the number of bits needed to represent the image. The aim is to reduce storage space and transmission cost and to maintain good quality. Compression is also achieved by removing redundancies such as coding redundancy, inter-pixel redundancy, and psycho-visual redundancy. Generally, compression techniques are divided into two types: (i) lossy compression techniques and (ii) lossless compression techniques. Fig. 1 (taxonomy of compression techniques): lossless compression — run length coding, Huffman coding, arithmetic coding, LZW coding; lossy compression — wavelet-based compression, fractal compression, vector quantization, block truncation coding.
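Of the lossless techniques listed in the keywords above, Huffman coding is the most compact to sketch: build a prefix tree bottom-up from symbol frequencies, then read the code words off the branches. The snippet below is illustrative, not from the paper; a complete codec would also serialize the tree and implement the decoder.

```python
# Huffman coding: repeatedly merge the two least-frequent nodes; the
# path of 0/1 branch choices to each leaf becomes that symbol's code.
import heapq
from collections import Counter

def huffman_codes(data):
    heap = [[freq, [sym, ""]] for sym, freq in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

codes = huffman_codes("this is an example of huffman coding")
print(codes)  # frequent symbols get shorter code words
```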
  • Video/Image Compression Technologies an Overview
    Video/Image Compression Technologies: An Overview. Ahmad Ansari, Ph.D., Principal Member of Technical Staff, SBC Technology Resources, Inc., 9505 Arboretum Blvd., Austin, TX 78759. May 15, 2001. Overview: ■ Introduction — impact of digitization, sampling, and quantization on compression. ■ Lossless compression — bit plane coding; predictive coding. ■ Lossy compression — transform coding (MPEG-X); vector quantization (VQ); subband coding (wavelets); fractals; model-based coding. Introduction: ■ Digitization impact — generates a large number of bits, which impacts storage and transmission; image/video is correlated; the human visual system has limitations. ■ Types of redundancy — spatial (correlation between neighboring pixel values); spectral (correlation between different color planes or spectral bands); temporal (correlation between different frames in a video sequence). ■ Known facts — sampling: a higher sampling rate results in higher pixel-to-pixel correlation; quantization: increasing the number of quantization levels reduces pixel-to-pixel correlation. Lossless compression: ■ Lossless — numerically identical to the original content on a pixel-by-pixel basis; motion compensation is not used. ■ Applications — medical imaging; contribution video applications. ■ Techniques — bit plane coding; lossless predictive coding (DPCM, Huffman coding of differential frames, arithmetic coding of differential frames). ■ Bit plane coding — a video frame of NxN pixels, each pixel encoded by K bits, is converted into K binary frames of NxN, and each binary frame is encoded independently (run-length encoding, Gray coding, arithmetic coding).
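The bit-plane coding step in the overview above is simple enough to show directly: a K-bit frame becomes K binary frames, each of which can then be run-length, Gray, or arithmetic coded independently. A minimal sketch assuming NumPy; names are illustrative.

```python
# Bit-plane decomposition of an 8-bit frame into 8 binary frames.
import numpy as np

def bit_planes(frame):
    """Split an 8-bit frame into 8 binary frames; plane 0 is the LSB."""
    return [(frame >> k) & 1 for k in range(8)]

def reassemble(planes):
    return sum(p.astype(np.uint8) << k for k, p in enumerate(planes))

frame = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
planes = bit_planes(frame)
assert np.array_equal(reassemble(planes), frame)
# Higher planes are smoother (more correlated), so they compress better.
```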
  • A Review on Fractal Image Compression
    INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH, VOLUME 8, ISSUE 12, DECEMBER 2019, ISSN 2277-8616. A Review On Fractal Image Compression. Geeta Chauhan, Ashish Negi. Abstract: This study aims to review recent techniques in digital multimedia compression with respect to fractal coding techniques. It covers the proposed fractal coding methods in the audio and image domains and the techniques used for accelerating the encoding time process, which is considered the main challenge in fractal compression. This study also presents the trends of researchers' interests in applying fractal coding in the image domain in comparison to the audio domain. This review opens directions for further research in the area of fractal coding in audio compression and removes the obstacles that face its implementation, in order to compare fractal audio with renowned audio compression techniques. Index terms: Compression, Discrete Cosine Transformation, Fractal, Fractal Encoding, Fractal Decoding, Quadtree partitioning. 1. INTRODUCTION: The development of digital multimedia and communication applications and the huge volume of data transferred through the internet place the necessity for data compression methods at the first priority [1]. The main purpose of data compression is to reduce the amount of transferred data by eliminating the redundant information, saving transmission bandwidth without significant effects on the quality of the data [2]. … of contractive transformations, exploiting the self-similarity in the image parts using PIFS. The encoding process consists of three sub-processes: first, partitioning the image into non-overlapped range blocks and overlapped domain blocks; second, searching for similar range-domain pairs with minimum error; and third, storing the IFS coefficients in a file that represents the compressed file and is used in the decompression process.
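The decoding half named in the index terms is worth its own sketch: because the stored PIFS transformations are contractive, iterating them from an arbitrary starting image converges towards the encoded attractor, which is why fractal decoding is fast and resolution-independent. The toy decoder below is illustrative, not the survey's method; it assumes fixed-size codes of the form (range position, domain position, contrast s, offset o), as a simple fixed-block encoder would produce, and NumPy.

```python
# Toy fractal decoder: iterate the contractive block maps to a fixed point.
import numpy as np

def decode(code, shape, rb=8, iters=10):
    img = np.zeros(shape)                     # arbitrary starting image
    for _ in range(iters):
        nxt = np.empty_like(img)
        for (ry, rx), (dy, dx), s, o in code:
            d = img[dy:dy+2*rb, dx:dx+2*rb]
            d = d.reshape(rb, 2, rb, 2).mean(axis=(1, 3))  # shrink 2x
            nxt[ry:ry+rb, rx:rx+rb] = s * d + o            # grey-level map
        img = nxt
    return img
```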
  • Fractal Image Compression Technique
    FRACTAL IMAGE COMPRESSION. A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfilment of the requirements for the degree of Master of Science, Information and Systems Science, Department of Mathematics and Statistics, Carleton University, Ottawa, Ontario, April 1998. National Library of Canada, Acquisitions and Bibliographic Services, 395 Wellington Street, Ottawa ON K1A 0N4, Canada. The author has granted a non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats. The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission. ACKNOWLEDGEMENTS: I would like to express my sincere thanks to my supervisor, Professor Brian Mortimer, for providing me with the right balance of freedom and guidance throughout this research. My thanks are extended to the Department of Mathematics and Statistics at Carleton University for providing a constructive environment for graduate work, and financial assistance.
  • Apparent Display Resolution Enhancement for Moving Images
    Apparent Display Resolution Enhancement for Moving Images. Piotr Didyk (1), Elmar Eisemann (1,2), Tobias Ritschel (1), Karol Myszkowski (1), Hans-Peter Seidel (1). (1) MPI Informatik, (2) Telecom ParisTech / CNRS-LTCI / Saarland University. Figure 1 (panels show Frames 1-3, the resulting retinal image, and a Lanczos reference): Depicting fine details such as hair (left), sparkling car paint (middle) or small text (right) on a typical display is challenging and often fails if the display resolution is insufficient. In this work, we show that smooth and continuous subpixel image motion can be used to increase the perceived resolution. By sequentially displaying varying intermediate images at the display resolution (as depicted in the bottom insets), subpixel details can be resolved at the retina in the region of interest due to fixational eye tracking of this region. Abstract: Limited spatial resolution of current displays makes the depiction of very fine spatial details difficult. This work proposes a novel method applied to moving images that takes into account the human visual system and leads to an improved perception of such details. To this end, we display images rapidly varying over time along a given trajectory … Even when images are skillfully tone and gamut mapped to adapt them to the display's capabilities, these limitations persist. In order to surmount the physical limitations of display devices, modern algorithms started to exploit characteristics of the human visual system (HVS), such as apparent image contrast [Purves et al. 1999] based on the Cornsweet illusion, or apparent brightness [Zavagno and Caputo 2001] due to glare.
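As a loose illustration of the mechanism in the figure caption above — sequentially displaying varying intermediate images so that tracked subpixel motion lets the retina integrate extra detail — the sketch below derives phase-shifted low-resolution subframes from a high-resolution original. It shows only the frame-generation idea under simplifying assumptions (NumPy, a 3x resolution ratio, uniform horizontal motion), not the paper's HVS-based optimization.

```python
# Generate n low-res subframes that sample a high-res image at n
# different subpixel phases along a horizontal motion path.
import numpy as np

def subframes(hires, n=3):
    """Downsample an (n*H, n*W) image into n phase-shifted (H, W) frames."""
    frames = []
    for t in range(n):
        shifted = np.roll(hires, -t, axis=1)  # subpixel motion of 1/n px
        frames.append(shifted[::n, ::n])      # point-sample at phase t
    return frames

hi = np.random.rand(30, 30)
for t, f in enumerate(subframes(hi)):
    print(t, f.shape)  # three 10x10 frames shown in rapid succession
```

An eye that tracks the moving region fuses the successive phase-shifted samples, so the fused retinal image carries more detail than any single displayed frame.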
  • Rk3026 Brief
    RK3026 BRIEF, Revision 1.1, Public Version, August 2013. High Performance and Low-power Processor for Digital Media Application. Revision History (this document is now Production Data): 2013-08-28, Rev. 1.0 — Initial Release; 2013-10-17, Rev. 1.1 — Update "512MB" to "1GB". Content: Chapter 1 Introduction — 1.1 Overview; 1.2 Features; 1.3 Block Diagram. Warranty Disclaimer: Rockchip Electronics Co., Ltd makes no warranty, representation or guarantee (expressed, implied, statutory, or otherwise) by or with respect to anything in this document, and shall not be liable for any implied warranties of non-infringement, merchantability or fitness for a particular …