Computational Plenoptic Imaging


Volume xx (200y), Number z, pp. 1–25

Gordon Wetzstein¹, Ivo Ihrke², Douglas Lanman³, and Wolfgang Heidrich¹
¹University of British Columbia  ²Universität des Saarlandes / MPI Informatik  ³MIT Media Lab

Abstract

The plenoptic function is a ray-based model for light that includes the color spectrum as well as spatial, temporal, and directional variation. Although digital light sensors have greatly evolved in recent years, one fundamental limitation remains: all standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons; in the process, all visual information is irreversibly lost, except for a two-dimensional, spatially-varying subset — the common photograph. In this state of the art report, we review approaches that optically encode the dimensions of the plenoptic function transcending those captured by traditional photography and reconstruct the recorded information computationally.

Categories and Subject Descriptors (according to ACM CCS): I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture — I.4.5 [Image Processing and Computer Vision]: Reconstruction

submitted to COMPUTER GRAPHICS Forum (9/2011)

1. Introduction

Evolution has resulted in the natural development of a variety of highly specialized visual systems among animals. The mantis shrimp retina, for instance, contains 16 different types of photoreceptors [MO99]. The extraordinary anatomy of their eyes not only allows the mantis shrimp to see 12 different color channels, ranging from ultra-violet to infra-red, and to distinguish between shades of linear and circular polarization, but it also allows the shrimp to perceive depth using trinocular vision with each eye. Other creatures of the sea, such as cephalopods [MSH09], are also known to use their ability to perceive polarization for communication and for unveiling the transparency of their prey. Although the compound eyes found in flying insects have a lower spatial resolution compared to mammalian single-lens eyes, their temporal resolving power is far superior to that of the human visual system. Flies, for instance, have a flicker fusion rate of more than 200 Hz [Ruc61], an order of magnitude higher than that of the human visual system.

Traditionally, cameras have been designed to capture what a single human eye can perceive: a two-dimensional trichromatic image. Inspired by the natural diversity of perceptual systems and fueled by advances in digital camera technology, computational processing, and optical fabrication, image processing has begun to transcend the limitations of film-based analog photography. Applications for the computerized acquisition of images with high spatial, temporal, spectral, and directional resolution are manifold; medical imaging, remote sensing, shape reconstruction, surveillance, and automated fabrication are only a few examples. In particular, the computer graphics and vision communities benefit from computational plenoptic imaging. Not only do the techniques discussed in this survey allow highly detailed visual information to be captured, which is essential for the acquisition of geometry, scene reflectance, materials, and refractive index properties, but they can also be used directly for image-based rendering and lighting, increasing the realism of synthetically generated content.

The plenoptic function [AB91] provides a ray-based model of light encompassing properties that are of interest for image acquisition. Most of these properties, however, are irreversibly lost by standard sensors, which integrate over the plenoptic dimensions during image acquisition. As illustrated in Figure 1, the plenoptic dimensions include the color spectrum as well as spatial, temporal, and directional light variation. We also consider dynamic range a desirable property, as common sensors have a limited dynamic range.

In addition to the plenoptic dimensions, we further categorize plenoptic image acquisition techniques according to their hardware configuration (see Fig. 1). While single-device, multi-shot approaches are usually the preferred method for capturing plenoptic properties of static scenes, either multiple devices or single-image multiplexing are required to record dynamic phenomena. Using multiple devices is often the most expensive solution, whereas multiplexing commonly reduces the spatial resolution of the captured content in favor of increased plenoptic resolution. The optimal acquisition approach for a given problem therefore depends on the properties of the photographed scene and the available hardware.

[Figure 1: Taxonomy and overview of plenoptic image acquisition approaches: single-shot acquisition, sequential image capture, and multi-device setups, organized by plenoptic dimension (dynamic range, color spectrum, space/focal surfaces, directions/light fields, and time).]

1.1. Computational Photography and Plenoptic Imaging

What makes plenoptic imaging different from general computational photography? Plenoptic imaging considers a subset of computational photography approaches; specifically, those that aim at acquiring the dimensions of the plenoptic function with combined optical light modulation and computational reconstruction. Computational photography has grown tremendously in recent years, with dozens of published papers per year in a variety of graphics, vision, and optics venues. The dramatic rise in publications in this interdisciplinary field, spanning optics, sensor technology, image processing, and illumination, has made it difficult to encompass all research in a single survey.

We provide a structured review of the subset of research that has recently been shown to be closely related in terms of optical encoding and especially in terms of reconstruction algorithms [IWH10]. Additionally, our report serves as a resource for interested parties by providing a categorization of recent research and is intended to aid in the identification of unexplored areas in the field.

1.2. Overview and Definition of Scope

In this report, we review the state of the art in joint optical light modulation and computational reconstruction approaches for acquiring the dimensions of the plenoptic function. Specifically, we discuss the acquisition of high dynamic range imagery (Section 2), the color spectrum (Section 3), light fields and directional variation (Section 4), spatial super-resolution and focal surfaces (Section 5), as well as high-speed events (Section 6). We also outline the acquisition of light properties that are not directly included in the plenoptic function but are related, such as polarization, phase imaging, and time-of-flight (Section 7), and point the reader to more comprehensive literature on these topics. Conclusions and possible future avenues of research are discussed in Section 8.

Because modern, digital acquisition approaches are often closely related to their analog predecessors, we outline these predecessors whenever applicable. For each of the plenoptic dimensions we also discuss practical applications of the acquired data. As there is an abundance of work in this field, we focus on imaging techniques that are designed for standard planar 2D sensors, and only highlight examples of modified sensor hardware for the direct capture of plenoptic image information. We do not cover pure image processing techniques, such as tone reproduction, dynamic range compression, and tone mapping [RWD∗10], or the reconstruction of geometry [IKL∗10], BSDFs, and reflectance fields.

2. High Dynamic Range Imaging

High dynamic range (HDR) image acquisition has been a very active area of research for more than a decade. The dynamic range of an imaging system is commonly defined as the ratio of the largest and smallest possible values in the range, as opposed to the domain, of a recorded signal. Unfortunately, standard sensors have a limited dynamic range, which often results in clipping of bright and dark parts of a photographed scene. HDR imaging is important for many computer vision applications, including image-based geometry, material, and lighting reconstruction. Applications for high dynamic range imagery in computer graphics include physically-based rendering and lighting [Deb02], image editing, digital photography and cinema, perceptual difference metrics based on absolute luminance [MDMS05, MKRH11], virtual reality, and computer games. With the introduction of the HDR display prototype [SHS∗04], whose successor models are becoming consumer products today, high-contrast photographic material is required for a mass market.
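To make the integration-loss argument from the introduction concrete, the plenoptic function can be written out explicitly. The following is a simplified sketch in the spirit of Adelson and Bergen's seven-dimensional parameterization, not notation taken from this report; the aperture solid angle Ω, spectral sensitivity s(λ), and exposure interval T are illustrative assumptions:

```latex
% Plenoptic function: radiance at position (x, y, z), direction (\theta, \phi),
% wavelength \lambda, and time t
P = P(x, y, z, \theta, \phi, \lambda, t)

% A standard sensor pixel integrates over direction (aperture \Omega),
% wavelength (weighted by a spectral response s(\lambda)), and the exposure
% interval T, leaving only a 2D spatial image I(x, y):
I(x, y) = \int_{T} \int_{\Lambda} \int_{\Omega}
          P(x, y, \theta, \phi, \lambda, t)\, s(\lambda)\,
          \mathrm{d}\omega\, \mathrm{d}\lambda\, \mathrm{d}t
```

Each acquisition approach surveyed here can be read as a way of encoding one of the integrated variables before this integration collapses it.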
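The dynamic-range definition above, together with the classic exposure-bracketing strategy for extending it, can be sketched in a few lines of Python. This is a hypothetical minimal example, not code from the report: it assumes linear (RAW-like) brackets with known exposure times and uses a simplified Debevec/Malik-style triangle weighting; the function names are ours.

```python
import numpy as np

def dynamic_range_stops(img, eps=1e-8):
    """Dynamic range in stops: log2 of the ratio of the largest to the
    smallest positive recorded value (range, not domain, of the signal)."""
    vals = img[img > eps]
    return float(np.log2(vals.max() / vals.min()))

def merge_exposure_brackets(images, exposure_times):
    """Merge linear brackets (values in [0, 1]) into one HDR radiance map.

    Each pixel's radiance is estimated as value / exposure_time; the
    estimates are combined with a triangle weight that trusts mid-range
    pixels and discounts clipped (dark or saturated) ones.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        w = 1.0 - np.abs(2.0 * img - 1.0)   # peak weight at mid-gray
        w = np.clip(w, 1e-4, None)          # keep a tiny floor everywhere
        num += w * img / t                  # weighted radiance estimate
        den += w
    return num / den
```

Merging three brackets of a scene whose radiance spans three orders of magnitude recovers values that any single clipped exposure would lose, which is exactly the motivation for the multi-exposure techniques reviewed in this section.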