High Dynamic Range Image Encodings Introduction
16 pages, PDF, 1,020 KB
Recommended publications
-
The Uses of Animation
Animation is the process of creating the illusion of motion and change by means of the rapid display of a sequence of static images that differ minimally from each other. The illusion, as in motion pictures in general, is thought to rely on the phi phenomenon. Animators are artists who specialize in the creation of animation. Animation can be recorded with either analogue media (a flip book, motion picture film, video tape) or digital media (including formats such as animated GIF, Flash animation and digital video). To display animation, a digital camera, computer, or projector is used, along with newer display technologies as they are produced. Animation creation methods include traditional animation and stop-motion animation of two- and three-dimensional objects: paper cutouts, puppets and clay figures. Images are displayed in rapid succession, usually at 24, 25, 30, or 60 frames per second.

The most common use of animation, and perhaps the origin of it, is cartoons. Cartoons appear all the time on television and in the cinema and can be used for entertainment, advertising, presentations and many more applications that are limited only by the imagination of the designer. The most important factors in making cartoons on a computer are reusability and flexibility: the system that actually does the animation needs to be such that all the actions to be performed can be repeated easily, without much fuss on the animator's part.

-
Khronos Data Format Specification
Andrew Garrard. Version 1.2, Revision 1, 2019-03-31.

License Information. Copyright (C) 2014-2019 The Khronos Group Inc. All Rights Reserved. This specification is protected by copyright laws and contains material proprietary to the Khronos Group, Inc. It or any components may not be reproduced, republished, distributed, transmitted, displayed, broadcast, or otherwise exploited in any manner without the express prior written permission of Khronos Group. You may use this specification for implementing the functionality therein, without altering or removing any trademark, copyright or other notice from the specification, but the receipt or possession of this specification does not convey any rights to reproduce, disclose, or distribute its contents, or to manufacture, use, or sell anything that it may describe, in whole or in part.

This version of the Data Format Specification is published and copyrighted by Khronos, but is not a Khronos ratified specification. Accordingly, it does not fall within the scope of the Khronos IP policy, except to the extent that sections of it are normatively referenced in ratified Khronos specifications. Such references incorporate the referenced sections into the ratified specifications, and bring those sections into the scope of the policy for those specifications. Khronos Group grants express permission to any current Promoter, Contributor or Adopter member of Khronos to copy and redistribute UNMODIFIED versions of this specification in any fashion, provided that NO CHARGE is made for the specification and the latest available update of the specification for any version of the API is used whenever possible.

-
An Advanced Path Tracing Architecture for Movie Rendering
Per Christensen, Julian Fong, Jonathan Shade, Wayne Wooten, Brenden Schubert, Andrew Kensler, Stephen Friedman, Charlie Kilpatrick, Cliff Ramshaw, Marc Bannister, Brenton Rayner, Jonathan Brouillat, and Max Liani, Pixar Animation Studios.

Fig. 1. Path-traced images rendered with RenderMan: Dory and Hank from Finding Dory (© 2016 Disney•Pixar); McQueen's crash in Cars 3 (© 2017 Disney•Pixar); Shere Khan from Disney's The Jungle Book (© 2016 Disney); a destroyer and the Death Star from Lucasfilm's Rogue One: A Star Wars Story (© & ™ 2016 Lucasfilm Ltd. All rights reserved. Used under authorization.)

Pixar's RenderMan renderer is used to render all of Pixar's films, and by many film studios to render visual effects for live-action movies. RenderMan started as a scanline renderer based on the Reyes algorithm, and was extended over the years with ray tracing and several global illumination algorithms. This paper describes the modern version of RenderMan, a new architecture for an extensible and programmable path tracer with many features that are essential to handle the fiercely complex scenes in movie production. Users can write their own materials using a bxdf interface, and their own light transport algorithms using an integrator interface, or they can use the materials and light transport algorithms provided with RenderMan.

1 Introduction. Pixar's movies and short films are all rendered with RenderMan. The first computer-generated (CG) animated feature film, Toy Story, was rendered with an early version of RenderMan in 1995. The most recent Pixar movies (Finding Dory, Cars 3, and Coco) were rendered using RenderMan's modern path tracing architecture. The two left images in Figure 1 show high-quality rendering of two challenging CG movie scenes with many bounces of specular reflections and ...
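The bxdf/integrator split described in the abstract can be sketched in a few lines. The sketch below is purely illustrative: the class names, method signatures, and the toy single-channel "path tracer" are hypothetical and only mirror the idea of pluggable materials and light-transport strategies; they are not RenderMan's actual interfaces.

```python
# Hypothetical sketch of a pluggable material/integrator design, loosely
# mirroring the bxdf/integrator idea described above. Not RenderMan's API.
from abc import ABC, abstractmethod
import random

class Bxdf(ABC):
    """A material: how a surface scatters incoming light."""
    @abstractmethod
    def sample(self):
        """Return (reflectance, continue_probability) for one scattering event."""

class Lambert(Bxdf):
    def __init__(self, albedo):
        self.albedo = albedo
    def sample(self):
        return self.albedo, 0.8  # diffuse bounce, fixed survival probability

class Integrator(ABC):
    """A light-transport strategy: how paths are built and weighted."""
    @abstractmethod
    def trace(self, bxdf, max_bounces):
        ...

class PathTracer(Integrator):
    def trace(self, bxdf, max_bounces=8):
        throughput, radiance = 1.0, 0.0
        for _ in range(max_bounces):
            refl, p_continue = bxdf.sample()
            radiance += throughput * 0.1          # placeholder emitted/ambient term
            throughput *= refl
            if random.random() > p_continue:      # Russian roulette termination
                break
            throughput /= p_continue
        return radiance

print(PathTracer().trace(Lambert(albedo=0.7)))
```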
-
Painting with Light: A Technical Overview of Implementing HDR Support in Krita
Executive Summary. Krita, a leading open source application for artists, has become one of the first digital painting programs to achieve true high dynamic range (HDR) capability. With more than two million downloads per year, the program enables users to create and manipulate illustrations, concept art, comics, matte paintings, game art, and more. With HDR, Krita now opens the door for professionals and amateurs to a vastly extended range of colors and luminance. The pioneering approach of the Krita team can be leveraged by developers to add HDR to other applications. Krita and Intel engineers have also worked together to enhance Krita performance through Intel® multithreading technology, while Intel's built-in HDR support provides further acceleration. This white paper gives a high-level description of the development process used to equip Krita with HDR.

A Revolution in the Display of Color and Brightness. Before the arrival of HDR, displays were calibrated to comply with the D65 standard of whiteness, corresponding to average midday light in Western and Northern Europe. The ability to store color, contrast, or other information about each pixel was limited: encoding was restricted to a depth of 8 bits per channel, and some laptop screens handled only 6 bits. By comparison, HDR standards typically define 10 bits per channel, with some rising to 16. This extra capacity for storing information allows HDR displays to go beyond the basic sRGB color space used previously and display the colors of Recommendation ITU-R BT.2020 (Rec. 2020).
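A quick way to see what the jump from 6- and 8-bit encodings to 10- and 16-bit encodings buys is to compare the number of code values and the spacing between them. The arithmetic below is a minimal illustration, not taken from the white paper.

```python
# Quantization granularity for the bit depths mentioned above: 6- and 8-bit
# SDR panels versus 10-bit (and deeper) HDR encodings.
for bits in (6, 8, 10, 16):
    levels = 2 ** bits
    step = 1.0 / (levels - 1)   # spacing between adjacent code values in [0, 1]
    print(f"{bits:2d} bits: {levels:5d} code values per channel, step = {step:.6f}")
```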
-
Optimized RGB for Image Data Encoding
Beat Münch and Úlfar Steingrímsson, Swiss Federal Laboratories for Materials Testing and Research (EMPA), Dübendorf, Switzerland. Journal of Imaging Science and Technology 50(2): 125-138, 2006. © Society for Imaging Science and Technology 2006.

Abstract. The present paper discusses the subject of an RGB optimization for color data coding. The current RGB situation is analyzed and those requirements selected which exclusively address the data encoding, while disregarding any application- or workflow-related objectives. In this context, the limitation to the RGB triangle of color saturation, and the discretization due to the restricted resolution of 8 bits per channel, are identified as the essential drawbacks of most of today's RGB data definitions. Specific validation metrics are proposed for assessing the codable color volume while at the same time considering the discretization loss due to the limited bit resolution. Based on those measures it becomes evident that the idea of the recently promoted e-sRGB definition holds the potential ...

Indeed, image data originating from digital cameras and scanners are highly device dependent, and the same variability exists for image-reproducing devices such as monitors and printers. Figure 1 shows the locations of the primaries for a few important RGB color spaces, revealing a manifold variety. The respective specifications for the primaries, white points and gamma values are given in Table I. In practice, the problem of RGB misinterpretation has been approached by either providing an ICC profile or by assuming sRGB if no ICC profile is present.
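As a rough illustration of how the choice of primaries limits the codable chromaticity triangle discussed in the abstract, the sketch below compares the CIE xy triangle areas of two RGB encodings. Bringing in Rec. 2020 is an assumption of this sketch, not something taken from the paper; the chromaticity coordinates are the published xy values for each standard.

```python
# Area of the chromaticity triangle spanned by a set of RGB primaries, a crude
# stand-in for the "codable color volume" idea discussed above.
def triangle_area(p):
    (x1, y1), (x2, y2), (x3, y3) = p
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

srgb    = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]   # sRGB R, G, B
rec2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]   # Rec. 2020 R, G, B

a_srgb, a_2020 = triangle_area(srgb), triangle_area(rec2020)
print(f"sRGB triangle area:      {a_srgb:.4f}")
print(f"Rec. 2020 triangle area: {a_2020:.4f}  ({a_2020 / a_srgb:.2f}x larger)")
```

By this crude area measure, the Rec. 2020 triangle is roughly 1.9 times the size of the sRGB triangle.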
-
Recording Devices, Image Creation, Reproduction Devices
Color Management: ICC Profiles, Color Management Modules and Devices. Dr. Michael Vrhel, TC 707, 11/19/2010.

Early color management (closed systems): a drum scanner uses three photomultiplier tubes and typically scans a film negative. A fixed transformation for the in-house system maps the recorded RGB values to CMYK separations, and the print operator adjusts the final ink amounts (RGB to CMYK). Note: the at-home equivalents were camera film to prints, and NTSC television. The lecture also covers the basis of modern techniques for color specification, measurement, control and communication.

Modern color communication (open systems): devices are connected through a Profile Connection Space (PCS), e.g. CIELAB, CIEXYZ or sRGB. PostScript color management was the first adopted solution, providing a connection space in 1990 in PostScript Level 2. Apple ColorSync was a competing solution introduced in 1993. The International Color Consortium (ICC) developed a standard that has now been widely adopted, at least by the print industry; the organization was founded in 1993 by Adobe, Agfa, Kodak, Microsoft, Silicon Graphics, Sun Microsystems and Taligent.

Standards: in the printing industry, the ICC profile is an ISO standard, ISO 15076-1:2005, "Image technology colour management - Architecture, profile format and data structure - Part 1", based on ICC.1:2004-10. Camera characterization standards: ISO 22028-1 specifies the image state architecture and encoding requirements; ISO 22028-3 specifies the RIMM, ERIMM (and soon the FP-RIMM) RGB color encodings; IEC/ISO 61966-2-2 specifies the scRGB, scYCC, scRGB-nl and scYCC-nl color encodings.
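A minimal sketch of the "connect through a common space" idea behind the PCS: converting linear sRGB to CIE XYZ (D65) with the standard 3x3 matrix. A real ICC workflow layered on top of this also applies tone curves, rendering intents and a destination transform (for example to CMYK); this shows only the core matrix step.

```python
# Illustrative "device RGB -> profile connection space" step: linear sRGB to
# CIE XYZ (D65) via the standard 3x3 matrix.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def linear_srgb_to_xyz(rgb):
    return [sum(m * c for m, c in zip(row, rgb)) for row in SRGB_TO_XYZ]

print(linear_srgb_to_xyz([1.0, 1.0, 1.0]))   # ~D65 white: [0.9505, 1.0000, 1.0890]
```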
-
Deep Compositing Using Lie Algebras
Tom Duff, Pixar Animation Studios.

Deep compositing is an important practical tool in creating digital imagery, but there has been little theoretical analysis of the underlying mathematical operators. Motivated by finding a simple formulation of the merging operation on OpenEXR-style deep images, we show that the Porter-Duff over function is the operator of a Lie group. In its corresponding Lie algebra, the splitting and mixing functions that OpenEXR deep merging requires have a particularly simple form. Working in the Lie algebra, we present a novel, simple proof of the uniqueness of the mixing function. The Lie group structure has many more applications, including new, correct resampling algorithms for volumetric images with alpha channels, and a deep image compression technique that outperforms that of OpenEXR. CCS Concepts: Computing methodologies → Antialiasing; Visibility; Image processing.

Deep images extend Porter-Duff compositing by storing multiple values at each pixel with depth information to determine compositing order. The OpenEXR deep image representation stores, for each pixel, a list of nonoverlapping depth intervals, each containing a single pixel value [Kainz 2013]. An interval in a deep pixel can represent contributions either by hard surfaces, if the interval has zero extent, or by volumetric objects, when the interval has nonzero extent. Displaying a single deep image requires compositing all the values in each deep pixel in depth order. When two deep images are merged, splitting and mixing of these intervals are necessary, as described in the following paragraphs. Merging the pixels of two deep images requires that corresponding pixels of the two images be reconciled so that all overlapping intervals are identical in extent.
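A minimal sketch of the Porter-Duff over operator that the paper analyzes, applied to flattening one deep pixel. The splitting and mixing machinery that deep merging requires is not shown; the helper names below are illustrative only.

```python
# The Porter-Duff "over" operator on premultiplied RGBA values. Flattening a
# deep pixel composites its samples in front-to-back depth order.
def over(front, back):
    """front, back: (r, g, b, a) with premultiplied color, a in [0, 1]."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    k = 1.0 - fa
    return (fr + k * br, fg + k * bg, fb + k * bb, fa + k * ba)

def flatten(deep_pixel):
    """Composite a list of (depth, rgba) samples in increasing depth order."""
    result = (0.0, 0.0, 0.0, 0.0)            # start fully transparent
    for _, rgba in sorted(deep_pixel, key=lambda s: s[0]):
        result = over(result, rgba)           # accumulated front layers over next sample
    return result

print(flatten([(2.0, (0.0, 0.0, 0.4, 0.4)), (1.0, (0.5, 0.0, 0.0, 0.5))]))
```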
-
HDR Programming
April 4-7, 2016, Silicon Valley. Thomas J. True, July 25, 2016.

Agenda: HDR overview, human perception, colorspaces, tone and gamut mapping, ACES, the HDR display pipeline, best practices, final thoughts, Q & A.

What is high dynamic range? HDR is considered a combination of a bright display (750 cd/m² minimum, 1,000-10,000 cd/m² highlights), deep blacks (contrast of 50k:1 or better), 4K or higher resolution, and a wide color gamut. What's a nit? A measure of light emitted per unit area: 1 nit (nt) = 1 candela/m².

Benefits of HDR: improved visuals, richer colors, realistic highlights, and more contrast and detail in shadows; it reduces or eliminates clipping and compression issues. HDR isn't simply about making brighter images.

Hunt effect: increasing the luminance increases the colorfulness. By increasing luminance it is possible to show highly saturated colors without using highly saturated RGB color primaries. Note: you can easily see the effect, but the CIE xy values stay the same.

Stevens effect: increased spatial resolution, i.e. more visual acuity with increased luminance. A simple experiment: look at a book page indoors, then walk with the book into sunlight.

How HDR is delivered today: high-end professional color grading displays such as the Dolby Pulsar (4,000 nits), Dolby Maui, and SONY X300 (1,000-nit OLED); UHD TVs from LG, SONY, Samsung and others (1,000 nits, high contrast, UHD-10, Dolby Vision, etc.); Rec. 2020 UHDTV wide color gamut; SMPTE ST-2084, the Dolby Perceptual Quantizer (PQ) Electro-Optical Transfer Function (EOTF); and SMPTE ST-2094, the dynamic metadata specification.
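The SMPTE ST 2084 (PQ) EOTF listed above can be written down directly from its published constants. The sketch below maps a nonlinear code value in [0, 1] to absolute luminance in nits; it is an illustration of the standard curve, not code from the presentation.

```python
# SMPTE ST 2084 (PQ) EOTF: nonlinear code value in [0, 1] -> luminance in cd/m^2.
def pq_eotf(code):
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = code ** (1 / m2)
    y = (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)
    return 10000.0 * y                       # peak of the PQ curve is 10,000 nits

for v in (0.0, 0.5, 0.75, 1.0):
    print(f"code {v:.2f} -> {pq_eotf(v):8.1f} nits")
```

Code value 1.0 corresponds to the 10,000-nit peak, while 0.5 lands near 92 nits, close to SDR reference white.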
-
Loren Carpenter, Inventor and President of Cinematrix, Is a Pioneer in the Field of Computer Graphics
Loren Carpenter bio, February 1994. Loren Carpenter, inventor and President of Cinematrix, is a pioneer in the field of computer graphics. He received his formal education in computer science at the University of Washington in Seattle. From 1966 to 1980 he was employed by The Boeing Company in a variety of capacities, primarily software engineering and research. While there, he advanced the state of the art in image synthesis with now-standard algorithms for synthesizing images of sculpted surfaces and fractal geometry. His 1980 film Vol Libre, the world's first fractal movie, received much acclaim along with employment possibilities countrywide. He chose Lucasfilm. His fractal technique and others were used in the Genesis sequence in Star Trek II: The Wrath of Khan. It was such a popular sequence that Paramount used it in most of the other Star Trek movies. The Lucasfilm computer division produced sequences for several movies: Star Trek II: The Wrath of Khan, Young Sherlock Holmes, and Return of the Jedi, plus several animated short films. In 1986, the group spun off to form its own business, Pixar. Loren continues to be Senior Scientist for the company, solving interesting problems and generally improving the state of the art. Pixar is now producing the first completely computer-animated feature-length motion picture with the support of Disney, a logical step after winning the Academy Award for best animated short for Tin Toy in 1989. In 1993, Loren received a Scientific and Technical Academy Award for his fundamental contributions to the motion picture industry through the invention and development of the RenderMan image synthesis software system.

-
Appearance-Based Image Splitting for HDR Display Systems
Rochester Institute of Technology, RIT Scholar Works, Theses, Thesis/Dissertation Collections, 4-1-2011. Available at: http://scholarworks.rit.edu/theses

Recommended citation: Zhang, Dan, "Appearance-based image splitting for HDR display systems" (2011). Thesis. Rochester Institute of Technology. This thesis is brought to you for free and open access by the Thesis/Dissertation Collections at RIT Scholar Works and has been accepted for inclusion in Theses by an authorized administrator of RIT Scholar Works. For more information, please contact [email protected].

The M.S. degree thesis of Dan Zhang (B.S. Yanshan University, Qinhuangdao, China, 2005; M.S. Beijing Institute of Technology, Beijing, China, 2008) was examined and approved by two members of the Color Science faculty, Dr. James Ferwerda (thesis advisor) and Dr. Mark Fairchild, as satisfactory for the thesis requirement for the Master of Science degree. The thesis was submitted in partial fulfillment of the requirements for the degree of Master of Science in Color Science in the Chester F. Carlson Center for Imaging Science, College of Science, Rochester Institute of Technology, April 2011. Accepted by Dr. Mark Fairchild, Graduate Coordinator, Color Science.

-
HDR Rendering on NVIDIA GPUs
Thomas J. True, May 8, 2017.

Agenda: HDR overview, visible color and colorspaces, the NVIDIA display pipeline, tone mapping, programming for HDR, best practices, conclusion, Q & A.

What is high dynamic range? HDR is considered a combination of a bright display (750 cd/m² minimum, 1,000-10,000 cd/m² highlights), deep blacks (contrast of 50k:1 or better), 4K or higher resolution, and a wide color gamut. What's a nit? A measure of light emitted per unit area: 1 nit (nt) = 1 candela/m².

Benefits of HDR: tell a better story with improved visuals, richer colors, realistic highlights, and more contrast and detail in shadows; it reduces or eliminates clipping and compression issues. HDR isn't simply about making brighter images.

Hunt effect: increasing the luminance increases the colorfulness. By increasing luminance it is possible to show highly saturated colors without using highly saturated RGB color primaries. Note: you can easily see the effect, but the CIE xy values stay the same.

Stevens effect: increased spatial resolution, i.e. more visual acuity with increased luminance. A simple experiment: look at a book page indoors, then walk with the book into sunlight.

How HDR is delivered today (displays and connections): high-end professional color grading displays via SDI (Dolby Pulsar at 4,000 nits, Dolby Maui, SONY X300 1,000-nit OLED, Canon DP-V2420); UHD TVs via HDMI 2.0a/b (LG, SONY, Samsung and others: 1,000 nits, high contrast, HDR10, Dolby Vision, etc.); and desktop computer displays, coming soon to a desktop or laptop near you.
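Tone mapping appears on the agenda above. As one concrete and deliberately simple example, the sketch below applies the extended Reinhard operator to compress scene-referred luminance toward a displayable range; it is a generic textbook formula, not NVIDIA's display pipeline.

```python
# Extended Reinhard tone mapping: luminance values at l_white map to 1.0,
# lower values are compressed smoothly toward the displayable range.
def reinhard_extended(l, l_white=4.0):
    """l: scene-referred luminance >= 0; l_white: luminance mapped to display white."""
    return l * (1.0 + l / (l_white * l_white)) / (1.0 + l)

for lum in (0.05, 0.5, 1.0, 2.0, 4.0):
    print(f"L = {lum:5.2f}  ->  {reinhard_extended(lum):.3f}")
```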
-
Color Image Processing Pipeline [A General Survey of Digital Still Camera Processing]
Rajeev Ramanath, Wesley E. Snyder, Youngjun Yoo, and Mark S. Drew.

Digital still color cameras (DSCs) have gained significant popularity in recent years, with projected sales on the order of 44 million units by the year 2005. Such explosive demand calls for an understanding of the processing involved and the implementation issues, bearing in mind the otherwise difficult problems these cameras solve. This article presents an overview of the image processing pipeline, first from a signal processing perspective and later from an implementation perspective, along with the tradeoffs involved.

Image formation. A good starting point for fully comprehending the signal processing performed in a DSC is to consider the steps by which images are formed and how each stage affects the final rendered image. There are two distinct aspects of image formation: one that has a colorimetric perspective and another that has a generic imaging perspective, and we treat these separately. In a vector space model for color systems, a reflectance spectrum r(λ), sampled uniformly in a spectral range [λmin, λmax], interacts with the illuminant spectrum L(λ) to form a projection onto the color space of the camera, RGBc, as follows:

c = N S^T L M r + n,   (1)

where S is a matrix formed by stacking the spectral sensitivities of the K color filters used in the imaging system column-wise, L is ...
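Equation (1) can be made concrete with a toy version of the vector-space model: each camera channel sums the product of its spectral sensitivity, the illuminant, and the surface reflectance over the sampled wavelengths. The sketch below uses made-up four-sample spectra and omits the terms whose definitions are cut off in the excerpt (the operator N, the matrix M, and the noise n).

```python
# Stripped-down image-formation model: each channel integrates
# (spectral sensitivity x illuminant x reflectance) over wavelength samples.
def camera_response(sensitivities, illuminant, reflectance):
    return [
        sum(s * l * r for s, l, r in zip(channel, illuminant, reflectance))
        for channel in sensitivities
    ]

illuminant  = [0.8, 1.0, 1.0, 0.9]      # L(lambda), coarsely sampled (toy values)
reflectance = [0.2, 0.4, 0.7, 0.5]      # r(lambda) of the surface (toy values)
sensitivities = [                        # rows of S^T: R, G, B filters (toy values)
    [0.0, 0.1, 0.4, 0.9],
    [0.1, 0.7, 0.4, 0.1],
    [0.8, 0.3, 0.1, 0.0],
]
print(camera_response(sensitivities, illuminant, reflectance))   # camera RGB "c"
```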