Virtual Cinematography: Relighting Through Computation


COVER FEATURE

Paul Debevec, USC Institute for Creative Technologies

Computer, August 2006. © 2006 IEEE. Published by the IEEE Computer Society.

Recording how scenes transform incident illumination into radiant light is an active topic in computational photography. Such techniques are making it possible to create virtual images of a person or place from new viewpoints and in any form of illumination.

The first photograph (Figure 1) sits in an oxygen-free protective case at the Harry Ransom Center at the University of Texas at Austin. Its image is faint enough that discerning the roofs of Joseph Nicéphore Niépce's farmhouses requires some effort.

That same effort, however, reveals a clear and amazing truth: Through the technology of photography, a pattern of light crossing a French windowsill in 1826 is re-emitted thousands of miles away nearly two centuries later. Not surprisingly, this ability to provide a view into a different time and place has come to revolutionize how we communicate information, tell stories, and document history.

DIGITAL IMAGING

In digital imaging terminology, we would think of the first photograph as a 2D array of pixel values, each representing the amount of light arriving at the camera from a particular angle. If we index the pixel values based on the horizontal (θ) and vertical (φ) components of this angle, we can express the photograph as a 2D function, P(θ, φ).

Modern photographic techniques have made it possible to capture considerably more information about the light in a scene. Color photography records another dimension of information: the amount of incident light for a range of wavelengths, λ. We can thus describe a color image as a 3D function, P(θ, φ, λ). Motion pictures record how the incident light changes with time t, adding another dimension to our function, which becomes P(θ, φ, λ, t).

Most of the photographic imagery we view today—cinema and television—records and reproduces light across all of these dimensions, providing an even more compelling experience of scenes from other times and places.

Figure 1. The first photograph, Joseph Nicéphore Niépce, 1826. Source: http://palimpsest.stanford.edu/byorg/abbey/an/an26/an26-3/an26-307.html.

Many innovations in image recording technology receiving increased interest today provide ways of capturing P(θ, φ, λ, t) with greater resolution, range, and fidelity. For example, high dynamic range (HDR) imaging techniques increase the range of values that are recorded within P, accurately capturing the millions-to-one range in brightness from a dimly lit interior to staring into the bright sun. Panoramic and omnidirectional photography1 increase the range of θ and φ that are recorded, in some cases capturing the entire sphere of incident light. Large-format photography systems such as Graham Flint's Gigapxl Project dramatically increase angular resolution, achieving tens of thousands of independent samples across θ and φ. Multispectral and hyperspectral photography, respectively, increase the resolution and range recorded in λ, and high-speed photography increases the resolution in t.
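The HDR capture idea mentioned above can be made concrete with a short sketch. The Python fragment below is only illustrative and is not code from the article: it assumes the camera response has already been linearized so that each exposure divided by its exposure time estimates scene radiance, and it weights well-exposed pixels most heavily.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge a bracketed set of linearized exposures (each HxWx3, values in
    [0, 1]) into a single HDR radiance map."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        # Hat-shaped weighting: trust mid-range pixels, distrust pixels that
        # are nearly black (noise) or nearly white (clipped).
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * (img / t)      # this exposure's estimate of scene radiance
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)
```

Long exposures contribute reliable values in the shadows and short exposures in the highlights, which is how the merged map can span a far greater brightness range than any single photograph.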
As we consider what the future of digital photography might bring, we might ask what other dimensions of information about a scene we could consider capturing. In 1991, E.H. Adelson and J.R. Bergen2 presented a mathematical description of the totality of light within a scene as the seven-dimensional plenoptic function P:

P = P(x, y, z, θ, φ, λ, t)

The last four dimensions of this function are familiar, representing the azimuth, inclination, wavelength, and time of the incident ray of light in question, as Figure 2 shows. The additional three dimensions (x, y, z) denote the 3D position in space at which the incident light is being sensed. In the case of the first photograph, (x, y, z) would be fixed at the optical center of Niépce's rudimentary camera.

Figure 2. Geometry of the plenoptic function and the reflectance field.

Tantalizingly, this simple function contains within it every photograph that could ever have possibly been taken. If it were possible to query the plenoptic function with the appropriate values of x, y, z, θ, φ, λ, and t, it would be possible to construct brilliant color images (or movies) of any event in history.

VIRTUAL CONTROL OF THE VIEWPOINT

Recording variation in the spatial dimensions of the plenoptic function allows scenes to be recorded three-dimensionally. Stereo photography captures two image samples in (x, y, z) corresponding to the positions of a left and right camera. Since this matches the form of the imagery that our binocular visual system senses, it can reproduce the sensation of a 3D scene for a particular point of view. QuickTime VR panoramas allow virtual control of the field of view by panning and zooming in a panoramic image, exploring different ranges of θ and φ but staying at the same viewpoint (x, y, z). In 1995, Leonard McMillan and Gary Bishop presented a system for interpolating between panoramic images taken from different viewpoints to create the appearance of virtual navigation through a scene.3

Light field photography uses an array of cameras to record a scene from a planar 2D array of viewpoints distributed across x and y for a fixed z.4 If the cameras are clear of obstructions, we can infer values of the plenoptic function at other z positions by following these rays forward or backward in the direction (θ, φ) to where the light was sensed at the x-y plane.5
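This ray-following step is simple to express in code. The sketch below is an illustration under assumed conventions rather than the article's method: the spherical-coordinate convention and the lf lookup callback are hypothetical. Because radiance does not change along a ray in unobstructed space, a query at any (x, y, z) can be slid along its direction onto the z = 0 capture plane and answered from the recorded light field.

```python
import numpy as np

def query_light_field(lf, x, y, z, theta, phi):
    """Estimate the plenoptic function P(x, y, z, theta, phi) from a light
    field recorded by a camera array on the z = 0 plane, assuming the space
    between z and that plane is free of obstructions."""
    # Unit direction for azimuth theta and inclination phi (one of several
    # possible conventions; the article does not fix one).
    d = np.array([np.cos(phi) * np.cos(theta),
                  np.cos(phi) * np.sin(theta),
                  np.sin(phi)])
    if abs(d[2]) < 1e-9:
        raise ValueError("ray is parallel to the capture plane")
    # Radiance is constant along the ray in free space, so slide the query
    # point along its direction until it reaches the capture plane z = 0.
    t = -z / d[2]
    x0, y0 = x + t * d[0], y + t * d[1]
    # lf is a lookup into the recorded data; a real system would interpolate
    # between the nearest cameras and the nearest pixels within each camera.
    return lf(x0, y0, theta, phi)
```

The key property being exploited is that the 5D spatial query collapses onto the 4D set of rays crossing the capture plane, which is exactly what the camera array recorded.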
By recording such data, it becomes possible to virtually construct any desired viewpoint of a scene after it has been filmed. As such techniques mature—and if display technology makes similar advances6—future observers may be able to enjoy 3D interactive views of sports matches, family gatherings, and tours of the great sites of the world.

This rich line of photographic advances, each recording a greater dimensionality of the light within a scene, has progressively brought a greater sense of "being there" in a scene or that the photographed subject is in some way present within our own environment. Nonetheless, a photographic image—even a 3D, omnidirectional, high-resolution motion picture—shows a scene only the way it was at a particular time, the way things happened, in the illumination that happened to be there.

If it were somehow possible to record the scene itself, rather than just the light it happened to reflect, there would be a potential to create an even more complete and compelling record of people, places, and events. In this article, we focus on a particular part of this problem: Is there a way to record a subject or scene so that we can later produce images not just from every possible viewpoint, but in any possible illumination?

VIRTUAL CONTROL OF ILLUMINATION

Recording a scene so that its illumination can be created later has potential applications in a variety of disciplines. With virtual relighting, an architect could record a building to visualize it under a proposed nighttime lighting design, or capture a living room's response to light to see how its appearance would change if an apartment complex were built across the street. An archaeologist could compare digital records of artifacts as if they were sitting next to each other in the same illumination, a process difficult to perform with standard photographs.

Some of the most interesting virtual relighting applications are in the area of filmmaking. A small crew could capture a real-world location to use it as a virtual film set, and the production's cinematographer could light the set virtually. Then, actors filmed in a studio could be composited into the virtual set, their lighting being virtually created to make them appear to reflect the light of the environment and further shaped by the cinematographer for dramatic effect.

Virtually changing the lighting in a scene is a more complex problem than virtually changing the viewpoint. Even recording the entire 7D plenoptic function for a scene reveals only how to construct images of the scene in its original illumination. If we consider a living room with three lamps, turning on any one of the lamps will give rise to a different plenoptic function with remarkably different highlights, shading, shadows, and reflections within the room. Fortunately, light is additive, which makes it possible to simulate arbitrary lighting conditions in a scene, since we only need to capture the scene in a set of basis lighting conditions that span the space of the lighting conditions of interest. In 1995, Julie Dorsey and colleagues suggested this technique in the context of computer-rendered imagery to efficiently visualize potential theatrical lighting designs.8

We should now ask which set of basis lighting conditions will allow us to simulate any possible pattern of illumination in the scene. Since we would like to place lights at any (x, y, z) position, we might suppose that our basis set of recorded plenoptic functions should consist of a set of images of the scene, each with an omnidirectional light source (such as a tiny lightbulb) at a different (x, y, z) position, densely covering the space. However, this lighting basis would not allow us to simulate how a directed spotlight would illuminate the scene, since no summation of omnidirectional light source images can produce a pattern of light aimed in a particular direction.

Thus, our basis must include variance with respect to lighting direction as well, and we should choose our basis light sources to be single rays of light, originating at (x, y, z) and emitting in direction (θ, φ).
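The additive-light principle behind this basis approach can be sketched in a few lines of Python. This is an illustrative fragment under assumed data layouts, not code from the article: given one image of the scene per basis light source, any new lighting design is formed as a weighted sum of those images.

```python
import numpy as np

def relight(basis_images, light_intensities):
    """Simulate a new lighting condition as a weighted sum of basis images.

    basis_images: array of shape (N, H, W, 3), one photograph (or rendering)
                  of the scene per basis light source.
    light_intensities: array of shape (N,) or (N, 3), the brightness (or RGB
                  color) assigned to each basis light in the new design.
    """
    basis = np.asarray(basis_images, dtype=np.float64)
    weights = np.asarray(light_intensities, dtype=np.float64)
    if weights.ndim == 1:                 # scalar intensity per light
        weights = weights[:, None, None, None]
    else:                                 # RGB color per light
        weights = weights[:, None, None, :]
    # Light is additive, so the relit image is simply the weighted sum.
    return (weights * basis).sum(axis=0)

# Hypothetical example: a living room photographed under each of its three
# lamps individually can be shown with lamp 1 off, lamp 2 at half power, and
# lamp 3 tinted warm.
# relit = relight(lamp_basis,
#                 np.array([[0, 0, 0], [0.5, 0.5, 0.5], [1.0, 0.8, 0.6]]))
```

Choosing a richer basis, such as the individual rays of light discussed above, extends the same weighted sum to arbitrary, spatially varying illumination.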