Tone Reproduction and Physically Based Spectral Rendering
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
The Lumigraph
The Lumigraph. The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. Citation: Gortler, Steven J., Radek Grzeszczuk, Richard Szeliski, and Michael F. Cohen. 1996. The Lumigraph. Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1996), August 4-9, 1996, New Orleans, Louisiana, ed. SIGGRAPH, Holly Rushmeier, and Theresa Marie Rhyne, 43-54. Computer Graphics Annual Conference Series. New York, N.Y.: Association for Computing Machinery. Published Version: http://dx.doi.org/10.1145/237170.237200 Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:2634291 Terms of Use: This article was downloaded from Harvard University's DASH repository, and is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA

The Lumigraph. Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, Michael F. Cohen. Microsoft Research.

Abstract: This paper discusses a new method for capturing the complete appearance of both synthetic and real world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions. Unlike the shape capture process traditionally used ...

... series of captured environment maps allow a user to look around a scene from fixed points in space. One can also flip through different views of an object to create the illusion of a 3D model. Chen and Williams [?] and Werner et al. [?] have investigated smooth interpolation between images by modeling the motion of pixels (i.e., the optical flow) as one moves from one camera position to another. In ...
Path Tracing
Path Tracing. Steve Rotenberg. CSE168: Rendering Algorithms, UCSD, Spring 2017.

Irradiance

Circumference of a Circle
• The circumference of a circle of radius r is 2πr. In fact, a radian is defined as the angle you get on an arc of length r.
• We can represent the circumference of a circle as an integral:
\[ \mathrm{circ} = \int_0^{2\pi} r \, d\varphi \]
• This is essentially computing the length of the circumference as an infinite sum of infinitesimal segments of length r dφ, over the range from φ = 0 to 2π.

Area of a Hemisphere
• We can compute the area of a hemisphere by integrating 'rings' ranging over a second angle θ, which ranges from 0 (at the north pole) to π/2 (at the equator).
• The area of a ring is the circumference of the ring times the width r dθ. The circumference is scaled by sin θ, as the rings get smaller towards the top of the hemisphere:
\[ \mathrm{area} = \int_0^{\pi/2} \int_0^{2\pi} \sin\theta \, r^2 \, d\varphi \, d\theta = 2\pi r^2 \]

Hemisphere Integral
• We are effectively computing an infinite summation of tiny rectangular area elements over a two-dimensional domain. Each element has dimensions sin θ · r dφ × r dθ.

Irradiance
• Now, let's assume that we are interested in computing the total amount of light arriving at some point on a surface.
• We are essentially integrating over all possible directions in the hemisphere above the surface (assuming it's opaque and we're not receiving translucent light from below the surface).
• We are integrating the light intensity (or radiance) coming in from every direction: the total incoming radiance over all directions in the hemisphere ...
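As a concrete check on these integrals, the small Python sketch below (not part of the original slides) evaluates the hemisphere area as the Riemann sum of the sin θ · r dφ × r dθ elements described above, and estimates irradiance by uniformly sampling directions over the hemisphere. The cosine-weighted definition E = ∫ L(ω) cos θ dω, the constant-radiance "sky" used in the demo, and the function names are all assumptions made for illustration.

```python
import math
import random

def hemisphere_area_riemann(r=1.0, n_theta=512, n_phi=512):
    """Sum the tiny sin(theta) * r*dphi x r*dtheta rectangles described above."""
    d_theta = (math.pi / 2) / n_theta
    d_phi = (2 * math.pi) / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta           # midpoint of each 'ring'
        total += n_phi * math.sin(theta) * r * d_phi * r * d_theta
    return total                               # converges to 2*pi*r^2

def irradiance_uniform_mc(radiance, n_samples=100_000):
    """Monte Carlo estimate of E = integral of L(omega)*cos(theta) d(omega)
    over the hemisphere, using uniform direction sampling (pdf = 1/(2*pi))."""
    total = 0.0
    for _ in range(n_samples):
        cos_theta = random.random()            # uniform hemisphere: cos(theta) ~ U(0,1)
        phi = 2.0 * math.pi * random.random()
        total += radiance(cos_theta, phi) * cos_theta
    return (2.0 * math.pi / n_samples) * total

if __name__ == "__main__":
    print(hemisphere_area_riemann())                  # ~6.2832 = 2*pi
    print(irradiance_uniform_mc(lambda ct, ph: 1.0))  # constant sky L = 1 gives E ~ pi
```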
Adjoints and Importance in Rendering: an Overview
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 9, NO. 3, JULY-SEPTEMBER 2003. Adjoints and Importance in Rendering: An Overview. Per H. Christensen.

Abstract—This survey gives an overview of the use of importance, an adjoint of light, in speeding up rendering. The importance of a light distribution indicates its contribution to the region of most interest—typically the directly visible parts of a scene. Importance can therefore be used to concentrate global illumination and ray tracing calculations where they matter most for image accuracy, while reducing computations in areas of the scene that do not significantly influence the image. In this paper, we attempt to clarify the various uses of adjoints and importance in rendering by unifying them into a single framework. While doing so, we also generalize some theoretical results—known from discrete representations—to a continuous domain.

Index Terms—Rendering, adjoints, importance, light, ray tracing, global illumination, participating media, literature survey.

1 INTRODUCTION
The use of importance functions started in neutron transport simulations soon after World War II. Importance was used (in different disguises) from 1983 to accelerate ray tracing [2], [16], [27], [78], [79]. Smits et al. [67] formally introduced the use of importance for global ...

... importance since it indicates how much the different parts of the domain contribute to the solution at the most important part. Importance is also known as visual importance, view importance, potential, visual potential, value, ...
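A toy illustration of the idea (mine, not the paper's unified framework): once an importance value is available per scene region, a fixed ray budget can be allocated in proportion to it, so the directly visible regions receive most of the work. The region weights and function name are hypothetical.

```python
def allocate_ray_budget(importance, total_rays):
    """Split a fixed ray budget across regions in proportion to their importance,
    concentrating computation where it most affects the final image."""
    total = sum(importance)
    return [max(1, round(total_rays * w / total)) for w in importance]

# Example: four regions; the first two are directly visible (high importance).
print(allocate_ray_budget([0.5, 0.3, 0.15, 0.05], total_rays=100_000))
# -> [50000, 30000, 15000, 5000]
```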
F I N a L P R O G R a M 1998
1998 Final Program. October 18 - October 23, 1998. Sheraton Imperial Hotel, Research Triangle Park, NC. IEEE Visualization 1998. The Institute of Electrical & Electronics Engineers, Inc. IEEE Computer Society. Sponsored by the IEEE Computer Society Technical Committee on Computer Graphics, in cooperation with ACM/SIGGRAPH.

Sessions include Real-time Volume Rendering, Terrain Visualization, Flow Visualization, Surfaces & Level-of-Detail Techniques, Feature Detection & Visualization, Medical Visualization, Multi-Dimensional Visualization, Flow & Streamlines, Isosurface Extraction, Information Visualization, 3D Modeling & Visualization, Multi-Source Data Analysis Challenges, Interactive Visualization/VR/Animation, Terrain & Large Data Visualization, Isosurface & Volume Rendering, Simplification, Marine Data Visualization, Tensor/Flow, Key Problems & Thorny Issues, Image-based Techniques and Volume Analysis, Engineering & Design, Texturing and Rendering, and Art & Visualization.

Get complete, up-to-date listings of program information from URL: http://www.erc.msstate.edu/vis98 or http://davinci.informatik.uni-kl.de/Vis98
Volvis URL: http://www.erc.msstate.edu/volvis98
InfoVis URL: http://www.erc.msstate.edu/infovis98
Or contact: Theresa-Marie Rhyne, Lockheed Martin/U.S. EPA Sci Vis Center, 919-541-0207, [email protected]; Robert Moorhead, Mississippi State University, 601-325-2850, [email protected]

[Floor plan: Sheraton Imperial Hotel & Convention Center, Salons VI and VII, dock and vehicle loading access, convention center phones]
COURSE NOTES 8 CASE STUDY: Scanning Michelangelo's Florentine Pietà
SIGGRAPH 99, 26th International Conference on Computer Graphics and Interactive Techniques. COURSE NOTES 8. CASE STUDY: Scanning Michelangelo's Florentine Pietà. Sunday, August 9, 1999. Two Hour Tutorial. ORGANIZER: Holly Rushmeier, IBM TJ Watson Research Center. LECTURERS: Fausto Bernardini, Joshua Mittleman, Holly Rushmeier, IBM TJ Watson Research Center.

ABSTRACT: We describe a recent project to create a 3D digital model of Michelangelo's Florentine Pietà. The emphasis is on practical issues such as equipment selection and modification, the planning of data acquisition, dealing with the constraints of the museum environment, overcoming problems encountered with "real" rather than idealized data, and presenting the model in a form that is suitable for the art historian who is the end user. Art historian Jack Wasserman, working with IBM, initiated a project to create a digital model of Michelangelo's Florentine Pietà to assist in a scholarly study. In the course of the project, we encountered many practical problems related to the size and topology of the work, and with completing the project within constraints of time, budget and the access allowed by the museum. While we have and will continue to publish papers on the individual new methods we have developed in the course of solving various problems, a typical technical paper or presentation does not allow for the discussion of many important practical issues. We expect this course to be of interest to practitioners interested in acquiring digital models for computer graphics applications, end users who are interested in understanding what quality can be expected from acquired models, and researchers interested in finding research opportunities in the "gaps" in current acquisition methods.
LIGHT TRANSPORT SIMULATION IN REFLECTIVE DISPLAYS
LIGHT TRANSPORT SIMULATION IN REFLECTIVE DISPLAYS, by Zhanpeng Feng, B.S.E.E., M.S.E.E. A Dissertation in ELECTRICAL ENGINEERING. DOCTOR OF PHILOSOPHY. Dr. Brian Nutter, Chair of the Committee. Dr. Sunanda Mitra. Dr. Tanja Karp. Dr. Richard Gale. Dr. Peter Westfall. Peggy Gordon Miller, Dean of the Graduate School. May, 2012. Copyright 2012, Zhanpeng Feng.

ACKNOWLEDGEMENTS: The pursuit of my Ph.D. has been a rather long journey. The journey started when I came to Texas Tech ten years ago, then took a change in direction when I went to California to work for Qualcomm in 2006. Over the course, I have been privileged to meet the most amazing professionals at Texas Tech and Qualcomm. Without them I would never have been able to finish this dissertation. I begin by thanking my advisor, Dr. Brian Nutter, for introducing me to the brilliant world of research and teaching me the hands-on skills to actually get something done. Dr. Nutter sets an example of excellence that inspired me to work as an engineer and researcher for my career. I would also like to extend my thanks to Dr. Mitra for her breadth and depth of knowledge; to Dr. Karp for her scientific rigor; to Dr. Gale for his expertise in color science and sense of humor; and to Dr. Westfall for his great mind in statistics. They have provided me invaluable advice throughout the research. My colleagues and supervisors at Qualcomm also gave tremendous support and guidance along the path. Tom Fiske helped me establish my knowledge in optics from the ground up and taught me how to use all the tools for optical measurements.
IBM Research Report: Reverse Engineering Methods for Digital Restoration Applications
RC23991 (W0606-132), June 28, 2006. Computer Science. IBM Research Report: Reverse Engineering Methods for Digital Restoration Applications. Ioana Boier-Martin, IBM Research Division, Thomas J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598. Holly Rushmeier, Yale University, New Haven, CT. Research Division: Almaden - Austin - Beijing - Haifa - India - T. J. Watson - Tokyo - Zurich.

LIMITED DISTRIBUTION NOTICE: This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties). Copies may be requested from IBM T. J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598 USA (email: [email protected]). Some reports are available on the internet at http://domino.watson.ibm.com/library/CyberDig.nsf/home

Reverse Engineering Methods for Digital Restoration Applications. Ioana Boier-Martin (IBM T. J. Watson Research Center, Hawthorne, New York, USA, [email protected]) and Holly Rushmeier (Yale University, New Haven, Connecticut, USA, [email protected]).

Abstract: In this paper we discuss the challenges of processing and converting 3D scanned data to representations suitable for interactive manipulation in the context of virtual ...

In this paper we focus on the demands of restoring digital objects for cultural heritage applications. However, the methods we present are relevant to many other areas. This is a revised and extended version of our previous paper [12].
The Photon Mapping Method
7 The Photon Mapping Method

"I get by with a little help from my friends." —John Lennon, 1940–1980

Photon mapping is a practical approach for computing global illumination within complex environments. Much like irradiance caching methods, photon mapping caches and reuses illumination information in the scene for efficiency. Photon mapping has also been successfully applied for computing lighting within, and in the presence of, participating media. In this chapter we briefly introduce the photon mapping technique. This sets the foundation for our contributions in the next chapter, which make volumetric photon mapping practical.

7.1 Algorithm Overview
Photon mapping, introduced by Jensen [1995; 1996; 1997; 1998; 2001], is a practical approach for computing global illumination. At a high level, the algorithm consists of two main steps:

Algorithm 7.1: PHOTONMAPPING()
1  PHOTONTRACING();
2  RENDERUSINGPHOTONMAP();

In the first step, a lighting simulation is performed by tracing packets of energy, or photons, from light sources and storing these photons as they scatter within the scene. This process results in a set of photon maps, which can be used to efficiently query lighting information. In the second pass, the final image is rendered using Monte Carlo ray tracing. This rendering step is made more efficient by exploiting the lighting information cached in the photon map.

Algorithm 7.2: PHOTONTRACING()
1  n_e ← 0
2  repeat
3      (l, pdf(l)) = CHOOSELIGHT();
4      (x_p, ω_p, Φ_p) = GENERATEPHOTON(l);
5      TRACEPHOTON(x_p, ω_p, Φ_p / pdf(l));
6      n_e ← n_e + 1
7  until photon map full
8  Scale the power of all photons by 1 / n_e
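The two-pass structure maps naturally onto code. Below is a minimal, self-contained Python sketch of the photon-tracing pass in the spirit of Algorithm 7.2; the isotropic emitter, the stand-in scattering routine, and names such as choose_light, generate_photon, and trace_photon are illustrative assumptions rather than Jensen's implementation, and a real tracer would intersect scene geometry and sample surface BSDFs instead.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Photon:
    position: tuple
    direction: tuple
    power: float

def choose_light(lights):
    """Pick a light uniformly; return it with the probability of picking it."""
    return random.choice(lights), 1.0 / len(lights)

def generate_photon(light):
    """Emit a photon from the light in a uniformly random direction (toy emitter)."""
    d = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(c * c for c in d)) or 1.0
    return light["position"], tuple(c / norm for c in d), light["power"]

def trace_photon(photon_map, position, direction, power, reflectance=0.5):
    """Toy scattering: march one unit, store a photon at the 'hit', then use
    Russian roulette with an assumed 0.5 reflectance to decide whether to bounce.
    Surviving photons keep their full power, so the expected flux is preserved."""
    while True:
        position = tuple(p + d for p, d in zip(position, direction))
        photon_map.append(Photon(position, direction, power))
        if random.random() > reflectance:
            break
        _, direction, _ = generate_photon({"position": position, "power": power})

def photon_tracing(lights, max_photons=10_000):
    """First pass (cf. Algorithm 7.2): emit photons until the map is full,
    then scale every stored photon's power by 1 / n_emitted."""
    photon_map, n_emitted = [], 0
    while len(photon_map) < max_photons:
        light, pdf_light = choose_light(lights)
        x, w, phi = generate_photon(light)
        trace_photon(photon_map, x, w, phi / pdf_light)
        n_emitted += 1
    for ph in photon_map:
        ph.power /= n_emitted
    return photon_map

if __name__ == "__main__":
    lights = [{"position": (0.0, 2.0, 0.0), "power": 100.0}]
    pmap = photon_tracing(lights, max_photons=1_000)
    print(len(pmap), sum(ph.power for ph in pmap))
```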
Lighting - the Radiance Equation
CHAPTER 3 Lighting - the Radiance Equation

Lighting: The Fundamental Problem for Computer Graphics

So far we have a scene composed of geometric objects. In computing terms this would be a data structure representing a collection of objects. Each object, for example, might be itself a collection of polygons. Each polygon is a sequence of points on a plane. The 'real world', however, is 'one that generates energy'. Our scene so far is truly a phantom one, since it simply is a description of a set of forms with no substance. Energy must be generated: the scene must be lit; albeit, lit with virtual light. Computer graphics is concerned with the construction of virtual models of scenes. This is a relatively straightforward problem to solve. In comparison, the problem of lighting scenes is the major and central conceptual and practical problem of computer graphics. The problem is one of simulating lighting in scenes, in such a way that the computation does not take forever. Also the resulting 2D projected images should look as if they are real. In fact, let's make the problem even more interesting and challenging: we do not just want the computation to be fast, we want it in real time. Image frames, in other words virtual photographs taken within the scene, must be produced fast enough to keep up with the changing gaze (head and eye moves) of people looking and moving around the scene - so that they experience the same visual sensations as if they were moving through a corresponding real scene.
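The radiance equation the chapter title refers to is the standard rendering equation; for reference, one common form is given below, with notation assumed here rather than quoted from the chapter: the outgoing radiance at a surface point p in direction ω_o is the emitted radiance plus the BRDF-weighted, cosine-weighted integral of incident radiance over the hemisphere Ω.

\[ L_o(p,\omega_o) \;=\; L_e(p,\omega_o) \;+\; \int_{\Omega} f_r(p,\omega_i,\omega_o)\, L_i(p,\omega_i)\,(\vec{n}\cdot\omega_i)\, d\omega_i \]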
The 2019 Visualization Career Award Thomas Ertl
Thomas Ertl, University of Stuttgart, Award Recipient 2019

The 2019 Visualization Career Award goes to Thomas Ertl. Thomas Ertl is a Professor of Computer Science at the University of Stuttgart, where he founded the Institute for Visualization and Interactive Systems (VIS) and the Visualization Research Center (VISUS). He received an MSc in Computer Science from the University of Colorado at Boulder and a PhD in Theoretical Astrophysics from the University of Tuebingen. After a few years as postdoc and cofounder of a Tuebingen-based IT company, he moved to the University of Erlangen as a Professor of Computer Graphics and Visualization. He served the University of Stuttgart in various administrative roles, including Dean of Computer Science and Electrical Engineering and Vice President for Research and Advanced Graduate Education. Currently, he is the Spokesperson of the Cluster of Excellence Data-Integrated Simulation Science and Director of the Stuttgart Center for Simulation Science. His research interests include visualization, computer graphics, and human computer interaction in general, with a focus on volume rendering, flow and particle visualization, hierarchical and adaptive algorithms for large datasets, parallel and hardware accelerated visual computing systems, ...

Thomas Ertl has had the privilege to collaborate with excellent students, doctoral and postdoctoral researchers, and with many colleagues around the world, and he has always attributed the success of his group to working with them. He has advised more than fifty PhD researchers and hosted numerous postdoctoral researchers. More than ten of them are now holding professorships; others moved to prestigious academic or industrial positions. He also has numerous ties to industry, and he actively pursues research on innovative visualization applications for the physical and life sciences and various engineering domains, as well as in the digital humanities.
Yale University Department of Computer Science
Yale University, Department of Computer Science. A Psychophysical Study of Dominant Texture Detection. Jianye Lu, Alexandra Garr-Schultz, Julie Dorsey, Holly Rushmeier. YALEU/DCS/TR-1417, June 2009.

Contents
1 Introduction .... 4
2 Psychophysical Studies and Computer Graphics .... 5
  2.1 Diffusion Distance Manifolds .... 5
  2.2 Psychophysical Experiments .... 5
  2.3 Psychophysics Applications in Graphics .... 7
3 Experimental Method .... 8
4 Subject Response Analysis .... 12
  4.1 Analysis with ANOVA .... 12
  4.2 Analysis with MDS .... 12
  4.3 Analysis with Scaling .... 17
5 Texture Features .... 20
6 Conclusion .... 24
A Collection of Psychophysical Studies in Graphics .... 25

List of Figures
1 Dominant texture and texture synthesis .... 4
2 Decision tree for psychophysical experiment selection .... 7
3 Textures for psychophysical experiment .... 9
4 User interface for psychophysical experiment .... 10
5 Technique preferences based on ANOVA .... 13
6 Technique significance with respect to individual texture .... 14
7 Technique significance with respect to individual participant .... 15
8 Participants on a 2-D MDS configuration .... 16
9 Texture samples on 2-D MDS configuration .... 18
10 Paired comparison scaling .... 19
11 PC-scales for testing textures .... 20
12 Textures on PC-scale-based 2-D MDS configuration .... 21
13 Spearman's correlation between texture descriptors and PC-scales .... 23

List of Tables
1 Linear correlation between paired comparison scales .... 20
2 Qualitative correlation between texture descriptors and PC-scales .... 24

A Psychophysical Study of Dominant Texture Detection. Jianye Lu, Alexandra Garr-Schultz, Julie Dorsey, and Holly Rushmeier.

Abstract: Images of everyday scenes are frequently used as input for texturing 3D models in computer graphics. Such images include both the texture desired and other extraneous information.
CS 184: Problems and Questions on Rendering
CS 184: Problems and Questions on Rendering. Ravi Ramamoorthi.

Problems

1. Define the terms Radiance and Irradiance, and give the units for each. Write down the formula (integral) for irradiance at a point in terms of the illumination L(ω) incident from all directions ω. Write down the local reflectance equation, i.e. express the net reflected radiance in a given direction as an integral over the incident illumination.

2. Make appropriate approximations to derive the radiosity equation from the full rendering equation.

3. Match the surface material to the formula (and goniometric diagram shown in class). Also, give an example of a real material that reasonably closely approximates the mathematical description. Not all materials need have a corresponding diagram. The materials are ideal mirror, dark glossy, ideal diffuse, and retroreflective. The formulae for the BRDF \(f_r\) are \(k_a(\vec{R}\cdot\vec{V})\), \(k_b(\vec{R}\cdot\vec{V})^4\), \(k_c/(\vec{N}\cdot\vec{V})\), \(k_d\,\delta(\vec{R})\), and \(k_e\).

4. Consider the Cornell Box (as in the radiosity lecture; assume for now that this is essentially a room with only the walls, ceiling and floor. Assume for now there are no small boxes or other furniture in the room, and that all surfaces are Lambertian. The box also has a small rectangular white light source at the center of the ceiling.) Assume we make careful measurements of the light source intensity and dimensions of the room, as well as the material properties of the walls, floor and ceiling. We then use these as inputs to our simple OpenGL renderer. Assuming we have been completely accurate, will the computer-generated picture be identical to a photograph of the same scene from the same location? If so, why? If not, what will be the differences? Ignore gamma correction and other nonlinear transfer issues.
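For reference, a sketch of the standard definitions the first problem asks about, in notation assumed here rather than taken from the handout (not the official course solution): radiance L(ω) is power per unit projected area per unit solid angle (W·m⁻²·sr⁻¹), irradiance E is power per unit area (W·m⁻²), and the two integrals over the hemisphere Ω are

\[ E \;=\; \int_{\Omega} L(\omega)\,\cos\theta \, d\omega, \qquad L_r(\omega_o) \;=\; \int_{\Omega} f_r(\omega_i,\omega_o)\, L(\omega_i)\,\cos\theta_i \, d\omega_i . \]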