Tone Reproduction and Physically Based Spectral Rendering


EUROGRAPHICS 2002 STAR – State of The Art Report

Kate Devlin¹, Alan Chalmers¹, Alexander Wilkie², Werner Purgathofer²
¹ Department of Computer Science, University of Bristol
² Institute of Computer Graphics and Algorithms, Vienna University of Technology

Abstract

The ultimate aim of realistic graphics is the creation of images that provoke the same responses that a viewer would have to a real scene. This STAR addresses two related key problem areas in this effort which are located at opposite ends of the rendering pipeline, namely the data structures used to describe light during the actual rendering process, and the issue of displaying such radiant intensities in a meaningful way. The interest in the first of these subproblems stems from the fact that it is common industry practice to use RGB colour values to describe light intensity and surface reflectancy. While viable in the context of methods that do not strive to achieve true realism, this approach has to be replaced by more physically accurate techniques if a prediction of nature is intended. The second subproblem is that while research into ways of rendering images provides us with better and faster methods, we do not necessarily see their full effect due to limitations of the display hardware. The low dynamic range of a standard computer monitor requires some form of mapping to produce images that are perceptually accurate. Tone reproduction operators attempt to replicate the effect of real-world luminance intensities. This STAR report will review the work to date on spectral rendering and tone reproduction techniques. It will include an investigation into the need for spectral image synthesis methods and accurate tone reproduction, and a discussion of major approaches to physically correct rendering and key tone mapping algorithms. The future of both spectral rendering and tone reproduction techniques will be considered, together with the implications of advances in display hardware.

Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Viewing Algorithms; I.3.7 [Computer Graphics]: Color, shading, shadowing, and texture

© The Eurographics Association 2002.

1. Introduction

The ultimate aim of realistic graphics is the creation of images that provoke the same response and sensation as a viewer would have to a real scene, i.e. the images are physically or perceptually accurate when compared to reality. This requires significant effort to achieve, and one of the key properties of this problem is that the overall performance of a photorealistic rendering system is only as good as its worst component.

In the field of computer graphics, the actual image synthesis algorithms – from scanline techniques to global illumination methods – are constantly being reviewed and improved, but two equally important research areas at opposite ends of the rendering pipeline have been neglected by comparison: the question of which entities are used in rendering programs to describe light intensity during the calculations performed by the rendering algorithms, and the mapping of the luminances computed by these algorithms to the display device of choice. Weaknesses in either area can make any improvement in the underlying rendering algorithm pointless. Consequently, care has to be taken when designing an image synthesis system to strike a good balance between the capabilities of the various stages in the rendering pipeline.

In this paper we review the state of the art on these two topics in an interleaved manner. We first present the basic problems in both areas in sections 2 and 3, and then discuss previous work on tone mapping in section 4 and on spectral rendering in section 5.
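To make the first of these problem areas concrete before the detailed discussion in later sections, the short sketch below contrasts the common RGB shortcut with a point-sampled spectral representation of light and reflectance. It is only an illustration under our own assumptions: the wavelength grid and all numerical values are invented, and the final conversion to display colours (for example via the CIE colour-matching functions) is deliberately omitted.

```python
# Illustrative sketch (not from the STAR): point-sampled spectra vs. RGB triples.
# The wavelength grid and all sample values below are invented for illustration.

WAVELENGTHS_NM = [400, 450, 500, 550, 600, 650, 700]   # coarse visible-range sampling

def reflect_spectral(light_spd, reflectance):
    """Physically motivated route: multiply light and reflectance per wavelength sample."""
    return [l * r for l, r in zip(light_spd, reflectance)]

def reflect_rgb(light_rgb, reflectance_rgb):
    """Common industry shortcut: component-wise multiplication of RGB triples."""
    return [l * r for l, r in zip(light_rgb, reflectance_rgb)]

# Hypothetical data: a bluish illuminant and a surface reflecting mostly long wavelengths.
light_spd   = [0.9, 0.8, 0.6, 0.5, 0.4, 0.3, 0.2]
reflectance = [0.1, 0.1, 0.2, 0.4, 0.7, 0.8, 0.8]

reflected_spd = reflect_spectral(light_spd, reflectance)
# Only now would the result be reduced to a display colour (conversion omitted here);
# collapsing to RGB *before* the multiplication discards the spectral information that
# distinguishes metameric lights and surfaces.
print(reflected_spd)
```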
2. Tone mapping

While research into ways of creating images provides us with better and faster methods, we usually do not see the full effect of these techniques due to display limitations. For accurate image analysis and comparison with reality, the displayed image must bear as close a resemblance to the original image as possible. In situations where predictive imaging is required, tone reproduction is of great importance to ensure that the conclusions drawn from a simulation are correct (Figure 1).

Figure 1: Ideal tone reproduction process (real-world scene → observer, versus scene → tone reproduction operator → display with limited capabilities → display observer, aiming for a perceptual match between the two).

2.1. The need for accurate tone reproduction

Tone reproduction is necessary for two main reasons: the first is to ensure that the wide range of light in a real-world scene is conveyed on a display with limited capabilities; the second is to produce an image which provokes the same responses as someone would have when viewing the scene in the real world. Physical accuracy alone of a rendered image does not ensure that the scene in question will have a realistic visual appearance when it is displayed. This is due to the shortcomings of standard display devices, which can only reproduce a luminance range of about 100:1 (measured in candelas per square metre, cd/m²), whereas human vision covers a range of about 100,000,000:1, from bright sunlight down to starlight; an observer's adaptation to their surroundings also needs to be taken into account. It is this high dynamic range (HDR) of human vision that needs to be scaled in some way to fit a low dynamic range display device.

In dark scenes our visual acuity — the ability to resolve spatial detail — is low and colours cannot be distinguished. This is due to the two different types of photoreceptor in the eye: rods and cones. It is the rods that provide us with achromatic vision at these scotopic levels, functioning within a range of 10⁻⁶ to 10 cd/m². Visual adaptation from light to dark is known as dark adaptation and can last for tens of minutes; an example is the length of time it takes the eye to adapt at night when the light is switched off. Conversely, light adaptation, from dark to light, can take only seconds, such as when leaving a dimly lit room and stepping into bright sunlight. The cones are active at these photopic levels of illumination, covering a range of 0.01 to 10⁸ cd/m². The overlap (the mesopic levels), when both rods and cones are functioning, lies between 0.01 and 10 cd/m². The range normally used by the majority of electronic display devices (cathode ray tubes, or CRTs) spans 1 to 100 cd/m². More detailed information on visual responses with regard to tone reproduction can be found in the papers by Ferwerda et al., Pattanaik et al. and Tumblin [13, 48, 50, 81].

Despite a wealth of psychophysical research, our knowledge of the Human Visual System (HVS) is still limited, but its ability to perceive such a wide dynamic range in the real world requires some form of reproduction to produce perceptually accurate images on display devices. Changes in the perception of colour and of apparent contrast also come into play when mapping values to a display device. The development of new psychophysically based visual models seeks to address these factors. To date, methods of tone mapping have tended to concentrate on singular aspects for singular purposes. This approach is understandable given the deficit in HVS knowledge, but it is inefficient, as the HVS responds as a whole rather than as a set of isolated functions. New psychophysical research is needed to address the workings of the HVS in their totality.

2.2. Tone mapping: art, television and photography

Tone mapping was developed for use in television and photography, but its origins can be seen in the field of art, where artists make use of a limited palette to depict high-contrast scenes. It takes advantage of the fact that the HVS has a greater sensitivity to relative rather than absolute luminance levels [26]. Initial work on accurate tone reproduction was insufficient for high dynamic range scenes: either the average real-world luminance was mapped to the display average, or the maximum non-light-source luminance was mapped to the maximum displayable value. However, this process failed to preserve visibility in high dynamic range scenes, as the very bright and very dim values were clamped to fall within the display range. In addition, all images were mapped irrespective of absolute value, resulting in the loss of an overall impression of brightness [38].
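As a concrete illustration of the simple mappings just described, the sketch below scales a scene so that its average luminance lands on an assumed display average and then clamps to the displayable range. It is our own minimal example rather than an operator from the literature; the display figures are placeholder assumptions, and the sketch exhibits exactly the failure noted above: extreme values are clipped and the overall impression of brightness is lost.

```python
# Minimal sketch of a naive "average-to-average" tone mapping with clamping.
# The display average and maximum below are assumed placeholder values in cd/m^2.

def naive_tone_map(world_luminances, display_avg=50.0, display_max=100.0):
    """Scale so the scene mean maps to the display mean, then clamp to [0, display_max]."""
    world_avg = sum(world_luminances) / len(world_luminances)
    scale = display_avg / world_avg
    return [min(max(lum * scale, 0.0), display_max) for lum in world_luminances]

# A high dynamic range scene, roughly from dim interior shadow to a sunlit surface:
scene = [0.01, 1.0, 300.0, 5000.0, 20000.0]
print(naive_tone_map(scene))
# The brightest value is clipped to 100.0 and the darkest values become
# indistinguishable from black, so visibility at both ends of the range is lost.
```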
The extensive use of tone reproduction in photography and television today is explained in Hunt's "The Reproduction of Colour in Photography, Print and Film" [26] and Poynton's "A Technical Introduction to Digital Video" [55], which give comprehensive explanations of the subject. This research area is outside the scope of this paper, and readers with an interest in it are referred initially to those works.

2.3. Gamma correction

Gamma is a mathematical curve that relates the brightness and contrast of an image to the signal driving the display: it describes the non-linear tonal response of the display device, and gamma correction compensates for that non-linearity. Brightness itself is a subjective measurement, formally defined as "the attribute of a visual sensation according to which an area appears to emit more or less light" [54]. For CRTs, the RGB values used to express colour actually specify the voltages applied to the electron guns, and the luminance generated is not linearly related to these voltages. In practice, the luminance produced on the display device is approximately proportional to the applied voltage raised to the power of 2.5, although the actual value of the exponent varies [54].
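The voltage-to-luminance behaviour described above is commonly modelled as L ∝ V^γ, with γ around 2.5 for a CRT, so a signal has to be pre-corrected with the inverse exponent to obtain the intended luminance. The sketch below is a minimal illustration under that assumption; the gamma value and the sample signal are ours, not measurements from the paper.

```python
# Minimal sketch of gamma correction for a CRT-like display, assuming gamma = 2.5.
# Signals and luminances are normalised to [0, 1].

GAMMA = 2.5

def display_response(voltage):
    """Approximate CRT behaviour: luminance proportional to voltage raised to gamma."""
    return voltage ** GAMMA

def gamma_correct(relative_luminance):
    """Pre-distort the stored value so the displayed luminance comes out as intended."""
    return relative_luminance ** (1.0 / GAMMA)

desired = 0.5                           # we want half of the display's peak luminance
encoded = gamma_correct(desired)        # ~0.757, the value actually sent to the display
shown   = display_response(encoded)     # ~0.5, the luminance produced on screen
print(encoded, shown)
```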