Estimating Reflectance Properties and Reilluminating Scenes Using Physically Based Rendering and Deep Neural Networks


Estimating Reflectance Properties and Reilluminating Scenes Using Physically Based Rendering and Deep Neural Networks

Farhan Rahman Wasee

A Thesis in The Department of Computer Science and Software Engineering

Presented in Partial Fulfillment of the Requirements for the Degree of Master of Computer Science (Computer Science) at Concordia University, Montréal, Québec, Canada

October 2020
© Farhan Rahman Wasee, 2020

CONCORDIA UNIVERSITY
School of Graduate Studies

This is to certify that the thesis prepared

By: Farhan Rahman Wasee
Entitled: Estimating Reflectance Properties and Reilluminating Scenes Using Physically Based Rendering and Deep Neural Networks

and submitted in partial fulfillment of the requirements for the degree of Master of Computer Science (Computer Science) complies with the regulations of this University and meets the accepted standards with respect to originality and quality.

Signed by the Final Examining Committee:
Dr. Jinqiu Yang, Chair
Dr. Ching Suen, Examiner
Dr. Jinqiu Yang, Examiner
Dr. Charalambos Poullis, Supervisor

Approved by Dr. Lata Narayanan, Chair, Department of Computer Science and Software Engineering, October 2020
Dr. Mourad Debbabi, Interim Dean, Faculty of Engineering and Computer Science

Abstract

Estimating material properties and modeling the appearance of an object under varying illumination conditions is a complex process. In this thesis, we address the problem by proposing a novel framework that re-illuminates scenes by recovering their reflectance properties. Following a divide-and-conquer approach, we recast the problem into its two constituent sub-problems.

For the first sub-problem, we develop a synthetic dataset of spheres with realistic materials. The dataset covers a wide range of material properties, rendered from varying viewpoints and under a fixed directional light. Images from the dataset are further processed into the reflectance maps used to train the network.

For the second sub-problem, reflectance maps are created for scenes by reorganizing the outgoing radiances recorded in the multi-view images. The network trained on the synthetic dataset is used to infer the material properties of the reflectance maps acquired for the test scenes. These predictions are then used to relight the scenes from novel viewpoints and under different lighting conditions using path tracing.

A number of experiments are conducted, and performance is reported using different metrics to justify our design decisions and the choice of our network. We also show that, given multi-view images, the camera properties, and the geometry of a scene, our technique can predict the reflectance properties with our trained network within seconds. Finally, we present visual results of re-illumination on several scenes under different lighting conditions.

Acknowledgments

At the very beginning, I would like to take a moment to express my gratitude towards everyone who directly and indirectly played a role in helping me complete this thesis. Firstly, I am very grateful to my supervisor, Dr. Charalambos Poullis, for believing in me and for making me a part of his wonderful lab. I have enjoyed every moment of working side by side with him.
His in-depth knowledge, patience, and guidance during the most difficult phases of the project helped me navigate my way through the last couple of years. It would not have been possible if it were not for him. I would also like to thank the members of my lab and my friends Bodhiswatta [Bodhi] Chatterjee and Pinjing Xu [Alex] for their countless help and advice throughout the time I was a part of the ICT Lab. I would like to thank the respected committee members and the other professors and staff of Concordia University for the help and support I have received from them.

The research done as part of this thesis is based upon work supported by the Natural Sciences and Engineering Research Council of Canada Grants DG-N01670 (Discovery Grant) and DND-N01885 (Collaborative Research and Development with the Department of National Defence Grant). I want to thank Jonathan Fournier from Valcartier DRDC, and Hermann Brassard, Sylvain Pronovost, Bhakti Patel, and Scott McAvoy from Presagis Inc. Canada, for their invaluable discussions and assistance.

Last but not least, I would like to thank my family for supporting me throughout my life. It would not have been possible at all if it had not been for you and your love. You are the source of my inspiration, joy, and all the motivation.

Contents

List of Figures
List of Tables
1 Introduction
  1.1 Thesis Organization
2 Literature Review
  2.1 Deep Learning and Convolutional Neural Networks
    2.1.1 LeNet
    2.1.2 AlexNet
    2.1.3 ResNet
    2.1.4 ResNeXt
    2.1.5 DenseNet
    2.1.6 EfficientNet
  2.2 Appearance Modeling and Inverse Rendering
    2.2.1 Procedural-based
    2.2.2 Deep Learning based
  2.3 Conclusion
3 Background
  3.1 Rendering and Different Rendering Techniques
    3.1.1 Rasterization
    3.1.2 Ray casting
    3.1.3 Raytracing
    3.1.4 Path tracing
  3.2 Reflection Models
    3.2.1 Bidirectional Reflectance Distribution Function
    3.2.2 BRDF Models and Acquisition
4 Estimating Reflectance Properties and Relighting
  4.1 Overview
    4.1.1 Materials
    4.1.2 Lights
    4.1.3 Camera viewpoints
  4.2 Dataset and Reflectance Maps Generation
    4.2.1 Generating realistic spheres dataset
    4.2.2 Generating reflectance maps from spheres
  4.3 Network Training
    4.3.1 Network Architecture
    4.3.2 Loss function
    4.3.3 Data augmentations
    4.3.4 Training and Inference
  4.4 Evaluation and Results
    4.4.1 Comparison of networks
    4.4.2 Analysis of image relighting
5 Conclusion and Future work
  5.1 Concluding Remarks
  5.2 Future Work

List of Figures

Figure 2.1: The architecture diagram of the LeNet model [56], designed to recognize handwritten characters from images.

Figure 2.2: The architecture of AlexNet [53], which revolutionized image classification and introduced several new ideas, including the use of data augmentation and GPUs for training.

Figure 2.3: Two different types of residual blocks. The left one corresponds to a residual block of the ResNet-34 model [32]; the right one shows a bottleneck block for ResNet-50/101/152 [32].

Figure 2.4: The fundamental concept of parallel paths in ResNeXt [102]. On the left is the basic residual block of a ResNet; on the right, 32 parallel blocks of the same type converge into one. The cardinality is 32 in this case.
Figure 2.5: The flow of data through a DenseNet [37]. There are three dense blocks here; the layers between two dense blocks are referred to as transition layers, and they reduce the size of the feature maps through convolution and pooling operations. Each dense block has 5 layers with a growth rate k = 4, and each of these layers takes the feature maps of the previous layers as input.

Figure 2.6: The motivation behind the architectural design of EfficientNet [92] in contrast to other conventional ConvNets. Panels (a), (b), (c) and (d) show how each dimension, i.e. width, number of channels, depth, etc., is adjusted to achieve optimal performance. In comparison, (e) shows how EfficientNet uses all of these factors together and uniformly scales them with a fixed ratio to find the optimal results more efficiently.

Figure 3.1: The raytracing process involves tracing rays from the camera through the image plane towards the scene geometry. This diagram from [16] shows rays traced for some pixels on the image plane that form the image of the sphere. The view rays intersect the sphere, from where a shadow ray is spawned towards the light source. The bottommost view ray hits a surface point from which the shadow ray cannot reach the light, hence creating the shadow.

Figure 3.2: Three different renders generated using path tracing, with 4, 32, and 256 samples respectively. The noise gets stronger as the number of samples is lowered; as the number of samples grows, more rays end up reaching the light source, thereby producing a cleaner and smoother image.

Figure 3.3: The figure [86] illustrates the measurements involved in computing the BRDF. A distant light source emits radiance Li. The incoming angles are expressed using the zenith and azimuth angles denoted by (θi, φi). The sensor captures the reflected radiance Lo, which creates an angle represented by (θo, φo). (Variable symbols are edited from the original figure to match our equations.)

Figure 4.1: Each row highlights one distinct property used to define a material. The topmost row, from left to right, illustrates the effect of Kd: changing its value alters the diffuse color of the material, tweaked in this image to red, then green, and finally blue. The second row demonstrates the effect of Ks, the specular component, on the overall color of the sphere. Ks controls the reflectivity, and also the color of the specular hotspot.
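The caption of Figure 3.3 fixes the notation for the BRDF, and Figure 4.1 describes Kd and Ks as the diffuse and specular controls of a material. For orientation, the standard BRDF definition in that notation is given below, together with a generic Phong-style reflection model in which Kd and Ks appear as the diffuse and specular coefficients; the second line is an assumed generic form, since the thesis's exact material parameterization is not shown in this excerpt.

```latex
% Standard BRDF definition, in the notation of Figure 3.3:
f_r(\theta_i,\phi_i;\,\theta_o,\phi_o)
  = \frac{\mathrm{d}L_o(\theta_o,\phi_o)}
         {L_i(\theta_i,\phi_i)\,\cos\theta_i\,\mathrm{d}\omega_i}

% A generic Phong-style model (assumed form) in which K_d and K_s
% of Figure 4.1 act as the diffuse and specular coefficients:
L_o = K_d\,(\mathbf{n}\cdot\mathbf{l})\,L_i
    + K_s\,(\mathbf{r}\cdot\mathbf{v})^{\alpha}\,L_i
```

Here n is the surface normal, l the light direction, r the mirror reflection of l about n, v the view direction, and α the shininess exponent.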
Recommended publications
  • Neural Scene Graph Rendering
Neural Scene Graph Rendering
JONATHAN GRANSKOG, TILL N. SCHNABEL, FABRICE ROUSSELLE, JAN NOVÁK (NVIDIA)

We present a neural scene graph: a modular and controllable representation of scenes with elements that are learned from data. We focus on the forward rendering problem, where the scene graph is provided by the user and references learned elements. The elements correspond to geometry and material definitions of scene objects and constitute the leaves of the graph; we store them as high-dimensional vectors. The position and appearance of scene objects can be adjusted in an artist-friendly manner via familiar transformations, e.g. translation, bending, or color hue shift, which are stored in the inner nodes of the graph. In order to apply a (non-linear) transformation to a learned vector, we adopt the concept of linearizing a problem by lifting it into higher dimensions: we first encode the transformation into a high-dimensional matrix and then apply it by standard matrix-vector multiplication. The transformations are encoded using neural networks. We render the scene graph using a streaming neural renderer, which can handle graphs with a varying number of objects, and thereby facilitates scalability. Our results demonstrate precise control over the learned object representations in a number of animated 2D and 3D scenes.

[Figure: an example neural scene graph; inner nodes store encoded transformations (translation, deformation, rotation, scaling, diffuse hue shift) as high-dimensional matrices, and leaves store learned geometry and material vectors g1...g4, m1...m4.]
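A minimal sketch of the lifting idea described above: a small network encodes transformation parameters into a d x d matrix, which is applied to a learned element vector by matrix-vector multiplication. This is illustrative only; layer sizes, names, and dimensions are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class TransformEncoder(nn.Module):
    """Encodes transformation parameters (e.g. a translation) into a
    d x d matrix that acts on learned element vectors via matvec.
    Illustrative sketch; architecture is not from the paper."""
    def __init__(self, param_dim: int, d: int, hidden: int = 128):
        super().__init__()
        self.d = d
        self.mlp = nn.Sequential(
            nn.Linear(param_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, d * d),
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        # params: (batch, param_dim) -> (batch, d, d)
        return self.mlp(params).view(-1, self.d, self.d)

d = 64                                          # dimensionality of learned vectors
encoder = TransformEncoder(param_dim=3, d=d)    # e.g. a 3D translation
g = torch.randn(1, d)                           # a learned geometry vector (leaf)
T = encoder(torch.tensor([[0.5, 0.0, -0.2]]))   # lift the transformation to a matrix
g_transformed = torch.bmm(T, g.unsqueeze(-1)).squeeze(-1)  # matrix-vector product
```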
  • Lecture 2 3D Modeling
Lecture 2: 3D Modeling. Dr. Shuang LIANG, School of Software Engineering, Tongji University, Fall 2012.

Lecturer: Dr. Shuang LIANG
• Assistant professor, SSE, Tongji
  – Education: B.Sc in Computer Science, Zhejiang University, 1999-2003; PhD in Computer Science, Nanjing University, 2003-2008; visits to Utrecht University, 2007 and 2008
  – Research fellowships: The Chinese University of Hong Kong, 2009; The Hong Kong Polytechnic University, 2010-2011; The City University of Hong Kong, 2012
• Contact: Office: Room 442, JiShi Building, JiaDing Campus, Tongji (temporary); Email: [email protected]

Outline: What is a 3D model? Usage of 3D models. Classic models in computer graphics. 3D model representations: raw data, solids, surfaces.

What is a 3D model? A 3D model represents a 3D object using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. It is a collection of data (points and other information); a minimal sketch of such a representation follows this entry.

What is 3D modeling? The process of developing a mathematical representation of any three-dimensional surface of an object via specialized software.
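As a small illustration of the definition above (mine, not from the lecture), an indexed triangle mesh stores exactly "points connected by geometric entities": a vertex array and a face array of vertex indices, from which quantities like face normals follow directly.

```python
import numpy as np

# A minimal indexed triangle mesh: a tetrahedron.
# vertices: points in 3D space; faces: triangles as triples of vertex indices.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
])

# Per-face normals computed from the connectivity and the point data.
a, b, c = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
normals = np.cross(b - a, c - a)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
```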
  • Matching 3D Models with Shape Distributions
Matching 3D Models with Shape Distributions
Robert Osada, Thomas Funkhouser, Bernard Chazelle, and David Dobkin, Princeton University

Abstract. Measuring the similarity between 3D shapes is a fundamental problem, with applications in computer vision, molecular biology, computer graphics, and a variety of other fields. A challenging aspect of this problem is to find a suitable shape signature that can be constructed and compared quickly, while still discriminating between similar and dissimilar shapes. In this paper, we propose and analyze a method for computing shape signatures for arbitrary (possibly degenerate) 3D polygonal models. The key idea is to represent the signature of an object as a shape distribution sampled from a shape function measuring global geometric properties of an object. The primary motivation for this approach is to reduce the shape matching problem to the comparison of probability distributions.

(CAD models are a simple example), the vast majority of 3D objects available via the World Wide Web will not have them, and there are few standards regarding their use. In general, 3D models will be acquired with scanning devices, or output from geometric manipulation tools (file format conversion programs), and thus they will have only geometric and appearance information, usually completely void of structure or semantic information. Automatic shape-based matching algorithms will be useful for recognition, retrieval, clustering, and classification of 3D models in such databases. Databases of 3D models have several new and interesting characteristics that significantly affect shape-based matching algorithms. Unlike images and range scans, 3D models do not depend on the configuration of cameras, light sources, or surrounding objects (e.g., mirrors).
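The shape-distribution idea admits a compact implementation. Below is a minimal sketch (my code, not the authors') of the D2 variant from this paper: sample pairs of random points on the mesh surface, take their pairwise distances, and histogram them into a signature. Area-weighted face sampling keeps the distribution independent of tessellation.

```python
import numpy as np

def sample_surface(vertices, faces, n):
    """Sample n points uniformly on a triangle mesh (area-weighted)."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    idx = np.random.choice(len(faces), size=n, p=areas / areas.sum())
    # Uniform barycentric coordinates via the square-root trick.
    r1, r2 = np.random.rand(n, 1), np.random.rand(n, 1)
    u = 1.0 - np.sqrt(r1)
    v = np.sqrt(r1) * (1.0 - r2)
    w = np.sqrt(r1) * r2
    return u * a[idx] + v * b[idx] + w * c[idx]

def d2_signature(vertices, faces, n_pairs=100_000, bins=64):
    """D2 shape distribution: histogram of distances between random surface points."""
    p = sample_surface(vertices, faces, n_pairs)
    q = sample_surface(vertices, faces, n_pairs)
    d = np.linalg.norm(p - q, axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max()), density=True)
    return hist
```

Two signatures can then be compared with any distance between histograms, reducing shape matching to comparing probability distributions, as the abstract describes.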
  • Standard Elevation Models for Evaluating Terrain Representation
Standard elevation models for evaluating terrain representation

Patrick Kennelly (Long Island University, corresponding author), Tom Patterson (US National Park Service, Ret.), Alexander Tait (National Geographic Society), Bernhard Jenny (Monash University), Daniel Huffman (somethingaboutmaps), Sarah Bell (Esri), Brooke Marston

Keywords: terrain representation, digital elevation model, relief shading, standard data models

Standard Elevation Models. We propose the use of standard elevation models to evaluate and compare the quality of various relief shading and other terrain rendering techniques. These datasets will cover various landforms, be available at no cost to the user, and be free of common data imperfections such as missing data values, resampling artifacts, and seams. Datasets will be available at multiple map scales over the same geographic area for multi-scale analysis.

Utilizing a standard data model for testing and comparing methods is a common practice among many disciplines. Furthermore, the use of digital models to test rendering techniques based on qualitative visual production has been well established for decades. Following the generation of the first digital image in 1957, image processing and analysis required standard test images upon which methods could be tested and compared. During the 1960s and 1970s, many well-known standard test images emerged from this need for a comparative technique model. Today, institutions like the University of Southern California's Signal and Image Processing Institute maintain digital image databases for the primary purpose of supporting image processing and analysis research, including geographic imaging processes.
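The relief-shading techniques such a benchmark would evaluate reduce, in the simplest case, to an analytical hillshade computed from DEM gradients. A minimal sketch of the classic formulation follows (standard hillshading, not from the abstract); the cell size, azimuth, and altitude values are illustrative parameters.

```python
import numpy as np

def hillshade(dem, cellsize=30.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Classic analytical hillshade from a DEM (2D array of elevations)."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    # Elevation gradients; np.gradient returns d/drow, d/dcol.
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Example on a synthetic Gaussian hill standing in for a test landform:
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
dem = 500.0 * np.exp(-(x**2 + y**2) * 8.0)
img = hillshade(dem)
```

A standard elevation model would let outputs of this kind, and of more sophisticated renderers, be compared pixel-for-pixel over identical terrain.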
  • Gaboury 10/01/13
Jacob Gaboury, 10/01/13
Object Standards, Standard Objects

In December 1949 Martin Heidegger gave a series of four lectures in the city of Bremen, then an isolated part of the American occupation zone following the Second World War. The event marked Heidegger's first speaking engagement following his removal from his Freiburg professorship by the denazification authorities in 1946, and his first public lecture since his foray into university administration and politics in the early 1930s. Titled Insight Into That Which Is [Einblick in das, was ist] [1], the lectures mark the debut of a new direction in Heidegger's thought and introduce a number of major themes that would be explored in his later work. Heidegger opened the Bremen lectures with a work simply titled "The Thing", which begins with a meditation on the collapsing of distance enabled by modern technology. "Physical distance is dissolved by aircraft. The radio makes information instantly available that once went unknown. The formerly slow and mysterious growth of plants is laid bare through stop-action photography." Yet Heidegger argues that despite all conquest of distances the nearness of things remains absent.

What about nearness? How can we come to know its nature? Nearness, it seems, cannot be encountered directly. We succeed in reaching it rather by attending to what is near. Near to us are what we usually call things. But what is a thing?

This question motivates the lecture, and indeed much of Heidegger's later thought.

[1] Heidegger, Martin, trans. Andrew J. Mitchell. Bremen and Freiburg Lectures: Insight Into That Which Is and Basic Principles of Thinking.
  • 3D Computer Graphics Compiled By: H
animation, charge-coupled device, charts on SO(3), chemistry, chirality, chromatic aberration, chrominance, Cinema 4D, cinematography, CinePaint, circle, circumference, ClanLib, Class of the Titans, clean room design, Clifford algebra, clip mapping, clipping (computer graphics), Cocoa (API), CODE V, collinear, collision detection, color, color buffer, comic book, Comm. ACM, Command & Conquer: Tiberian series, commutative operation, compact disc, comparison of Direct3D and OpenGL, compiler, Compiz, complement (set theory), complex analysis, complex number, complex polygon, Component Object Model, composite pattern, compositing, compression artifacts, computational fluid dynamics, computational geometry, Catmull-Clark subdivision, computed axial tomography, computed tomography, cel-shaded, computer animation, computer-aided design, computer-aided manufacturing, Cg, computer cluster, computer display, computer file, computer game, computer-generated imagery, computer graphics, computer hardware, Computer History Museum, computer keyboard, computer mouse, computer program, computer programming, computer science, computer software, computer storage, concave, cone (solid), cone tracing, conjugacy class (conjugacy as group action), Clipmap, COLLADA, consortium, constraints, constructive solid geometry, continuous function, contrast ratio, conversion between
  • Dossier [email protected] 0. Introduction (Standards) 1
Dossier, [email protected]

Contents: 0. Introduction (Standards); 1. Recognition; 2. Classification; 3. Bias & Noise; 4. Unknown Known (New Teapots)

0. Introduction (Standards)

I've been thinking about the problems of standards and categories in computer image recognition, how their abstractions result in hegemonic form that averages and normalizes, and how drawing and modeling might be used to critique and resist these tendencies.

[Image: The Utah Teapot rendered four ways by Martin Newell]

To first talk about standards, I'm going to return to some work I did earlier in the semester on the Utah Teapot. The Utah Teapot was created by Martin Newell, a computer graphics researcher in the Graphics Lab at the University of Utah Computer Science Department, in 1975. Newell needed an object for testing 3D scenes, and his wife suggested their Melitta teapot. It was useful to computer graphics researchers primarily because it met certain geometric criteria. "It was round, contained saddle points, had a genus greater than zero because of the hole in the handle, could project a shadow on itself, and could be displayed accurately without a surface texture" (per Wikipedia). But it also was useful because it met some contextual or cultural criteria: it was a familiar, everyday object. The Utah Teapot thus became a standard in the computer graphics world.

[Images: original drawing of the Utah Teapot by Martin Newell; outline of the Utah Teapot rotating about the y-axis, Alex Bodkin]
  • Classic Models in Computer Graphics • 3D Model Representations • Raw Data • Solids • Surfaces
Lecture 2: 3D Modeling. Dr. Shuang LIANG, School of Software Engineering, Tongji University, Spring 2013.

Today's Topics: What is a 3D model? Usage of 3D models. Classic models in computer graphics. 3D model representations: raw data, solids, surfaces.

What is a 3D model? A 3D model represents a 3D object using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. It is a collection of data (points and other information).

What is 3D modeling? The process of developing a mathematical representation of any three-dimensional surface of an object via specialized software.

Usage of a 3D model: the medical industry uses detailed models of organs; the movie industry uses them as characters
  • This Is an Open Access Document Downloaded from ORCA, Cardiff University's Institutional Repository
This is an Open Access document downloaded from ORCA, Cardiff University's institutional repository: http://orca.cf.ac.uk/123147/

This is the author's version of a work that was submitted to / accepted for publication. Citation for final published version: Song, Ran, Liu, Yonghuai and Rosin, Paul L. 2019. Distinction of 3D objects and scenes via classification network and Markov random field. IEEE Transactions on Visualization and Computer Graphics. 10.1109/TVCG.2018.2885750. Publisher's page: http://dx.doi.org/10.1109/TVCG.2018.2885750

Please note: changes made as a result of publishing processes such as copy-editing, formatting and page numbers may not be reflected in this version. For the definitive version of this publication, please refer to the published source. You are advised to consult the publisher's version if you wish to cite this paper. This version is being made available in accordance with publisher policies. See http://orca.cf.ac.uk/policies.html for usage policies. Copyright and moral rights for publications made available in ORCA are retained by the copyright holders.

Distinction of 3D Objects and Scenes via Classification Network and Markov Random Field
Ran Song, Yonghuai Liu, Senior Member, IEEE, and Paul L. Rosin

Abstract: An importance measure of 3D objects inspired by human perception has a range of applications, since people want computers to behave like humans in many tasks. This paper revisits a well-defined measure, distinction of a 3D surface mesh, which indicates how important a region of a mesh is with respect to classification.
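The title pairs per-region importance scores with a Markov random field for spatial coherence. As a rough illustration of that second stage (my sketch, not the authors' formulation), per-face scores from a classifier can be regularized by an MRF-flavored iteration that balances a data term against a neighborhood smoothness term:

```python
import numpy as np

def smooth_scores(scores, neighbors, lam=0.5, iters=20):
    """MRF-flavored smoothing: each face's distinction score is pulled
    toward the mean of its adjacent faces (data term vs. smoothness term).
    scores: (F,) initial per-face scores, e.g. from a classification network.
    neighbors: list of index lists, neighbors[f] = faces adjacent to face f.
    lam: weight of the smoothness term in [0, 1]."""
    data = scores.copy()
    s = scores.copy()
    for _ in range(iters):
        nbr_mean = np.array([s[nbr].mean() if len(nbr) else s[f]
                             for f, nbr in enumerate(neighbors)])
        s = (1.0 - lam) * data + lam * nbr_mean
    return s

# Example: four faces in a ring, one noisy outlier score gets pulled in line.
scores = np.array([0.9, 0.1, 0.85, 0.8])
neighbors = [[1, 3], [0, 2], [1, 3], [2, 0]]
print(smooth_scores(scores, neighbors))
```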
  • Computer Graphics CS
Computer Graphics CS 354

Introductions: I am Dr. Sarah Abraham (email: [email protected]; office hours: MW 4:00-6:00, T 11:00-noon). The TA is Josh Vekhter (email: [email protected]; office hours: TBD).

Assignments and grading: homeworks and quizzes (20%); 5 projects, 2-3 weeks each (50%); final project (20%), open-ended, including a presentation on an "advanced" topic in graphics; participation (10%).

Project logistics: you can work in pairs; both students get the same grade; late slips are shared (both must submit). We are moving to WebGL, so projects will run in the browser for compatibility.

Classroom logistics: lecture time and in-class discussions; concepts stick better when they're hands-on.

Prerequisites:
• Linear algebra. CG could be "applied linear algebra"; it will show up over and over again, and is reviewed in class and in worksheets. Stop and ask questions. (A tiny example follows this entry.)
• Basic C++. C++ is performant and an incredibly good skill for working in computer graphics. We are working in TypeScript and WebGL for cross-compatibility and to simplify the learning curve.
• Engineering large software systems: debugging complex code, using poorly-documented libraries, time management, good project planning.

What this class is NOT about: a 3D modeling tutorial, or a C++ or GLSL (shading language) tutorial. Optional textbook: the "red book". But I will recommend working through http://www.opengl-tutorial.org/ to help you get your bearings!

A Brief History of Graphics. Dark Ages: blinking lights, Teletype [UNIVAC I, 1951; Model 33, 1963]. 1940s: cathode ray tubes (CRTs), originally used as computer memory. 1960s: CRTs can do basic vector graphics in real time by the early 60s [PDP-1 running "Spacewar!", 1962]; computer terminals ("virtual teletype") mass-produced in '67 [DataPoint 3300]. 1968: ray tracing invented [Ray traced building.
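A tiny example of the "applied linear algebra" point above (mine, not from the course): a 4x4 homogeneous matrix composes a rotation and a translation, and transforms points with a single matrix product.

```python
import numpy as np

# Rotate 90 degrees about z, then translate by (2, 0, 0),
# composed into one 4x4 homogeneous transform.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0, 0],
              [np.sin(theta),  np.cos(theta), 0, 0],
              [0,              0,             1, 0],
              [0,              0,             0, 1]])
T = np.eye(4)
T[:3, 3] = [2.0, 0.0, 0.0]
M = T @ R                             # composing transforms = matrix multiplication

p = np.array([1.0, 0.0, 0.0, 1.0])    # homogeneous point
print(M @ p)                          # -> [2., 1., 0., 1.]
```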