Dossier [email protected]

0. Introduction (Standards)
1. Recognition
2. Classification
3. Bias & Noise
4. Unknown Known (New Teapots)

0. Introduction (Standards)

I've been thinking about the problems of standards and categories in computer image recognition, how their abstractions result in a hegemonic form that averages and normalizes, and how drawing and modeling might be used to critique and resist these tendencies.

[Image: The Utah Teapot rendered four ways by Martin Newell]

To first talk about standards, I'm going to return to some work I did earlier in the semester on the Utah Teapot. The Utah Teapot was created in 1975 by Martin Newell, a computer graphics researcher in the Graphics Lab at the University of Utah Computer Science Department. Newell needed an object for testing 3D scenes, and his wife suggested their Melitta teapot. It was useful to computer graphics researchers primarily because it met certain geometric criteria: "It was round, contained saddle points, had a genus greater than zero because of the hole in the handle, could project a shadow on itself, and could be displayed accurately without a surface texture" (per Wikipedia). But it was also useful because it met some contextual or cultural criteria: it was a familiar, everyday object. The Utah Teapot thus became a standard in the computer graphics world.
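Those geometric criteria can be made concrete. One of them, a genus greater than zero because of the hole in the handle, can be computed for any closed, orientable mesh from the Euler characteristic, chi = V - E + F = 2 - 2g. A minimal sketch (the mesh counts below are illustrative, not taken from Newell's actual teapot patches):

```python
def genus(num_vertices, num_edges, num_faces):
    """Genus of a closed orientable mesh via the Euler characteristic.

    chi = V - E + F = 2 - 2g, so g = (2 - chi) / 2.
    """
    chi = num_vertices - num_edges + num_faces
    return (2 - chi) // 2

# A cube (topologically a sphere): V=8, E=12, F=6 -> genus 0.
print(genus(8, 12, 6))        # 0

# A 16x16 grid of quads wrapped into a torus, like a teapot handle:
# V=256, E=512, F=256 -> genus 1.
print(genus(256, 512, 256))   # 1
```

The hole in the handle is exactly what pushes the teapot's genus above zero.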
[Image: Original drawing of the Utah Teapot by Martin Newell]
[Images: Outlines of the Utah Teapot rotating about the y-axis (four frames), Alex Bodkin]
[Images: Dancers in the Wings and The Pink Dancers, Before the Ballet, Edgar Degas]
[Images: Outlines of the Utah Teapot rotating about the z-axis (four frames), Alex Bodkin]
[Image: Outline of the Utah Teapot zooming towards the "camera", Alex Bodkin]
[Image: Outline of the Utah Teapot rotating about the z-axis and y-axis, Alex Bodkin]
[Image: The Utah Teapot in Toy Story]

The Utah Teapot is no longer a useful standard for computer graphics. Instead, it has become a cultural object representing the idea of a standard. In this project, I am using the teapot to look at issues surrounding standards as they relate to how computers see and understand the world.

[Image: The Utah Teapot in The Simpsons]

1. Recognition

[Image: Landmark fitting used by FaceTracker]

Let's start with recognition. How do computers recognize stuff in the world? It depends on what you're trying to recognize. With faces, for example, one way is to have an abstracted "Reference Face" with landmarks: typical facial features that can be tracked, such as the eyes, nose, and mouth. This is the case with FaceTracker, created by Kyle McDonald based on the work of Jason Saragih. FaceTracker tries to deform this Reference Face (technically called an Active Shape Model) onto the actual face it is tracking, using the landmarks as guides. The Active Shape Model of the Reference Face was generated by training the FaceTracker software on a large number of faces in an image dataset.
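The "deform the Reference Face onto the tracked face" step can be sketched in miniature. FaceTracker's real fitting is far more involved; the sketch below does only the rigid part, a similarity-transform (Procrustes) alignment of reference landmarks onto observed ones, and the five 2D "landmarks" are invented for illustration:

```python
import numpy as np

def fit_similarity(ref, target):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping reference landmarks onto target landmarks (Procrustes).
    Ignores the reflection edge case for brevity."""
    mr, mt = ref.mean(axis=0), target.mean(axis=0)
    Xc, Yc = ref - mr, target - mt          # center both point sets
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt                              # best-fit rotation
    s = S.sum() / (Xc ** 2).sum()           # best-fit scale
    return s * Xc @ R + mt                  # reference deformed onto target

# Invented 2D "landmarks": two eyes, nose tip, two mouth corners.
ref = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0], [0.5, 2.0], [1.5, 2.0]])

# An "observed face": the same shape rotated, scaled, and shifted.
t = 0.3
rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
target = 1.5 * ref @ rot + np.array([4.0, -1.0])

fitted = fit_similarity(ref, target)
print(np.allclose(fitted, target))  # True
```

A real tracker fits dozens of landmarks and additionally allows non-rigid deformation learned from the training faces; this sketch covers only the rigid alignment stage.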
The FaceTracker software learned what the faces in the dataset looked like, and generated its Reference Face model from the characteristics those faces shared.

[Image: Reference Face used by FaceTracker]

"And you can see, it kind of matches up with what you would expect, but it's kind of weird in some ways. It looks weirdly elongated. This [the model's cheeks] should be out more, right along here, this [the model's right cheekbone] should be protruding more, but because it's not important, there's no features here to track, so there's no reason for the 3D model to represent that. All of the points here are in a good place, but the overall structure can be strange sometimes. Also here across the nose, it makes it look like there's a giant triangle right here. But again, that's because there's no features to track there." (Kyle McDonald, creator of FaceOSC)

[Images: Rafael Lozano-Hemmer, Levels of Confidence, 2015; Antonio Daniele, This is Not Private, 2015; Apple, ARFaceAnchor, 2017; Andreas Refsgaard, Eye Conductor, 2015; Dan Williams, Nick Clegg Looking Algorithmically Sad; Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaoou Tang, Facial Landmark Detection by Deep Multi-task Learning, 2014; Justus Thies, Face2Face, 2016; Facebook Research, DeepFace, 2014; Greg Borenstein, Pareidolia, 2012; Affectiva, AFFDEX SDK, 2016]

2. Classification

The same strategy used to train FaceTracker applies more generally to the training of any image recognition and classification algorithm. Image recognition algorithms are trained on image datasets, which contain images whose objects have already been tagged and categorized by humans. The algorithm learns what the characteristics of an object are, which allows it to recognize that object in images it has never seen before. It learns categories and classes, and then reinforces those categories and classes.

[Images: ImageNet, CIFAR-10, and Caltech 101 image datasets]

3. Bias & Noise

There are problems with image datasets that then cause issues with recognition and classification:

• Image datasets flatten, average, and normalize by making judgements about what is signal and what is noise, what is valuable and what is not.
• There aren't enough image datasets to train image recognition algorithms.
• Image datasets contain hidden biases which are only understood once an algorithm has been trained on the dataset.
• Categories allow for object recognition but not necessarily for an understanding of image context. The loss of context and narrative around an object or set of objects flattens them and removes meaning and value.

Image recognition is not enough, Ken Ryu: "As with language, photos need contextual intelligence."

"Current datasets, however, offer a somewhat limited range of image variability... The problems with such restrictions are two fold: (i) some algorithms may exploit them, yet will fail when the restrictions do not apply; and, related to this, (ii) the images are not sufficiently challenging for the benefits of more sophisticated algorithms to make a difference."

From an article on object detection in image recognition algorithms: "Datasets play a very important (and sometimes underrated) role in research. Every time a new dataset is released, papers are released, and new models are compared and often improved upon, pushing the limits of what's possible. Unfortunately, there aren't enough datasets for object detection. Data is harder (and more expensive) to generate, companies probably don't feel like freely giving away their investment, and universities do not have that many resources."

A Sea of Data: Apophenia and Pattern (Mis-)Recognition, Hito Steyerl: "Jacques Rancière tells a mythical story about how the separation of signal and noise might have been accomplished in Ancient Greece.
Sounds produced by affluent male locals were defined as speech, whereas women, children, slaves, and foreigners were assumed to produce garbled noise... Those identified as speaking were labeled citizens and the rest as irrelevant, irrational, and potentially dangerous nuisances. Similarly, today, the question of separating signal and noise has a fundamental political dimension."

When Discrimination Is Baked Into Algorithms, Lauren Kirchner: "From retail to real estate, from employment to criminal justice, the use of data mining, scoring software, and predictive analytics programs is proliferating at an exponential rate. Software that makes decisions based on data like a person's zip code can reflect, or even amplify, the results of historical or institutional discrimination. '[A]n algorithm is only as good as the data it works with,' Solon Barocas and Andrew Selbst write in their article 'Big Data's Disparate Impact,' forthcoming in the California Law Review. 'Even in situations where data miners are extremely careful, they can still affect discriminatory results with models that, quite unintentionally, pick out proxy variables for protected classes.'"

There is also one big problem inherent in the neural nets used to train image recognition algorithms: they are black boxes, both to the public and to their designers. That means that people don't really understand how they do what they do. I have no idea whether this will remain the case, but at present it's still something we can confront.

Whose black box do you trust?, Tim O'Reilly

4. Unknown Known (New Teapots)

Mechanical Turk Human Intelligence Task 1

On white paper, please create a black line drawing of a shape that meets all of the criteria listed below. Upload a photo or scan of the drawing as a JPEG file to http://imgbb.com/ and submit the link to me.

List of criteria:

• Use a black pen or marker on white paper.
• The shape must be round (or rounded in some way).
• The shape must have at least one visible hole in it.
• The shape must be smooth.
• The shape must have concave and convex curvature.
• The shape must not be symmetrical.

Feel free to be creative! The result will be part of a student project within the Department of Architecture at MIT. Thank you :)

[Images: Drawings by Mechanical Turk Workers (IDs A229SEF35EGTSF, A7JS3BROKU6S1, A381XM5ZPMVMDK, ADIAEAPYRI8CK, AEZY44DUJFT42, AJF954955YWZG, AGK30V341UN88)]
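Some of the criteria above can also be checked mechanically. For instance, "concave and convex curvature" on a closed outline means the turn direction changes sign somewhere along the curve. A hedged sketch (the sample outlines are invented; checking a real submission would first require vectorizing the scanned drawing):

```python
import numpy as np

def turn_signs(points):
    """Sign of the turn at each vertex of a closed polyline.
    Mixed positive and negative signs mean the outline has
    both convex and concave curvature."""
    p = np.asarray(points, dtype=float)
    prev = np.roll(p, 1, axis=0)   # previous vertex, wrapping around
    nxt = np.roll(p, -1, axis=0)   # next vertex, wrapping around
    a, b = p - prev, nxt - p       # incoming and outgoing edges
    cross = a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]
    return np.sign(cross)

def has_concave_and_convex(points):
    s = turn_signs(points)
    return bool((s > 0).any() and (s < 0).any())

square = [(0, 0), (1, 0), (1, 1), (0, 1)]          # every turn goes the same way
arrow = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]   # one reflex (concave) vertex
print(has_concave_and_convex(square))  # False
print(has_concave_and_convex(arrow))   # True
```

Other criteria, like "must not be symmetrical," resist this kind of formalization, which is part of what makes hand drawing a useful counterpoint to the standard.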