The Convergence of Graphics and Vision


Approaching similar problems from opposite directions, graphics and vision researchers are reaching a fertile middle ground. The goal is to find the best possible tools for the imagination. This overview describes cutting-edge work, some of which will debut at Siggraph 98.

Jed Lengyel, Microsoft Research

At Microsoft Research, the computer vision and graphics groups used to be on opposite sides of the building. Now we have offices along the same hallway, and we see each other every day. This reflects the larger trend in our field as graphics and vision close in on similar problems.

Computer graphics and computer vision are inverse problems. Traditional computer graphics starts with input geometric models and produces image sequences. Traditional computer vision starts with input image sequences and produces geometric models. Lately, there has been a meeting in the middle, and the center—the prize—is to create stunning images in real time.

Vision researchers now work from images backward, just as far backward as necessary to create models that capture a scene without going to full geometric models. Graphics researchers now work with hybrid geometry and image models. These models use images as partial results, reusing them to take advantage of similarities in the image stream. As a graphics researcher, I am most interested in the vision techniques that help create and render compelling scenes as efficiently as possible.

GOALS AND TRENDS

Jim Kajiya, assistant director of Microsoft Research, proposes that the goal for computer graphics is to create the best possible tool for the imagination. Computer graphics today seeks answers to the question, "How do I take the idea in my head and show it to you?" But imagination must be grounded in reality. Vision provides the tools needed to take the real world back into the virtual.

There are several current trends that make this an exciting time for image synthesis:

• The combined graphics and vision approaches have a hybrid vigor, much of which stems from sampled representations. This use of captured scenes (enhanced by vision research) yields richer rendering and modeling methods (for graphics) than methods that synthesize everything from scratch.
• Exploiting temporal and spatial coherence (similarities in images) via the use of layers and other techniques is boosting runtime performance.
• The explosion in PC graphics performance is making powerful computational techniques more practical.

VISION AND GRAPHICS CROSSOVER: IMAGE-BASED RENDERING AND MODELING

What are vision and graphics learning from each other? Both deal with the image streams that result when a real or virtual camera is exposed to the physical or modeled world. Both can benefit from exploiting image stream coherence. Both value accurate knowledge of surface reflectance properties. Both benefit from the decomposition of the image stream into layers.

The overlapping subset of graphics and vision goes by the somewhat unwieldy description of "image-based." In this article, I use graphics to describe the forward problem (image-based rendering) and vision for the inverse problem (image-based modeling).

Inverse problems have been a staple of computer graphics from the beginning. These inverse problems range from the trivial (mapping user input back to model space for interaction) to the extremely difficult (finding the best path through a controller state space to get a desired animation). This article discusses only forward and inverse imaging operations (that is, light transport and projection onto a film plane).

Image-based rendering and modeling has been a fruitful field for the past several years, and Siggraph 98 devotes two sessions to the subject. The papers I discuss later come from those two sessions.

Images have been used to increase realism since Ed Catmull and Jim Blinn first described texture mapping from 1974 to 1978. In a 1980 survey, Blinn described the potential of combining vision and graphics techniques (http://research.microsoft.com/~blinn/imagproc.htm). Renewed interest in the shared middle ground arose just a few years ago.

The new research area is split into roughly three overlapping branches. Here I touch on the graphics articles in the field. (A longer bibliography is available at http://computer.org/computer/co1998/html/r7046.htm.) For the vision perspective, see the survey articles by Sing Bing Kang [1] and Zhengyou Zhang [2]. Figure 1 provides a schematic illustration of the spectrum of these research interests.

[Figure 1. Graphics techniques (top) and vision techniques (bottom) arranged on a spectrum from more image-based (appearance-based; images) on the left to more physical- or geometry-based (geometric models) on the right. Traditional graphics starts on the right with geometric models and moves to the left to make images. Traditional vision starts on the left with images and moves to the right to make geometric models. Lately, there has been a meeting in the middle.]

Image-based rendering

Given a set of images and correspondences between the images, how do you produce an image from a new point of view?

In 1993, Eric Chen and Lance Williams described how to interpolate, or flow, images from one frame to the next instead of having to do full 3D rendering [3]. Chen then introduced QuickTime VR in 1995, showing how an environment map sampled from a real scene could be warped to give a strong sense of presence. Independently, Richard Szeliski described the use of image mosaics for virtual environments [4] in 1996 and, the following year with Harry Shum, described improved techniques for combining multiple images into a single panoramic image.

In 1995, Leonard McMillan and Gary Bishop introduced the graphics community to plenoptic modeling, which uses the space of viewing rays to calculate the appropriate warp to apply to image samples; they also introduced the term "image-based rendering" [5]. In 1996, Steven Seitz and Charles Dyer showed that the proper space for interpolation between two images is the common coordinate system defined by the synthetic camera view [6]. That same year, Steven Gortler and his colleagues at Microsoft Research introduced the lumigraph (http://research.microsoft.com/msrsiggraph), and Marc Levoy and Pat Hanrahan of Stanford University introduced light field rendering (http://www-graphics.stanford.edu/papers/light). Both systems describe a dense sampling of the radiance over a space of viewing rays.

This year, Jonathan Shade and colleagues describe new image-based representations that allow multiple depths per pixel [7], and Paul Rademacher and Gary Bishop describe a representation that combines pixels from multiple cameras into a single image [8].
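To make the warping step at the heart of these approaches concrete, the sketch below forward-warps a single color-plus-depth image into a new camera view: each source pixel is back-projected using its depth, re-projected through the new camera, and splatted with a z-buffer to resolve occlusion. This is my own minimal illustration, not code from any of the cited systems; the names and conventions (warp_depth_image, K as a 3x3 intrinsic matrix, pose_src and pose_dst as camera-to-world transforms, z-depth stored per pixel) are assumptions made for the example. Published warpers such as McMillan and Bishop's plenoptic warp or Shade and colleagues' layered depth images add occlusion-compatible traversal order, hole filling, and multiple depths per pixel.

```python
import numpy as np

def warp_depth_image(color, depth, K, pose_src, pose_dst):
    """Forward-warp a color image with per-pixel z-depth into a new camera view.

    color: (H, W, 3); depth: (H, W) z-depth along the optical axis;
    K: (3, 3) intrinsics; pose_src, pose_dst: (4, 4) camera-to-world matrices.
    Returns the warped color image and its z-buffer.
    """
    H, W = depth.shape
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)

    # Back-project every source pixel to a 3D point in world space.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1)
    pts_cam = rays * depth.reshape(1, -1)            # points in source-camera space
    pts_world = pose_src @ np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])

    # Re-project into the destination camera and splat, resolving occlusion with z.
    pts_dst = np.linalg.inv(pose_dst) @ pts_world
    proj = K @ pts_dst[:3]
    z = proj[2]
    safe_z = np.where(z != 0, z, 1.0)                # avoid division warnings; masked below
    x = np.round(proj[0] / safe_z).astype(int)
    y = np.round(proj[1] / safe_z).astype(int)
    ok = (z > 0) & (x >= 0) & (x < W) & (y >= 0) & (y < H)
    src_colors = color.reshape(-1, 3)
    for i in np.flatnonzero(ok):                     # nearest-pixel splat, no hole filling
        if z[i] < zbuf[y[i], x[i]]:
            zbuf[y[i], x[i]] = z[i]
            out[y[i], x[i]] = src_colors[i]
    return out, zbuf
```

Holes open up wherever the new view sees surfaces the source image never sampled; that is exactly the problem the multiple-depths-per-pixel and multi-camera representations cited above are designed to address.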
Image-based 3D-rendering acceleration

Given a traditional 3D texture-mapped geometric model, how can you use image caches (also known as sprites) to increase the frame rate or complexity of the model?

In 1992, Steve Molnar and colleagues described hardware to split the rendering of a scene into 3D rendering plus 2D compositing with z [9]. In 1994, Matthew Regan and Ronald Pose built inexpensive hardware to show how image caches with independent update rates exploit the natural coherence in computer graphics scenes [10].

In 1995, Paolo Maciel and Peter Shirley introduced the idea of using image-based "impostors" to replace the underlying geometric models [11]. More recently, Jonathan Shade and colleagues [7] and, independently, Gernot Schaufler [12] describe techniques that use depth information for better reconstruction.

In 1996, Jay Torborg and Jim Kajiya introduced Microsoft's Talisman architecture at Siggraph. (See http://research.microsoft.com/msrsiggraph for electronic versions of MSR Siggraph submissions.) The following year, John Snyder and I showed how to handle dynamic scenes and lighting factorization on Talisman, and this year, we describe how to sort a layered decomposition into sprites without splitting [13].
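As a rough illustration of that split (3D-render parts of the scene separately, then let a 2D stage combine the pieces using per-pixel depth), the toy compositor below merges independently rendered layers in the spirit of Molnar-style image composition or a Talisman-style back end. It is a sketch under assumed conventions, not the actual hardware algorithm: each layer is taken to supply a color buffer and a depth buffer, with depth set to infinity wherever the layer is empty.

```python
import numpy as np

def composite_layers_with_z(layers):
    """Merge independently rendered layers into one frame by per-pixel depth.

    layers: list of (color, depth) pairs, where color is (H, W, 3) and depth is
    (H, W) with np.inf wherever the layer covers nothing.
    """
    color0, depth0 = layers[0]
    frame = color0.copy()
    zbuf = depth0.copy()
    for color, depth in layers[1:]:
        closer = depth < zbuf              # pixels where this layer is nearer
        frame[closer] = color[closer]
        zbuf[closer] = depth[closer]
    return frame, zbuf
```

Because each layer carries its own depth, layers can be rendered, and re-rendered, at independent rates in the manner of Regan and Pose's image caches and still combine into a correct frame.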
Image-based modeling

Given an input set of images, what is the most efficient representation that will allow rendering from new points of view?

Paul Debevec, Camillo Taylor, and Jitendra Malik enhanced architectural modeling with simple modeling primitives aligned to images with vision techniques. They added realism with view-dependent texture mapping, which blends the original photographs according to the viewing direction.

Another longstanding trend in computer graphics is to pursue coherence wherever it can be found. A single layer of a scene, for example, has much more coherence than the combined scene. For vision, partitioning a scene into layers permits the independent analysis of each layer. Doing so avoids the difficulties in scene analysis that stem from overlapping layers and occlusion between layers. For graphics, partitioning a scene into layers allows rendering algorithms to take better advantage of spatial and temporal coherence.

3D rendering with sprites

Taking advantage of temporal coherence requires storing the partial results of a given frame for use in a later frame; in other words, trading off memory for computation. The partial results are image samples.
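To illustrate the memory-for-computation trade described above, here is a minimal sketch of an image cache (sprite) for a single layer. It is a toy under stated assumptions, not Talisman's algorithm or that of any cited paper: render_layer, project, and the affine-fitting helpers are hypothetical names introduced for the example. The cache reuses the layer's last rendered image, warped by a best-fit 2D affine transform, and re-renders only when the warp's screen-space error exceeds a threshold.

```python
import numpy as np

class SpriteCache:
    """Cache one layer's rendered image and reuse it while a 2D warp stays accurate.

    Assumed callbacks: render_layer(view) -> (image, anchor_points_3d) renders the
    layer and returns a few 3D points on it; project(points_3d, view) -> (N, 2)
    gives their pixel positions in a given view. Both are hypothetical.
    """

    def __init__(self, render_layer, project, error_threshold=1.0):
        self.render_layer = render_layer
        self.project = project
        self.threshold = error_threshold   # tolerated screen-space error in pixels
        self.cached = None

    def frame(self, view):
        """Return (sprite image, 3x3 warp) for this view, re-rendering only if needed."""
        if self.cached is None:
            self._refresh(view)
            return self.cached['image'], np.eye(3)
        src = self.cached['anchors_2d']
        dst = self.project(self.cached['anchors_3d'], view)
        warp = fit_affine(src, dst)
        err = np.max(np.linalg.norm(apply_affine(warp, src) - dst, axis=1))
        if err > self.threshold:           # coherence exhausted: re-render the layer
            self._refresh(view)
            return self.cached['image'], np.eye(3)
        return self.cached['image'], warp  # reuse the cached sprite, warped in 2D

    def _refresh(self, view):
        image, anchors_3d = self.render_layer(view)
        self.cached = {'image': image,
                       'anchors_3d': anchors_3d,
                       'anchors_2d': self.project(anchors_3d, view)}

def fit_affine(src, dst):
    """Least-squares 2D affine transform (as a 3x3 matrix) mapping src to dst."""
    A = np.hstack([src, np.ones((len(src), 1))])
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)   # shape (3, 2)
    warp = np.eye(3)
    warp[:2, :] = coeffs.T
    return warp

def apply_affine(warp, pts):
    """Apply a 3x3 affine warp to (N, 2) points."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return homog @ warp[:2, :].T
```

In a Talisman-like pipeline the returned warp would go to the compositor, which transforms the cached sprite in 2D every frame, while the 3D renderer refreshes only those layers whose error has grown too large.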