Introduction to Computer Graphics
Ming-Hwa Wang, Ph.D.
COEN 148/290 Computer Graphics
Department of Computer Engineering
Santa Clara University

Mandatory Basics
  an image is a two-dimensional array of picture elements (pixels); the resolution of an image refers to how closely packed the pixels are when displayed
  continuous images (real images) vs. discrete images (computer graphics on raster displays)
  there is no such thing as a smooth curve in a computer image; the illusion is achieved mostly by using large quantities of pixels (sometimes with anti-aliasing)
  color space:
    red, green, blue (RGB) color space - internal representation
    color palette using a color lookup table (LUT): 256 levels per RGB component require 8 bits or a byte; the LUT converts a color number (color index value or pseudo color) into its RGB color
    hue, intensity (or value, or lightness), saturation (HSV or HLS) color system - human representation
  computer graphics: the process of converting a mathematical or geometric description of an object (the model) into a two-dimensional projection (a visualization) that simulates the appearance of a real object
    limitations: computer graphics can't do depth of field and motion blur as cameras can, and cameras do not have near and far clipping planes as computer graphics has
    a de facto approach: start with a very detailed geometric description, apply a series of transformations, and imitate reality by making the object look solid and real (rendering)
      not sufficient for animation; synthesis methods and modeling methods are also needed
  the main elements of a graphics system: application program, process with interaction, frame store, video controller, raster display
  photo-realism: make a graphics image of an object or scene indistinguishable from a TV image or photograph
  computer graphics is not an exact science; it involves much simplification of the original mathematical model so that it can be implemented as a computer graphics algorithm
  persistence of vision: human eyes continue to perceive light for a short time after the light has gone away (also the phosphor persistence for CRTs)

Hardware Basics: interactive output technologies
  monitor
    type:
      cathode ray tube (CRT)
      flat panel (LCD or plasma): thinner, lighter, less power, crisper display
      head mounted display: with optics to feel like large screens, separate displays for the eyes to see 3D; can sense your head position/orientation and, with a position-sensitive input device, make virtual reality possible
    raster scan: the dot that can be lit up is rapidly swept across the monitor face in a pattern
    size: measured in screen diagonal length
    aspect ratio: ratio of width / height
    triads: 3 colors are arranged in a group, and triads are arranged in a hexagonal pattern so that each triad has six neighbors
    dot pitch: the smallest detail a monitor can possibly show or resolve; the distance from any one of a triad to each of its six neighbors
    scan rate or refresh rate: the horizontal scan rate is the rate at which individual scan lines are drawn (15-100 KHz), and the vertical scan rate indicates how often the entire image is refreshed (60-70 Hz)
    interlacing (alternating even field and odd field): refreshes only every other scan line on each vertical pass and requires only half as many scan lines to be drawn; reduces hardware cost but flickers quite noticeably
  hard copy output
    printer
      inkjet: 300-700 DPI (dots per inch), slow, inexpensive, quality ok
      laser printer: faster, better quality, more expensive
      wax transfer: quality even better, slower, more expensive
      dye sublimation: ultimate image quality, and very expensive
    photograph recorder
  display controller (video card or graphics card)
    the video random access memory (VRAM) bitmap is the heart of any display controller; it is where the pixels are kept, and the bitmap divides the remaining display controller into the drawing "front end" and the video "back end"
    drawing front end: off-loads the processor by receiving drawing commands from the processor; the pixels are "drawn" by writing the new values into the bitmap; most display controllers also allow the processor to directly read/write the pixels in the bitmap
    video back end: interprets the pixel values into their colors and creates the video signals that drive the monitor (see the LUT sketch below)
    types:
      2D display controller or graphics user interface (GUI) engine
      2½D display controller: allows the color to vary across the object being drawn using bilinear interpolation, dithering, and Z buffering
      full 3D display controller: does transformation, lighting, and other advanced effects
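To make the color palette idea concrete, here is a minimal C sketch (not from the notes; the structure and table names are assumptions) of a color lookup table turning an 8-bit color index stored in the bitmap into an RGB value, the way a palette-based video back end or application would:

/* Minimal sketch (assumed names/sizes): an 8-bit palette pixel being
 * interpreted through a 256-entry color lookup table. */
#include <stdio.h>
#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB;   /* one LUT entry: true-color value */

static RGB lut[256];                        /* 256-entry color lookup table */

/* Convert a color index (pseudo color) stored in the bitmap into RGB. */
static RGB lookup(uint8_t index) { return lut[index]; }

int main(void) {
    lut[7] = (RGB){255, 128, 0};            /* suppose index 7 means orange */
    RGB c = lookup(7);
    printf("index 7 -> (%d, %d, %d)\n", c.r, c.g, c.b);
    return 0;
}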

Graphic Primitives
  graphics primitives: can be drawn directly into bitmaps
  point primitives are the smallest thing we can draw (pixels)
  vector primitives
    line: artifact - a diagonal line has about 29% fewer pixels than a horizontal or vertical line of the same length (1 - 1/sqrt(2) is roughly 29%), so it appears thinner and dimmer
    polyline: more efficient than an equivalent list of independent lines
  polygon primitives
    convex polygons: most graphics systems draw simple convex polygons directly
      o triangles: always convex and simple
    concave polygons: easier to decompose into triangles
    triangle strip or Tstrip: only 1 point is needed to specify each new triangle; often used to draw curved surfaces
      o if each triangle is drawn individually, the shared corner may be computed 6 times; meshes and strips avoid/reduce this overhead
    quad mesh
  complex shapes can be drawn by lots of simple primitives

Modeling
  modeling: the process of describing an object or scene so that we can eventually draw it (and makes it easy to draw)
    most real objects are opaque and we can only see their surface; surfaces are often easier to describe than volumes, and most 3D rendering hardware can draw only surfaces
  model types
    explicit surface model: an outright list of all surface elements, or patches, that make up a surface
      polygons: the predominant method for modeling surfaces using tessellation; need to find the appropriate tessellation level and make them smooth
      curved patches: model small regions of the curved surface with patches of convenient mathematical functions; can get a closer fit to the intended surface with fewer patches, more compact and accurate, but they need to be decomposed into polygons for rendering on hardware
        o biquadratic or bicubic patches
      spline patches: a curved patch where the mathematical function is governed by a set of control points; the patch takes on a smooth shape based on the position of these control points
        o nonuniform rational B-spline (NURBS), B-spline, beta spline, etc.
    implicit surface: described by a mathematical function that can answer whether any point you ask about is to one side, to the other side, or on the surface; simpler, but eventually surface points will have to be found so that the surface can be drawn
      isosurfaces: 3D contours
      potential function: builds surfaces by modeling lots of little blobs together; the blobs are spheres when far apart, and they bleed into each other when close together
    constructive solid geometry (CSG): objects are created by combining simple 3D solids (spheres, boxes, cones, and cylinders) using only 3 Boolean operations: OR (or union), AND (or intersection), NOT (for minus), and XOR (see the sketch at the end of this section)
    space subdivision: divide space into voxels (or volume elements); the number of voxels goes up with the cube of the resolution, but do not subdivide all the space at the same resolution if it is homogenous, i.e., you don't need detailed information in any region of space that contains the same thing throughout
      octrees: always broken in the middle along each of the 3 major axes, and all voxels are always rectangular solids; used for storing 3D volumetric data
        o quadtrees are 2D variations of octrees
      binary space partition (BSP) trees: if a block of space isn't homogenous, it's broken into two blocks, and this process continues until either the blocks are homogenous or a resolution limit is reached; in regular BSP trees, all blocks are rectangular solids; blocks can be cut in two by any arbitrary plane, which requires less subdivision, but is more difficult to describe and the resulting blocks are more difficult to manipulate
    procedure models: described by a method, or procedure, for producing primitives when necessary instead of the primitives themselves; more compact than an explicit list of primitives; when the required level of detail isn't known when the model is created, the procedure can be performed later as necessary to achieve the desired level of detail
      fractals: functions that have infinite detail, e.g., the Mandelbrot set function
      graftals: objects that are "grown" according to a strict recipe
      particle systems: the object is drawn as the tracks of large numbers of individual particles; each particle has a start location and a trajectory
  modeling issues
    level of detail: trade-off between realistic pictures and computing resources/storage
    suitability: model choices
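As a rough illustration of implicit surfaces and CSG point membership (a sketch, not from the notes; all names are assumptions), two implicit "inside" tests combined with the Boolean operations listed above:

/* Minimal sketch (assumed helper names): implicit "inside" tests for two
 * simple solids, combined with CSG Boolean operations. */
#include <stdio.h>
#include <stdbool.h>

typedef struct { double x, y, z; } Vec3;

/* Implicit test: a sphere answers which side of its surface a point is on. */
static bool inside_sphere(Vec3 p, Vec3 c, double r) {
    double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
    return dx*dx + dy*dy + dz*dz <= r*r;
}

/* Axis-aligned box. */
static bool inside_box(Vec3 p, Vec3 lo, Vec3 hi) {
    return p.x >= lo.x && p.x <= hi.x &&
           p.y >= lo.y && p.y <= hi.y &&
           p.z >= lo.z && p.z <= hi.z;
}

int main(void) {
    Vec3 p = {0.9, 0.0, 0.0};
    bool s = inside_sphere(p, (Vec3){0, 0, 0}, 1.0);
    bool b = inside_box(p, (Vec3){-0.5, -0.5, -0.5}, (Vec3){0.5, 0.5, 0.5});

    printf("union (OR):         %d\n", s || b);   /* sphere OR box    */
    printf("intersection (AND): %d\n", s && b);   /* sphere AND box   */
    printf("difference (NOT):   %d\n", s && !b);  /* sphere minus box */
    return 0;
}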

Lighting
  a scene includes the objects and all the other stuff the computer needs to actually make a picture
  light: our brains get many subtle shape cues from how the brightness of an object varies across its surface; shape perception works because light doesn't come from all directions equally (otherwise, whiteout or disorienting effects make features difficult to distinguish, e.g., in cloudy snow)
    directional light: specified with only a direction to the light and an intensity, which apply everywhere in the scene, e.g., sunlight
    point light: light intensity falls off with the square of the distance from the light source, but in computer graphics it often just falls off with the distance
    spotlights: a light with a shade or reflector around it so that it shines only in a cone
    ambient light: light is scattered about by bouncing off other objects; most rendering methods don't handle scattered light (except radiosity), so we pretend there is a constant low-level illumination everywhere to prevent areas from appearing completely dark (not for shape perception)

Camera or Viewing
  viewing
    eye point (view point or camera point): the point to look from
    view direction or gaze direction: the direction to look in
    lookat point: a point that is to appear in the center of the picture
  projections
    perspective projection: distant objects are drawn smaller than near ones; all objects get projected onto the image along a line from the eye point; our brains naturally interpret depth from the perspective projection scaling
    flat or orthographic projection: objects are projected flat onto the image without any change in size (used when preserving dimensions is important); it is the same as perspective projection with the eye point infinitely far back

Calculate the light-object interaction
  local or direct reflection model: only considers the interaction of an object with a light source
    Phong: usually ends up with an object reflecting more light than it receives; can't do shadows, which have to be calculated by a separate 'add-on' algorithm or texture mapping
  global reflection model: considers light reflected from one object and evaluates the interaction between objects (specular interaction, diffuse interaction, shadows)
    ray tracing: attends to perfect specular reflection, sharp refraction
    radiosity: models diffuse interaction (common in man-made interiors), softly-lit

Rendering: converting a scene into a 2D grid of pixels (image)
  rendering methods:
    wire-frame rendering: only the edges are drawn, just like a stick-figure drawing; faster (because lines are drawn in hardware), so useful for quick previews (or for mechanical drawings that need wire-frame only); old calligraphic controllers could only draw lines on the screen and didn't have bitmaps or pixels
    Z-buffer rendering: only the unoccluded surfaces are drawn, by maintaining an additional value (the Z-value) in each bitmap pixel beyond the color; before a new value is written into a pixel, the pixel's Z-value is compared to the new Z-value; if the new Z-value represents a closer distance to the eye point, the new value is written into the pixel; otherwise, the pixel is left alone (see the Z-buffer sketch below)
    ray tracing:
      in Z-buffering, we render one object at a time until we run out of objects, and we might access any of the bitmap pixels for each object; in ray tracing, we render one pixel at a time until we run out of pixels, and we might access any of the objects for each bitmap pixel; also, ray-traced images can have shadows (which help humans to know the relative position of objects) and reflections
      ray casting or non-recursive ray tracing: for each pixel, a ray is shot from the eye point in the view direction for that pixel; the pixel is then set to the color of whatever the ray bumps into (see the ray-casting sketch below)
      recursive ray tracing: in theory, the original ray (the eye ray) and additional rays (launched when a reflective or transparent object is hit) might keep launching more rays indefinitely; in practice, new rays are not launched when their importance to the original pixel color is small enough to ignore, or when an arbitrary recursion depth is reached
      speed issues:
        o too slow, not good for real-time situations
        o too complicated, not commonly available in hardware
        o it does its work per pixel instead of per object, but pixels greatly outnumber objects for most of today's scenes (good for Z-buffer rendering though)
        o for a fixed-size bitmap, ray tracing may actually be faster for images of very complex scenes
      speedup techniques:
        o object hierarchy ray tracing: clump objects into groups, then clump groups into bigger groups (the bounding volumes); instead of checking a ray against every object in the group, the ray is checked against the bounding volume first; if it doesn't intersect the bounding volume, there is no need to go any further with any object or subgroup
        o space subdivision ray tracing: the scene space is subdivided into little regions, and each region contains a list of the objects found within that region; rays are traced from their start points through each of the regions in their path, and intersection checks are performed only on objects listed in those regions; most schemes in use are adaptive, i.e., they subdivide more finely in areas that require more detail; an adaptive octree is a good choice
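A minimal sketch of the per-pixel Z-buffer test just described (not from the notes; buffer names, sizes, and the smaller-Z-is-closer convention are assumptions):

/* Minimal sketch (assumed names/sizes): write a new color into a pixel
 * only when its new Z value is closer to the eye point than the stored one. */
#include <stdio.h>
#include <stdint.h>

#define W 640
#define H 480

static uint32_t color_buf[W * H];  /* packed RGB per pixel  */
static float    z_buf[W * H];      /* one Z value per pixel */

/* Here a smaller Z means closer to the eye point (a convention assumed here). */
static void write_pixel(int x, int y, uint32_t color, float z) {
    int i = y * W + x;
    if (z < z_buf[i]) {            /* new value is closer: keep it       */
        z_buf[i]     = z;
        color_buf[i] = color;
    }                              /* otherwise the pixel is left alone  */
}

int main(void) {
    for (int i = 0; i < W * H; i++) z_buf[i] = 1e30f;   /* clear to "far"            */
    write_pixel(10, 10, 0xFF0000u, 5.0f);               /* red at depth 5            */
    write_pixel(10, 10, 0x00FF00u, 9.0f);               /* green is farther: ignored */
    printf("pixel(10,10) = %06X\n", (unsigned)color_buf[10 * W + 10]);
    return 0;
}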

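And a minimal ray-casting sketch (again an assumption-laden illustration, not the notes' code): one ray per pixel from the eye point, with a single sphere as the only object; pixels whose ray hits the sphere take the object's "color":

/* Minimal sketch (assumed scene and names): non-recursive ray casting
 * against one sphere, printed as a tiny text "image". */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Does the ray origin + t*dir hit a sphere (center c, radius r) for t > 0? */
static int hit_sphere(Vec3 origin, Vec3 dir, Vec3 c, double r) {
    Vec3 oc = sub(origin, c);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double k = dot(oc, oc) - r*r;
    double disc = b*b - 4*a*k;
    return disc >= 0 && (-b + sqrt(disc)) / (2*a) > 0;
}

int main(void) {                          /* 20x10 "image" printed as text */
    Vec3 eye = {0, 0, 0}, center = {0, 0, -3};
    for (int y = 0; y < 10; y++) {
        for (int x = 0; x < 20; x++) {
            /* view direction for this pixel (a crude pinhole camera) */
            Vec3 dir = { (x - 10) / 10.0, (5 - y) / 5.0, -1.0 };
            putchar(hit_sphere(eye, dir, center, 1.0) ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}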
    radiosity:
      because ray tracing follows thin shafts of light (the rays), it deals well with light coming from point sources; radiosity excels at diffuse reflection or secondary scatter
      in radiosity rendering, as in Z-buffer rendering, the scene is decomposed into a pile of polygons, and those polygons will ultimately be rendered using a Z buffer
      the radiosity algorithm first computes what portion of the light from each polygon reaches each of the others; this form factor (i.e., "coupling factor") is determined between every two polygons in the scene; then the final visible colors of each polygon are set by first setting the illumination onto each polygon to 0 (black), so the only polygons that wouldn't appear black are the light sources; next, the light from the bright polygons causes other polygons to be lit, and this, in turn, shines on other polygons; this process of lighting up polygons continues until things settle down; finally, the polygons are drawn using a Z-buffer renderer
  shading: the process of eventually coming up with pixel color values using RGB color space
    direct evaluation (figuring it out the hard way): the Phong lighting model
      knowing which way the surface is facing (the N vector), where the light is coming from (the L vector), and where we are looking from (the E vector); from N and L, we can compute the reflection (the R vector) (see the Phong sketch at the end of this section)
      shading normal: the normal vector used for the purpose of computing the apparent color
      geometric normal: the normal vector of the polygon we are drawing
      facet shading: the geometric normal is also used as the shading normal; in this shading, each patch is planar, and one shading normal applies to an entire patch
      smooth shading: the original surface's normal vectors at each vertex are used as the shading normals; the vertex color values are smoothly blended in the interior of each polygon; the edge still shows the individual polygon because the silhouette edge depends only on the as-drawn geometry, not the shading normal vector
      visual properties or surface properties
        o emissive color: the color the object appears in the dark, as if it were emitting light; it is not used often
        o diffuse color: as light hits a spot on an object, the diffuse color is reflected equally in all directions; it is the matte color (not shiny)
        o specular color: the color is reflected mostly around the reflection vector; it is the shiny highlight; the diffuse and specular surface properties are most often used together
          specular exponent: controls how tightly the specular reflection is bunched around the reflection vector; a value of 0 is diffuse color, and a value of infinity is pure mirror reflection; the specular color contribution is proportional to the dot product of the E and R vectors raised to the power of the specular exponent
    interpolation (fudging it for most of the pixels): compute the real object color only at the vertices of each polygon, then fill in the interior pixels based on the colors at the vertices
      flat shading: the color values from the polygon vertices are averaged, and all the polygon's pixels are set to this fixed (flat) color value; very simple and even supported by low-end 2D display controllers, but the individual polygons are quite visible in the resulting picture
      Gouraud or linear or bilinear (bi means 2D) shading: linear means the rate of change is held constant; linear shading is the most common shading technique used with Z-buffer rendering
      Phong shading: instead of interpolating the color value for each pixel in a primitive, the shading normal vector is interpolated, then a separate apparent color computation is performed for each pixel; it has better quality results than bilinear interpolation
      whenever there is a sudden change in the color's rate of change, an artifact, Mach bands, is observed; the common solution is to use more and smaller polygons in modeling
  compositing: composite or blend different images into one (frequently done in moviemaking and television production)
    overlay where not zero (the bargain-basement compositing method): image B is simply written over image A wherever image B doesn't have all-0 pixel values
    alpha-buffered rendering (see the compositing sketch at the end of this section)
      alpha (or transparency) value: opacity fraction, 0 means fully transparent (invisible), and the maximum value (255) means fully opaque
      alpha buffer: the alpha values for all the pixels in an image or bitmap
      alpha-buffered rendering: overlay a new image onto an existing one; only the overlay image has alpha values, and the starting and resulting images are fully opaque; or all 3 images have alpha values
    chroma keying: the resulting image is taken from the overlay wherever the overlay is (or isn't) a particular hue
      blue screen: heavily used in television weather reports; blue was chosen because it's the hue most opposite to skin color, and the reporter isn't wearing anything blue
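A minimal C sketch of the direct-evaluation (Phong lighting model) computation above, for one intensity channel (not the notes' code; the coefficient names and values are assumptions): ambient plus diffuse (N.L) plus specular ((E.R) raised to the specular exponent).

/* Minimal sketch (assumed names/coefficients): Phong apparent color at
 * one surface point, single gray channel; a real shader does this per RGB. */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 scale(Vec3 a, double s) { return (Vec3){a.x*s, a.y*s, a.z*s}; }
static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 normalize(Vec3 a) { return scale(a, 1.0 / sqrt(dot(a, a))); }

static double phong(Vec3 N, Vec3 L, Vec3 E,
                    double ka, double kd, double ks, double shininess) {
    double ndotl = dot(N, L);
    if (ndotl <= 0.0) return ka;                    /* light is behind the surface */
    Vec3   R     = sub(scale(N, 2.0 * ndotl), L);   /* reflection of L about N     */
    double edotr = fmax(0.0, dot(E, R));
    return ka + kd * ndotl + ks * pow(edotr, shininess);
}

int main(void) {
    Vec3 N = normalize((Vec3){0, 0, 1});            /* surface faces the viewer */
    Vec3 L = normalize((Vec3){1, 1, 1});            /* light off to one side    */
    Vec3 E = normalize((Vec3){0, 0, 1});            /* looking straight down Z  */
    printf("apparent intensity = %f\n", phong(N, L, E, 0.1, 0.6, 0.3, 32.0));
    return 0;
}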

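And a minimal alpha compositing sketch (names made up): one overlay pixel blended over an opaque background pixel, with 8-bit alpha where 0 is fully transparent and 255 is fully opaque, as described above.

/* Minimal sketch (assumed names): "overlay over background" for one pixel. */
#include <stdio.h>
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } RGBA;

static RGBA over(RGBA overlay, RGBA background) {
    double t = overlay.a / 255.0;          /* opacity fraction of the overlay     */
    RGBA out;
    out.r = (uint8_t)(overlay.r * t + background.r * (1.0 - t) + 0.5);
    out.g = (uint8_t)(overlay.g * t + background.g * (1.0 - t) + 0.5);
    out.b = (uint8_t)(overlay.b * t + background.b * (1.0 - t) + 0.5);
    out.a = 255;                           /* resulting image stays fully opaque  */
    return out;
}

int main(void) {
    RGBA bg = {0, 0, 255, 255};            /* blue background              */
    RGBA fg = {255, 0, 0, 128};            /* half-transparent red overlay */
    RGBA c  = over(fg, bg);
    printf("(%d, %d, %d, %d)\n", c.r, c.g, c.b, c.a);
    return 0;
}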
  antialiasing:
    aliasing (the jaggies): smooth lines in some digital images are jagged and look more like stair steps (individual pixels)
    antialiasing: to avoid or reduce aliasing by setting pixels near the edge to some intermediate value
      just ignore it, because antialiasing requires lots of computation and is rarely supported in hardware
      a simple way - sub-pixels: start with a description of the image at a higher resolution than the final image by using multiple sub-pixels for each final pixel (a filtering idea from DSP)
        o box filtering: average all the sub-pixels within a final pixel; reasonably fast, but quality is medium
      a better way - convolution: the sub-pixels need to be weighted (by a filter kernel or convolution kernel) and summed to produce each output pixel; more computation but better quality
    temporal aliasing or strobing (aliasing in time): each frame of an animation is one fixed image throughout, so any motion in the animation is really a sequence of frozen steps; the fix is to have each animation frame represent a small range of time, so fast-moving objects would then appear blurred
  texture mapping: replacing a parameter of the apparent color calculation with an external value; the user defines texture coordinates across the object during modeling; the texture coordinates are interpolated across each polygon instead of the final color values; texture mapping adds significant visual detail to the image without the need for modeling complex geometry
    texels: the pixels in the texture map
    other forms of texture mapping
      bump mapping: texture-mapping small perturbations of the shading normal vector; it can make an object appear to be rougher than it really is geometrically
      3D texture or solid texture: the texture is a function instead of an explicit table of values (like an image)
  dithering: a technique for trading off spatial resolution to get more color (or gray-scale) resolution; the apparent color or gray-scale resolution is increased at the expense of adding a dither pattern, or graininess, to the image; the noticeable graininess in the image is a disadvantage of dithering; the dither pattern can be a tile of any size or random patterns
    a black-and-white dithering example: treat each clump of 2x2 pixels as a unit; set 0, 1, 2, 3, or 4 of the pixels to white and the rest to black; when viewed as an aggregate, each clump now can have five different gray levels

Animation
  images are animated by showing many of them in rapid succession to give the illusion of a moving image; frame rate: NTSC video standard 30 frames/sec (due to interlacing, it actually shows a new half-image every 1/60 second), PAL or SECAM video standard 25 frames/sec, movies 24 frames/sec (to decrease flicker, each frame is shown twice)
  key frames (WYSIWYG method): the positions and other parameters of scene elements are exactly defined for certain key frames; the parameters are then interpolated to obtain values for the in-between frames (may not result in the exact motion you had in mind)
    linear interpolation: changes each parameter by a fixed step each frame; produces very jerky motion that seems to suddenly start, stop, and change direction (see the sketch after this section)
    cubic interpolation: avoids sudden starts, stops, or changes in direction
  parametric (rule-based method): provide rules (based on input parameters, particularly the animation time value) for how to determine each animated value at any point in time
  inverse kinematics and dynamics (physics Knurd method): by giving constraints (rules about how the system functions), the computer actually calculates some of the motion automatically; inverse means that the computer is figuring out what the objects must have done to achieve an end result you specify
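A minimal sketch of key-frame linear interpolation (parameter names and values are made up; cubic interpolation would instead fit a smooth curve through neighboring keys):

/* Minimal sketch (assumed names): in-between values from two key frames. */
#include <stdio.h>

/* Linearly interpolate a parameter between the values at two key frames. */
static double lerp(double key0, double key1, double t) {   /* t in [0,1] */
    return key0 + (key1 - key0) * t;
}

int main(void) {
    double x_key0 = 10.0;                 /* object x at key frame 0       */
    double x_key1 = 50.0;                 /* object x at key frame 1       */
    int frames_between = 4;               /* in-between frames to generate */

    for (int f = 0; f <= frames_between + 1; f++) {
        double t = (double)f / (frames_between + 1);
        printf("frame %d: x = %.1f\n", f, lerp(x_key0, x_key1, t));
    }
    return 0;
}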

Saving the Pixels and Image File Issues
  bitmaps are written to image files; standard image file formats make it easier to store, transmit, and exchange digital images
  common image file strategies
    order: all the pixels for one horizontal scan line are written before going on to the next scan line (scan-line or top-down order), and in left-to-right order within a scan line
    color: either palette (pseudo) color or true color
    compression: since the color value of most pixels is close to the color values of adjacent pixels, most images have a great deal of redundant information, which makes them good candidates for compression algorithms
      lossless compression: preserves all image data; you can compress and uncompress an image as many times as you want and still end up with the same original pixel values
        runlength compression/coding: reduces storage for successive pixels that have exactly the same value; works reasonably well on computer-generated images, especially ones that have large areas of background with the same value horizontally; doesn't work well on scanned images because they often have slight differences between adjacent pixels
        LZW (Lempel-Ziv and Welch) compression: identifies and compresses repeating patterns in the data; the patterns grow each time they are referenced; LZW works well on a variety of images, including scanned and dithered images
      lossy compression: loses some of the data (the part least likely to be noticed by the human visual system) during compression and thus achieves higher compression ratios; the color information is blurred more (yielding higher compression) than the intensity information; the higher the compression ratio, the more compression artifacts become visible in the image
        joint photographic experts group (JPEG) for still digital color images: works on individual images to compress redundant information from pixel to pixel
        motion picture experts group (MPEG) for motion pictures: compresses redundant information from frame to frame; humans are more tolerant of some kinds of artifacts in animated images than in still images; higher compression ratios than JPEG
  image file formats
    tag image file format (TIFF) for importing images into applications
    graphics interchange format (GIF) for downloading images via a modem
    portable pixel map (PPM) is the only format OpenGL supports (see the sketch below)
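A minimal sketch of saving a bitmap as a binary PPM file (the gradient test image is made up for illustration): the P6 header carries the magic number, width, height, and maximum component value, followed by raw RGB bytes in scan-line, left-to-right order.

/* Minimal sketch: write a true-color bitmap to out.ppm in binary PPM (P6). */
#include <stdio.h>

#define W 256
#define H 128

int main(void) {
    FILE *fp = fopen("out.ppm", "wb");
    if (!fp) return 1;

    fprintf(fp, "P6\n%d %d\n255\n", W, H);      /* PPM header: magic, size, max value */
    for (int y = 0; y < H; y++) {               /* top-down scan-line order           */
        for (int x = 0; x < W; x++) {           /* left-to-right within a scan line   */
            unsigned char rgb[3] = { (unsigned char)x, (unsigned char)(2 * y), 64 };
            fwrite(rgb, 1, 3, fp);
        }
    }
    fclose(fp);
    return 0;
}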

Conference
  ACM/SIGGRAPH (the Association for Computing Machinery's Special Interest Group on Graphics) http://siggraph.org
  IEEE Computer Graphics and Applications
  Visual Computer