Introduction to Graphics
Computer Science Tripos Part 1A/1B Michaelmas Term 2017/2018
Department of Computer Science and Technology The Computer Laboratory
William Gates Building 15 JJ Thomson Avenue Cambridge CB3 0FD
www.cst.cam.ac.uk
This handout includes copies of the slides that will be used in lectures. These notes do not constitute a complete transcript of all the lectures and they are not a substitute for text books. They are intended to give a reasonable synopsis of the subjects discussed, but they give neither complete descriptions nor all the background material.
Material is copyright © Neil A Dodgson, Peter Robinson & Rafał Mantiuk, 1996-2017, except where otherwise noted.
All other copyright material is made available under the University’s licence. All rights reserved.
Introduction to Computer Graphics Peter Robinson & Rafał Mantiuk www.cl.cam.ac.uk/~pr & ~rkm38
Eight lectures & two practical tasks for Part IA CST Two supervisions suggested Two exam questions on Paper 3 2 What are Computer Graphics & Image Processing?
[Diagram: computer graphics maps a scene description to a digital image; image analysis & computer vision map a digital image back to a scene description; image capture and image display connect real images to digital images; image processing maps a digital image to a digital image]
Image processing 3 Why bother with CG & IP? All visual computer output depends on CG printed output (laser/ink jet/phototypesetter) monitor (CRT/LCD/plasma/DMD) all visual computer output consists of real images generated by the computer from some internal digital image Much other visual imagery depends on CG & IP TV & movie special effects & post-production most books, magazines, catalogues, brochures, junk mail, newspapers, packaging, posters, flyers 4 Course Structure Background What is an image? Human vision. Resolution and quantisation. Storage of images in memory. [1 lecture] Rendering Perspective. Reflection of light from surfaces and shading. Geometric models. Ray tracing. [3 lectures] Graphics pipeline Polygonal mesh models. Transformations using matrices in 2D and 3D. Homogeneous coordinates. Projection: orthographic and perspective. [1 lecture] Graphics hardware and modern OpenGL Vertex processing. Rasterisation. Fragment processing. Working with meshes and textures. [2 lectures] Technology Colour spaces. Output devices: brief overview of display and printer technologies. [1 lecture] 5 Course books Fundamentals of Computer Graphics Shirley & Marschner CRC Press 2015 (4th edition) Computer Graphics: Principles & Practice Hughes, van Dam, McGuire, Sklar et al. Addison-Wesley 2013 (3rd edition) OpenGL Programming Guide: The Official Guide to Learning OpenGL Version 4.5 with SPIR-V Kessenich, Sellers & Shreiner Addison Wesley 2016 (7th edition and later) 6 Computer Graphics & Image Processing
Background What is an image? Human vision Resolution and quantisation Storage of images in memory Rendering Graphics pipeline Graphics hardware and modern OpenGL Colour 7 What is required for vision? illumination some source of light objects which reflect (or transmit) the light eyes to capture the light as an image
direct viewing transmission reflection 8 The spectrum visible light is only a tiny part of the whole electromagnetic spectrum
the short wavelength end of the spectrum is violet; the long wavelength end of the spectrum is red
violet blue green yellow red 9 What is an image? two dimensional function value at any point is an intensity or colour not digital! 10 The workings of the human visual system to understand the requirements of displays (resolution, quantisation and colour) we need to know how the human eye works...
The lens of the eye forms an image of the world on the retina: the back surface of the eye 11 Structure of the human eye
the retina is an array of light detection cells the fovea is the high resolution area of the retina the optic nerve takes signals from the retina to the visual cortex in the brain
Fig. 2.1 from Gonzalez & Woods 12 Light detectors in the retina two classes rods cones cones come in three types sensitive to short, medium and long wavelengths allow you to see in colour the cones are concentrated in the macula, at the centre of the retina the fovea is a densely packed region in the centre of the macula contains the highest density of cones provides the highest resolution vision 13 Colour signals sent to the brain
the signal that is sent to the brain is pre-processed by the retina long + medium + short = luminance
long – medium = red-green
long + medium – short = yellow-blue
this theory explains: colour-blindness effects why red, yellow, green and blue are perceptually important colours why you can see e.g. a yellowish red but not a greenish red 14 Mixing coloured lights by mixing different amounts of red, green, and blue lights we can generate a wide
range of responses in the human eye
[Figure: overlapping red, green and blue lights, each varying from off to fully on]
not all colours can be created in this way 15 What is a digital image? a contradiction in terms if you can see it, it’s not digital if it’s digital, it’s just a collection of numbers a sampled and quantised version of a real image a rectangular array of intensity or colour values 16 Sampling a digital image is a rectangular array of intensity values each value is called a pixel “picture element” sampling resolution is normally measured in pixels per inch (ppi) or dots per inch (dpi) computer monitors have a resolution around 100 ppi laser and ink jet printers have resolutions between 300 and 1200 ppi typesetters have resolutions between 1000 and 3000 ppi 17 Image capture a variety of devices can be used scanners line CCD (charge coupled device) in a flatbed scanner spot detector in a drum scanner cameras area CCD CMOS camera chips [Images: Heidelberg drum scanner, area CCD, flatbed scanner]
The image of the Heidelberg drum scanner and many other images in this section come from “Handbook of Print Media”, by Helmutt Kipphan, Springer-Verlag, 2001 18 Image capture example
[Figure: a photograph alongside a rectangular array of its digital RGB intensity values] A real image A digital image 19 Sampling resolution 256×256 128×128 64×64 32×32
2×2 4×4 8×8 16×16 20 Quantisation each intensity value is a number for digital storage the intensity values must be quantised limits the number of different intensities that can be stored limits the brightest intensity that can be stored how many intensity levels are needed for human consumption 8 bits often sufficient some applications use 10 or 12 or 16 bits more detail later in the course colour is stored as a set of numbers usually as 3 numbers of 5–16 bits each more detail later in the course 21 Quantisation levels 8 bits 7 bits 6 bits 5 bits (256 levels) (128 levels) (64 levels) (32 levels)
1 bit 2 bits 3 bits 4 bits (2 levels) (4 levels) (8 levels) (16 levels) 22 Storing images in memory 8 bits became a de facto standard for greyscale images 8 bits = 1 byte 16 bits is now being used more widely, 16 bits = 2 bytes an 8 bit image of size W × H can be stored in a block of W × H bytes one way to do this is to store pixel[x][y] at memory location base + x + W × y memory is 1D, images are 2D [Figure: a 5-pixel-wide image; pixel (1, 2) is stored at base + 1 + 5×2] 23 Colour images
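The row-major pixel layout just described (pixel[x][y] at base + x + W × y) can be sketched in a few lines of Python; the function name is illustrative, not part of the course material:

```python
# Row-major addressing for an 8-bit greyscale image, as on the slide:
# pixel (x, y) of a W-pixel-wide image lives at offset x + W*y from base.

def pixel_offset(x, y, width):
    """Offset of pixel (x, y) from the image's base address."""
    return x + width * y

# The slide's example: in a 5-pixel-wide image,
# pixel (1, 2) is stored at base + 1 + 5*2 = base + 11
assert pixel_offset(1, 2, 5) == 11
```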
tend to be 24 bits per pixel 3 bytes: one red, one green, one blue increasing use of 48 bits per pixel, 2 bytes per colour plane can be stored as a contiguous block of memory of size W × H × 3 more common to store each colour in a separate “plane” each plane contains just W × H values the idea of planes can be extended to other attributes associated with each pixel alpha plane (transparency), z-buffer (depth value), A-buffer (pointer to a data structure containing depth and coverage information), overlay planes (e.g. for displaying pop-up menus) — see later in the course for details 24 The frame buffer most computers have a special piece of memory reserved for storage of the current image being displayed
[Diagram: bus → frame buffer → output stage (e.g. DAC) → display]
the frame buffer normally consists of dual-ported Dynamic RAM (DRAM) sometimes referred to as Video RAM (VRAM) 25 Computer Graphics & Image Processing Background Rendering Perspective Reflection of light from surfaces and shading Geometric models Ray tracing Graphics pipeline Graphics hardware and modern OpenGL Technology 26 Depth cues 27 Rendering depth 28 Perspective in photographs
Gates Building – the rounded version (Stanford); Gates Building – the rectilinear version (Cambridge) 29 Early perspective Presentation at the Temple Ambrogio Lorenzetti 1342 Uffizi Gallery Florence 30 Wrong perspective Adoring saints Lorenzo Monaco 1407-09 National Gallery London 31 Renaissance perspective Geometrical perspective Filippo Brunelleschi 1413 Holy Trinity fresco Masaccio (Tommaso di Ser Giovanni di Simone) 1425 Santa Maria Novella Florence De pictura (On painting) textbook by Leon Battista Alberti 1435 32 More perspective The Annunciation with Saint Emidius Carlo Crivelli 1486 National Gallery London 33 False perspective 34 Calculating perspective 35 Ray tracing Identify point on surface and calculate illumination Given a set of 3D objects, shoot a ray from the eye through the centre of every pixel and see what surfaces it hits
shoot a ray through each pixel whatever the ray hits determines the colour of that pixel Ray tracing: examples 36
ray tracing easily handles reflection, refraction, shadows and blur ray tracing is computationally expensive 37 Ray tracing algorithm select an eye point and a screen plane
FOR every pixel in the screen plane
    determine the ray from the eye through the pixel’s centre
    FOR each object in the scene
        IF the object is intersected by the ray
            IF the intersection is the closest (so far) to the eye
                record intersection point and object
            END IF ;
        END IF ;
    END FOR ;
    set pixel’s colour to that of the object at the closest intersection point
END FOR ;
38 Intersection of a ray with an object 1
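The pseudocode above translates almost line for line into Python. This is a sketch only: ray_through, intersect and colour_at are assumed helper callbacks, not part of the original notes.

```python
def trace_image(eye, pixels, objects, ray_through, intersect, colour_at):
    """Sketch of the ray-casting loop above. The three callbacks are
    assumed helpers: ray_through(eye, pixel) builds a ray,
    intersect(obj, ray) returns the distance s along the ray or None,
    colour_at(obj, s) shades the closest hit (obj is None for a miss)."""
    image = {}
    for pixel in pixels:
        ray = ray_through(eye, pixel)
        closest_s, closest_obj = float('inf'), None
        for obj in objects:
            s = intersect(obj, ray)
            if s is not None and 0 <= s < closest_s:   # closest hit so far
                closest_s, closest_obj = s, obj
        image[pixel] = colour_at(closest_obj, closest_s)
    return image
```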
plane
ray: P = O + sD, s ≥ 0
plane: P·N + d = 0
s = −(d + N·O) / (N·D)
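Under the same definitions (ray P = O + sD, plane P·N + d = 0), the intersection test can be sketched as follows; ray_plane is an illustrative name, not from the notes:

```python
def ray_plane(O, D, N, d, eps=1e-9):
    """Intersect ray P = O + sD (s >= 0) with plane P.N + d = 0.
    Returns s, or None if the ray is parallel to the plane or the
    intersection lies behind the ray's origin."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(N, D)
    if abs(denom) < eps:
        return None                      # ray parallel to the plane
    s = -(d + dot(N, O)) / denom
    return s if s >= 0 else None

# Ray from the origin along +z hits the plane z = 5 (N = (0,0,1), d = -5) at s = 5
assert ray_plane((0, 0, 0), (0, 0, 1), (0, 0, 1), -5) == 5.0
```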
polygon or disc: intersect the ray with the plane of the polygon as above, then check whether the intersection point lies inside the polygon, a 2D geometry problem (which is simple for a disc) 39 Intersection of a ray with an object 2
sphere
ray: P = O + sD, s ≥ 0
sphere: (P − C)·(P − C) − r² = 0
substituting the ray into the sphere equation gives a quadratic as² + bs + c = 0 with
a = D·D
b = 2 D·(O − C)
c = (O − C)·(O − C) − r²
d = b² − 4ac
s₁ = (−b + √d) / 2a, s₂ = (−b − √d) / 2a
√d real: the ray intersects the sphere; √d imaginary (d < 0): it misses
cylinder, cone, torus: all similar to the sphere, try them as an exercise
40 Ray tracing: shading
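The quadratic above can be sketched directly; ray_sphere is an illustrative name:

```python
import math

def ray_sphere(O, D, C, r):
    """Intersect ray P = O + sD with sphere (P-C).(P-C) - r^2 = 0,
    via the quadratic a s^2 + b s + c = 0 from the slide.
    Returns (s1, s2) or None when the discriminant is negative (a miss)."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    OC = [o - c for o, c in zip(O, C)]
    a = dot(D, D)
    b = 2 * dot(D, OC)
    c = dot(OC, OC) - r * r
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # sqrt(d) imaginary: ray misses
    s1 = (-b - math.sqrt(disc)) / (2 * a)
    s2 = (-b + math.sqrt(disc)) / (2 * a)
    return s1, s2

# Ray along +z from the origin, unit sphere centred at (0,0,3): hits at s = 2 and 4
assert ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 3), 1) == (2.0, 4.0)
```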
once you have the intersection of a ray with the nearest object you can also: calculate the normal to the object at that intersection point; shoot rays from that point to all of the light sources, and calculate the diffuse and specular reflections off the object at that point; this (plus ambient illumination) gives the colour of the object (at that point) 41 Ray tracing: shadows
because you are tracing rays from the intersection point to the light, you can check whether another object is between the intersection and the light and is hence casting a shadow; also need to watch for self-shadowing 42 Ray tracing: reflection
if a surface is totally or partially reflective then new rays can be spawned to find the contribution to the pixel’s colour given by the reflection
this is perfect (mirror) reflection
43 Ray tracing: transparency & refraction
objects can be totally or partially transparent; this allows objects behind the current one to be seen through it; transparent objects can have refractive indices, bending the rays as they pass through the objects; transparency + reflection means that a ray can split into two parts 44 Illumination and shading
Dürer’s method allows us to calculate what part of the scene is visible in any pixel But what colour should it be? Depends on: lighting shadows properties of surface material 45 How do surfaces reflect light?
perfect specular reflection (mirror), imperfect specular reflection, diffuse reflection (Lambertian reflection)
the surface of a specular reflector is facetted, each facet reflects perfectly but in a slightly different direction to the other facets
Johann Lambert, 18th century German mathematician 46 Comments on reflection
the surface can absorb some wavelengths of light e.g. shiny gold or shiny copper
specular reflection has “interesting” properties at glancing angles owing to occlusion of micro-facets by one another
plastics are good examples of surfaces with: specular reflection in the light’s colour diffuse reflection in the plastic’s colour 47 Calculating the shading of a surface gross assumptions: there is only diffuse (Lambertian) reflection all light falling on a surface comes directly from a light source there is no interaction between objects no object casts shadows on any other so can treat each surface as if it were the only object in the scene light sources are considered to be infinitely distant from the object the vector to the light is the same across the whole surface observation: the colour of a flat surface will be uniform across it, dependent only on the colour & position of the object and the colour & position of the light sources 48 Diffuse shading calculation
L is a normalised vector pointing in the direction of the light source
N is the normal to the surface
I_l is the intensity of the light source
k_d is the proportion of light which is diffusely reflected by the surface
I is the intensity of the light reflected by the surface
I = I_l k_d cos θ = I_l k_d (N·L)
use this equation to calculate the colour of a pixel 49 Diffuse shading: comments
can have different I_l and different k_d for different wavelengths (colours) watch out for cos θ < 0 implies that the light is behind the polygon and so it cannot illuminate this side of the polygon do you use one-sided or two-sided surfaces? one sided: only the side in the direction of the normal vector can be illuminated if cos θ < 0 then both sides are black two sided: the sign of cos θ determines which side of the polygon is illuminated need to invert the sign of the intensity for the back side this is essentially a simple one-parameter (θ) BRDF 50 Specular reflection Phong developed an easy-to-calculate approximation to specular reflection L is a normalised vector pointing in the direction of the light source R is the vector of perfect reflection N is the normal to the surface V is a normalised vector pointing at the viewer I_l is the intensity of the light source k_s is the proportion of light which is specularly reflected by the surface n is Phong’s ad hoc “roughness” coefficient I is the intensity of the specularly reflected light I = I_l k_s cosⁿ α = I_l k_s (R·V)ⁿ
[Figure: highlights becoming sharper as n increases: n = 1, 3, 7, 20, 40]
Phong Bui-Tuong, “Illumination for computer generated pictures”, CACM, 18(6), 1975, 311–7 51
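The diffuse term I_l k_d (N·L) and Phong’s specular term I_l k_s (R·V)ⁿ for a single light can be sketched as follows (a toy sketch: vectors as plain tuples, assumed normalised; names follow the slides):

```python
def shade(N, L, V, Il, kd, ks, n):
    """Diffuse (Lambertian) plus Phong specular contribution of one light.
    N: surface normal, L: direction to light, V: direction to viewer."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    NL = dot(N, L)
    if NL <= 0.0:
        return 0.0                       # cos theta < 0: light behind surface
    # perfect-reflection vector R = 2(N.L)N - L
    R = [2 * NL * nc - lc for nc, lc in zip(N, L)]
    RV = max(dot(R, V), 0.0)
    return Il * kd * NL + Il * ks * RV ** n
```

With light and viewer both along the normal, the result is simply I_l (k_d + k_s).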
Examples [Figure: a grid of spheres with specular reflection varying from 100% down to 0% on one axis and diffuse reflection varying from 100% down to 0% on the other] 52 Shading: overall equation
the overall shading equation can thus be considered to be the ambient illumination plus the diffuse and specular reflections from each light source
I = I_a k_a + Σ_i I_i ( k_d (L_i·N) + k_s (R_i·V)ⁿ )
the more lights there are in the scene, the longer this calculation will take 53 The gross assumptions revisited
diffuse reflection approximate specular reflection no shadows need to do ray tracing or shadow mapping to get shadows lights at infinity can add local lights at the expense of more calculation need to interpolate the L vector no interaction between surfaces cheat! assume that all light reflected off all other surfaces onto a given surface can be amalgamated into a single constant term: “ambient illumination”, add this onto the diffuse and specular illumination 54 Sampling
we have assumed so far that each ray passes through the centre of a pixel i.e. the value for each pixel is the colour of the object which happens to lie exactly under the centre of the pixel this leads to: stair step (jagged) edges to objects small objects being missed completely thin objects being missed completely or split into small pieces 55 Anti-aliasing
these artefacts (and others) are jointly known as aliasing methods of ameliorating the effects of aliasing are known as anti-aliasing
in signal processing aliasing is a precisely defined technical term for a particular kind of artefact in computer graphics its meaning has expanded to include most undesirable effects that can occur in the image this is because the same anti-aliasing techniques which ameliorate true aliasing artefacts also ameliorate most of the other artefacts 56 Sampling in ray tracing
single point shoot a single ray through the pixel’s centre super-sampling for anti-aliasing shoot multiple rays through the pixel and average the result regular grid, random, jittered, Poisson disc adaptive super-sampling shoot a few rays through the pixel, check the variance of the resulting values, if similar enough stop, otherwise shoot some more rays 57 Types of super-sampling 1
regular grid divide the pixel into a number of sub-pixels and shoot a ray through the centre of each problem: can still lead to noticeable aliasing unless a very high resolution sub-pixel grid is used random shoot N rays at random points in the pixel replaces aliasing artefacts with noise artefacts the eye is far less sensitive to noise than to aliasing 58 Types of super-sampling 2
Poisson disc shoot N rays at random points in the pixel with the proviso that no two rays shall pass through the pixel closer than some distance ε to one another for N rays this produces a better looking image than pure random sampling very hard to implement properly
Poisson disc pure random 59 Types of super-sampling 3
jittered divide pixel into N sub-pixels and shoot one ray at a random point in each sub-pixel an approximation to Poisson disc sampling for N rays it is better than pure random sampling easy to implement
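The jittered scheme just described, one random sample per sub-pixel of an n × n grid, can be sketched as (function name illustrative):

```python
import random

def jittered_samples(n, rng=random.random):
    """One sample at a uniformly random point inside each of the n x n
    sub-pixels of a unit pixel -- the jittered scheme from the slide."""
    cell = 1.0 / n
    return [((i + rng()) * cell, (j + rng()) * cell)
            for i in range(n) for j in range(n)]
```

Because each sub-pixel gets exactly one sample, jittering keeps the stratification of the regular grid while keeping the noise-like error of random sampling.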
jittered Poisson disc pure random 60 More reasons for wanting to take multiple samples per pixel super-sampling is only one reason why we might want to take multiple samples per pixel many effects can be achieved by distributing the multiple samples over some range called distributed ray tracing N.B. distributed means distributed over a range of values can work in two ways each of the multiple rays shot through a pixel is allocated a random value from the relevant distribution(s) all effects can be achieved this way with sufficient rays per pixel each ray spawns multiple rays when it hits an object this alternative can be used, for example, for area lights 61 Examples of distributed ray tracing
distribute the samples for a pixel over the pixel area get random (or jittered) super-sampling used for anti-aliasing distribute the rays going to a light source over some area allows area light sources in addition to point and directional light sources produces soft shadows with penumbrae distribute the camera position over some area allows simulation of a camera with a finite aperture lens produces depth of field effects distribute the samples in time produces motion blur effects on any moving objects 62 Anti-aliasing
one sample per pixel multiple samples per pixel 63 Area vs point light source
an area light source produces soft shadows a point light source produces hard shadows 64 Finite aperture
left, a pinhole camera below, a finite aperture camera below left, 12 samples per pixel below right, 120 samples per pixel note the depth of field blur: only objects at the correct distance are in focus
65 Computer Graphics & Image Processing Background Rendering Graphics pipeline Polygonal mesh models Transformations using matrices in 2D and 3D Homogeneous coordinates Projection: orthographic and perspective Graphics hardware and modern OpenGL Colour 66 Unfortunately… Ray tracing is computationally expensive used by hobbyists and for super-high visual quality Video games and user interfaces need something faster So: Model surfaces as polyhedra – meshes of polygons Use composition to build scenes Apply perspective transformation and project into plane of screen Work out which surface was closest Fill pixels with colour of nearest visible polygon Modern graphics cards have hardware to support this 67 Three-dimensional objects
Polyhedral surfaces are made up from meshes of multiple connected polygons
Polygonal meshes open or closed manifold or non-manifold
Curved surfaces must be converted to polygons to be drawn 68 Surfaces in 3D: polygons Easier to consider planar polygons 3 vertices (triangle) must be planar > 3 vertices, not necessarily planar
a non-planar “polygon” rotate the polygon about the vertical axis should the result be this or this?
this vertex is in front of the other three, which are all in the same plane 69 Splitting polygons into triangles
Most Graphics Processing Units (GPUs) are optimised to draw triangles Split polygons with more than three vertices into triangles
which is preferable?
70 2D transformations
why? it is extremely useful to be able to transform predefined objects to an arbitrary location, orientation, and size any reasonable graphics package will include transforms 2D Postscript 3D OpenGL scale rotate translate (shear) 71 Basic 2D transformations
scale about origin by factor m: x' = mx, y' = my
rotate about origin by angle θ: x' = x cos θ − y sin θ, y' = x sin θ + y cos θ
translate along vector (x_o, y_o): x' = x + x_o, y' = y + y_o
shear parallel to x axis by factor a: x' = x + ay, y' = y
72 Matrix representation of transformations
scale about origin, factor m: (x', y')ᵀ = [m 0; 0 m] (x, y)ᵀ
rotate about origin, angle θ: (x', y')ᵀ = [cos θ −sin θ; sin θ cos θ] (x, y)ᵀ
do nothing (identity): (x', y')ᵀ = [1 0; 0 1] (x, y)ᵀ
shear parallel to x axis, factor a: (x', y')ᵀ = [1 a; 0 1] (x, y)ᵀ
73 Homogeneous 2D co-ordinates
translations cannot be represented using simple 2D matrix multiplication on 2D vectors, so we switch to homogeneous co-ordinates: (x, y, w) represents the 2D point (x/w, y/w) an infinite number of homogeneous co-ordinates map to every 2D point w = 0 represents a point at infinity usually take the inverse transform to be: (x, y) → (x, y, 1) 74 Matrices in homogeneous co-ordinates
scale about origin, factor m: (x', y', w')ᵀ = [m 0 0; 0 m 0; 0 0 1] (x, y, w)ᵀ
rotate about origin, angle θ: (x', y', w')ᵀ = [cos θ −sin θ 0; sin θ cos θ 0; 0 0 1] (x, y, w)ᵀ
do nothing (identity): (x', y', w')ᵀ = [1 0 0; 0 1 0; 0 0 1] (x, y, w)ᵀ
shear parallel to x axis, factor a: (x', y', w')ᵀ = [1 a 0; 0 1 0; 0 0 1] (x, y, w)ᵀ
75 Translation by matrix algebra
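The 3×3 homogeneous matrices, including the translation that plain 2×2 matrices cannot express, can be sketched as (a minimal sketch with illustrative names):

```python
import math

def mat_vec(M, v):
    """Apply a 3x3 homogeneous matrix to a column vector (x, y, w)."""
    return tuple(sum(M[i][k] * v[k] for k in range(3)) for i in range(3))

def translate(xo, yo):
    # translation along (xo, yo) -- impossible with a plain 2x2 matrix
    return [[1, 0, xo], [0, 1, yo], [0, 0, 1]]

def scale(m):
    # scale about the origin by factor m
    return [[m, 0, 0], [0, m, 0], [0, 0, 1]]

def rotate(theta):
    # rotate about the origin by angle theta
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

# translation really is a matrix multiplication now: (1, 1) -> (3, 4)
assert mat_vec(translate(2, 3), (1, 1, 1)) == (3, 4, 1)
```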
translate along (x_o, y_o): (x', y', w')ᵀ = [1 0 x_o; 0 1 y_o; 0 0 1] (x, y, w)ᵀ
In homogeneous coordinates: x' = x + w x_o, y' = y + w y_o, w' = w
In conventional coordinates (w ≠ 0): x'/w' = x/w + x_o, y'/w' = y/w + y_o
76 Concatenating transformations
often necessary to perform more than one transformation on the same object can concatenate transformations by multiplying their matrices e.g. a shear followed by a scaling:
shear: (x', y', w')ᵀ = [1 a 0; 0 1 0; 0 0 1] (x, y, w)ᵀ
scale: (x'', y'', w'')ᵀ = [m 0 0; 0 m 0; 0 0 1] (x', y', w')ᵀ
both together: (x'', y'', w'')ᵀ = [m 0 0; 0 m 0; 0 0 1][1 a 0; 0 1 0; 0 0 1] (x, y, w)ᵀ = [m ma 0; 0 m 0; 0 0 1] (x, y, w)ᵀ
77 Transformations are not commutative be careful of the order in which you concatenate transformations
rotate by 45° then scale by 2 along x axis: [2 0 0; 0 1 0; 0 0 1][√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1] = [√2 −√2 0; √2/2 √2/2 0; 0 0 1]
scale by 2 along x axis then rotate by 45°: [√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1][2 0 0; 0 1 0; 0 0 1] = [√2 −√2/2 0; √2 √2/2 0; 0 0 1]
78 Scaling about an arbitrary point
scale by a factor m about point (xo,yo)
translate point (x_o, y_o) to the origin; scale by a factor m about the origin; translate the origin back to (x_o, y_o)
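The three-step recipe above composes into a single matrix; a sketch (names illustrative):

```python
def mat_mul(A, B):
    """3x3 matrix product for homogeneous 2D transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def scale_about(m, xo, yo):
    """Translate (xo, yo) to the origin, scale by m, translate back."""
    T_in  = [[1, 0, -xo], [0, 1, -yo], [0, 0, 1]]
    S     = [[m, 0, 0], [0, m, 0], [0, 0, 1]]
    T_out = [[1, 0, xo], [0, 1, yo], [0, 0, 1]]
    return mat_mul(T_out, mat_mul(S, T_in))
```

The fixed point of the resulting matrix is (x_o, y_o) itself, as expected.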
(x', y', w')ᵀ = [1 0 −x_o; 0 1 −y_o; 0 0 1] (x, y, w)ᵀ
(x'', y'', w'')ᵀ = [m 0 0; 0 m 0; 0 0 1] (x', y', w')ᵀ
(x''', y''', w''')ᵀ = [1 0 x_o; 0 1 y_o; 0 0 1] (x'', y'', w'')ᵀ
combined: (x''', y''', w''')ᵀ = [1 0 x_o; 0 1 y_o; 0 0 1][m 0 0; 0 m 0; 0 0 1][1 0 −x_o; 0 1 −y_o; 0 0 1] (x, y, w)ᵀ
Exercise: show how to perform rotation about an arbitrary point 79 3D transformations
3D homogeneous co-ordinates: (x, y, z, w) represents the 3D point (x/w, y/w, z/w)
3D transformation matrices:
translation: [1 0 0 t_x; 0 1 0 t_y; 0 0 1 t_z; 0 0 0 1]
identity: [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]
rotation about x-axis: [1 0 0 0; 0 cos θ −sin θ 0; 0 sin θ cos θ 0; 0 0 0 1]
scale: [m_x 0 0 0; 0 m_y 0 0; 0 0 m_z 0; 0 0 0 1]
rotation about z-axis: [cos θ −sin θ 0 0; sin θ cos θ 0 0; 0 0 1 0; 0 0 0 1]
rotation about y-axis: [cos θ 0 sin θ 0; 0 1 0 0; −sin θ 0 cos θ 0; 0 0 0 1]
80 3D transformations are not commutative
[Figure: rotating a cube 90° about the z-axis then 90° about the x-axis shows the opposite face to rotating it 90° about the x-axis then 90° about the z-axis] 81 Model transformation 1 the graphics package Open Inventor defines a cylinder to be: centre at the origin, (0,0,0) radius 1 unit height 2 units, aligned along the y-axis this is the only cylinder that can be drawn, but the package has a complete set of 3D transformations we want to draw a cylinder of: radius 2 units the centres of its two ends located at (1,2,3) and (2,4,5) its length is thus 3 units what transforms are required? and in what order should they be applied? 82 Model transformation 2
order is important: scale first, rotate, translate last scaling and translation are straightforward
S = [2 0 0 0; 0 1.5 0 0; 0 0 2 0; 0 0 0 1] scales from size (2,2,2) to size (4,3,4)
T = [1 0 0 1.5; 0 1 0 3; 0 0 1 4; 0 0 0 1] translates the centre of the cylinder from (0,0,0) to (1.5, 3, 4), halfway between (1,2,3) and (2,4,5)
83 Model transformation 3 rotation is a multi-step process break the rotation into steps, each of which is rotation about a principal axis work these out by taking the desired orientation back to the original axis-aligned position
the centres of its two ends located at (1,2,3) and (2,4,5)
desired axis: (2,4,5)–(1,2,3) = (1,2,2)
original axis: y-axis = (0,1,0) 84 Model transformation 4
desired axis: (2,4,5)–(1,2,3) = (1,2,2) original axis: y-axis = (0,1,0)
zero the z-coordinate by rotating about the x-axis:
R₁ = [1 0 0 0; 0 cos θ −sin θ 0; 0 sin θ cos θ 0; 0 0 0 1]
θ = arcsin( 2 / √(2² + 2²) ) = arcsin(2/√8)
this takes the axis (1, 2, 2) to (1, √8, 0)
85 Model transformation 5
then zero the x-coordinate by rotating about the z-axis we now have the object’s axis pointing along the y-axis
R₂ = [cos φ −sin φ 0 0; sin φ cos φ 0 0; 0 0 1 0; 0 0 0 1]
φ = arcsin( 1 / √(1² + 8) ) = arcsin(1/3)
this takes (1, √8, 0) to (0, 3, 0)
86 Model transformation 6 the overall transformation is: first scale, then take the inverse of the rotation we just calculated, finally translate to the correct position: (x', y', z', w')ᵀ = T R₁⁻¹ R₂⁻¹ S (x, y, z, w)ᵀ 87 Application: display multiple instances
transformations allow you to define an object at one location and then place multiple instances in your scene 88 3D 2D projection to make a picture 3D world is projected to a 2D image like a camera taking a photograph the three dimensional world is projected onto a plane
The 3D world is described as a set of (mathematical) objects e.g. sphere radius (3.4) centre (0,2,9) e.g. box size (2,4,3) centre (7, 2, 9) orientation (27º, 156º) 89 Types of projection parallel e.g. (,,)(,)xyzxy useful in CAD, architecture, etc looks unrealistic perspective x y e.g. (,,)(,)xy z z z things get smaller as they get farther away looks realistic this is how cameras work 90 Geometry of perspective projection
[Figure: a point (x, y, z) projected through the eye at (0, 0, 0) onto the screen plane at distance d, giving (x', y', d)]
x' = x · d/z
y' = y · d/z
91 Projection as a matrix operation
(x, y, z, 1)ᵀ → [1 0 0 0; 0 1 0 0; 0 0 0 1/d; 0 0 1/d 0] (x, y, z, 1)ᵀ = (x, y, 1/d, z/d)ᵀ
remember (x, y, z, w) represents the point (x/w, y/w, z/w), so dividing through by w = z/d gives x' = x d/z, y' = y d/z, z' = 1/z
This is useful in the z-buffer algorithm where we need to interpolate 1/z values rather than z values. 92 Perspective projection with an arbitrary camera
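The projection matrix above, applied to a point and followed by the homogeneous divide, can be sketched as:

```python
def project(x, y, z, d):
    """Perspective projection onto the plane z = d via the homogeneous
    matrix from the slide: (x, y, z, 1) -> (x, y, 1/d, z/d)."""
    M = [[1, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 1 / d],
         [0, 0, 1 / d, 0]]
    v = (x, y, z, 1)
    xp, yp, zp, w = (sum(M[i][k] * v[k] for k in range(4)) for i in range(4))
    # homogeneous divide; zp/w comes out as 1/z, handy for z-buffering
    return xp / w, yp / w, zp / w

# A point at (2, 4, 8) seen through a screen at d = 2 projects to (0.5, 1.0)
x, y, inv_z = project(2, 4, 8, 2)
assert (x, y) == (0.5, 1.0) and inv_z == 1 / 8
```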
we have assumed that: screen centre at (0,0,d) screen parallel to xy-plane z-axis into screen y-axis up and x-axis to the right eye (camera) at origin (0,0,0) for an arbitrary camera we can either: work out equations for projecting objects about an arbitrary point onto an arbitrary plane transform all objects into our standard co-ordinate system (viewing co-ordinates) and use the above assumptions 93 A variety of transformations
object in object in object in object in object world viewing 2D screen co-ordinates co-ordinates co-ordinates co-ordinates modelling viewing projection transform transform
the modelling transform and viewing transform can be multiplied together to produce a single matrix taking an object directly from object co-ordinates into viewing co-ordinates either or both of the modelling transform and viewing transform matrices can be the identity matrix e.g. objects can be specified directly in viewing co-ordinates, or directly in world co-ordinates this is a useful set of transforms, not a hard and fast model of how things should be done 94 Viewing transformation 1
world co-ordinates → viewing transform → viewing co-ordinates
the problem: to transform an arbitrary co-ordinate system to the default viewing co-ordinate system
camera specification in world co-ordinates: eye (camera) at (e_x, e_y, e_z) look point (centre of screen) at (l_x, l_y, l_z) up along vector (u_x, u_y, u_z), perpendicular to el 95 Viewing transformation 2
translate eye point, (ex,ey,ez), to origin, (0,0,0)
T = [1 0 0 −e_x; 0 1 0 −e_y; 0 0 1 −e_z; 0 0 0 1]
scale so that the eye point to look point distance, |el|, equals the distance from origin to screen centre, d:
|el| = √( (l_x − e_x)² + (l_y − e_y)² + (l_z − e_z)² )
S = [d/|el| 0 0 0; 0 d/|el| 0 0; 0 0 d/|el| 0; 0 0 0 1]
96 Viewing transformation 3
need to align the line el with the z-axis
first transform e and l into the new co-ordinate system: e'' = S T e = 0, l'' = S T l
then rotate e''l'' into the yz-plane, rotating about the y-axis:
R₁ = [cos θ 0 sin θ 0; 0 1 0 0; −sin θ 0 cos θ 0; 0 0 0 1]
θ = arccos( l_z'' / √(l_x''² + l_z''²) )
this takes (l_x'', l_y'', l_z'') to (0, l_y'', √(l_x''² + l_z''²))
97 Viewing transformation 4
having rotated the viewing vector onto the yz plane, rotate it about the x-axis so that it aligns with the z-axis
l''' = R₁ l''
R₂ = [1 0 0 0; 0 cos φ −sin φ 0; 0 sin φ cos φ 0; 0 0 0 1]
φ = arccos( l_z''' / √(l_y'''² + l_z'''²) )
this takes (0, l_y''', l_z''') to (0, 0, √(l_y'''² + l_z'''²)) = (0, 0, d)
98 Viewing transformation 5
the final step is to ensure that the up vector actually points up, i.e. along the positive y-axis
actually need to rotate the up vector about the z-axis so that it lies in the positive y half of the yz plane: u'''' = R₂ R₁ u
R₃ = [cos ψ −sin ψ 0 0; sin ψ cos ψ 0 0; 0 0 1 0; 0 0 0 1]
ψ = arccos( u_y'''' / √(u_x''''² + u_y''''²) )
why don’t we need to multiply u by S or T? u is a vector rather than a point, and vectors do not get translated; scaling u by a uniform scaling matrix would make no difference to the direction in which it points
99 Viewing transformation 6
world co-ordinates → [viewing transform] → viewing co-ordinates
we can now transform any point in world co-ordinates to the equivalent point in viewing co-ordinates:
[ x' ]                 [ x ]
[ y' ] = R3 R2 R1 S T  [ y ]
[ z' ]                 [ z ]
[ w' ]                 [ w ]
in particular: e → (0, 0, 0) and l → (0, 0, d)
the matrices depend only on e, l, and u, so they can be pre-multiplied together: M = R3 R2 R1 S T 100 Illumination & shading
Drawing polygons with uniform colours gives poor results; interpolate colours across polygons 101 Illumination & shading
Interpolating colours across polygons needs the colour at each vertex and an algorithm to blend between the colours across the polygon. This works for ambient lighting and diffuse reflection; specular reflection requires more information than just the colour 102 Gouraud shading
for a polygonal model, calculate the diffuse illumination at each vertex: calculate the normal at the vertex and use this to calculate the diffuse illumination at that point; the normal can be calculated directly if the polygonal model was derived from a curved surface
[(x1', y1'), z1, (r1, g1, b1)]
[(x2', y2'), z2, (r2, g2, b2)]
[(x3', y3'), z3, (r3, g3, b3)]
interpolate the colour between the vertices across the polygon: the surface will look smoothly curved rather than looking like a set of polygons, though the surface outline will still look polygonal
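The blending step can be sketched with barycentric weights, one common way to implement this interpolation (the notes do not prescribe a particular algorithm); the class and method names here are invented for illustration.

```java
// Sketch of the colour-interpolation step of Gouraud shading: given the
// already-lit colours at the three vertices of a 2D-projected triangle,
// blend them at an interior point using barycentric coordinates.
public class GouraudInterp {
    // a, b, c are vertex positions (x, y); ca, cb, cc their (r, g, b) colours;
    // (px, py) is the point inside the triangle being shaded.
    static double[] shade(double[] a, double[] b, double[] c,
                          double[] ca, double[] cb, double[] cc,
                          double px, double py) {
        double area = edge(a, b, c[0], c[1]); // twice the triangle's signed area
        double wa = edge(b, c, px, py) / area; // barycentric weight of vertex a
        double wb = edge(c, a, px, py) / area;
        double wc = edge(a, b, px, py) / area;
        double[] col = new double[3];
        for (int i = 0; i < 3; i++)
            col[i] = wa * ca[i] + wb * cb[i] + wc * cc[i];
        return col;
    }
    // Twice the signed area of the triangle (v0, v1, (x, y))
    static double edge(double[] v0, double[] v1, double x, double y) {
        return (v1[0]-v0[0])*(y-v0[1]) - (v1[1]-v0[1])*(x-v0[0]);
    }
}
```

At the centroid of a triangle with pure red, green and blue vertices this blends to equal thirds of each, which is exactly the smooth interior shading the slide describes.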
Henri Gouraud, “Continuous Shading of Curved Surfaces”, IEEE Trans Computers, 20(6), 1971 103 Flat vs Gouraud shading
note how the interior is smoothly shaded but the outline remains polygonal
http://computer.howstuffworks.com/question484.htm 104 Phong shading
similar to Gouraud shading, but calculate the specular component in addition to the diffuse component; we therefore need to interpolate the normal across the polygon in order to be able to calculate the reflection vector
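The per-fragment lighting that Phong shading evaluates, once the normal has been interpolated, can be sketched as follows. This is a minimal sketch assuming normalised vectors and omitting the ambient term; the class and method names are invented.

```java
// Sketch of per-pixel Phong lighting: diffuse term k_d(N·L) plus
// Phong's specular term k_s(R·V)^n, where R = 2(N·L)N - L is the
// reflection of the light vector about the interpolated normal.
public class PhongLighting {
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }
    // n = surface normal, l = direction to light, v = direction to viewer;
    // all assumed to be unit vectors.
    static double shade(double[] n, double[] l, double[] v,
                        double kd, double ks, double shininess) {
        double nl = Math.max(0, dot(n, l));
        // Reflection of the light vector about the normal
        double[] r = { 2*nl*n[0]-l[0], 2*nl*n[1]-l[1], 2*nl*n[2]-l[2] };
        double rv = Math.max(0, dot(r, v));
        return kd * nl + ks * Math.pow(rv, shininess);
    }
}
```

Looking straight down the reflection direction yields the full diffuse-plus-specular intensity; viewing from the side leaves only the diffuse term, which is why the highlight is localised.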
[(x1', y1'), z1, (r1, g1, b1), N1]
[(x2', y2'), z2, (r2, g2, b2), N2]
[(x3', y3'), z3, (r3, g3, b3), N3]
N.B. Phong's approximation to specular reflection ignores (amongst other things) the effects of glancing incidence. Introduction to Computer Graphics
} Background } Rendering } Graphics pipeline } Graphics hardware and modern OpenGL } GPU & APIs } OpenGL Rendering pipeline } Example OpenGL code } GLSL } Transformations & vertex shaders } Raster buffers } Textures } Colour
115 What is a GPU?
} Graphics Processing Unit } Like CPU (Central Processing Unit) but for processing graphics } Optimized for floating point operations on large arrays of data } Vertices, normals, pixels, etc.
116 Transistor count
Intel 8-core Core i7 Haswell-E: 2,600,000,000 transistors
Nvidia GeForce GTX Titan X: 8,000,000,000 transistors
117 What does a GPU do
} Performs all low-level tasks & a lot of high-level tasks } Clipping, rasterisation, hidden surface removal, … } Essentially draws millions of triangles very efficiently } Procedural shading, texturing, animation, simulation, … } Video rendering, de- and encoding, deinterlacing, ... } Physics engines } Full programmability at several pipeline stages } fully programmable } but optimized for massively parallel operations
118 What makes GPU so fast?
} 3D rendering can be very efficiently parallelized } Millions of pixels } Millions of triangles } Many operations executed independently at the same time } This is why modern GPUs } Contain between hundreds and thousands of SIMD processors } Single Instruction Multiple Data – operate on large arrays of data } >>400 GB/s memory access } This is much higher bandwidth than CPU } But peak performance can be expected only for very specific operations
119 GPU APIs (Application Programming Interfaces)
OpenGL:
} Multi-platform
} Open standard API
} Focus on general 3D applications
} OpenGL driver manages the resources
DirectX:
} Microsoft Windows / Xbox
} Proprietary API
} Focus on games
} Application manages resources
} Similar functionality and performance
120 One more API
} Vulkan – cross-platform, open standard } Low-overhead API for high performance 3D graphics } Compared to OpenGL / DirectX } Reduces CPU load } Better support of multi-CPU-core architectures } Finer control of the GPU } But } The code for drawing even a few primitives can take thousands of lines } Intended for game engines and code that must be very well optimized
121 GPU for general computing
} OpenGL and DirectX are not meant to be used for general purpose computing } Examples: physical simulation, machine learning } CUDA – Nvidia's architecture for parallel computing } C-like programming language } With a special API for parallel instructions } Requires an Nvidia GPU } OpenCL – similar to CUDA, but an open standard } Can run on both GPU and CPU } Supported by AMD, Intel, Nvidia, Qualcomm, Apple, …
122 GPU and mobile devices
} OpenGL ES 1.0-3.2 } Stripped version of OpenGL } Removed functionality that is not strictly necessary on mobile devices } Devices } iOS: iPhone, iPad } Android phones } PlayStation 3 } Nintendo 3DS } and many more OpenGL ES 2.0 rendering (iOS)
123 WebGL
} JavaScript library for 3D rendering in a web browser } WebGL 1.0 - based on OpenGL ES 2.0 } WebGL 2.0 – based on OpenGL ES 3.0 } Chrome and Firefox (2017) } Most modern browsers support WebGL } Potentially could be used to create 3D games in a browser } and replace Adobe Flash http://zygotebody.com/
124 OpenGL in Java
} Standard Java API does not include OpenGL interface } But several wrapper libraries exist } Java OpenGL – JOGL } Lightweight Java Game Library - LWJGL } We will use LWJGL 3 } Seems to be better maintained } Access to other APIs (OpenCL, OpenAL, …)
} We also need a linear algebra library } JOML – Java OpenGL Math Library } Operations on 2, 3, 4-dimensional vectors and matrices
125 OpenGL History
} Proprietary library IRIS GL by SGI
} OpenGL 1.0 (1992)
} OpenGL 1.2 (1998)
} OpenGL 2.0 (2004) } GLSL } Non-power-of-two (NPOT) textures
} OpenGL 3.0 (2008) } Major overhaul of the API } Many features from previous versions deprecated
} OpenGL 3.2 (2009) } Core and Compatibility profiles } Geometry shaders
} OpenGL 4.0 (2010) } Catching up with Direct3D 11
} OpenGL 4.5 (2014)
} OpenGL 4.6 (2017) } SPIR-V shaders
126 OpenGL rendering pipeline OpenGL programming model
CPU code:
} gl* functions that } Create OpenGL objects } Copy data CPU<->GPU } Modify OpenGL state } Enqueue operations } Synchronize CPU & GPU
} C99 library
} Wrappers in most programming languages
GPU code:
} Fragment shaders, vertex shaders and other shaders
} Written in GLSL } Similar to C
} From OpenGL 4.6 can be written in other languages and compiled to SPIR-V
OpenGL rendering pipeline
Vertex data → Vertex shader → Tessellation control shader → Tessellation evaluation shader → Geometry shader → Primitive assembly → Clipping → Rasterization → Fragment shader → Screen buffer
The shader stages are programmable; the remaining stages are fixed.
129-135 OpenGL rendering pipeline: the slides highlight each stage of the pipeline in turn
} Vertex shader: processing of vertices, normals, uv texture coordinates
} Tessellation control & evaluation shaders [optional]: create new primitives by tessellating existing primitives (patches); the evaluation shader operates on the tessellated geometry and can itself create new primitives (e.g. fur, shadow volumes)
} Primitive assembly: organizes vertices into primitives and prepares them for rendering
} Clipping: removes or modifies vertices so that they all lie within the viewport (view frustum)
} Rasterization: generates fragments (pixels) to be drawn for each primitive (scanlines) and interpolates vertex attributes
} Fragment shader: computes the colour of each fragment (pixel); can look up the colour in a texture; can modify the pixel's depth value; used, e.g., for physically accurate materials and Non-Photorealistic Rendering
136 Managing buffers/objects in OpenGL
} Generating names } “name” is like a reference in Java } glGen* functions create names WITHOUT allocating the actual object } From OpenGL 4.5: glCreate* functions create names AND allocate actual object } Binding objects } glBind* functions } Performs two operations } Allocates memory for a particular object (if it does not exist) } Makes this object active in the current OpenGL Context } Functions operating on OpenGL objects will change the currently bound (or active) object 137 Managing buffers/objects in OpenGL
} Unbinding objects } Passing “0” instead of “name” unbinds the active object } glBind( .., 0 ) } Deleting object } glDelete* functions } Deletes both the object and its name
138 Geometry objects in OpenGL (OO view)
139 OpenGL as a state-machine If OpenGL was OO API: But it is not, and you must do:
If OpenGL were an OO API:
VertexArray va = new VertexArray();
ArrayBuffer vertices = new ArrayBuffer( my_data );
// This adds vertices to the VertexArray
va.add( vertices );

But it is not, and you must do:
int va = glGenVertexArrays();
glBindVertexArray(va); // va becomes the "active" VertexArray
int vertices = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vertices); // This adds vertices to the currently bound VertexArray
140 OpenGL example code - overview Let us draw some triangles
Initialize OpenGL: initialize rendering window & OpenGL context.
Set up inputs: send the geometry (vertices, triangles, normals) to the GPU; load and compile shaders.
Draw a frame: clear the screen buffer; set the model-view-projection matrix; render geometry; flip the screen buffers.
Free resources.
142 A more complete example
int vertexArrayObj = glGenVertexArrays(); // Create a name
glBindVertexArray(vertexArrayObj); // Bind a VertexArray
float[] vertPositions = new float[] { -1, -1, 0, 0, 1, 0, 1, -1, 0 }; // x, y, z, x, y, z …
// Java-specific code for transforming float[] into an OpenGL-friendly format
FloatBuffer vertex_buffer = BufferUtils.createFloatBuffer(vertPositions.length);
vertex_buffer.put(vertPositions); // Put the vertex array into the CPU buffer
vertex_buffer.flip(); // "flip" switches the buffer from write mode to read mode
int vertex_handle = glGenBuffers(); // Get an OGL name for a buffer object
glBindBuffer(GL_ARRAY_BUFFER, vertex_handle); // Bring that buffer object into existence on the GPU
glBufferData(GL_ARRAY_BUFFER, vertex_buffer, GL_STATIC_DRAW); // Load the GPU buffer object with data
Note on LWJGL
} The OpenGL functions and constants can be found in the LWJGL packages: } org.lwjgl.opengl.GL11 } org.lwjgl.opengl.GL15 } org.lwjgl.opengl.GL20 } org.lwjgl.opengl.GL30 } … } For simplicity, package names are omitted in all examples shown in these slides
144 Let us draw some triangles
Initialize OpenGL: initialize rendering window & OpenGL context.
Set up inputs: send the geometry (vertices, triangles, normals) to the GPU; load and compile shaders.
Draw a frame: clear the screen buffer; set the model-view-projection matrix; render geometry; flip the screen buffers.
Free resources.
146 Rendering 1 of 2
// Step 1: Pass a new model-view-projection matrix to the vertex shader
Matrix4f mvp_matrix; // Model-view-projection matrix
mvp_matrix = new Matrix4f(camera.getProjectionMatrix()).mul(camera.getViewMatrix());
int mvp_location = glGetUniformLocation(shaders.getHandle(), "mvp_matrix");
FloatBuffer mvp_buffer = BufferUtils.createFloatBuffer(16);
mvp_matrix.get(mvp_buffer);
glUniformMatrix4fv(mvp_location, false, mvp_buffer);
// Step 2: Clear the buffer
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // Set the background colour to white
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
147 Rendering 2 of 2
// Step 3: Draw our VertexArray as triangles
glBindVertexArray(vertexArrayObj); // Bind the existing VertexArray object
glDrawElements(GL_TRIANGLES, no_of_triangles, GL_UNSIGNED_INT, 0); // Draw it as triangles
glBindVertexArray(0); // Remove the binding
// Step 4: Swap the draw and back buffers to display the rendered image
glfwSwapBuffers(window);
glfwPollEvents();
147 GLSL - fundamentals Shaders
} Shaders are small programs executed on a GPU } Executed for each vertex, each pixel (fragment), etc. } They are written in GLSL (OpenGL Shading Language) } Similar to C++ and Java } Primitive (int, float) and aggregate data types (ivec3, vec3) } Structures and arrays } Arithmetic operations on scalars, vectors and matrices } Flow control: if, switch, for, while } Functions
149 Example of a vertex shader
#version 330
in vec3 position; // vertex position in local space
in vec3 normal; // vertex normal in local space
out vec3 frag_normal; // fragment normal in world space
uniform mat4 mvp_matrix; // model-view-projection matrix
void main() {
  // Typically the normal is transformed by the model matrix;
  // since the model matrix is identity in our case, we do not modify normals
  frag_normal = normal;
  // The position is projected to the screen coordinates using mvp_matrix
  gl_Position = mvp_matrix * vec4(position, 1.0);
}
Why is this piece of code needed? 150 Data types
} Basic types } float, double, int, uint, bool } Aggregate types } float: vec2, vec3, vec4; mat2, mat3, mat4 } double: dvec2, dvec3, dvec4; dmat2, dmat3, dmat4 } int: ivec2, ivec3, ivec4 } uint: uvec2, uvec3, uvec4 } bool: bvec2, bvec3, bvec4 vec3 V = vec3( 1.0, 2.0, 3.0 ); mat3 M = mat3( 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0 );
151 Indexing components in aggregate types
} Subscripts: rgba, xyzw, stpq (work exactly the same) } float red = color.r; } float v_y = velocity.y; but also } float red = color.x; } float v_y = velocity.g; } With 0-base index: } float red = color[0]; } float m22 = M[1][1]; // second row and column of matrix M
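The interchangeable subscript sets above have no direct Java counterpart; the following small, hypothetical helper mimics GLSL-style component selection on a plain array so the equivalences can be tried out on the CPU.

```java
// Sketch of GLSL component selection in Java: pick components of a
// vector by name, using any of the equivalent lane alphabets.
public class Swizzle {
    static double[] swizzle(double[] v, String pattern) {
        String lanes = "rgba"; // could equally be "xyzw" or "stpq"
        double[] out = new double[pattern.length()];
        for (int i = 0; i < pattern.length(); i++)
            out[i] = v[lanes.indexOf(pattern.charAt(i))];
        return out;
    }
}
```

For example, swizzle(v, "bgr") reverses the first three components, mirroring the rgba_color.bgr selection shown on the next slide.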
152 Swizzling
You can select the elements of an aggregate type:
} vec4 rgba_color = vec4( 1.0, 1.0, 0.0, 1.0 );
} vec3 rgb_color = rgba_color.rgb;
} vec3 bgr_color = rgba_color.bgr;
} vec3 luma = rgba_color.ggg;
153 Arrays
} Similar to C float lut[5] = float[5]( 1.0, 1.42, 1.73, 2.0, 2.23 );
} Size can be checked with “length()” for( int i = 0; i < lut.length(); i++ ) { lut[i] *= 2; }
154 Storage qualifiers } const – read-only, fixed at compile time } in – input to the shader } out – output from the shader } uniform – parameter passed from the application (Java), constant for the primitive } buffer – shared with the application } shared – shared with local work group (compute shaders only)
} Example: const float pi=3.14;
155 Shader inputs and outputs
ArrayBuffers (vertices, normals) supply the vertex attributes, connected via glGetAttribLocation, glBindBuffer, glVertexAttribPointer and glEnableVertexAttribArray (see the next slide).
Vertex shader: in vec3 position, in vec3 normal → out vec3 frag_normal
→ vertex attribute interpolation →
Fragment shader: in vec3 frag_normal → out vec3 colour → [optional] FrameBuffer (pixels), bound with glBindFragDataLocation or layout(location=?) in GLSL
156 How to specify input to a vertex shader?
// Get the location of the "position" vertex attribute variable in our shader
int position_loc = glGetAttribLocation(shaders_handle, "position");
// If the vertex attribute was found
if (position_loc != -1) {
  // Activate the ArrayBuffer that should be accessed in the shader
  glBindBuffer(GL_ARRAY_BUFFER, vertex_handle);
  // Specify where the data for the "position" variable can be accessed
  glVertexAttribPointer(position_loc, 3, GL_FLOAT, false, 0, 0);
  // Enable that vertex attribute variable
  glEnableVertexAttribArray(position_loc);
}
157 Passing uniform(s) to a shader
} In shader: uniform mat4 mvp_matrix; // model-view-projection matrix
} In Java:
Matrix4f mvp_matrix; // Matrix to be passed to the shader
...
int mvp_location = glGetUniformLocation(shaders.getHandle(), "mvp_matrix");
FloatBuffer mvp_buffer = BufferUtils.createFloatBuffer(16);
mvp_matrix.get(mvp_buffer);
glUniformMatrix4fv(mvp_location, false, mvp_buffer);
Name of the method depends on the data type. For example, glUniform3fv for Vector3f 158 GLSL Operators
} Arithmetic: + - ++ -- } Multiplication: } vec3 * vec3 – element-wise } mat4 * vec4 – matrix multiplication (with a column vector) } Bitwise (integer): <<, >>, &, |, ^ } Logical (bool): &&, ||, ^^ } Assignment: float a=0; a += 2.0; // Equivalent to a = a + 2.0
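The distinction between the two kinds of multiplication above can be reproduced in plain Java (class and method names invented for this sketch): vec3 * vec3 multiplies component-wise, while mat * vec is a true matrix-vector product with a column vector.

```java
// Sketch of GLSL multiplication semantics in Java.
public class GlslMul {
    // vec3 * vec3 in GLSL: component-wise
    static double[] mulVec(double[] a, double[] b) {
        return new double[]{ a[0]*b[0], a[1]*b[1], a[2]*b[2] };
    }
    // mat3 * vec3 in GLSL: matrix multiplication with a column vector
    static double[] mulMatVec(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            for (int k = 0; k < 3; k++)
                r[i] += m[i][k] * v[k];
        return r;
    }
}
```

One caution when moving between Java and GLSL: GLSL matrix constructors such as mat3(1.0, …, 9.0) fill the matrix column by column, so a row-major Java array is the transpose of the same literal in GLSL.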
} See the quick reference guide at: https://www.opengl.org/documentation/glsl/ 159 GLSL Math
} Trigonometric: } radians( deg ), degrees( rad ), sin, cos, tan, asin, acos, atan, sinh, cosh, tanh, asinh, acosh, atanh } Exponential: } pow, exp, log, exp2, log2, sqrt, inversesqrt } Common functions: } abs, round, floor, ceil, min, max, clamp, … } And many more
} See the quick reference guide at: https://www.opengl.org/documentation/glsl/ 160 GLSL flow control
if( bool ) {
  ... // true
} else {
  ... // false
}

switch( int_value ) {
  case n:
    // statements
    break;
  case m:
    // statements
    break;
  default:
}

for( int i = 0; i < 10; i++ ) {
  ...
}

while( n < 10 ) {
  ...
}

do {
  ...
} while ( n < 10 );
161 Transformations (Vertex shaders) Model, View, Projection matrices
Model matrix
To position each object in the scene; could be different for each object.
Object coordinates (object centred at the origin) → World coordinates
163 Model, View, Projection matrices
View matrix
To position all objects relative to the camera.
World coordinates → View (camera) coordinates
Camera at the origin, pointing at -z 164 Model, View, Projection matrices
The default OpenGL coordinate system is right-handed
Projection matrix
To project 3D coordinates onto a 2D plane; note that the z coordinate is retained for depth testing.
View (camera) coordinates → Screen coordinates; x and y must be in the range -1 to 1
165 All together
3D world vertex coordinates → screen coordinates:
[ xs ]               [ x ]
[ ys ] = P ⋅ V ⋅ M ⋅ [ y ]
[ zs ]               [ z ]
[ ws ]               [ w ]
xs/ws and ys/ws must be between -1 and 1
Projection, view and model matrices
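The combined transform plus the perspective divide can be sketched in plain Java. The projection matrix used in the test is a toy one (w = -z), invented for illustration, not one defined in these notes.

```java
// Sketch of the last step of the vertex pipeline: multiply a point by
// the combined P·V·M matrix, then divide by w to reach normalised
// device coordinates.
public class ProjectToScreen {
    // pvm = pre-multiplied P·V·M; p = (x, y, z, w) in world space
    static double[] toNdc(double[][] pvm, double[] p) {
        double[] clip = new double[4];
        for (int i = 0; i < 4; i++)
            for (int k = 0; k < 4; k++)
                clip[i] += pvm[i][k] * p[k];
        // Perspective divide: xs/ws and ys/ws must land in [-1, 1]
        return new double[]{ clip[0]/clip[3], clip[1]/clip[3], clip[2]/clip[3] };
    }
}
```

Points farther from the camera get a larger w and so shrink towards the centre of the screen after the divide, which is what makes the projection perspective rather than orthographic.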
166 Transforming normal vectors
} Transformation by a non-orthogonal matrix, e.g. the non-uniform scale
M = [ 1 0 0 ]
    [ 0 2 0 ]
    [ 0 0 1 ]
does not preserve angles
} Since the normal N is perpendicular to the tangent T, we have N ⋅ T = 0. The tangent transforms with the vertex positions, T' = M T, so transforming the normal by M itself would break this perpendicularity. The normal must instead use the normal transform N' = (M^-1)^T N, which keeps N' ⋅ T' = N^T M^-1 M T = 0
(Figure: the vertex position transform applied to the surface, versus the transform applied to the normal and tangent vector.)
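The effect can be checked numerically with the diag(1, 2, 1) scale from the slide; the class and helper names below are invented, and the inverse-transpose is trivial here because the matrix is diagonal.

```java
// Demonstrates why normals need the inverse-transpose: with the
// non-uniform scale M = diag(1, 2, 1), transforming the normal by M
// itself breaks N·T = 0, while (M^-1)^T = diag(1, 0.5, 1) preserves it.
public class NormalTransform {
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }
    // Apply a diagonal (scaling) matrix to a vector
    static double[] scale(double[] v, double sx, double sy, double sz) {
        return new double[]{ v[0]*sx, v[1]*sy, v[2]*sz };
    }
}
```

With tangent T = (1, 1, 0) and normal N = (1, -1, 0), the transformed tangent is (1, 2, 0); the naively transformed normal (1, -2, 0) is no longer perpendicular to it, but (M^-1)^T N = (1, -0.5, 0) is.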