EECS 351-1 : Introduction to Building the Virtual Camera ver. 1.4

3D Transforms (cont’d): 3D Transformation Types: did we really describe ALL of them? No!
--All fit in a 4x4 matrix, suggesting up to 16 ‘degrees of freedom’. We already have 9 of them: 3 kinds of Translate (in x,y,z directions) + 3 kinds of Rotate (around x,y,z axes) + 3 kinds of Scale (along x,y,z directions). Where are the other 7 degrees of freedom?
--3 kinds of ‘shear’ (aka ‘skew’): the transforms that turn a square into a parallelogram. Sxy sets the x-y shear amount; similarly, Sxz and Syz set the x-z and y-z shear amounts:

Shear(Sxy, Sxz, Syz) = [ 1    Sxy  Sxz  0 ]
                       [ Sxy  1    Syz  0 ]
                       [ Sxz  Syz  1    0 ]
                       [ 0    0    0    1 ]
--3 kinds of ‘perspective’ transform: Px causes perspective distortions along the x-axis direction; Py, Pz along the y and z axes:

Persp(px,py,pz) = [ 1   0   0   0 ]
                  [ 0   1   0   0 ]
                  [ 0   0   1   0 ]
                  [ px  py  pz  1 ]
--Finally, we have that lower-right ‘1’ in the matrix. Remember how we convert homogeneous coordinates [x,y,z,w] to ‘real’ or ‘Cartesian’ 3D coordinates [x/w, y/w, z/w]? If we change that ‘1’, it scales the ‘w’ value; it acts as a simple scale factor on all the ‘real’ coordinates, and thus it’s redundant: we have only 15 useful degrees of freedom in our 4x4 matrix.
We can describe any 4x4 transform matrix by a combination of these three simpler classes:
‘rigid body’ transforms == preserve angles and lengths; do not distort or reshape a model; include any combination of rotation and translation.
‘affine’ transforms == preserve parallel lines, but not angles or line lengths; include rotation, translation, scale, shear or skew…
‘perspective’ transforms == link x, y, or z to the homogeneous coord w: provide the math behind a ‘pinhole’ camera image, but in orderly 4x4 matrix form!
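To see those last two transform classes in action, here is a small self-contained JavaScript sketch (helper names like applyMat4 are just for illustration, not from any library) that applies a shear matrix and a perspective-row matrix to homogeneous points and then performs the divide-by-w step:

// Apply a 4x4 matrix (array of rows) to a homogeneous point [x, y, z, w].
function applyMat4(m, p) {
  return m.map(row => row[0]*p[0] + row[1]*p[1] + row[2]*p[2] + row[3]*p[3]);
}
// Convert homogeneous [x, y, z, w] to 'real' Cartesian coords [x/w, y/w, z/w].
function toCartesian(p) { return [p[0]/p[3], p[1]/p[3], p[2]/p[3]]; }

// Shear: the symmetric form shown above, with only Sxy nonzero here.
const Sxy = 0.5;
const shear = [[1,   Sxy, 0, 0],
               [Sxy, 1,   0, 0],
               [0,   0,   1, 0],
               [0,   0,   0, 1]];
console.log(toCartesian(applyMat4(shear, [1, 1, 0, 1])));  // [1.5, 1.5, 0]: square corners slide into a parallelogram

// Perspective: pz couples z into w, so the later divide-by-w shrinks distant points.
const pz = -0.25;
const persp = [[1, 0, 0,  0],
               [0, 1, 0,  0],
               [0, 0, 1,  0],
               [0, 0, pz, 1]];
console.log(toCartesian(applyMat4(persp, [1, 1, -2, 1]))); // approx [0.67, 0.67, -1.33]: w became 1.5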

Cameras and 3D Viewing: Algebra and Intuition
Camera: a device that ‘flattens’ 3D space onto a 2D plane. Get the intuition first, then the math:

[Figure: side view of the simplest camera: the ‘Center of Projection’ at the origin looks down the z axis toward a 3D model (connected vertices); a model vertex (xv, yv, zv, 1) projects onto the ‘2D focal plane’ at focal length f, landing at the homogeneous point (xv, yv, zv, zv/f).]

A planar perspective camera, in its simplest, most general form, is just a plane and a point. The plane splits the universe into two halves: the half that contains the point, which is the location of our eye or camera, and the other half of the universe that we’ll see with our eye or camera. The point is known by many names: the ‘camera location’, the ‘center of projection’ (COP), the ‘viewpoint’, the ‘view reference point’, etc., and the plane is ‘the image plane’, the ‘near clip plane’, or even the ‘focal plane’.

We paint our planar perspective image onto that plane in a very simple way. Draw a line from our point (viewpoint, camera point, etc) through the plane, and trace it into the other half of the universe until it hits something. Copy the color we find on that ‘something’ and paint it onto the plane where our tracing-line pierced it. Use this method to set the color of all the plane points to form the complete planar perspective image of one half of the universe.

However, real cameras are finite: we can’t capture the entire infinitely large picture plane, and instead we usually select the picture within a rectangular region of that plane. First, let’s define the camera’s ‘focal length’ f as the shortest distance from its center-of-projection (COP) to the universe-splitting image plane. We make a vector perpendicular to the image plane that measures the distance from our eye-point to the plane, and call its direction our direction-of-gaze, or ‘lookAt’ direction. We define a rectangular region of the plane, usually (but not always) centered at the point nearest the COP, and record its colors as our perspective image. The size and placement of that rectangle, along with the focal length ‘f’, describe all the basic requirements for simple lenses in any kind of planar perspective camera.

Now let’s formalize these ideas.
--Given a plane and a point, suppose your eye is located at the point and that it becomes the origin of a coordinate system. Call the origin point the ‘eye’ point, or the ‘center of projection’ or the ‘view reference point’ (VRP), and use your eye to look down the ‘-z’ axis. Why not +z? Because we want to use right-handed coordinate systems, and we want to keep the x & y axes of our 3D camera for use as the x & y axes of our 2D image, aimed rightward & upward.
--At z = -f = -znear, we construct the ‘focal plane’ that splits the universe in half.
--Draw a picture on that 2D ‘focal’ plane. How? Trace rays from the eye origin (VRP) to points (vertices, usually) in the other half of the universe, and find their locations on the focal plane.
--Given a 3D vertex point (xv, yv, zv), where do we draw a point on our image plane? Answer: the image point is (f·xv / -zv, f·yv / -zv, f·zv / -zv) = f·(xv / -zv, yv / -zv, -1).
--As we change the focal distance ‘f’ between the camera origin (the ‘center of projection’) and the 2D image plane, the image gets scaled up or down, larger or smaller, but does not otherwise change: adjusting the ‘f’ parameter is equivalent to adjusting a zoom lens. However, if we instead keep ‘f’ fixed and move the vertices of a 3D model forwards or backwards along the zv axis, the effect is more complex. Vertices nearer the camera, those with small |zv| values, change their on-screen image positions far more than vertices farther away.
--Therefore, moving a 3D object closer or further away is NOT equivalent to adjusting a zoom lens, because it causes depth-dependent image changes: some call it foreshortening, others call it ‘changing perspective’ or ‘flattening the image’. As a 3D object moves away from the camera, the rays that carry color from its surface points to our camera’s center-of-projection (COP) get more and more parallel, with larger angles between them at shorter distances. Changing those angles yields a different appearance due to color changes (imagine the surface of a CD), and may change occlusion: a surface that moves near the camera may block your view of more distant points.
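That projection rule is small enough to sketch directly in JavaScript (the function name projectVertex is just illustrative); it follows the sign convention above, where the camera sits at the origin and looks down the -z axis:

// Project a 3D vertex (xv, yv, zv) onto the image plane at distance f from the
// eye, which sits at the origin and looks down the -z axis (so visible points have zv < 0).
function projectVertex(xv, yv, zv, f) {
  return [ f * xv / -zv,     // image-plane x
           f * yv / -zv,     // image-plane y
           -f ];             // every projected point lands on the z = -f plane
}

// Doubling f scales the whole image uniformly (a 'zoom'); halving -zv instead moves the
// vertex closer and shifts nearby points far more than distant ones (foreshortening).
console.log(projectVertex(1, 2, -4, 2));   // [ 0.5, 1, -2 ]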

No matter how large this 3D object may be, as -zv reaches infinity the image of the object shrinks to a single point. This is the image’s z-axis ‘vanishing point’, one of the points that painters use to help them make an accurate perspective drawing. For scenes with world-space axes that don’t match the camera axes, drawing lines parallel to each of these world-space axes will cause them to converge on other points; if they fall within the picture itself, we can see 1, 2, or 3 vanishing points. Any parallel lines in 3D that aren’t parallel to our image plane will form vanishing points somewhere on our image plane, so you can have as many ‘vanishing points’ as you wish in a drawing; the number of vanishing points doesn’t really tell you much about the camera that made an image, only the content of the scene that the camera captured. (‘Vanishing points’ in paintings define convergence of major axes of viewed objects, such as horizontal lines where walls meet floors, or vertical lines where walls meet each other.)

--Easy Question: suppose your camera DIDN’T divide by zv: where would the 3D vertex point (xv, yv, zv) appear in the image plane? Then we form no vanishing points, and the model stays the same size no matter how far away it is. This is known as ‘orthographic’ projection, and if we define ‘d’ as the size of the image, we can use it to replace ‘f’ and specify the ‘distance from the origin’ like this: Answer: the image point is (d·xv, d·yv, d) = d·(xv, yv, 1).
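For comparison, a matching sketch of the orthographic rule, where the divide disappears and only the uniform scale ‘d’ remains (again, the function name is just illustrative):

// Orthographic 'projection': no divide by depth, so distance from the camera
// never changes a vertex's on-screen position; only the uniform scale d does.
function projectOrtho(xv, yv, zv, d) {
  return [ d * xv, d * yv, d ];   // image lands on the z = d plane, as in the formula above
}
console.log(projectOrtho(1, 2, -4, 2));    // [ 2, 4, 2 ]  (zv is simply ignored)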

Homogeneous Coordinates: Clever Matrix Math for Cameras and 3D Viewing:
The ‘intuitive’ camera-construction method above isn’t linear, and isn’t suitable for reliable graphics programs. It requires us to divide by numbers that can change; if we use it to draw pictures by the millions, eventually our 3D drawing program will fail with a divide-by-zero problem. It is messy to use, too: how can you ‘move’ the camera freely? What if you want an image plane that isn’t perpendicular to the Z axis, that doesn’t have its ‘vanishing point’ in the very center of the picture it makes? THIS is the real reason why we use homogeneous coords & matrices (4x4) for graphics:
--it lets us make a 4x4 matrix that maps 3-D points to a 2-D image plane, like a camera;
--it lets you position the camera separately from the model, and place them in ‘world’ space;
--it never causes ‘divide-by-zero’ errors while you compute your image;
--it lets you adjust your camera easily, and even seems to let you ‘tilt’ the image plane (e.g. emulate an architectural ‘view’ camera), though you actually just adjust the position of the rectangle on the plane that you will use as your image;
--you can transform just the vertices of a model to the image plane, and then draw the lines and polygons in 2D.

The most-basic perspective camera matrix (the same as the algebraic one above) just ‘messes around’ with the bottom-most row of the 4x4 matrix; it avoids the divide by coupling zv to the ‘w’ value, so now ‘w’ is doing something very useful:
Tpers = [ 1   0   0    0 ]
        [ 0   1   0    0 ]
        [ 0   0   1    0 ]
        [ 0   0   1/f  0 ]    (note the last two elements of the bottom row)

That’s all you need for any perspective camera! Compare it with the coords in the drawing above. Transform a 3D point (xv, yv, zv, 1) by Tpers, and you get another 3D point; convert to ‘real’ (Cartesian) 3D coordinates (x/w, y/w, z/w). Try it:
Tpers · [xv, yv, zv, 1]^T = [xv, yv, zv, zv/f]^T in 3D homogeneous coords;
convert to 3D Cartesian coords by dividing by w = (zv/f):
[xv·f/zv, yv·f/zv, zv·f/zv]^T = f · [xv/zv, yv/zv, 1]^T
This gives us 3D coordinates in an image plane, at location z=f, with the same coordinates we found by algebra above. Note that our picture-drawing program does not have to convert points to ‘real’ (Cartesian) coordinates (x/w, y/w, z/w) until we are ready to make pixels; ‘divide by zero’ never happens while we’re doing all the complicated and time-consuming transformations needed to construct our image.
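The same check as a tiny JavaScript sketch (the helper name is illustrative): multiply by Tpers while staying in homogeneous coordinates, and divide by w only at the very end:

// The most-basic perspective matrix: identity except that the bottom row copies
// zv/f into w.  We postpone the divide until we actually need Cartesian coords.
function tpersApply(xv, yv, zv, f) {
  const clip = [xv, yv, zv, zv / f];               // Tpers * [xv, yv, zv, 1]^T
  const w = clip[3];
  return [clip[0] / w, clip[1] / w, clip[2] / w];  // = f * (xv/zv, yv/zv, 1)
}
console.log(tpersApply(1, 2, 4, 2));   // [ 0.5, 1, 2 ]:  f*(xv/zv, yv/zv, 1) with f = 2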

(Challenging puzzle: ‘To draw a perspective image of a 3D line, you only need to transform its endpoints to image space, and then draw a line between those image points.’ Can you prove this is always true? Make a parameterized point P(t) that moves between P0 and P1, as we did before. Transform each of them to image space: transform P0 to make P0’, transform P1 to make P1’, and transform P(t) to make P(t)’. Make a parameterized image point P(s) that moves between P0’ and P1’; is P(t)’ = P(s)? Can you solve for the function s(t) or t(s)? Online books and tutorials show you how…)

***Be careful with signs: in a right-handed coordinate system, the image plane is always positioned on the -Z axis, but the ‘f’ parameter used by graphics software often isn’t negative! In OpenGL and WebGL, the ‘znear’ parameter defines the image-plane position, but the functions that use it expect POSITIVE znear values.***

Earlier we learned how to make jointed objects by concatenating Translate, Rotate, and Scale matrices arranged in a ‘scene graph’. If we apply those same skills to position and aim a ‘virtual camera’, we will discover they’re not well suited for camera positioning. For our first attempt, suppose we a) translate() along –z axis to push the ‘world’ drawing axes out and away from the camera, b) then rotate() to ‘spin’ the world around its own origin, and c) then translate() to move the world under our camera’s view. While this will let us explore the world, it will not be an easy exploration; steps a) and b) force us to position the camera using spherical coordinates; step a) sets radius, the distance from camera to world origin, and step b) chooses camera azimuth, elevation, and spin. While step c) can move us to any position in the world, our camera aiming direction was fixed by step b), and we cannot easily change that aiming direction. “?!?!Why is this so hard?!?” you may ask; “?!?! Why can’t I just specify the camera’s position in the 3D world, and aim it at something interesting in that 3D world?!?!”

The answer is that we’re using the wrong tools. Our existing transformations (translate, rotate, and scale) always begin with the CVV drawing axes (our ‘camera’ axes) and then transform them successively to each nested set of drawing axes we need to draw jointed objects. Before we used cameras, our CVV coordinate system would suffice as our ‘3D world’ coordinate system. But when we created a ‘virtual camera’ we changed the purpose of the CVV; now it holds a view of the 3D world as seen by a camera. Our existing tools now require us to construct our ‘3D world’ coordinate axes by using values we specified from the camera’s drawing axes (the CVV axes) in steps a,b,c above. That’s not easy! Instead we want an opposite, an inverse of that process. Let’s call it the ‘LookAt()’ function: this new tool must create a matrix that constructs the camera’s drawing axes by using values we specify in the world’s drawing axes. We will call it the ‘view’ matrix, and place it at the root of our scene-graph, where it transforms the CVV (camera) coordinate axes into the ‘world’ coordinate axes.

The VIEW matrix, or How to build a ‘camera-positioning’ matrix from world-space values
To specify camera position in ‘world’ drawing axes, begin with just 3 values, each with 3 coordinates:

EYE (a point), AT (a point), and the view ‘up’ vector VUP. We want to place our ‘virtual camera’ in a ‘world’ coordinate system so that:
-the camera position is at a 3D point EYE (or ‘view reference point’ or ‘camera location’);
-the camera lens is aimed in the direction of the 3D world-space point AT, or ‘look-at’ point: a point in the world that we will ‘look at’;
-the camera is ‘level’: it has no unwanted tilt to one side or the other. Do this by specifying a camera ‘up’ vector direction VUP in world space. The VUP vector, the AT point, and the EYE point define a plane that always slices through the 2D output image perpendicularly to include its ‘y’ axis.
Don’t be fooled - VUP is NOT restricted to the +y direction in the 3D world the camera will view. Instead, the VUP vector defines the world-space direction that will appear as ‘up’ in the camera image. For example, changing the VUP vector from the +x direction to the +y direction will turn the camera on its side. Note that you can use VUP to help you define where to put the ground and the sky in the ‘world’ coordinate system. If you choose the world’s ‘ground plane’ as (x,y,0) with the +z axis pointing upwards towards the sky, then a ‘level’ camera will have a VUP vector aimed towards +z as well.

We then use EYE, AT, and VUP to construct our ‘viewing’ transform, following the step-by-step process below, described in many textbooks (but not ours). You will need a ‘viewing’ matrix for Project B, but the cuon-matrix.js library can build this matrix for you with its ‘LookAt()’ function. Definitions:
--World coordinate system, denoted ‘WC’. Positions specified using (x,y,z).
--Camera coordinate system, or ‘CAM’. To avoid confusion with WC, we will rename the camera’s ‘x,y,z’ coordinates as u,v,n instead.

Step 1) Define the Eye Coordinate System
Given EYE, AT & VUP, let’s first construct each part of a ‘camera’ coordinate system as measured in ‘World’ coordinates WC. The CAM origin is just the eye point EYE, already defined in world coordinates. To construct the drawing axes (careful here! we only care about directions, not positions, so we use only vectors, not points):
--Find the N vector in WC. (Careful! Right-handed coordinates! The N vector direction is backwards; it points to the eyepoint from the chosen point our camera is looking at.) We already know this one: it’s the normalized vector from AT to EYE:
Nraw = (EYE - AT); make it unit length: N = Nraw / ||Nraw||
--Find the U, V vectors. We know VUP, EYE and N define a plane P, and this plane always contains the camera image’s V vector (recall V is the direction of the ‘+y’ axis in the camera image). But if the P plane includes both the N and V vectors (axes), then plane P is perpendicular to the U vector (axis). It’s easiest to find a vector in the U direction first, by using the cross-product:
Uraw = VUP x N; make it unit length: U = Uraw / ||Uraw||
--From these two coordinate-system axis vectors we can easily find the third. Given U and N (in a right-handed coordinate system) we can find V by another cross product (as N and U are already perpendicular unit vectors, we don’t need to normalize V):
V = N x U
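Here is a short JavaScript sketch of Step 1 (the vector helpers are written out by hand; none of these names come from cuon-matrix):

// Build the camera's U, V, N axes (world-space unit vectors) from EYE, AT, VUP.
function sub(a, b)    { return [a[0]-b[0], a[1]-b[1], a[2]-b[2]]; }
function cross(a, b)  { return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]; }
function normalize(a) { const len = Math.hypot(a[0], a[1], a[2]); return [a[0]/len, a[1]/len, a[2]/len]; }

function cameraAxes(EYE, AT, VUP) {
  const N = normalize(sub(EYE, AT));   // 'backwards' gaze direction: from AT toward EYE
  const U = normalize(cross(VUP, N));  // camera's +x (rightward) image direction
  const V = cross(N, U);               // camera's +y (upward) image direction; already unit-length
  return { U, V, N };
}
console.log(cameraAxes([5, 0, 0], [0, 0, 0], [0, 1, 0]));
// { U: [0,0,-1], V: [0,1,0], N: [1,0,0] }  -- a camera on the +x axis, looking back at the origin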

Step 2) Backwards but Easy: Transform World Coords Into CAM Coords
With our new unit-length U, V, N vectors and the EYE point expressed in world-coordinate-system values, we can convert a ‘world’ coordinate-system point P0 = (P0x, P0y, P0z, 1) into CAM coordinates (u,v,n,1) quite easily if we choose the special case where we placed the EYE point at the ‘WC’ origin. Here, the vector from the EYE point (the CAM coord. origin) to P0 is obvious: it’s just (P0x, P0y, P0z, 0), and that vector’s dot-products with the U, V, and N unit vectors give us the (u,v,n) coordinates we seek. We can write those dot-products in matrix form quite easily, and they represent a simple rigid-body rotation from ‘WC’ to ‘CAM’ axes: CAM = [Rcam] P0
[u]   [ Ux Uy Uz 0 ] [P0x]
[v] = [ Vx Vy Vz 0 ] [P0y]      when EYE = (0,0,0,1).
[n]   [ Nx Ny Nz 0 ] [P0z]
[1]   [ 0  0  0  1 ] [ 1 ]

That’s easy enough, but it is only the special case: this [Rcam] matrix converts vertex numbers (and drawing axes) from the world coordinate system WC to the CAM coordinate system only when the EYE point sits at the WC origin. Before we handle an EYE point placed anywhere in the world, notice one very convenient property of this matrix:

Step 3): The Orthonormal Inverse
Fortunately, [Rcam] is a special kind of matrix: each row is a unit-length vector (U, V, or N), and each row is ‘orthogonal’ to all the other rows: their dot-products are all zero because they’re perpendicular vectors. Thus [Rcam] is an ‘orthonormal’ matrix; it describes a pure rigid-body rotation, and its inverse is simply its transpose, Rcam^-1 = Rcam^T: we just exchange its rows and columns. That transpose is what we use whenever we need to go the other way and convert CAM values back into world values, with no costly matrix inversion. [Rcam] by itself already positions a correctly-aimed camera whose eyepoint (EYE) sits at the world origin. To complete our camera positioning we apply a second transformation, one that ‘moves the world out, away from our eye’ using world-coordinate measurements: we combine [Rcam] with Translate(-EYEx, -EYEy, -EYEz), which subtracts the EYE position from each world-space point before the rotation is applied, to form our final ‘view’ matrix:

VIEW = [Rcam][Trans(-EYE)] =
[ Ux  Uy  Uz  -(U·EYE) ]
[ Vx  Vy  Vz  -(V·EYE) ]
[ Nx  Ny  Nz  -(N·EYE) ]
[ 0   0   0      1     ]
Note how the translation slides the world drawing axes so that the EYE point lands on the origin, and the rotation then aims them; this is exactly the matrix that the library’s LookAt() function builds for us.
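Putting Steps 1-3 together as a JavaScript sketch: it reuses the cameraAxes() helper from the Step 1 sketch, and returns the matrix column-major (the element order WebGL expects). Treat it as an illustration of the math rather than a drop-in replacement for the library’s LookAt():

// Build the 4x4 VIEW matrix = [Rcam][Trans(-EYE)], returned column-major.
function makeViewMatrix(EYE, AT, VUP) {
  const { U, V, N } = cameraAxes(EYE, AT, VUP);    // from the Step 1 sketch above
  const dot = (a, b) => a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
  return [
    U[0], V[0], N[0], 0,                           // column 0
    U[1], V[1], N[1], 0,                           // column 1
    U[2], V[2], N[2], 0,                           // column 2
    -dot(U, EYE), -dot(V, EYE), -dot(N, EYE), 1    // column 3: the rotated -EYE translation
  ];
}
// A camera at (0,0,5) looking at the origin just translates the world by -5 in z:
console.log(makeViewMatrix([0, 0, 5], [0, 0, 0], [0, 1, 0]));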

The PROJECTION matrix
This matrix emulates the lens system of a camera; it performs the 3D→2D transformation that may include perspective and foreshortening.

Step 4A) Apply a matrix that does the 3D→2D perspective transformation. Now each (translated) vertex is expressed in (u,v,n,w) coordinates defined by the camera. We could apply the perspective transform Tpers we defined above, convert from homogeneous to Cartesian coordinates, and at last we’d have the 2D image locations for each vertex. But this is naïve in at least two different ways:

First, we need a computed ‘depth’ value for each pixel we render so that we can perform hidden-surface removal; if we stay in homogeneous coordinates we can compare the ‘z’ values at every pixel for every drawing primitive, and draw that pixel’s fragment only if no previously-drawn fragment had a z-value nearer to the eye.

Second, we may have precision problems: while the 3D half-universe is unbounded, our hardware is not; we need to measure our (u,v,n,w) values relative to the camera’s viewing frustum, the six-sided pyramid-like box made by the camera’s field of view (a 4-sided pyramid with its apex at the eyepoint) and the camera’s ‘near’ and ‘far’ clipping planes.


What’s all this about ‘near’ and ‘far’ clipping planes?!? Left, right, top, bottom planes? Alas, all computers have finite precision. The x,y,z axes we’ve discussed easily stretch to infinity in all directions; we could never adequately describe all positions in half the universe with nothing more than a set of four GLSL ‘float’ values! Instead, we have to choose a subset, some finite volume of 3D space, to describe with the finite, limited set of floating-point numbers our computers give us. Of course, we choose the volume of 3D space that is in front of our camera; everywhere else will be off-screen in the picture we make.

THUS, we can limit our 3D camera view-frustum size in 6 ways:
For the z axis: (znear, zfar)
-the ‘near’ and ‘far’ planes are perpendicular to the z axis, at z = f = znear and z = zfar respectively.
-ONLY points between ‘znear’ and ‘zfar’ will be drawn on-screen, and both must be > 0; ‘znear’ is also known as the ‘focal distance’ f, and the zfar value limits the distance to the visible horizon in the scene.
-For better results, keep the ratio (zfar/znear) modest: less than roughly 10,000:1. As this ratio increases, you reduce the ability of WebGL to distinguish whether one surface is behind another; foreground objects might not occlude background objects! (We’ll explain more when we discuss ‘z-buffering’.) If you separate znear and zfar too widely, your program will lose precision in distinguishing objects with nearly-identical Z values (see ‘Z-fighting’), but there is no other penalty. With modern graphics hardware, floating-point z values have greatly reduced the likelihood of z-fighting except for the very largest world-space models (e.g. the earth, described with 1mm resolution).
For the x and y axes: (left, right, top, bottom)
-The combination of znear, zfar, and your camera’s angular field-of-view sets the maximum values for x,y. If you choose to use the gl.perspective() function in the cuon-matrix-quat.js library supplied in the starter code, this field-of-view is symmetric, and set by your selection of the camera’s aspect ratio and its vertical field-of-view in degrees (the conversion to frustum extents is sketched below). If instead you use the gl.frustum() function, you can individually specify the left, right, top, and bottom limits of the frustum as measured at the znear clipping plane; this permits you to construct unusual camera images that emulate a leather-bellowed ‘view’ camera.
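The sketch mentioned above: how a symmetric gl.perspective()-style field-of-view relates to gl.frustum()-style left/right/top/bottom extents at the znear plane (a sketch of the standard relationship, not the library’s own code):

// Convert a symmetric vertical field-of-view (degrees) + aspect ratio into the
// left/right/top/bottom frustum limits measured at the znear clipping plane.
function fovToFrustum(fovyDegrees, aspect, znear) {
  const top    = znear * Math.tan(fovyDegrees * Math.PI / 360);  // tan of the HALF-angle, in radians
  const bottom = -top;
  const right  = top * aspect;          // aspect = image width / image height
  const left   = -right;
  return { left, right, bottom, top };
}
console.log(fovToFrustum(90, 1.0, 1.0));   // { left: -1, right: 1, bottom: -1, top: 1 } (approx.)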

Pseudo-Depth:
Remember, when we convert to Cartesian coordinates, our naïve Tpers matrix gave us (fx/z, fy/z, fz/z) = f·(x/z, y/z, 1). Note that the 3rd Cartesian result doesn’t really tell us anything at all: it’s just the position of the image plane. We’d like to have a z value that tells us where we are between znear and zfar; in fact, we NEED this value to make depth comparisons when rendering. We shouldn’t draw any drawing primitive that’s BEHIND those we’ve already drawn in the frame buffer; accordingly, we need to find a depth-like value we can store with pixels as we draw them. OpenGL keeps a ‘depth’ value along with the color of any pixel it draws, and then checks that depth before drawing each new primitive: we only draw a new pixel if its depth is smaller (nearer the eyepoint) than any value previously drawn there.

Instead of the naïve Tpers matrix that yields (fx/z, fy/z, fz/z) = f·(x/z, y/z, 1),

we can construct a 4x4 homogeneous matrix that gives us the same image coordinates AND a ‘pseudo-depth’ determined by two ‘magic constants’ a, b: f·(x/z, y/z, (a·z + b)/z).

While not quite the same as actual depth, this ‘pseudo-depth’ is almost as good:
--it is monotonic (we don’t change the order of depth), and
--it gives us the greatest precision for depth for vertex positions near the camera, and the least precision for depths nearest the far-clip plane.

We can find the values a, b from the user-specified near/far planes, and then construct a 4x4 matrix that gives us this result. While quite clever and interesting, deriving the mapping from the frustum to the CVV is rather tedious and arcane, so instead we will let the widely-used gl.Frustum() and gl.Perspective() functions do it for us; just choose one and use it (both are implemented for you in cuon-matrix-quat.js). See:
https://www.opengl.org/sdk/docs/man2/xhtml/glFrustum.xml
https://www.opengl.org/sdk/docs/man2/xhtml/gluPerspective.xml
Both functions use the same underlying matrix, with:
n = znear, f = zfar, both > 0 (here n, not f, equals the focal distance from the origin to the image plane, as before);
t = ytop, b = ybottom, r = xright, l = xleft

Tpers = [ 2n/(r-l)   0          (r+l)/(r-l)     0          ]
        [ 0          2n/(t-b)   (t+b)/(t-b)     0          ]
        [ 0          0          -(f+n)/(f-n)   -2fn/(f-n)  ]
        [ 0          0          -1              0          ]
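A hedged JavaScript sketch of that same matrix, written row by row so you can see where the two ‘magic constants’ of the third row sit; the comments check that znear and zfar land on the faces of the CVV after the divide by w:

// Perspective projection matrix (glFrustum-style), returned as an array of rows.
function frustumMatrix(l, r, b, t, n, f) {
  const A = -(f + n) / (f - n);          // third-row 'magic constant' a...
  const B = -2 * f * n / (f - n);        // ...and b: together they produce the pseudo-depth
  return [
    [2*n/(r-l), 0,         (r+l)/(r-l),  0],
    [0,         2*n/(t-b), (t+b)/(t-b),  0],
    [0,         0,         A,            B],
    [0,         0,        -1,            0]   // copies -z(eye) into w, ready for the divide
  ];
}
// Sanity check of the pseudo-depth: a point on the znear plane (eye-space z = -n) gets
// clip z = -n and clip w = n, so z/w = -1; a point at z = -f gets z/w = +1.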

Step 4B) Suppose you want an orthographic camera instead of a perspective (pinhole) camera? Like the naïve Tpers, you could use a naïve Tortho matrix, which is just the 4x4 identity matrix with its 3rd row set entirely to zero (so that z is ignored), but that’s a poor strategy. Orthographic cameras also need near, far, left, right, top, and bottom clipping planes for the most precise results, and the ‘ortho()’ function implemented for you in cuon-matrix-quat.js applies this orthographic matrix:

Tortho = [ 2/(r-l)   0          0          -(r+l)/(r-l) ]
         [ 0         2/(t-b)    0          -(t+b)/(t-b) ]
         [ 0         0         -2/(f-n)    -(f+n)/(f-n) ]
         [ 0         0          0           1           ]
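And the matching sketch for the orthographic matrix: a pure scale-and-translate with no perspective row, so w stays 1 and no divide-by-depth ever occurs:

// Orthographic projection matrix (glOrtho-style): maps the box [l,r] x [b,t] x [-f,-n]
// in eye space onto the CVV cube; the w coordinate is left untouched.
function orthoMatrix(l, r, b, t, n, f) {
  return [
    [2/(r-l), 0,        0,         -(r+l)/(r-l)],
    [0,       2/(t-b),  0,         -(t+b)/(t-b)],
    [0,       0,       -2/(f-n),   -(f+n)/(f-n)],
    [0,       0,        0,          1]
  ];
}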

Step 5) VIEWPORT-to-screen:
After we apply Tpers or Tortho, all transformed vertices (u,v,n,w) fit into the Canonical View Volume (CVV), so that -1 <= u/w <= +1, -1 <= v/w <= +1, -1 <= n/w <= +1. The u/w, v/w values are the 2D coordinates within the camera image plane, and n/w is a normalized measure of depth from the eyepoint to the vertex (the pseudo-depth above; not the same as z distance): it gives the position of a vertex between the znear and zfar planes. To make the final image, we only need to map the camera’s coordinates (u/w, v/w) to our HTML5 canvas or viewport or screen. These coordinate values all stay within [-1, +1] (because they were clipped to that range), but now we want to change these coordinates to match our on-screen viewport: the rectangle from [0,0] to [width,height] measured in pixels. Once again we apply a scale matrix, followed by a translate matrix, to adjust the u and v values (w is unchanged, n can be ignored):
TSviewport = [ sx  0   0   tx ]
             [ 0   sy  0   ty ]
             [ 0   0   1   0  ]
             [ 0   0   0   1  ]
After this change to viewport coordinates, divide by ‘w’: pixel coordinates (x,y) = (u/w, v/w).
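A small JavaScript sketch of Step 5 (whether you divide by w before or after the scale-and-translate, the pixel result is the same); the viewport’s lower-left corner and size are the same numbers you would hand to gl.viewport():

// Map normalized device coordinates (-1..+1) onto a pixel viewport of the given
// width and height whose lower-left corner sits at (x0, y0).
function toViewport(ndcX, ndcY, x0, y0, width, height) {
  const sx = width  / 2, tx = x0 + width  / 2;   // the sx, tx entries of TSviewport
  const sy = height / 2, ty = y0 + height / 2;   // the sy, ty entries
  return [ sx * ndcX + tx,
           sy * ndcY + ty ];
}
console.log(toViewport(0, 0, 0, 0, 400, 300));   // [200, 150]: the CVV center lands at the canvas center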

3D Virtual Camera Summary:
The ‘viewing’ transformation, the 4x4 matrix that transforms a model-space vertex (x,y,z,w) to its on-screen pixel coordinates (u/w’, v/w’), is the concatenation of 4 matrices:
[u ]                                               [x]
[v ]  =  [Tsviewport][Tpers or Tortho][VIEW][MODEL][y]
[n ]                                               [z]
[w’]                                               [w]

Viewing in WebGL: See the gl.lookAt() and gl.setLookAt() functions:
Eye point = the center of projection, the VRP, specified in 3D ‘world space’ coordinates.
Center point = the 3D world-space ‘aiming point’ for the camera. VPN = Center - Eye.
Up = same as the VUP vector.

See the gl.perspective() and gl.setPerspective() functions:
fovy: field-of-view angle in degrees in the y direction; determines the angle between the top and bottom clip planes.
aspect: ratio of camera image width to camera image height.
zNear, zFar: always positive values; be sure zNear < zFar.

See the gl.Frustum() and gl.setFrustum() functions: constructs the Tpers matrix as described above; the user supplies left, right, top, bottom, near, far values.

See the gl.ortho() and gl.setOrtho() functions: constructs the orthographic matrix described above; the user supplies left, right, bottom, top, near, far values.
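A typical hedged usage sketch, assuming the Matrix4 class from cuon-matrix.js / cuon-matrix-quat.js, the usual canvas and gl variables from the starter code, and a shader uniform fetched into a variable named u_MvpMatrix (adjust the names to match your own project):

// Build projection * view in one Matrix4, then append MODEL transforms and draw.
var mvpMatrix = new Matrix4();                          // starts as the identity
mvpMatrix.setPerspective(30.0,                          // vertical field-of-view, in degrees
                         canvas.width / canvas.height,  // aspect ratio: width / height
                         1.0, 100.0);                   // znear, zfar: both positive, znear < zfar
mvpMatrix.lookAt(4.0, 3.0, 5.0,    // EYE point, in world coordinates
                 0.0, 0.0, 0.0,    // AT ('center') point the camera aims toward
                 0.0, 1.0, 0.0);   // VUP vector: world +y appears 'up' in the image
// ...concatenate MODEL transforms here (translate/rotate/scale), then send it to the shader:
gl.uniformMatrix4fv(u_MvpMatrix, false, mvpMatrix.elements);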