Computer Graphics
Lecture 04 – 3D Projection and Visualization
Edirlei Soares de Lima <[email protected]>

Projection and Visualization
• An important use of geometric transformations in computer graphics is moving objects between their 3D locations and their positions in a 2D view of the 3D world.
• This 3D to 2D mapping is called a viewing transformation, or projection.

Viewing Transformations
• The viewing transformation has the objective of mapping 3D coordinates (represented as [x, y, z] in the canonical coordinate system) to screen coordinates (expressed in units of pixels).
  – Important factors: camera position and orientation, type of projection, screen resolution, and field of view.
• It is composed of three transformations:
  – Camera Transformation: converts points in canonical coordinates (or world space) to camera coordinates;
  – Projection Transformation: moves points from camera space to the canonical view volume;
  – Viewport Transformation: maps the canonical view volume to screen space.

Viewing Transformations
• Orthographic Projection: can be done by simply ignoring the z-coordinate.
• Perspective Projection: a simplified model of the human eye, or of a camera lens (pinhole camera).
• Orthographic Projection vs. Perspective Projection.

Viewport Transformation
• The viewport transformation maps the canonical view volume to screen space.
  – The canonical view volume is a cube containing all 3D points whose coordinates are between −1 and +1 (i.e. (x, y, z) ∈ [−1, 1]³).
  – Considering a screen of nx by ny pixels, we need to map the square [−1, 1]² to the rectangle [0, nx] × [0, ny].

Viewport Transformation
• The viewport transformation is a windowing transformation, mapping the rectangle [xl, xh] × [yl, yh] to [x'l, x'h] × [y'l, y'h]. It can be accomplished with three operations (2D example):
  1. Move the point (xl, yl) to the origin.
  2. Scale the rectangle to be the same size as the target rectangle.
  3. Move the origin to the point (x'l, y'l).
• These operations can be represented as a product of matrices (2D example):

$$
T =
\begin{bmatrix} 1 & 0 & x'_l \\ 0 & 1 & y'_l \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \dfrac{x'_h - x'_l}{x_h - x_l} & 0 & 0 \\ 0 & \dfrac{y'_h - y'_l}{y_h - y_l} & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & -x_l \\ 0 & 1 & -y_l \\ 0 & 0 & 1 \end{bmatrix}
$$

$$
T =
\begin{bmatrix}
\dfrac{x'_h - x'_l}{x_h - x_l} & 0 & \dfrac{x'_l x_h - x'_h x_l}{x_h - x_l} \\
0 & \dfrac{y'_h - y'_l}{y_h - y_l} & \dfrac{y'_l y_h - y'_h y_l}{y_h - y_l} \\
0 & 0 & 1
\end{bmatrix}
$$

Viewport Transformation
• Back to our 3D problem: map the canonical view volume to screen space.

$$
\begin{bmatrix} x_{screen} \\ y_{screen} \\ 1 \end{bmatrix}
=
\begin{bmatrix}
\dfrac{n_x}{2} & 0 & \dfrac{n_x - 1}{2} \\
0 & \dfrac{n_y}{2} & \dfrac{n_y - 1}{2} \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x_{canonical} \\ y_{canonical} \\ 1 \end{bmatrix}
$$

• In homogeneous coordinates (the third row simply keeps the z-coordinate):

$$
M_{vp} =
\begin{bmatrix}
\dfrac{n_x}{2} & 0 & 0 & \dfrac{n_x - 1}{2} \\
0 & \dfrac{n_y}{2} & 0 & \dfrac{n_y - 1}{2} \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$

• The screen corners are (0, 0), (nx, 0), (0, ny) and (nx, ny).
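To make the connection between the windowing matrices and $M_{vp}$ concrete, here is a minimal Unity C# sketch. It is my own illustration, not part of the original lecture: the class and method names are made up, and it assumes a Unity version that provides Matrix4x4.Translate and Matrix4x4.Scale. It builds $M_{vp}$ both directly from the formula above and as the translate * scale * translate composition of the windowing transformation.

    using UnityEngine;

    // Illustrative sketch: two equivalent ways of building the viewport matrix Mvp
    // for an nx-by-ny screen.
    public static class ViewportSketch
    {
        // Mvp written out directly; the untouched third row keeps the z-coordinate.
        public static Matrix4x4 BuildDirect(int nx, int ny)
        {
            Matrix4x4 mvp = Matrix4x4.identity;
            mvp.SetRow(0, new Vector4(nx / 2f, 0f, 0f, (nx - 1) / 2f));
            mvp.SetRow(1, new Vector4(0f, ny / 2f, 0f, (ny - 1) / 2f));
            return mvp;
        }

        // The same matrix built as a windowing transformation: move (-1, -1) to the
        // origin, scale [0, 2] x [0, 2] up to [0, nx] x [0, ny], then shift by
        // (-0.5, -0.5) so that pixel centers run from 0 to nx - 1 (and 0 to ny - 1).
        public static Matrix4x4 BuildAsWindowing(int nx, int ny)
        {
            Matrix4x4 moveToOrigin = Matrix4x4.Translate(new Vector3(1f, 1f, 0f));
            Matrix4x4 scale = Matrix4x4.Scale(new Vector3(nx / 2f, ny / 2f, 1f));
            Matrix4x4 moveToTarget = Matrix4x4.Translate(new Vector3(-0.5f, -0.5f, 0f));
            return moveToTarget * scale * moveToOrigin; // applied right-to-left
        }
    }

Both methods return the same matrix. For example, with nx = ny = 512, the canonical corner (−1, −1, z) maps to (−0.5, −0.5, z) and (+1, +1, z) maps to (511.5, 511.5, z), so the pixel centers cover 0 to 511 in each direction.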
Projection Transformation
• The projection transformation moves points from camera space to the canonical view volume.
• This is a very important step, because we usually want to render geometry in some region of space other than the canonical view volume.
• The simplest type of projection is parallel projection, in which 3D points are mapped to 2D by moving them along a projection direction until they hit the image plane.
  – In orthographic projection, the image plane is perpendicular to the view direction.

Orthographic Projection
• Our first step in generalizing the view will keep the view direction and orientation fixed, looking along −z with +y up, but will allow arbitrary rectangles to be viewed.
• The orthographic view volume is an axis-aligned box [l, r] × [b, t] × [f, n]:
  – x = l ≡ left plane
  – x = r ≡ right plane
  – y = b ≡ bottom plane
  – y = t ≡ top plane
  – z = n ≡ near plane
  – z = f ≡ far plane

Orthographic Projection
• We assume a viewer who is looking along the negative z-axis with his head pointing in the +y direction. Since the view volume lies along the negative z-axis, this implies that n > f (the near plane is less negative than the far plane).
• How can we transform points that are inside the orthographic view volume to the canonical view volume?
  – This transform is another windowing transformation!

Orthographic Projection
• Orthographic projection matrix:

$$
M_{orth} =
\begin{bmatrix}
\dfrac{2}{r - l} & 0 & 0 & -\dfrac{r + l}{r - l} \\
0 & \dfrac{2}{t - b} & 0 & -\dfrac{t + b}{t - b} \\
0 & 0 & \dfrac{2}{n - f} & -\dfrac{n + f}{n - f} \\
0 & 0 & 0 & 1
\end{bmatrix}
$$

• The $M_{orth}$ matrix can be combined with the $M_{vp}$ matrix to transform points to screen coordinates:

$$
\begin{bmatrix} x_{screen} \\ y_{screen} \\ z_{canonical} \\ 1 \end{bmatrix}
= M_{vp}\, M_{orth}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
$$

Orthographic Projection
• Now we can start the implementation of the code to draw objects on screen (lines only):

    construct M_vp
    construct M_orth
    M = M_vp * M_orth
    for each line segment (a_i, b_i) do
        p = M * a_i
        q = M * b_i
        drawline(x_p, y_p, x_q, y_q)

• In order to test the orthographic projection in Unity, first we need to simulate the world space and create the mesh of a 3D object. The object is a cube whose corners are labeled p0–p3 (bottom face) and p4–p7 (top face) in the slide figure:

    using UnityEngine;

    public class World : MonoBehaviour
    {
        private Mesh mesh;
        public Vector3[] vertices;
        public int[] lines;

        void Start()
        {
            mesh = new Mesh();
            GetComponent<MeshFilter>().mesh = mesh;
            mesh.name = "MyMesh";

            // Cube corners: p0-p3 form the bottom face, p4-p7 the top face.
            Vector3 p0 = new Vector3(-1f, -1f, -1f);
            Vector3 p1 = new Vector3(1f, -1f, -1f);
            Vector3 p2 = new Vector3(1f, -1f, -3f);
            Vector3 p3 = new Vector3(-1f, -1f, -3f);
            Vector3 p4 = new Vector3(-1f, 1f, -1f);
            Vector3 p5 = new Vector3(1f, 1f, -1f);
            Vector3 p6 = new Vector3(1f, 1f, -3f);
            Vector3 p7 = new Vector3(-1f, 1f, -3f);

            // 24 vertices: each corner is repeated for the three faces that share it,
            // so every face gets its own normals.
            vertices = new Vector3[]
            {
                p0, p1, p2, p3, // Bottom
                p7, p4, p0, p3, // Left
                p4, p5, p1, p0, // Front
                p6, p7, p3, p2, // Back
                p5, p6, p2, p1, // Right
                p7, p6, p5, p4  // Top
            };

            int[] triangles = new int[]
            {
                3, 1, 0,    3, 2, 1,    // Bottom
                7, 5, 4,    7, 6, 5,    // Left
                11, 9, 8,   11, 10, 9,  // Front
                15, 13, 12, 15, 14, 13, // Back
                19, 17, 16, 19, 18, 17, // Right
                23, 21, 20, 23, 22, 21  // Top
            };

            // Pairs of indices into the vertices array; each pair is one of the 12 cube edges.
            lines = new int[]
            {
                0, 1,   0, 3,   0, 5,
                1, 2,   1, 9,
                2, 3,   2, 12,
                3, 4,
                5, 9,   5, 4,
                9, 12,
                12, 4
            };

            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.RecalculateNormals();
        }
    }

Orthographic Projection
• After simulating the world space, we need to:
  – Define the orthographic view volume;
  – Construct the $M_{vp}$ and $M_{orth}$ matrices (shown above);
  – Draw the objects in screen space.

    using UnityEngine;

    public class ViewTrasform : MonoBehaviour
    {
        public World world;

        // Orthographic view volume.
        private float left_plane = 5f;
        private float right_plane = -5f;
        private float botton_plane = -5f;
        private float top_plane = 5f;
        private float near_plane = -1f;
        private float far_plane = -11f;

        private Texture2D frameBuffer;
        // ... (frame buffer creation and display are omitted in the slides)

        void OnGUI()
        {
            // Viewport matrix Mvp for the current screen resolution.
            Matrix4x4 mvp = new Matrix4x4();
            mvp.SetRow(0, new Vector4(Screen.width / 2f, 0f, 0f, (Screen.width - 1) / 2f));
            mvp.SetRow(1, new Vector4(0f, Screen.height / 2f, 0f, (Screen.height - 1) / 2f));
            mvp.SetRow(2, new Vector4(0f, 0f, 1f, 0f));
            mvp.SetRow(3, new Vector4(0f, 0f, 0f, 1f));

            // Orthographic projection matrix Morth for the view volume above.
            Matrix4x4 morth = new Matrix4x4();
            morth.SetRow(0, new Vector4(2f / (right_plane - left_plane), 0f, 0f,
                -((right_plane + left_plane) / (right_plane - left_plane))));
            morth.SetRow(1, new Vector4(0f, 2f / (top_plane - botton_plane), 0f,
                -((top_plane + botton_plane) / (top_plane - botton_plane))));
            morth.SetRow(2, new Vector4(0f, 0f, 2f / (near_plane - far_plane),
                -((near_plane + far_plane) / (near_plane - far_plane))));
            morth.SetRow(3, new Vector4(0f, 0f, 0f, 1f));

            Matrix4x4 m = mvp * morth;

            ClearBuffer(frameBuffer, Color.black);

            // Transform the endpoints of each line segment and draw it in screen space.
            for (int i = 0; i < world.lines.Length; i += 2)
            {
                Vector4 p = multiplyPoint(m, new Vector4(world.vertices[world.lines[i]].x,
                                                         world.vertices[world.lines[i]].y,
                                                         world.vertices[world.lines[i]].z, 1));
                Vector4 q = multiplyPoint(m, new Vector4(world.vertices[world.lines[i + 1]].x,
                                                         world.vertices[world.lines[i + 1]].y,
                                                         world.vertices[world.lines[i + 1]].z, 1));
                DrawLine(frameBuffer, (int)p.x, (int)p.y, (int)q.x, (int)q.y, Color.white);
            }
        }
        // ... (multiplyPoint, ClearBuffer and DrawLine are discussed below)
    }
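The slides call two helpers, ClearBuffer and DrawLine, whose bodies are not listed. The following is a minimal sketch of what they might look like, written by me against the standard Texture2D API and matching the call sites above; it assumes the frameBuffer texture is created elsewhere and drawn to the screen after the lines are rasterized. They would be added as methods of the ViewTrasform class.

    // Fills every pixel of the frame buffer texture with a single color.
    void ClearBuffer(Texture2D buffer, Color color)
    {
        for (int x = 0; x < buffer.width; x++)
            for (int y = 0; y < buffer.height; y++)
                buffer.SetPixel(x, y, color);
        buffer.Apply();
    }

    // Rasterizes a line segment into the texture with a simple DDA-style loop,
    // skipping pixels that fall outside the buffer.
    void DrawLine(Texture2D buffer, int x0, int y0, int x1, int y1, Color color)
    {
        int steps = Mathf.Max(Mathf.Abs(x1 - x0), Mathf.Abs(y1 - y0), 1);
        for (int i = 0; i <= steps; i++)
        {
            float t = (float)i / steps;
            int x = Mathf.RoundToInt(Mathf.Lerp(x0, x1, t));
            int y = Mathf.RoundToInt(Mathf.Lerp(y0, y1, t));
            if (x >= 0 && x < buffer.width && y >= 0 && y < buffer.height)
                buffer.SetPixel(x, y, color);
        }
        buffer.Apply();
    }

Calling Apply() after every primitive is simple but slow; a real implementation would batch all SetPixel calls and apply the texture once per frame. A reference line-drawing routine is linked just below.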
• A reference implementation of a line-drawing routine for textures is available at: http://wiki.unity3d.com/index.php/TextureDrawLine

Matrix by Point Multiplication

    Vector4 multiplyPoint(Matrix4x4 matrix, Vector4 point)
    {
        Vector4 result = new Vector4();
        for (int r = 0; r < 4; r++)
        {
            // Dot product of row r of the matrix with the point.
            float s = 0;
            for (int c = 0; c < 4; c++)
                s += matrix[r, c] * point[c];
            result[r] = s;
        }
        return result;
    }

• Note: we could also use the function Matrix4x4.MultiplyPoint(Vector3 point), but it multiplies the matrix by a Vector3 and returns another Vector3. For now this is not a problem, but it will be a problem when we need the w coordinate to do perspective projection.

Exercise 1
1) How do you know that the resulting rectangle on the screen is correct? Use the rotation transformations to rotate the object and check that it looks 3D.

Camera Transformation
• The camera transformation converts points in world space to camera coordinates.
  – This transformation allows us to change the viewpoint in 3D and look in any direction.
• Camera specification:
  – Eye position (e): the location that the eye "sees from";
  – Gaze direction (g): a vector in the direction that the viewer is looking;
  – View-up vector (t): a vector in the plane that bisects the viewer's head into right and left halves and points "to the sky" (for a person standing on the ground).

Camera Transformation
• The camera vectors provide enough information to set up a coordinate system with origin e and a uvw basis:

$$
w = -\frac{g}{\lVert g \rVert} \qquad
u = \frac{t \times w}{\lVert t \times w \rVert} \qquad
v = w \times u
$$

Camera Transformation
• After setting up the coordinate system with origin e and the uvw basis, we need to convert the coordinates of the objects from xyz-coordinates into uvw-coordinates.
  – Step 1: Translate e to the world origin (0, 0, 0):

$$
T =
\begin{bmatrix}
1 & 0 & 0 & -x_e \\
0 & 1 & 0 & -y_e \\
0 & 0 & 1 & -z_e \\
0 & 0 & 0 & 1
\end{bmatrix}
$$

  – Step 2: Rotate uvw to align it with xyz:

$$
R =
\begin{bmatrix}
x_u & y_u & z_u & 0 \\
x_v & y_v & z_v & 0 \\
x_w & y_w & z_w & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$

• Combining the two steps, the camera transformation that converts points from world space into camera space is $M_{cam} = R\,T$.
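To connect the uvw construction with the two matrices above, here is a minimal Unity C# sketch. It is my own illustration, not from the slides, and the class and method names are made up. It computes the uvw basis from e, g and t and assembles M_cam = R * T; following the pipeline described earlier, this matrix could then be composed as M = M_vp * M_orth * M_cam before the line-drawing loop.

    using UnityEngine;

    // Illustrative sketch: builds the camera matrix Mcam = R * T from the eye
    // position e, the gaze direction g and the view-up vector t.
    public static class CameraTransformSketch
    {
        public static Matrix4x4 BuildCameraMatrix(Vector3 e, Vector3 g, Vector3 t)
        {
            // uvw basis: w points against the gaze, u to the right, v up.
            Vector3 w = -g.normalized;
            Vector3 u = Vector3.Cross(t, w).normalized;
            Vector3 v = Vector3.Cross(w, u);

            // Step 1: translate e to the world origin.
            Matrix4x4 T = Matrix4x4.Translate(-e);

            // Step 2: rotate uvw to align with xyz (the rows are u, v and w).
            Matrix4x4 R = Matrix4x4.identity;
            R.SetRow(0, new Vector4(u.x, u.y, u.z, 0f));
            R.SetRow(1, new Vector4(v.x, v.y, v.z, 0f));
            R.SetRow(2, new Vector4(w.x, w.y, w.z, 0f));

            return R * T;
        }
    }

For example, with e = (0, 0, 0), g = (0, 0, −1) and t = (0, 1, 0), BuildCameraMatrix returns the identity, so the orthographic example above keeps working unchanged.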