Eigenvectors and Linear Transformations


Brian Krummel
October 25, 2019

1 Matrix of a linear transformation relative to bases

Recall that if $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation, then there is an $m \times n$ matrix $A$ such that $T(x) = Ax$ for all $x$ in $\mathbb{R}^n$. In particular,
\[
A = \begin{bmatrix} T(e_1) & T(e_2) & \cdots & T(e_n) \end{bmatrix}.
\]
Here we secretly used the fact that the standard basis $\{e_1, e_2, \ldots, e_n\}$ is a basis for $\mathbb{R}^n$ (and similarly on $\mathbb{R}^m$). Thus the $j$-th column of $A$ is precisely the value $T(e_j)$ of $T$ applied to the standard basis vector $e_j$.

In some problems, it might be more convenient to choose a basis other than the standard bases for $\mathbb{R}^n$ and $\mathbb{R}^m$. For instance, we might choose to work with a basis of eigenvectors. Moreover, we can consider linear transformations $T : V \to W$ between finite-dimensional abstract vector spaces $V$ and $W$. By representing vectors in $V$ and $W$ by their coordinate vectors relative to a basis, we can turn $T$ into an equivalent linear transformation $\widetilde{T} : \mathbb{R}^n \to \mathbb{R}^m$, which is given by matrix multiplication.

Let $T : V \to W$ be a linear transformation between any pair of finite-dimensional vector spaces $V$ and $W$. Let $\mathcal{B} = \{b_1, b_2, \ldots, b_n\}$ be a basis for $V$ and $\mathcal{D} = \{d_1, d_2, \ldots, d_m\}$ be a basis for $W$. We can express each vector $x$ in $V$ as a linear combination
\[
x = r_1 b_1 + r_2 b_2 + \cdots + r_n b_n \tag{1}
\]
for unique scalar weights $r_j$, and thus we can represent the vector $x$ in $V$ by its coordinate vector $[x]_{\mathcal{B}}$ in $\mathbb{R}^n$:
\[
[x]_{\mathcal{B}} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix}. \tag{2}
\]
Similarly, we can represent each vector $y$ in $W$ by its coordinate vector $[y]_{\mathcal{D}}$ in $\mathbb{R}^m$. We want to find an $m \times n$ matrix $M$ such that the linear transformation $T$ is equivalent to multiplication of the coordinate vectors by $M$:
\[
y = T(x) \quad \text{means} \quad [y]_{\mathcal{D}} = M[x]_{\mathcal{B}}.
\]
Some equivalent characterizations of $M$ are
\[
[T(x)]_{\mathcal{D}} = M[x]_{\mathcal{B}} \quad \text{for each } x \text{ in } V
\]
(by substituting $y = T(x)$ into $[y]_{\mathcal{D}} = M[x]_{\mathcal{B}}$), and
\[
[T(r_1 b_1 + r_2 b_2 + \cdots + r_n b_n)]_{\mathcal{D}} = M \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} \tag{3}
\]
for every choice of coordinates $r_j$ (by (1) and (2)). We call $M$ the matrix of $T$ relative to the bases $\mathcal{B}$ and $\mathcal{D}$.

For each $j = 1, 2, \ldots, n$, by setting $r_j = 1$ and $r_k = 0$ for $k \neq j$, so that $x = b_j$ in (1) and $[x]_{\mathcal{B}} = [b_j]_{\mathcal{B}} = e_j$ in (2), we conclude from (3) that the $j$-th column of $M$ is $M e_j = [T(b_j)]_{\mathcal{D}}$, so that
\[
M = \begin{bmatrix} [T(b_1)]_{\mathcal{D}} & [T(b_2)]_{\mathcal{D}} & \cdots & [T(b_n)]_{\mathcal{D}} \end{bmatrix}.
\]
As a diagram:
\[
\begin{array}{ccc}
x \text{ in } V & \xrightarrow{\quad T \quad} & T(x) \text{ in } W \\[4pt]
\big\downarrow \scriptstyle{\mathcal{B}\text{-coordinates}} & & \big\downarrow \scriptstyle{\mathcal{D}\text{-coordinates}} \\[4pt]
[x]_{\mathcal{B}} \text{ in } \mathbb{R}^n & \xrightarrow{\ \text{multiply by } M\ } & [T(x)]_{\mathcal{D}} \text{ in } \mathbb{R}^m
\end{array}
\]

Example 1. Recall that $P_2$ is the space of all polynomials $a_0 + a_1 x + a_2 x^2$ of degree at most 2. Let $T : P_2 \to \mathbb{R}^2$ be the linear transformation defined by
\[
T(p) = \begin{bmatrix} p(2) \\ p(3) \end{bmatrix}.
\]
Recall that a good way to solve linear algebra problems involving polynomials is to associate a polynomial with its coordinate vector relative to the basis $\mathcal{B} = \{1, x, x^2\}$:
\[
p(x) = a_0 + a_1 x + a_2 x^2 \quad \text{with} \quad [p]_{\mathcal{B}} = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}.
\]
We will use the standard basis for $\mathbb{R}^2$. We want to find a $2 \times 3$ matrix $M$ such that $T(p) = M[p]_{\mathcal{B}}$:
\[
M = \begin{bmatrix} * & * & * \\ * & * & * \end{bmatrix} \quad \text{such that} \quad T(a_0 + a_1 x + a_2 x^2) = \begin{bmatrix} * & * & * \\ * & * & * \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}.
\]
Notice that for the basis polynomials $1, x, x^2$,
\[
T(1) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad T(x) = \begin{bmatrix} 2 \\ 3 \end{bmatrix}, \quad T(x^2) = \begin{bmatrix} 2^2 \\ 3^2 \end{bmatrix} = \begin{bmatrix} 4 \\ 9 \end{bmatrix}.
\]
Placing $T(1)$, $T(x)$, and $T(x^2)$ as the 1st, 2nd, and 3rd columns of $M$ respectively,
\[
M = \begin{bmatrix} T(1) & T(x) & T(x^2) \end{bmatrix} = \begin{bmatrix} 1 & 2 & 4 \\ 1 & 3 & 9 \end{bmatrix}.
\]
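Since the recipe is entirely mechanical — apply $T$ to each basis vector and record the resulting coordinate vectors as columns — it is easy to check numerically. Below is a minimal sketch of Example 1 in Python, assuming NumPy is available; the helper function `T` is an illustrative stand-in for $p \mapsto (p(2), p(3))$.

```python
import numpy as np

# A minimal numerical sketch of Example 1 (assumes NumPy).  A polynomial
# p(x) = a0 + a1*x + a2*x^2 is represented by its B-coordinates
# (a0, a1, a2) relative to B = {1, x, x^2}.

def T(coeffs):
    """Apply T(p) = (p(2), p(3)) to a polynomial given by its B-coordinates."""
    a0, a1, a2 = coeffs
    p = lambda t: a0 + a1 * t + a2 * t**2
    return np.array([p(2), p(3)])

# The columns of M are T applied to the basis polynomials 1, x, x^2,
# whose B-coordinate vectors are the standard basis vectors e1, e2, e3.
M = np.column_stack([T(e) for e in np.eye(3)])
print(M)  # [[1. 2. 4.]
          #  [1. 3. 9.]]

# Sanity check on p(x) = 2 - x + 3x^2: M [p]_B should equal T(p).
p_B = np.array([2.0, -1.0, 3.0])
assert np.allclose(M @ p_B, T(p_B))
```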
Example 2. Suppose $V$ is an abstract two-dimensional vector space with basis $\mathcal{B} = \{b_1, b_2\}$ and $W$ is another vector space with basis $\mathcal{D} = \{d_1, d_2, d_3\}$. Let $T : V \to W$ be a linear transformation defined by
\[
T(b_1) = 3d_1 + 5d_2 + 7d_3, \qquad T(b_2) = 6d_1 + 8d_2 + 9d_3. \tag{4}
\]
By taking $T(b_j)$ and writing the coefficients of the $d_i$ as the entries of a vector, we find that the $\mathcal{D}$-coordinates of $T(b_1)$ and $T(b_2)$ are
\[
[T(b_1)]_{\mathcal{D}} = \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix}, \qquad [T(b_2)]_{\mathcal{D}} = \begin{bmatrix} 6 \\ 8 \\ 9 \end{bmatrix}.
\]
Placing the $\mathcal{D}$-coordinates of $T(b_1)$ and $T(b_2)$ as the 1st and 2nd columns of $M$ respectively, we obtain
\[
M = \begin{bmatrix} [T(b_1)]_{\mathcal{D}} & [T(b_2)]_{\mathcal{D}} \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 5 & 8 \\ 7 & 9 \end{bmatrix}.
\]
Notice that in (4) we wrote $T(b_1)$ and $T(b_2)$ from left to right, but in $M$ we wrote their coordinates as columns going up and down (this looks like we took a transpose).

2 Matrix of a transformation and similarity

Notice that we could have done the same process with $T : V \to V$ and $\mathcal{B}$ a basis for $V$. Then $T$ has a matrix relative to the basis $\mathcal{B}$, which we denote by $M = [T]_{\mathcal{B}}$, so that
\[
[T(x)]_{\mathcal{B}} = [T]_{\mathcal{B}}[x]_{\mathcal{B}}.
\]
Now let $\mathcal{B} = \{b_1, b_2, \ldots, b_n\}$ be any basis for $\mathbb{R}^n$. (For instance, $\mathcal{B}$ might be a basis of eigenvectors for a diagonalizable matrix $A$.) Recall that to solve problems with a basis $\mathcal{B}$, we often did so by placing the basis vectors $b_j$ as the columns of an $n \times n$ matrix $P$:
\[
P = \begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix}.
\]
Since $\mathcal{B}$ is a basis, $P$ is invertible. Let $x$ be a vector in $\mathbb{R}^n$ and express $x$ as
\[
x = r_1 b_1 + r_2 b_2 + \cdots + r_n b_n
\]
for scalar weights $r_i$, so that the $\mathcal{B}$-coordinate vector of $x$ is
\[
[x]_{\mathcal{B}} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix}.
\]
Then
\[
P[x]_{\mathcal{B}} = \begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix} \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} = r_1 b_1 + r_2 b_2 + \cdots + r_n b_n = x.
\]
Multiplying by $P^{-1}$,
\[
[x]_{\mathcal{B}} = P^{-1} x
\]
for each $x$ in $\mathbb{R}^n$. Hence multiplication by $P^{-1}$ transforms the vector $x$ in $\mathbb{R}^n$ into its $\mathcal{B}$-coordinate vector $[x]_{\mathcal{B}}$.

Let $T : \mathbb{R}^n \to \mathbb{R}^n$ be defined by $T(x) = Ax$ in standard coordinates, and let's find the matrix $M = [T]_{\mathcal{B}}$ of $T$ relative to $\mathcal{B}$-coordinates. We can express the behavior of $M = [T]_{\mathcal{B}}$ as the diagram:
\[
\begin{array}{ccc}
x \text{ in } \mathbb{R}^n & \xrightarrow{\ \text{multiply by } A\ } & T(x) \text{ in } \mathbb{R}^n \\[4pt]
\big\downarrow \scriptstyle{\text{multiply by } P^{-1}} & & \big\downarrow \scriptstyle{\text{multiply by } P^{-1}} \\[4pt]
[x]_{\mathcal{B}} \text{ in } \mathbb{R}^n & \xrightarrow{\ \text{multiply by } M\ } & [T(x)]_{\mathcal{B}} \text{ in } \mathbb{R}^n
\end{array}
\]
For each vector $x$ in $\mathbb{R}^n$, multiplying $A$ times $x$ is the same as converting $x$ to $\mathcal{B}$-coordinates by multiplying by $P^{-1}$, then multiplying by the matrix $M$, then converting back to standard coordinates by multiplying by $P$. This gives us $Ax = PMP^{-1}x$ for each $x$ in $\mathbb{R}^n$. That is,
\[
A = PMP^{-1}.
\]
Equivalently,
\[
M = [T]_{\mathcal{B}} = P^{-1}AP.
\]
Thus $A$ is similar to $M$: after the change of basis from the standard basis to $\mathcal{B}$, the matrix $A$ becomes $M$.

Example 3 (Old example revisited). Recall from a few lectures back that
\[
A = \begin{bmatrix} 5 & 1 \\ 1 & 5 \end{bmatrix}
\]
has
\[
\text{eigenvector } b_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \text{ with eigenvalue } 6, \qquad \text{eigenvector } b_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix} \text{ with eigenvalue } 4.
\]
Thus
\[
A = \begin{bmatrix} 5 & 1 \\ 1 & 5 \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 6 & 0 \\ 0 & 4 \end{bmatrix} \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}^{-1}.
\]
Thus after changing the basis for $\mathbb{R}^2$ to the basis of eigenvectors $\mathcal{B} = \{b_1, b_2\}$, $A$ becomes the diagonal matrix
\[
D = \begin{bmatrix} 6 & 0 \\ 0 & 4 \end{bmatrix}.
\]
Geometrically, this means: [Figure: panels over the $x_1 x_2$-axes showing the images $Ae_1$, $Ae_2$ of the standard basis vectors, and the eigenvectors $v_1$, $v_2$ with their images $Av_1$, $Av_2$; the transformation stretches by a factor of 6 along $b_1$ and by a factor of 4 along $b_2$.]
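The two facts driving this section — $[x]_{\mathcal{B}} = P^{-1}x$ and $[T]_{\mathcal{B}} = P^{-1}AP$ — can be confirmed with a short computation. A minimal sketch of Example 3, assuming NumPy:

```python
import numpy as np

# A short check of Example 3 (assumes NumPy).  The columns of P are the
# eigenvectors b1 = (1, 1) and b2 = (-1, 1).
A = np.array([[5.0, 1.0],
              [1.0, 5.0]])
P = np.array([[1.0, -1.0],
              [1.0,  1.0]])
D = np.diag([6.0, 4.0])

# In B-coordinates the matrix of x -> Ax is the diagonal matrix D.
assert np.allclose(np.linalg.inv(P) @ A @ P, D)

# Multiplication by P^{-1} converts a vector to its B-coordinates.
x = np.array([3.0, 1.0])
x_B = np.linalg.solve(P, x)   # solves P [x]_B = x, i.e. [x]_B = P^{-1} x
assert np.allclose(P @ x_B, x)
```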
Example 4. Suppose $V = \mathbb{R}^2$ with the basis
\[
\mathcal{B} = \left\{ b_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\ b_2 = \begin{bmatrix} 0 \\ -1 \end{bmatrix} \right\}.
\]
What this means is that the basis $\mathcal{B}$ provides a coordinate grid for $\mathbb{R}^2$. [Figure: the coordinate grid on $\mathbb{R}^2$ determined by $b_1$ and $b_2$, drawn over the standard $x_1 x_2$-axes.] We can describe every vector in $\mathbb{R}^2$ uniquely in terms of its coordinates on this grid; for instance,
\[
x = \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + 2 \begin{bmatrix} 0 \\ -1 \end{bmatrix} \quad \Rightarrow \quad [x]_{\mathcal{B}} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.
\]
Notice that when we express $x$ as a column vector in the usual way, this assumes that we have the standard basis $\{e_1, e_2\}$ with the standard left-to-right, up-and-down coordinate grid. But the choice of coordinate grid, i.e. the choice of basis, is not sacred, and we can describe directed line segments in terms of any basis we want.

Suppose we are given a linear transformation $T : \mathbb{R}^2 \to \mathbb{R}^2$ defined by
\[
T(x) = Ax = \begin{bmatrix} -1 & 1 \\ -2 & 1 \end{bmatrix} x
\]
for each $x$ in $\mathbb{R}^2$. Setting
\[
P = \begin{bmatrix} b_1 & b_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix},
\]
we compute that $M = [T]_{\mathcal{B}}$ equals
\[
M = P^{-1}AP = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix}^{-1} \begin{bmatrix} -1 & 1 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} -1 & 1 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ -1 & -1 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}
\]
(one checks directly that $P^{-1} = P$ for this particular $P$). Therefore the matrix $[T]_{\mathcal{B}}$ of the transformation $T$ relative to the basis $\mathcal{B}$ is
\[
[T]_{\mathcal{B}} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}.
\]
This means that
\[
T(b_1) = b_2, \qquad T(b_2) = -b_1.
\]
By direct computation,
\[
T(b_1) = \begin{bmatrix} -1 & 1 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \end{bmatrix} = b_2, \qquad T(b_2) = \begin{bmatrix} -1 & 1 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \end{bmatrix} = -b_1.
\]
Geometrically, the basis $\mathcal{B}$ determines a coordinate grid on $\mathbb{R}^2$.
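For completeness, here is a minimal sketch of Example 4's computation, again assuming NumPy; it recovers $M = P^{-1}AP$ and confirms $T(b_1) = b_2$ and $T(b_2) = -b_1$.

```python
import numpy as np

# A short check of Example 4 (assumes NumPy).
A = np.array([[-1.0, 1.0],
              [-2.0, 1.0]])
b1 = np.array([1.0, 1.0])
b2 = np.array([0.0, -1.0])
P = np.column_stack([b1, b2])

M = np.linalg.inv(P) @ A @ P
print(M)  # [[ 0. -1.]
          #  [ 1.  0.]]

# The columns of M say T(b1) = b2 and T(b2) = -b1; check directly with A.
assert np.allclose(A @ b1, b2)
assert np.allclose(A @ b2, -b1)
```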