INNER PRODUCT SPACES AND THE GRAM-SCHMIDT PROCESS

A. HAVENS

1. The Dot Product and Orthogonality

1.1. Review of the Dot Product. We first recall the notion of the dot product, which gives us a familiar example of an inner product structure on the real vector space $\mathbb{R}^n$. This product is connected to the Euclidean geometry of $\mathbb{R}^n$, via lengths and angles measured in $\mathbb{R}^n$. Later, we will introduce inner product spaces in general, and use their structure to define general notions of length and angle on other vector spaces.

Definition 1.1. The dot product of real $n$-vectors in the Euclidean vector space $\mathbb{R}^n$ is the scalar product $\cdot : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ given by the rule
$$(\mathbf{u}, \mathbf{v}) = \left( \sum_{i=1}^n u_i \mathbf{e}_i, \ \sum_{i=1}^n v_i \mathbf{e}_i \right) \mapsto \sum_{i=1}^n u_i v_i \, .$$
Here $B_S := (\mathbf{e}_1, \ldots, \mathbf{e}_n)$ is the standard basis of $\mathbb{R}^n$.

With respect to our conventions on basis and matrix multiplication, we may also express the dot product as the matrix-vector product
$$\mathbf{u}^t \mathbf{v} = \begin{bmatrix} u_1 & \cdots & u_n \end{bmatrix} \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} .$$

It is a good exercise to verify the following proposition.

Proposition 1.1. Let $\mathbf{u}, \mathbf{v}, \mathbf{w} \in \mathbb{R}^n$ be any real $n$-vectors, and $s, t \in \mathbb{R}$ be any scalars. The Euclidean dot product $(\mathbf{u}, \mathbf{v}) \mapsto \mathbf{u} \cdot \mathbf{v}$ satisfies the following properties.
(i.) The dot product is symmetric: $\mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u}$.
(ii.) The dot product is bilinear:
  • $(s\mathbf{u}) \cdot \mathbf{v} = s(\mathbf{u} \cdot \mathbf{v}) = \mathbf{u} \cdot (s\mathbf{v})$,
  • $(\mathbf{u} + \mathbf{v}) \cdot \mathbf{w} = \mathbf{u} \cdot \mathbf{w} + \mathbf{v} \cdot \mathbf{w}$.
Thus in particular, for fixed $\mathbf{w}$, the maps $\mathbf{x} \mapsto \mathbf{w} \cdot \mathbf{x}$ and $\mathbf{x} \mapsto \mathbf{x} \cdot \mathbf{w}$ are linear maps valued in $\mathbb{R}$.
(iii.) The dot product is positive definite: $\mathbf{u} \cdot \mathbf{u} \geq 0$, with equality if and only if $\mathbf{u} = \mathbf{0}$.

In physics and engineering contexts, where vectors are often defined as mathematical objects embodying a direction together with a notion of magnitude or length, the dot product reveals its power by allowing the computation of lengths and angles. For example, in three dimensions we see that the dot product of a vector with itself gives the sum of the squares of the components: $\mathbf{u} \cdot \mathbf{u} = u_1^2 + u_2^2 + u_3^2$.
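As a quick numerical sanity check (a minimal Python sketch using NumPy, not part of the original notes, with vectors chosen arbitrarily for illustration), the coordinate formula, the matrix-vector form $\mathbf{u}^t \mathbf{v}$, and the properties of Proposition 1.1 can all be spot-checked:

```python
import numpy as np

# Arbitrary example vectors and scalar, chosen for illustration.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 0.5])
w = np.array([2.0, 2.0, -3.0])
s = 7.0

# Coordinate formula: sum of componentwise products.
dot_uv = np.sum(u * v)

# Matrix-vector form u^t v gives the same scalar.
assert np.isclose(dot_uv, u @ v)

# (i.) symmetry
assert np.isclose(u @ v, v @ u)
# (ii.) bilinearity
assert np.isclose((s * u) @ v, s * (u @ v))
assert np.isclose((u + v) @ w, u @ w + v @ w)
# (iii.) positive definiteness: u . u is the sum of squares, hence nonnegative
assert u @ u >= 0
assert np.isclose(u @ u, 1.0**2 + 2.0**2 + 3.0**2)
```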
Spring 2018 M235.4 - Linear Algebra: Inner Product Spaces (Havens)

Since the component vectors relative to the standard basis are mutually perpendicular, by the Pythagorean theorem we deduce that $\mathbf{u} \cdot \mathbf{u}$ is the square of the Euclidean length of $\mathbf{u}$. This motivates the following definition:

Definition 1.2. The magnitude, or length, of a vector $\mathbf{v} \in \mathbb{R}^n$ is the quantity
$$\|\mathbf{v}\| := \sqrt{\mathbf{v} \cdot \mathbf{v}} = \left( \sum_{i=1}^n v_i^2 \right)^{1/2} .$$

By positive definiteness of the dot product, $\|\mathbf{v}\|$ is a well-defined real number associated to the $n$-vector $\mathbf{v}$. In particular, $\| \cdot \| : \mathbb{R}^n \to \mathbb{R}$ defines a norm on $\mathbb{R}^n$, as it satisfies the following properties for all $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$ and $s \in \mathbb{R}$:
(i.) non-degeneracy: $\|\mathbf{u}\| \geq 0$ with equality if and only if $\mathbf{u} = \mathbf{0}$,
(ii.) absolute homogeneity: $\|s\mathbf{u}\| = |s| \, \|\mathbf{u}\|$,
(iii.) sub-additivity: $\|\mathbf{u} + \mathbf{v}\| \leq \|\mathbf{u}\| + \|\mathbf{v}\|$.
The inequality in (iii.) is called the triangle inequality.

Speaking of triangles, there is a further connection of the norm to triangles, which leads to the fact that we can extract angles using dot products. To any pair of nonparallel vectors $\mathbf{u}$ and $\mathbf{v}$ we obtain a triangle
$$\triangle(\mathbf{u}, \mathbf{v}) := \{ t_1 \mathbf{u} + t_2 \mathbf{v} \mid t_1, t_2 \in [0, 1], \ t_1 + t_2 \leq 1 \}$$
with sides $\mathbf{u}$, $\mathbf{v}$, and third side the line segment $(1 - t)\mathbf{u} + t\mathbf{v} = \mathbf{u} + t(\mathbf{v} - \mathbf{u})$, $t \in [0, 1]$. By the law of cosines,
$$\|\mathbf{v} - \mathbf{u}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2 - 2 \|\mathbf{u}\| \|\mathbf{v}\| \cos \theta ,$$
where $\theta$ is the interior angle of the triangle $\triangle(\mathbf{u}, \mathbf{v})$ between its edges $\mathbf{u}$ and $\mathbf{v}$. Using that
$$\|\mathbf{v} - \mathbf{u}\|^2 = (\mathbf{v} - \mathbf{u}) \cdot (\mathbf{v} - \mathbf{u}) = \mathbf{u} \cdot \mathbf{u} - 2\, \mathbf{u} \cdot \mathbf{v} + \mathbf{v} \cdot \mathbf{v} = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2 - 2\, \mathbf{u} \cdot \mathbf{v} ,$$
we see that $\mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\| \|\mathbf{v}\| \cos \theta$. On the other hand, if $\mathbf{u}$ and $\mathbf{v}$ are collinear, the dot product is easily seen to be $\pm \|\mathbf{u}\| \|\mathbf{v}\|$, with positive sign if and only if $\mathbf{u} = s\mathbf{v}$ for a positive scalar $s$. (Recall, two vectors are parallel if and only if one is a scalar multiple of the other.) Thus we have the following proposition giving a geometric, "coordinate-free" interpretation of the dot product:

Proposition 1.2. Let $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$, and let $\theta \in [0, \pi]$ be the measure of the angle between the vectors $\mathbf{u}$ and $\mathbf{v}$, as measured in a plane containing both $\mathbf{u}$ and $\mathbf{v}$.
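The law-of-cosines computation above can be checked numerically (a hedged Python sketch with arbitrarily chosen example vectors, not part of the original notes):

```python
import numpy as np

# Example vectors chosen for illustration.
u = np.array([3.0, 0.0, 4.0])
v = np.array([1.0, 2.0, 2.0])

norm = lambda x: np.sqrt(x @ x)  # ||x|| = sqrt(x . x)

# Expansion from the text: ||v - u||^2 = ||u||^2 + ||v||^2 - 2 u.v
lhs = norm(v - u) ** 2
rhs = norm(u) ** 2 + norm(v) ** 2 - 2 * (u @ v)
assert np.isclose(lhs, rhs)

# Recover the angle and confirm u.v = ||u|| ||v|| cos(theta).
theta = np.arccos((u @ v) / (norm(u) * norm(v)))
assert np.isclose(u @ v, norm(u) * norm(v) * np.cos(theta))

# Triangle inequality (sub-additivity of the norm).
assert norm(u + v) <= norm(u) + norm(v) + 1e-12
```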
Then the dot product is the scalar
$$\mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\| \|\mathbf{v}\| \cos \theta .$$
In particular, $\mathbf{u}$ and $\mathbf{v}$ are collinear if and only if $|\mathbf{u} \cdot \mathbf{v}| = \|\mathbf{u}\| \|\mathbf{v}\|$, and otherwise $\theta \in (0, \pi)$ and there is a uniquely determined plane containing $\mathbf{u}$ and $\mathbf{v}$, equal to $\mathrm{span}\{\mathbf{u}, \mathbf{v}\}$. More generally, the Cauchy-Schwarz inequality holds:
$$|\mathbf{u} \cdot \mathbf{v}| \leq \|\mathbf{u}\| \|\mathbf{v}\| .$$

The Cauchy-Schwarz inequality in this case follows readily from the fact that $|\cos \theta| \leq 1$. In the extreme case, the left hand side of the Cauchy-Schwarz inequality may be $0$. Geometrically, this requires either that one of the vectors be the zero vector, or that $\cos \theta = 0$, and thus $\theta = \pi/2$ (the only solution of $\cos \theta = 0$ with $\theta \in [0, \pi]$). Thus, in particular, two nonzero vectors in $\mathbb{R}^n$ are perpendicular if and only if their dot product is $0$.

Definition 1.3. Two vectors $\mathbf{u}$ and $\mathbf{v}$ are said to be orthogonal if and only if $\mathbf{u} \cdot \mathbf{v} = 0$. Note that $\mathbf{0}$ is orthogonal to all vectors, and also parallel to all vectors.

Observe that the dot product allows us to calculate angles directly from the components of vectors. In particular, the angle $\theta$ between vectors $\mathbf{u}$ and $\mathbf{v}$ is given by
$$\theta = \arccos\left( \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\| \|\mathbf{v}\|} \right) .$$

Definition 1.4. A vector $\mathbf{u} \in \mathbb{R}^n$ is called a unit vector if and only if $\|\mathbf{u}\| = 1$. A vector $\mathbf{u}$ may be normalized to a unit vector $\hat{\mathbf{u}}$ by scaling:
$$\hat{\mathbf{u}} = \frac{\mathbf{u}}{\|\mathbf{u}\|} .$$

The set of all unit vectors in $\mathbb{R}^n$ forms a set called the $(n-1)$-dimensional unit sphere:
$$S^{n-1} := \{ \mathbf{x} \in \mathbb{R}^n : \|\mathbf{x}\| = 1 \} = \{ \mathbf{x} \in \mathbb{R}^n : \mathbf{x} \cdot \mathbf{x} = 1 \} .$$
The reason it is called $(n-1)$-dimensional, rather than $n$-dimensional, is akin to the reason a plane is considered $2$-dimensional. A point of a plane is determined by two scalars (the weights needed to locate a position via a linear combination of vectors in a basis spanning the plane), while a point $\mathbf{x} \in S^{n-1} \hookrightarrow \mathbb{R}^n$ is determined by $n - 1$ scalars, since any $n - 1$ components of $\mathbf{x}$ determine the last component (up to sign) via the condition $\mathbf{x} \cdot \mathbf{x} = 1$.
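Normalization, orthogonality, and the Cauchy-Schwarz inequality are easy to illustrate concretely (a hedged Python sketch with example vectors chosen for illustration, not part of the original notes):

```python
import numpy as np

norm = lambda x: np.sqrt(x @ x)

# Normalizing scales u to a unit vector, i.e. a point of the circle S^1.
u = np.array([3.0, 4.0])
u_hat = u / norm(u)
assert np.isclose(norm(u_hat), 1.0)
assert np.allclose(u_hat, [0.6, 0.8])

# Orthogonality: the dot product vanishes, so the angle is pi/2.
v = np.array([-4.0, 3.0])
assert np.isclose(u @ v, 0.0)
theta = np.arccos((u @ v) / (norm(u) * norm(v)))
assert np.isclose(theta, np.pi / 2)

# Cauchy-Schwarz: |u . w| <= ||u|| ||w||.
w = np.array([1.0, -2.0])
assert abs(u @ w) <= norm(u) * norm(w) + 1e-12
```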
Exercises

(1) Compute all possible dot products between the following vectors:
$$\mathbf{a} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} -2 \\ 3 \\ -3 \end{bmatrix}, \quad \mathbf{c} = \begin{bmatrix} -1 \\ 4 \\ 7 \end{bmatrix}, \quad \mathbf{u} = \begin{bmatrix} 1/2 \\ -1/3 \\ 1/6 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} 0 \\ 9 \\ 9 \end{bmatrix}, \quad \mathbf{w} = \begin{bmatrix} -7 \\ 6 \\ 0 \end{bmatrix} .$$

(2) Verify Proposition 1.1 above directly using the coordinate formula given for the dot product.

(3) Give a concrete example showing that the dot product does not have the cancellation property, i.e. show that $\mathbf{u} \cdot \mathbf{v} = \mathbf{u} \cdot \mathbf{w}$ does not imply that $\mathbf{v} = \mathbf{w}$. For a fixed pair $\mathbf{u}, \mathbf{v} \in \mathbb{R}^3$, describe geometrically the set of all vectors $\mathbf{w} \in \mathbb{R}^3$ such that $\mathbf{u} \cdot \mathbf{w} = \mathbf{u} \cdot \mathbf{v}$.

(4) For real numbers, it is well known that multiplication satisfies an associative property: $(ab)c = abc = a(bc)$ for any $a, b, c \in \mathbb{R}$. Why is there no associative property for the dot product? What's wrong with writing $\mathbf{a} \cdot (\mathbf{b} \cdot \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} \cdot \mathbf{c} = (\mathbf{a} \cdot \mathbf{b}) \cdot \mathbf{c}$?

(5) A regular tetrahedron is a solid in $\mathbb{R}^3$ with four faces, each of which is an equilateral triangle. Find the angles between the faces of a regular tetrahedron, which are dihedral angles (a dihedral angle is an angle between the faces of a polyhedron).

(6) By a diagonal of a cube, we mean the line segment from one vertex of a cube to the farthest vertex across the cube. By a diagonal of a cube's face, we mean the diagonal of the square face from one vertex to the opposite vertex of that face.
(a) Find the lengths of the diagonals of a cube and the diagonals of its faces in terms of the side length of the cube.
(b) Find the angles between a diagonal of a cube and an adjacent edge of the cube.
(c) Each diagonal of the cube is adjacent to how many face diagonals? Find the angle between a diagonal of a cube and an adjacent face diagonal.

(7) Prove that, for any $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$,
$$2\|\mathbf{u}\|^2 + 2\|\mathbf{v}\|^2 = \|\mathbf{u} + \mathbf{v}\|^2 + \|\mathbf{u} - \mathbf{v}\|^2 ,$$
and
$$\mathbf{u} \cdot \mathbf{v} = \frac{1}{4}\left( \|\mathbf{u} + \mathbf{v}\|^2 - \|\mathbf{u} - \mathbf{v}\|^2 \right) .$$

(8) Consider linearly independent vectors $\mathbf{u}, \mathbf{v} \in \mathbb{R}^2$, and let $P$ be the parallelogram whose sides they span.
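The two identities in exercise (7), the parallelogram law and the polarization identity, can be spot-checked numerically before attempting the proof (a hedged Python sketch on random vectors, not a substitute for the proof itself):

```python
import numpy as np

rng = np.random.default_rng(0)
norm = lambda x: np.sqrt(x @ x)

# Spot-check the identities of exercise (7) on random 4-vectors.
for _ in range(100):
    u = rng.normal(size=4)
    v = rng.normal(size=4)

    # Parallelogram law: 2||u||^2 + 2||v||^2 = ||u+v||^2 + ||u-v||^2.
    assert np.isclose(2 * norm(u) ** 2 + 2 * norm(v) ** 2,
                      norm(u + v) ** 2 + norm(u - v) ** 2)

    # Polarization identity: recover u.v from norms alone.
    assert np.isclose(u @ v,
                      0.25 * (norm(u + v) ** 2 - norm(u - v) ** 2))
```

Both follow by expanding $\|\mathbf{u} \pm \mathbf{v}\|^2$ via bilinearity, which is exactly what the random trials confirm.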
Under what conditions are the diagonals of $P$ orthogonal?

(9) Demonstrate via vector algebra that the diagonals of a parallelogram always bisect each other.

1.2. Orthogonal Sets and Orthonormal Bases.

Definition 1.5.