Orthogonality


Jim Lambers
MAT 610, Summer Session 2009-10
Lecture 3 Notes

These notes correspond to Sections 2.5-2.7 in the text.

Orthogonality

A set of vectors $\{x_1, x_2, \ldots, x_k\}$ in $\mathbb{R}^n$ is said to be orthogonal if $x_i^T x_j = 0$ whenever $i \neq j$. If, in addition, $x_i^T x_i = 1$ for $i = 1, 2, \ldots, k$, then we say that the set is orthonormal. In this case, we can also write $x_i^T x_j = \delta_{ij}$, where
$$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$
is called the Kronecker delta.

Example. The vectors $x = \begin{bmatrix} 1 & -3 & 2 \end{bmatrix}^T$ and $y = \begin{bmatrix} 4 & 2 & 1 \end{bmatrix}^T$ are orthogonal, as $x^T y = 1(4) - 3(2) + 2(1) = 4 - 6 + 2 = 0$. Using the fact that $\|x\|_2 = \sqrt{x^T x}$, we can normalize these vectors to obtain the orthonormal vectors
$$\tilde{x} = \frac{x}{\|x\|_2} = \frac{1}{\sqrt{14}} \begin{bmatrix} 1 \\ -3 \\ 2 \end{bmatrix}, \qquad \tilde{y} = \frac{y}{\|y\|_2} = \frac{1}{\sqrt{21}} \begin{bmatrix} 4 \\ 2 \\ 1 \end{bmatrix},$$
which satisfy $\tilde{x}^T \tilde{x} = \tilde{y}^T \tilde{y} = 1$ and $\tilde{x}^T \tilde{y} = 0$. □

The concept of orthogonality extends to subspaces. If $S_1, S_2, \ldots, S_k$ is a set of subspaces of $\mathbb{R}^m$, we say that these subspaces are mutually orthogonal if $x^T y = 0$ whenever $x \in S_i$, $y \in S_j$, for $i \neq j$. Furthermore, we define the orthogonal complement of a subspace $S \subseteq \mathbb{R}^m$ to be the set $S^\perp$ defined by
$$S^\perp = \{\, y \in \mathbb{R}^m \mid x^T y = 0 \text{ for all } x \in S \,\}.$$

Example. Let $S$ be the subspace of $\mathbb{R}^3$ defined by $S = \operatorname{span}\{v_1, v_2\}$, where
$$v_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}.$$
Then $S$ has the orthogonal complement $S^\perp = \operatorname{span}\left\{ \begin{bmatrix} 1 & 0 & -1 \end{bmatrix}^T \right\}$.

It can be verified directly, by computing inner products, that the single basis vector of $S^\perp$, $v_3 = \begin{bmatrix} 1 & 0 & -1 \end{bmatrix}^T$, is orthogonal to either basis vector of $S$, $v_1$ or $v_2$. It follows that any multiple of this vector, which describes any vector in the span of $v_3$, is orthogonal to any linear combination of these basis vectors, which accounts for any vector in $S$, due to the linearity of the inner product:
$$(c_3 v_3)^T (c_1 v_1 + c_2 v_2) = c_3 c_1 v_3^T v_1 + c_3 c_2 v_3^T v_2 = c_3 c_1 (0) + c_3 c_2 (0) = 0.$$
Therefore, any vector in $S^\perp$ is in fact orthogonal to any vector in $S$. □

It is helpful to note that given a basis $\{v_1, v_2\}$ for any two-dimensional subspace $S \subseteq \mathbb{R}^3$, a basis $\{v_3\}$ for $S^\perp$ can be obtained by computing the cross product of the basis vectors of $S$,
$$v_3 = v_1 \times v_2 = \begin{bmatrix} (v_1)_2 (v_2)_3 - (v_1)_3 (v_2)_2 \\ (v_1)_3 (v_2)_1 - (v_1)_1 (v_2)_3 \\ (v_1)_1 (v_2)_2 - (v_1)_2 (v_2)_1 \end{bmatrix}.$$
A similar process can be used to compute the orthogonal complement of any $(n-1)$-dimensional subspace of $\mathbb{R}^n$. This orthogonal complement will always have dimension 1, because the sum of the dimension of any subspace of a finite-dimensional vector space and the dimension of its orthogonal complement equals the dimension of the vector space. That is, if $\dim(V) = n$, and $S \subseteq V$ is a subspace of $V$, then
$$\dim(S) + \dim(S^\perp) = n.$$

Let $S = \operatorname{range}(A)$ for an $m \times n$ matrix $A$. By the definition of the range of a matrix, if $y \in S$, then $y = Ax$ for some vector $x \in \mathbb{R}^n$. If $z \in \mathbb{R}^m$ is such that $z^T y = 0$, then we have
$$z^T A x = z^T (A^T)^T x = (A^T z)^T x = 0.$$
If this is true for all $y \in S$, that is, if $z \in S^\perp$, then we must have $A^T z = 0$. Therefore, $z \in \operatorname{null}(A^T)$, and we conclude that $S^\perp = \operatorname{null}(A^T)$. That is, the orthogonal complement of the range is the null space of the transpose.

Example. Let
$$A = \begin{bmatrix} 1 & 1 \\ 2 & -1 \\ 3 & 2 \end{bmatrix}.$$
If $S = \operatorname{range}(A) = \operatorname{span}\{a_1, a_2\}$, where $a_1$ and $a_2$ are the columns of $A$, then $S^\perp = \operatorname{span}\{b\}$, where $b = a_1 \times a_2 = \begin{bmatrix} 7 & 1 & -3 \end{bmatrix}^T$. We then note that
$$A^T b = \begin{bmatrix} 1 & 2 & 3 \\ 1 & -1 & 2 \end{bmatrix} \begin{bmatrix} 7 \\ 1 \\ -3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
That is, $b$ is in the null space of $A^T$. In fact, because of the relationships
$$\operatorname{rank}(A^T) + \dim(\operatorname{null}(A^T)) = 3, \qquad \operatorname{rank}(A) = \operatorname{rank}(A^T) = 2,$$
we have $\dim(\operatorname{null}(A^T)) = 1$, so $\operatorname{null}(A^T) = \operatorname{span}\{b\}$. □

A basis for a subspace $S \subseteq \mathbb{R}^m$ is an orthonormal basis if the vectors in the basis form an orthonormal set.
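The computations in these examples are easy to reproduce numerically. The following is a minimal sketch using NumPy (the choice of library is an assumption; the notes do not prescribe any software). It checks the orthogonality of $x$ and $y$, normalizes them, and verifies that the cross product of the columns of the $3 \times 2$ example matrix $A$ lies in $\operatorname{null}(A^T)$.

```python
import numpy as np

# Example vectors from the notes: x and y are orthogonal.
x = np.array([1.0, -3.0, 2.0])
y = np.array([4.0, 2.0, 1.0])
print(x @ y)                      # 0.0, so x and y are orthogonal

# Normalize to obtain the orthonormal vectors x~ and y~.
x_t = x / np.linalg.norm(x)       # x / sqrt(14)
y_t = y / np.linalg.norm(y)       # y / sqrt(21)
print(x_t @ x_t, y_t @ y_t, x_t @ y_t)   # 1.0, 1.0, 0.0

# Orthogonal complement of range(A) for the 3x2 example matrix:
# b = a1 x a2 spans range(A)^perp = null(A^T).
A = np.array([[1.0,  1.0],
              [2.0, -1.0],
              [3.0,  2.0]])
b = np.cross(A[:, 0], A[:, 1])    # [ 7.  1. -3.]
print(A.T @ b)                    # [0. 0.], so b lies in null(A^T)
```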
For example, let $Q_1$ be an $m \times r$ matrix with orthonormal columns. Then these columns form an orthonormal basis for $\operatorname{range}(Q_1)$. If $r < m$, then this basis can be extended to a basis of all of $\mathbb{R}^m$ by obtaining an orthonormal basis of $\operatorname{range}(Q_1)^\perp = \operatorname{null}(Q_1^T)$.

If we define $Q_2$ to be a matrix whose columns form an orthonormal basis of $\operatorname{range}(Q_1)^\perp$, then the columns of the $m \times m$ matrix $Q = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix}$ are also orthonormal. It follows that $Q^T Q = I$. That is, $Q^T = Q^{-1}$. Furthermore, because the inverse of a matrix is unique, $QQ^T = I$ as well. We say that $Q$ is an orthogonal matrix.

One interesting property of orthogonal matrices is that they preserve certain norms. For example, if $Q$ is an orthogonal $n \times n$ matrix, then $\|Qx\|_2^2 = x^T Q^T Q x = x^T x = \|x\|_2^2$. That is, orthogonal matrices preserve $\ell_2$-norms. Geometrically, multiplication by an $n \times n$ orthogonal matrix preserves the length of a vector, and performs a rotation in $n$-dimensional space.

Example. The matrix
$$Q = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}$$
is an orthogonal matrix, for any angle $\theta$. For any vector $x \in \mathbb{R}^3$, the vector $Qx$ can be obtained by rotating $x$ clockwise by the angle $\theta$ in the $xz$-plane, while $Q^T = Q^{-1}$ performs a counterclockwise rotation by $\theta$ in the $xz$-plane. □

In addition, orthogonal matrices preserve the matrix $\ell_2$ and Frobenius norms. That is, if $A$ is an $m \times n$ matrix, and $Q$ and $Z$ are $m \times m$ and $n \times n$ orthogonal matrices, respectively, then
$$\|QAZ\|_F = \|A\|_F, \qquad \|QAZ\|_2 = \|A\|_2.$$

The Singular Value Decomposition

The matrix $\ell_2$-norm can be used to obtain a highly useful decomposition of any $m \times n$ matrix $A$. Let $x$ be a unit $\ell_2$-norm vector (that is, $\|x\|_2 = 1$) such that $\|Ax\|_2 = \|A\|_2$. If $z = Ax$, and we define $y = z/\|z\|_2$, then we have $Ax = \sigma y$, with $\sigma = \|A\|_2$, and both $x$ and $y$ are unit vectors.

We can extend $x$ and $y$, separately, to orthonormal bases of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively, to obtain orthogonal matrices $V_1 = \begin{bmatrix} x & V_2 \end{bmatrix} \in \mathbb{R}^{n \times n}$ and $U_1 = \begin{bmatrix} y & U_2 \end{bmatrix} \in \mathbb{R}^{m \times m}$. We then have
$$U_1^T A V_1 = \begin{bmatrix} y^T \\ U_2^T \end{bmatrix} A \begin{bmatrix} x & V_2 \end{bmatrix}
= \begin{bmatrix} y^T A x & y^T A V_2 \\ U_2^T A x & U_2^T A V_2 \end{bmatrix}
= \begin{bmatrix} \sigma y^T y & y^T A V_2 \\ \sigma U_2^T y & U_2^T A V_2 \end{bmatrix}
= \begin{bmatrix} \sigma & w^T \\ 0 & B \end{bmatrix} \equiv A_1,$$
where $B = U_2^T A V_2$ and $w = V_2^T A^T y$. Now, let
$$s = \begin{bmatrix} \sigma \\ w \end{bmatrix}.$$
Then $\|s\|_2^2 = \sigma^2 + \|w\|_2^2$. Furthermore, the first component of $A_1 s$ is $\sigma^2 + \|w\|_2^2$. It follows from the invariance of the matrix $\ell_2$-norm under orthogonal transformations that
$$\|A\|_2^2 = \|A_1\|_2^2 \geq \frac{\|A_1 s\|_2^2}{\|s\|_2^2} \geq \frac{(\sigma^2 + \|w\|_2^2)^2}{\sigma^2 + \|w\|_2^2} = \sigma^2 + \|w\|_2^2,$$
and because $\sigma^2 = \|A\|_2^2$, we must have $w = 0$. We then have
$$U_1^T A V_1 = A_1 = \begin{bmatrix} \sigma & 0 \\ 0 & B \end{bmatrix}.$$
Continuing this process on $B$, and keeping in mind that $\|B\|_2 \leq \|A\|_2$, we obtain the decomposition
$$U^T A V = \Sigma$$
where
$$U = \begin{bmatrix} u_1 & \cdots & u_m \end{bmatrix} \in \mathbb{R}^{m \times m}, \qquad V = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} \in \mathbb{R}^{n \times n}$$
are both orthogonal matrices, and
$$\Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_p) \in \mathbb{R}^{m \times n}, \qquad p = \min\{m, n\}$$
is a diagonal matrix, in which $\Sigma_{ii} = \sigma_i$ for $i = 1, 2, \ldots, p$, and $\Sigma_{ij} = 0$ for $i \neq j$. Furthermore, we have
$$\|A\|_2 = \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_p \geq 0.$$
This decomposition of $A$ is called the singular value decomposition, or SVD. It is more commonly written as a factorization of $A$,
$$A = U \Sigma V^T.$$
The diagonal entries of $\Sigma$ are the singular values of $A$. The columns of $U$ are the left singular vectors, and the columns of $V$ are the right singular vectors. It follows from the SVD itself that the singular values and vectors satisfy the relations
$$A v_i = \sigma_i u_i, \qquad A^T u_i = \sigma_i v_i, \qquad i = 1, 2, \ldots, \min\{m, n\}.$$
For convenience, we denote the $i$th largest singular value of $A$ by $\sigma_i(A)$, and the largest and smallest singular values are commonly denoted by $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$, respectively.
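As a quick numerical illustration of these norm-preservation properties, the sketch below (again using NumPy, an assumption about tooling) verifies that the rotation matrix $Q$ from the example satisfies $Q^T Q = I$ and preserves both the vector 2-norm and the matrix $\ell_2$ and Frobenius norms. The angle `theta` and the matrices $A$ and $Z$ are arbitrary values chosen only for the demonstration.

```python
import numpy as np

theta = 0.7   # an arbitrary angle, chosen only for illustration

# Rotation in the xz-plane, as in the example above.
Q = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])

# Q^T Q = I, so Q^T = Q^{-1}.
print(np.allclose(Q.T @ Q, np.eye(3)))            # True

# Orthogonal matrices preserve the vector 2-norm ...
x = np.array([1.0, -3.0, 2.0])
print(np.linalg.norm(Q @ x), np.linalg.norm(x))   # equal lengths

# ... and the matrix 2-norm and Frobenius norm: ||Q A Z|| = ||A||.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
Z, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # a 4x4 orthogonal matrix
print(np.isclose(np.linalg.norm(Q @ A @ Z, 2), np.linalg.norm(A, 2)))          # True
print(np.isclose(np.linalg.norm(Q @ A @ Z, 'fro'), np.linalg.norm(A, 'fro')))  # True
```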
The SVD conveys much useful information about the structure of a matrix, particularly with regard to systems of linear equations involving the matrix. Let $r$ be the number of nonzero singular values. Then $r$ is the rank of $A$, and
$$\operatorname{range}(A) = \operatorname{span}\{u_1, \ldots, u_r\}, \qquad \operatorname{null}(A) = \operatorname{span}\{v_{r+1}, \ldots, v_n\}.$$
That is, the SVD yields orthonormal bases of the range and null space of $A$. It follows that we can write
$$A = \sum_{i=1}^{r} \sigma_i u_i v_i^T.$$
This is called the SVD expansion of $A$. If $m \geq n$, then this expansion yields the "economy-size" SVD
$$A = U_1 \Sigma_1 V^T,$$
where
$$U_1 = \begin{bmatrix} u_1 & \cdots & u_n \end{bmatrix} \in \mathbb{R}^{m \times n}, \qquad \Sigma_1 = \operatorname{diag}(\sigma_1, \ldots, \sigma_n) \in \mathbb{R}^{n \times n}.$$

Example. The matrix
$$A = \begin{bmatrix} 11 & 19 & 11 \\ 9 & 21 & 9 \\ 10 & 20 & 10 \end{bmatrix}$$
has the SVD $A = U \Sigma V^T$ where
$$U = \begin{bmatrix} 1/\sqrt{3} & -1/\sqrt{2} & 1/\sqrt{6} \\ 1/\sqrt{3} & 1/\sqrt{2} & 1/\sqrt{6} \\ 1/\sqrt{3} & 0 & -\sqrt{2/3} \end{bmatrix}, \qquad
V = \begin{bmatrix} 1/\sqrt{6} & -1/\sqrt{3} & 1/\sqrt{2} \\ \sqrt{2/3} & 1/\sqrt{3} & 0 \\ 1/\sqrt{6} & -1/\sqrt{3} & -1/\sqrt{2} \end{bmatrix},$$
and
$$\Sigma = \begin{bmatrix} 42.4264 & 0 & 0 \\ 0 & 2.4495 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$
Let $U = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix}$ and $V = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}$ be column partitions of $U$ and $V$, respectively.
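The SVD of the example matrix, together with the rank, null-space, and expansion statements above, can be checked with a few lines of NumPy (an assumed tool; note that the signs of the computed singular vectors may differ from those printed in the notes):

```python
import numpy as np

A = np.array([[11.0, 19.0, 11.0],
              [ 9.0, 21.0,  9.0],
              [10.0, 20.0, 10.0]])

U, s, Vt = np.linalg.svd(A)       # A = U @ diag(s) @ Vt
print(s)                          # approx [42.4264, 2.4495, 0.0]

# The rank is the number of nonzero singular values (here 2).
r = int(np.sum(s > 1e-10))
print(r)                          # 2

# null(A) is spanned by v_{r+1}, ..., v_n (the trailing rows of Vt).
v_null = Vt[r:].T
print(np.allclose(A @ v_null, 0))  # True

# SVD expansion: A = sum_i sigma_i u_i v_i^T over the nonzero singular values.
A_expansion = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))
print(np.allclose(A_expansion, A))  # True
```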