Mathematical Toolkit (Spring 2021)
Lecture 4: April 8, 2021
Lecturer: Avrim Blum (notes based on notes from Madhur Tulsiani)

1 Orthogonality and orthonormality

Definition 1.1 Two vectors $u, v$ in an inner product space are said to be orthogonal if $\langle u, v \rangle = 0$. A set of vectors $S \subseteq V$ is said to consist of mutually orthogonal vectors if $\langle u, v \rangle = 0$ for all $u \neq v$, $u, v \in S$. A set $S \subseteq V$ is said to be orthonormal if $\langle u, v \rangle = 0$ for all $u \neq v$, $u, v \in S$, and $\|u\| = 1$ for all $u \in S$.

Proposition 1.2 A set $S \subseteq V \setminus \{0_V\}$ consisting of mutually orthogonal vectors is linearly independent.

Proposition 1.3 (Gram-Schmidt orthogonalization) Given a finite set $\{v_1, \ldots, v_n\}$ of linearly independent vectors, there exists a set of orthonormal vectors $\{w_1, \ldots, w_n\}$ such that

$$\mathrm{Span}(\{w_1, \ldots, w_n\}) = \mathrm{Span}(\{v_1, \ldots, v_n\}).$$

Proof: By induction. The case with one vector is trivial. Given the statement for $k$ vectors and orthonormal $\{w_1, \ldots, w_k\}$ such that

$$\mathrm{Span}(\{w_1, \ldots, w_k\}) = \mathrm{Span}(\{v_1, \ldots, v_k\}),$$

define

$$u_{k+1} = v_{k+1} - \sum_{i=1}^{k} \langle w_i, v_{k+1} \rangle \cdot w_i \qquad \text{and} \qquad w_{k+1} = \frac{u_{k+1}}{\|u_{k+1}\|}.$$

Note that $u_{k+1} \neq 0$: otherwise $v_{k+1}$ would lie in $\mathrm{Span}(\{w_1, \ldots, w_k\}) = \mathrm{Span}(\{v_1, \ldots, v_k\})$, contradicting linear independence. We can now check that the set $\{w_1, \ldots, w_{k+1}\}$ satisfies the required conditions. Unit length is clear, so let us check orthogonality: for each $j \leq k$,

$$\langle u_{k+1}, w_j \rangle = \langle v_{k+1}, w_j \rangle - \left\langle \sum_{i=1}^{k} \langle w_i, v_{k+1} \rangle \cdot w_i, \; w_j \right\rangle = \langle v_{k+1}, w_j \rangle - \overline{\langle w_j, v_{k+1} \rangle} = 0,$$

using the orthonormality of $w_1, \ldots, w_k$ and the conjugate symmetry of the inner product.

Corollary 1.4 Every finite dimensional inner product space has an orthonormal basis.

In fact, Hilbert spaces also have orthonormal bases (which are countable). The existence of a maximal orthonormal set of vectors can be proved by using Zorn's lemma. However, we still need to prove that a maximal orthonormal set is a basis. This follows because we define the basis slightly differently for a Hilbert space: instead of allowing only finite linear combinations, we allow infinite ones. The correct way of saying this is that if we still think of the span as the set of all finite linear combinations, then we only need that for any $v \in V$, we can get arbitrarily close to $v$ using elements of the span (a converging sequence of finite sums can get arbitrarily close to its limit). Thus, we only need the span to be dense in the Hilbert space $V$. However, if the span of an orthonormal set is not dense, then it is possible to show that the set cannot be maximal. Such a basis is known as a Hilbert basis.

Let $V$ be a finite dimensional inner product space and let $\{w_1, \ldots, w_n\}$ be an orthonormal basis for $V$. Then for any $v \in V$, there exist $c_1, \ldots, c_n \in \mathbb{F}$ such that $v = \sum_i c_i \cdot w_i$. The coefficients $c_i$ are often called Fourier coefficients. Using the orthonormality and the properties of the inner product, we get $c_i = \langle w_i, v \rangle$. This can be used to prove the following:

Proposition 1.5 (Parseval's identity) Let $V$ be a finite dimensional inner product space and let $\{w_1, \ldots, w_n\}$ be an orthonormal basis for $V$. Then, for any $u, v \in V$,

$$\langle u, v \rangle = \sum_{i=1}^{n} \langle u, w_i \rangle \cdot \langle w_i, v \rangle.$$

Proof: Just plug in $v = \sum_i \langle w_i, v \rangle \, w_i$ on the left-hand side and distribute out the inner product.

Let us consider $\mathbb{R}^n$. If the $w_i$ are the "standard basis", then this is just writing the inner product $\langle u, v \rangle$ in the usual way as the sum of the products of the coordinate values, $\sum_j u_j v_j$, where $u_j = \langle u, w_j \rangle$ and $v_j = \langle v, w_j \rangle$. Parseval's identity says you can do this using any orthonormal basis you want. Plugging in the case $v = u$, we get $\|u\|^2 = \sum_i u_i^2$.
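As a quick illustration (not part of the original notes), here is a minimal numpy sketch of the Gram-Schmidt construction from Proposition 1.3, followed by a numerical check of Parseval's identity. It assumes the inner product $\langle u, v \rangle = \sum_i \overline{u_i} v_i$ (which is what np.vdot computes); the function name gram_schmidt and the tolerances are illustrative choices.

```python
import numpy as np

def gram_schmidt(vs):
    """Orthonormalize linearly independent vectors as in Proposition 1.3:
    subtract from each v_{k+1} its projections onto w_1, ..., w_k, then
    normalize the remainder."""
    ws = []
    for v in vs:
        # u_{k+1} = v_{k+1} - sum_i <w_i, v_{k+1}> w_i, where
        # np.vdot(w, v) = sum_i conj(w_i) v_i is the inner product
        u = v.astype(complex) - sum(np.vdot(w, v) * w for w in ws)
        norm = np.linalg.norm(u)
        assert norm > 1e-12, "input vectors must be linearly independent"
        ws.append(u / norm)  # w_{k+1} = u_{k+1} / ||u_{k+1}||
    return ws

# sanity check of Parseval's identity: <u, v> = sum_i <u, w_i> <w_i, v>
rng = np.random.default_rng(0)
ws = gram_schmidt([rng.standard_normal(4) for _ in range(4)])
u, v = rng.standard_normal(4), rng.standard_normal(4)
lhs = np.vdot(u, v)
rhs = sum(np.vdot(u, w) * np.vdot(w, v) for w in ws)
print(np.allclose(lhs, rhs))  # True
```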
2 Adjoint of a linear transformation

Definition 2.1 Let $V, W$ be inner product spaces over the same field $\mathbb{F}$ and let $\varphi : V \to W$ be a linear transformation. A transformation $\varphi^* : W \to V$ is called an adjoint of $\varphi$ if

$$\langle w, \varphi(v) \rangle = \langle \varphi^*(w), v \rangle \qquad \forall v \in V, w \in W.$$

Example 2.2 Let $V = \mathbb{R}^n$ and $W = \mathbb{R}^m$ with the usual inner product, and let $\varphi : V \to W$ be represented by the matrix $A$. Then $\varphi^*$ is represented by the matrix $A^T$. In particular,

$$\langle w, Av \rangle = w^T A v = (A^T w)^T v = \langle A^T w, v \rangle = \langle \varphi^*(w), v \rangle.$$

So, a symmetric matrix is "self-adjoint".

Example 2.3 Let $V = W = \mathbb{C}^n$ with the inner product $\langle u, v \rangle = \sum_{i=1}^{n} \overline{u_i} \cdot v_i$. Let $\varphi : V \to V$ be represented by the matrix $A$. Then $\varphi^*$ is represented by the conjugate transpose $\overline{A}^T$.

Example 2.4 Let $V = C([0,1], [-1,1])$ with the inner product $\langle f_1, f_2 \rangle = \int_0^1 f_1(x) f_2(x) \, dx$, and let $W = C([0,1/2], [-1,1])$ with the inner product $\langle g_1, g_2 \rangle = \int_0^{1/2} g_1(x) g_2(x) \, dx$. Let $\varphi : V \to W$ be defined as $\varphi(f)(x) = f(2x)$. Then, $\varphi^* : W \to V$ can be defined as

$$\varphi^*(g)(y) = (1/2) \cdot g(y/2),$$

as can be checked by the substitution $y = 2x$, which gives $\int_0^{1/2} g(x) f(2x) \, dx = \int_0^1 (1/2) \cdot g(y/2) \cdot f(y) \, dy$.

We will prove that every linear transformation has a unique adjoint. However, we first need the following characterization of linear transformations from an inner product space $V$ to the field $\mathbb{F}$ it is over.

Proposition 2.5 (Riesz Representation Theorem) Let $V$ be a finite-dimensional inner product space over $\mathbb{F}$ and let $\alpha : V \to \mathbb{F}$ be a linear transformation. Then there exists a unique $z \in V$ such that $\alpha(v) = \langle z, v \rangle ~ \forall v \in V$.

We only prove the theorem here for finite-dimensional spaces. However, the theorem holds for any Hilbert space.

Proof: Let $\{w_1, \ldots, w_n\}$ be an orthonormal basis for $V$. Given $v$, let $c_1, \ldots, c_n$ be its Fourier coefficients, so $v = \sum_i c_i w_i$ and $c_i = \langle w_i, v \rangle$. Since $\alpha$ is a linear transformation, we must have

$$\alpha(v) = \sum_i c_i \, \alpha(w_i) = \sum_i \langle w_i, v \rangle \, \alpha(w_i) = \left\langle \sum_i \overline{\alpha(w_i)} \, w_i, \; v \right\rangle = \langle z, v \rangle$$

for $z = \sum_i \overline{\alpha(w_i)} \, w_i$.

This can be used to prove the following:

Proposition 2.6 Let $V, W$ be finite dimensional inner product spaces and let $\varphi : V \to W$ be a linear transformation. Then there exists a unique $\varphi^* : W \to V$ such that

$$\langle w, \varphi(v) \rangle = \langle \varphi^*(w), v \rangle \qquad \forall v \in V, w \in W.$$

Proof: For each $w \in W$, the map $\langle w, \varphi(\cdot) \rangle : V \to \mathbb{F}$ is a linear transformation (check!) and hence there exists a unique $z_w \in V$ satisfying $\langle w, \varphi(v) \rangle = \langle z_w, v \rangle ~ \forall v \in V$. Consider the map $\beta : W \to V$ defined as $\beta(w) = z_w$. By definition of $\beta$,

$$\langle w, \varphi(v) \rangle = \langle \beta(w), v \rangle \qquad \forall v \in V, w \in W.$$

To check that $\beta$ is linear, we note that for all $v \in V$ and $w_1, w_2 \in W$,

$$\langle \beta(w_1 + w_2), v \rangle = \langle w_1 + w_2, \varphi(v) \rangle = \langle w_1, \varphi(v) \rangle + \langle w_2, \varphi(v) \rangle = \langle \beta(w_1), v \rangle + \langle \beta(w_2), v \rangle,$$

which implies $\beta(w_1 + w_2) = \beta(w_1) + \beta(w_2)$. The identity $\beta(c \cdot w) = c \cdot \beta(w)$ follows similarly.

Note that the above proof only requires the Riesz representation theorem (to define $z_w$) and hence also works for Hilbert spaces.

3 Self-adjoint transformations

Definition 3.1 A linear transformation $\varphi : V \to V$ is called self-adjoint if $\varphi = \varphi^*$. Linear transformations from a vector space to itself are called linear operators.

Example 3.2 The transformation represented by a matrix $A \in \mathbb{C}^{n \times n}$ is self-adjoint if $A = \overline{A}^T$. Such matrices are called Hermitian matrices.

Proposition 3.3 Let $V$ be an inner product space and let $\varphi : V \to V$ be a self-adjoint linear operator. Then:
- All eigenvalues of $\varphi$ are real.
- If $w_1, \ldots, w_n$ are eigenvectors corresponding to distinct eigenvalues, then they are mutually orthogonal.
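The proposition can be sanity-checked numerically. Below is a small sketch (again not from the notes; the matrix size and random seed are arbitrary choices) verifying the adjoint identity from Example 2.3 and the two claims of Proposition 3.3 for a Hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Example 2.3: the adjoint of A is its conjugate transpose, i.e.
# <w, A v> = <conj(A).T w, v> with <u, v> = sum_i conj(u_i) v_i
print(np.isclose(np.vdot(w, A @ v), np.vdot(A.conj().T @ w, v)))  # True

# Proposition 3.3, applied to the Hermitian matrix H = A + conj(A).T:
H = A + A.conj().T
eigvals = np.linalg.eigvals(H)         # general (non-Hermitian) solver
print(np.allclose(eigvals.imag, 0.0))  # True: all eigenvalues are real

# eigh returns orthonormal eigenvectors as columns; for distinct
# eigenvalues this orthogonality is exactly what the proposition asserts
_, Q = np.linalg.eigh(H)
print(np.allclose(Q.conj().T @ Q, np.eye(3)))  # True
```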