Chapter VI. Inner Product Spaces

Notes © F.P. Greenleaf 2014-15    LAI-f14-iprods.tex    version 2/9/2015

VI.1. Basic Definitions and Examples.

In Calculus you encountered Euclidean coordinate spaces $\mathbb{R}^n$ equipped with additional structure: an inner product $B : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$.

Euclidean Inner Product: $B(x, y) = \sum_{i=1}^n x_i y_i$, which is often abbreviated to $B(x, y) = (x, y)$. Associated with it we have the Euclidean norm
$$\|x\| = \Big[\, \sum_{i=1}^n |x_i|^2 \,\Big]^{1/2} = (x, x)^{1/2}$$
which represents the "length" of a vector, and a distance function $d(x, y) = \|x - y\|$ which gives the Euclidean distance from $x$ to $y$. Note that $y = x + (y - x)$.

Figure 6.1. The distance between points $x, y$ in an inner product space is interpreted as the norm (length) $\|y - x\|$ of the difference vector $\Delta x = y - x$.

This inner product on $\mathbb{R}^n$ has the following geometric interpretation:
$$(x, y) = \|x\| \cdot \|y\| \cdot \cos(\theta(x, y))$$
where $\theta$ is the angle between $x$ and $y$, measured in the plane $M = \mathbb{R}\text{-span}\{x, y\}$, the 2-dimensional subspace in $\mathbb{R}^n$ spanned by $x$ and $y$. Orthogonality of two vectors is then interpreted to mean $(x, y) = 0$; the zero vector is orthogonal to everybody, by definition. These notions of length, distance, and orthogonality do not exist in unadorned vector spaces. We now generalize the notion of inner product to arbitrary vector spaces, even infinite-dimensional ones.

1.1. Definition. If $V$ is a vector space over $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$, an inner product is a map $B : V \times V \to \mathbb{K}$ taking ordered pairs of vectors to scalars $B(v_1, v_2) \in \mathbb{K}$ with the following properties.

1. Separate Additivity in each Entry. $B$ is additive in each input if the other input is held fixed:
• $B(v_1 + v_2, w) = B(v_1, w) + B(v_2, w)$
• $B(v, w_1 + w_2) = B(v, w_1) + B(v, w_2)$
for all $v, v_i, w, w_i$ in $V$.

Figure 6.2. Geometric interpretation of the inner product $(x, y) = \|x\|\,\|y\| \cdot \cos(\theta(x, y))$ in $\mathbb{R}^n$. The projected length of a vector $y$ onto the line $L = \mathbb{R}x$ is $\|y\| \cdot \cos(\theta)$. The angle $\theta(x, y)$ is measured within the two-dimensional subspace $M = \mathbb{R}\text{-span}\{x, y\}$. Vectors are orthogonal when $\cos\theta = 0$, so $(x, y) = 0$. The zero vector is orthogonal to everybody.
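The Euclidean inner product, norm, and distance above, together with the recovery of the angle $\theta(x, y)$ from $\cos\theta = (x, y)/(\|x\|\,\|y\|)$, can be illustrated with a short Python sketch (the function names are my own):

```python
import math

def inner(x, y):
    """Euclidean inner product (x, y) = sum_i x_i * y_i."""
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    """Euclidean norm ||x|| = (x, x)^{1/2}."""
    return math.sqrt(inner(x, x))

def dist(x, y):
    """Euclidean distance d(x, y) = ||x - y||."""
    return norm([xi - yi for xi, yi in zip(x, y)])

x, y = [3.0, 0.0], [3.0, 4.0]
print(inner(x, y))           # 9.0
print(norm(x), norm(y))      # 3.0 5.0
print(dist(x, y))            # 4.0

# Recover the angle from (x, y) = ||x|| ||y|| cos(theta(x, y)):
theta = math.acos(inner(x, y) / (norm(x) * norm(y)))
print(round(math.degrees(theta), 4))   # 53.1301
```

Note that $\cos\theta = 9/15 = 0.6$ here, the familiar 3-4-5 right triangle.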
2. Positive Definite. For all $v \in V$,
$$B(v, v) \geq 0 \quad\text{and}\quad B(v, v) = 0 \text{ if and only if } v = 0.$$

3. Hermitian Symmetric. For all $v, w \in V$,
$$B(v, w) = \overline{B(w, v)}$$
when inputs are interchanged. Conjugation does nothing for $x \in \mathbb{R}$ ($\bar{x} = x$), so an inner product on a real vector space is simply symmetric, with $B(w, v) = B(v, w)$.

4. Hermitian. For $\lambda \in \mathbb{K}$ and $v, w \in V$,
$$B(\lambda v, w) = \lambda\, B(v, w) \quad\text{and}\quad B(v, \lambda w) = \bar{\lambda}\, B(v, w).$$

An inner product on a real vector space is just a bilinear map, one that is $\mathbb{R}$-linear in each input when the other is held fixed, because conjugation does nothing in $\mathbb{R}$.

The Euclidean inner product in $\mathbb{R}^n$ is a special case of the standard Euclidean inner product in complex coordinate space $V = \mathbb{C}^n$,
$$(z, w) = \sum_{j=1}^n z_j \overline{w_j}\,,$$
which is easily seen to have properties (1.)-(4.). The corresponding Euclidean norm and distance functions on $\mathbb{C}^n$ are then
$$\|z\| = (z, z)^{1/2} = \Big[\, \sum_{j=1}^n |z_j|^2 \,\Big]^{1/2} \quad\text{and}\quad d(z, w) = \|z - w\| = \Big[\, \sum_{j=1}^n |z_j - w_j|^2 \,\Big]^{1/2}.$$
Again, properties (1.)-(4.) are easily verified. For an arbitrary inner product $B$ we define the corresponding norm and distance functions
$$\|v\|_B = B(v, v)^{1/2} \qquad d_B(v_1, v_2) = \|v_1 - v_2\|_B$$
which are no longer given by such explicit coordinate formulas.

1.2. Example. Here are two important examples of inner product spaces.

1. On $V = \mathbb{C}^n$ (or $\mathbb{R}^n$) we can define "nonstandard" inner products by assigning different positive weights $\alpha_j > 0$ to each coordinate direction, taking
$$B_\alpha(z, w) = \sum_{j=1}^n \alpha_j \cdot z_j \overline{w_j} \quad\text{with norm}\quad \|z\|_\alpha = \Big[\, \sum_{j=1}^n \alpha_j \cdot |z_j|^2 \,\Big]^{1/2}.$$
This is easily seen to be an inner product. Thus the standard Euclidean inner product on $\mathbb{R}^n$ or $\mathbb{C}^n$, for which $\alpha_1 = \ldots = \alpha_n = 1$, is part of a much larger family.

2. The space $\mathcal{C}[a, b]$ of continuous complex-valued functions $f : [a, b] \to \mathbb{C}$ becomes an inner product space if we define
$$(f, h)_2 = \int_a^b f(t)\,\overline{h(t)}\; dt \qquad\text{(Riemann integral)}.$$
The corresponding "$L^2$-norm" of a function is then
$$\|f\|_2 = \Big[\, \int_a^b |f(t)|^2\, dt \,\Big]^{1/2};$$
the inner product axioms follow from simple properties of the Riemann integral.
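Both examples can be sketched in Python; the helper names are mine, and the exact Riemann integral is replaced by a midpoint-rule approximation:

```python
import math

def b_alpha(alpha, z, w):
    """Weighted inner product B_alpha(z, w) = sum_j alpha_j * z_j * conj(w_j)."""
    return sum(a * zj * wj.conjugate() for a, zj, wj in zip(alpha, z, w))

def l2_inner(f, h, a, b, n=100_000):
    """Midpoint-rule approximation of (f, h)_2 = integral_a^b f(t) conj(h(t)) dt."""
    dt = (b - a) / n
    return sum(f(a + (k + 0.5) * dt) * complex(h(a + (k + 0.5) * dt)).conjugate()
               for k in range(n)) * dt

# Weighted inner product on C^3: ||z||_alpha^2 = 1*|1+i|^2 + 2*|0|^2 + 3*|2|^2 = 14
z = [1 + 1j, 0j, 2 + 0j]
print(b_alpha([1.0, 2.0, 3.0], z, z))                             # (14+0j)

# L^2 inner product on C[0, pi]: sin and cos are orthogonal, ||sin||_2^2 = pi/2
print(abs(l2_inner(math.sin, math.cos, 0.0, math.pi)) < 1e-9)     # True
print(round(abs(l2_inner(math.sin, math.sin, 0.0, math.pi)), 6))  # 1.570796
```

With all weights $\alpha_j = 1$ the first function reduces to the standard Euclidean inner product on $\mathbb{C}^n$.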
This infinite-dimensional inner product space arises in many applications, particularly Fourier analysis. ∎

1.3. Exercise. Verify that both inner products in the last example actually satisfy the inner product axioms. In particular, explain why the $L^2$-inner product $(f, h)_2$ has $\|f\|_2 > 0$ when $f$ is not the zero function ($f(t) \equiv 0$ for all $t$).

We now take up the basic properties common to all inner product spaces.

1.4. Theorem. On any inner product space $V$ the associated norm has the following properties.

(a) $\|x\| \geq 0$;
(b) $\|\lambda x\| = |\lambda| \cdot \|x\|$ (and in particular, $\|-x\| = \|x\|$);
(c) (Triangle Inequality) For $x, y \in V$, $\|x \pm y\| \leq \|x\| + \|y\|$.

Proof: The first two are obvious. The third is important because it implies that the distance function $d_B(x, y) = \|x - y\|$ satisfies the "geometric triangle inequality"
$$d_B(x, y) \leq d_B(x, z) + d_B(z, y) \quad\text{for all } x, y, z \in V,$$
as indicated in Figure 6.3. This follows directly from (c) because
$$d_B(x, y) = \|x - y\| = \|(x - z) + (z - y)\| \leq \|x - z\| + \|z - y\| = d_B(x, z) + d_B(z, y).$$
The version of (c) involving a $(-)$ sign follows from the one featuring a $(+)$ because $v - w = v + (-w)$ and $\|-w\| = \|w\|$.

The proof of (c) is based on an equally important inequality:

1.5. Lemma (Schwarz Inequality). If $B$ is an inner product on a real or complex vector space then
$$|B(x, y)| \leq \|x\|_B \cdot \|y\|_B$$
for all $x, y \in V$.

Figure 6.3. The meaning of the Triangle Inequality: the direct distance from $x$ to $y$ is always $\leq$ the sum of the distances $d(x, z) + d(z, y)$ to any third vector $z \in V$.
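Before the proof, a quick numerical sanity check of the lemma, and of the triangle inequality it implies, for the Euclidean inner product on $\mathbb{R}^3$ (a Python sketch; the names are my own):

```python
import math
import random

def inner(x, y):
    """Euclidean inner product on R^n."""
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    """Euclidean norm ||x|| = (x, x)^{1/2}."""
    return math.sqrt(inner(x, x))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1.0, 1.0) for _ in range(3)]
    y = [random.uniform(-1.0, 1.0) for _ in range(3)]
    # Schwarz inequality: |(x, y)| <= ||x|| ||y||
    assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12
    # Triangle inequality: ||x + y|| <= ||x|| + ||y||
    assert norm([a + b for a, b in zip(x, y)]) <= norm(x) + norm(y) + 1e-12
print("both inequalities hold on 1000 random pairs")
```

The small tolerance `1e-12` only guards against floating-point rounding; the inequalities themselves are exact.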
Proof: For all real $t$ we have $\varphi(t) = \|x + ty\|_B^2 \geq 0$. By the axioms governing $B$ we can rewrite $\varphi(t)$ as
$$\begin{aligned}
\varphi(t) &= B(x + ty,\, x + ty)\\
&= B(x, x) + B(ty, x) + B(x, ty) + B(ty, ty)\\
&= \|x\|_B^2 + t\,\overline{B(x, y)} + t\,B(x, y) + t^2 \|y\|_B^2\\
&= \|x\|_B^2 + 2t\,\mathrm{Re}(B(x, y)) + t^2 \|y\|_B^2
\end{aligned}$$
because $B(ty, x) = t\,\overline{B(x, y)}$ and $B(x, ty) = t\,B(x, y)$ (since $t \in \mathbb{R}$), and $z + \bar{z} = 2\,\mathrm{Re}(z)$ for every $z \in \mathbb{C}$. Now $\varphi : \mathbb{R} \to \mathbb{R}$ is a quadratic function whose minimum value occurs at the $t_0$ where
$$\frac{d\varphi}{dt}(t_0) = 2t_0 \|y\|_B^2 + 2\,\mathrm{Re}(B(x, y)) = 0 \qquad\text{or}\qquad t_0 = \frac{-\mathrm{Re}(B(x, y))}{\|y\|_B^2}\,.$$
Inserting this into $\varphi$ we find the actual minimum value of $\varphi$:
$$0 \leq \min\{\varphi(t) : t \in \mathbb{R}\} = \frac{\|x\|_B^2 \cdot \|y\|_B^2 - 2\,|\mathrm{Re}(B(x, y))|^2 + |\mathrm{Re}(B(x, y))|^2}{\|y\|_B^2}\,.$$
Thus
$$0 \leq \|x\|_B^2 \cdot \|y\|_B^2 - |\mathrm{Re}(B(x, y))|^2,$$
which in turn implies
$$|\mathrm{Re}\,B(x, y)| \leq \|x\|_B \cdot \|y\|_B \quad\text{for all } x, y \in V.$$
If we replace $x \mapsto e^{i\theta}x$ this does not change $\|x\|_B$, since $|e^{i\theta}| = |\cos(\theta) + i\sin(\theta)| = 1$ for real $\theta$; in the inner product on the left we have $B(e^{i\theta}x, y) = e^{i\theta}B(x, y)$. We may now take $\theta \in \mathbb{R}$ so that $e^{i\theta} \cdot B(x, y) = |B(x, y)|$. For this particular choice of $\theta$ we get
$$|B(x, y)| = \mathrm{Re}(e^{i\theta}B(x, y)) = \mathrm{Re}(B(e^{i\theta}x, y)) \leq \|e^{i\theta}x\|_B \cdot \|y\|_B = \|x\|_B \cdot \|y\|_B\,.$$
That proves the Schwarz inequality. ∎

Proof (Triangle Inequality): The algebra is easier if we prove the (equivalent) inequality obtained when we square both sides:
$$\|x + y\|^2 \leq (\|x\| + \|y\|)^2 = \|x\|^2 + 2\,\|x\| \cdot \|y\| + \|y\|^2.$$
In proving the Schwarz inequality we saw that
$$\|x + y\|^2 = (x + y,\, x + y) = \|x\|^2 + 2\,\mathrm{Re}(x, y) + \|y\|^2,$$
so our proof is finished if we can show $2\,\mathrm{Re}(x, y) \leq 2\,\|x\| \cdot \|y\|$. But
$$\mathrm{Re}(z) \leq |\mathrm{Re}(z)| \leq |z| \quad\text{for all } z \in \mathbb{C},$$
and then the Schwarz inequality yields
$$\mathrm{Re}(B(x, y)) \leq |B(x, y)| \leq \|x\|_B \cdot \|y\|_B$$
as desired. ∎

1.6. Example. On $V = M(n, \mathbb{K})$ we define the Hilbert-Schmidt inner product and norm for matrices:
$$(44)\qquad (A, B)_{\mathrm{HS}} = \mathrm{Tr}(B^*A) \quad\text{and}\quad \|A\|_{\mathrm{HS}}^2 = \sum_{i,j=1}^n |a_{ij}|^2 = \mathrm{Tr}(A^*A).$$
It is easily verified that this is an inner product.
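For instance, the Hilbert-Schmidt inner product can be computed concretely in Python (plain nested lists of complex numbers stand in for matrices; the helper names are my own):

```python
def conj_transpose(a):
    """B* : conjugate transpose of a square matrix given as a list of rows."""
    n = len(a)
    return [[a[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(a, b):
    """Product of two n x n matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(a):
    """Tr(A) = sum of diagonal entries."""
    return sum(a[i][i] for i in range(len(a)))

def hs_inner(a, b):
    """(A, B)_HS = Tr(B* A)."""
    return trace(matmul(conj_transpose(b), a))

A = [[1 + 0j, 2 + 0j], [3 + 0j, 4 + 0j]]
# ||A||_HS^2 = Tr(A* A) = sum |a_ij|^2 = 1 + 4 + 9 + 16 = 30
print(hs_inner(A, A))                               # (30+0j)
print(sum(abs(x) ** 2 for row in A for x in row))   # 30.0
```

The two printed values agree, as formula (44) asserts: $(A, A)_{\mathrm{HS}} = \sum_{i,j} |a_{ij}|^2$.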
First note that the trace map $\mathrm{Tr} : M(n, \mathbb{K}) \to \mathbb{K}$,
$$\mathrm{Tr}(A) = \sum_{i=1}^n a_{ii}\,,$$
is a complex linear map with $\mathrm{Tr}(\overline{A}) = \overline{\mathrm{Tr}(A)}$; then observe that
$$\|A\|_{\mathrm{HS}}^2 = (A, A)_{\mathrm{HS}} = \sum_{i,j=1}^n |a_{ij}|^2 \quad\text{is } > 0 \text{ unless } A \text{ is the zero matrix.}$$
Alternatively, consider what happens when we identify $M(n, \mathbb{C}) \cong \mathbb{C}^{n^2}$ as complex vector spaces. The Hilbert-Schmidt norm becomes the usual Euclidean norm on $\mathbb{C}^{n^2}$, and likewise for the inner products; obviously $(A, B)_{\mathrm{HS}}$ is then an inner product on matrix space.

The norm $\|A\|_{\mathrm{HS}}$ and the sup-norm $\|A\|_\infty$ discussed in Chapter V are different ways to measure the "size" of a matrix; the HS-norm turns out to be particularly well adapted to applications in statistics, starting with "least-squares regression" and moving on into "analysis of variance." Each of these norms determines a notion of matrix convergence $A_n \to A$ as $n \to \infty$ in $M(N, \mathbb{C})$.

$\|\cdot\|_2$-Convergence: $\displaystyle \|A_n - A\|_{\mathrm{HS}} = \Big[\, \sum_{i,j} |a^{(n)}_{ij} - a_{ij}|^2 \,\Big]^{1/2} \to 0$ as $n \to \infty$

$\|\cdot\|_\infty$-Convergence: $\displaystyle \|A_n - A\|_\infty = \max_{i,j}\{\, |a^{(n)}_{ij} - a_{ij}| \,\} \to 0$ as $n \to \infty$

However, despite their differences both norms determine the same notion of matrix convergence.
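This last claim rests on the elementary sandwich bounds $\|A\|_\infty \leq \|A\|_{\mathrm{HS}} \leq n\,\|A\|_\infty$ for $A \in M(n, \mathbb{C})$, which force the two modes of convergence to agree. A Python sketch (the function names are my own):

```python
import math

def hs_norm(a):
    """||A||_HS = [ sum_{i,j} |a_ij|^2 ]^{1/2}."""
    return math.sqrt(sum(abs(x) ** 2 for row in a for x in row))

def sup_norm(a):
    """||A||_inf = max_{i,j} |a_ij|."""
    return max(abs(x) for row in a for x in row)

A = [[1.0, -2.0], [0.5, 3.0]]
n = len(A)
print(sup_norm(A))             # 3.0
print(round(hs_norm(A), 4))    # 3.7749  (the square root of 1 + 4 + 0.25 + 9)

# Sandwich bounds sup <= HS <= n * sup: if either norm of A_k - A tends
# to 0, so does the other, so both norms define the same convergent sequences.
assert sup_norm(A) <= hs_norm(A) <= n * sup_norm(A)
```

The left bound holds because the largest $|a_{ij}|^2$ is at most the full sum; the right bound holds because the sum has $n^2$ terms, each at most $\|A\|_\infty^2$.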
