
(III.D) Linear Functionals II: The Dual Space

First I remind you that a linear functional on a vector space $V$ over $\mathbb{R}$ is any linear transformation $f : V \to \mathbb{R}$. In §III.C we looked at a finite-dimensional subspace [= derivations] of the infinite-dimensional space of linear functionals on $C^\infty(M)$. Now let's take a finite-dimensional vector space $V$ and consider
$$V^\vee := \{\text{all linear functionals on } V\},$$
read "$V$-dual"; it is itself a vector space. Two functionals $f_1, f_2 \in V^\vee$ are equal if they give the same value on every $\vec{v} \in V$: $f_1(\vec{v}) = f_2(\vec{v})$.

Let $B = \{\vec{v}_1, \ldots, \vec{v}_n\}$ be a basis for $V$, and let $f_i : V \to \mathbb{R}$ be the special linear functionals defined by
$$f_i(\vec{v}_j) = \delta_{ij} = \begin{cases} 0, & i \neq j \\ 1, & i = j. \end{cases}$$
By linearity, on any $\vec{v} = \sum a_i \vec{v}_i$, this makes $f_i(\vec{v}) = a_i$.

PROPOSITION. The $\{f_1, \ldots, f_n\}$ are a basis for $V^\vee$ (the "dual basis"), and $\dim(V^\vee) = \dim(V)\ [= n]$.

PROOF. That the $\{f_i\}$ span $V^\vee$: let $f : V \to \mathbb{R}$ be any linear functional, and let $b_i = f(\vec{v}_i)$. Then on any $\vec{v} = \sum a_i \vec{v}_i$,
$$f(\vec{v}) = \sum a_i f(\vec{v}_i) = \sum a_i b_i = \sum b_i f_i(\vec{v}).$$
Therefore $f = \sum b_i f_i$ as a functional.

That the $\{f_i\}$ are linearly independent: suppose $\sum a_j f_j = 0$ as a functional; that is, $\sum a_j f_j(\vec{v}) = 0$ for all $\vec{v} \in V$. In particular, applying this to $\vec{v} = \vec{v}_i$ for each $i$,
$$0 = \sum_j a_j f_j(\vec{v}_i) = \sum_j a_j \delta_{ji} = a_i.$$
So all the $a_i$ are zero. $\square$

Change of basis. Take another basis $B' = \{\vec{v}\,'_1, \ldots, \vec{v}\,'_n\}$ for $V$, and the corresponding dual basis $B'^\vee = \{f'_1, \ldots, f'_n\}$ for $V^\vee$: that is, $f'_i(\vec{v}\,'_j) = \delta_{ij}$. Regardless of the basis $B$,
$$f(\vec{v}) = \Big(\sum_i a_i f_i\Big)\Big(\sum_j b_j \vec{v}_j\Big) = \sum_{i,j} a_i b_j f_i(\vec{v}_j) = \sum_{i,j} a_i b_j \delta_{ij} = \sum_i a_i b_i.$$
In terms of vectors, with
$$[f]_{B^\vee} = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}, \qquad [\vec{v}]_B = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix},$$
this just says
$$f(\vec{v}) = {}^t\big([f]_{B^\vee}\big) \cdot [\vec{v}]_B = \begin{pmatrix} a_1 & \cdots & a_n \end{pmatrix} \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}.$$
So for the two different bases,
$$f(\vec{v}) = {}^t\big([f]_{B'^\vee}\big) \cdot [\vec{v}]_{B'} = {}^t\big([f]_{B'^\vee}\big) \cdot \big(P_{B \to B'} [\vec{v}]_B\big) = {}^t\big({}^t P_{B \to B'}\, [f]_{B'^\vee}\big) \cdot [\vec{v}]_B$$
for all $\vec{v}$, which is to say
$$[f]_{B^\vee} = {}^t P_{B \to B'}\, [f]_{B'^\vee}.$$
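A quick numerical sanity check of the proposition and of the coordinate formulas (a NumPy sketch; the particular basis and functional below are arbitrary examples, not from the text). If the columns of an invertible matrix are a basis of $\mathbb{R}^3$, the rows of its inverse are the dual basis, and the dual-basis coordinates of a functional $f$ are its values $b_i = f(\vec{v}_i)$:

```python
import numpy as np

# A hypothetical basis of R^3: the columns of V (any invertible matrix works).
V = np.array([[1., 0., 2.],
              [1., 1., 0.],
              [0., 1., 1.]])

# The dual basis consists of the ROWS of V^{-1}:  f_i(v_j) = delta_ij.
F = np.linalg.inv(V)
assert np.allclose(F @ V, np.eye(3))

# Spanning: for a functional f (a covector in standard coordinates),
# its dual-basis coordinates are b_i = f(v_i), and f = sum_i b_i f_i.
f = np.array([3., -1., 4.])
b = f @ V                        # b_i = f(v_i)
assert np.allclose(b @ F, f)     # f recovered as sum_i b_i f_i

# Covector change of coordinates: stretch the basis to B' = 2B, so that
# vector coordinates halve (P_{B->B'} = I/2); covector coordinates
# transform by the transpose inverse, i.e. they double.
P = 0.5 * np.eye(3)
b_prime = np.linalg.inv(P).T @ b
assert np.allclose(b_prime, 2 * b)
```

Note the design of the check: `F @ V == I` is exactly the statement $f_i(\vec{v}_j) = \delta_{ij}$ written as a matrix product of rows against columns.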
But by definition of $P_{B^\vee \to B'^\vee}$,
$$[f]_{B'^\vee} = P_{B^\vee \to B'^\vee}\, [f]_{B^\vee},$$
and comparing the two equalities gives the formula
$$P_{B^\vee \to B'^\vee} = {}^t P_{B \to B'}^{\,-1}$$
describing change of coordinates for covectors.$^1$ If you stretch a basis, say $B = \{\vec{v}_1, \ldots, \vec{v}_n\} \longrightarrow B' = \{2\vec{v}_1, \ldots, 2\vec{v}_n\}$, then vectors "appear" to shrink by half; that is, their coordinates with respect to the basis shrink, since the vectors themselves must stay the same. This formula, in the same sense, says that covectors would "appear" to double in size.

$^1$ That is, functionals. In this context, the prefix "co" is essentially a synonym for "dual".

Brief aside on calculus. In $\mathbb{R}^1$, let $x$ be the coordinate with respect to $B = \{\hat{e}\}$ and $u$ the coordinate with respect to $B' = \{2\hat{e}\}$. Then $x \cdot (\hat{e}) = u \cdot (2\hat{e})$, i.e. $u = \frac{1}{2}x$, expresses the "shrinking" of coordinates (of a fixed vector) as the basis stretches. In the last lecture we suggested that coefficients of first-order linear differential operators transform (infinitesimally) like coordinates of vectors. Here this looks as follows: $B = \big\{\frac{d}{dx}\big\}$, $B' = \big\{\frac{d}{du}\big\}$, and $\frac{d}{dx} = \frac{1}{2}\frac{d}{du}$ (the coefficient of $\frac{d}{du}$ is half that of $\frac{d}{dx}$); intuitively, $\frac{df}{du}$ should be bigger than $\frac{df}{dx}$, because $\frac{df}{du}$ is the rise in $f$ per "unit" run of $2\hat{e}$, while $\frac{df}{dx}$ is the rise (in $f$) per unit run of $\hat{e}$.

The covectors in this situation (i.e., the functionals on the space of linear differential operators) are the differential forms $dx$ and $du$, defined by
$$dx\Big(\frac{d}{dx}\Big) = 1, \qquad du\Big(\frac{d}{du}\Big) = 1;$$
thus $1 \cdot (dx) = 2 \cdot (du)$, which is exactly what you know from calculus: $u = \frac{1}{2}x \implies 2\,du = dx$. (So the change of coefficients goes $1 \mapsto 2$ instead of $1 \mapsto \frac{1}{2}$.)

More generally, on $\mathbb{R}^n$ you may define a basis for the differential forms by
$$dx_i\Big(\frac{\partial}{\partial x_j}\Big) = \delta_{ij};$$
these give (at each point) the dual basis to the space of (first-order) linear differential operators.$^2$ They are what appears under the integral sign in calculus simply because
they transform in exactly the right way under change of variable ($u$-substitution).

$^2$ When the latter are thought of as tangent vectors, the differential 1-forms are "cotangent vectors".

Dual of a linear transformation. Every linear transformation $T : V \to W$ induces a dual linear transformation $T^\vee : W^\vee \to V^\vee$ by "pullback" of functionals: given $g \in W^\vee$, define$^3$
$$T^\vee(g) := g \circ T.$$
The picture: $T^\vee g$ is the composition $V \xrightarrow{\ T\ } W \xrightarrow{\ g\ } \mathbb{R}$.

$^3$ Also written $T^*(g)$, as dual spaces/bases are often written $V^*/B^*$. I'm avoiding this so as to minimize confusion when the Hermitian transpose is introduced later on.

As usual take $B$, $C$ bases for $V$, $W$ (resp.), and $B^\vee$, $C^\vee$ their dual bases for $V^\vee$, $W^\vee$. We would like to relate the matrices
$${}_C[T]_B \qquad \text{and} \qquad {}_{B^\vee}[T^\vee]_{C^\vee};$$
it turns out they are just transposes of one another.

Before doing this in full generality, let's step back and look at a simple case. Consider $S : \mathbb{R}^m \to \mathbb{R}^m$ with matrix $[S]_{\hat{e}} = A$ with respect to the standard basis. We can think of $\mathbb{R}^m$ as its own dual space, as follows. Any $\vec{\ell} \in \mathbb{R}^m$ gives a linear functional$^4$ on $\mathbb{R}^m$ by matrix multiplication (of a $1 \times m$ matrix by an $m \times 1$ matrix):
$$\vec{\ell}(\vec{v}) := {}^t\vec{\ell} \cdot \vec{v}.$$

$^4$ We are really mapping $\mathbb{R}^m \to (\mathbb{R}^m)^\vee$ by taking $\vec{\ell}$ to this functional. (This map also takes $\hat{e}$ to the dual basis $\hat{e}^\vee$, so $\hat{e}$ "gives" its own dual basis in the same sense as $\vec{\ell}$ "gives" functionals.) Identifying a vector space with its dual is equivalent to giving an inner product, and what's going on here is again an ad hoc use of the dot product.

The dual transformation $S^\vee : \mathbb{R}^m \to \mathbb{R}^m$, with matrix $B$, is defined by $(S^\vee \vec{\ell})(\vec{v}) = \vec{\ell}(S(\vec{v}))$. In terms of matrices this translates to
$${}^t\big(B \cdot \vec{\ell}\,\big) \cdot \vec{v} = {}^t\vec{\ell} \cdot (A \cdot \vec{v}), \qquad \text{i.e.} \qquad {}^t\vec{\ell} \cdot {}^t B \cdot \vec{v} = {}^t\vec{\ell} \cdot A \cdot \vec{v}.$$
If this is true for all $\vec{\ell}$, $\vec{v}$, then ${}^t B = A$, or $B = {}^t A$.
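The identity ${}^t(B\vec{\ell}\,)\cdot\vec{v} = {}^t\vec{\ell}\cdot(A\vec{v})$ is easy to verify numerically (a sketch; the random matrix and vectors are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # matrix of S in the standard basis
B = A.T                           # claimed matrix of the dual map S-dual

l = rng.standard_normal(3)        # a covector, identified with v -> l.v
v = rng.standard_normal(3)

# (S-dual l)(v) = l(S v):
assert np.isclose((B @ l) @ v, l @ (A @ v))
```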
So in this context, the fact that the matrix of "$S$-dual" is the transpose of the matrix of $S$ is very simple.$^5$

$^5$ Remark for physicists: this argument may remind anyone exposed to quantum mechanics of "adjoint" operators, perhaps more so if we write $\vec{\ell}(\vec{v}) = {}^t\vec{\ell} \cdot \vec{v}$ as $\big\langle \vec{\ell}, \vec{v} \big\rangle$: then the definition of $S^\vee$ reads $\big\langle S^\vee \vec{\ell}, \vec{v} \big\rangle = \big\langle \vec{\ell}, S\vec{v} \big\rangle$.

Now to prove the corresponding fact for $T$ and $T^\vee$ above, start with the definition of $T^\vee g$, for any $g \in W^\vee$:

(1) $\quad (T^\vee g)\vec{v} = (g \circ T)\vec{v} = g(T\vec{v})$ for all $\vec{v} \in V$.

Now recall that for any$^6$ $\vec{v} \in V$, $f \in V^\vee$, and basis $B$ for $V$, $f(\vec{v}) = {}^t[f]_{B^\vee} \cdot [\vec{v}]_B$. Applying this fact to the end terms of (1), we have

(2) $\quad {}^t[T^\vee g]_{B^\vee} \cdot [\vec{v}]_B = {}^t[g]_{C^\vee} \cdot [T\vec{v}]_C.$

$^6$ The same of course goes for any $\vec{w} \in W$, $g \in W^\vee$, and basis $C$ for $W$.

Now using the definition of the matrices of $T$ (resp. $T^\vee$) with respect to $B$ and $C$ (resp. $B^\vee$ and $C^\vee$), for example ${}_C[T]_B \cdot [\vec{v}]_B = [T\vec{v}]_C$, (2) becomes

(3) $\quad {}^t\big({}_{B^\vee}[T^\vee]_{C^\vee} \cdot [g]_{C^\vee}\big) \cdot [\vec{v}]_B = {}^t[g]_{C^\vee} \cdot \big({}_C[T]_B \cdot [\vec{v}]_B\big).$

Applying the rule ${}^t(A \cdot B) = {}^t B \cdot {}^t A$ for transposes of products to the left-hand side of (3) gives
$${}^t[g]_{C^\vee} \cdot {}^t\big({}_{B^\vee}[T^\vee]_{C^\vee}\big) \cdot [\vec{v}]_B = {}^t[g]_{C^\vee} \cdot {}_C[T]_B \cdot [\vec{v}]_B,$$
and since this holds for any $g \in W^\vee$ and $\vec{v} \in V$, we conclude that
$${}^t\big({}_{B^\vee}[T^\vee]_{C^\vee}\big) = {}_C[T]_B, \qquad \text{or} \qquad {}_{B^\vee}[T^\vee]_{C^\vee} = {}^t\big({}_C[T]_B\big).$$

The double dual. Associated to any subspace $U \subseteq V$ there is a subspace $U^\circ \subseteq V^\vee$ of complementary dimension, called the annihilator of $U$. It consists simply of all linear functionals $f \in V^\vee$ such that $f(\vec{x}) = 0$ for all $\vec{x} \in U$. For example, suppose $U$ is the plane in $\mathbb{R}^3$ consisting of solutions to $x_1 + 2x_2 + 3x_3 = 0$. Then in terms of the dual standard basis $\hat{e}^\vee$ (see the supplement), $U^\circ$ is just the "line" (in $(\mathbb{R}^3)^\vee$) spanned by
$$[f]_{\hat{e}^\vee} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix},$$
since then for any $\vec{x} \in U$,
$$f(\vec{x}) = {}^t[f]_{\hat{e}^\vee} \cdot [\vec{x}]_{\hat{e}} = \begin{pmatrix} 1 & 2 & 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = x_1 + 2x_2 + 3x_3 = 0.$$

Now suppose you take one more dual, and consider the annihilator of $U^\circ$, $(U^\circ)^\circ \subseteq (V^\vee)^\vee$!
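The annihilator example above can be checked directly (a sketch; the two spanning vectors for $U$ are one convenient choice of solutions, not the only one):

```python
import numpy as np

# U is the plane x1 + 2 x2 + 3 x3 = 0 in R^3; the functional below spans U°.
f = np.array([1., 2., 3.])

# Two independent solutions of the equation span U:
u1 = np.array([-2., 1., 0.])
u2 = np.array([-3., 0., 1.])
assert f @ u1 == 0 and f @ u2 == 0

# By linearity, f then annihilates every element of U:
x = 5. * u1 - 2. * u2
assert f @ x == 0
```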
Well, it turns out that you just get back what you started with: $(V^\vee)^\vee \cong V$, and moreover $(U^\circ)^\circ$ is again just $U$. We will prove only the first statement, by checking that the linear transformation $V \longrightarrow (V^\vee)^\vee$ given by
$$\vec{a} \longmapsto L_{\vec{a}} \qquad \big(\text{defined, on any } f \in V^\vee, \text{ by } L_{\vec{a}}(f) := f(\vec{a})\big)$$
is 1-to-1 and onto.
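A minimal coordinate sketch of why $\vec{a} \mapsto L_{\vec{a}}$ is 1-to-1 (the specific vector is an arbitrary example): applying $L_{\vec{a}}$ to the dual standard basis functionals recovers the coordinates of $\vec{a}$, so $L_{\vec{a}} = 0$ forces $\vec{a} = 0$; equality of dimensions then gives onto.

```python
import numpy as np

# In coordinates, L_a(f) = f(a) is the dot product of coordinate vectors.
a = np.array([2., -1., 3.])
E_dual = np.eye(3)               # rows: the dual standard basis e_i-dual

# L_a(e_i-dual) = a_i, so L_a determines a; hence a -> L_a is one-to-one.
assert np.allclose(E_dual @ a, a)
```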