Vector Space and Dual Vector Space

Let $V$ be a finite-dimensional vector space over $\mathbb{R}$ and suppose $v_1, \dots, v_n$ is a basis for $V$. The dual vector space to $V$ is
$$V^* = \{\, f : V \to \mathbb{R} \mid f \text{ is } \mathbb{R}\text{-linear} \,\}.$$
The next question might be: what is a basis for $V^*$? Define $\alpha^i \in V^*$ by $\alpha^i(v_j) = \delta^i_j$. We claim that $\alpha^1, \dots, \alpha^n$ is a basis for $V^*$. Since $V^*$ is also a vector space over $\mathbb{R}$, suppose $\sum_i c_i \alpha^i = 0$. Evaluating at $v_j$ gives
$$\Big(\sum_i c_i \alpha^i\Big)(v_j) = c_j = 0 \qquad \forall j,$$
so the $\alpha^i$ are linearly independent. Now let $f \in V^*$. Writing $v = \sum_j c_j v_j$, a direct computation gives
$$f(v) = f\Big(\sum_j c_j v_j\Big) = \sum_j c_j\, f(v_j) = \sum_j \alpha^j(v)\, f(v_j),$$
i.e. $f = \sum_j f(v_j)\,\alpha^j$. Hence $\alpha^1, \dots, \alpha^n$ form a basis for $V^*$, and $\dim(V) = \dim(V^*) = n$.

Example: Let $V = \mathbb{R}^2$ with scalar field $\mathbb{R}$, let $e_1 = \binom{1}{0}$, $e_2 = \binom{0}{1}$, and define $\alpha^i : \mathbb{R}^2 \to \mathbb{R}$ by $\alpha^i(e_j) = \delta^i_j$. If $v \in \mathbb{R}^2$ with $v = a e_1 + b e_2$, then $\alpha^1(v) = a$ and $\alpha^2(v) = b$. So the dual basis elements are precisely the projections onto the $x$- and $y$-axes, respectively.

Multilinear Functions

Let $V$ be a vector space and consider $V \times \cdots \times V$ ($k$ copies). We say $f$ is a multilinear function if $f$ is linear in each argument, i.e.
$$f(\dots, av + bw, \dots) = a\,f(\dots, v, \dots) + b\,f(\dots, w, \dots).$$
We also say $f$ is $k$-linear, $f$ is a $k$-tensor, or $f$ is a $k$-form, and denote by $L_k(V)$ the set of all $k$-linear functions on $V$.

Example: The inner product $\langle \cdot, \cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ defined by $\langle v, w \rangle = \sum_j v^j w^j$ is $2$-linear.

Let $f \in L_k(V)$. We say $f$ is symmetric if for every $\tau \in S_k$ we have $f(v_{\tau(1)}, \dots, v_{\tau(k)}) = f(v_1, \dots, v_k)$, and alternating if $f(v_{\tau(1)}, \dots, v_{\tau(k)}) = (\operatorname{sgn}\tau)\, f(v_1, \dots, v_k)$.

Example: For an example of a symmetric function, consider the dot product given above.

Example: Let $g : \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^3$ be defined by $g(v, w) = v \times w$, the cross product from multivariable calculus; it is bilinear and alternating (though vector-valued rather than real-valued).

Example: Let $f = \det : \operatorname{Mat}(n, \mathbb{R}) \to \mathbb{R}$ be defined by
$$f(A) = \sum_{\tau \in S_n} (\operatorname{sgn}\tau)\, a_{1,\tau(1)}\, a_{2,\tau(2)} \cdots a_{n,\tau(n)}.$$
If we view $A = [v_1\ v_2\ \cdots\ v_n]$ as a function of its columns, then $f \in A_n(\mathbb{R}^n)$, the space of alternating $n$-linear functions on $\mathbb{R}^n$.

Actions of Permutations on Functions

Let $f \in L_k(V)$ and $\sigma \in S_k$. We define
$$(\sigma f)(v_1, \dots, v_k) = f(v_{\sigma(1)}, \dots, v_{\sigma(k)}).$$
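As a quick numerical sanity check of this action (a sketch in numpy; the helper names are mine, not from the notes), we can verify on random vectors that $\det$, viewed as a function of its columns, satisfies $\sigma f = (\operatorname{sgn}\sigma) f$:

```python
import itertools
import numpy as np

def sgn(sigma):
    """Sign of a permutation given as a tuple of 0-based indices (inversion count)."""
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

def det_of_columns(*vs):
    """det as a multilinear function of the column vectors v_1, ..., v_n."""
    return np.linalg.det(np.column_stack(vs))

rng = np.random.default_rng(0)
v = [rng.standard_normal(3) for _ in range(3)]

# (sigma f)(v_1, ..., v_k) = f(v_sigma(1), ..., v_sigma(k)) = sgn(sigma) f(v_1, ..., v_k)
for sigma in itertools.permutations(range(3)):
    permuted = det_of_columns(*(v[sigma[i]] for i in range(3)))
    assert np.isclose(permuted, sgn(sigma) * det_of_columns(*v))
```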
With this action, $f$ is symmetric $\iff \sigma f = f$ for all $\sigma \in S_k$, and $f$ is alternating $\iff \sigma f = (\operatorname{sgn}\sigma) f$ for all $\sigma \in S_k$. We now introduce a very useful lemma.

Lemma: If $f \in L_k(V)$, then $Sf = \sum_{\sigma \in S_k} \sigma f$ is symmetric and $Af = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \sigma f$ is alternating.

Proof: Let $\tau \in S_k$. Then
$$\tau(Sf) = \tau\Big(\sum_{\sigma \in S_k} \sigma f\Big) = \sum_{\sigma \in S_k} \tau(\sigma f) = \sum_{\sigma \in S_k} (\tau\sigma) f = \sum_{\sigma \in S_k} \sigma f = Sf.$$
The last equality comes from the fact that $\sigma \mapsto \tau\sigma$ is a bijection of $S_k$: every $\sigma f$ still appears, since $\sigma f = (\tau\tau^{-1}\sigma) f$. For the next result, note that $\operatorname{sgn}(\tau\sigma) = (\operatorname{sgn}\tau)(\operatorname{sgn}\sigma)$, and hence $\operatorname{sgn}\sigma = (\operatorname{sgn}\tau)(\operatorname{sgn}\tau\sigma)$:
$$\tau(Af) = \tau\Big(\sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \sigma f\Big) = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, (\tau\sigma) f = \sum_{\sigma \in S_k} (\operatorname{sgn}\tau)(\operatorname{sgn}\tau\sigma)\, (\tau\sigma) f$$
$$= (\operatorname{sgn}\tau) \sum_{\sigma \in S_k} (\operatorname{sgn}\tau\sigma)\, (\tau\sigma) f = (\operatorname{sgn}\tau) \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \sigma f = (\operatorname{sgn}\tau)\, Af.$$

Example: Using cycle notation for permutations, $S_3 = \{(1), (12), (13), (23), (123), (132)\}$. All of the transpositions, i.e. the cycles of length two, have sign $-1$ by definition. To determine the others, note that $(123) = (13)(12)$ and $(132) = (12)(13)$, i.e. the $3$-cycles have sign $+1$.

Tensor Product

Let $f \in L_k(V)$ and $g \in L_j(V)$. We define their tensor product by
$$f \otimes g : (v_1, \dots, v_k, v_{k+1}, \dots, v_{k+j}) \mapsto f(v_1, \dots, v_k)\, g(v_{k+1}, \dots, v_{k+j}).$$
The juxtaposition on the right denotes ordinary multiplication, which makes sense because $f$ and $g$ are real-valued functionals. We can thus view $\otimes$ as a bilinear operator
$$\otimes : L_k(V) \times L_j(V) \to L_{k+j}(V), \qquad \otimes(f, g) = f \otimes g.$$
This operation can be shown to be associative. The next example shows that we can express the inner product on a vector space as a linear combination of tensor products.

Example: Take the special case $\langle \cdot, \cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$. Let $e_1, \dots, e_n$ be the standard basis and set $g_{ij} = \langle e_i, e_j \rangle$. Then
$$\langle v, w \rangle = \Big\langle \sum_i v^i e_i,\ \sum_j w^j e_j \Big\rangle = \sum_{i,j} v^i w^j \langle e_i, e_j \rangle = \sum_{i,j} v^i w^j g_{ij} = \sum_{i,j} g_{ij}\, \alpha^i(v)\, \alpha^j(w) = \sum_{i,j} g_{ij}\, (\alpha^i \otimes \alpha^j)(v, w).$$
Here $\alpha^i(v) = v^i$ and $\alpha^j(w) = w^j$, by our earlier remark that the maps in the dual basis give the coordinates with respect to the chosen basis.
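The identity $\langle v, w \rangle = \sum_{i,j} g_{ij}\,(\alpha^i \otimes \alpha^j)(v, w)$ can be checked numerically for an arbitrary basis of $\mathbb{R}^n$, not just the standard one (a sketch in numpy; variable names are my own):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
E = rng.standard_normal((n, n))  # columns e_1, ..., e_n: a random basis of R^n
g = E.T @ E                      # g_ij = <e_i, e_j> under the standard inner product
A = np.linalg.inv(E)             # row i of A is the dual basis map: alpha^i(v) = A[i] @ v

v, w = rng.standard_normal(n), rng.standard_normal(n)
lhs = v @ w  # <v, w>
# sum_{i,j} g_ij (alpha^i tensor alpha^j)(v, w) = sum_{i,j} g_ij alpha^i(v) alpha^j(w)
rhs = sum(g[i, j] * (A[i] @ v) * (A[j] @ w) for i in range(n) for j in range(n))
assert np.isclose(lhs, rhs)
```

The inverse matrix is the right object here because $\alpha^i(e_j) = (E^{-1}E)_{ij} = \delta^i_j$ is exactly the defining property of the dual basis.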
The Wedge Product

Let $f \in A_k(V)$ and $g \in A_l(V)$. We define their exterior product or wedge product by
$$f \wedge g = \frac{1}{k!\,l!}\, A(f \otimes g).$$
In coordinates, if $v_1, \dots, v_{k+l} \in V$, then
$$(f \wedge g)(v_1, \dots, v_{k+l}) = \frac{1}{k!\,l!} \sum_{\sigma \in S_{k+l}} (\operatorname{sgn}\sigma)\, f(v_{\sigma(1)}, \dots, v_{\sigma(k)}) \cdot g(v_{\sigma(k+1)}, \dots, v_{\sigma(k+l)}).$$
The division by $k!\,l!$ compensates for repeated terms: in $S_{k+l}$ there are permutations $\sigma$ with $\sigma(j) = j$ for all $j > k$, and for such $\sigma$ the alternating property of $f$ gives $(\operatorname{sgn}\sigma)\, f(v_{\sigma(1)}, \dots, v_{\sigma(k)})\, g(v_{k+1}, \dots, v_{k+l}) = f(v_1, \dots, v_k)\, g(v_{k+1}, \dots, v_{k+l})$. Since $|S_k| = k!$, this value is repeated $k!$ times. Similarly, the permutations $\tau \in S_{k+l}$ with $\tau(j) = j$ for all $j \le k$ repeat each value $l!$ times. Our last note is that, by construction, $f \wedge g$ is alternating.

Properties of the Wedge Product

We will prove a few of these properties; some are a bit messy and will be reserved for $k$-forms on tangent spaces.

• If $f \in A_k(V)$ and $g \in A_l(V)$, then $f \wedge g = (-1)^{kl}\, g \wedge f$.
• If $f \in A_{2k+1}(V)$, i.e. $f$ has odd degree, then $f \wedge f = -f \wedge f$ by the above, and so $f \wedge f = 0$.
• If $f \in A_k(V)$, $g \in A_l(V)$, and $h \in A_m(V)$, then $(f \wedge g) \wedge h = f \wedge (g \wedge h) = \dfrac{1}{k!\,l!\,m!}\, A(f \otimes g \otimes h)$.
• If $\alpha^1, \dots, \alpha^k \in L_1(V) = V^*$ and $v_i \in V$, then
$$(\alpha^1 \wedge \cdots \wedge \alpha^k)(v_1, \dots, v_k) = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \alpha^1(v_{\sigma(1)})\, \alpha^2(v_{\sigma(2)}) \cdots \alpha^k(v_{\sigma(k)}) = \det[\alpha^i(v_j)].$$

Multi-Index Notation

Let $e_1, \dots, e_n$ be a basis for $V$. Consider $f \in L_2(V)$: then $f(v, w) = \sum_{i,j} v^i w^j f(e_i, e_j)$, so $f$ is determined by its values on the basis vectors. Since we will only be considering forms in $A_k(V)$, we make another observation. Whenever $i > j$ we have $f(e_i, e_j) = -f(e_j, e_i)$, so the double sum can be regrouped as a sum over $1 \le i < j \le n$: using $(\alpha^i \wedge \alpha^j)(v, w) = v^i w^j - v^j w^i$,
$$f(v, w) = \sum_{1 \le i < j \le n} f(e_i, e_j)\, (\alpha^i \wedge \alpha^j)(v, w).$$
We are only using the alternating property of $f$ to regroup the summands; nothing is discarded. We now introduce multi-index notation: for $I = (i_1, \dots, i_k)$ write $e_I = (e_{i_1}, \dots, e_{i_k})$ and $\alpha^I = \alpha^{i_1} \wedge \cdots \wedge \alpha^{i_k}$. By the above, since $f \in A_k(V)$, it is enough to consider strictly increasing multi-indices.
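The determinant formula for a wedge of one-forms can be verified directly from the permutation sum (a numpy/itertools sketch with my own helper names; one-forms on $\mathbb{R}^n$ are represented as row vectors acting by the dot product):

```python
import itertools
import numpy as np

def sgn(sigma):
    """Sign of a permutation given as a tuple of 0-based indices."""
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

def wedge_of_one_forms(alphas, vs):
    """(alpha^1 ^ ... ^ alpha^k)(v_1, ..., v_k) via the sum over S_k."""
    k = len(alphas)
    return sum(
        sgn(sigma) * np.prod([alphas[i] @ vs[sigma[i]] for i in range(k)])
        for sigma in itertools.permutations(range(k))
    )

rng = np.random.default_rng(2)
alphas = [rng.standard_normal(4) for _ in range(3)]  # three 1-forms on R^4
vs = [rng.standard_normal(4) for _ in range(3)]      # three vectors in R^4
M = np.array([[a @ v for v in vs] for a in alphas])  # M[i][j] = alpha^i(v_j)
assert np.isclose(wedge_of_one_forms(alphas, vs), np.linalg.det(M))
```

Note that no $1/k!$ factor appears: for one-forms each $l_i! = 1! = 1$, so the normalization in the definition of $\wedge$ is trivial here.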
Remark: If we define $A_*(V) = \bigoplus_{k=0}^{n} A_k(V)$, then $(A_*(V), \wedge)$ is an anticommutative graded algebra, called the exterior algebra or the Grassmann algebra of multicovectors.

Basis for k-Covectors

Let $e_1, \dots, e_n$ be a basis for $V$ and $\alpha^1, \dots, \alpha^n$ its dual basis in $V^*$. Consider strictly increasing multi-indices $I = (1 \le i_1 < \cdots < i_k \le n)$ and $J = (1 \le j_1 < \cdots < j_k \le n)$. Then
$$\alpha^I(e_J) = (\alpha^{i_1} \wedge \cdots \wedge \alpha^{i_k})(e_{j_1}, \dots, e_{j_k}) = \det[\alpha^{i_r}(e_{j_s})].$$
If $I = J$, then $[\alpha^{i_r}(e_{j_s})]$ is the identity matrix, and so the determinant is $1$. If $I \ne J$, compare the pairs $(i_s, j_s)$ until the first index $r$ with $i_r \ne j_r$; without loss of generality suppose $i_r < j_r$. Then $i_r \ne j_r, j_{r+1}, \dots, j_k$ since the $j$'s are increasing, and also $i_r \ne j_1, \dots, j_{r-1}$ since those equal $i_1, \dots, i_{r-1} < i_r$. Hence the $r$-th row of $[\alpha^{i_r}(e_{j_s})]$ is all zeros and $\det[\alpha^{i_r}(e_{j_s})] = 0$. In short, $\alpha^I(e_J) = \delta^I_J$.

We will now show that the alternating $k$-linear functions $\alpha^I$, over strictly increasing $I$, form a basis for $A_k(V)$. Since $A_k(V)$ is also a vector space over $\mathbb{R}$, suppose $\sum_I a_I \alpha^I = 0$. Evaluating at $e_J$ for any strictly increasing $J$ gives
$$\Big(\sum_I a_I \alpha^I\Big)(e_J) = a_J = 0,$$
which shows the $\alpha^I$ are linearly independent. To show that they span $A_k(V)$, let $f \in A_k(V)$. We claim $f = \sum_I f(e_I)\, \alpha^I$; this mirrors the argument by which the one-forms $\alpha^i$ spanned $V^*$. Define $g = \sum_I f(e_I)\, \alpha^I$. Then for any strictly increasing $J$, $g(e_J) = \sum_I f(e_I)\, \alpha^I(e_J) = f(e_J)$, and since an element of $A_k(V)$ is determined by its values on such tuples, $f = g$.
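The relation $\alpha^I(e_J) = \delta^I_J$ can be checked exhaustively for a small case, which also exhibits the count of basis elements $\dim A_k(V) = \binom{n}{k}$ (a numpy/itertools sketch, taking $e_i$ to be the standard basis of $\mathbb{R}^n$; helper names are mine):

```python
import itertools
import numpy as np

n, k = 4, 2
increasing = list(itertools.combinations(range(n), k))  # strictly increasing multi-indices

def alpha_I_of_e_J(I, J):
    """alpha^I(e_J) = det[alpha^{i_r}(e_{j_s})]; for the standard basis the
    entries are Kronecker deltas: alpha^{i}(e_{j}) = delta_{ij}."""
    M = np.array([[1.0 if i == j else 0.0 for j in J] for i in I])
    return np.linalg.det(M)

for I in increasing:
    for J in increasing:
        expected = 1.0 if I == J else 0.0
        assert np.isclose(alpha_I_of_e_J(I, J), expected)

# Number of strictly increasing multi-indices = C(n, k) = dim A_k(R^n)
assert len(increasing) == 6  # C(4, 2)
```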
