
LECTURE 16: MULTILINEAR ALGEBRA

1. Tensors

Let $V$ be an $n$-dimensional vector space, and $V^*$ its dual space.

Definition 1.1. A function $T : V^k \to \mathbb{R}$ is a $k$-tensor on $V$ if it is multilinear, i.e. if for each $i$ and each $v_1, \cdots, v_{i-1}, v_{i+1}, \cdots, v_k \in V$, the map
$$T_i : V \to \mathbb{R}, \quad v_i \mapsto T(v_1, \cdots, v_i, \cdots, v_k)$$
is linear.

We will denote the space of $k$-tensors on $V$ by $\otimes^k V^*$. One can think of $\otimes^k V^*$ as a generalization of the dual space $V^*$, since $\otimes^1 V^* = V^*$. We will denote $\otimes^0 V^* = \mathbb{R}$. Similarly the space $\otimes^k V$ can be viewed as $\otimes^k (V^*)^*$.

One can "multiply" tensors in a simple way.

Definition 1.2. Let $T$ be a $k$-tensor on $V$ and $S$ an $l$-tensor on $V$. Then their tensor product $T \otimes S$ is a $(k+l)$-tensor on $V$ defined by
$$(T \otimes S)(v_1, \cdots, v_{k+l}) = T(v_1, \cdots, v_k)\, S(v_{k+1}, \cdots, v_{k+l}).$$

Remark. Obviously the tensor product operation $\otimes : \otimes^k V^* \times \otimes^l V^* \to \otimes^{k+l} V^*$ is a bilinear map, and it is associative:
$$(T \otimes S) \otimes R = T \otimes (S \otimes R).$$
So it makes sense to talk about tensor products of many tensors. However, the tensor product operation is not commutative in general: $T \otimes S \ne S \otimes T$.

Example. For any $f^1, \cdots, f^k \in V^*$, the tensor $T = f^1 \otimes \cdots \otimes f^k$ is a $k$-tensor such that
$$T(v_1, \cdots, v_k) = f^1(v_1) \cdots f^k(v_k).$$
Such a tensor is called a decomposable $k$-tensor. (Note that by multilinearity, $2e \otimes f = e \otimes 2f$.)

The following theorem clarifies how $\otimes^k V^*$ generalizes $V^*$:

Theorem 1.3. Let $\{e^1, \cdots, e^n\}$ be a basis of $V^*$. Then the set of $k$-tensors
$$\{ e^{i_1} \otimes e^{i_2} \otimes \cdots \otimes e^{i_k} \mid 1 \le i_1, \cdots, i_k \le n \}$$
forms a basis of $\otimes^k V^*$. In particular, $\dim \otimes^k V^* = n^k$.

Proof. We will denote by $\{e_1, \cdots, e_n\}$ the dual basis in $V$. For any multi-index $I = (i_1, \cdots, i_k)$, we will denote $E^I = e^{i_1} \otimes e^{i_2} \otimes \cdots \otimes e^{i_k}$. Then the fact
$$E^I(e_{j_1}, \cdots, e_{j_k}) = \delta^{i_1, \cdots, i_k}_{j_1, \cdots, j_k}$$
implies that the tensors $E^I$ are linearly independent.
Moreover, for any $T \in \otimes^k V^*$, if we let $T_I = T(e_{i_1}, \cdots, e_{i_k})$ and consider the $k$-tensor
$$S = T - \sum_I T_I E^I,$$
then $S(e_{j_1}, \cdots, e_{j_k}) = 0$ for any multi-index $J = (j_1, \cdots, j_k)$. It follows from multilinearity that $S \equiv 0$. In other words, $T = \sum_I T_I E^I$ is a linear combination of these $E^I$'s. $\square$

Remark. So any $k$-tensor is of the form $T = \sum_I T_I\, e^{i_1} \otimes \cdots \otimes e^{i_k}$. It is easy to check: a nonzero 2-tensor $T = \sum_{i,j} a_{ij}\, e^i \otimes e^j$ is decomposable if and only if the coefficient matrix $(a_{ij})$ has rank 1. However, in general it is very hard to tell whether a $k$-tensor, $k \ge 3$, is decomposable or not. This is related to quantum entanglement.

More generally, for any vector spaces $V_1, \cdots, V_k$ of dimensions $m_1, \cdots, m_k$ respectively, one can define the tensor product $V_1 \otimes \cdots \otimes V_k$ to be the $m_1 \cdots m_k$-dimensional linear space with a basis
$$\{ v^1_{i_1} \otimes \cdots \otimes v^k_{i_k} \mid 1 \le i_1 \le m_1, \cdots, 1 \le i_k \le m_k \},$$
where $\{v^j_1, \cdots, v^j_{m_j}\}$ is a basis of $V_j$. In particular, we will call
$$\otimes^{l,k} V := (\otimes^l V) \otimes (\otimes^k V^*)$$
the space of $(l,k)$-tensors on $V$. In other words, $T \in \otimes^{l,k} V$ if and only if
$$T = T(\beta^1, \cdots, \beta^l, v_1, \cdots, v_k)$$
is multilinear with respect to $\beta^i \in V^*$ and $v_i \in V$.

Remark. For any finite dimensional vector spaces $V$ and $W$, one has a natural linear isomorphism $V \otimes W^* \simeq L(W, V)$, where $L(W, V)$ is the set of all linear maps from $W$ to $V$.

Definition 1.4. Let $T$ be an $(l,k)$-tensor on $V$. For any $1 \le r \le l$ and $1 \le s \le k$, the $(r,s)$-contraction of $T$ is the $(l-1, k-1)$-tensor
$$C^r_s(T)(\beta^1, \cdots, \beta^{l-1}, v_1, \cdots, v_{k-1}) = \sum_i T(\beta^1, \cdots, \beta^{r-1}, e^i, \beta^r, \cdots, \beta^{l-1}, v_1, \cdots, v_{s-1}, e_i, v_s, \cdots, v_{k-1}),$$
where $\{e_1, \cdots, e_n\}$ is a basis of $V$, and $\{e^1, \cdots, e^n\}$ the dual basis.

One can check that this definition is independent of the choice of the basis $\{e_i\}$ of $V$. In fact, $C^r_s(T)$ is the $(l-1, k-1)$-tensor obtained from $T$ by pairing the $r$th vector in $T$ with the $s$th co-vector in $T$.
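In coordinates these constructions are concrete array manipulations. The following sketch (a hypothetical NumPy illustration, not part of the lecture) stores tensors by their coefficient arrays $(T_I)$, realizes the tensor product of covectors as an outer product, checks the rank-1 criterion for decomposability of 2-tensors, and computes the contraction of a $(1,1)$-tensor, which is just the trace.

```python
import numpy as np

# A decomposable 2-tensor T = f ⊗ g is stored via its coefficients
# T_{ij} = f_i g_j, i.e. the outer product of the coefficient vectors.
f = np.array([1.0, 2.0, 0.0])
g = np.array([0.0, 1.0, 3.0])
T = np.einsum('i,j->ij', f, g)      # tensor product of two 1-tensors

# A nonzero 2-tensor is decomposable iff its coefficient matrix has rank 1.
assert np.linalg.matrix_rank(T) == 1

# A sum of two decomposable 2-tensors is generally not decomposable:
S = T + np.einsum('i,j->ij', g, f)  # f ⊗ g + g ⊗ f
assert np.linalg.matrix_rank(S) == 2

# A (1,1)-tensor is a matrix A^i_j; its (1,1)-contraction
# C^1_1(A) = sum_i A(e^i, e_i) is the trace, independent of the basis.
A = np.arange(9.0).reshape(3, 3)
assert np.isclose(np.einsum('ii->', A), np.trace(A))
```

The `einsum` index notation mirrors the pairing in Definition 1.4: repeating an index once "upstairs" and once "downstairs" and summing over it is exactly a contraction.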
For example, if $v, w \in V$ and $\alpha, \beta, \gamma \in V^*$, one has
$$C^1_2(v \otimes w \otimes \alpha \otimes \beta \otimes \gamma) = \beta(v)\, w \otimes \alpha \otimes \gamma.$$

2. Linear p-forms

Now let's fix a vector space $V$.

Definition 2.1. A $k$-tensor $T$ on $V$ is called symmetric if
$$T(v_1, \cdots, v_k) = T(v_{\sigma(1)}, \cdots, v_{\sigma(k)})$$
for all permutations $\sigma$ of $(1, 2, \cdots, k)$.

Example. An inner product on $V$ is a positive symmetric 2-tensor.

Definition 2.2. A $k$-tensor $T$ on $V$ is alternating (or a linear $k$-form) if it is skew-symmetric, i.e.
$$T(v_1, \cdots, v_i, \cdots, v_j, \cdots, v_k) = -T(v_1, \cdots, v_j, \cdots, v_i, \cdots, v_k)$$
for all $v_1, \cdots, v_k \in V$ and any $1 \le i \ne j \le k$.

We will denote the vector space of $k$-forms by $\Lambda^k V^*$. Note that $\Lambda^k V^*$ is a linear subspace of $\otimes^k V^*$, and $\Lambda^1 V^* = \otimes^1 V^* = V^*$. Again we set $\Lambda^0 V^* = \mathbb{R}$.

Example. $\det$ is an $n$-form on $\mathbb{R}^n$.

Recall that a permutation $\sigma \in S_k$ is called even or odd, depending on whether it is expressible as a product of an even or odd number of simple transpositions. For any $k$-tensor $T$ and any $\sigma \in S_k$, we define another $k$-tensor $T^\sigma$ by
$$T^\sigma(v_1, \cdots, v_k) = T(v_{\sigma(1)}, \cdots, v_{\sigma(k)}).$$
Clearly
• For any $k$-tensor $T$, $(T^\sigma)^\pi = T^{\sigma \circ \pi}$ for all $\sigma, \pi \in S_k$.
• A $k$-tensor $T$ is symmetric if and only if $T^\sigma = T$ for all $\sigma \in S_k$.
• A $k$-tensor $T$ is a $k$-form if and only if $T^\sigma = (-1)^\sigma T$ for all $\sigma \in S_k$, where $(-1)^\sigma = 1$ if $\sigma$ is even, and $(-1)^\sigma = -1$ if $\sigma$ is odd.

For any $k$-tensor $T$ on $V$, we consider the anti-symmetrization map
$$\mathrm{Alt}(T) = \frac{1}{k!} \sum_{\pi \in S_k} (-1)^\pi T^\pi.$$

Lemma 2.3. The map $\mathrm{Alt}$ is a projection from $\otimes^k V^*$ to $\Lambda^k V^*$, i.e. it satisfies
(1) For any $T \in \otimes^k V^*$, $\mathrm{Alt}(T) \in \Lambda^k V^*$.
(2) For any $T \in \Lambda^k V^*$, $\mathrm{Alt}(T) = T$.

Proof. (1) For any $T \in \otimes^k V^*$ and any $\sigma \in S_k$,
$$[\mathrm{Alt}(T)]^\sigma = \frac{1}{k!} \sum_{\pi \in S_k} (-1)^\pi (T^\pi)^\sigma = (-1)^\sigma \frac{1}{k!} \sum_{\pi \in S_k} (-1)^{\pi \circ \sigma} T^{\pi \circ \sigma} = (-1)^\sigma \mathrm{Alt}(T).$$
(2) If $T \in \Lambda^k V^*$, then each summand $(-1)^\pi T^\pi$ equals $T$. So $\mathrm{Alt}(T) = T$ since $|S_k| = k!$. $\square$

We will need

Lemma 2.4. Let $T$, $S$, $R$ be $k$-, $l$-, and $m$-forms respectively. Then
(1) $\mathrm{Alt}(T \otimes S) = (-1)^{kl}\, \mathrm{Alt}(S \otimes T)$.
(2) $\mathrm{Alt}(\mathrm{Alt}(T \otimes S) \otimes R) = \mathrm{Alt}(T \otimes S \otimes R) = \mathrm{Alt}(T \otimes \mathrm{Alt}(S \otimes R))$.

Proof. Exercise. $\square$

Now we can define a "product operation" for forms:

Definition 2.5. The wedge product of $T \in \Lambda^k V^*$ and $S \in \Lambda^l V^*$ is the $(k+l)$-form
$$T \wedge S = \frac{(k+l)!}{k!\, l!}\, \mathrm{Alt}(T \otimes S).$$

The wedge product operation satisfies

Proposition 2.6. The wedge product operation $\wedge : (\Lambda^k V^*) \times (\Lambda^l V^*) \to \Lambda^{k+l} V^*$ is
(1) Bilinear: $(T, S) \mapsto T \wedge S$ is linear in $T$ and in $S$.
(2) Anti-commutative: $T \wedge S = (-1)^{kl}\, S \wedge T$.
(3) Associative: $(T \wedge S) \wedge R = T \wedge (S \wedge R)$.

Proof. (1) follows from the definition. (2) follows from Lemma 2.4(1). (3) follows from the definition and Lemma 2.4(2). $\square$

So it makes sense to talk about wedge products of three or more forms. For example, we have
$$T \wedge S \wedge R = \frac{(k+l+m)!}{k!\, l!\, m!}\, \mathrm{Alt}(T \otimes S \otimes R).$$
One can easily extend this to wedge products of more than 3 forms. In particular, by definition we have: if $f^1, \cdots, f^k \in V^*$, then $f^1 \wedge \cdots \wedge f^k = k!\, \mathrm{Alt}(f^1 \otimes \cdots \otimes f^k)$. As a consequence,

Proposition 2.7. For any $f^1, \cdots, f^k \in V^*$ and $v_1, \cdots, v_k \in V$,
$$f^1 \wedge \cdots \wedge f^k(v_1, \cdots, v_k) = \det(f^i(v_j)).$$

Proof. We have
$$f^1 \wedge \cdots \wedge f^k(v_1, \cdots, v_k) = k!\, \mathrm{Alt}(f^1 \otimes \cdots \otimes f^k)(v_1, \cdots, v_k) = \sum_{\sigma \in S_k} (-1)^\sigma f^1(v_{\sigma(1)}) \cdots f^k(v_{\sigma(k)}) = \det((f^i(v_j))). \quad \square$$

Now we are ready to prove

Theorem 2.8. Let $\{e^1, \cdots, e^n\}$ be a basis of $V^*$. Then the set of $k$-forms
$$\{ e^{i_1} \wedge e^{i_2} \wedge \cdots \wedge e^{i_k} \mid 1 \le i_1 < i_2 < \cdots < i_k \le n \}$$
forms a basis of $\Lambda^k V^*$. In particular, $\dim \Lambda^k V^* = \binom{n}{k}$.

Proof. Again we denote by $\{e_1, \cdots, e_n\}$ the dual basis in $V$. For any multi-index $I = (i_1, \cdots, i_k)$ with $i_1 < \cdots < i_k$, we let $\Omega^I = e^{i_1} \wedge \cdots \wedge e^{i_k}$. Then for any multi-index $J = (j_1, \cdots, j_k)$ with $j_1 < \cdots < j_k$,
$$\Omega^I(e_{j_1}, \cdots, e_{j_k}) = \delta^{i_1, \cdots, i_k}_{j_1, \cdots, j_k}.$$
It follows that these $\Omega^I$'s are linearly independent.
Moreover, since any $T \in \Lambda^k V^*$ is a $k$-tensor, we can write $T = \sum_I T_I E^I$, where $I = (i_1, \cdots, i_k)$ runs over all $1 \le i_1, \cdots, i_k \le n$, and $E^I$ is as in the proof of Theorem 1.3.
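The formulas above can be checked numerically. The sketch below (a hypothetical NumPy illustration, not part of the lecture) implements $T^\pi$, $\mathrm{Alt}$, and the wedge product on coefficient arrays, then verifies Proposition 2.7, $f^1 \wedge f^2 \wedge f^3(v_1, v_2, v_3) = \det(f^i(v_j))$, and the projection property of Lemma 2.3.

```python
import numpy as np
from itertools import permutations
from math import factorial

def sign(p):
    # parity of a permutation given as a tuple, by sorting with transpositions
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def alt(T):
    # Alt(T) = (1/k!) sum over pi in S_k of (-1)^pi T^pi;
    # T^pi acts on the coefficient array as an axis permutation
    k = T.ndim
    out = np.zeros_like(T, dtype=float)
    for p in permutations(range(k)):
        out += sign(p) * np.transpose(T, p)
    return out / factorial(k)

def wedge(T, S):
    # T ∧ S = ((k+l)!/(k! l!)) Alt(T ⊗ S); tensordot with axes=0 is ⊗
    k, l = T.ndim, S.ndim
    coeff = factorial(k + l) / (factorial(k) * factorial(l))
    return coeff * alt(np.tensordot(T, S, axes=0))

rng = np.random.default_rng(0)
fs = rng.standard_normal((3, 3))      # rows are the covectors f^1, f^2, f^3
vs = rng.standard_normal((3, 3))      # rows are the vectors v_1, v_2, v_3

w = wedge(wedge(fs[0], fs[1]), fs[2])  # the 3-form f^1 ∧ f^2 ∧ f^3

# evaluate the 3-form on (v_1, v_2, v_3) by contracting all indices
val = np.einsum('ijk,i,j,k->', w, vs[0], vs[1], vs[2])
M = fs @ vs.T                          # M[i, j] = f^i(v_j)
assert np.isclose(val, np.linalg.det(M))

# Lemma 2.3: Alt is a projection, so Alt fixes the form w
assert np.allclose(alt(w), w)
```

Note that associativity (Proposition 2.6(3)) is what allows `wedge(wedge(fs[0], fs[1]), fs[2])` to stand in for the triple wedge product with its $(k+l+m)!/(k!\,l!\,m!)$ coefficient.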