
LECTURE 16: MULTILINEAR ALGEBRA

1. Tensors

Let V be an n-dimensional vector space, and V* its dual space.

Definition 1.1. A function T : V^k → R is a k-tensor on V if it is multilinear, i.e. if for each i and each v1, ··· , v_{i−1}, v_{i+1}, ··· , vk ∈ V, the map

T_i : V → R,  v_i ↦ T(v1, ··· , v_i, ··· , vk)
is linear.

We will denote the space of k-tensors on V by ⊗^k V*. One can think of ⊗^k V* as a generalization of the dual space V*, since ⊗^1 V* = V*. We will denote ⊗^0 V* = R. Similarly the space ⊗^k V can be viewed as ⊗^k (V*)*, i.e. the space of k-tensors on V*.

One can “multiply” tensors in a simple way.

Definition 1.2. Let T be a k-tensor on V and S an l-tensor on V. Then their tensor product T ⊗ S is a (k + l)-tensor on V defined by

(T ⊗ S)(v1, ··· , v_{k+l}) = T(v1, ··· , vk) S(v_{k+1}, ··· , v_{k+l}).

Remark. Obviously the tensor product ⊗ : ⊗^k V* × ⊗^l V* → ⊗^{k+l} V* is a bilinear map, and it is associative: (T ⊗ S) ⊗ R = T ⊗ (S ⊗ R). So it makes sense to talk about the tensor product of many tensors. However, the tensor product operation is not commutative in general: T ⊗ S ≠ S ⊗ T.

Example. For any f^1, ··· , f^k ∈ V*, the tensor T = f^1 ⊗ ··· ⊗ f^k is a k-tensor, so that
T(v1, ··· , vk) = f^1(v1) ··· f^k(vk).
Such a tensor is called a decomposable k-tensor. (Note that by multi-linearity, 2e ⊗ f = e ⊗ 2f.)
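One can experiment with these definitions numerically. The sketch below stores a k-tensor on R^n as a k-dimensional array whose entry at (i1, ··· , ik) is T(e_{i1}, ··· , e_{ik}); this encoding, and the helper names, are our own, not notation from the lecture.

```python
import numpy as np

def tensor_product(T, S):
    """Definition 1.2 in array form: (T ⊗ S)[i..., j...] = T[i...] * S[j...]."""
    return np.tensordot(T, S, axes=0)

def evaluate(T, *vectors):
    """Evaluate a k-tensor on k vectors by contracting one slot at a time."""
    out = T
    for v in vectors:
        out = np.tensordot(out, v, axes=([0], [0]))
    return float(out)

f = np.array([1.0, 2.0])      # a covector f ∈ V*, with V = R^2
g = np.array([3.0, -1.0])     # another covector g ∈ V*
v = np.array([1.0, 0.5])
w = np.array([0.0, 1.0])

fg = tensor_product(f, g)     # the decomposable 2-tensor f ⊗ g
# (f ⊗ g)(v, w) = f(v) g(w)
assert np.isclose(evaluate(fg, v, w), (f @ v) * (g @ w))
# ⊗ is not commutative: f ⊗ g and g ⊗ f differ as arrays
assert not np.allclose(fg, tensor_product(g, f))
```
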

The following theorem clarifies how ⊗^k V* generalizes V*:

Theorem 1.3. Let {e^1, ··· , e^n} be a basis of V*. Then the set of k-tensors

{e^{i1} ⊗ e^{i2} ⊗ ··· ⊗ e^{ik} | 1 ≤ i1, ··· , ik ≤ n}
forms a basis of ⊗^k V*. In particular, dim ⊗^k V* = n^k.


Proof. We will denote by {e_1, ··· , e_n} the dual basis in V. For any multi-index I = (i1, ··· , ik), we will denote E^I = e^{i1} ⊗ e^{i2} ⊗ ··· ⊗ e^{ik}. Then the fact
E^I(e_{j1}, ··· , e_{jk}) = δ^{i1,··· ,ik}_{j1,··· ,jk}
implies that the tensors E^I are linearly independent.
Moreover, for any T ∈ ⊗^k V*, if we let T_I = T(e_{i1}, ··· , e_{ik}) and consider the k-tensor
S = T − Σ_I T_I E^I,
then S(e_{j1}, ··· , e_{jk}) = 0 for any multi-index J = (j1, ··· , jk). It follows from multi-linearity that S ≡ 0. In other words, T = Σ_I T_I E^I is a linear combination of these E^I's. □

Remark. So any k-tensor is of the form T = Σ_I T_I e^{i1} ⊗ ··· ⊗ e^{ik}. It is easy to check: a nonzero 2-tensor T = Σ_{i,j} a_{ij} e^i ⊗ e^j is decomposable if and only if the coefficient matrix (a_{ij}) has rank 1. However, in general it is very hard to tell whether a k-tensor, k ≥ 3, is decomposable or not. This is related to quantum entanglement.
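The rank-1 criterion for 2-tensors is easy to test in the array encoding used above, since the coefficient matrix (a_{ij}) of f ⊗ g is exactly the outer product of the coefficient vectors (a sketch; the function name is our own):

```python
import numpy as np

def is_decomposable_2tensor(A, tol=1e-10):
    """A 2-tensor with coefficient matrix A is decomposable iff rank(A) ≤ 1."""
    return np.linalg.matrix_rank(A, tol=tol) <= 1

f = np.array([1.0, 2.0, 3.0])
g = np.array([4.0, 5.0, 6.0])
assert is_decomposable_2tensor(np.outer(f, g))   # f ⊗ g has rank 1
assert not is_decomposable_2tensor(np.eye(3))    # the identity matrix has rank 3
```
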

More generally, for any vector spaces V1, ··· , Vk of dimensions m1, ··· , mk respectively, one can define the tensor product V1 ⊗ ··· ⊗ Vk to be the m1 ··· mk dimensional linear space with a basis
{v^1_{i1} ⊗ ··· ⊗ v^k_{ik} | 1 ≤ i1 ≤ m1, ··· , 1 ≤ ik ≤ mk},
where {v^j_1, ··· , v^j_{mj}} is a basis of Vj. In particular, we will call
⊗^{l,k} V := (⊗^l V) ⊗ (⊗^k V*)
the space of (l, k)-tensors on V. In other words, T ∈ ⊗^{l,k} V if and only if
T = T(β^1, ··· , β^l, v1, ··· , vk)
is multilinear with respect to β^i ∈ V* and v_i ∈ V.

Remark. For any finite dimensional vector spaces V and W, one has a natural linear isomorphism V ⊗ W* ≅ L(W, V), where L(W, V) is the set of all linear maps from W to V.

Definition 1.4. Let T be an (l, k)-tensor on V. For any 1 ≤ r ≤ l and 1 ≤ s ≤ k, the (r, s)-contraction of T is the (l − 1, k − 1)-tensor

C^r_s(T)(β^1, ··· , β^{l−1}, v1, ··· , v_{k−1}) = Σ_i T(β^1, ··· , β^{r−1}, e^i, β^r, ··· , β^{l−1}, v1, ··· , v_{s−1}, e_i, v_s, ··· , v_{k−1}),
where {e_1, ··· , e_n} is a basis of V, and {e^1, ··· , e^n} the dual basis.

One can check that this definition is independent of the choice of the basis {e_i} of V. In fact, C^r_s(T) is the (l − 1, k − 1)-tensor obtained from T by pairing the r-th vector in T with the s-th co-vector in T. For example, if v, w ∈ V and α, β, γ ∈ V*, one has
C^1_2(v ⊗ w ⊗ α ⊗ β ⊗ γ) = β(v) w ⊗ α ⊗ γ.
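In coordinates, contraction is just a trace over one “vector” axis and one “covector” axis, which is one way to see the basis independence. The sketch below stores an (l, k)-tensor as an array with l vector axes followed by k covector axes (our own encoding) and checks the example above:

```python
import numpy as np

def contract(T, l, r, s):
    """(r, s)-contraction of an (l, k)-tensor (r, s are 1-based):
    trace over the r-th vector axis and the s-th covector axis."""
    return np.trace(T, axis1=r - 1, axis2=l + s - 1)

rng = np.random.default_rng(0)
v, w = rng.standard_normal(2), rng.standard_normal(2)
alpha, beta, gamma = (rng.standard_normal(2) for _ in range(3))

# v ⊗ w ⊗ α ⊗ β ⊗ γ is a (2, 3)-tensor:
T = np.einsum('i,j,a,b,c->ijabc', v, w, alpha, beta, gamma)
# C^1_2 pairs the 1st vector slot with the 2nd covector slot:
C = contract(T, l=2, r=1, s=2)
# ... which should equal β(v) · w ⊗ α ⊗ γ
expected = (beta @ v) * np.einsum('j,a,c->jac', w, alpha, gamma)
assert np.allclose(C, expected)
```
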

2. Linear k-forms

Now let's fix an n-dimensional vector space V.

Definition 2.1. A k-tensor T on V is called symmetric if

T(v1, ··· , vk) = T(v_{σ(1)}, ··· , v_{σ(k)})
for all permutations σ of (1, 2, ··· , k).

Example. An inner product on V is a positive definite symmetric 2-tensor.

Definition 2.2. A k-tensor T on V is alternating (or a linear k-form) if it is skew-symmetric, i.e.

T(v1, ··· , v_i, ··· , v_j, ··· , vk) = −T(v1, ··· , v_j, ··· , v_i, ··· , vk)
for all v1, ··· , vk ∈ V and any 1 ≤ i ≠ j ≤ k. We will denote the vector space of k-forms by Λ^k V*. Note that Λ^k V* is a subspace of ⊗^k V*, and Λ^1 V* = ⊗^1 V* = V*. Again we set Λ^0 V* = R.

Example. det is an n-form on R^n.
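The determinant example can be checked directly: viewed as a function of the n columns of a matrix, det is multilinear in each column and changes sign when two columns are swapped (a sketch for n = 3; the helper name is our own):

```python
import numpy as np

def det3(a, b, c):
    """det as a 3-tensor on R^3: evaluate on the column triple (a, b, c)."""
    return np.linalg.det(np.column_stack([a, b, c]))

rng = np.random.default_rng(0)
u, v, w, x = (rng.standard_normal(3) for _ in range(4))

# Multilinear: linear in the first slot (and, by symmetry of the check, each slot)
assert np.isclose(det3(2 * u + x, v, w), 2 * det3(u, v, w) + det3(x, v, w))
# Alternating: swapping two arguments flips the sign
assert np.isclose(det3(u, v, w), -det3(v, u, w))
```
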

Recall that a permutation σ ∈ S_k is called even or odd, depending on whether it is expressible as a product of an even or odd number of simple transpositions. For any k-tensor T and any σ ∈ S_k, we define another k-tensor T^σ by
T^σ(v1, ··· , vk) = T(v_{σ(1)}, ··· , v_{σ(k)}).
Clearly
• For any k-tensor T, (T^σ)^π = T^{σ∘π} for all σ, π ∈ S_k.
• A k-tensor T is symmetric if and only if T^σ = T for all σ ∈ S_k.
• A k-tensor T is a k-form if and only if T^σ = (−1)^σ T for all σ ∈ S_k, where (−1)^σ = 1 if σ is even, and (−1)^σ = −1 if σ is odd.
For any k-tensor T on V, we consider the anti-symmetrization map
Alt(T) = (1/k!) Σ_{π∈S_k} (−1)^π T^π.

Lemma 2.3. The map Alt is a projection from ⊗^k V* to Λ^k V*, i.e. it satisfies
(1) For any T ∈ ⊗^k V*, Alt(T) ∈ Λ^k V*.
(2) For any T ∈ Λ^k V*, Alt(T) = T.
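Before the proof, here is a numerical sketch of Alt in the array encoding used earlier (T[i1, ··· , ik] = T(e_{i1}, ··· , e_{ik}), so T^π permutes the array axes); the two assertions illustrate the two properties of the lemma:

```python
import numpy as np
from itertools import permutations
from math import factorial

def sign(perm):
    """(−1)^σ for a permutation given as a tuple of 0-based indices,
    computed from its cycle decomposition."""
    s, seen = 1, [False] * len(perm)
    for i in range(len(perm)):
        if seen[i]:
            continue
        j, cycle = i, 0
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            cycle += 1
        s *= (-1) ** (cycle - 1)
    return s

def alt(T):
    """Alt(T) = (1/k!) Σ_π (−1)^π T^π."""
    k = T.ndim
    out = sum(sign(pi) * np.transpose(T, pi) for pi in permutations(range(k)))
    return out / factorial(k)

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))   # a random 3-tensor on R^3
A = alt(T)
# Property (1): Alt(T) is alternating (swapping two arguments flips the sign)
assert np.allclose(A, -np.transpose(A, (1, 0, 2)))
# Property (2): Alt fixes forms, hence Alt is a projection: Alt(Alt(T)) = Alt(T)
assert np.allclose(alt(A), A)
```
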

Proof. (1) For any T ∈ ⊗^k V* and any σ ∈ S_k,
[Alt(T)]^σ = (1/k!) Σ_{π∈S_k} (−1)^π (T^π)^σ = (−1)^σ (1/k!) Σ_{π∈S_k} (−1)^{π∘σ} T^{π∘σ} = (−1)^σ Alt(T).
(2) If T ∈ Λ^k V*, then each summand (−1)^π T^π equals T. So Alt(T) = T since |S_k| = k!. □

We will need

Lemma 2.4. Let T, S, R be k-, l-, and m-forms respectively. Then
(1) Alt(T ⊗ S) = (−1)^{kl} Alt(S ⊗ T).
(2) Alt(Alt(T ⊗ S) ⊗ R) = Alt(T ⊗ S ⊗ R) = Alt(T ⊗ Alt(S ⊗ R)).

Proof. Exercise. □

Now we can define a “product operation” for forms:

Definition 2.5. The wedge product of T ∈ Λ^k V* and S ∈ Λ^l V* is the (k + l)-form
T ∧ S = ((k + l)!/(k! l!)) Alt(T ⊗ S).

The wedge product operation satisfies

Proposition 2.6. The wedge product operation ∧ : (Λ^k V*) × (Λ^l V*) → Λ^{k+l} V* is
(1) Bi-linear: (T, S) ↦ T ∧ S is linear in T and in S.
(2) Anti-commutative: T ∧ S = (−1)^{kl} S ∧ T.
(3) Associative: (T ∧ S) ∧ R = T ∧ (S ∧ R).

Proof. (1) follows from the definition. (2) follows from Lemma 2.4(1). (3) follows from the definition and Lemma 2.4(2). □

So it makes sense to talk about wedge products of three or more forms. For example, we have
T ∧ S ∧ R = ((k + l + m)!/(k! l! m!)) Alt(T ⊗ S ⊗ R).
One can easily extend this to wedge products of more than 3 forms. In particular, by definition we have: if f^1, ··· , f^k ∈ V*, then
f^1 ∧ ··· ∧ f^k = k! Alt(f^1 ⊗ ··· ⊗ f^k).
As a consequence,

Proposition 2.7. For any f^1, ··· , f^k ∈ V* and v1, ··· , vk ∈ V,
(f^1 ∧ ··· ∧ f^k)(v1, ··· , vk) = det(f^i(v_j)).

Proof. We have
(f^1 ∧ ··· ∧ f^k)(v1, ··· , vk) = k! Alt(f^1 ⊗ ··· ⊗ f^k)(v1, ··· , vk)
= Σ_{σ∈S_k} (−1)^σ f^1(v_{σ(1)}) ··· f^k(v_{σ(k)})

= det((f^i(v_j))). □

Now we are ready to prove

Theorem 2.8. Let {e^1, ··· , e^n} be a basis of V*. Then the set of k-forms

{e^{i1} ∧ e^{i2} ∧ ··· ∧ e^{ik} | 1 ≤ i1 < i2 < ··· < ik ≤ n}
forms a basis of Λ^k V*. In particular, dim Λ^k V* = (n choose k).

Proof. Again we denote by {e_1, ··· , e_n} the dual basis in V. For any multi-index I = (i1, ··· , ik) with i1 < ··· < ik, we let Ω^I = e^{i1} ∧ ··· ∧ e^{ik}. Then for any multi-index J = (j1, ··· , jk) with j1 < ··· < jk,
Ω^I(e_{j1}, ··· , e_{jk}) = δ^{i1,··· ,ik}_{j1,··· ,jk}.
It follows that these Ω^I's are linearly independent.
Moreover, since any T ∈ Λ^k V* is a k-tensor, we can write T = Σ_I T_I E^I, where I = (i1, ··· , ik) runs over all 1 ≤ i1, ··· , ik ≤ n, and E^I is as in the proof of Theorem 1.3. Note that Ω^I = k! Alt(E^I). Here I doesn't have to be increasing, but we have: Ω^I = 0 if two indices in I are equal, and Ω^I = ±Ω^{I′} if I contains no equal indices, where I′ is the re-arrangement of I in increasing order. So
T = Alt(T) = Σ_I T_I Alt(E^I) = Σ_I (1/k!) T_I Ω^I
is a linear combination of Ω^I with I's being only increasing indices. □

Remark. As an immediate consequence, we see that Λ^n V* is one dimensional, so any n-form is a multiple of the non-trivial n-form “det”. Moreover, for k > n, Λ^k V* = 0.

Finally we define

Definition 2.9. The interior product of a vector v ∈ V with a linear k-form α ∈ Λ^k V* is the (k − 1)-form
ι_v α(X1, ··· , X_{k−1}) := α(v, X1, ··· , X_{k−1}).

Definition 2.10. Let L : W → V be a linear map. Then we can define a pullback map
L* : Λ^k V* → Λ^k W*,  (L*α)(X1, ··· , Xk) := α(L(X1), ··· , L(Xk)).

It is easy to prove

Proposition 2.11. Let α be a linear k-form on V and β a linear l-form on V. Then
(1) For any v ∈ V, ι_v(α ∧ β) = (ι_v α) ∧ β + (−1)^k α ∧ ι_v β.
(2) For any linear map L : W → V, L*(α ∧ β) = L*α ∧ L*β.
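Proposition 2.11(2) can be checked numerically in the simplest nontrivial case, two 1-forms (a sketch in our own encoding): a 1-form on R^n is a coefficient vector with pullback L^T α, a 2-form is an antisymmetric matrix with pullback L^T ω L, and for k = l = 1 Definition 2.5 gives α ∧ β = α ⊗ β − β ⊗ α.

```python
import numpy as np

def wedge_1forms(a, b):
    """α ∧ β = α ⊗ β − β ⊗ α (the k = l = 1 case of Definition 2.5)."""
    return np.outer(a, b) - np.outer(b, a)

rng = np.random.default_rng(2)
L = rng.standard_normal((3, 2))     # a linear map L : W = R^2 → V = R^3
alpha = rng.standard_normal(3)      # 1-forms on V = R^3
beta = rng.standard_normal(3)

lhs = L.T @ wedge_1forms(alpha, beta) @ L       # L*(α ∧ β), a 2-form on W
rhs = wedge_1forms(L.T @ alpha, L.T @ beta)     # (L*α) ∧ (L*β)
assert np.allclose(lhs, rhs)
```
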