Fall 2013

Multilinear Excursions

1 Multilinear forms

In these notes $V$ is a finite-dimensional vector space of dimension $n \ge 1$ over a field $F$. We write $V'$ to denote the dual vector space. I will use a semi-standard notation, namely $T^k(V')$ to denote the space of all $k$-forms (to be defined in a second!) on $V$.

• A 0-form is just an element of $F$; that is, $T^0(V') = F$.

• A 1-form is a linear functional on $V$; that is, $T^1(V') = V'$.

• If $k \ge 2$, then a $k$-form is a multilinear map from $V^k$ to $F$. That is, $\omega \in T^k(V')$ iff $\omega : \underbrace{V \times \cdots \times V}_{k\ \text{times}} \to F$ and $\omega$ satisfies that for each choice of $x_1, \dots, x_{k-1} \in V$ the maps
$$x \mapsto \omega(x, x_1, \dots, x_{k-1}),$$
$$x \mapsto \omega(x_1, x, x_2, \dots, x_{k-1}),$$
$$\vdots$$
$$x \mapsto \omega(x_1, \dots, x_{k-1}, x)$$
are linear.

One makes each $T^k(V')$ into a vector space over $F$ in the obvious way.

Let $\omega_1, \dots, \omega_k \in V'$. We define $\omega_1 \otimes \cdots \otimes \omega_k \in T^k(V')$ by
$$\omega_1 \otimes \cdots \otimes \omega_k(x_1, \dots, x_k) = \omega_1(x_1) \cdots \omega_k(x_k) = \prod_{j=1}^{k} [x_j, \omega_j].$$
So, for example, $\omega_1 \otimes \omega_2(x_1, x_2) = \omega_1(x_1)\omega_2(x_2)$, which can also be written as $\omega_1 \otimes \omega_2(x_1, x_2) = [x_1, \omega_1][x_2, \omega_2]$.
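To make this concrete, here is a minimal Python sketch (the representation and the names `pair` and `tensor` are mine, not from the notes): a 1-form on $F^n = \mathbb{R}^n$ is stored as its coefficient vector, and the tensor product multiplies the pairings $[x_j, \omega_j]$.

```python
# A 1-form on R^n is represented by its coefficient vector w,
# acting on a vector x through the pairing [x, w] = sum_i w[i] * x[i].
def pair(x, w):
    return sum(wi * xi for wi, xi in zip(w, x))

def tensor(*forms):
    """The k-form (w1 (x) ... (x) wk)(x1, ..., xk) = prod_j [xj, wj]."""
    def k_form(*vectors):
        assert len(vectors) == len(forms)
        result = 1
        for w, x in zip(forms, vectors):
            result *= pair(x, w)
        return result
    return k_form

# Example on R^2 with w1 = (1, 0), w2 = (0, 1) (the dual basis beta_1, beta_2):
omega = tensor((1, 0), (0, 1))
print(omega((3, 4), (5, 6)))   # w1(x1) * w2(x2) = 3 * 6 = 18
```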

Some notation could now be of some use. Let $J_n = \{1, \dots, n\}$. We will fix a basis $\{\beta_1, \dots, \beta_n\}$ of $V'$; we may suppose that it is dual to a basis $\{e_1, \dots, e_n\}$ of $V$. If $i = (i_1, \dots, i_k) \in J_n^k$ with $k \ge 1$ we define
$$\beta^i = \beta_{i_1} \otimes \cdots \otimes \beta_{i_k}.$$
This makes sense also if $k = 1$; then $i \in J_n^1 = J_n$ simply means that $i$ is a number in the range $1$ to $n$, and $\beta^i = \beta_i$. For the sake of completeness we also define $J_n^0 = \{0\}$ and $\beta^0 = 1 \in F$.

Theorem 1 Let $k \in \mathbb{N}_0 = \mathbb{N} \cup \{0\}$. Then $\{\beta^i : i \in J_n^k\}$ is a basis of $T^k(V')$.

Proof. The cases $k = 0, 1$ are obvious; assume $k \ge 2$. Notice first that if $i = (i_1, \dots, i_k)$, $j = (j_1, \dots, j_k) \in J_n^k$, then
$$\beta^i(e_{j_1}, \dots, e_{j_k}) = \prod_{\nu=1}^{k} \delta_{i_\nu j_\nu} = \begin{cases} 1, & \text{if } i_\nu = j_\nu \text{ for } \nu = 1, \dots, k, \\ 0, & \text{otherwise.} \end{cases}$$

Linear independence. Let $c_i \in F$ for $i \in J_n^k$ and $\sum_{i \in J_n^k} c_i \beta^i = 0$. That means that as a map from $V^k$ to $F$ this form is $0$; in particular,
$$0 = \sum_{i \in J_n^k} c_i \beta^i(e_{j_1}, \dots, e_{j_k}) = c_j$$
for all $j = (j_1, \dots, j_k) \in J_n^k$.

Spanning property. Assume $\omega \in T^k(V')$. If $x_1, \dots, x_k \in V$ we can write
$$x_j = \sum_{i=1}^{n} \xi_{ji} e_i$$
for elements $\xi_{ji} \in F$ for $1 \le j \le k$, $1 \le i \le n$. Then multilinearity implies
$$\omega(x_1, \dots, x_k) = \sum_{i_1, \dots, i_k = 1}^{n} \xi_{1,i_1} \cdots \xi_{k,i_k}\, \omega(e_{i_1}, \dots, e_{i_k}). \tag{1}$$
One sees that
$$\omega = \sum_{i = (i_1, \dots, i_k) \in J_n^k} \omega(e_{i_1}, \dots, e_{i_k})\, \beta^i.$$

In particular, the dimension of $T^k(V')$ is $n^k$.
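By Theorem 1, a $k$-form is determined by its values on basis tuples, and (1) tells us how to evaluate it anywhere. Here is a small Python sketch (all names my own, not from the notes) that stores a $2$-form on $\mathbb{R}^3$ through its values $\omega(e_i, e_j)$ and evaluates it via the expansion (1).

```python
import itertools

# By Theorem 1, a k-form is determined by its values on basis tuples.
# Here k = 2 on R^3: store omega(e_i, e_j), with 0-based indices, in a dict.
n, k = 3, 2
values = {(i, j): (i + 1) * 10 + (j + 1)          # an arbitrary choice of values
          for i in range(n) for j in range(n)}

def omega(*xs):
    """Evaluate the form via the multilinear expansion (1)."""
    total = 0
    for idx in itertools.product(range(n), repeat=k):
        coeff = 1
        for row, i in enumerate(idx):
            coeff *= xs[row][i]                   # the coordinate xi_{row, i}
        total += coeff * values[idx]
    return total

x, y = (1, 2, 0), (0, 1, 1)
# Direct evaluation of sum_{i,j} x_i y_j omega(e_i, e_j) for comparison:
direct = sum(x[i] * y[j] * values[(i, j)] for i in range(n) for j in range(n))
assert omega(x, y) == direct == 115
print(omega(x, y))
```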

2 Alternating linear forms

We keep assuming $V$ is an $n$-dimensional vector space over the field $F$, with $\{\beta_1, \dots, \beta_n\}$ the basis of $V'$ dual to the basis $\{e_1, \dots, e_n\}$ of $V$.

Let $\omega \in T^k(V')$, $k \ge 2$, and assume $\sigma \in S_k$. We define $\sigma \cdot \omega : V^k \to F$ by
$$\sigma \cdot \omega(x_1, \dots, x_k) = \omega(x_{\sigma(1)}, \dots, x_{\sigma(k)}).$$
It is easy to see, one just has to look, that $\sigma\,\cdot$ is a linear map from $T^k(V')$ onto itself.
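In code, the action $\sigma \cdot \omega$ just permutes the arguments before evaluating. A minimal Python sketch (the name `act` and the $0$-based indexing are mine; the notes index from $1$):

```python
# sigma is a permutation of {0, ..., k-1}, encoded as a tuple: sigma[i] is the image of i.
def act(sigma, omega):
    """Return sigma . omega, the map (x_1, ..., x_k) -> omega(x_sigma(1), ..., x_sigma(k))."""
    return lambda *xs: omega(*(xs[sigma[i]] for i in range(len(sigma))))

# Example with the 2-form omega(x, y) = x_1 * y_2 on R^2:
omega = lambda x, y: x[0] * y[1]
swap = (1, 0)                                # the transposition exchanging the two slots
x, y = (2, 3), (5, 7)
print(omega(x, y), act(swap, omega)(x, y))   # 2*7 = 14 and 5*3 = 15
```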

Exercise 1 Prove: If $\sigma, \tau \in S_k$, then
$$\sigma \cdot (\tau \cdot \omega) = (\tau\sigma) \cdot \omega \tag{2}$$
for all $\omega \in T^k(V')$.

Definition 1 Let $k \ge 2$. We say that a $k$-form $\omega$ is symmetric iff $\sigma \cdot \omega = \omega$ for all $\sigma \in S_k$. It is skew symmetric iff $\sigma \cdot \omega = \epsilon(\sigma)\,\omega$ for all $\sigma \in S_k$, where $\epsilon(\sigma)$ is the sign of $\sigma$: $\epsilon(\sigma) = 1$ if $\sigma$ is even, $-1$ if $\sigma$ is odd.

It is an immediate consequence of Exercise 1 (and the fact that all permutations are products of transpositions) that a $k$-form $\omega$ is symmetric if and only if $\sigma \cdot \omega = \omega$ for all transpositions $\sigma \in S_k$, and skew symmetric if and only if $\sigma \cdot \omega = -\omega$ for all transpositions $\sigma \in S_k$.

Definition 2 Let $k \ge 2$. A $k$-form $\omega$ is alternating iff $\omega(x_1, \dots, x_k) = 0$ whenever $x_1, \dots, x_k \in V$ and there exist $1 \le i \ne j \le k$ such that $x_i = x_j$.

Here is the silly situation one encounters when working with fields of characteristic 2. As someone who works in analysis, I have had no encounter with these fields outside of algebra courses. But finite fields in general play an important role in algebra, and fields of characteristic 2 play important roles in coding theory and cryptography, so we must either include them or point out why we exclude them. Suppose $F$ has characteristic 2 and $V$ is a vector space over $F$. If $x \in V$, then $x + x = 1 \cdot x + 1 \cdot x = (1 + 1)x = 0 \cdot x = 0$; that is, in such a vector space $x + x = 0$ for all $x$ in the space. Another way of phrasing this is $-x = x$ for all $x$ in the space. In conclusion, every symmetric $k$-form is skew symmetric and vice versa; there is no difference between the two notions. But not all such $k$-forms are alternating. The relation between the two concepts is given by the following simple lemma.
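For a concrete instance of the characteristic-2 phenomenon, the $2$-form $\omega(x, y) = x_1 y_1$ on $V = F^2$ with $F = \mathrm{GF}(2)$ is symmetric, hence also skew symmetric (since $-1 = 1$), but not alternating. A minimal sketch, doing arithmetic mod 2 (setup mine, not from the notes):

```python
# F = GF(2): arithmetic mod 2.  V = F^2.
def omega(x, y):
    """The symmetric 2-form omega(x, y) = x_1 * y_1 over GF(2)."""
    return (x[0] * y[0]) % 2

vectors = [(a, b) for a in (0, 1) for b in (0, 1)]

# Symmetric, and (since -1 = 1 in GF(2)) also skew symmetric:
assert all(omega(x, y) == omega(y, x) for x in vectors for y in vectors)

# ... but NOT alternating: it does not vanish on repeated arguments.
print(omega((1, 0), (1, 0)))   # 1, not 0
```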

Lemma 2 Let $k \ge 2$ and let $\omega \in T^k(V')$. If $\omega$ is alternating, then $\omega$ is skew symmetric. The converse is also true if the characteristic of $F$ is different from 2.

Proof. Assume $\omega$ is alternating and let $\sigma = (i\,j) \in S_k$ (the transposition exchanging $i$ and $j$, leaving all other numbers fixed). Assume, as we may, that $i < j$. I am going to assume a bit more to avoid too messy notation; it should be clear that what I do works in general. That is, I will assume that $i = 1$, $j = 2$. Let $x_1, \dots, x_k \in V$. Then, by multilinearity,
$$0 = \omega(x_1 + x_2, x_1 + x_2, x_3, \dots, x_k)$$
$$= \omega(x_1, x_1, x_3, \dots, x_k) + \omega(x_1, x_2, x_3, \dots, x_k) + \omega(x_2, x_1, x_3, \dots, x_k) + \omega(x_2, x_2, x_3, \dots, x_k)$$
$$= 0 + \omega(x_1, x_2, x_3, \dots, x_k) + \omega(x_2, x_1, x_3, \dots, x_k) + 0;$$
that is, $\omega(x_1, x_2, x_3, \dots, x_k) = -\omega(x_2, x_1, x_3, \dots, x_k)$. For general $i < j$ one uses the same idea. One applies $\omega$ to the vectors $y_1, \dots, y_k$ where $y_\ell = x_\ell$ if $\ell \ne i, j$ and $y_i = y_j = x_i + x_j$. By multilinearity one gets $\sigma \cdot \omega(x_1, \dots, x_k) = -\omega(x_1, \dots, x_k)$.

Conversely, assume that $\omega$ is skew symmetric. If $x_1, \dots, x_k \in V$ and $x_i = x_j$ for some $j \ne i$, then it is clear that for every $k$-form one has $\omega(x_1, \dots, x_k) = \sigma \cdot \omega(x_1, \dots, x_k)$, where $\sigma = (i\,j)$. Since $\omega$ is skew symmetric, $\sigma \cdot \omega(x_1, \dots, x_k) = -\omega(x_1, \dots, x_k)$, hence $\omega(x_1, \dots, x_k) = -\omega(x_1, \dots, x_k)$. If the characteristic of the field is 2, this implies nothing. But if it is different from 2, it implies $\omega(x_1, \dots, x_k) = 0$. In fact, if $F$ is a field of characteristic different from 2, then $2 = 1 + 1 \in F$, $2 \ne 0$, so that $2^{-1}$ exists in $F$. If $W$ is a vector space over such a field, and if $x \in W$ and $x + x = 0$, then $0 = 1 \cdot x + 1 \cdot x = (1 + 1)x = 2 \cdot x$, and we can multiply by $2^{-1}$ to get $x = 0$.
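Here is a small numerical check of the two halves of this proof, using the alternating $2$-form $\omega(x, y) = x_1 y_2 - x_2 y_1$ on $\mathbb{R}^2$ (a sketch of my own, not part of the notes):

```python
def omega(x, y):
    """An alternating 2-form on R^2 (in fact, the 2x2 determinant)."""
    return x[0] * y[1] - x[1] * y[0]

x, y = (2, 3), (5, 7)
xy = tuple(a + b for a, b in zip(x, y))

# omega vanishes on repeated arguments ...
assert omega(xy, xy) == 0
# ... and expanding omega(x+y, x+y) = 0 by multilinearity forces skew symmetry:
assert omega(x, x) + omega(x, y) + omega(y, x) + omega(y, y) == 0
assert omega(x, y) == -omega(y, x)
print(omega(x, y))   # 2*7 - 3*5 = -1
```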

Definition 3 (or notation) If $k \ge 2$, the subset of $T^k(V')$ consisting of all alternating $k$-forms is denoted by $\Lambda^k(V')$. We supplement this by defining $\Lambda^1(V') = V'$ and $\Lambda^0(V') = F$.

It is easy to see that $\Lambda^k(V')$ is a subspace of $T^k(V')$. In a previous version of these notes I had an alternation map that allowed one to proceed in a very elegant fashion. Unfortunately, that map doesn't work for fields of characteristic $p \ne 0$. That is, it doesn't work well. So let us bite the bullet and proceed in a generally valid way.

Up to a point, I do want to use an analog of this alternation map. Let $\omega_1, \dots, \omega_k \in V'$ with $k \ge 2$. I will define $\omega_1 \wedge \cdots \wedge \omega_k$ by
$$\omega_1 \wedge \cdots \wedge \omega_k = \sum_{\sigma \in S_k} \epsilon(\sigma)\, \sigma \cdot (\omega_1 \otimes \cdots \otimes \omega_k).$$
Notice that
$$\sigma \cdot (\omega_1 \otimes \cdots \otimes \omega_k)(x_1, \dots, x_k) = \omega_1 \otimes \cdots \otimes \omega_k(x_{\sigma(1)}, \dots, x_{\sigma(k)}) = \prod_{i=1}^{k} \omega_i(x_{\sigma(i)}) = \prod_{i=1}^{k} \omega_{\sigma^{-1}(i)}(x_i)$$
$$= \omega_{\sigma^{-1}(1)} \otimes \cdots \otimes \omega_{\sigma^{-1}(k)}(x_1, \dots, x_k);$$
that is, $\sigma \cdot (\omega_1 \otimes \cdots \otimes \omega_k) = \omega_{\sigma^{-1}(1)} \otimes \cdots \otimes \omega_{\sigma^{-1}(k)}$. Notice also that as $\sigma$ ranges through $S_k$, so does $\sigma^{-1}$, and $\epsilon(\sigma) = \epsilon(\sigma^{-1})$. So we can also define
$$\omega_1 \wedge \cdots \wedge \omega_k = \sum_{\sigma \in S_k} \epsilon(\sigma)\, \omega_{\sigma(1)} \otimes \cdots \otimes \omega_{\sigma(k)}. \tag{3}$$
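Formula (3) can be computed directly by summing over $S_k$. A minimal Python sketch (function names mine); as a sanity check it also verifies, ahead of Theorem 3 below, that a repeated factor kills the product:

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation given as a tuple of 0-based images, via its cycles."""
    s, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            if length % 2 == 0:   # a cycle of even length is an odd permutation
                s = -s
    return s

def wedge(*forms):
    """(w1 ^ ... ^ wk)(x1, ..., xk) = sum_sigma eps(sigma) prod_i [x_i, w_sigma(i)], as in (3)."""
    k = len(forms)
    def k_form(*xs):
        total = 0
        for p in permutations(range(k)):
            term = sign(p)
            for i in range(k):
                term *= sum(w * x for w, x in zip(forms[p[i]], xs[i]))
            total += term
        return total
    return k_form

b1, b2 = (1, 0), (0, 1)
print(wedge(b1, b2)((3, 4), (5, 6)))   # 3*6 - 4*5 = -2
print(wedge(b1, b1)((3, 4), (5, 6)))   # 0: a repeated factor kills the product
```

Since $|S_k| = k!$, this is only practical for small $k$, which is all we need for illustration.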

Let us see a few examples. If $k = 2$, then
$$\omega_1 \wedge \omega_2 = \omega_1 \otimes \omega_2 - \omega_2 \otimes \omega_1,$$
so $\omega_1 \wedge \omega_2(x_1, x_2) = \omega_1(x_1)\,\omega_2(x_2) - \omega_1(x_2)\,\omega_2(x_1)$. If $k = 3$,
$$\omega_1 \wedge \omega_2 \wedge \omega_3 = \omega_1 \otimes \omega_2 \otimes \omega_3 + \omega_2 \otimes \omega_3 \otimes \omega_1 + \omega_3 \otimes \omega_1 \otimes \omega_2 - \omega_1 \otimes \omega_3 \otimes \omega_2 - \omega_3 \otimes \omega_2 \otimes \omega_1 - \omega_2 \otimes \omega_1 \otimes \omega_3.$$

We will need the following theorem.

Theorem 3 Let $k \ge 2$ and let $\omega_1, \dots, \omega_k \in V'$.

1. Let $\tau \in S_k$. Then $\omega_{\tau(1)} \wedge \cdots \wedge \omega_{\tau(k)} = \epsilon(\tau)\, \omega_1 \wedge \cdots \wedge \omega_k$.

2. If there exist $i, j$, $1 \le i \ne j \le k$, such that $\omega_i = \omega_j$, then $\omega_1 \wedge \cdots \wedge \omega_k = 0$.

Proof. 1. Let $v_i = \omega_{\tau(i)}$ for $i = 1, \dots, k$. Then
$$\omega_{\tau(1)} \wedge \cdots \wedge \omega_{\tau(k)} = \sum_{\sigma \in S_k} \epsilon(\sigma)\, v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(k)}.$$
Now $v_i = \omega_{\tau(i)}$ implies $v_{\sigma(i)} = \omega_{\tau(\sigma(i))} = \omega_{\tau\sigma(i)}$. Thus
$$\omega_{\tau(1)} \wedge \cdots \wedge \omega_{\tau(k)} = \sum_{\sigma \in S_k} \epsilon(\sigma)\, \omega_{\tau\sigma(1)} \otimes \cdots \otimes \omega_{\tau\sigma(k)} = \epsilon(\tau) \sum_{\sigma \in S_k} \epsilon(\tau\sigma)\, \omega_{\tau\sigma(1)} \otimes \cdots \otimes \omega_{\tau\sigma(k)} = \epsilon(\tau)\, \omega_1 \wedge \cdots \wedge \omega_k,$$
using $\epsilon(\sigma) = \epsilon(\tau)\epsilon(\tau\sigma)$ and the fact that $\tau\sigma$ ranges over all of $S_k$ as $\sigma$ does.

2. Assume now $\omega_i = \omega_j$ for some $i \ne j$. For simplicity in notation let us assume that $i = 1$, $j = 2$. (One can achieve this by a permutation; by part 1 a permutation would at most multiply the form by $-1$.) Now $i, j$ can be used as variable indices. For $i, j \in J_k = \{1, \dots, k\}$, $i \ne j$, let us define
$$S_{i,j} = \{\sigma \in S_k : \sigma(i) = 1,\ \sigma(j) = 2\}.$$
The following should be clear: Every permutation is in one and exactly one set $S_{i,j}$. For example, the identity permutation is in $S_{1,2}$. That is, $S_{i,j} \cap S_{i',j'} = \emptyset$ except if $i = i'$, $j = j'$, and $S_k = \bigcup_{(i,j) \in J_k^2,\, i \ne j} S_{i,j}$. It should also be relatively clear that if we take the transposition $\tau = (1\,2)$ then the map $\sigma \mapsto \tau\sigma = \sigma'$ is a bijection from $S_{i,j}$ onto $S_{j,i}$. So any sum over all the elements of $S_k$ can be decomposed as follows:
$$\sum_{\sigma \in S_k} a_\sigma = \sum_{1 \le i < j \le k} \left( \sum_{\sigma \in S_{i,j}} (a_\sigma + a_{\sigma'}) \right).$$
Clearly, if $\omega_1 = \omega_2$, then $\omega_{\sigma(1)} \otimes \cdots \otimes \omega_{\sigma(k)} = \omega_{\sigma'(1)} \otimes \cdots \otimes \omega_{\sigma'(k)}$ for $\sigma \in S_{i,j}$, $\sigma' = \tau\sigma$. On the other hand, $\epsilon(\sigma') = \epsilon(\tau)\epsilon(\sigma) = -\epsilon(\sigma)$. Thus
$$\omega_1 \wedge \cdots \wedge \omega_k = \sum_{\sigma \in S_k} \epsilon(\sigma)\, \omega_{\sigma(1)} \otimes \cdots \otimes \omega_{\sigma(k)} = \sum_{1 \le i < j \le k} \left( \sum_{\sigma \in S_{i,j}} \left( \epsilon(\sigma)\, \omega_{\sigma(1)} \otimes \cdots \otimes \omega_{\sigma(k)} + \epsilon(\sigma')\, \omega_{\sigma'(1)} \otimes \cdots \otimes \omega_{\sigma'(k)} \right) \right) = 0.$$

We can now set up bases for these spaces of alternating forms. Recall our basis $\{e_1, \dots, e_n\}$ of $V$, with dual basis $\{\beta_1, \dots, \beta_n\}$ of $V'$. We have

Theorem 4

1. $\Lambda^k(V') = \{0\}$ if $k > n$.

2. Assume $1 \le k \le n$. Then
$$\{\beta_{i_1} \wedge \cdots \wedge \beta_{i_k} : 1 \le i_1 < \cdots < i_k \le n\}$$
is a basis of $\Lambda^k(V')$. In particular, $\dim \Lambda^k(V') = \binom{n}{k}$ (since the map $(i_1, \dots, i_k) \mapsto \{i_1, \dots, i_k\}$ from $k$-tuples in $J_n$ whose components are strictly ordered to subsets of $J_n$ having $k$ elements is a one-to-one correspondence).

Proof. Let $\omega \in \Lambda^k(V')$, assume $k \ge 2$ (if $k = 0$ or $1$, there is nothing to prove). Let $x_1, \dots, x_k \in V$ and write
$$x_j = \sum_{i=1}^{n} \xi_{ji} e_i$$
for elements $\xi_{ji} \in F$ for $1 \le j \le k$, $1 \le i \le n$. Then multilinearity implies
$$\omega(x_1, \dots, x_k) = \sum_{i_1, \dots, i_k = 1}^{n} \xi_{1,i_1} \cdots \xi_{k,i_k}\, \omega(e_{i_1}, \dots, e_{i_k}).$$
(This equation is the same as (1).) By the pigeonhole principle, if $k > n$, there has to be a repetition in the list $i_1, i_2, \dots, i_k$, hence $\omega(e_{i_1}, \dots, e_{i_k}) = 0$ for all $(i_1, \dots, i_k) \in J_n^k$. It follows that $\omega = 0$. This proves $\Lambda^k(V') = \{0\}$ if $k > n$.

Assume now $2 \le k \le n$. Because $\omega$ is alternating, any term in (1) in which $i_\nu = i_\mu$ for $\nu \ne \mu$ will evaluate to $0$. Thus the only terms remaining are those in which all the $i_\nu$'s are distinct; in other words, terms where the set $\{i_1, \dots, i_k\}$ defined by the indices is a subset of exactly $k$ elements of $J_n$. Subsets of $k$ elements of $J_n$ are in one-to-one correspondence with $k$-tuples $(i_1, \dots, i_k)$ such that $1 \le i_1 < \cdots < i_k \le n$. Every $k$-tuple of distinct elements of $J_n$ can be obtained from one of these ordered ones by a permutation. Because $\omega$, being alternating, is also skew symmetric, we see that
$$\sum_{i_1, \dots, i_k = 1}^{n} \xi_{1,i_1} \cdots \xi_{k,i_k}\, \omega(e_{i_1}, \dots, e_{i_k}) = \sum_{1 \le i_1 < \cdots < i_k \le n}\ \sum_{\sigma \in S_k} \xi_{1,i_{\sigma(1)}} \cdots \xi_{k,i_{\sigma(k)}}\, \omega(e_{i_{\sigma(1)}}, \dots, e_{i_{\sigma(k)}}).$$
Since $\omega$ is skew symmetric, $\omega(e_{i_{\sigma(1)}}, \dots, e_{i_{\sigma(k)}}) = \epsilon(\sigma)\, \omega(e_{i_1}, \dots, e_{i_k})$, so setting
$$\eta_{i_1,\dots,i_k} = \sum_{\sigma \in S_k} \epsilon(\sigma)\, \xi_{1,i_{\sigma(1)}} \cdots \xi_{k,i_{\sigma(k)}},$$
we get
$$\omega(x_1, \dots, x_k) = \sum_{1 \le i_1 < \cdots < i_k \le n} \eta_{i_1,\dots,i_k}\, \omega(e_{i_1}, \dots, e_{i_k}). \tag{4}$$

Now let $(i_1, \dots, i_k), (j_1, \dots, j_k) \in J_n^k$ with $1 \le i_1 < \cdots < i_k \le n$ and $1 \le j_1 < \cdots < j_k \le n$. Then
$$\beta_{i_1} \wedge \cdots \wedge \beta_{i_k}(e_{j_1}, \dots, e_{j_k}) = \sum_{\sigma \in S_k} \epsilon(\sigma) \prod_{\nu=1}^{k} \beta_{i_\nu}(e_{j_{\sigma(\nu)}}).$$
Suppose $\prod_{\nu=1}^{k} \beta_{i_\nu}(e_{j_{\sigma(\nu)}}) \ne 0$. Then $\beta_{i_\nu}(e_{j_{\sigma(\nu)}}) \ne 0$ for $\nu = 1, 2, \dots, k$; then, due to the bases being mutually dual, $j_{\sigma(\nu)} = i_\nu$ for $\nu = 1, \dots, k$. But the $i_\nu$'s are ordered, which means we also must have $1 \le j_{\sigma(1)} < \cdots < j_{\sigma(k)} \le n$; this is only possible if $\sigma$ is the identity. It follows that $\sigma = \mathrm{id}$ and $(j_1, \dots, j_k) = (i_1, \dots, i_k)$. If this happens, then $\prod_{\nu=1}^{k} \beta_{i_\nu}(e_{j_{\sigma(\nu)}}) = 1$. In sum,
$$\beta_{i_1} \wedge \cdots \wedge \beta_{i_k}(e_{j_1}, \dots, e_{j_k}) = \begin{cases} 1, & \text{if } (j_1, \dots, j_k) = (i_1, \dots, i_k), \\ 0, & \text{otherwise.} \end{cases} \tag{5}$$

From this, if we have, as before, $x_1, \dots, x_k \in V$ and
$$x_j = \sum_{i=1}^{n} \xi_{ji} e_i$$
for $j = 1, \dots, k$, we have (by the computation just done, replacing $\omega$ by $\beta_{j_1} \wedge \cdots \wedge \beta_{j_k}$, $1 \le j_1 < \cdots < j_k \le n$, and by (5))
$$\beta_{j_1} \wedge \cdots \wedge \beta_{j_k}(x_1, \dots, x_k) = \sum_{1 \le i_1 < \cdots < i_k \le n} \eta_{i_1,\dots,i_k}\, \beta_{j_1} \wedge \cdots \wedge \beta_{j_k}(e_{i_1}, \dots, e_{i_k}) = \eta_{j_1,\dots,j_k}. \tag{6}$$
Define
$$\omega' = \sum_{1 \le i_1 < \cdots < i_k \le n} \omega(e_{i_1}, \dots, e_{i_k})\, \beta_{i_1} \wedge \cdots \wedge \beta_{i_k}.$$
Then, by (6) and (4),
$$\omega'(x_1, \dots, x_k) = \sum_{1 \le i_1 < \cdots < i_k \le n} \omega(e_{i_1}, \dots, e_{i_k})\, \eta_{i_1,\dots,i_k} = \omega(x_1, \dots, x_k);$$
in other words, $\omega = \omega'$. But $\omega'$ is a linear combination of $\{\beta_{i_1} \wedge \cdots \wedge \beta_{i_k} : 1 \le i_1 < \cdots < i_k \le n\}$, proving this set spans $\Lambda^k(V')$. Assume now $c_{i_1,\dots,i_k} \in F$ for $1 \le i_1 < i_2 < \cdots < i_k \le n$ and
$$\sum_{1 \le i_1 < \cdots < i_k \le n} c_{i_1,\dots,i_k}\, \beta_{i_1} \wedge \cdots \wedge \beta_{i_k} = 0.$$
Applying this zero $k$-form to $(e_{j_1}, \dots, e_{j_k})$ one gets at once that $c_{j_1,\dots,j_k} = 0$ for all $j_1 < \cdots < j_k$. It follows that the set $\{\beta_{i_1} \wedge \cdots \wedge \beta_{i_k} : 1 \le i_1 < \cdots < i_k \le n\}$ is also linearly independent, hence a basis of $\Lambda^k(V')$. Since this set has $\binom{n}{k}$ elements, we are done.
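Before stating the promised corollary, here is a short Python check of (5) and of the dimension count $\binom{n}{k}$, with $n = 4$, $k = 2$ over $\mathbb{R}$ (a sketch under my own naming, not from the notes):

```python
from itertools import combinations, permutations
from math import comb

n, k = 4, 2
e = [[1 if i == j else 0 for j in range(n)] for i in range(n)]   # basis e_1, ..., e_n of F^n
beta = e                                                         # dual basis, as coefficient rows

def sign(p):
    """Sign of a permutation of {0, ..., k-1}, by counting inversions."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def wedge_eval(idx, xs):
    """(beta_{i1} ^ ... ^ beta_{ik})(x_1, ..., x_k), computed from formula (3)."""
    total = 0
    for p in permutations(range(k)):
        term = sign(p)
        for v in range(k):
            term *= sum(a * b for a, b in zip(beta[idx[p[v]]], xs[v]))
        total += term
    return total

increasing = list(combinations(range(n), k))   # strictly increasing index tuples
# Property (5): on increasing basis tuples the wedge basis forms give 1 on the
# diagonal and 0 off it.
for i_idx in increasing:
    for j_idx in increasing:
        expected = 1 if i_idx == j_idx else 0
        assert wedge_eval(i_idx, [e[j] for j in j_idx]) == expected
print(len(increasing), comb(n, k))             # both 6 = dim of Lambda^2 for n = 4
```

Here is a corollary important enough to be called a theorem: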

Theorem 5 Let $V$ be a vector space of dimension $n$ over a field $F$ and let $\{e_1, \dots, e_n\}$ be a basis of $V$. There exists a unique alternating $n$-form $\omega$ on $V$ such that
$$\omega(e_1, \dots, e_n) = 1.$$

Proof. Take $\omega = \beta_1 \wedge \cdots \wedge \beta_n$. By (5), we have $\omega(e_1, \dots, e_n) = 1$. If $\omega' \in \Lambda^n(V')$, then $\omega' = c\omega$ for some $c \in F$, by Theorem 4, according to which $\dim \Lambda^n(V') = \binom{n}{n} = 1$. If $\omega'(e_1, \dots, e_n) = 1$, then $c = 1$ and $\omega' = \omega$.

As we shall eventually see, if $A$ is an $n \times n$ matrix with entries in $F$, in other words a square $n \times n$ array of field elements, we can identify $A$ with an $n$-tuple of vectors in $F^n$ as follows. If $A = (a_{ij})_{1 \le i, j \le n}$, define the column vectors
$$A^1 = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{n1} \end{pmatrix}, \quad A^2 = \begin{pmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{n2} \end{pmatrix}, \quad \cdots, \quad A^n = \begin{pmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{nn} \end{pmatrix}.$$
The map $A \mapsto (A^1, \dots, A^n)$ is, of course, one-to-one and onto from $M_n(F)$ to $(F^n)^n$. We identify $M_n(F)$ and $(F^n)^n$ by this correspondence and let $\{e_1, \dots, e_n\}$ be the canonical basis of $F^n$, the one that in the identification corresponds to the identity matrix. Then $\det$ is the one and only alternating $n$-form such that $\det(e_1, \dots, e_n) = 1$.
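Unwinding (3) for $\omega = \beta_1 \wedge \cdots \wedge \beta_n$ applied to the columns of $A$ gives $\det A = \sum_{\sigma \in S_n} \epsilon(\sigma) \prod_{j=1}^{n} a_{\sigma(j), j}$, the Leibniz formula. A minimal Python sketch (my own code, not from the notes) checking it on a small matrix:

```python
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation of {0, ..., n-1}, by counting inversions."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def det(A):
    """det A = (beta_1 ^ ... ^ beta_n)(A^1, ..., A^n): the Leibniz sum over S_n."""
    n = len(A)
    return sum(sign(p) * prod(A[p[j]][j] for j in range(n))
               for p in permutations(range(n)))

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
assert det(A) == 25                                  # matches cofactor expansion by hand
assert det([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1   # det(e_1, ..., e_n) = 1
print(det(A))
```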