
3.5 The Grassmannian

3.5.1 The exterior product of two vectors

Let us recall the exterior product on a finite dimensional K-vector space V. An alternating bilinear form is a map B : V × V → K satisfying the axioms:

(Bil 1) B(v, w) = −B(w, v),

(Bil 2) B(λ1v1 + λ2v2, w) = λ1B(v1, w) + λ2B(v2, w),

for all λ• ∈ K and v•, v, w ∈ V. We can add alternating bilinear forms and multiply them with scalars, and we thus obtain the K-vector space of alternating bilinear forms on V. This space is isomorphic to the space of skew-symmetric matrices of the appropriate size. The second exterior power Λ²V of V is the dual of the space of alternating bilinear forms on V. The exterior product of v and w is the element (v ∧ w) ∈ Λ²V given by

(v ∧ w)(B) := B(v, w).

We have the obvious properties:

(i) v ∧ w = −w ∧ v,

(ii) (λ1v1 + λ2v2) ∧ w = λ1(v1 ∧ w) + λ2(v2 ∧ w),

(iii) If {v1, . . . , vn} is a basis of V, then {vi ∧ vj : 1 ≤ i < j ≤ n} is a basis of Λ²V.

Furthermore: v ∧ w = 0 if and only if v = λw for some λ ∈ K.
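Properties (i), (ii) and the proportionality criterion above can be checked numerically by identifying v ∧ w with the vector of 2-minors of the matrix with rows v and w (this identification is justified in Remark 3.12 below). A small sketch in Python with numpy, chosen here purely for illustration:

```python
import itertools
import numpy as np

def wedge2(v, w):
    """Coordinates of v ∧ w in the basis {e_i ∧ e_j : i < j} of Λ²K^n,
    i.e. the 2-minors of the 2 x n matrix with rows v, w."""
    n = len(v)
    return np.array([v[i] * w[j] - v[j] * w[i]
                     for i, j in itertools.combinations(range(n), 2)])

v = np.array([1.0, 2.0, 0.0, -1.0])
w = np.array([0.0, 1.0, 3.0, 2.0])
u = np.array([2.0, 0.0, 1.0, 1.0])

# (i) antisymmetry: v ∧ w = -(w ∧ v)
assert np.allclose(wedge2(v, w), -wedge2(w, v))

# (ii) bilinearity in the first argument
assert np.allclose(wedge2(3 * v + 2 * u, w),
                   3 * wedge2(v, w) + 2 * wedge2(u, w))

# v ∧ w = 0 exactly when v and w are proportional
assert np.allclose(wedge2(v, 5 * v), 0)
assert not np.allclose(wedge2(v, w), 0)
```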


3.5.2 The relation with two-dimensional subspaces

We are interested in the exterior product because every two-dimensional subspace of V corresponds to a one-dimensional subspace of Λ²V. Take U = Span_K{v, w} ⊆ V with dim U = 2. Then every other basis {u1, u2} of U is obtained from {v, w} via an invertible matrix A = ( a b ; c d ) ∈ GL_2(K), as u1 = av + bw and u2 = cv + dw. Then

u1 ∧ u2 = (av + bw) ∧ (cv + dw) = ad(v ∧ w) + bc(w ∧ v) = det(A)(v ∧ w).

So, Span_K{v ∧ w} ⊆ Λ²V is independent of the choice of basis of U. Thus, to every 2-dimensional subspace of V corresponds a point in P(Λ²V). That is why it is good to think about projective spaces as associated to vector spaces, and not simply as P^n. In this case, just writing P^(n(n−1)/2) would have been less illuminating. The first question that comes up is: do all 1-dimensional subspaces of Λ²V correspond to a 2-dimensional subspace of V? The answer is: No, not every element of Λ²V is of the form v ∧ w. For instance, v1 ∧ v2 + v3 ∧ v4, where {v1, . . . , v4} is a basis of V. (This is an easy exercise.) So, the association U ↦ [v ∧ w] gives a subset of the projective space P(Λ²V). Is this algebraic? We will see that it is, and that this holds more generally.
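Both facts — that a base change only rescales v ∧ w, and that v1 ∧ v2 + v3 ∧ v4 is not decomposable — can be verified numerically. The check for non-decomposability uses the quadric relation that will reappear as the polynomial f in Example 3.17; a sketch in Python with numpy (an illustrative tool, not part of the notes):

```python
import itertools
import numpy as np

def wedge2(v, w):
    # 2-minors of the 2 x n matrix with rows v, w: the coordinates of
    # v ∧ w in the basis {e_i ∧ e_j : i < j}
    n = len(v)
    return np.array([v[i] * w[j] - v[j] * w[i]
                     for i, j in itertools.combinations(range(n), 2)])

v = np.array([1.0, 2.0, 0.0, -1.0])
w = np.array([0.0, 1.0, 3.0, 2.0])
A = np.array([[1.0, 2.0], [3.0, 5.0]])            # det(A) = -1
u1 = A[0, 0] * v + A[0, 1] * w
u2 = A[1, 0] * v + A[1, 1] * w

# a base change of U = Span{v, w} rescales v ∧ w by det(A)
assert np.allclose(wedge2(u1, u2), np.linalg.det(A) * wedge2(v, w))

# for n = 4, the coordinates (p12, p13, p14, p23, p24, p34) of any
# decomposable tensor satisfy p14*p23 - p13*p24 + p12*p34 = 0 ...
p = wedge2(v, w)
assert np.isclose(p[2] * p[3] - p[1] * p[4] + p[0] * p[5], 0)

# ... while e1 ∧ e2 + e3 ∧ e4 violates it, so it is not of the form v ∧ w
q = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 1.0])
assert q[2] * q[3] - q[1] * q[4] + q[0] * q[5] != 0
```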

3.5.3 Exterior powers

To parametrize the collection of subspaces of any dimension with a projective algebraic set, we have to take a look at alternating multilinear⁵ maps M : V^k → W. A multilinear map is alternating if M(v1, . . . , vk) = 0 whenever there exists i ≠ j with vi = vj. The determinant, viewed as a map det : (K^n)^n → K, is an alternating multilinear form. Also the cross product ϕ : (K³)² → K³, given by

ϕ((a, b, c), (x, y, z)) := (bz − cy, cx − az, ay − bx),

is an alternating bilinear map.
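The alternating property of ϕ is easy to verify directly; a small Python sketch (numpy is used only for convenience — its cross product implements exactly this formula):

```python
import numpy as np

def phi(v, w):
    # the map φ((a,b,c),(x,y,z)) = (bz - cy, cx - az, ay - bx),
    # i.e. the classical cross product on K^3
    (a, b, c), (x, y, z) = v, w
    return np.array([b * z - c * y, c * x - a * z, a * y - b * x])

v = np.array([1.0, 2.0, 3.0])
w = np.array([-1.0, 0.0, 4.0])

assert np.allclose(phi(v, v), 0)               # alternating
assert np.allclose(phi(v, w), -phi(w, v))      # hence antisymmetric
assert np.allclose(phi(v, w), np.cross(v, w))  # agrees with numpy's cross
```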

The k-th exterior power of V, also known as the alternating k-fold tensor product, is given by the unique pair (Λ^k V, τk), where Λ^k V is a K-vector space and τk : V^k → Λ^k V is an alternating k-linear map, satisfying the following universal property: for every alternating k-linear map f : V^k → W, there exists a unique linear map g : Λ^k V → W such that f = g ◦ τk. That is, the following diagram commutes:

          f
   V^k ------> W
     \        ^
  τk  \      / g
       v    /
      Λ^k V

One can easily see that Λ^k V = V^⊗k / L, where L = Span_K{v1 ⊗ · · · ⊗ vk : vi = vj for some i ≠ j}.

Remark 3.11. Assume that {e1, . . . , en} is a basis of V. A basis of Λ^k V is given by the alternating tensors

{e_{i1} ∧ · · · ∧ e_{ik} : 1 ≤ i1 < · · · < ik ≤ n}.

So dim_K Λ^k V = (n choose k). In particular, Λ⁰V = K, Λ¹V = V and Λ^n V = Span_K(e1 ∧ · · · ∧ en). All higher exterior powers vanish.

⁵ that is, linear in each argument.

The exterior algebra of the n-dimensional K-vector space V is ΛV := ⊕_{i=0}^n Λ^i V, with product operation induced by

(v1 ∧ · · · ∧ vi) ∧ (w1 ∧ · · · ∧ wj) := v1 ∧ · · · ∧ vi ∧ w1 ∧ · · · ∧ wj ∈ Λ^{i+j} V,

for all v1 ∧ · · · ∧ vi ∈ Λ^i V and w1 ∧ · · · ∧ wj ∈ Λ^j V.

Remark 3.12. Let V = K^n and vi = (a_{i,1}, . . . , a_{i,n}) for i = 1, . . . , k. Then we can express the coordinates of v1 ∧ · · · ∧ vk with respect to the canonical basis in terms of the k-minors of (a_{ij}):

v1 ∧ · · · ∧ vk = Σ_{j1,...,jk} a_{1,j1} · · · a_{k,jk} · e_{j1} ∧ · · · ∧ e_{jk}
               = Σ_{1≤j1<···<jk≤n} ( Σ_{σ∈S_k} sign(σ) a_{1,σ(j1)} · · · a_{k,σ(jk)} ) · e_{j1} ∧ · · · ∧ e_{jk}
               = Σ_{1≤j1<···<jk≤n} det(A_{j1,...,jk}) · e_{j1} ∧ · · · ∧ e_{jk},

where A_{j1,...,jk} is the k × k submatrix of (a_{ij}) formed by the columns j1, . . . , jk. For example, if k = 2 and n = 3, then

(a_{ij}) = ( a11 a12 a13 )
           ( a21 a22 a23 )

and

v1 ∧ v2 = (a11a22 − a12a21) e1 ∧ e2 + (a11a23 − a13a21) e1 ∧ e3 + (a12a23 − a13a22) e2 ∧ e3.

So the exterior product encodes all the maximal minors of a matrix in one object. As a consequence we have:

v1 ∧ · · · ∧ vk = 0 ⇔ v1, . . . , vk are linearly dependent.

Furthermore, if the two sets {v1, . . . , vk} ⊆ V and {w1, . . . , wk} ⊆ V are both linearly independent, then

Span_K(v1, . . . , vk) = Span_K(w1, . . . , wk) ⇐⇒ v1 ∧ · · · ∧ vk = λ · w1 ∧ · · · ∧ wk for some λ ∈ K. (3.1)

This last equivalence needs some proof, but it is all basic linear algebra.
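Remark 3.12 and the linear-dependence criterion can both be tested numerically by computing all k-minors. A sketch in Python with numpy (an illustrative tool; the dictionary keys below play the role of the index sets j1 < · · · < jk):

```python
import itertools
import numpy as np

def wedge(vectors):
    """Coordinates of v1 ∧ ... ∧ vk in the basis
    {e_{j1} ∧ ... ∧ e_{jk} : j1 < ... < jk}: the k-minors of the k x n matrix."""
    M = np.array(vectors, dtype=float)
    k, n = M.shape
    return {cols: np.linalg.det(M[:, cols])
            for cols in itertools.combinations(range(n), k)}

# the k = 2, n = 3 example from Remark 3.12 (0-indexed basis labels)
w = wedge([[1, 2, 3],
           [4, 5, 6]])
assert np.isclose(w[(0, 1)], 1 * 5 - 2 * 4)   # a11 a22 - a12 a21
assert np.isclose(w[(0, 2)], 1 * 6 - 3 * 4)   # a11 a23 - a13 a21
assert np.isclose(w[(1, 2)], 2 * 6 - 3 * 5)   # a12 a23 - a13 a22

# v1 ∧ ... ∧ vk = 0 iff the vectors are linearly dependent
dep = wedge([[1, 0, 2, 0],
             [0, 1, 1, 0],
             [1, 1, 3, 0]])                   # row 3 = row 1 + row 2
assert all(np.isclose(val, 0) for val in dep.values())
```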

3.5.4 The Plücker Embedding

Definition 3.13. Let n ∈ N_{>0} and k ∈ N with 0 ≤ k ≤ n. The Grassmannian of k-planes in the n-dimensional K-vector space V is the set of all k-dimensional linear subspaces of V. We denote this by Gr(k, V). When V = K^n, we simply write

Gr(k, n) = {W ⊆ K^n : dim_K W = k}.

For k = 1, we obtain P^{n−1} (as a set, for the moment). For any k ≥ 1, by the correspondence between k-dimensional subspaces of V and (k − 1)-dimensional linear subspaces of P(V), we have a bijection between the collection of k-dimensional vector subspaces of K^n and the (k − 1)-dimensional linear subspaces of P^{n−1}. For this reason, one may find the notation Gr(k − 1, n − 1) for the same object. In order to avoid confusion, we will denote by

G(k − 1, n − 1) := {L ⊆ P^{n−1} : L is a linear subspace of dimension k − 1}.

The Plücker embedding is the map P` : Gr(k, V) → P(Λ^k V) given by

P`(Span_K(v1, . . . , vk)) := [v1 ∧ · · · ∧ vk] ∈ P(Λ^k V).

By (3.1) this is well defined and injective. For a k-dimensional subspace U ⊆ V, the homogeneous coordinates of P`(U) in P(Λ^k V) are called the Plücker coordinates of U. Once a basis of V is chosen (thus an identification with K^n, and a corresponding basis of Λ^k V as well), the Plücker coordinates are just the maximal (i.e. k-) minors of the matrix with the coordinates of v1, . . . , vk as rows.

Example 3.14. (a) For k = 1 and U = Span_K(a1e1 + · · · + anen), we get P`(U) = (a1 : . . . : an) ∈ P^{n−1}.

(b) Now take U = Span_K(e1 + e2, e1 + e3) ∈ Gr(2, 3). We have

(e1 + e2) ∧ (e1 + e3) = −e1 ∧ e2 + e1 ∧ e3 + e2 ∧ e3,

so P`(U) = (−1 : 1 : 1).
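Since the Plücker coordinates are just the maximal minors of the coordinate matrix, Example 3.14(b) can be reproduced in a few lines. A sketch in Python with numpy (illustrative only):

```python
import itertools
import numpy as np

def pluecker(vectors):
    # maximal minors of the k x n matrix with the given rows:
    # the Plücker coordinates of their span
    M = np.array(vectors, dtype=float)
    k, n = M.shape
    return np.array([np.linalg.det(M[:, c])
                     for c in itertools.combinations(range(n), k)])

# Example 3.14 (b): U = Span(e1 + e2, e1 + e3) in Gr(2, 3)
p = pluecker([[1, 1, 0],
              [1, 0, 1]])
assert np.allclose(p, [-1, 1, 1])   # Pl(U) = (-1 : 1 : 1)
```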

To show that the image of the Plücker embedding is a projective algebraic set, we need to express being a pure tensor, that is, being of the form v1 ∧ · · · ∧ vk ∈ Λ^k V, as a polynomial condition. For this we need the following lemma.

Lemma 3.15. Let 0 ≠ w ∈ Λ^k V with k < n, and define fw : V → Λ^{k+1} V by

fw(v) := v ∧ w.

Then rank fw ≥ n − k, and, most importantly,

rank fw = n − k ⇔ ∃ v1, . . . , vk ∈ V with w = v1 ∧ · · · ∧ vk.

Proof. Let r = dim Ker fw = n − rank fw. Choose a basis {v1, . . . , vr} of Ker fw, extend it to a basis {v1, . . . , vn} of V, and express w = Σ_{i1<···<ik} p_{i1...ik} v_{i1} ∧ · · · ∧ v_{ik}. Then use the fact that for i = 1, . . . , r we have vi ∧ w = 0. This implies that only the coefficients p_{i1...ik} with {1, . . . , r} ⊆ {i1, . . . , ik} may be nonzero. This gives the inequality r ≤ k, which is equivalent to rank fw ≥ n − k. For the second part, if r = k, then w = λ · v1 ∧ · · · ∧ vk for some λ ∈ K, since the only admissible index set is {1, . . . , k}. In the other direction, if w = w1 ∧ · · · ∧ wk, then, as w ≠ 0, the wi are linearly independent, and they belong to Ker fw. By r ≤ k, we are forced to have the required equality.

Proposition 3.16. The image of the Plücker embedding is an algebraic subset of P(Λ^k V).

Proof. For k = n we just have one point. If k < n, then [w] ∈ P(Λ^k V) is in P`(Gr(k, V)) if and only if w is a pure tensor in Λ^k V. By Lemma 3.15 this is equivalent to fw : V → Λ^{k+1} V having rank n − k. Choosing a basis, the matrix of this linear map will be a (n choose k+1) × n matrix, with entries the homogeneous coordinates of [w] (with some repetitions, signs, and permutations). As the rank is always at least n − k, the rank condition is given by the vanishing of the (n − k + 1)-minors of the corresponding matrix. These minors are homogeneous polynomials in the entries of the matrix, and thus in the homogeneous coordinates of w.
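The rank criterion of Lemma 3.15 can be tried out numerically for k = 2, n = 4 by assembling the matrix of fw from the coefficients of w. A sketch in Python with numpy (the sign bookkeeping below is the usual reordering sign for e_m ∧ e_i ∧ e_j, and is an implementation detail, not notation from the notes):

```python
import itertools
import numpy as np

def fw_matrix(p, n=4):
    """Matrix of f_w : V -> Λ³K^n, v ↦ v ∧ w, for w ∈ Λ²K^n with
    coefficients p[(i, j)] (0-indexed, i < j); rows indexed by triples."""
    triples = list(itertools.combinations(range(n), 3))
    M = np.zeros((len(triples), n))
    for (i, j), coeff in p.items():
        for m in range(n):
            if m in (i, j):
                continue
            idx = tuple(sorted((m, i, j)))
            # sign of sorting e_m ∧ e_i ∧ e_j into increasing order
            sign = (-1) ** sum(1 for a in (i, j) if a < m)
            M[triples.index(idx), m] += sign * coeff
    return M

# pure tensor: w = (e1 + e2) ∧ (e3 - e4), so rank fw = n - k = 2
pure = {(0, 2): 1.0, (0, 3): -1.0, (1, 2): 1.0, (1, 3): -1.0}
assert np.linalg.matrix_rank(fw_matrix(pure)) == 2

# non-pure tensor: w = e1 ∧ e2 + e3 ∧ e4, so rank fw = 4 > n - k
mixed = {(0, 1): 1.0, (2, 3): 1.0}
assert np.linalg.matrix_rank(fw_matrix(mixed)) == 4
```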

Example 3.17. Let's take a look at the specific case Gr(2, 4) ⊆ P^5. Denote the coordinates on P^5 by

x12, x13, x14, x23, x24, x34.

Let [w] = (p12 : p13 : p14 : p23 : p24 : p34) ∈ P^5. The corresponding element of Λ²V is

w = p12 · e1 ∧ e2 + p13 · e1 ∧ e3 + p14 · e1 ∧ e4

+ p23 · e2 ∧ e3 + p24 · e2 ∧ e4 + p34 · e3 ∧ e4.

This means that fw maps the basis elements e1, e2, e3, e4 as follows:

e1 7−→ p23 · e1 ∧ e2 ∧ e3 + p24 · e1 ∧ e2 ∧ e4 + p34 · e1 ∧ e3 ∧ e4

e2 7−→ −p13 · e1 ∧ e2 ∧ e3 − p14 · e1 ∧ e2 ∧ e4 + p34 · e2 ∧ e3 ∧ e4

e3 7−→ p12 · e1 ∧ e2 ∧ e3 − p14 · e1 ∧ e3 ∧ e4 − p24 · e2 ∧ e3 ∧ e4

e4 7−→ p12 · e1 ∧ e2 ∧ e4 + p13 · e1 ∧ e3 ∧ e4 + p23 · e2 ∧ e3 ∧ e4

So, the matrix which is supposed to have rank 3 is

( p23  −p13   p12    0  )
( p24  −p14    0    p12 )
( p34    0   −p14   p13 )
(  0    p34  −p24   p23 )

Computing the sixteen 3-minors of the above matrix (with the pij replaced by variables), we obtain the generators:

g1 = x14²x23 − x13x14x24 + x12x14x34,    g2 = x14x23x24 − x13x24² + x12x24x34,
g3 = x14x23x34 − x13x24x34 + x12x34²,    g4 = 0,
g5 = −x13x14x23 + x13²x24 − x12x13x34,   g6 = −x14x23² + x13x23x24 − x12x23x34,
g7 = 0,                                  g8 = x14x23x34 − x13x24x34 + x12x34²,
g9 = x12x14x23 − x12x13x24 + x12²x34,    g10 = 0,
g11 = −x14x23² + x13x23x24 − x12x23x34,  g12 = −x14x23x24 + x13x24² − x12x24x34,
g13 = 0,                                 g14 = x12x14x23 − x12x13x24 + x12²x34,
g15 = x13x14x23 − x13²x24 + x12x13x34,   g16 = x14²x23 − x13x14x24 + x12x14x34.

Once the matrix M is defined, the Macaulay2 command to obtain this list of minors is exteriorPower(3,M). The command minors(3,M) returns the ideal generated by these minors. There seems to be some pattern to this, namely if f = x14x23 − x13x24 + x12x34, then

g1 = x14 · f g2 = x24 · f g3 = x34 · f g5 = −x13 · f g6 = −x23 · f g8 = x34 · f g9 = x12 · f g11 = −x23 · f g12 = −x24 · f g14 = x12 · f g15 = x13 · f g16 = x14 · f

This means that the ideal generated by the 3-minors is

I = ⟨x12, x13, x14, x23, x24, x34⟩ · ⟨x14x23 − x13x24 + x12x34⟩ .

It follows that I_P(P`(Gr(2, 4))) = ⟨x14x23 − x13x24 + x12x34⟩. We will talk about this trick later. So the Grassmannian of planes in four-space is a quadric in P^5. Let us now have a look at the affine patches of the Grassmannian. These will shed some light on the structure of this variety. In particular, they provide an easy way to compute the dimension.
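The pattern observed above — every nonzero 3-minor is a variable times f — can be verified symbolically. The notes use Macaulay2 for this; as an alternative, here is a sketch in Python with sympy (an assumed substitute tool):

```python
import itertools
import sympy as sp

x12, x13, x14, x23, x24, x34 = sp.symbols('x12 x13 x14 x23 x24 x34')
M = sp.Matrix([
    [x23, -x13,  x12,    0],
    [x24, -x14,    0,  x12],
    [x34,    0, -x14,  x13],
    [  0,  x34, -x24,  x23],
])
f = x14 * x23 - x13 * x24 + x12 * x34

nonzero = 0
for rows in itertools.combinations(range(4), 3):
    for cols in itertools.combinations(range(4), 3):
        minor = sp.expand(M[list(rows), list(cols)].det())
        if minor == 0:
            continue
        nonzero += 1
        q, r = sp.div(minor, f)
        assert r == 0          # f divides every nonzero 3-minor
assert nonzero == 12           # four of the sixteen minors vanish
```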

3.5.5 Affine Cover of the Grassmannian

We will from now on identify Gr(k, n) with its image under the Plücker embedding, and write Gr(k, n) ⊆ P^((n choose k)−1). We view a point P ∈ Gr(k, n) both as a point in projective space, with projective coordinates (p_{i1...ik})_{1≤i1<···<ik≤n}, and as a k-dimensional subspace of K^n, given as the row span of a k × n matrix M of rank k. Consider the subset U0 := {P ∈ Gr(k, n) : p_{1,...,k} ≠ 0}. For P ∈ U0, write M = (A | B), where A consists of the first k columns (invertible, because p_{1,...,k} = det A ≠ 0) and B ∈ Mat_{k,n−k}(K). Multiplying from the left by A^{−1}, we obtain the canonical representative

 1 0  .  .. A−1B  (3.2) 0 1

Viewing Mat_{k,n−k}(K) as a (k · (n − k))-dimensional affine space, we have a map:

ϕ0 : A^{k(n−k)} → U0,    C ↦ row span of (I_k | C).

The correspondence between the matrix presentation of a subspace and the Plücker coordinates is obtained by taking minors, which are polynomials in the entries of C. So the map ϕ0 above gives us a morphism. Furthermore, two different matrices C and C′ define different subspaces, so ϕ0 is also bijective. The inverse of ϕ0 is defined by

U0 ∋ (p_{i1,...,ik})_{1≤i1<···<ik≤n} ↦ ( (−1)^{k+i} p_{1,...,î,...,k,k+j} )_{i,j} ∈ Mat_{k,n−k}(K) = A^{k(n−k)},

where î means that the index i is omitted, and the coordinates are normalized so that p_{1,...,k} = 1.

So ϕ0^{−1} is also a morphism, and we conclude that U0 is isomorphic to A^{k(n−k)}. We have just proved the following.

Proposition 3.18. The Grassmannian Gr(k, n) can be covered with finitely many affine patches isomorphic to the affine space A^{k(n−k)}. In particular, the dimension of Gr(k, n) is k(n − k).
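The chart ϕ0 and its inverse can be checked by a round trip: start from a matrix C, pass to Plücker coordinates by taking maximal minors, and recover C from them. A sketch for k = 2, n = 4 in Python with numpy (illustrative; the sign convention is the one derived above, written 0-indexed):

```python
import itertools
import numpy as np

k, n = 2, 4
C = np.array([[2.0, -1.0],
              [3.0,  5.0]])                       # a point of A^{k(n-k)}
M = np.hstack([np.eye(k), C])                     # row span = phi_0(C) in U_0

# Plücker coordinates: maximal minors, indexed by k-subsets of columns
p = {cols: np.linalg.det(M[:, list(cols)])
     for cols in itertools.combinations(range(n), k)}
assert np.isclose(p[(0, 1)], 1.0)                 # chart condition p_{1,...,k} = 1

# recover C entrywise: C[i, j] = (-1)^(k+i+1) * p_{indices with row i dropped
# from {0,...,k-1} and column k+j appended}  (0-indexed version of the text)
R = np.zeros_like(C)
for i in range(k):
    for j in range(n - k):
        idx = tuple(sorted([a for a in range(k) if a != i] + [k + j]))
        R[i, j] = (-1) ** (k + i + 1) * p[idx]
assert np.allclose(R, C)
```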

Every vector subspace of dimension k of An is given by a k × n matrix of maximal rank. Two matrices give the same subspace if they are equivalent modulo row transformations. So a canonical representative can be chosen as a matrix in reduced row echelon form. This gives a presentation of Gr(k, n) as a disjoint union of affine spaces. For instance, for k = 2, n = 4 we have the following possible reduced row echelon shapes:

( 1 0 ∗ ∗ )   ( 1 ∗ 0 ∗ )   ( 1 ∗ ∗ 0 )
( 0 1 ∗ ∗ )   ( 0 0 1 ∗ )   ( 0 0 0 1 )

( 0 1 0 ∗ )   ( 0 1 ∗ 0 )   ( 0 0 1 0 )
( 0 0 1 ∗ )   ( 0 0 0 1 )   ( 0 0 0 1 ).

That is, Gr(2, 4) is the disjoint union A^4 ⊔ A^3 ⊔ A^2 ⊔ A^2 ⊔ A^1 ⊔ A^0.

Remark 3.19. Assuming that the Plücker coordinate p_{1,...,k} = 1, we call the others the affine (or local) coordinates on U0. These correspond to all the minors of all sizes of the right-most k × (n − k) block (i.e. of A^{−1}B in (3.2)). By Laplace expansion of each such determinant we thus get only quadratic relations among the Plücker coordinates, which are called the Plücker relations. For instance, in Gr(2, 4) ∩ U0 we get

( 1 0 x1,3 x1,4 )
( 0 1 x2,3 x2,4 )

And the right-most 2-minor is the coordinate x3,4. Expanding this we get

x3,4 = x1,3x2,4 − x1,4x2,3.

To recover the homogeneous Plücker relations we have to homogenize with respect to x1,2, and we obtain the equation f from Example 3.17.
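This local computation is easy to replicate symbolically; a sketch in Python with sympy (an assumed tool, standing in for the notes' Macaulay2):

```python
import sympy as sp

x13, x14, x23, x24 = sp.symbols('x13 x14 x23 x24')
M = sp.Matrix([[1, 0, x13, x14],
               [0, 1, x23, x24]])

# the right-most 2-minor (columns 3 and 4) is the remaining Plücker
# coordinate x34, expressed in the local coordinates of U_0
x34 = M[:, [2, 3]].det()
assert sp.expand(x34 - (x13 * x24 - x14 * x23)) == 0

# homogenizing with respect to x12 gives x12*x34 = x13*x24 - x14*x23,
# i.e. x14*x23 - x13*x24 + x12*x34 = 0: the quadric f of Example 3.17
```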

A final result, without proof: Gr(k, n) ≅ Gr(n − k, n). The bijection is clear: take the orthogonal complement of the subspace. One just needs to check that this gives a morphism of quasi-projective varieties.

Consider watching the video (35’) now.

