GRASSMANNIANS AND THE PLÜCKER EMBEDDING

J. WARNER

1. Grassmannians

Definition 1.1. Let $V$ be an $n$-dimensional vector space over a field $k$. Let $r \in \mathbb{Z}$ with $1 \leq r \leq n$. Define
$$\mathrm{Grass}(r, V) := \{ r\text{-dimensional subspaces of } V \},$$
the Grassmannian of $r$-planes in $V$. Notice that $\mathrm{Grass}(1, V) \cong \mathbb{P}(V)$ as sets.

Fix a basis $\{v_1, \dots, v_n\}$ for $V$. We then have a map
$$p : M_{n,r}(k) \longrightarrow \bigsqcup_{i=0}^{r} \mathrm{Grass}(i, V)$$
given by sending $A = (a_{ij})$ to the subspace $\mathrm{span}\{u_1, \dots, u_r\}$, where $u_j = \sum_i a_{ij} v_i$. Under this map, the zero matrix is sent to the unique zero-dimensional subspace.

Example 1.2. Let $n = 4$, $r = 2$, and consider the $4 \times 2$ matrices
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad
B = \begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
C = \begin{pmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \\ 0 & 0 \end{pmatrix}.$$

Then $p(A)$ is the one-dimensional subspace spanned by $v_1 + v_2 + v_3 + v_4$, and $p(B)$ and $p(C)$ are both the two-dimensional subspace spanned by $v_1 + v_2$ and $v_1 + v_3$. Notice that $p(A) \in \mathrm{Grass}(r, V)$ if and only if $\mathrm{rank}(A) = r$.
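The example can be checked mechanically. The following is a minimal sketch using sympy (our choice of tool, not something the notes depend on); each matrix is stored by the coordinates of its columns in the basis $v_1, \dots, v_4$, so the rank computations reproduce the dimensions above and the last line confirms that $p(B) = p(C)$.

```python
from sympy import Matrix

# Columns are the coordinates of the spanning vectors u_j in the basis v_1, ..., v_4.
A = Matrix([[1, 1], [1, 1], [1, 1], [1, 1]])
B = Matrix([[1, 1], [1, 0], [0, 1], [0, 0]])
C = Matrix([[1, 1], [0, 1], [1, 0], [0, 0]])

print(A.rank(), B.rank(), C.rank())   # 1 2 2, so only p(B) and p(C) lie in Grass(2, V)

# p(B) = p(C): placing the columns of B and C side by side does not raise the rank,
# so the two column spans coincide.
print(Matrix.hstack(B, C).rank() == 2)   # True
```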

The following proposition shows that the full rank matrices in $M_{n,r}(k)$ form an open subset in the Zariski topology.

Proposition 1.3. Let $\Sigma$ be a subset of $\{1, 2, \dots, n\}$ of cardinality $r$, and for any $A \in M_{n,r}(k)$ let $\Sigma_A$ be the $r \times r$ matrix formed from the rows of $A$ corresponding to the elements of $\Sigma$. Then

$$\mathrm{rank}(A) = \max_{\Sigma} \mathrm{rank}(\Sigma_A).$$

Proof. The proof follows from the fact that the rank of a matrix is equal to the maximum number of linearly independent rows. $\square$

Corollary 1.4. $\mathrm{rank}(A) < r$ if and only if $\det(\Sigma_A) = 0$ for all $\Sigma$.

Let $M^{o}_{n,r}(k)$ be the set of full rank matrices. Then the corollary shows that $M^{o}_{n,r}(k)$ is the complement of the zero locus of a collection of polynomial equations in $M_{n,r}(k) \cong \mathbb{A}^{nr}$. Thus $M^{o}_{n,r}(k)$ is open in the Zariski topology of $\mathbb{A}^{nr}$. By restriction, we now have a map
$$p : M^{o}_{n,r}(k) \longrightarrow \mathrm{Grass}(r, V),$$
which we still denote by $p$. Notice that $p$ is surjective, as any subspace of dimension $r$ can be spanned by $r$ vectors.
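Corollary 1.4 gives a purely determinantal test for membership in $M^{o}_{n,r}(k)$: a matrix has full rank exactly when some $r \times r$ minor $\det(\Sigma_A)$ is nonzero. Here is a small sketch of that test, assuming sympy; the helper full_rank_via_minors is ours, introduced only for illustration.

```python
from itertools import combinations
from sympy import Matrix

def full_rank_via_minors(A, r):
    """True iff some r x r row-submatrix Sigma_A of A has nonzero determinant."""
    cols = list(range(A.cols))
    return any(A.extract(list(sigma), cols).det() != 0
               for sigma in combinations(range(A.rows), r))

A = Matrix([[1, 1], [1, 1], [1, 1], [1, 1]])
B = Matrix([[1, 1], [1, 0], [0, 1], [0, 0]])
print(full_rank_via_minors(A, 2), full_rank_via_minors(B, 2))               # False True
print(all(full_rank_via_minors(M, 2) == (M.rank() == 2) for M in (A, B)))   # True
```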

However, $p$ is not injective, as Example 1.2 shows. The map is injective up to an action of $GL_r(k)$.

Proposition 1.5. $p(A) = p(B)$ if and only if there is $C \in GL_r(k)$ with $A = BC$.

Proof. Suppose $p(A) = p(B) = U$. Let $u_j = \sum_i a_{ij} v_i$ and $u'_j = \sum_i b_{ij} v_i$. Then since $\mathrm{span}\{u_1, \dots, u_r\} = U = \mathrm{span}\{u'_1, \dots, u'_r\}$, there exists $C = (c_{ij}) \in GL_r(k)$ such that $u_j = \sum_i c_{ij} u'_i$ ($C$ is invertible because it is the change-of-basis matrix between two bases of $U$). Then we have

$$\sum_k a_{kj} v_k = u_j = \sum_i c_{ij} u'_i = \sum_i c_{ij} \sum_k b_{ki} v_k = \sum_k \Big( \sum_i b_{ki} c_{ij} \Big) v_k.$$
It follows that $A = BC$. Next suppose there is $C \in GL_r(k)$ with $A = BC$. Then, using the notation above, $u_j = \sum_i c_{ij} u'_i$, so that $p(A) = p(B)$. $\square$

Example 1.6. In the example above, notice that $p(B) = p(C)$, and that the matrix
$$D = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
satisfies $B = CD$.
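The relation $B = CD$ of Example 1.6 can be verified directly; here is a short check, again assuming sympy.

```python
from sympy import Matrix

B = Matrix([[1, 1], [1, 0], [0, 1], [0, 0]])
C = Matrix([[1, 1], [0, 1], [1, 0], [0, 0]])
D = Matrix([[0, 1], [1, 0]])   # the column swap from Example 1.6

print(B == C * D)      # True: B = CD
print(D.det() != 0)    # True: D lies in GL_2(k), so p(B) = p(C) by Proposition 1.5
```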

The proposition shows that $GL_r$ acts freely and transitively on the fibers of $p$. Thus, every fiber is in one-to-one correspondence with the elements of $GL_r$, but there is no distinguished identity element in the fiber. We can choose such a distinguished element if we restrict our attention to certain open sets of $\mathrm{Grass}(r, V)$.

For any $\Sigma \subset \{1, \dots, n\}$ of cardinality $r$, let $U_\Sigma \subset \mathrm{Grass}(r, V)$ be the set of all $r$-planes $U$ such that for any matrix $A_U$ with $p(A_U) = U$, the $r \times r$ matrix $\Sigma_{A_U}$ is invertible. Notice $U_\Sigma$ is an open set in bijection with $\mathbb{A}^{r(n-r)}$, and the $U_\Sigma$ form an open covering of $\mathrm{Grass}(r, V)$.

For $U \in U_\Sigma$, let $A^\Sigma_U$ be the distinguished element of $p^{-1}(U)$ such that $\Sigma_{A^\Sigma_U} = I_r$, where $I_r$ is the $r \times r$ identity matrix. The map $s : U_\Sigma \to M^{o}_{n,r}(k)$ sending $U$ to $A^\Sigma_U$ is a section of $p$ over $U_\Sigma$. This distinguished choice of element in every fiber above the open set $U_\Sigma$ allows us to make the identification $p^{-1}(U_\Sigma) \cong U_\Sigma \times GL_r$. The above discussion can be summarized by saying that $p$ is a principal $GL_r$-torsor, locally trivial in the Zariski topology.
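As a concrete illustration of the section $s$: if $p(A_U) = U$ and $\Sigma_{A_U}$ is invertible, then $A_U(\Sigma_{A_U})^{-1}$ represents the same subspace by Proposition 1.5 and has $I_r$ in the $\Sigma$-rows, so it is exactly $A^\Sigma_U$. A minimal sketch, assuming sympy; the function name distinguished_representative is ours.

```python
from sympy import Matrix, eye

def distinguished_representative(A, sigma):
    """Right-multiply A by (Sigma_A)^{-1} so the rows indexed by sigma become I_r."""
    cols = list(range(A.cols))
    sigma_A = A.extract(list(sigma), cols)
    return A * sigma_A.inv()

B = Matrix([[1, 1], [1, 0], [0, 1], [0, 0]])
A_norm = distinguished_representative(B, (0, 1))   # Sigma = {1, 2}, written 0-indexed
print(A_norm.extract([0, 1], [0, 1]) == eye(2))    # True
print(A_norm)   # the remaining r(n - r) entries are affine coordinates for U in U_Sigma
```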

2. Plücker Embedding

Here we show how to consider the Grassmannian as a projective variety inside of $\mathbb{P}(\Lambda^r V)$. Consider $p : \mathrm{Grass}(r, V) \to \mathbb{P}(\Lambda^r V)$ given by

$$U = \mathrm{span}\{u_1, \dots, u_r\} \longmapsto u_1 \wedge \dots \wedge u_r.$$
To show this map embeds $\mathrm{Grass}(r, V)$ in $\mathbb{P}(\Lambda^r V)$ as a closed projective variety, we must show the map is well-defined, injective, and that its image is the zero locus of some homogeneous polynomials.

To see that the map is well-defined, suppose the $u_i$ and the $u'_i$ define the same $r$-plane in $V$. Let $C$ be the matrix expressing the $u_i$ as linear combinations of the $u'_i$. Then we have
$$u_1 \wedge \dots \wedge u_r = \Big(\sum_i c_{i1} u'_i\Big) \wedge \dots \wedge \Big(\sum_i c_{ir} u'_i\Big)
= \sum_{\sigma \in S_r} \mathrm{sgn}(\sigma)\, c_{1\sigma(1)} \cdots c_{r\sigma(r)}\; u'_1 \wedge \dots \wedge u'_r
= \det(C)\, u'_1 \wedge \dots \wedge u'_r,$$
so that the $u_i$ and the $u'_i$ define the same element in $\mathbb{P}(\Lambda^r V)$. Hence $p$ is well-defined.
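In coordinates, the computation above says that replacing a representing matrix $A$ by $AC$ with $C \in GL_r(k)$ multiplies every $r \times r$ minor of $A$ by $\det(C)$, so the corresponding point of $\mathbb{P}(\Lambda^r V)$ is unchanged; this also anticipates the description of the Plücker coordinates as minors given in Example 2.2 below. A sketch assuming sympy; the helper plucker is ours.

```python
from itertools import combinations
from sympy import Matrix

def plucker(A, r):
    """The r x r minors p_Sigma(A), with the subsets Sigma in lexicographic order."""
    cols = list(range(A.cols))
    return [A.extract(list(sigma), cols).det()
            for sigma in combinations(range(A.rows), r)]

B = Matrix([[1, 1], [1, 0], [0, 1], [0, 0]])
C = Matrix([[1, 2], [3, 4]])                 # an arbitrary element of GL_2(k), det = -2

print(plucker(B, 2))                                                # [-1, 1, 0, 1, 0, 0]
print(plucker(B * C, 2) == [C.det() * m for m in plucker(B, 2)])    # True
```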

To see that $p$ is injective, we use the following lemma.

Lemma 2.1. If $U = \mathrm{span}\{u_1, \dots, u_r\}$, then
$$U = \{ v \in V \mid v \wedge u_1 \wedge \dots \wedge u_r = 0 \}.$$

Proof. If $v \in U$, it is immediate that $v \wedge u_1 \wedge \dots \wedge u_r = 0$. Next, suppose that $v \in V$ with $v \wedge u_1 \wedge \dots \wedge u_r = 0$. Extend $\{u_1, \dots, u_r\}$ to a basis $\{u_1, \dots, u_n\}$ of $V$, and write $v = \sum_{i=1}^{n} a_i u_i$ as a linear combination of this basis. Then we have
$$0 = v \wedge u_1 \wedge \dots \wedge u_r = \Big(\sum_{i=1}^{n} a_i u_i\Big) \wedge u_1 \wedge \dots \wedge u_r = \sum_{i=r+1}^{n} a_i\, u_i \wedge u_1 \wedge \dots \wedge u_r.$$

However, the elements $u_i \wedge u_1 \wedge \dots \wedge u_r$ are linearly independent for $i \geq r + 1$, so by uniqueness of expression, $a_i = 0$ for $i \geq r + 1$ and $v \in U$. $\square$

With the lemma, we can now show that $p$ is injective. Suppose that $p(U) = p(U')$, i.e., that $u_1 \wedge \dots \wedge u_r = c(u'_1 \wedge \dots \wedge u'_r)$ for some $c \neq 0$. Then $u \in U$ if and only if $u \wedge u_1 \wedge \dots \wedge u_r = 0$, if and only if $c(u \wedge u'_1 \wedge \dots \wedge u'_r) = 0$, if and only if $u \wedge u'_1 \wedge \dots \wedge u'_r = 0$, if and only if $u \in U'$. Hence $U = U'$ and $p$ is injective.
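Lemma 2.1 also has a convenient coordinate form: up to sign, the coefficients of $v \wedge u_1 \wedge \dots \wedge u_r$ in the monomial basis of $\Lambda^{r+1} V$ are the $(r+1) \times (r+1)$ minors of the augmented matrix $[A_U \mid v]$, so membership in $U$ amounts to the vanishing of all those minors. A sketch assuming sympy; the helper in_subspace is ours.

```python
from itertools import combinations
from sympy import Matrix

A_U   = Matrix([[1, 1], [1, 0], [0, 1], [0, 0]])   # U = span{v1 + v2, v1 + v3}
v_in  = Matrix([2, 1, 1, 0])                       # (v1 + v2) + (v1 + v3), lies in U
v_out = Matrix([0, 0, 0, 1])                       # v4, does not lie in U

def in_subspace(A, v):
    """All (r+1) x (r+1) minors of [A | v] vanish  <=>  v ^ u_1 ^ ... ^ u_r = 0."""
    M = Matrix.hstack(A, v)
    r = A.cols
    cols = list(range(M.cols))
    return all(M.extract(list(sigma), cols).det() == 0
               for sigma in combinations(range(M.rows), r + 1))

print(in_subspace(A_U, v_in), in_subspace(A_U, v_out))   # True False
```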

Before verifying that the image of $p$ is a closed subset of $\mathbb{P}(\Lambda^r V)$, let's consider an example.

Example 2.2. Consider the 2-plane $U$ in a 4-dimensional space $V$ spanned by $v_1 + v_2$ and $v_1 + v_3$ (i.e., the 2-plane $p(B)$ from our first example). Then
$$p(U) = (v_1 + v_2) \wedge (v_1 + v_3) = -(v_1 \wedge v_2) + (v_1 \wedge v_3) + (v_2 \wedge v_3).$$
If we order the basis of $\Lambda^2 V$ lexicographically, we see that $U$ is mapped onto the point in $\mathbb{P}^5$ with homogeneous coordinates $[-1 : 1 : 0 : 1 : 0 : 0]$. Notice that the coordinates correspond to the minors $\det(\Sigma_B)$ of $B$.

The example gives an alternate and equivalent way of defining the Plücker embedding. If $p(A_U) = U$, let $p_\Sigma(A_U)$ be the minor of $A_U$ defined by $\Sigma$. Then we can define $p(U) = [p_\Sigma(A_U)]$ for some choice of ordering of the subsets $\Sigma$. It is usually convenient to order lexicographically as in the example. If we order the basis of $\Lambda^r V$ in the same way, this is equal to the embedding defined above.

Now, let's see that the image of $p$ satisfies a collection of homogeneous polynomial equations in $\binom{n}{r}$ variables.

Definition 2.3. Let $w \in \Lambda^r V$. We say that $w$ is divisible by $v \in V$ if $w = v \wedge \varphi$ for some $\varphi \in \Lambda^{r-1} V$, and that $w$ is totally decomposable if $w = v_1 \wedge \dots \wedge v_r$ for vectors $v_i \in V$.

The image of $p$ consists of the equivalence classes of all totally decomposable vectors in $\Lambda^r V$. First notice that $v$ divides $w$ if and only if $v \wedge w = 0 \in \Lambda^{r+1} V$. The forward direction is clear. For the reverse direction, extend $v$ to a basis and write $w$ as a sum of monomials of degree $r$ in this basis. If $v \wedge w = 0$, then every monomial in the expansion of $w$ is divisible by $v$, and hence so is $w$. From this fact, it follows that a nonzero $w$ is totally decomposable if and only if the space of vectors dividing it has dimension $r$ (notice the dimension is never larger than $r$).

To each $w$ we associate the linear map
$$\varphi(w) : V \to \Lambda^{r+1} V, \qquad v \mapsto w \wedge v.$$
Then $w$ is totally decomposable if and only if the rank of $\varphi(w)$ is equal to $n - r$. Since the rank can never be smaller, this is equivalent to $\mathrm{rank}(\varphi(w)) \leq n - r$.

What are the entries of the matrix of $\varphi(w)$? Since the map $w \mapsto \varphi(w)$ is also linear, it follows that the entries of $\varphi(w)$ are $k$-linear combinations of the $\binom{n}{r}$ homogeneous coordinates of $w$. If we view the homogeneous coordinates of $w$ as coordinate functions, that is, as variables $x_i$, then the entries of $\varphi(w)$ are homogeneous polynomials of degree 1 in the $x_i$. Then $\mathrm{rank}(\varphi(w)) \leq n - r$ if and only if every $(n - r + 1) \times (n - r + 1)$ minor vanishes. These minors of $\varphi(w)$ are given by homogeneous polynomials of degree $n - r + 1$ in the $x_i$, and we finally see that $w$ is totally decomposable if and only if its coordinates satisfy these homogeneous polynomials. Thus, the image of $p$ is indeed a projective variety inside of $\mathbb{P}(\Lambda^r V)$.
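The rank criterion is easy to test on small examples. The sketch below (sympy assumed; phi_matrix, w_good, and w_bad are names we introduce for illustration) assembles the matrix of $\varphi(w)$ for $n = 4$, $r = 2$ directly from the coordinates of $w$, and checks that the point of Example 2.2 is totally decomposable while $v_1 \wedge v_2 + v_3 \wedge v_4$ is not.

```python
from itertools import combinations
from sympy import zeros

n, r = 4, 2

def phi_matrix(w):
    """Matrix of phi(w): V -> Lambda^{r+1} V, v |-> w ^ v, in the monomial bases.

    w is stored by its coordinates w[(i, j)] with i < j (0-indexed); rows are indexed
    by the sorted (r+1)-subsets of {0, ..., n-1}, columns by the basis vectors e_k."""
    rows = list(combinations(range(n), r + 1))
    M = zeros(len(rows), n)
    for p, coeff in w.items():
        for k in range(n):
            if k in p:
                continue                                  # repeated index: the wedge is 0
            sign = (-1) ** sum(1 for j in p if j > k)     # move e_k into sorted position
            t = tuple(sorted(p + (k,)))
            M[rows.index(t), k] += sign * coeff
    return M

# Coordinates of p(U) = -(v1^v2) + (v1^v3) + (v2^v3) from Example 2.2:
w_good = {(0, 1): -1, (0, 2): 1, (1, 2): 1}
print(phi_matrix(w_good).rank() == n - r)   # True: totally decomposable

w_bad = {(0, 1): 1, (2, 3): 1}              # v1^v2 + v3^v4
print(phi_matrix(w_bad).rank())             # 4 > n - r: not totally decomposable
```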

What happens if we change our original choice of basis? How is the embedding of $\mathrm{Grass}(r, V)$ in $\mathbb{P}(\Lambda^r V)$ affected? To answer this question, we need the following construction.

Definition 2.4. Order the subsets $\Sigma$ of $\{1, \dots, n\}$ of cardinality $r$: $\Sigma_1 < \Sigma_2 < \dots < \Sigma_{\binom{n}{r}}$. For any $T \in GL_n$, let $T_{ij}$ be the $r \times r$ minor of $T$ defined by $\Sigma_i$ on rows and $\Sigma_j$ on columns. Define the derived matrix of $T$ to be $B_T \in M_{\binom{n}{r}}(k)$ with $b_{ij} = T_{ij}$.

Conjecture 2.5. For any $T \in GL_n$, $\det(B_T)$ is independent of the ordering $\Sigma_1 < \Sigma_2 < \dots < \Sigma_{\binom{n}{r}}$. Also, for any $T \in GL_n$, $\det(B_T) \neq 0$. In fact,
$$\det(B_T) = (\det T)^{\binom{n-1}{r-1}}.$$

The above formula for the determinant of a derived matrix in terms of the determinant of the original matrix has been verified for examples up to $n = 6$. For the following conjecture, we order the subsets $\Sigma$ lexicographically, as we have in defining the homogeneous coordinates in the Plücker embedding.

Conjecture 2.6. Suppose $V = \langle v_1, \dots, v_n \rangle = \langle v'_1, \dots, v'_n \rangle$, and let $T \in GL_n(k)$ be such that $v'_j = \sum_i t_{ij} v_i$. Let $p$ and $p'$ be the Plücker embeddings for the $v_i$ and the $v'_i$ respectively. Then $B_T \circ p(U) = p'(U) = p(T A_U)$, where we view $B_T$ as a linear transformation on the homogeneous coordinates of $\mathbb{P}(\Lambda^r V)$.
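Conjecture 2.5 can be probed numerically for small $n$, in the spirit of the verifications up to $n = 6$ mentioned above. The sketch below (sympy assumed; the matrix $T$ is one invertible example chosen for illustration) builds the derived matrix $B_T$ from the $r \times r$ minors of $T$, with the subsets ordered lexicographically, and compares $\det(B_T)$ with $(\det T)^{\binom{n-1}{r-1}}$.

```python
from itertools import combinations
from sympy import Matrix, binomial

n, r = 4, 2
T = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 0, 1, 2],
            [2, 1, 0, 1]])            # one invertible 4 x 4 matrix, chosen arbitrarily

subsets = list(combinations(range(n), r))      # Sigma_1 < ... < Sigma_{C(n,r)}, lex order
B_T = Matrix(len(subsets), len(subsets),
             lambda i, j: T.extract(list(subsets[i]), list(subsets[j])).det())

print(T.det())                                          # -6, so T lies in GL_4
print(B_T.det() == T.det() ** binomial(n - 1, r - 1))   # True for this example
```

Checks of this kind run quickly for $n \leq 6$, the range of examples reported above.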
