<<

arXiv:1701.06864v1 [math.AG] 24 Jan 2017

SINGULAR MATRICES OF LINEAR FORMS

DAMIANO TESTA

Abstract. We determine the irreducible components of the space of 3 × 3 matrices of linear forms with vanishing determinant. We show that there are four irreducible components and we identify them concretely. In particular, under elementary row and column operations with constant coefficients, a 3 × 3 matrix with vanishing determinant is equivalent to one of the following four: a matrix with a zero row, a zero column, a zero 2 × 2 square or an antisymmetric matrix.

Date: March 13, 2018.
2010 Mathematics Subject Classification. 14M12, 15A54, 13C40.
Key words and phrases. Determinantal varieties, determinantal ideals, matrices of linear forms.

Introduction

If X is a square matrix over a field and the determinant of X vanishes, it follows that there is a non-zero linear combination of the rows of X that vanishes. This is no longer true if the matrix is a matrix of linear forms and we only allow linear combinations by constants. For instance, letting x1, . . . , x5 be independent variables, no non-trivial constant linear combination of the rows of the matrices

(1)      (  0    x1   x2 )        ( x1  x2  x3 )
         ( −x1   0    x3 )        ( x4  0   0  )
         ( −x2  −x3   0  )        ( x5  0   0  )

vanishes, and there is likewise no non-trivial constant linear combination of the columns that vanishes, although both matrices have zero determinant. The determinant of the first matrix in (1) vanishes because the matrix is antisymmetric of odd order; the determinant of the second vanishes because of the zero 2 × 2 square. Thus, we have identified three types of 3 × 3 matrices with vanishing determinant that are easy to recognize: matrices with a zero row, matrices with a zero column and matrices with a zero 2 × 2 square. Up to multiplication on the left and on the right by an invertible matrix of constants, there is one more kind of matrix that has vanishing determinant (Theorem 7) and is not one of the three types above: the antisymmetric matrices (see Table 1).

   Zero row      Zero square    Antisymmetric    Zero column
  ( ⋆ ⋆ ⋆ )      ( ⋆ ⋆ ⋆ )      (  0  ϕ −χ )     ( ⋆ ⋆ 0 )
  ( ⋆ ⋆ ⋆ )      ( ⋆ 0 0 )      ( −ϕ  0  ψ )     ( ⋆ ⋆ 0 )
  ( 0 0 0 )      ( ⋆ 0 0 )      (  χ −ψ  0 )     ( ⋆ ⋆ 0 )

        Table 1. Matrices with vanishing determinant

For our purposes, the number of variables for the linear forms appearing in the matrices is at most 9, as we can always replace the initial variables with a basis for the span of the entries of the 3 × 3 matrix in question. As a consequence of Theorem 7, if the dimension of the span of the entries of the matrix is at least 7, then the determinant cannot vanish identically. Thus, in the interesting cases, the number of variables could be limited to 6. In a broader context, for m, n, r positive integers, the loci of m × n matrices with entries in a field and rank at most r are called determinantal varieties, see for instance [Har95, Lecture 9].
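Both claims about the matrices in (1) are easy to check mechanically. As a quick sanity check, the following Python sketch (the helper names det3, antisym and zero_square are ours, and the entries follow our reading of display (1)) verifies at random integer specializations that both determinants vanish:

```python
import random

def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def antisym(x1, x2, x3):
    # first matrix in (1): antisymmetric of odd order
    return [[0, x1, x2], [-x1, 0, x3], [-x2, -x3, 0]]

def zero_square(x1, x2, x3, x4, x5):
    # second matrix in (1): a zero 2x2 square in the lower right corner
    return [[x1, x2, x3], [x4, 0, 0], [x5, 0, 0]]

for _ in range(1000):
    x = [random.randint(-100, 100) for _ in range(5)]
    assert det3(antisym(*x[:3])) == 0
    assert det3(zero_square(*x)) == 0
```

Since both determinants are polynomials in x1, . . . , x5, the cofactor expansion in fact shows that they vanish identically, not just at the sampled points.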
The extension of the definition of determinantal varieties to the case in which the entries of the matrix are linear forms is especially relevant in the context of rational normal scrolls and varieties of minimal degree: all of this is very classical and the paper [EH87] is a good introduction to the topic, its history and relevant results. Eisenbud introduces in [Eis88] the property of 1-genericity for spaces of matrices: a sort of non-degeneracy condition for the entries of a matrix of linear forms. One of his aims is to obtain conditions under which sections of small codimension of certain spaces of matrices inherit the irreducibility property. His ideas have later been applied, developed and extended, see for instance [Eis07, Kat08, Rai12, Rai13, KPU17]. Our work starts with similar questions, but we do not limit the entries of our matrices. First, we do not impose any genericity condition: we study 3 × 3 matrices of linear forms with no restrictions. Second, we find several irreducible components for our schemes and, in fact, we are able to determine all of them. Finally, this leads to questions regarding a deeper understanding of the irreducible components of the schemes we introduce and their ideals.

1. Notation and preliminary results

Let k be a field and let n be a positive integer. We denote

• the algebra of polynomials in x1, . . . , xn with coefficients in k by R = k[x1, . . . , xn],
• the field of fractions of R by K = k(x1, . . . , xn).

Let M be a matrix with entries in R. The matrix M is homogeneous if the entries of M are homogeneous forms of the same degree; the matrix M is primitive if the only polynomials in R dividing all the entries of M are constants; the degree of M is d = deg M if the maximum of the degrees of the entries of M is d. In the cases of interest below, the entries of M will in fact be homogeneous of the same degree; we state our results for not necessarily homogeneous matrices: the statements in the homogeneous case follow at once, observing that if the product of two polynomials is homogeneous and non-zero, then the two polynomials are themselves homogeneous.

A vector is a matrix consisting of a single column; a row vector is a matrix consisting of a single row. We begin with an easy lemma on matrices of rank 1 and entries in R.

Lemma 1. Let a, b be positive integers and let X be an a × b matrix with entries in R. If the rank of X is 1, then there exist two non-zero column vectors u ∈ R^a and v ∈ R^b such that the identity X = uv^t holds. The vectors u and v satisfy deg u + deg v = deg X and, if the matrix X is homogeneous, then they are also homogeneous.

Proof. Denote by u′ a non-zero column of X; let c ∈ R denote the greatest common divisor of all the entries of the vector u′ and let u be the primitive column vector in R^a such that u′ = cu. Since the rank of the matrix X is one, all the columns of X are proportional over K to the vector u; since the entries of the matrix X are contained in R and the vector u is primitive, all the columns of X are obtained from u by multiplication by an element of R. Denote by v = (v1 ··· vb)^t ∈ R^b the column vector such that, for i ∈ {1, . . . , b}, the i-th entry of v is the factor vi ∈ R with the property that vi u is the i-th column of X. By construction the identity X = uv^t holds.
The assertion about homogeneous matrices is immediate. □

The next result is a direct consequence of Cramer’s rule. Recall that for a square matrix M, the adjoint of M is the matrix adj(M) whose entry in position (i, j) is (−1)^{i+j} times the determinant of the matrix obtained from M by removing the i-th column and the j-th row.

Lemma 2. Let d, r be non-negative integers and let X be an (r + 1) × (r + 1) matrix of rank r and degree d with entries in R. There exist two non-zero vectors u and v with entries in R such that the identities Xu = (X^t)v = 0 and deg u + deg v ≤ dr hold. If X is homogeneous, then the vectors u and v can be chosen to be homogeneous and satisfy the equality deg u + deg v = dr.

Proof. Let adj(X) denote the adjoint of the matrix X, so that the identities X adj(X) = adj(X)X = det(X) Id = 0 hold. The matrix adj(X) has degree at most dr; if X is homogeneous of degree d, then adj(X) is homogeneous of degree dr. Since the rank of X is r, it follows that the matrix adj(X) is non-zero; moreover the columns of adj(X) lie in the kernel of X and hence are proportional over the fraction field K of R, because the kernel of X is one-dimensional. Thus the rank of the matrix adj(X) is one and we can apply Lemma 1 to the matrix adj(X): let u and v be vectors with entries in R such that the equalities adj(X) = uv^t and deg u + deg v = deg adj(X) ≤ dr hold. Finally, the identity Xuv^t = 0 shows that the vector Xu is the zero vector, and similarly the row vector v^tX is the zero vector, completing the proof. □

To state the next lemma, we introduce a little notation. Denote by P the projective space of dimension n − 1 whose homogeneous coordinate ring is R. Let r be a non-negative integer and let ℓ = (ℓ1, . . . , ℓr) ∈ R^r be a sequence of r linear forms in R. Denote by L ⊂ P the linear subspace of P defined by the vanishing of the forms in ℓ and by c the codimension of L in P. Denote by IL the ideal sheaf of L; the sequence ℓ defines a surjective homomorphism of sheaves OP(−1)^⊕r −→ IL, sending (f1, . . . , fr) to f1ℓ1 + ··· + frℓr.
Twisting by OP(2) and taking global sections, we define the vector space V^n_{r,c} as the kernel of the homomorphism

H^0(P, OP(1))^⊕r −→ H^0(P, IL(2)).

Lemma 3. The dimension of the vector space V^n_{r,c} defined above is (r − c)n + C(c, 2), where C(c, 2) = c(c − 1)/2 denotes a binomial coefficient.

Proof. We will use the diagram

(2)      0 −→ IL(2) −→ OP(2) −→ OL(2) −→ 0
                ↑ ℓ
            OP(1)^⊕r

of sheaves on P. Observe that diagram (2) is exact and stays exact if we take global sections. By definition, the vector space V^n_{r,c} is the kernel of the homomorphism

H^0(P, OP(1))^⊕r −→ H^0(P, IL(2)).

Standard computations yield the equalities

dim H^0(P, IL(2)) = cn − C(c, 2),
dim H^0(P, OP(1))^⊕r = rn,

and, taking differences, we find dim V^n_{r,c} = (r − c)n + C(c, 2). □

Lemma 4. Let ℓ = (ℓ1, ℓ2, ℓ3) be a triple of linear forms and let Vℓ be the k-vector space of triples of linear forms (f1, f2, f3) such that the identity f1ℓ1 + f2ℓ2 + f3ℓ3 = 0 holds.

(1) If the linear forms ℓ1,ℓ2,ℓ3 are k-linearly independent, then Vℓ is spanned by the triples (0,ℓ3, −ℓ2), (−ℓ3, 0,ℓ1), (ℓ2, −ℓ1, 0). (2) If the linear forms ℓ1,ℓ2 are k-linearly independent and ϕ1ℓ1 + ϕ2ℓ2 + ℓ3 = 0 is a k-linear relation, then Vℓ is spanned by the triples (ℓ2, −ℓ1, 0) and ℓ(ϕ1,ϕ2, 1), as ℓ varies among all linear forms in R. Proof. We begin with some easy checks:

• the stated triples satisfy the identity f1ℓ1 + f2ℓ2 + f3ℓ3 = 0;
• if item (1) holds, the dimension of the span of the three vectors (0, ℓ3, −ℓ2), (−ℓ3, 0, ℓ1), (ℓ2, −ℓ1, 0) is 3;
• if item (2) holds, the dimension of the span of the n + 1 vectors (ℓ2, −ℓ1, 0) and ℓ(ϕ1, ϕ2, 1), as ℓ ranges among the n variables of the ring R, is n + 1.

To conclude it suffices to show that the dimension of the vector space Vℓ is 3 in case (1) and n + 1 in case (2). A moment’s thought makes it clear that the vector space Vℓ mentioned in this lemma is the same vector space V^n_{r,c} defined in Lemma 3 in the case (r, c) = (3, 3), if the forms ℓ1, ℓ2, ℓ3 are independent (item (1)), and in the case (r, c) = (3, 2), if the forms ℓ1, ℓ2 are independent and ℓ1, ℓ2, ℓ3 are not (item (2)). Specializing the results of Lemma 3, we finally obtain dim V^n_{3,3} = 3 and dim V^n_{3,2} = n + 1, as required. □
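The dimension counts of Lemmas 3 and 4 can be verified by direct linear algebra: write the map (f1, f2, f3) ↦ f1ℓ1 + f2ℓ2 + f3ℓ3 in coordinates and compute the corank. A sketch in Python with exact rational arithmetic (the helper names rank and dim_V are ours):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def rank(rows):
    # exact Gaussian elimination over the rationals
    mat = [list(map(Fraction, r)) for r in rows]
    r = 0
    for col in range(len(mat[0]) if mat else 0):
        piv = next((i for i in range(r, len(mat)) if mat[i][col] != 0), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][col] != 0:
                t = mat[i][col] / mat[r][col]
                mat[i] = [a - t * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

def dim_V(ells, n):
    # kernel dimension of (f1, f2, f3) -> f1*l1 + f2*l2 + f3*l3, where the
    # fi range over linear forms in n variables and each li is given by its
    # coefficient vector of length n
    mons = list(combinations_with_replacement(range(n), 2))  # quadric monomials
    rows = []
    for ell in ells:
        for k in range(n):  # basis triple: the variable x_k in one slot
            coeff = dict.fromkeys(mons, 0)
            for j, c in enumerate(ell):
                coeff[tuple(sorted((k, j)))] += c
            rows.append([coeff[m] for m in mons])
    return 3 * n - rank(rows)

n = 4
basis = [[1 if i == j else 0 for i in range(n)] for j in range(3)]  # x1, x2, x3
assert dim_V(basis, n) == 3            # independent case: (r, c) = (3, 3)
dep = [basis[0], basis[1], [a + b for a, b in zip(basis[0], basis[1])]]
assert dim_V(dep, n) == n + 1          # dependent case: (r, c) = (3, 2)
```

For ℓ = (x1, x2, x3) the kernel has dimension 3, spanned by the Koszul relations; replacing ℓ3 by ℓ1 + ℓ2 raises the dimension to n + 1, matching Lemma 4.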

2. Main result

Let Mn denote the k-vector space of 3 × 3 matrices whose entries are linear forms in R: the dimension of Mn is 9n. We denote the projective space associated to Mn by PMn, a projective space of dimension 9n − 1. By construction, the determinant induces a function det: Mn → R whose image is contained in the vector space R3 of forms of degree 3 in R. The dimension of the vector space R3 is r3 = C(n + 2, 3), so that the vanishing of the function det corresponds to the simultaneous vanishing of r3 cubic forms.

Definition 5. The space of 3 × 3 singular matrices of linear forms is the subscheme Fn of PMn defined by the vanishing of the function det.

When the index n is clear from the context, we will omit it; for instance, we may denote the scheme Fn simply by F. The space F is our main interest. An immediate consequence of the definition of the space F is that multiplying a 3 × 3 matrix of linear forms by an invertible matrix with entries in k on the left or on the right stabilizes F and its complement.

Definition 6. Two f × g matrices X, Y with entries in R are f-equivalent if there are an invertible f × f matrix F and an invertible g × g matrix G, both with entries in k, such that Y = FXG.

There is a straightforward relationship between Fn and linear subspaces of determinantal varieties. Let P^8 denote the 8-dimensional projective space with coordinates {xij}, for i, j ∈ {1, 2, 3}, and let X be the determinantal cubic with equation

          ( x11 x12 x13 )
X :   det ( x21 x22 x23 ) = 0.
          ( x31 x32 x33 )

A point in the variety Fn is a linear map of a vector space to the variety X. Equivalently, points of Fn correspond to (not necessarily injective) parameterizations of linear subspaces contained in X.

Theorem 7. Let X be a 3 × 3 matrix of linear forms. If the determinant det(X) vanishes, then X is f-equivalent to one of the following:

(1) a matrix with a zero row;
(2) a matrix with a zero column;
(3) a matrix with a zero square;
(4) an antisymmetric matrix.

Proof. If X is the zero matrix, then there is nothing to prove. If the rank of X is 1, then we can apply Lemma 1 to conclude that either the rows or the columns of X are k-proportional and we are in case (1) or (2). Thus, we reduce to the case in which the rank of the matrix X is 2 and we apply Lemma 2: let u, v be non-zero column vectors such that u is in the kernel of X, v is in the kernel of X^t and deg u + deg v = 2. If one of the vectors u or v has degree 0, i.e. is constant, then the columns or the rows of X are k-proportional and we are in case (1) or (2). Therefore we further reduce to the case in which the vectors u and v have degree 1 and are not R-proportional to constant vectors: the entries of u span a k-vector space of dimension at least 2 of linear forms and similarly for the entries of v.

The rows of the matrix X therefore are triples contained in the vector space Vu of Lemma 4 and, by our reductions, they span a three-dimensional linear subspace of Vu. If the entries u1, u2, u3 of u are k-linearly independent, then after multiplying the matrix X on the left by an invertible matrix with entries in k, we may assume that the rows of X are (0 u3 −u2), (−u3 0 u1), (u2 −u1 0): thus X is f-equivalent to the antisymmetric matrix

(  0   u3  −u2 )
( −u3  0   u1  )
(  u2 −u1  0   ).

If the entries u1, u2, u3 of u are k-linearly dependent and ϕ1u1 + ϕ2u2 + ϕ3u3 = 0 is a non-trivial linear relation among them, then after multiplying the matrix X on the left by an invertible matrix with entries in k, we may assume that the last two rows of X are (ℓϕ1 ℓϕ2 ℓϕ3) and (mϕ1 mϕ2 mϕ3), where ℓ, m are linear forms. The matrix X is f-equivalent to the matrix

( ⋆    ⋆    ⋆   )
( ℓϕ1  ℓϕ2  ℓϕ3 )
( mϕ1  mϕ2  mϕ3 )

and hence, after a sequence of column operations, also to the matrix

( ⋆  ⋆  ⋆ )
( ℓ  0  0 )
( m  0  0 )

with a zero square. □
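The proof above relies, via Lemma 2, on the adjoint identity X adj(X) = adj(X)X = det(X) Id. A quick randomized check of this identity in Python (the helper names are ours; the index convention follows the definition of the adjoint given before Lemma 2):

```python
import random

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def adj3(m):
    # entry (i, j) of the adjoint is (-1)^(i+j) times the determinant of the
    # 2x2 matrix obtained by removing the i-th column and the j-th row
    adj = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            r = [a for a in range(3) if a != j]   # keep rows different from j
            c = [b for b in range(3) if b != i]   # keep columns different from i
            minor = m[r[0]][c[0]] * m[r[1]][c[1]] - m[r[0]][c[1]] * m[r[1]][c[0]]
            adj[i][j] = (-1) ** (i + j) * minor
    return adj

def mul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

for _ in range(200):
    x = [[random.randint(-9, 9) for _ in range(3)] for _ in range(3)]
    d = det3(x)
    scalar = [[d if i == j else 0 for j in range(3)] for i in range(3)]
    assert mul3(x, adj3(x)) == scalar
    assert mul3(adj3(x), x) == scalar
```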

We define four linear subspaces of PMn:

• (zero row) Rn, the matrices of the form

  ( ⋆ ⋆ ⋆ )
  ( ⋆ ⋆ ⋆ ) ,        dim Rn = 6n − 1;
  ( 0 0 0 )

• (zero square) Sn, the matrices of the form

  ( ⋆ ⋆ ⋆ )
  ( ⋆ 0 0 ) ,        dim Sn = 5n − 1;
  ( ⋆ 0 0 )

• (antisymmetric) An, the matrices of the form

  (  0  ϕ −χ )
  ( −ϕ  0  ψ ) ,     dim An = 3n − 1;
  (  χ −ψ  0 )

• (zero column) Cn, the matrices of the form

  ( ⋆ ⋆ 0 )
  ( ⋆ ⋆ 0 ) ,        dim Cn = 6n − 1.
  ( ⋆ ⋆ 0 )

Let G = GL3 × GL3 be the group of ordered pairs of invertible 3 × 3 matrices. The group G acts on Mn by the rule (f, g) · m = f m g^t and this action induces an analogous action on PMn. The orbits under the G-action are precisely the f-equivalence classes of matrices. Denote

• by Rn the of the G-orbit of Rn, • by Sn the closure of the G-orbit of Sn, • by An the closure of the G-orbit of An, • by Cn the closure of the G-orbit of Cn. Each of these schemes is clearly irreducible, being the orbit of a linear subspace under an irreducible group. As a consequence of Theorem 7, we know that each G-orbit of a point of F intersects at least one of the linear subspaces R, S, A, C: the of k-rational points of F is the union of the four subvarieties R, S , A , C . In Section 3 we examine these four subvarieties of F . We shall see that the irreducible components of the scheme F are the varieties R, S , A , C .

3. The four components

We analyze here the four components of the scheme Fn that we determined in Theorem 7. Throughout this section, we assume that the integer n satisfies the inequality n ≥ 2: the case n = 1 is essentially the case of 3 × 3 matrices over a field. We begin by determining the stabilizers in G of each of the four linear subspaces Rn, Sn, An, Cn.

The stabilizers of matrices with a zero row or column. Let (f, g) in G be a pair preserving the set of matrices R with last row equal to zero. Observe that we can multiply the second element of the pair, g, by any invertible matrix and the pair still preserves the set R. It is then immediate to check that the stabilizers of R and C are the pairs of invertible matrices of the form

  Stabilizer of R:          Stabilizer of C:

  ( ⋆ ⋆ ⋆ )                      ( ⋆ ⋆ ⋆ )
  ( ⋆ ⋆ ⋆ ) , g             f ,  ( ⋆ ⋆ ⋆ )
  ( 0 0 ⋆ )                      ( 0 0 ⋆ )

for any f, g in GL3.

The stabilizer of matrices with a zero square. Let (f, g) in G be a pair preserving the set of matrices S with a square of zeros. For i, j in {1, 2, 3}, let fij and gij denote the (i, j)-th entries of f and of g and let eij denote the matrix with (i, j)-th entry equal to 1 and remaining entries equal to 0. The product f eij g^t is the rank one matrix that is the product of the i-th column of f times the j-th row of g^t. Thus the condition that the two matrices f e11 g^t, f e12 g^t lie in S translates to the equations

f21g21 = f21g31 = f21g22 = f21g32 =0

f31g21 = f31g31 = f31g22 = f31g32 =0.

Since the matrix g is invertible, the four entries g21, g31, g22, g32 of g cannot all vanish and we deduce that the entries f21 and f31 of f vanish. Similarly, the entries g21 and g31 of g vanish as well and finally the stabilizer of S consists of the pairs of matrices of the form

  ( ⋆ ⋆ ⋆ )     ( ⋆ ⋆ ⋆ )
  ( 0 ⋆ ⋆ ) ,   ( 0 ⋆ ⋆ ) .
  ( 0 ⋆ ⋆ )     ( 0 ⋆ ⋆ )

The stabilizer of antisymmetric matrices. Let (f, g) in G be a pair preserving the set of antisymmetric matrices A; for s, t, u in {0, 1} define the antisymmetric matrix e_stu by

           (  0   s  −t )
  e_stu =  ( −s   0   u )  .
           (  t  −u   0 )

The matrices f e100 g^t, f e010 g^t, f e001 g^t are antisymmetric; in particular, their diagonal entries vanish. The implied equations force each row of the matrix g to be proportional to the corresponding row of f. Imposing now the further constraints arising from the matrices f e100 g^t, f e010 g^t, f e001 g^t being antisymmetric shows that the matrices f and g themselves are proportional; clearly, such matrices stabilize A and we find that the stabilizer of A consists of the pairs of the form (f, λf) for f ∈ GL3 and λ ∈ k^∗.
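One half of the last statement is immediate to test: pairs (f, λf) do stabilize A, since (f a (λf)^t)^t = λ f a^t f^t = −f a (λf)^t. A randomized check in Python (the helper names are ours):

```python
import random

def transpose(m):
    return [list(r) for r in zip(*m)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def is_antisymmetric(m):
    return all(m[i][j] == -m[j][i] for i in range(3) for j in range(3))

for _ in range(200):
    f = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
    lam = random.choice([1, 2, -3])
    g = [[lam * entry for entry in row] for row in f]   # g = lambda * f
    s, t, u = (random.randint(-5, 5) for _ in range(3))
    a = [[0, s, -t], [-s, 0, u], [t, -u, 0]]            # antisymmetric
    # the pair (f, g) acts by m -> f m g^t; the result stays antisymmetric
    assert is_antisymmetric(mul(mul(f, a), transpose(g)))
```

Note that the check does not need f to be invertible: the identity f a (λf)^t = −(f a (λf)^t)^t holds for every matrix f.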

Having determined the stabilizers of the linear subspaces Rn, Sn, An, Cn, we now compute the dimensions of the irreducible components of the space Fn of 3 × 3 singular matrices of linear forms. Indeed, the dimension of the orbit under G of each linear subspace is the sum of the dimension of the linear subspace plus the codimension of its stabilizer:

dim(Rn) = 6n +1,

dim(Sn) = 5n +3,

dim(An) = 3n +7,

dim(Cn) = 6n + 1.

The special case n = 2. In the special case in which the number of variables of the ring R equals 2, the dimensions of the four subvarieties R2, A2, S2, C2 coincide and equal 13. An easy check shows that the subvarieties A2 and S2 coincide in this case, so that F2 is the union of three subvarieties: R2, A2 = S2, C2. A quick computation using the computer algebra package Magma shows that, as subvarieties of the projective space PM2 ≃ P^17, the degree of R2 and of C2 is 15 and the degree of S2 = A2 is 51.

We introduce 9n coordinate functions on Mn: for i, j ∈ {1, 2, 3} and ℓ ∈ {1, . . . , n}, let a^ℓ_ij denote the coefficient of xℓ in the (i, j)-th entry of a matrix in Mn:

( a^1_11 x1 + a^2_11 x2 + ··· + a^n_11 xn   a^1_12 x1 + a^2_12 x2 + ··· + a^n_12 xn   a^1_13 x1 + a^2_13 x2 + ··· + a^n_13 xn )
( a^1_21 x1 + a^2_21 x2 + ··· + a^n_21 xn   a^1_22 x1 + a^2_22 x2 + ··· + a^n_22 xn   a^1_23 x1 + a^2_23 x2 + ··· + a^n_23 xn )
( a^1_31 x1 + a^2_31 x2 + ··· + a^n_31 xn   a^1_32 x1 + a^2_32 x2 + ··· + a^n_32 xn   a^1_33 x1 + a^2_33 x2 + ··· + a^n_33 xn ).
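The dimension formulas can be cross-checked against the stabilizer computations: each orbit closure has dimension equal to that of the linear subspace plus the codimension of its stabilizer in G, and the transpose symmetry interchanging zero-row and zero-column matrices forces dim Rn = dim Cn = 6n + 1. A sketch (the function name component_dims is ours):

```python
def component_dims(n):
    # (dim Rn, dim Sn, dim An, dim Cn) for the orbit closures: dimension of
    # each linear subspace plus the codimension of its stabilizer in G,
    # where G = GL3 x GL3 has dimension 18 and the stabilizers have
    # codimensions 2, 4, 8, 2 respectively
    return ((6 * n - 1) + 2, (5 * n - 1) + 4, (3 * n - 1) + 8, (6 * n - 1) + 2)

assert component_dims(2) == (13, 13, 13, 13)   # all four coincide at n = 2
for n in range(3, 100):
    r, s, a, c = component_dims(n)
    assert r == c == 6 * n + 1
    assert r > s > a                            # strict ordering for n >= 3
```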

The orbits of matrices with a vanishing row or column. Let M be a matrix in Mn. If M lies in Rn, then it follows that the rows of M are linearly dependent over k. Making use of the coordinates {a^ℓ_ij} defined above, we form the 3 × 3n matrix

A = ( a^1_11 a^2_11 ··· a^n_11   a^1_12 ··· a^n_12   a^1_13 ··· a^n_13 )
    ( a^1_21 a^2_21 ··· a^n_21   a^1_22 ··· a^n_22   a^1_23 ··· a^n_23 )
    ( a^1_31 a^2_31 ··· a^n_31   a^1_32 ··· a^n_32   a^1_33 ··· a^n_33 ).

Therefore, in terms of the associated matrix A, the matrix M lies in Rn if and only if the matrix A has rank at most 2. Hence, the C(3n, 3) determinants of the 3 × 3 submatrices of the matrix A vanish exactly on the subvariety Rn. Thus, the variety Rn is actually the determinantal variety of 3 × 3n matrices of rank at most 2. Of course, transposed assertions apply to the variety Cn.

The orbits of antisymmetric matrices or of matrices with a zero square. We do not know a similar explicit description of the varieties An and Sn. We checked that the ideals of An and Sn agree with the ideal of Fn in all degrees up to 4 in a few examples.
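The rank criterion for Rn is easy to implement: flatten a 3 × 3 matrix of linear forms into its 3 × 3n coefficient matrix A and test whether rank A ≤ 2. A sketch in Python with exact arithmetic (the helper names rank and coefficient_matrix are ours):

```python
from fractions import Fraction

def rank(rows):
    # exact Gaussian elimination over the rationals
    mat = [list(map(Fraction, r)) for r in rows]
    r = 0
    for col in range(len(mat[0])):
        piv = next((i for i in range(r, len(mat)) if mat[i][col] != 0), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][col] != 0:
                t = mat[i][col] / mat[r][col]
                mat[i] = [a - t * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

def coefficient_matrix(M):
    # M[i][j] is the coefficient vector (a^1_ij, ..., a^n_ij) of the
    # (i, j)-th linear form; row i of A concatenates the three vectors
    return [M[i][0] + M[i][1] + M[i][2] for i in range(3)]

# rows of M are k-dependent: the third row is the sum of the first two
row1 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # (x1, x2, x3)
row2 = [(0, 0, 1), (1, 0, 0), (0, 1, 0)]   # (x3, x1, x2)
row3 = [tuple(a + b for a, b in zip(u, v)) for u, v in zip(row1, row2)]
M = [row1, row2, row3]
assert rank(coefficient_matrix(M)) <= 2    # M lies in R_3

# a matrix of linear forms with k-independent rows
N = [row1, row2, [(0, 1, 0), (0, 0, 1), (1, 0, 0)]]
assert rank(coefficient_matrix(N)) == 3    # N does not lie in R_3
```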

Concluding questions

An immediate question that arises from our work is to study further the orbit Sn of the matrices with a square of zeros and the orbit An of the antisymmetric matrices.

We would also like to present the following personal point of view on the initial question. Computing a determinant usually involves a fair number of operations. Moreover, if the calculation shows that the determinant vanishes, I usually try to find a “justification” of this fact that is independent of computing the determinant. Possible such justifications are:

• some evident linear combination of the rows or columns of the matrix that vanishes (e.g., one row is the sum of two other rows);
• the matrix is of odd order and is antisymmetric;
• the matrix has “lots” of zeros (e.g., a 3 × 3 matrix with a zero square).

These are precisely the irreducible components of the schemes Fn! As the results in this paper handle the case of matrices of order 3, we are naturally led to wonder what happens next.

Question. Let n be a positive integer. What are the irreducible components of the spaces of n × n matrices of linear forms and identically vanishing determinant?

References

[Eis88] David Eisenbud, Linear sections of determinantal varieties, Amer. J. Math. 110 (1988), no. 3, 541–575.
[Eis07] David Eisenbud, Syzygies, degrees, and choices from a life in mathematics. Retiring presidential address, Bull. Amer. Math. Soc. (N.S.) 44 (2007), no. 3, 331–359.
[EH87] David Eisenbud and Joe Harris, On varieties of minimal degree (a centennial account), Algebraic geometry, Bowdoin, 1985 (Brunswick, Maine, 1985), Proc. Sympos. Pure Math., vol. 46, Amer. Math. Soc., Providence, RI, 1987, pp. 3–13.
[Har95] Joe Harris, Algebraic geometry. A first course, Graduate Texts in Mathematics, vol. 133, Springer-Verlag, New York, 1995. Corrected reprint of the 1992 original.
[Kat08] Mordechai Katzman, On ideals of minors of matrices with indeterminate entries, Comm. Algebra 36 (2008), no. 1, 104–111.
[KPU17] Andrew R. Kustin, Claudia Polini, and Bernd Ulrich, A matrix of linear forms which is annihilated by a vector of indeterminates, J. Algebra 469 (2017), 120–187.
[Rai12] Claudiu Raicu, Secant varieties of Segre–Veronese varieties, Algebra Number Theory 6 (2012), no. 8, 1817–1868.
[Rai13] Claudiu Raicu, 3 × 3 minors of catalecticants, Math. Res. Lett. 20 (2013), no. 4, 745–756.

Mathematical Institute, Zeeman Building, University of Warwick, Coventry CV4 7AL, UK

E-mail address: [email protected]