
Schedae Informaticae Vol. 25 (2016): 49–59, doi: 10.4467/20838476SI.17.004.8150

An Effective Approach to Picard-Vessiot Theory and the Jacobian Conjecture

Paweł Bogdan^{1*}, Zbigniew Hajto^{1}, Elżbieta Adamus^{2†}

^{1} Faculty of Mathematics and Computer Science, Jagiellonian University, Łojasiewicza 6, 30-348 Kraków
e-mail: [email protected], [email protected]

^{2} Faculty of Applied Mathematics, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland
e-mail: [email protected]

* Z. Hajto acknowledges support of grant MTM2012-33830, Spanish Science Ministry.
† This work was partially supported by the Faculty of Applied Mathematics AGH UST dean grant for PhD students and young researchers within subsidy of the Ministry of Science and Higher Education, grant no. 15.11.420.038.

Abstract. In this paper we present a theorem concerning an equivalent statement of the Jacobian Conjecture in terms of Picard-Vessiot extensions. Our theorem completes the earlier work of T. Crespo and Z. Hajto, which suggested an effective criterion for detecting polynomial automorphisms of affine spaces. We show a simplified criterion and give a bound on the number of wronskians which we need to consider in order to check if a given polynomial mapping with non-zero constant Jacobian is a polynomial automorphism. Our method is especially efficient for the cubic homogeneous mappings introduced and studied in fundamental papers by H. Bass, E. Connell, D. Wright and L. Drużkowski.

Keywords: Picard-Vessiot theory, Polynomial mapping, Jacobian Conjecture.

1. Introduction

Let K denote an algebraically closed field of characteristic zero. Let n > 0 be a fixed integer and let F = (F_1, \ldots, F_n)\colon K^n \to K^n be a polynomial mapping, i.e. F_i \in K[X_1, \ldots, X_n] for i = 1, \ldots, n.

We consider the Jacobian matrix J_F = [\partial F_i/\partial X_j]_{1 \le i,j \le n}. The Jacobian Conjecture states that if \det(J_F) is a non-zero constant, then F has an inverse which is also polynomial. The Jacobian Conjecture is one of Stephen Smale's problems (cf. [9], Problem 16), a list of important problems in mathematics for the twenty-first century. Originally the conjecture was formulated for n = 2 by O. Keller (cf. [7]). In 1982 H. Bass, E. Connell and D. Wright ([1]) showed that the general case follows from the case where n \ge 2 and F = (X_1 + H_1, \ldots, X_n + H_n), where each H_i is zero or homogeneous of degree 3. One year later L. Drużkowski ([5]) improved this result, proving that if the Jacobian Conjecture is true for n \ge 2 and

  F = \Big( X_1 + \big(\sum_{j=1}^{n} a_{1j}X_j\big)^3, \ \ldots, \ X_n + \big(\sum_{j=1}^{n} a_{nj}X_j\big)^3 \Big),     (1)

then it holds in general. A polynomial mapping F of the form (1) with constant Jacobian is called a Drużkowski mapping. In 2001 Drużkowski [6] proved that in his reduction (1) it is enough to assume that the matrix A = [a_{ij}] is nilpotent of degree 2, i.e. A^2 = 0.

In 2011 T. Crespo and Z. Hajto generalized a classical theorem of L. A. Campbell ([3]) by proving an equivalent statement of the Jacobian Conjecture in terms of Picard-Vessiot extensions (cf. [4], Theorem 2). Condition 4 of Theorem 2 in the work of Crespo and Hajto suggested an effective criterion for polynomial automorphisms of affine spaces. However, the effectivity is obstructed by the large number of generalized wronskians which have to be considered as the dimension of the affine space grows. In this paper we present a simplified criterion for a polynomial automorphism of an affine space and prove that if the dimension of the space is n, then it is enough to consider \frac{1}{2}n^2(n+1) - n generalized wronskians. We believe that a deeper analysis of our algorithm may lead to the proof of the Jacobian Conjecture.

Let (\mathcal{F}, \Delta_{\mathcal{F}}) be a partial differential field with an algebraically closed field of constants C_{\mathcal{F}} and \Delta_{\mathcal{F}} = \{\partial_1, \ldots, \partial_m\}. Let us consider a linear partial differential system in matrix form over \mathcal{F}, i.e. a system of equations of the form

  \partial_i(Y) = A_i Y, \quad i = 1, \ldots, m, \quad A_i \in M_{n \times n}(\mathcal{F}).     (2)

A matrix y \in GL_n(\mathcal{K}), where \mathcal{K} is a differential field extension of \mathcal{F}, is called a fundamental matrix for the system (2) if \partial_i(y) = A_i y for i = 1, \ldots, m. We say that the system (2) is integrable if it has a fundamental matrix. A differential field extension (\mathcal{G}, \Delta_{\mathcal{G}}) of (\mathcal{F}, \Delta_{\mathcal{F}}) is a Picard-Vessiot extension for the integrable system (2) if the following holds: C_{\mathcal{G}} = C_{\mathcal{F}}, there exists a fundamental matrix y = \{y_{ij}\} \in GL_n(\mathcal{G}) and \mathcal{G} is generated over \mathcal{F} as a field by the entries of y, i.e. \mathcal{G} = \mathcal{F}(\{y_{ij}\}_{1 \le i,j \le n}).

There is another definition of a Picard-Vessiot extension, formulated by Kolchin in [8]. Let \mathcal{F} be a partial differential field of characteristic zero with \Delta_{\mathcal{F}} = \{\partial_1, \ldots, \partial_m\} and algebraically closed field of constants C_{\mathcal{F}}. Let \mathcal{G} be a differential field extension of \mathcal{F}. Let Y_1, \ldots, Y_n denote indeterminates and let \Theta denote the free commutative multiplicative semigroup generated by the elements of \Delta_{\mathcal{F}}. So \theta \in \Theta is a differential operator of the form \partial_1^{i_1} \cdots \partial_m^{i_m}, where i_1, \ldots, i_m \in \mathbb{Z}_+ \cup \{0\}. Let us denote by \Theta(k) the subset of \Theta of the elements of order less than or equal to k. The determinant \det(\theta_i y_j)_{1 \le i,j \le n} is called a generalized wronskian determinant and denoted by

W_{\theta_1, \ldots, \theta_n}(y_1, \ldots, y_n). Kolchin called \mathcal{G} a Picard-Vessiot extension of \mathcal{F} if C_{\mathcal{G}} = C_{\mathcal{F}} and there exist \eta_1, \ldots, \eta_n \in \mathcal{G} linearly independent over C_{\mathcal{F}} such that \mathcal{G} = \mathcal{F}\langle \eta_1, \ldots, \eta_n \rangle and

  \forall\, \theta_1, \ldots, \theta_n \in \Theta(n): \quad \frac{W_{\theta_1, \ldots, \theta_n}(\eta_1, \ldots, \eta_n)}{W_{\theta_{01}, \ldots, \theta_{0n}}(\eta_1, \ldots, \eta_n)} \in \mathcal{F}     (3)

for some fixed \theta_{01}, \ldots, \theta_{0n} such that W_{\theta_{01}, \ldots, \theta_{0n}}(\eta_1, \ldots, \eta_n) \neq 0.

Theorem 1 in [4] establishes the equivalence between the two definitions of a Picard-Vessiot extension of partial differential fields presented above. Theorem 2 in [4], which is a differential version of the classical theorem of Campbell, gives an equivalent formulation of the Jacobian Conjecture. Let K be an algebraically closed field of characteristic zero and let F = (F_1, \ldots, F_n)\colon K^n \to K^n be a polynomial map such that \det(J_F) = c \in K \setminus \{0\}. We can equip K(x_1, \ldots, x_n) with the Nambu derivations, i.e. derivations \delta_1, \ldots, \delta_n given by

  \begin{pmatrix} \delta_1 \\ \vdots \\ \delta_n \end{pmatrix} = (J_F^{-1})^T \begin{pmatrix} \partial/\partial x_1 \\ \vdots \\ \partial/\partial x_n \end{pmatrix}.
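The following minimal sketch (our illustration, in Python with SymPy; the toy automorphism F = (x_1 + x_2^2, x_2) and all names in it are ours, and the computations reported later in the paper were done in Maple) shows how the Nambu derivations can be set up symbolically and checks that the matrix [\delta_j x_i] coincides with J_F^{-1}.

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])
F = sp.Matrix([x1 + x2**2, x2])        # a toy polynomial automorphism

JF = F.jacobian(X)                      # Jacobian matrix J_F
assert JF.det() == 1                    # non-zero constant Jacobian

JFinvT = JF.inv().T                     # (J_F^{-1})^T

def delta(i, g):
    # Nambu derivation delta_{i+1} (0-based index i) applied to g
    grad = sp.Matrix([sp.diff(g, v) for v in X])
    return sp.expand((JFinvT.row(i) * grad)[0])

# the matrix [delta_j x_i]_{i,j} equals J_F^{-1}
M = sp.Matrix(2, 2, lambda i, j: delta(j, X[i]))
assert (M - JF.inv()).applyfunc(sp.simplify) == sp.zeros(2, 2)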

Observe that K\langle F_1, \ldots, F_n \rangle = K(F_1, \ldots, F_n), i.e. K(F_1, \ldots, F_n) is stable under \delta_1, \ldots, \delta_n. Moreover, if \det(J_F) = 1, then J_F^{-1} = [\delta_j x_i]_{1 \le i,j \le n}.
The following theorem is a reformulation of Theorems 1 and 2 in [4] in the form we will use in the sequel.

Theorem 1. Let K and F be as above. Then the following conditions are equivalent:

1) F is a polynomial automorphism

2) The matrix

  W = \begin{pmatrix} 1 & x_1 & \ldots & x_n \\ 0 & \delta_1 x_1 & \ldots & \delta_1 x_n \\ \vdots & \vdots & \ddots & \vdots \\ 0 & \delta_n x_1 & \ldots & \delta_n x_n \end{pmatrix}

is a fundamental matrix for an integrable system

  \delta_k Y = A_k Y, \quad k = 0, \ldots, n,

where we are taking \delta_0 = \mathrm{id}, with A_k \in M_{(n+1)\times(n+1)}(K(F_1, \ldots, F_n)).
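Continuing the toy example sketched above (again our illustration, not the paper's Maple code), the matrix W of condition 2) and the matrices A_k = \delta_k W \cdot W^{-1} studied in Section 2 can be computed directly; for this particular map the entries of A_1 and A_2 come out constant, hence trivially in K(F_1, F_2).

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])
F = sp.Matrix([x1 + x2**2, x2])                   # same toy automorphism as above
JFinvT = F.jacobian(X).inv().T

def delta(i, g):                                   # Nambu derivation (0-based index)
    grad = sp.Matrix([sp.diff(g, v) for v in X])
    return sp.expand((JFinvT.row(i) * grad)[0])

# W = [delta_i x_j], i, j = 0, ..., n, with x_0 = 1 and delta_0 = id
xs = [sp.Integer(1), x1, x2]
apply_delta = lambda i, g: g if i == 0 else delta(i - 1, g)
W = sp.Matrix(3, 3, lambda i, j: apply_delta(i, xs[j]))
assert W.det() == 1

for k in (1, 2):
    dW = W.applyfunc(lambda e: delta(k - 1, e))    # delta_k applied entrywise
    Ak = (dW * W.inv()).applyfunc(sp.simplify)
    print('A_%d =' % k, Ak)                        # entries lie in K(F_1, F_2)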

2. Wronskian criterion

Theorem 1 gives a method of checking whether a given polynomial map F is a polynomial automorphism. If we denote x_0 = 1, then we may write W = [\delta_i x_j]_{i,j=0,1,\ldots,n}. Let us assume that \det W = 1 (which is equivalent to \det(J_F) = 1). We are going to find A_k = \delta_k W \cdot W^{-1}, k = 1, 2, \ldots, n, in order to check whether the entries of the A_k lie in K(F_1, \ldots, F_n). We have that

  \delta_k W = \begin{pmatrix} 0 & \delta_k x_1 & \ldots & \delta_k x_n \\ 0 & \delta_k\delta_1 x_1 & \ldots & \delta_k\delta_1 x_n \\ 0 & \delta_k\delta_2 x_1 & \ldots & \delta_k\delta_2 x_n \\ \vdots & \vdots & & \vdots \\ 0 & \delta_k\delta_n x_1 & \ldots & \delta_k\delta_n x_n \end{pmatrix} = \big[\omega^k_{ij}\big]_{i,j=0,1,\ldots,n}, \quad \text{where } \omega^k_{ij} = \delta_k\delta_i x_j.

Let us find W^{-1} = \big[D_{ij}\big]^T_{i,j=0,1,\ldots,n}, where D_{ij} denotes the adjoint determinant (cofactor) of the element \delta_i x_j of the matrix W. We obtain that

  D_{00} = 1 \quad \text{and} \quad \forall j \ge 1:\ D_{0j} = 0,

  D_{i0} = (-1)^{(i+1)+1} \begin{vmatrix} x_1 & \ldots & x_n \\ \delta_1 x_1 & \ldots & \delta_1 x_n \\ \vdots & & \vdots \\ \delta_{i-1} x_1 & \ldots & \delta_{i-1} x_n \\ \delta_{i+1} x_1 & \ldots & \delta_{i+1} x_n \\ \vdots & & \vdots \\ \delta_n x_1 & \ldots & \delta_n x_n \end{vmatrix} = (-1)^{i+0} \det\big([\delta_s x_t]_{s=0,1,\ldots,n;\ s \neq i;\ t=1,\ldots,n}\big).

For i, j \ge 1 we get

  D_{ij} = (-1)^{i+j} \begin{vmatrix} 1 & x_1 & \ldots & x_{j-1} & x_{j+1} & \ldots & x_n \\ 0 & \delta_1 x_1 & \ldots & \delta_1 x_{j-1} & \delta_1 x_{j+1} & \ldots & \delta_1 x_n \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & \delta_{i-1} x_1 & \ldots & \delta_{i-1} x_{j-1} & \delta_{i-1} x_{j+1} & \ldots & \delta_{i-1} x_n \\ 0 & \delta_{i+1} x_1 & \ldots & \delta_{i+1} x_{j-1} & \delta_{i+1} x_{j+1} & \ldots & \delta_{i+1} x_n \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & \delta_n x_1 & \ldots & \delta_n x_{j-1} & \delta_n x_{j+1} & \ldots & \delta_n x_n \end{vmatrix}.


So D_{ij} = (-1)^{i+j} \det\big([\delta_s x_t]_{s,t=0,1,\ldots,n;\ s \neq i,\ t \neq j}\big) = (-1)^{i+j} \det\big([\delta_s x_t]_{s,t=1,\ldots,n;\ s \neq i,\ t \neq j}\big), and consequently W^{-1} = [B_{ij}]_{i,j=0,1,\ldots,n}, where

  B_{ij} = D_{ji} = (-1)^{i+j} \begin{vmatrix} 1 & x_1 & \ldots & x_{i-1} & x_{i+1} & \ldots & x_n \\ 0 & \delta_1 x_1 & \ldots & \delta_1 x_{i-1} & \delta_1 x_{i+1} & \ldots & \delta_1 x_n \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & \delta_{j-1} x_1 & \ldots & \delta_{j-1} x_{i-1} & \delta_{j-1} x_{i+1} & \ldots & \delta_{j-1} x_n \\ 0 & \delta_{j+1} x_1 & \ldots & \delta_{j+1} x_{i-1} & \delta_{j+1} x_{i+1} & \ldots & \delta_{j+1} x_n \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & \delta_n x_1 & \ldots & \delta_n x_{i-1} & \delta_n x_{i+1} & \ldots & \delta_n x_n \end{vmatrix}.
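As a small sanity check of this cofactor description (our illustration, in SymPy, on the matrix W of the toy example above; indices in the code are 0-based): when \det W = 1 the inverse W^{-1} is exactly the adjugate of W, and the entry B_{ij} is the cofactor of the entry of W in position (j, i).

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
W = sp.Matrix([[1, x1, x2],
               [0, 1, 0],
               [0, -2*x2, 1]])       # the matrix W of the toy example, det W = 1
assert W.det() == 1

B = W.inv()
assert (B - W.adjugate()).applyfunc(sp.simplify) == sp.zeros(3, 3)   # W^{-1} = adjugate

# B[i, j] is the cofactor of the entry W[j, i]
for i in range(3):
    for j in range(3):
        cof = (-1)**(i + j) * W.minor_submatrix(j, i).det()
        assert sp.expand(B[i, j] - cof) == 0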

We compute A_k = \big[a^k_{ij}\big]_{i,j=0,1,\ldots,n} = \delta_k W \cdot W^{-1}. We obtain a^k_{i0} = 0, i.e. the first column (the column indexed by j = 0) is a zero column. Moreover, for j \ge 1

  a^k_{0j} = \sum_{r=1}^{n} \delta_k\delta_0 x_r \cdot B_{rj} = \sum_{r=1}^{n} \delta_k x_r \cdot B_{rj}
  = \delta_k x_1 \cdot (-1)^{1+j} \begin{vmatrix} \delta_1 x_2 & \ldots & \delta_1 x_n \\ \vdots & & \vdots \\ \delta_{j-1} x_2 & \ldots & \delta_{j-1} x_n \\ \delta_{j+1} x_2 & \ldots & \delta_{j+1} x_n \\ \vdots & & \vdots \\ \delta_n x_2 & \ldots & \delta_n x_n \end{vmatrix} + \ldots + \delta_k x_n \cdot (-1)^{n+j} \begin{vmatrix} \delta_1 x_1 & \ldots & \delta_1 x_{n-1} \\ \vdots & & \vdots \\ \delta_{j-1} x_1 & \ldots & \delta_{j-1} x_{n-1} \\ \delta_{j+1} x_1 & \ldots & \delta_{j+1} x_{n-1} \\ \vdots & & \vdots \\ \delta_n x_1 & \ldots & \delta_n x_{n-1} \end{vmatrix}
  = \begin{vmatrix} \delta_1 x_1 & \delta_1 x_2 & \ldots & \delta_1 x_n \\ \vdots & \vdots & & \vdots \\ \delta_{j-1} x_1 & \delta_{j-1} x_2 & \ldots & \delta_{j-1} x_n \\ \delta_k x_1 & \delta_k x_2 & \ldots & \delta_k x_n \\ \delta_{j+1} x_1 & \delta_{j+1} x_2 & \ldots & \delta_{j+1} x_n \\ \vdots & \vdots & & \vdots \\ \delta_n x_1 & \delta_n x_2 & \ldots & \delta_n x_n \end{vmatrix} = \begin{cases} 0, & k \neq j, \\ 1, & k = j. \end{cases}

If i, j \ge 1, then a^k_{ij} = \sum_{r=1}^{n} \delta_k\delta_i x_r \cdot B_{rj}, which means we have

  a^k_{ij} = \delta_k\delta_i x_1 \cdot (-1)^{1+j} \begin{vmatrix} \delta_1 x_2 & \ldots & \delta_1 x_n \\ \vdots & & \vdots \\ \delta_{j-1} x_2 & \ldots & \delta_{j-1} x_n \\ \delta_{j+1} x_2 & \ldots & \delta_{j+1} x_n \\ \vdots & & \vdots \\ \delta_n x_2 & \ldots & \delta_n x_n \end{vmatrix} + \ldots + \delta_k\delta_i x_n \cdot (-1)^{n+j} \begin{vmatrix} \delta_1 x_1 & \ldots & \delta_1 x_{n-1} \\ \vdots & & \vdots \\ \delta_{j-1} x_1 & \ldots & \delta_{j-1} x_{n-1} \\ \delta_{j+1} x_1 & \ldots & \delta_{j+1} x_{n-1} \\ \vdots & & \vdots \\ \delta_n x_1 & \ldots & \delta_n x_{n-1} \end{vmatrix}
  = \begin{vmatrix} \delta_1 x_1 & \delta_1 x_2 & \ldots & \delta_1 x_n \\ \vdots & \vdots & & \vdots \\ \delta_{j-1} x_1 & \delta_{j-1} x_2 & \ldots & \delta_{j-1} x_n \\ \delta_k\delta_i x_1 & \delta_k\delta_i x_2 & \ldots & \delta_k\delta_i x_n \\ \delta_{j+1} x_1 & \delta_{j+1} x_2 & \ldots & \delta_{j+1} x_n \\ \vdots & \vdots & & \vdots \\ \delta_n x_1 & \delta_n x_2 & \ldots & \delta_n x_n \end{vmatrix}.

The total number of considered determinants is n(n+1)^2, since for every \delta_k we have (n+1)^2 of them and k = 1, \ldots, n. However, for each A_k we can ignore the first row and the first column (i.e. the row and the column indexed by 0), since they consist of constant elements. Consequently, we can omit 2n + 1 entries for each A_k, so there are n^3 wronskians left. We can easily observe that for every j = 1, \ldots, n and for k \neq i we have a^k_{ij} = a^i_{kj}. So we can omit \binom{n}{2} determinants for each j. Due to the lemma given below we can omit even more determinants.

Lemma 1. Let (K, ′) be a differential field and let A = [a_{ij}]_{i,j=1,\ldots,n} \in GL_n(K) be a nonsingular matrix with entries in K. Then

  (\det A)' = \begin{vmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{vmatrix}'
  = \begin{vmatrix} a'_{11} & a'_{12} & \ldots & a'_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{vmatrix} + \begin{vmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a'_{21} & a'_{22} & \ldots & a'_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{vmatrix} + \ldots + \begin{vmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a'_{n1} & a'_{n2} & \ldots & a'_{nn} \end{vmatrix}
  = \begin{vmatrix} a'_{11} & a_{12} & \ldots & a_{1n} \\ a'_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a'_{n1} & a_{n2} & \ldots & a_{nn} \end{vmatrix} + \begin{vmatrix} a_{11} & a'_{12} & \ldots & a_{1n} \\ a_{21} & a'_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a'_{n2} & \ldots & a_{nn} \end{vmatrix} + \ldots + \begin{vmatrix} a_{11} & a_{12} & \ldots & a'_{1n} \\ a_{21} & a_{22} & \ldots & a'_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a'_{nn} \end{vmatrix}.
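The lemma is classical; the following small symbolic check (our illustration, with generic functions a_{ij}(t) and a single derivation d/dt) confirms both the row-wise and the column-wise expansion for n = 3.

import sympy as sp

t = sp.symbols('t')
n = 3
A = sp.Matrix(n, n, lambda i, j: sp.Function('a_%d%d' % (i + 1, j + 1))(t))

lhs = sp.diff(A.det(), t)

# differentiate one row at a time
rows_sum = sum((A[:i, :].col_join(sp.diff(A[i, :], t)).col_join(A[i + 1:, :])).det()
               for i in range(n))
# differentiate one column at a time
cols_sum = sum((A[:, :j].row_join(sp.diff(A[:, j], t)).row_join(A[:, j + 1:])).det()
               for j in range(n))

assert sp.expand(lhs - rows_sum) == 0
assert sp.expand(lhs - cols_sum) == 0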

Let us use Lemma 1 to differentiate \det W = 1 with respect to each \delta_k, k = 1, \ldots, n. We get that

  \delta_1 \begin{vmatrix} \delta_1 x_1 & \ldots & \delta_1 x_n \\ \vdots & \ddots & \vdots \\ \delta_n x_1 & \ldots & \delta_n x_n \end{vmatrix} = \begin{vmatrix} \delta_1^2 x_1 & \ldots & \delta_1^2 x_n \\ \delta_2 x_1 & \ldots & \delta_2 x_n \\ \vdots & & \vdots \\ \delta_n x_1 & \ldots & \delta_n x_n \end{vmatrix} + \begin{vmatrix} \delta_1 x_1 & \ldots & \delta_1 x_n \\ \delta_1\delta_2 x_1 & \ldots & \delta_1\delta_2 x_n \\ \delta_3 x_1 & \ldots & \delta_3 x_n \\ \vdots & & \vdots \\ \delta_n x_1 & \ldots & \delta_n x_n \end{vmatrix} + \ldots + \begin{vmatrix} \delta_1 x_1 & \ldots & \delta_1 x_n \\ \vdots & & \vdots \\ \delta_{n-1} x_1 & \ldots & \delta_{n-1} x_n \\ \delta_1\delta_n x_1 & \ldots & \delta_1\delta_n x_n \end{vmatrix} = 0,

  \vdots

  \delta_n \begin{vmatrix} \delta_1 x_1 & \ldots & \delta_1 x_n \\ \vdots & \ddots & \vdots \\ \delta_n x_1 & \ldots & \delta_n x_n \end{vmatrix} = \begin{vmatrix} \delta_n\delta_1 x_1 & \ldots & \delta_n\delta_1 x_n \\ \delta_2 x_1 & \ldots & \delta_2 x_n \\ \vdots & & \vdots \\ \delta_n x_1 & \ldots & \delta_n x_n \end{vmatrix} + \begin{vmatrix} \delta_1 x_1 & \ldots & \delta_1 x_n \\ \delta_n\delta_2 x_1 & \ldots & \delta_n\delta_2 x_n \\ \delta_3 x_1 & \ldots & \delta_3 x_n \\ \vdots & & \vdots \\ \delta_n x_1 & \ldots & \delta_n x_n \end{vmatrix} + \ldots + \begin{vmatrix} \delta_1 x_1 & \ldots & \delta_1 x_n \\ \vdots & & \vdots \\ \delta_{n-1} x_1 & \ldots & \delta_{n-1} x_n \\ \delta_n^2 x_1 & \ldots & \delta_n^2 x_n \end{vmatrix} = 0.

Let us go back to the considerations concerning the matrix A_k. We can use the equations given above to observe that for each k = 1, \ldots, n we have a^k_{11} + \ldots + a^k_{nn} = 0. So, for example,

  a^k_{kk} = -a^k_{11} - \ldots - a^k_{k-1,k-1} - a^k_{k+1,k+1} - \ldots - a^k_{nn},

and we can omit n more determinants. Hence it is enough to check the following number of wronskian determinants:

  n^3 - n \cdot \binom{n}{2} - n = n^3 - \frac{1}{2}n^2(n-1) - n = \frac{1}{2}n^2(n+1) - n.     (4)

Let us observe that the number given in (4) is optimal; e.g. for n = 2 we have to consider 4 wronskian determinants.
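The count in (4) can also be checked mechanically. The short script below (our illustration) enumerates the triples (k, i, j) with 1 ≤ i, j, k ≤ n, keeps one representative of each pair identified by a^k_{ij} = a^i_{kj}, discards a^k_{kk} for each k (recoverable from the trace relation above), and compares the result with formula (4); for n = 2 it returns 4, as noted above.

from itertools import product

def needed_wronskians(n):
    # one representative (min(k,i), max(k,i), j) per symmetry class a^k_{ij} = a^i_{kj}
    reps = {(min(k, i), max(k, i), j)
            for k, i, j in product(range(1, n + 1), repeat=3)}
    # the trace relation a^k_{11} + ... + a^k_{nn} = 0 lets us drop a^k_{kk}
    for k in range(1, n + 1):
        reps.discard((k, k, k))
    return len(reps)

for n in range(2, 9):
    assert needed_wronskians(n) == n**2 * (n + 1) // 2 - n
    print(n, needed_wronskians(n))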

3. Examples

In this section, in order to explain how our criterion works for detecting polynomial automorphisms, we present two explicit examples.

Example 1. Let us consider a well-known wild automorphism, the Nagata automorphism:

  F_1 = x_1 - 2x_2(x_3x_1 + x_2^2) - x_3(x_3x_1 + x_2^2)^2,
  F_2 = x_2 + x_3(x_3x_1 + x_2^2),
  F_3 = x_3.

Using the computer algebra system Maple 18 we first compute that \det(J_F) = 1 and next the entries of the matrices \big[a^k_{ij}\big]_{i,j=1,2,3} for k = 1, 2, 3. We obtain the following results.

k = 1:
  a^1_{11} = -a^1_{22} - a^1_{33} = -2x_3^3(-2x_1x_3^3 - 2x_2^2x_3^2 - 2x_2x_3 + 1) = 4F_2F_3^4 - 2F_3^3.

Remark 1. In order to obtain the equality given above, we use the following method. Once a^1_{11} is computed, we define the following sequence of polynomials in C[x_1, x_2, x_3]:

  P_0(x_1, x_2, x_3) = a^1_{11}(x_1, x_2, x_3),
  P_1(x_1, x_2, x_3) = P_0(F_1, F_2, F_3) - P_0(x_1, x_2, x_3),

and, assuming P_{k-1} is defined,

  P_k(x_1, x_2, x_3) = P_{k-1}(F_1, F_2, F_3) - P_{k-1}(x_1, x_2, x_3).

It is easy to prove that for a positive integer m we have

  P_0(x_1, x_2, x_3) = \sum_{l=0}^{m-1} (-1)^l P_l(F_1, F_2, F_3) + (-1)^m P_m(x_1, x_2, x_3).

In particular, if we assume that for some integer m, P_m(x_1, x_2, x_3) = 0, then

  P_0(x_1, x_2, x_3) = \sum_{l=0}^{m-1} (-1)^l P_l(F_1, F_2, F_3).

In the considered case we obtain

  P_0(x_1, x_2, x_3) = -2x_3^3(-2x_1x_3^3 - 2x_2^2x_3^2 - 2x_2x_3 + 1),
  P_1(x_1, x_2, x_3) = 4x_1x_3^6 + 4x_2^2x_3^5,
  P_2(x_1, x_2, x_3) = 0,

and hence

  a^1_{11} = P_0(F_1, F_2, F_3) - P_1(F_1, F_2, F_3) = 4F_2F_3^4 - 2F_3^3.

In an analogous way we get all the equalities given below.
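The recursion of Remark 1 is easy to implement. The sketch below (our Python/SymPy transcription of the method just described; the paper's own computations were done in Maple) starts from a^1_{11} as a polynomial in x_1, x_2, x_3 and recovers its expression 4F_2F_3^4 - 2F_3^3 in terms of F_1, F_2, F_3.

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
F1s, F2s, F3s = sp.symbols('F1 F2 F3')     # symbols standing for F_1, F_2, F_3

u = x3*x1 + x2**2
nagata = {x1: x1 - 2*x2*u - x3*u**2,       # the Nagata automorphism
          x2: x2 + x3*u,
          x3: x3}

# P_0 = a^1_{11} as a polynomial in x_1, x_2, x_3
P = sp.expand(-2*x3**3*(-2*x1*x3**3 - 2*x2**2*x3**2 - 2*x2*x3 + 1))

terms, sign = [], 1
while P != 0:                              # terminates here because P_2 = 0
    terms.append(sign * P.xreplace({x1: F1s, x2: F2s, x3: F3s}))
    P = sp.expand(P.xreplace(nagata) - P)  # P_{k+1} = P_k(F_1,F_2,F_3) - P_k
    sign = -sign

print(sp.expand(sum(terms)))               # prints 4*F2*F3**4 - 2*F3**3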

  a^1_{12} = -2x_3^5 = -2F_3^5,
  a^1_{13} = 0,
  a^1_{21} = (-4x_1x_3^4 - 4x_2^2x_3^3 - 4x_2x_3^2 + 2x_3)(-2x_1x_3^3 - 2x_2^2x_3^2 - 2x_2x_3 + 1) = 8F_2^2F_3^3 - 8F_2F_3^2 + 2F_3,
  a^1_{22} = (-4x_1x_3^4 - 4x_2^2x_3^3 - 4x_2x_3^2 + 2x_3)x_3^2 = -4F_2F_3^4 + 2F_3^3,
  a^1_{23} = 0,
  a^1_{31} = -4x_1^3x_3^8 - 12x_1^2x_2^2x_3^7 - 12x_1x_2^4x_3^6 - 4x_2^6x_3^5 - 12x_1^2x_2x_3^6 - 24x_1x_2^3x_3^5 - 12x_2^5x_3^4 + 10x_1^2x_3^5 + 8x_1x_2^2x_3^4 - 2x_2^4x_3^3 + 16x_1x_2x_3^3 + 12x_2^3x_3^2 + 6x_2^2x_3 + 2x_2 = 4F_2^2F_3 + 4F_1F_2F_3^3 - 2F_1F_3^2 + 2F_2,
  a^1_{32} = 2x_1^2x_3^7 + 4x_1x_2^2x_3^6 + 2x_2^4x_3^5 + 4x_1x_2x_3^5 + 4x_2^3x_3^4 - 4x_1x_3^4 - 2x_2^2x_3^3 - 2x_2x_3^2 - 2x_3 = -2F_1F_3^4 - 2F_2F_3^2 - 2F_3,
  a^1_{33} = 0.

k = 2:
  a^2_{11} = a^1_{21},
  a^2_{12} = a^1_{22},
  a^2_{13} = a^1_{23} = 0,
  a^2_{21} = 16x_1^3x_3^8 + 48x_1^2x_2^2x_3^7 + 48x_1x_2^4x_3^6 + 16x_2^6x_3^5 + 48x_1^2x_2x_3^6 + 96x_1x_2^3x_3^5 + 48x_2^5x_3^4 - 24x_1^2x_3^5 + 24x_2^4x_3^3 - 48x_1x_2x_3^3 - 32x_2^3x_3^2 + 12x_1x_3^2 - 12x_2^2x_3 + 12x_2 = 16F_2^3F_3^2 - 24F_2^2F_3 + 12F_2,
  a^2_{22} = -a^2_{11} - a^2_{33} = -8x_1^2x_3^7 - 16x_1x_2^2x_3^6 - 8x_2^4x_3^5 - 16x_1x_2x_3^5 - 16x_2^3x_3^4 + 8x_1x_3^4 + 8x_2x_3^2 - 2x_3 = -8F_2^2F_3^3 + 8F_2F_3^2 - 2F_3,
  a^2_{23} = 0,
  a^2_{31} = -8x_1^4x_3^9 - 32x_1^3x_2^2x_3^8 - 48x_1^2x_2^4x_3^7 - 32x_1x_2^6x_3^6 - 8x_2^8x_3^5 - 32x_1^3x_2x_3^7 - 96x_1^2x_2^3x_3^6 - 96x_1x_2^5x_3^5 - 32x_2^7x_3^4 + 24x_1^3x_3^6 + 24x_1^2x_2^2x_3^5 - 24x_1x_2^4x_3^4 - 24x_2^6x_3^3 + 64x_1^2x_2x_3^4 + 96x_1x_2^3x_3^3 + 32x_2^5x_3^2 - 10x_1^2x_3^3 + 36x_1x_2^2x_3^2 + 38x_2^4x_3 - 12x_1x_2x_3 + 4x_2^3 + 2x_1 = 8F_1F_2^2F_3^2 - 8F_1F_2F_3 + 8F_2^3 + 2F_1,
  a^2_{32} = 4x_1^3x_3^8 + 12x_1^2x_2^2x_3^7 + 12x_1x_2^4x_3^6 + 4x_2^6x_3^5 + 12x_1^2x_2x_3^6 + 24x_1x_2^3x_3^5 + 12x_2^5x_3^4 - 10x_1^2x_3^5 - 8x_1x_2^2x_3^4 + 2x_2^4x_3^3 - 16x_1x_2x_3^3 - 12x_2^3x_3^2 - 6x_2^2x_3 - 2x_2 = -4F_1F_2F_3^3 + 2F_1F_3^2 - 4F_2^2F_3 - 2F_2,
  a^2_{33} = 0.

k = 3:
  a^3_{11} = a^1_{31},
  a^3_{12} = a^1_{32},
  a^3_{13} = a^1_{33} = 0,
  a^3_{21} = a^2_{31},
  a^3_{22} = a^2_{32},
  a^3_{23} = a^2_{33} = 0,
  a^3_{31} = 4x_1^5x_3^{10} + 20x_1^4x_2^2x_3^9 + 40x_1^3x_2^4x_3^8 + 40x_1^2x_2^6x_3^7 + 20x_1x_2^8x_3^6 + 4x_2^{10}x_3^5 + 20x_1^4x_2x_3^8 + 80x_1^3x_2^3x_3^7 + 120x_1^2x_2^5x_3^6 + 80x_1x_2^7x_3^5 + 20x_2^9x_3^4 - 18x_1^4x_3^7 - 32x_1^3x_2^2x_3^6 + 12x_1^2x_2^4x_3^5 + 48x_1x_2^6x_3^4 + 22x_2^8x_3^3 - 64x_1^3x_2x_3^5 - 152x_1^2x_2^3x_3^4 - 112x_1x_2^5x_3^3 - 24x_2^7x_3^2 + 16x_1^3x_3^4 - 36x_1^2x_2^2x_3^3 - 100x_1x_2^4x_3^2 - 48x_2^6x_3 + 28x_1^2x_2x_3^2 + 8x_1x_2^3x_3 - 16x_2^5 - 2x_1^2x_3 + 8x_1x_2^2 = 4F_1^2F_2F_3^2 - 2F_1^2F_3 + 8F_1F_2^2,
  a^3_{32} = -2x_1^4x_3^9 - 8x_1^3x_2^2x_3^8 - 12x_1^2x_2^4x_3^7 - 8x_1x_2^6x_3^6 - 2x_2^8x_3^5 - 8x_1^3x_2x_3^7 - 24x_1^2x_2^3x_3^6 - 24x_1x_2^5x_3^5 - 8x_2^7x_3^4 + 8x_1^3x_3^6 + 12x_1^2x_2^2x_3^5 - 4x_2^6x_3^3 + 20x_1^2x_2x_3^4 + 32x_1x_2^3x_3^3 + 12x_2^5x_3^2 - 4x_1^2x_3^3 + 8x_1x_2^2x_3^2 + 10x_2^4x_3 + 4x_2^3 - 2x_1 = -2F_1^2F_3^3 - 4F_1F_2F_3 - 2F_1,
  a^3_{33} = -a^3_{11} - a^3_{22} = 0.

Example 2. Recently Dan Yan ([10]) has proved that the Jacobian Conjecture is true for Drużkowski mappings in dimension n \le 9, although only in the case when the matrix A (cf. (1)) has no zeros on its diagonal, and for general n when rank A \le 4. Moreover, Michiel de Bondt in his thesis [2] proved the validity of the Jacobian Conjecture for all Drużkowski mappings in dimension n \le 8. Let us consider the following Drużkowski mapping in dimension 13.

  F_1 = X_1 + (1/6 X_4 + 1/6 X_5 - 1/3 X_6 - 1/6 X_7 - 1/6 X_8 + 1/3 X_9 + X_{13})^3,
  F_2 = X_2 + (1/6 X_4 + 1/6 X_5 - 1/3 X_6 - 1/6 X_7 - 1/6 X_8 + 1/3 X_9 - X_{13})^3,
  F_3 = X_3 + (1/6 X_4 + 1/6 X_5 - 1/3 X_6 - 1/6 X_7 - 1/6 X_8 + 1/3 X_9)^3,
  F_4 = X_4 + (1/6 X_1 + 1/6 X_2 - 1/3 X_3 + X_{12})^3,
  F_5 = X_5 + (1/6 X_1 + 1/6 X_2 - 1/3 X_3 - X_{12})^3,
  F_6 = X_6 + (1/6 X_1 + 1/6 X_2 - 1/3 X_3)^3,
  F_7 = X_7 + (-1/3 X_3 + 1/6 X_{10} + 1/6 X_{11} + X_{13})^3,
  F_8 = X_8 + (-1/3 X_3 + 1/6 X_{10} + 1/6 X_{11} - X_{13})^3,
  F_9 = X_9 + (-1/3 X_3 + 1/6 X_{10} + 1/6 X_{11})^3,
  F_{10} = X_{10} + (1/6 X_4 + 1/6 X_5 - 1/3 X_6 - 1/6 X_7 - 1/6 X_8 + 1/3 X_9 + X_{12})^3,
  F_{11} = X_{11} + (1/6 X_4 + 1/6 X_5 - 1/3 X_6 - 1/6 X_7 - 1/6 X_8 + 1/3 X_9 - X_{12})^3,
  F_{12} = X_{12},
  F_{13} = X_{13}.

In the above example rank(A) = 5. The computation of the wronskians is involved, therefore we have presented it separately on our website http://crypto.ii.uj.edu.pl/galois/.

Remark 2. In his landmark paper [3] L. A. Campbell in fact studied general covering maps. Let us observe that our computational approach can also be used for detecting Galois coverings. In this case we cannot assume that the Jacobian determinant is a non-zero constant, however the computations are analogous.
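For completeness, the data of Example 2 can be set up as follows (this is our SymPy transcription of the mapping displayed above — in particular the matrix A below encodes our reading of the coefficients — and serves only as a sanity check; the wronskian computation itself is the one presented on the website). The script verifies rank(A) = 5 and spot-checks det(J_F) = 1 at random integer points.

import random
import sympy as sp
from sympy import Rational as Q

n = 13
LA = {4: Q(1, 6), 5: Q(1, 6), 6: Q(-1, 3), 7: Q(-1, 6), 8: Q(-1, 6), 9: Q(1, 3)}
LB = {1: Q(1, 6), 2: Q(1, 6), 3: Q(-1, 3)}
LC = {3: Q(-1, 3), 10: Q(1, 6), 11: Q(1, 6)}
rows = {1: (LA, {13: 1}), 2: (LA, {13: -1}), 3: (LA, {}),
        4: (LB, {12: 1}), 5: (LB, {12: -1}), 6: (LB, {}),
        7: (LC, {13: 1}), 8: (LC, {13: -1}), 9: (LC, {}),
        10: (LA, {12: 1}), 11: (LA, {12: -1}), 12: ({}, {}), 13: ({}, {})}

A = sp.zeros(n, n)
for i, (base, extra) in rows.items():
    for j, c in {**base, **extra}.items():
        A[i - 1, j - 1] = c

assert A.rank() == 5                      # rank(A) = 5, as stated above

X = sp.Matrix(sp.symbols('X1:14'))
F = X + sp.Matrix([(A.row(i) * X)[0]**3 for i in range(n)])
JF = F.jacobian(X)

for _ in range(3):                        # spot-check the constant Jacobian
    pt = {x: random.randint(-5, 5) for x in X}
    assert JF.subs(pt).det() == 1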

4. References

[1] H. Bass, E. Connell, D. Wright, The Jacobian Conjecture: Reduction of Degree and Formal Expansion of the Inverse, Bulletin of the American Mathematical Society, 1982, 7, pp. 287–330.
[2] M. de Bondt, Homogeneous Keller maps, Ph.D. thesis, July 2007, http://webdoc.ubn.ru.nl/mono/b/bondt_m_de/homokema.pdf.
[3] L. A. Campbell, A condition for a polynomial map to be invertible, Math. Annalen, 1973, 205, pp. 243–248.
[4] T. Crespo, Z. Hajto, Picard-Vessiot theory and the Jacobian problem, Israel Journal of Mathematics, 2011, 186, pp. 401–406.
[5] L. M. Drużkowski, An Effective Approach to Keller's Jacobian Conjecture, Math. Ann., 1983, 264, pp. 303–313.
[6] L. M. Drużkowski, New reduction in the Jacobian conjecture, Univ. Iagell. Acta Math., 2001, 39, pp. 203–206.
[7] O. H. Keller, Ganze Cremona-Transformationen, Monatsh. Math. Phys., 1939, 47, pp. 299–306.
[8] E. R. Kolchin, Picard-Vessiot theory of partial differential fields, Proceedings of the American Mathematical Society, 1952, 3, pp. 596–603.
[9] S. Smale, Mathematical Problems for the Next Century, Mathematical Intelligencer, 1998, 20, pp. 7–15.
[10] D. Yan, A note on the Jacobian Conjecture, Linear Algebra and its Applications, 2011, 435, pp. 2110–2113.
