Notes on Differential Equations and Hypergeometric Functions
(NOT for publication)
F. Beukers
21st August 2009

Contents

1 Ordinary linear differential equations
  1.1 Differential equations and systems of equations
  1.2 Local theory
  1.3 Fuchsian equations
  1.4 Riemann-Hilbert correspondence
  1.5 Fuchsian equations of order two

2 Gauss hypergeometric functions
  2.1 Definition, first properties
  2.2 Monodromy of the hypergeometric function
  2.3 Triangle groups
  2.4 Some loose ends

3 Hypergeometric functions nF_{n-1}
  3.1 Definition, first properties
  3.2 Monodromy
  3.3 Hypergeometric groups
  3.4 Rigidity

4 Explicit monodromy
  4.1 Introduction
  4.2 Mellin-Barnes integrals

Chapter 1
Ordinary linear differential equations

1.1 Differential equations and systems of equations

A differential field K is a field equipped with a derivation, that is, a map ∂ : K → K with the following properties: for all a, b ∈ K we have ∂(a + b) = ∂a + ∂b and ∂(ab) = a∂b + b∂a. The subset C := {a ∈ K | ∂a = 0} is a subfield of K and is called the field of constants. We shall assume that C is algebraically closed and has characteristic zero. We shall also assume that ∂ is non-trivial, that is, there exists a ∈ K such that ∂a ≠ 0.

Standard examples, which will be used in later chapters, are C(z), C((z)) and C((z))^{an}. They are the field of rational functions, the field of formal Laurent series at z = 0, and the field of Laurent series which converge in a punctured disk 0 < |z| < ρ for some ρ > 0. As derivation in these examples we take differentiation with respect to z, and the field of constants is C.

An ordinary differential equation over K is an equation of the form

∂^n y + p_1 ∂^{n-1} y + ⋯ + p_{n-1} ∂y + p_n y = 0,   p_1, …, p_n ∈ K.

A system of n first order equations over K has the form

∂y = Ay

in the unknown column vector y = (y_1, …, y_n)^t, where A is an n × n-matrix with entries in K.

Note that if we replace y by Sy in the system, where S ∈ GL(n, K), we obtain a new system for the new y,

∂y = (S^{-1}AS - S^{-1}(∂S))y.

Two n × n-systems with coefficient matrices A, B are called equivalent over K if there exists S ∈ GL(n, K) such that B = S^{-1}AS - S^{-1}(∂S).

It is well known that a differential equation can be rewritten as a system by putting y_1 = y, y_2 = ∂y, …, y_n = ∂^{n-1}y. We then note that ∂y_1 = y_2, ∂y_2 = y_3, …, ∂y_{n-1} = y_n and finally

∂y_n = -p_1 y_n - p_2 y_{n-1} - ⋯ - p_n y_1.

This can be rewritten as
\[
\partial \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}
= \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
-p_n & -p_{n-1} & -p_{n-2} & \cdots & -p_1
\end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}.
\]
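As a concrete sanity check of this equation-to-system translation, here is a small computation, sketched in Python with SymPy; the particular equation ∂^2 y - (2/z)∂y + (2/z^2)y = 0 over C(z) and its solutions y = z, y = z^2 are chosen purely for illustration and are not taken from the notes.

    import sympy as sp

    z = sp.symbols('z')

    # Illustrative scalar equation over C(z) with derivation d/dz (an assumption,
    # not from the notes):  d^2y + p1*dy + p2*y = 0,  p1 = -2/z,  p2 = 2/z**2.
    # It has y = z and y = z**2 as solutions.
    p1, p2 = -2/z, 2/z**2

    # Companion matrix of the display above for n = 2: rows (0, 1) and (-p2, -p1).
    A = sp.Matrix([[0, 1],
                   [-p2, -p1]])

    for y in (z, z**2):
        Y = sp.Matrix([y, sp.diff(y, z)])        # y1 = y, y2 = dy/dz
        residual = sp.simplify(Y.diff(z) - A * Y)
        print(y, residual.T)                     # both residuals are zero vectors

Both residual vectors simplify to zero, so each scalar solution indeed yields a solution of the companion system.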
There is also a converse statement.

Theorem 1.1.1 (Cyclic vector Lemma) Any system of linear first order differential equations over K is equivalent over K to a system which comes from a differential equation.

Proof. Let ∂y = Ay be our n × n system. Consider the linear form y = r_1y_1 + ⋯ + r_ny_n with r_1, …, r_n ∈ K. Using the differential system for the y_i we see that ∂y = s_1y_1 + ⋯ + s_ny_n, where the s_i are obtained via

(s_1, …, s_n) = ∂(r_1, …, r_n) + (r_1, …, r_n)A.

By repeated application of ∂ we find for each i elements r_{i1}, …, r_{in} ∈ K such that ∂^i y = r_{i1}y_1 + ⋯ + r_{in}y_n. Denote the matrix (r_{ij})_{i=0,…,n-1; j=1,…,n} by R. If R is invertible, then (r_{n1}, …, r_{nn}) is a K-linear combination of the (r_{i1}, …, r_{in}) for i = 0, 1, …, n - 1. Hence ∂^n y is a K-linear combination of the ∂^i y (i = 0, …, n - 1). Moreover, when R is invertible, our system is equivalent, via R, to a system coming from a differential equation. So it suffices to show that there exist r_1, …, r_n such that the corresponding matrix R is invertible.

Since ∂ is non-trivial we can find x ∈ K such that ∂x ≠ 0. Note that the new derivation ∂ := (x/∂x)∂ has the property that ∂x = x, which we may now assume without loss of generality. Let µ be the smallest index such that the matrix (r_{ij})_{i=0,…,µ; j=1,…,n} has K-linearly dependent rows for every choice of r_1, …, r_n. We must show that µ = n.

Suppose that µ < n. Denote r_i = (r_{i1}, …, r_{in}). Choose s_0 ∈ K^n such that the derived rows s_0, …, s_{µ-1} are independent (possible by the minimality of µ), and let t_0 ∈ K^n be arbitrary, with derived rows t_1, …, t_µ (so s_{i+1} = ∂s_i + s_iA and t_{i+1} = ∂t_i + t_iA). By r_0 ∧ ⋯ ∧ r_µ we denote the vector consisting of the determinants of all (µ + 1) × (µ + 1) submatrices of the matrix with rows r_0, …, r_µ. For any λ ∈ C we now have

(s_0 + λt_0) ∧ ⋯ ∧ (s_µ + λt_µ) = 0.

Expand this with respect to powers of λ. Since we have infinitely many choices for λ the coefficient of every power of λ must be zero, in particular the coefficient of λ itself. Hence
\[
\sum_{i=0}^{\mu} s_0 \wedge \cdots \wedge t_i \wedge \cdots \wedge s_\mu = 0. \tag{1.1}
\]
Now put t_0 = x^m u_0 with m ∈ Z and u_0 ∈ K^n, and let u_1, u_2, … be the rows derived from u_0 in the same way. Notice that
\[
t_i = x^m \sum_{j=0}^{i} \binom{i}{j} m^{i-j} u_j .
\]
Substitute this in (1.1), divide by x^m, and collect equal powers of m. Since m can be chosen in infinitely many ways the coefficient of each power of m must be zero. In particular the coefficient of m^µ is zero. Hence

s_0 ∧ ⋯ ∧ s_{µ-1} ∧ u_0 = 0.

Since u_0 can be chosen arbitrarily this implies s_0 ∧ ⋯ ∧ s_{µ-1} = 0, which contradicts the minimality of µ. □

We must also say a few words about the solutions of differential equations. It must be pointed out that in general the solutions lie in a bigger field than K. To this end we shall consider differential field extensions L of K with the property that the field of constants of L is the same as that of K. A fundamental lemma is the following one.

Lemma 1.1.2 (Wronski) Let f_1, …, f_m ∈ K. There exists a C-linear relation between these functions if and only if W(f_1, …, f_m) = 0, where
\[
W(f_1,\dots,f_m) = \begin{vmatrix}
f_1 & \cdots & f_m \\
\partial f_1 & \cdots & \partial f_m \\
\vdots & & \vdots \\
\partial^{m-1} f_1 & \cdots & \partial^{m-1} f_m
\end{vmatrix}
\]
is the Wronskian determinant of f_1, …, f_m.

Proof. If the f_i are C-linearly dependent, then the same holds for the columns of W(f_1, …, f_m). Hence this determinant vanishes. Before we prove the converse statement we need some observations. First notice that

W(vu_1, …, vu_m) = v^m W(u_1, …, u_m)

for any v, u_i ∈ K. In particular, if we take v = 1/u_m (assuming u_m ≠ 0) we find

W(u_1, …, u_m)/u_m^m = W(u_1/u_m, …, u_{m-1}/u_m, 1) = (-1)^{m-1} W(∂(u_1/u_m), …, ∂(u_{m-1}/u_m)).

Now suppose that W(f_1, …, f_m) vanishes. By induction on m we show that f_1, …, f_m are C-linearly dependent. For m = 1 the statement is obvious. So assume m > 1. If f_m = 0 we are done, so we can now assume that f_m is not zero. By the remarks made above, the vanishing of W(f_1, …, f_m) implies the vanishing of W(∂(f_1/f_m), …, ∂(f_{m-1}/f_m)). Hence, by the induction hypothesis, there exist a_1, …, a_{m-1} ∈ C such that

a_1∂(f_1/f_m) + ⋯ + a_{m-1}∂(f_{m-1}/f_m) = 0.

After taking primitives and multiplying by f_m on both sides we obtain a C-linear dependence relation between f_1, …, f_m. □
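The criterion of Wronski's lemma is easy to test in the example field C(z). The following sketch (Python with SymPy, using a small hand-written wronskian helper that is my own illustration, not a routine from the notes) computes the determinant for a C-linearly independent triple and for a dependent one.

    import sympy as sp

    z = sp.symbols('z')

    def wronskian(fs, var):
        """Determinant of the matrix whose i-th row holds the i-th
        derivatives of f_1, ..., f_m, as in Lemma 1.1.2."""
        fs = [sp.sympify(f) for f in fs]
        rows = [fs]
        for _ in range(len(fs) - 1):
            rows.append([sp.diff(f, var) for f in rows[-1]])
        return sp.simplify(sp.Matrix(rows).det())

    # C-linearly independent elements of C(z): nonzero Wronskian.
    print(wronskian([1, z, z**2], z))              # 2

    # C-linearly dependent elements (z**2 + 2*z = (z**2) + 2*(z)): zero Wronskian.
    print(wronskian([z, z**2, z**2 + 2*z], z))     # 0

The first call prints 2, the second prints 0, in line with the lemma.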
Lemma 1.1.3 Let L be a differential extension of K with C as field of constants. Then the solution space in L of a linear equation of order n is a C-vector space of dimension at most n.

Proof. It is clear that the solutions form a C-vector space. Consider n + 1 solutions y_1, …, y_{n+1} and let W be their Wronskian determinant. Note that the columns of this determinant all satisfy the same K-linear relation given by the differential equation. Hence W ≡ 0. According to Wronski's lemma this implies that y_1, …, y_{n+1} are C-linearly dependent. □

Combination of this lemma with the Cyclic vector Lemma yields

Lemma 1.1.4 Let L be a differential extension of K with C as field of constants. Then the solution space in L^n of an n × n-system of equations ∂y = Ay is a C-vector space of dimension at most n.

It is always possible to find differential extensions which have a maximal set of solutions. Without proof we quote the following theorem.

Theorem 1.1.5 (Picard-Vessiot) To any n × n-system of linear differential equations over K there exists a differential extension L of K with the following properties:

1. The field of constants of L is C.
2. There is an n-dimensional C-vector space of solutions to the system in L^n.

Moreover, if L is minimal with respect to these properties then it is uniquely determined up to differential isomorphism.

Let y_1, …, y_n be an independent set of solutions to an n × n-system ∂y = Ay. The matrix Y obtained by concatenation of all columns y_i is called a fundamental solution matrix. The Wronskian lemma together with the Cyclic vector Lemma imply that det(Y) ≠ 0.
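To illustrate the notion of a fundamental solution matrix, here is a brief sketch (Python with SymPy; the 2 × 2 companion system with p_1 = -2/z, p_2 = 2/z^2 is again an illustrative assumption, not an example taken from the notes) in which the columns built from the solutions z and z^2 give a matrix Y with ∂Y = AY and det(Y) = z^2 ≠ 0.

    import sympy as sp

    z = sp.symbols('z')

    # The same illustrative 2x2 companion system as before (an assumption):
    # p1 = -2/z, p2 = 2/z**2, so the last row of A is (-p2, -p1).
    A = sp.Matrix([[0, 1],
                   [-2/z**2, 2/z]])

    # Candidate fundamental solution matrix: its columns are the solution
    # vectors (y, dy/dz) obtained from y = z and y = z**2.
    Y = sp.Matrix([[z, z**2],
                   [1, 2*z]])

    print(sp.simplify(Y.diff(z) - A * Y))   # zero matrix: dY/dz = A*Y
    print(sp.simplify(Y.det()))             # z**2, nonzero in C(z)

Here the determinant z^2 is a nonzero element of C(z), as the Wronskian and Cyclic vector Lemmas predict.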