

Factorization of second-order linear differential equations and Liouville-Neumann expansions

Esther García1, Lance Littlejohn2, José L. López3 and Ester Pérez Sinusía4
1 Departamento de Matemáticas, IES Miguel Catalán de Zaragoza, Spain. e-mail: [email protected].
2 Department of Mathematics, Baylor University, One Bear Place # 97328, Waco, Texas 76798-7328.

e-mail: [email protected].
3 Departamento de Ingeniería Matemática e Informática, Universidad Pública de Navarra, 31006-Pamplona, Spain. e-mail: [email protected].
4 Departamento de Matemática Aplicada, IUMA, Universidad de Zaragoza, 50018-Zaragoza, Spain. e-mail: [email protected].

ABSTRACT. We introduce a quasi-factorization technique for second-order linear differential equations that brings together three rather diverse subjects of the field of differential equations: the factorization of differential equations (differential Galois theory), the Liouville-Neumann approximation, and the Frobenius theory. The factorization of a linear differential equation (differential Galois theory) is a theoretical tool used to solve the equation exactly, although its range of applicability is quite reduced. On the other hand, the Liouville-Neumann expansion is a practical tool that approximates a solution of the equation; it is based on a certain integral equation equivalent to the differential equation and its range of applicability is extraordinarily large. In this paper we combine both procedures. We use the ideas of the factorization to find families of integral equations equivalent to the differential equation. From those families of integral equations we propose new Liouville-Neumann expansions that approximate the solutions of the equation. The method is valid for either regular or regular singular equations. We discuss the convergence properties of the algorithms and illustrate them with examples of special functions. These families of integral equations are parametrized by a certain regular function, so they constitute indeed a function-parametric family of integral equations. A special choice of that function corresponds with a factorization of the differential equation. In this respect, other choices of that function may be considered as quasi-factorizations of the differential equation. The quasi-factorization technique also lets us find a relation between the Liouville-Neumann expansion and the Theory of Frobenius and show that the Liouville-Neumann method may be viewed as a generalization of the Theory of Frobenius.

2000 AMS Mathematics Subject Classification: 34A05; 34A30; 33B15; 33C05; 33C10; 33C15; 33C45.
Keywords & Phrases: Factorization of differential equations. Integral equations. Liouville-Neumann expansions. Approximation of special functions.

1. Introduction

Factorization of (linear or non-linear) differential equations is a largely studied topic because of its implications in the resolution of differential equations. It is, at the level of resolution of differential equations, the analogue of the factorization of polynomials at the level of resolution of algebraic equations. The problem of searching for algebraic algorithms for the solutions of algebraic equations is based on the Galois theory [19]; the problem of searching for closed algebraic formulas for the solutions of differential equations is based on the differential Galois theory [16], [23]. There exists an obvious analogy between the algebraic and differential theories. In the algebraic case the problem is equivalent to the problem of a factorization of a polynomial using only admissible numbers (elements of the algebraic field under consideration). The most important theorem of Galois theory establishes that "an algebraic equation is solvable by radicals if and only if the corresponding Galois group is solvable". In the differential case the problem is equivalent to the problem of a factorization of the differential equation in terms of a certain set of admissible (Liouvillian) functions [10]. Roughly speaking, this set of admissible functions consists of elementary functions, indefinite integrals of elementary functions and exponentials of these indefinite integrals (Kovacic's algorithm). The precise definition of admissible or Liouvillian function may be found in [10], although for the purposes of this paper we may content ourselves with a vague definition of admissible function: a function written in an algebraically closed way. Analogously to the Galois theorem for algebraic equations, in the differential Galois theory the following statement is established: "A linear differential equation system is solvable by quadratures (Liouvillian extensions) if and only if the connected component of the corresponding group is triangularizable" [2]. In 1986, Jerald Kovacic [10] presented an algorithm to solve second order linear differential equations with rational coefficient functions over ℂ in terms of admissible functions. However, when the Galois group of the reduced linear differential equation is exactly SL(2,ℂ), the Kovacic algorithm does not work and the reduced linear differential equation has no Liouvillian (admissible) solutions. Even though Kovacic's algorithm is an elegant and precise tool to exactly solve the differential equation, the problem is that, in most of the cases, the Galois group of the reduced linear differential equation is SL(2,ℂ) and the algorithm cannot be applied. To go deeper into the theory of factorization of differential equations, the differential Galois theory, or Kovacic's algorithm, we recommend [3], [5], [10], [16], [18], [20], [22], [23], [24] and references therein. In [4], [7], [8], [17] we can find applications of the factorization of differential equations to the computation of special functions.

We give a very brief introduction to the idea of factorization of regular second order linear differential equations. Consider a polynomial of degree two in a real variable x, p₂(x) = x² + ax + b. It is factorizable in the form p₂(x) = (x + x₁)(x + x₂), where x₁ and x₂ are the solutions of the non-linear algebraic system
\[
\begin{cases}
a = x_1 + x_2, \\
b = x_1 x_2.
\end{cases} \tag{1}
\]

Consider now a regular second order linear differential operator in the unknown y,

\[
L[y] := y'' + f(x)y' + g(x)y, \qquad f',\, g \in C[0,X], \tag{2}
\]
and the corresponding homogeneous equation L[y] = 0. For this differential equation, an analogous factorization to the above factorization of the polynomial p₂(x) is the following one:

\[
L[y] = [y' + B(x)y]' + A(x)[y' + B(x)y],
\]
where the functions A(x) and B(x) are solutions of the first order nonlinear differential system
\[
\begin{cases}
f = A + B, \\
g = AB + B'.
\end{cases} \tag{3}
\]

If we can find a solution (A, B) of the above system, then we have found two independent solutions of the equation L[y] = 0. To see this, we define

\[
L_A[y] := y' + Ay, \qquad L_B[y] := y' + By.
\]

Then we can write L[y] = LA[LB[y]]. (In the following, for simplicity in the notation, we will omit brackets and write only L[y] = LALB[y].) Then, an obvious solution y1(x) of the second order equation L[y] = 0 is a solution of the first order equation LB[y] = 0, that is,

\[
y_1(x) = \exp\left(-\int^x B(t)\,dt\right).
\]

If u(x) is a solution of the first order equation LA[u] = 0, that is,

\[
u(x) = \exp\left(-\int^x A(t)\,dt\right),
\]

then another solution of the second order equation L[y] = 0 is y₂(x) = L_B⁻¹[u], because L[y₂] = L_A L_B L_B⁻¹[u] = L_A[u] = 0. The inverse operator L_B⁻¹ may be constructed using the Green function of L_B [11],
\[
L_B^{-1}[y] = \int^x G_B(x,t)\,y(t)\,dt, \qquad G_B(x,t) = \frac{y_1(x)}{y_1(t)}. \tag{4}
\]
Therefore, a second solution of the second order equation L[y] = 0 is
\[
y_2(x) = y_1(x)\int^x \frac{u(t)}{y_1(t)}\,dt
= \exp\left(-\int^x B(t)\,dt\right)\int^x \exp\left(\int^t [B(s)-A(s)]\,ds\right)dt.
\]

The solutions y₁ and y₂ are linearly independent [11] and then {y₁, y₂} form a basis of the space of solutions of L[y] = 0. (In [11] we can find the construction of complete systems of solutions of factorized linear differential equations of arbitrary order.)

Observation 1. If we try to solve (3) by replacing A = f − B into the second equation, we obtain that B is a solution of the Riccati equation B' = B² − fB + g. But precisely B and y₁ are related by means of the Riccati transformation B = −y'/y. Therefore, y₁ is just the solution of L[y] = 0 obtained by means of the standard Riccati transformation B = −y'/y. It is then appropriate to give the name Riccati system to the system (3).

It is straightforward to see that y₂ is precisely a second solution obtained from y₁ by the method of variation of parameters. We see that, in principle, the factorization L = L_A L_B gives us the same couple of independent solutions y₁ and y₂ obtained by the method of the Riccati transformation plus variation of parameters. It is then clear that the resolution of the Riccati equation is equivalent to the resolution of system (3).

Observation 2. Observe also that when the operator L has constant coefficients, we can choose A and B constant, and the differential system (3) becomes the algebraic system
\[
\begin{cases}
f = A + B, \\
g = AB,
\end{cases}
\]
the same system (1) that we have for the algebraic equation p₂(x) = 0. In this case the above solutions y₁ and y₂ of L[y] = 0 are just the ones obtained by the standard method applied to equations with constant coefficients,
\[
\begin{cases}
y_1(x) = e^{-Bx}, \quad y_2(x) = e^{-Ax}, & \text{if } A \neq B, \\
y_1(x) = e^{-Bx}, \quad y_2(x) = x\,e^{-Bx}, & \text{if } A = B.
\end{cases}
\]
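For concreteness, here is a small worked instance of the constant-coefficient case just described (our illustration, not one of the paper's examples). Take f = 3 and g = 2; the system A + B = 3, AB = 2 gives {A, B} = {1, 2}, so
\[
L[y] = y'' + 3y' + 2y = \Bigl(\tfrac{d}{dx} + 1\Bigr)\Bigl(\tfrac{d}{dx} + 2\Bigr)y
\quad\Longrightarrow\quad y_1(x) = e^{-2x}, \qquad y_2(x) = e^{-x},
\]
in agreement with the formulas y₁ = e^{−Bx}, y₂ = e^{−Ax} for A ≠ B (here with A = 1, B = 2).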

Then, somehow, the factorization of a linear differential operator of the second order may be considered as a generalization of the factorization of a second degree polynomial. We have taken a glance at the interesting theory of factorization of differential equations (for a full explanation see [10]). The problem is that, as is explained in [10], finding a factorization of a differential equation (that is, solving the Riccati system (3)) is, in general, the exception rather than the rule. The set of linear differential equations factorizable by means of Kovacic's algorithm is a very small subset of the set of linear differential equations. We will see in the next section that, although an exact factorization of (2) is not always possible, a quasi-factorization is, and that this quasi-factorization may be used as a platform to construct a new family of Liouville-Neumann methods to approximate the solutions of a differential equation. The same idea is true for second order linear differential equations with a regular singular point, and this is explained in Section 3. In the last part of this section, we briefly review the standard Liouville-Neumann methods used to approximate solutions of second order linear differential equations with regular or regular singular endpoints. In Section 4 we investigate, among the family of new Liouville-Neumann methods introduced in Sections 2 and 3, some particularly interesting methods and modifications of them and give some interesting examples of initial value problems with special function solutions. In Section 5 we show a relation of the Liouville-Neumann expansion and the quasi-factorization algorithm with the Theory of Frobenius. A few comments and remarks are postponed to Section 6.

1.1. The standard Liouville-Neumann expansion in the regular case

The standard Liouville-Neumann expansion for the equation (2) consists of an integration of the differential equation that transforms it into the integral equation [12]

y(x) = φ(x) + (T y)(x), with

\[
\phi(x) := y_0 + [y_0' + f(0)y_0]\,x, \qquad
(Ty)(x) := \int_0^x \bigl\{(x-t)[f'(t) - g(t)] - f(t)\bigr\}\,y(t)\,dt, \qquad y_0,\, y_0' \in \mathbb{R}.
\]

The solution of this integral equation is a solution of the differential equation (2) satisfying y(0) = 0 0 y0 and y (0) = y0 [12]. From this integral equation we define the Liouville-Neumann expansion: ( y0(x) = y0,

yn(x) = φ(x) + (T yn−1)(x), n = 1, 2, 3,...

The sequence {y_n(x)} converges uniformly on [0,X] to the unique solution of the initial value problem [12]
\[
\begin{cases}
y'' + f(x)y' + g(x)y = 0, \\
y(0) = y_0, \quad y'(0) = y_0'.
\end{cases}
\]
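The iteration above is straightforward to carry out symbolically. The following minimal sympy sketch (ours; the function name and the symbolic stand-ins for the initial data are our own choices) implements y_n = φ + T y_{n−1} for generic smooth coefficients f and g:

```python
# A minimal sketch of the standard Liouville-Neumann iteration for
# y'' + f(x) y' + g(x) y = 0,  y(0) = y0,  y'(0) = dy0.
import sympy as sp

x, t = sp.symbols('x t')

def liouville_neumann(f, g, y0, dy0, iterations):
    """Iterate y_n(x) = phi(x) + (T y_{n-1})(x) with the kernel given above."""
    phi = y0 + (dy0 + f.subs(x, 0) * y0) * x
    y = sp.sympify(y0)                                   # y_0(x) = y0
    for _ in range(iterations):
        kernel = (x - t) * (sp.diff(f, x).subs(x, t) - g.subs(x, t)) - f.subs(x, t)
        y = sp.expand(phi + sp.integrate(sp.expand(kernel * y.subs(x, t)), (t, 0, x)))
    return y

# Example: Airy's equation y'' - x y = 0 (f = 0, g = -x); compare with Example 1 below.
Ai0, dAi0 = sp.symbols('Ai0 dAi0')                       # stand-ins for Ai(0), Ai'(0)
print(liouville_neumann(sp.Integer(0), -x, Ai0, dAi0, 3))
```

With three iterations and f = 0, g = −x this returns the Taylor polynomial of Ai(x) of degree 9 (expressed in terms of Ai(0) and Ai'(0)), consistent with the discussion of Example 1 below.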

1.2. The standard Liouville-Neumann expansion in the regular singular case

The standard Liouville-Neumann approximation can also be used in the case of regular singular linear differential equations. Consider a homogeneous linear differential equation of second order with a regular singular point at x = 0:

\[
x^2 u'' + x F(x) u' + G(x) u = 0, \qquad F',\, G \in C[0,X].
\]

As is explained in [12], the function y(x) = x^{−λ}u(x), with u(x) a solution of this equation and λ the largest root of its characteristic equation, is the regular solution at x = 0 of an equation of the form
\[
L[y] := x y'' + f(x) y' + g(x) y = 0, \qquad f',\, g \in C[0,X]. \tag{5}
\]
Therefore, without loss of generality, we will consider this equation throughout the paper. It is shown in [12] that an integration of the differential equation (5) transforms it into the integral equation
\[
y(x) = \phi(x) + (Ty)(x), \tag{6}
\]
with

\[
\phi(x) := [f(0) - 1]\,y_0, \qquad
(Ty)(x) := \int_0^1 \bigl\{2 - f(xt) + x(1-t)[f'(xt) - g(xt)]\bigr\}\,y(xt)\,dt, \qquad y_0 \in \mathbb{R}.
\]
The solution of this integral equation is a solution of the differential equation (5) satisfying y(0) = y_0 [12]. From this integral equation we define the Liouville-Neumann expansion
\[
\begin{cases}
y_0(x) = y_0, \\
y_n(x) = \phi(x) + (T y_{n-1})(x), \qquad n = 1, 2, 3, \dots
\end{cases} \tag{7}
\]

When 0 < f(0) < 4, the sequence {y_n(x)} converges uniformly on [0,X] to the unique solution of the initial value problem [12]
\[
\begin{cases}
x y'' + f(x) y' + g(x) y = 0, \\
y(0) = y_0.
\end{cases} \tag{8}
\]
Both the existence and uniqueness of the solution of (8) (or equivalently (6)) and the convergence of (7) to that solution are proved in [13] by showing that T^n is contractive for large enough n provided 0 < f(0) < 4.
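As in the regular case, the iteration (7) can be carried out symbolically; the following sympy sketch (ours, with our own function name) shows one way to do it for generic f and g with 0 < f(0) < 4:

```python
# A sketch of the standard regular singular Liouville-Neumann iteration (7) for
# x y'' + f(x) y' + g(x) y = 0,  y(0) = y0,  assuming 0 < f(0) < 4.
import sympy as sp

x, t = sp.symbols('x t')

def singular_liouville_neumann(f, g, y0, iterations):
    fp = sp.diff(f, x)
    phi = (f.subs(x, 0) - 1) * y0
    y = sp.sympify(y0)                                   # y_0(x) = y0
    for _ in range(iterations):
        kernel = 2 - f.subs(x, x*t) + x*(1 - t)*(fp.subs(x, x*t) - g.subs(x, x*t))
        y = sp.expand(phi + sp.integrate(sp.expand(kernel * y.subs(x, x*t)), (t, 0, 1)))
    return y

# Example 3 below (f = a constant, g = 1, y0 = 1) gives a sequence of polynomials in x.
a = sp.Symbol('a', positive=True)
print(singular_liouville_neumann(sp.sympify(a), sp.Integer(1), sp.Integer(1), 2))
```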

2. Quasi-factorization and Liouville-Neumann expansions in the regular case

Both the quasi-factorization and the Liouville-Neumann expansion can be considered for linear differential equations of arbitrary order but, for the sake of clarity in the exposition, we consider only second order equations of the form (2). Moreover, we add two initial conditions to (2) and consider the initial value problem
\[
\begin{cases}
L[y] := y'' + f(x)y' + g(x)y = 0, \qquad f',\, g \in C[0,X], \\
y(0) = y_0, \quad y'(0) = y_0',
\end{cases} \tag{9}
\]

with y_0 and y_0' real numbers and X > 0. The key point of the discussion is the following observation: the differential expression L[y] may be written in the form

\[
L[y](x) = L_A L_B[y](x) - u(x),
\]
with
\[
L_A[y](x) := \frac{dy}{dx} + A(x)y(x), \qquad A(x) := f(x) - B(x),
\]
\[
L_B[y](x) := \frac{dy}{dx} + B(x)y(x), \qquad u(x) := [B'(x) + f(x)B(x) - B^2(x) - g(x)]\,y(x),
\]
and B'(x) ∈ C[0,X]. We emphasize, at this point, the remarkable fact that the expression

L_A L_B[y](x) − u(x) as defined above is a function-parametric expression (with B(x) as a parameter) and the identity L[y](x) = L_A L_B[y](x) − u(x) is valid for any B'(x) ∈ C[0,X]. Therefore, the solutions of L[y] = 0 are the solutions of L_A L_B[y] = u. Solving this equation for y we obtain y(x) = L_B⁻¹ L_A⁻¹ u(x). The inverse operators L_B⁻¹ and L_A⁻¹ are not well defined because the equations L_A[v] = u and L_B[y] = v have many solutions, which translates into the existence of many inverse operators. We select one of these inverse operators by imposing the initial conditions y(0) = y_0 and y'(0) = y_0'. Then we have
\[
v(x) = L_A^{-1}[u](x) = e^{-\int_0^x A(t)\,dt}\left[ v(0) + \int_0^x e^{\int_0^t A(s)\,ds}\, u(t)\,dt \right],
\]
with v(0) := y'(0) + B(0)y(0), and
\[
y(x) = L_B^{-1}[v](x) = e^{-\int_0^x B(t)\,dt}\left[ y(0) + \int_0^x e^{\int_0^t B(s)\,ds}\, v(t)\,dt \right].
\]
Joining both equations we find that, for any arbitrary (continuously differentiable) function B(x),
\[
y(x) = e^{-\int_0^x B(t)\,dt}\left\{ y(0) + \int_0^x e^{\int_0^t [2B(s)-f(s)]\,ds}\left[ y'(0) + B(0)y(0) + \int_0^t e^{\int_0^s [f(r)-B(r)]\,dr}\, u(s)\,ds \right]dt \right\}. \tag{10}
\]
This is a family of integral equations equivalent to the initial value problem (9): any solution of (9) is a solution of (10) and vice versa. Observe the quite remarkable fact that (10) is a function-parametric family (with B(x) as a parameter) of integral equations whose unique solution (for all of them) is the solution of the regular initial value problem (9). This family of integral equations may be written in the form

\[
y(x) = \phi(x) + (Ty)(x),
\]
with
\[
\phi(x) := e^{-\int_0^x B(t)\,dt}\left\{ y(0) + [y'(0) + B(0)y(0)]\int_0^x e^{\int_0^t [2B(s)-f(s)]\,ds}\,dt \right\}
\]
and

\[
(Ty)(x) := e^{-\int_0^x B(t)\,dt}\int_0^x dt\, e^{\int_0^t [2B(s)-f(s)]\,ds}\int_0^t e^{\int_0^s [f(r)-B(r)]\,dr}\,[B'(s)+f(s)B(s)-B^2(s)-g(s)]\,y(s)\,ds. \tag{11}
\]
The operator T : C[0,X] → C[0,X] is a bounded continuous operator in C[0,X] for continuous functions f, g, B and B' in [0,X]. In other words,

\[
|(Ty)(x)| \le M \int_0^x dt \int_0^t |y(s)|\,ds,
\]
where the positive constant M is a bound in [0,X] for the continuous function inside the integrand of the double integral in (11). Therefore we have that, on [0,X],

\[
|(Ty)(x)| \le \frac{M x^2}{2}\,\|y\|_\infty.
\]

It is easy to show by induction over n that

\[
|(T^n y)(x)| \le \frac{(M x^2)^n}{(2n)!}\,\|y\|_\infty,
\]
and then the operator T^n is contractive for large enough n [[6], Chap. 5, Sec. 2], [[9], Chap. 2, Sec. 3]. This means that the sequence of functions
\[
\begin{cases}
y_0(x) = y_0, \\
y_n(x) = \phi(x) + (T y_{n-1})(x), \qquad n = 1, 2, 3, \dots
\end{cases} \tag{12}
\]
converges to the unique solution of (10) [[21], Sec. 4.2.3.1]. Indeed, the recursion (12) is a function-parametric family of Liouville-Neumann recursions (one recursion for any B'(x) ∈ C[0,X]). All of them converge to the unique solution of (9). This family of Liouville-Neumann expansions generates new approximations different from the classical Liouville-Neumann expansion [[21], Sec. 4.2.3.1].

For the particular case B = 0 the integral equation simplifies to
\[
y(x) = y(0) + \int_0^x e^{-\int_0^t f(s)\,ds}\left[ y'(0) - \int_0^t e^{\int_0^s f(r)\,dr}\, g(s)\,y(s)\,ds \right]dt
\]
and, for n = 1, 2, 3, ..., the corresponding Liouville-Neumann algorithm becomes
\[
\begin{cases}
y_0(x) = y_0, \\
\displaystyle y_n(x) = y(0) + \int_0^x e^{-\int_0^t f(s)\,ds}\left[ y'(0) - \int_0^t e^{\int_0^s f(r)\,dr}\, g(s)\,y_{n-1}(s)\,ds \right]dt.
\end{cases}
\]
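A rough numerical sketch (ours) of the iteration (12) on a uniform grid follows: the function and parameter names are our own, and the scheme simply discretizes the nested integrals in (10)-(11) with the trapezoidal rule, so it is only meant to illustrate the structure of the recursion, not to be an optimized implementation.

```python
# Function-parametric Liouville-Neumann iteration (12) on [0, X] for user-supplied
# vectorized callables f, g, B, dB (dB = B'); all names here are our own choices.
import numpy as np

def quasi_factorized_iteration(f, g, B, dB, y0, dy0, X=1.0, n_grid=2001, n_iter=10):
    x = np.linspace(0.0, X, n_grid)
    fx, gx, Bx, dBx = f(x), g(x), B(x), dB(x)

    def cumint(h):                       # cumulative trapezoidal integral from 0 to x
        out = np.zeros_like(h)
        out[1:] = np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(x))
        return out

    IB   = cumint(Bx)                    # int_0^x B
    I2Bf = cumint(2*Bx - fx)             # int_0^x (2B - f)
    IfB  = cumint(fx - Bx)               # int_0^x (f - B)
    k    = dBx + fx*Bx - Bx**2 - gx      # u(x) = k(x) y(x)

    phi = np.exp(-IB) * (y0 + (dy0 + B(0.0)*y0) * cumint(np.exp(I2Bf)))
    y = np.full_like(x, float(y0))       # y_0(x) = y0
    for _ in range(n_iter):
        inner = cumint(np.exp(IfB) * k * y)             # int_0^t e^{int(f-B)} k y
        y = phi + np.exp(-IB) * cumint(np.exp(I2Bf) * inner)
    return x, y

# Airy test (Example 1 below), with B = 0 and approximate values of Ai(0), Ai'(0):
xg, yg = quasi_factorized_iteration(lambda s: 0*s, lambda s: -s, lambda s: 0*s,
                                    lambda s: 0*s, y0=0.3550, dy0=-0.2588, X=1.0)
```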

But the most remarkable fact is that, for the particular case of B(x) being a solution of the Riccati equation B'(x) + f(x)B(x) − B²(x) − g(x) = 0, we have that u(x) = 0 and the integral equation (10) becomes an integral representation of the solution y(x) of (9), an expression of the solution y(x) in terms of admissible functions:
\[
y(x) = e^{-\int_0^x B(t)\,dt}\left\{ y(0) + [y'(0) + B(0)y(0)]\int_0^x e^{\int_0^t [2B(s)-f(s)]\,ds}\,dt \right\}.
\]

This is just one of the solutions of the equation (2) obtained by factorization of the equation, which of course requires the resolution of the associated Riccati equation. Therefore, the factorization of the equation (2) in terms of admissible functions (equivalent to the resolution of the associated Riccati equation in terms of admissible functions) may be considered as a particular case of the family of integral equations (10), as a particular Liouville-Neumann algorithm that converges after one iteration.

Example 1. Consider the initial value problem
\[
\begin{cases}
y'' - x\,y = 0, \\
y(0) = \mathrm{Ai}(0), \quad y'(0) = \mathrm{Ai}'(0).
\end{cases}
\]

The unique solution of this initial value problem is y(x) = Ai(x), the classical Airy function. This function is not Liouvillian, as Kovacic's algorithm predicts for this differential equation

[[2], Nota 2.4], [10]. On the other hand, for any B'(x) ∈ C[0,X], the Liouville-Neumann expansion (12) converges to this function. In particular, for B = 0 (12) reads
\[
\begin{cases}
y_0(x) = \mathrm{Ai}(0), \\
\displaystyle y_n(x) = \mathrm{Ai}(0) + \mathrm{Ai}'(0)\,x + \int_0^x dt \int_0^t s\, y_{n-1}(s)\,ds, \qquad n = 1, 2, 3, \dots
\end{cases} \tag{13}
\]
This recursion generates the Taylor expansion of Ai(x) at x = 0:
\[
\mathrm{Ai}(x) = \mathrm{Ai}(0)\sum_{n=0}^{\infty} 3^n \left(\tfrac{1}{3}\right)_n \frac{x^{3n}}{(3n)!}
+ \mathrm{Ai}'(0)\sum_{n=0}^{\infty} 3^n \left(\tfrac{2}{3}\right)_n \frac{x^{3n+1}}{(3n+1)!}.
\]
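A quick symbolic check of (13) (our sketch; Ai0 and dAi0 are symbolic stand-ins for Ai(0) and Ai'(0)):

```python
# Three iterations of recursion (13) for the Airy equation y'' - x y = 0.
import sympy as sp

x, t, s = sp.symbols('x t s')
Ai0, dAi0 = sp.symbols('Ai0 dAi0')        # stand-ins for Ai(0), Ai'(0)

y = Ai0                                   # y_0(x) = Ai(0)
for n in range(1, 4):
    inner = sp.integrate(s * y.subs(x, s), (s, 0, t))
    y = sp.expand(Ai0 + dAi0*x + sp.integrate(inner, (t, 0, x)))

# y_3 = Ai0*(1 + x**3/6 + x**6/180 + x**9/12960) + dAi0*(x + x**4/12 + x**7/504),
# i.e. the Taylor polynomial of Ai(x) of degree 9 = 3n with n = 3.
print(y)
```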

More precisely, the n-th iteration y_n(x) of the Liouville-Neumann algorithm is the Taylor polynomial of degree 3n of Ai(x). The standard Liouville-Neumann algorithm for this example is given by
\[
\begin{cases}
y_0(x) = \mathrm{Ai}(0), \\
\displaystyle y_n(x) = \mathrm{Ai}(0) + \mathrm{Ai}'(0)\,x + \int_0^x (x-t)\,t\, y_{n-1}(t)\,dt, \qquad n = 1, 2, 3, \dots
\end{cases}
\]
Integrating by parts in the integral appearing in the previous algorithm (13), it is easy to check that both algorithms are, in this case, the same. But this is not the case for other choices of B(x): for example, if we choose B(x) = 1 and use the same starting point in the iterative method, the corresponding Liouville-Neumann algorithm is given by
\[
\begin{cases}
y_0(x) = \mathrm{Ai}(0), \\
\displaystyle y_n(x) = e^{-x}\left\{ \mathrm{Ai}(0) + \int_0^x e^{2t}\left[ \mathrm{Ai}'(0) + \mathrm{Ai}(0) + \int_0^t e^{-s}(s-1)\,y_{n-1}(s)\,ds \right]dt \right\}, \qquad n = 1, 2, 3, \dots
\end{cases}
\]

This algorithm provides an iteration sequence {y_n(x)} different from the one obtained for B(x) = 0, and it converges to Ai(x) for any x ∈ ℝ. The first two approximations of this new sequence are given by:

\[
y_1(x) = \mathrm{Ai}(0)(1-x) + \bigl(\mathrm{Ai}(0)+\mathrm{Ai}'(0)\bigr)\sinh x,
\]
\[
y_2(x) = \mathrm{Ai}(0)\bigl(x^2 - 2x + 3 - 2e^{-x}\bigr) + \mathrm{Ai}'(0)\sinh x
+ \bigl(\mathrm{Ai}(0)+\mathrm{Ai}'(0)\bigr)\frac{x-2}{8}\Bigl[(x-1)e^{x} + (x+1)e^{-x}\Bigr].
\]

In Figure 1 we have represented the exact solution y(x) = Ai(x) and some iterations y_n(x) for the two different algorithms obtained for B(x) = 0 and B(x) = 1. As observed in the graphs, the iterations provided by B(x) = 0 seem to approximate the exact solution faster on intervals of the form [a, 0] with a < 0, while the ones provided by B(x) = 1 converge faster on intervals of the form [0, b] with b > 0.


Figure 1. Plot of the exact solution y(x) = Ai(x) (thick dashed red) of Example 1 and the approximations y_2(x) (blue), y_3(x) (orange), y_4(x) (magenta) and y_5(x) (green) obtained with B(x) = 0 and B(x) = 1 respectively.

Example 2. Consider the initial value problem
\[
\begin{cases}
y'' + 2x\,y' = 0, \\
y(0) = 0, \quad y'(0) = 2/\sqrt{\pi},
\end{cases}
\]

whose unique solution is the error function y(x) = erf(x). This function is a primitive of e^{−t²} (up to the factor 2/√π) [[1], eq. 7.1.1] and then it is Liouvillian, as Kovacic's algorithm assures for this differential equation [[2], Nota 2.4], [10]. Moreover, the application of the standard Liouville-Neumann algorithm generates the Taylor series at x = 0 of the error function, that is,

\[
\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{n!\,(2n+1)}.
\]

On the other hand, for any B'(x) ∈ C[0,X], the Liouville-Neumann algorithm (12) generates infinitely many (one for every choice of B(x)) sequences of functions that converge to erf(x). In particular, for B = 0 the algorithm (12) reads
\[
\begin{cases}
y_0(x) = 0, \\
\displaystyle y_n(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt, \qquad n = 1, 2, 3, \dots
\end{cases}
\]
and provides the integral representation of the error function (its definition) for any n, that is, it generates a sequence of functions that converges to erf(x) after one iteration. Observe that in this case, since g(x) = 0, B(x) = 0 is a solution of the Riccati equation B'(x) + f(x)B(x) − B²(x) − g(x) = 0. For a different choice of B(x), such as B(x) = x, the Liouville-Neumann algorithm (12) is given by
\[
\begin{cases}
y_0(x) = 0, \\
\displaystyle y_n(x) = e^{-x^2/2}\int_0^x \left[ \frac{2}{\sqrt{\pi}} + \int_0^t e^{s^2/2}(1+s^2)\,y_{n-1}(s)\,ds \right]dt, \qquad n = 1, 2, 3, \dots
\end{cases} \tag{14}
\]

The iteration sequence {yn(x)} obtained in this case is a combination of products of an exponential function and polynomials of degree 4n − 3. For example,

\[
y_1(x) = e^{-x^2/2}\,\frac{2x}{\sqrt{\pi}}, \qquad
y_2(x) = \frac{e^{-x^2/2}}{\sqrt{\pi}}\left( 2x + \frac{x^3}{3} + \frac{x^5}{10} \right),
\]
\[
y_3(x) = \frac{e^{-x^2/2}}{\sqrt{\pi}}\left( 2x + \frac{x^3}{3} + \frac{7x^5}{60} + \frac{13x^7}{1260} + \frac{x^9}{720} \right).
\]
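These iterates are easy to reproduce symbolically. Writing y_n(x) = e^{−x²/2} q_n(x), the recursion (14) becomes a purely polynomial recursion for q_n, which the following sympy sketch (ours) iterates:

```python
# Recursion (14) for erf(x) with B(x) = x, written for the polynomial factor q_n:
#   q_n(x) = int_0^x [ 2/sqrt(pi) + int_0^t (1 + s**2) q_{n-1}(s) ds ] dt,  q_0 = 0.
import sympy as sp

x, t, s = sp.symbols('x t s')
q = sp.Integer(0)                                   # q_0 = 0 (since y_0 = 0)
for n in range(1, 4):
    inner = sp.integrate((1 + s**2) * q.subs(x, s), (s, 0, t))
    q = sp.integrate(2/sp.sqrt(sp.pi) + inner, (t, 0, x))

print(sp.expand(q * sp.sqrt(sp.pi)))
# -> 2*x + x**3/3 + 7*x**5/60 + 13*x**7/1260 + x**9/720, in agreement with y_3 above
```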

In Figure 2 the exact solution y(x) = erf(x) and some iteration functions y_n(x) in (14) have been represented. It can also be observed that, comparing the n-th iteration y_n(x) in (14) with a Taylor polynomial of the same degree, the Liouville-Neumann algorithm with B(x) = x provides a better approximation than the standard Liouville-Neumann algorithm which, in this example, agrees with the Taylor polynomial of the exact solution (see Figure 2).


Figure 2. In the first graph, plot of the exact solution y(x) = erf(x) (thick dashed red) in Example 2 and the approximations y_2(x) (blue), y_3(x) (orange), y_4(x) (magenta) and y_5(x) (green) obtained for B(x) = x. In the second graph, plot of the exact solution y(x) = erf(x) (thick dashed red) in Example 2, the iteration y_5(x) (blue) obtained for B(x) = x and the Taylor polynomial of degree 17 obtained with the standard Liouville-Neumann algorithm for n = 8 (brown).

3. Quasi-factorization and Liouville-Neumann expansions in the regular singular case

As in the regular case, both the quasi-factorization and the Liouville-Neumann expansions can be considered for linear differential equations of arbitrary order having a regular singular point [14]. But, as in the regular case, for the sake of clarity in the exposition we only consider second order equations of the form (5). We add an initial condition to (5) and consider the initial value problem (8):
\[
\begin{cases}
L[y] := x y'' + f(x)y' + g(x)y = 0, \qquad f',\, g \in C[0,X], \\
y(0) = y_0, \quad y' \in C[0,X],
\end{cases} \tag{15}
\]
with y_0 a real number and X > 0. Similarly to the regular case, the differential expression L[y] may be written in the form

L[y](x) = LALB[y](x) − u(x), with

\[
L_A[y](x) := x\frac{dy}{dx} + A(x)y(x), \qquad A(x) := f(x) - xB(x),
\]
\[
L_B[y](x) := \frac{dy}{dx} + B(x)y(x), \qquad u(x) := [xB'(x) + f(x)B(x) - xB^2(x) - g(x)]\,y(x),
\]

and xB'(x) ∈ C[0,X]. As in the regular case, the identity L[y](x) = L_A L_B[y](x) − u(x) is valid for any xB'(x) ∈ C[0,X]. Therefore, the solutions of L[y] = 0 are the solutions of L_A[v] = u, L_B[y] = v. Solving these equations for y we obtain y(x) = L_B⁻¹ L_A⁻¹ u(x). As explained above, the operator L_B⁻¹ is multiply defined and we select one inverse by imposing the initial condition y(0) = y_0.

On the other hand, to compute the inverse of the operator L_A, consider the equation L_A[v] := x v' + A v = u. After the change of variable v = x^{−α} w, with α := A(0) > 0, we find a regular equation for w:
\[
\begin{cases}
\displaystyle w' + \frac{A - \alpha}{x}\,w = x^{\alpha-1}u, \\
w \in C[0,X].
\end{cases} \tag{16}
\]

The general solution of this equation is
\[
w(x) = e^{\int_0^x \frac{\alpha - A(t)}{t}\,dt}\left\{ w(0) + \int_0^x t^{\alpha-1} u(t)\, e^{\int_0^t \frac{A(s)-\alpha}{s}\,ds}\,dt \right\}.
\]

Therefore, the unique solution of L_A[v] = u is
\[
v(x) = L_A^{-1}[u](x) = e^{\int_0^1 \frac{\alpha - A(xt)}{t}\,dt}\int_0^1 t^{\alpha-1}\, u(xt)\, e^{\int_0^1 \frac{A(xts)-\alpha}{s}\,ds}\,dt.
\]

On the other hand,
\[
y(x) = L_B^{-1}[v](x) = e^{-\int_0^x B(t)\,dt}\left[ y(0) + \int_0^x v(t)\, e^{\int_0^t B(s)\,ds}\,dt \right].
\]

Joining the last two equations we find that, for any arbitrary function B(x) (satisfying xB'(x) ∈ C[0,X]),
\[
y(x) = e^{-\int_0^x B(t)\,dt}\left\{ y(0) + \int_0^x dt\, e^{\int_0^t B(s)\,ds}\, e^{\int_0^1 \frac{\alpha - A(ts)}{s}\,ds}\int_0^1 s^{\alpha-1}\, k(ts)\,y(ts)\, e^{\int_0^1 \frac{A(tsr)-\alpha}{r}\,dr}\,ds \right\}, \tag{17}
\]
with k(x) := xB'(x) + f(x)B(x) − xB²(x) − g(x). This is a family of integral equations equivalent to the initial value problem (15): any solution of (15) is a solution of (17) and vice versa. As happened in the regular case, observe the quite remarkable fact that (17) is a function-parametric family (with B(x) as a parameter) of integral equations whose unique solution (for all of them) is the solution of the regular singular initial value problem (15). This family of integral equations may be written in the form

\[
y(x) = \phi(x) + (Ty)(x),
\]
with
\[
\phi(x) := y(0)\, e^{-\int_0^x B(t)\,dt}
\]
and
\[
(Ty)(x) := e^{-\int_0^x B(t)\,dt}\int_0^x dt\, e^{\int_0^t B(s)\,ds}\, e^{\int_0^1 \frac{\alpha - A(ts)}{s}\,ds}\int_0^1 s^{\alpha-1}\, k(ts)\,y(ts)\, e^{\int_0^1 \frac{A(tsr)-\alpha}{r}\,dr}\,ds. \tag{18}
\]

The operator T : C[0,X] → C[0,X] is a bounded continuous operator in C[0,X] for continuous functions f, g, B and xB' in [0,X] and α = f(0) > 0. Therefore, as in the regular case, the operator T^n is contractive for large enough n (provided f(0) > 0). This means that the sequence of functions
\[
\begin{cases}
y_0(x) = y_0, \\
y_n(x) = \phi(x) + (T y_{n-1})(x), \qquad n = 1, 2, 3, \dots
\end{cases} \tag{19}
\]
converges to the unique solution of (17) when f(0) > 0. Formula (19) is a function-parametric family of Liouville-Neumann recursions (one recursion for any xB'(x) ∈ C[0,X]). All of them converge to the unique solution of (15). This family of Liouville-Neumann expansions generates new approximations different from the classical one [[12], Sec. 3, eq. 21].

For the particular case B = 0 the integral equation simplifies to
\[
y(x) = y(0) - \int_0^x dt\, e^{\int_0^1 \frac{f(0)-f(ts)}{s}\,ds}\int_0^1 s^{f(0)-1}\, g(ts)\,y(ts)\, e^{\int_0^1 \frac{f(tsu)-f(0)}{u}\,du}\,ds
\]
and, for n = 1, 2, 3, ..., the corresponding Liouville-Neumann algorithm simplifies to
\[
\begin{cases}
y_0(x) = y_0, \\
\displaystyle y_n(x) = y(0) - \int_0^x dt\, e^{\int_0^1 \frac{f(0)-f(ts)}{s}\,ds}\int_0^1 s^{f(0)-1}\, g(ts)\,y_{n-1}(ts)\, e^{\int_0^1 \frac{f(tsu)-f(0)}{u}\,du}\,ds.
\end{cases}
\]

Of course, for the particular case of B(x) being a solution of the regular singular Riccati equation xB'(x) + f(x)B(x) − xB²(x) − g(x) = 0, we have that u(x) = 0 and the integral equation (17) becomes an integral representation of the solution y(x) of (15), an expression of the solution y(x) in terms of admissible functions:
\[
y(x) = y(0)\, e^{-\int_0^x B(t)\,dt}.
\]
As happened in the regular case, this is just one of the solutions of the equation (5) obtained by factorization of the equation, which of course requires the resolution of the associated Riccati equation. Therefore, the factorization of the equation (5) in terms of admissible functions (equivalent to the resolution of the associated Riccati equation in terms of admissible functions) may be considered as a particular case of the family of integral equations (17), as a particular Liouville-Neumann algorithm that converges after one iteration.

Observation 3. The Liouville-Neumann expansion (19) converges to the unique solution of (15) for f(0) > 0. On the other hand, the convergence of the standard Liouville-Neumann expansion (7) is only proved for 0 < f(0) < 4 (see [12] and [13]). In fact, in [13] the existence and uniqueness of the solution of (15) is proved for 0 < f(0) < 4. We see here that indeed this window for f(0) may be widened to f(0) > 0 (see the connection with Frobenius' theory explained in Section 5).

Example 3. Consider the initial value problem
\[
\begin{cases}
x y'' + a y' + y = 0, \\
y(0) = 1, \quad y' \in C[0,\infty),
\end{cases}
\]
with a > 0. The general solution of the equation x y'' + a y' + y = 0 is
\[
y(x) = c_1\,\frac{\Gamma(a)}{x^{(a-1)/2}}\,J_{a-1}(2\sqrt{x}) + c_2\,\frac{\Gamma(2-a)}{x^{(a-1)/2}}\,J_{1-a}(2\sqrt{x}),
\]
where J_a(x) is the Bessel function regular at x = 0. Then, the unique solution of the initial value problem is y(x) = Γ(a) x^{-(a-1)/2} J_{a-1}(2√x). Kovacic's algorithm applied to this differential equation [[2], Nota 2.4], [10] assures that this function is not Liouvillian for a ∉ ℤ + 1/2. On the other hand, for any xB'(x) ∈ C[0,X], the Liouville-Neumann expansion (19) converges to this function. In particular, for B = 0 (19) reads
\[
\begin{cases}
y_0(x) = 1, \\
\displaystyle y_n(x) = 1 - \int_0^x dt \int_0^1 s^{a-1}\, y_{n-1}(st)\,ds, \qquad n = 1, 2, 3, \dots
\end{cases}
\]

This recursion generates the Taylor expansion of Γ(a) x^{-(a-1)/2} J_{a-1}(2√x) at x = 0:

\[
\frac{\Gamma(a)}{x^{(a-1)/2}}\,J_{a-1}(2\sqrt{x}) = \sum_{n=0}^{\infty} \frac{(-1)^n x^n}{n!\,(a)_n}.
\]
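The B = 0 recursion of Example 3 is simple enough to iterate symbolically; the following sympy sketch (ours) recovers the first partial sums of the series above:

```python
# B = 0 recursion of Example 3:  y_n(x) = 1 - int_0^x dt int_0^1 s**(a-1) y_{n-1}(s t) ds.
import sympy as sp

x, t, s = sp.symbols('x t s')
a = sp.Symbol('a', positive=True)

y = sp.Integer(1)                                       # y_0 = 1
for n in range(3):
    inner = sp.integrate(sp.expand(s**(a - 1) * y.subs(x, s*t)), (s, 0, 1))
    y = 1 - sp.integrate(inner, (t, 0, x))

print(sp.expand(y))
# equals 1 - x/a + x**2/(2*a*(a+1)) - x**3/(6*a*(a+1)*(a+2)) up to rearrangement,
# i.e. the first four terms of sum_n (-1)**n x**n / (n! (a)_n)
```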

The application of the standard regular singular Liouville-Neumann algorithm to this problem is restricted to 0 < a < 4. It is given by:

\[
y_n(x) = a - 1 + \int_0^1 \bigl(2 - a - x(1-t)\bigr)\,y_{n-1}(xt)\,dt.
\]

In such a case, the algorithm generates a sequence of polynomials of degree n which is different from the one obtained with B(x) = 0 for any value of a except for a = 2.

Example 4. Consider the initial value problem
\[
\begin{cases}
x^2 u'' + x u' + (x^2 - a^2)\,u = 0, \\
u(0) = J_a(0), \quad u' \in C[0,\infty),
\end{cases} \tag{20}
\]
with a > −1/2, with a solution being the Bessel function u(x) = J_a(x). The application of Kovacic's algorithm to this differential equation [[2], Nota 2.4], [10] lets us prove that this function is Liouvillian if and only if a ∈ ℤ + 1/2. This equation presents a regular singular point at x = 0. In order to apply the algorithm described in Section 3, we need to rewrite (20) in the form (15).

As is proved in [12], the solution u(x) = J_a(x) of (20) is related to the unique solution of this other problem,
\[
\begin{cases}
x y'' + (2a+1)\,y' + x\,y = 0, \\
\displaystyle y(0) = \frac{2^{-a}}{\Gamma(1+a)}, \quad y' \in C[0,\infty),
\end{cases} \tag{21}
\]
by u(x) = x^a y(x). For any xB'(x) ∈ C[0,X], the Liouville-Neumann expansion (19) converges to the function y(x) = x^{-a} J_a(x) provided a > −1/2; then u_n(x) := x^a y_n(x) defines an iterative sequence of functions that converges uniformly on [0,X] to the function u(x) = J_a(x) for a > −1/2. In particular, for B = 0 the algorithm (19) reads
\[
\begin{cases}
\displaystyle y_0(x) = \frac{2^{-a}}{\Gamma(1+a)}, \\
\displaystyle y_n(x) = \frac{2^{-a}}{\Gamma(1+a)} - \int_0^x t\,dt \int_0^1 s^{2a+1}\, y_{n-1}(st)\,ds, \qquad n = 1, 2, 3, \dots
\end{cases}
\]

This recursion generates the Taylor expansion of Ja(x) at x = 0:

\[
J_a(x) = \sum_{n=0}^{\infty} \frac{(-1)^n (x/2)^{a+2n}}{n!\,\Gamma(a+n+1)}.
\]

More exactly, the n−th iteration of the Liouville-Neumann algorithm is the Taylor polynomial of degree 2n of the exact solution. In this example, the standard regular singular Liouville-Neumann algorithm can be applied for −1/2 < a < 3/2. For these values, the algorithm generates a convergent sequence of polynomials of degree 2n, different from the one obtained with B(x) = 0 unless a = 1/2.

Figure 3 shows the exact solution u(x) = J_a(x) of problem (20) and the iterations u_n(x) with n = 2, 3, 4, 5 for a = 0 and a = 1/2. In Figure 4 approximations obtained with B(x) = 0 and with the standard regular singular Liouville-Neumann algorithm of the same order are compared.


Figure 3. Plot of the exact solution u(x) = J_a(x) (thick dashed red) of Example 4 and the approximations u_2(x) (blue), u_3(x) (orange), u_4(x) (magenta) and u_5(x) (green) obtained with B(x) = 0 for a = 0 and a = 1/2 respectively.


Figure 4. Plot of the exact solution u(x) = J_a(x) (thick dashed red) of Example 4 and the approximations of order 4 and 5 obtained with B(x) = 0 (blue) and with the standard regular singular Liouville-Neumann algorithm (brown) for a = 2/3.

Example 5. Consider the initial value problem
\[
\begin{cases}
x y'' + (a + x\sqrt{x})\,y' = 0, \\
y(0) = 1, \quad y' \in C[0,\infty),
\end{cases}
\]
with a > 0. The general solution of the equation x y'' + (a + x√x) y' = 0 is
\[
y(x) = c_1 + c_2\,\Gamma\!\left( \frac{2}{3}(1-a),\; \frac{2}{3}\,x\sqrt{x} \right).
\]

Then, the unique solution of the initial value problem is y(x) = 1. This function is obviously Liouvillian. On the other hand, for any xB'(x) ∈ C[0,X], the Liouville-Neumann expansion (19) converges to this function. In particular, for B = 0 the integral equation (17) becomes the exact representation of the solution y(x) = 1.

4. Some remarks about the choice of B(x)

The choice of B(x) in the Liouville-Neumann algorithm for the regular and for the regular singular case is a key point in order to design good approximation algorithms. The most complicated computational problem is generated by the presence of the exponential factors in these algorithms. For certain choices of B(x), and depending on the coefficient functions f(x) and g(x), the computation of the iterations in terms of elementary functions becomes difficult. We investigate in this section some considerations about this subject.

4.1. A modification of the Liouville-Neumann algorithm in the regular case

Observe that the family of integral equations (10) can be written in the form
\[
\widetilde{y}(x) = y(0) + \int_0^x e^{\int_0^t [2B(s)-f(s)]\,ds}\left\{ y'(0) + B(0)y(0)
+ \int_0^t e^{\int_0^s [f(r)-2B(r)]\,dr}\,[B'(s) + f(s)B(s) - B^2(s) - g(s)]\,\widetilde{y}(s)\,ds \right\}dt, \tag{22}
\]

with ỹ(x) = e^{∫₀^x B(t)dt} y(x). From this representation we have that

\[
\widetilde{y}(x) = \phi(x) + (T\widetilde{y})(x),
\]
with
\[
\phi(x) := y(0) + [y'(0) + B(0)y(0)]\int_0^x e^{\int_0^t [2B(s)-f(s)]\,ds}\,dt
\]
and

\[
(T\widetilde{y})(x) := \int_0^x dt\, e^{\int_0^t [2B(s)-f(s)]\,ds}\int_0^t e^{\int_0^s [f(r)-2B(r)]\,dr}\,\bigl[B'(s) + f(s)B(s) - B^2(s) - g(s)\bigr]\,\widetilde{y}(s)\,ds.
\]

Then, we obtain the following Liouville-Neumann algorithm:
\[
\begin{cases}
\widetilde{y}_0(x) = y(0), \\
\widetilde{y}_n(x) = \phi(x) + (T\widetilde{y}_{n-1})(x), \qquad n = 1, 2, 3, \dots
\end{cases} \tag{23}
\]

that, for any B'(x) ∈ C[0,X], converges uniformly on [0,X] to e^{∫₀^x B(t)dt} y(x), where y(x) is the unique solution of (10). As we can observe in (22), the choice B(x) = f(x)/2 simplifies the exponential factors and, for n = 1, 2, 3, ..., the corresponding Liouville-Neumann algorithm reduces to
\[
\begin{cases}
\widetilde{y}_0(x) = y(0), \\
\displaystyle \widetilde{y}_n(x) = y(0) + \left[ y'(0) + \frac{1}{2}f(0)y(0) \right]x + \int_0^x dt \int_0^t \left( \frac{1}{4}f^2(s) + \frac{1}{2}f'(s) - g(s) \right)\widetilde{y}_{n-1}(s)\,ds.
\end{cases}
\]

Thus, for some families of functions f(x) and g(x) (when they are polynomials, for example), the choice B(x) = f(x)/2 may simplify the computation of the Liouville-Neumann iterations. As is directly observed in (10), when f(x) = 0 the choice B(x) = 0 eliminates the exponential factors in (10). If we perform the Liouville transformation y(x) = w(x) e^{−(1/2)∫₀^x f(t)dt} in (9), we obtain that w satisfies the initial value problem
\[
\begin{cases}
w'' - q(x)\,w = 0, \\
w(0) = y_0, \quad w'(0) = y_0' + y_0 f(0)/2,
\end{cases} \tag{24}
\]

with q(x) := f²(x)/4 + f'(x)/2 − g(x). The corresponding Liouville-Neumann algorithm for B(x) = 0 simplifies to
\[
\begin{cases}
w_0(x) = w(0), \\
\displaystyle w_n(x) = w(0) + w'(0)\,x + \int_0^x dt \int_0^t \left( \frac{1}{4}f^2(s) + \frac{1}{2}f'(s) - g(s) \right) w_{n-1}(s)\,ds,
\end{cases}
\]

with w(x) = e^{(1/2)∫₀^x f(t)dt} y(x). This result agrees with the one previously obtained for ỹ(x). Finally, it is worth pointing out that the application of the standard Liouville-Neumann algorithm to the initial value problem (24) generates the same algorithm as the one obtained for B(x) = 0:
\[
\begin{cases}
w_0(x) = w(0), \\
\displaystyle w_n(x) = w(0) + w'(0)\,x + \int_0^x (x-t)\left( \frac{1}{4}f^2(t) + \frac{1}{2}f'(t) - g(t) \right) w_{n-1}(t)\,dt.
\end{cases}
\]
Thus, the classical Liouville-Neumann algorithm is a particular case of the function-parametric family of algorithms (12), corresponding to the case B(x) = 0 (see Example 1).
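As a concrete illustration of the simplification obtained with B(x) = f(x)/2, the following sympy sketch (ours) applies the modified recursion to the erf equation of Example 2 (f = 2x, g = 0, so q(x) = x² + 1); it reproduces the polynomial factors listed in Example 2, which is consistent with the agreement mentioned above:

```python
# Modified recursion with B = f/2 for y'' + 2x y' = 0 (Example 2):
#   w_n(x) = w(0) + w'(0) x + int_0^x int_0^t (s**2 + 1) w_{n-1}(s) ds dt,
# with w = exp(x**2/2) y, w(0) = 0, w'(0) = 2/sqrt(pi).
import sympy as sp

x, t, s = sp.symbols('x t s')
f, g = 2*x, sp.Integer(0)
q = sp.expand(f**2/4 + sp.diff(f, x)/2 - g)            # x**2 + 1
w0, dw0 = sp.Integer(0), 2/sp.sqrt(sp.pi)              # w(0) = y(0), w'(0) = y'(0) + f(0)y(0)/2

w = w0
for n in range(3):
    inner = sp.integrate(q.subs(x, s) * w.subs(x, s), (s, 0, t))
    w = sp.expand(w0 + dw0*x + sp.integrate(inner, (t, 0, x)))

print(sp.expand(w * sp.sqrt(sp.pi)))
# -> 2*x + x**3/3 + 7*x**5/60 + 13*x**7/1260 + x**9/720 after three iterations
```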

4.2. New families of integral equations in the regular case

We consider the family of integral equations (10) with B(x) = b ∈ ℝ, that is,
\[
y(x) = e^{-bx}\left\{ y(0) + \int_0^x e^{\int_0^t [2b-f(s)]\,ds}\left[ y'(0) + b\,y(0) + \int_0^t e^{\int_0^s [f(r)-b]\,dr}\,\bigl(b f(s) - b^2 - g(s)\bigr)\,y(s)\,ds \right]dt \right\}. \tag{25}
\]
Taking the derivative with respect to b in (25) and using that y(x) is independent of b, we obtain
\[
x\,y(x) = e^{-bx}\left\{ \int_0^x e^{\int_0^t [2b-f(s)]\,ds}\,\bigl[ y(0) + 2t\,(y'(0) + b\,y(0)) \bigr]\,dt
+ \int_0^x dt\, e^{\int_0^t [2b-f(s)]\,ds}\int_0^t e^{\int_0^s [f(r)-b]\,dr}\,\bigl[ f(s) - 2b + (2t-s)\bigl(b f(s) - b^2 - g(s)\bigr) \bigr]\,y(s)\,ds \right\}. \tag{26}
\]
For b = 0, (26) reads:

\[
x\,y(x) = \int_0^x e^{-\int_0^t f(s)\,ds}\,\bigl[ y(0) + 2t\,y'(0) \bigr]\,dt
+ \int_0^x dt\, e^{-\int_0^t f(s)\,ds}\int_0^t e^{\int_0^s f(r)\,dr}\,\bigl[ f(s) + (s-2t)\,g(s) \bigr]\,y(s)\,ds. \tag{27}
\]
On the other hand, if we set B(x) = f(x)/2 + b in (22), take the derivative of this expression with respect to b and evaluate at b = 0, we obtain this other family of integral equations equivalent to the initial value problem (9):
\[
\widetilde{y}(x) = y(0) + \left[ y'(0) + \frac{1}{2}f(0)y(0) \right]x
+ \int_0^x dt \int_0^t \frac{2t-s}{x}\left( \frac{1}{4}f^2(s) + \frac{1}{2}f'(s) - g(s) \right)\widetilde{y}(s)\,ds,
\]

with ỹ(x) = e^{∫₀^x f(t)/2 dt} y(x).

4.3. A modification of the Liouville-Neumann algorithm in the regular singular case

As in the regular case, the family of integral equations (17) can be written in the form
\[
\widetilde{y}(x) = y(0) + \int_0^x \int_0^1 e^{\int_0^t \left[2B(s) - \frac{f(s)-f(0)}{s}\right]ds}\, s^{f(0)-1}\, k(ts)\,\widetilde{y}(ts)\, e^{\int_0^{ts} \left[\frac{f(r)-f(0)}{r} - 2B(r)\right]dr}\,ds\,dt, \tag{28}
\]

where ỹ(x) = e^{∫₀^x B(t)dt} y(x) and k(x) := xB'(x) + f(x)B(x) − xB²(x) − g(x). From this representation we have that
\[
\widetilde{y}(x) = \phi(x) + (T\widetilde{y})(x),
\]
with φ(x) := y(0) and
\[
(T\widetilde{y})(x) := \int_0^x \int_0^1 e^{\int_0^t \left[2B(s) - \frac{f(s)-f(0)}{s}\right]ds}\, s^{f(0)-1}\, k(ts)\,\widetilde{y}(ts)\, e^{\int_0^{ts} \left[\frac{f(r)-f(0)}{r} - 2B(r)\right]dr}\,ds\,dt.
\]

Then, we obtain the following Liouville-Neumann algorithm for the function ỹ(x):
\[
\begin{cases}
\widetilde{y}_0(x) = y_0, \\
\widetilde{y}_n(x) = \phi(x) + (T\widetilde{y}_{n-1})(x), \qquad n = 1, 2, 3, \dots
\end{cases} \tag{29}
\]

that, for any xB'(x) ∈ C[0,X], converges uniformly on [0,X] to e^{∫₀^x B(t)dt} y(x), where y(x) is the unique solution of (15). In this case, the choice B(x) = (f(x) − f(0))/(2x) eliminates the exponential factors and, for n = 1, 2, 3, ..., the Liouville-Neumann algorithm (19) reads
\[
\begin{cases}
\widetilde{y}_0(x) = y(0), \\
\displaystyle \widetilde{y}_n(x) = y(0) + \int_0^x dt \int_0^1 s^{f(0)-1}\left[ \frac{1}{2}f'(ts) + \frac{1}{4ts}\bigl(f(ts)-f(0)\bigr)\bigl(f(ts)+f(0)-2\bigr) - g(ts) \right]\widetilde{y}_{n-1}(ts)\,ds.
\end{cases} \tag{30}
\]
Thus, for some families of functions f(x) and g(x) (when they are polynomials, for example), the choice B(x) = (f(x) − f(0))/(2x) simplifies the computation of the Liouville-Neumann iterations {ỹ_n(x)}. After the transformation y(x) = w(x) e^{−(1/2)∫₀^x [f(t)−f(0)]/t dt} in (15), we obtain that w(x) satisfies the initial value problem
\[
\begin{cases}
\displaystyle x w'' + f(0)\,w' + \left[ \frac{1}{4x}\bigl(2-f(x)-f(0)\bigr)\bigl(f(x)-f(0)\bigr) - \frac{1}{2}f'(x) + g(x) \right] w = 0, \\
w(0) = y_0.
\end{cases} \tag{31}
\]

As was proved in the regular case, it is straightforward to show that, for this problem, the Liouville-Neumann algorithm (19) with B(x) = 0 coincides with the above Liouville-Neumann algorithm obtained for ỹ(x). In contrast with the regular case, the application of the standard regular singular Liouville-Neumann algorithm to the initial value problem (31) (only convergent for 0 < f(0) < 4),
\[
\begin{cases}
w_0(x) = y(0), \\
\displaystyle w_n(x) = [f(0)-1]\,y(0)
+ \int_0^x \frac{dt}{x}\left\{ 2 - f(0) - (x-t)\left[ \frac{1}{4t}\bigl(2-f(t)-f(0)\bigr)\bigl(f(t)-f(0)\bigr) - \frac{1}{2}f'(t) + g(t) \right] \right\} w_{n-1}(t),
\end{cases}
\]
generates the same algorithm as the one obtained in (19) for B(x) = 0 only for the case f(0) = 2. In the following example, we show how this new modified algorithm can be useful in some cases in which the iterations of the general algorithm (19) are difficult to compute.

Example 6. Consider the initial value problem [[1], eq. 13.1.1]
\[
\begin{cases}
x y'' + (b-x)\,y' - a\,y = 0, \\
y(0) = 1, \quad y' \in C[0,\infty),
\end{cases}
\]
whose unique solution is the confluent hypergeometric function y(x) = M(a, b; x). Except for certain values of the parameters, this function is not Liouvillian, as Kovacic's algorithm predicts for this differential equation [10]. As is shown in [12], the application of the regular singular Liouville-Neumann algorithm, convergent for 0 < b < 4, generates a sequence of polynomials in x of degree n,
\[
y_n(x) = \sum_{k=0}^{n} c_k^{(n)}\, x^k,
\]
where, for n = 1, 2, 3, ...

\[
c_0^{(n)} = 1, \qquad c_n^{(n)} = \frac{(a)_n}{(m+n+1)\bigl((m+2)_{n-1}\bigr)^2(m+1)},
\]
\[
c_k^{(n)} = \frac{(m+1)(m+2k) - kb}{(m+k)(m+k+1)}\, c_k^{(n-1)} + \frac{k+a-1}{(m+k)(m+k+1)}\, c_{k-1}^{(n-1)}, \qquad k = 1, 2, 3, \dots, n-1.
\]

On the other hand, for any xB'(x) ∈ C[0,X], the Liouville-Neumann expansion (19) converges uniformly on [0,X] to the exact solution M(a, b; x). However, the computation of the sequence

{y_n(x)} turns out to be difficult in this example for several choices of B(x). The application of the algorithm (30), convergent for b > 0, reads
\[
\begin{cases}
\widetilde{y}_0(x) = 1, \\
\displaystyle \widetilde{y}_n(x) = 1 + \int_0^x dt \int_0^1 s^{b-1}\left( a - \frac{2b - ts}{4} \right)\widetilde{y}_{n-1}(ts)\,ds,
\end{cases}
\]
with ỹ(x) = e^{−x/2} y(x). In this case, the iterations can be easily computed and generate a sequence of polynomials in x of degree 2n. Then, the confluent hypergeometric function M(a, b; x) can be approximated by the product of the exponential factor e^{x/2} and that sequence of polynomials. In Figure 5 approximations obtained with algorithm (30) and with the standard regular singular Liouville-Neumann algorithm of the same order are compared for some values of the parameters a and b. As observed in the graphs for these values of the parameters, the algorithm (30) provides better approximations than the ones obtained with the standard algorithm.
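The iterations of this modified algorithm are straightforward to generate symbolically; the sketch below (ours) produces the degree-2n polynomials and the corresponding approximation e^{x/2} ỹ_n(x) to M(a, b; x):

```python
# Algorithm (30) for x y'' + (b - x) y' - a y = 0, whose solution is M(a, b; x).
import sympy as sp

x, t, s = sp.symbols('x t s')
a, b = sp.symbols('a b', positive=True)

yt = sp.Integer(1)                                     # ytilde_0 = y(0) = 1
for n in range(2):
    kernel = s**(b - 1) * (a - (2*b - t*s)/4)
    inner = sp.integrate(sp.expand(kernel * yt.subs(x, t*s)), (s, 0, 1))
    yt = sp.expand(1 + sp.integrate(inner, (t, 0, x)))

print(yt)                          # a polynomial in x of degree 2n = 4
approx = sp.exp(x/2) * yt          # approximation to M(a, b; x), as plotted in Figure 5
```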


Figure 5. Plot of the exact solution y(x) = M(a, b; x) (thick dashed red) of Example 6, the approximation of order 5 obtained with algorithm (30) (blue) and the approximation of order 10 given by the standard regular singular Liouville-Neumann algorithm (brown) for a = 1 and b = 3.

Example 7. Consider the initial value problem
\[
\begin{cases}
x y'' + (a + x\sqrt{x})\,y' + \sqrt{x}\,y = 0, \\
y(0) = 1, \quad y' \in C[0,\infty),
\end{cases}
\]
with a > 0. The general solution of the equation x y'' + (a + x√x) y' + √x y = 0 for a ≠ 1 is

\[
y(x) = c_1\,{}_1F_1\!\left( \tfrac{2}{3};\; \tfrac{1+2a}{3};\; -\tfrac{2}{3}x^{3/2} \right)
+ c_2\, x^{1-a}\,{}_1F_1\!\left( \tfrac{4-2a}{3};\; \tfrac{5-2a}{3};\; -\tfrac{2}{3}x^{3/2} \right).
\]

For a = 1 the general solution is
\[
y(x) = c_1\,{}_1F_1\!\left( \tfrac{2}{3};\; 1;\; -\tfrac{2}{3}x^{3/2} \right)
+ c_2\, G^{2,0}_{1,2}\!\left( \frac{2x^{3/2}}{3}\,\middle|\, \begin{matrix} 1/3 \\ 0,\ 0 \end{matrix} \right),
\]
where G denotes the Meijer G function.

In any case, the unique solution of the initial value problem is y(x) = ₁F₁(2/3; (1+2a)/3; −2x^{3/2}/3). This function is not Liouvillian, as Kovacic's algorithm predicts for this differential equation [10]. The Liouville-Neumann algorithm (30), convergent for a > 0, is given by
\[
\begin{cases}
\widetilde{y}_0(x) = 1, \\
\displaystyle \widetilde{y}_n(x) = 1 + \frac{1}{4}\int_0^x dt \int_0^1 s^{a-1}\sqrt{ts}\,\bigl( 2a - 3 + (ts)^{3/2} \bigr)\,\widetilde{y}_{n-1}(ts)\,ds,
\end{cases}
\]

with ỹ(x) = e^{x^{3/2}/3} y(x). For example, the first two terms of the sequence {y_n} are:
\[
y_1(x) = e^{-\frac{x^{3/2}}{3}}\left( 1 + \frac{2a-3}{3(2a+1)}\,x^{3/2} + \frac{1}{12(2+a)}\,x^3 \right),
\]
\[
y_2(x) = e^{-\frac{x^{3/2}}{3}}\left( 1 + \frac{2a-3}{3(2a+1)}\,x^{3/2} + \frac{2a^2-3a+6}{36a^2+90a+36}\,x^3
+ \frac{4a^2-9}{36(4a^3+24a^2+39a+14)}\,x^{9/2} + \frac{1}{288(a^2+7a+10)}\,x^6 \right).
\]
On the other hand, the first two iterations of the standard regular singular Liouville-Neumann algorithm (convergent for 0 < a < 4) are given by:
\[
y_1(x) = 1 - \frac{4}{15}\,x^{3/2}, \qquad
y_2(x) = 1 + \frac{4(2a-9)}{75}\,x^{3/2} + \frac{1}{18}\,x^3,
\]
whereas the first terms provided by the Taylor expansion at x = 0 of the solution are:

\[
y_1(x) = 1, \qquad
y_2(x) = 1 - \frac{4}{3(2a+1)}\,x^{3/2}, \qquad
y_3(x) = 1 - \frac{4}{3(2a+1)}\,x^{3/2} + \frac{10}{9(2a^2+5a+2)}\,x^3.
\]


Figure 6. Plot of the exact solution (thick dashed red) in Example 7, the approximation of order 5 obtained with algorithm (30) (blue), the approximation of order 10 given by the standard regular singular Liouville-Neumann algorithm (brown) and the partial sum of order 15 of the series expansion of the exact solution (magenta), for a = 1.

4.4. New families of integral equations in the regular singular case

Set B(x) = b ∈ ℝ in the family of integral equations (17), that is,
\[
y(x) = e^{-bx}\left\{ y(0) + \int_0^x dt\, e^{bt}\, e^{\int_0^t \left[ b - \frac{f(s)-f(0)}{s} \right]ds}\int_0^1 s^{f(0)-1}\, k(ts)\,y(ts)\, e^{\int_0^{ts} \left[ \frac{f(r)-f(0)}{r} - b \right]dr}\,ds \right\}, \tag{32}
\]
with k(x) := b f(x) − b²x − g(x). Taking the derivative with respect to b in (32), we obtain
\[
x\,y(x) = e^{-bx}\int_0^x dt\, e^{bt}\, e^{\int_0^t \left[ b - \frac{f(s)-f(0)}{s} \right]ds}\int_0^1 s^{f(0)-1}\,\Bigl[ f(ts) - 2bts + (2t-ts)\bigl( b f(ts) - b^2 ts - g(ts) \bigr) \Bigr]\, y(ts)\, e^{\int_0^{ts} \left[ \frac{f(r)-f(0)}{r} - b \right]dr}\,ds. \tag{33}
\]
For b = 0, (33) reads:
\[
x\,y(x) = \int_0^x dt\, e^{-\int_0^t \frac{f(s)-f(0)}{s}\,ds}\int_0^1 s^{f(0)-1}\,\bigl[ f(ts) - (2t-ts)\,g(ts) \bigr]\, y(ts)\, e^{\int_0^{ts} \frac{f(r)-f(0)}{r}\,dr}\,ds. \tag{34}
\]

On the other hand, if we set B(x) = (f(x) − f(0))/(2x) + b in (28), take the derivative with respect to b and evaluate at b = 0, we obtain this other family of integral equations:
\[
\widetilde{y}(x) = \int_0^x \frac{dt}{x}\int_0^1 s^{f(0)-1}\left\{ f(0) + (2-s)\,t\left[ \frac{1}{2}f'(ts)
+ \frac{1}{4ts}\bigl(f(ts)-f(0)\bigr)\bigl(f(ts)+f(0)-2\bigr) - g(ts) \right] \right\}\widetilde{y}(ts)\,ds, \tag{35}
\]

with ỹ(x) = e^{∫₀^x [f(t)−f(0)]/(2t) dt} y(x).

5. Connection with Frobenius’ theory in the regular singular case

The theory presented above generalizes, in a sense, the theory of Frobenius, which requires f and g to be analytic in [0,X], whereas in the above sections we have only required a certain differentiability. Just consider Example 5: the existence and uniqueness of the solution does not follow from the Frobenius theory because f is not analytic at x = 0. Example 3 shows that the condition f(0) > 0 (a > 0 in that example) is not only sufficient but also necessary in many instances to have uniqueness of the solution and convergence of the Liouville-Neumann expansion in the regular singular case. But the origin of that condition is very technical. In this section we analyze the lower bound f(0) > 0 when we restrict ourselves to analytic coefficients f and g. We are going to give a different explanation of the bound f(0) > 0 in the framework of Frobenius' theory. Consider the differential equation

\[
x y'' + f(x) y' + g(x) y = 0, \tag{36}
\]
with f and g analytic in [0,X]. The roots of its characteristic equation λ(λ − 1 + f(0)) = 0 are

λ₁ = 0 and λ₂ = 1 − f(0). When f(0) ∉ ℤ we have that the general solution of (36) is [15]

\[
y(x) = c_1\, y_1(x) + c_2\, x^{1-f(0)}\, y_2(x),
\]
where y₁(x) and y₂(x) are regular functions at x = 0 satisfying y₁(0) = y₂(0) = 1. The functions y₁(x) and x^{1−f(0)} y₂(x) are two independent solutions of (36). If f(0) < 1 we have that the singular initial value problem
\[
\begin{cases}
x y'' + f(x) y' + g(x) y = 0, \qquad x \in (0,X), \\
y(0) = y_0,
\end{cases} \tag{37}
\]

has an infinite number of solutions y(x) = y₀ y₁(x) + c₂ x^{1−f(0)} y₂(x), c₂ ∈ ℝ. On the other hand, if f(0) > 1 then the unique solution of (37) is y(x) = y₀ y₁(x). When f(0) ∉ ℤ, the general solution of (36) also satisfies

\[
y'(x) = c_1\, y_1'(x) + c_2\, x^{1-f(0)}\, y_2'(x) + c_2\,(1-f(0))\, x^{-f(0)}\, y_2(x).
\]

Therefore, if f(0) < 0, the singular initial value problem
\[
\begin{cases}
x y'' + f(x) y' + g(x) y = 0, \qquad x \in (0,X), \\
y(0) = y_0, \quad y' \in C[0,X],
\end{cases} \tag{38}
\]

has an infinite number of solutions y(x) = y₀ y₁(x) + c₂ x^{1−f(0)} y₂(x), c₂ ∈ ℝ. On the other hand, if f(0) > 0 then the unique solution of (38) is y(x) = y₀ y₁(x). If f(0) is a non-positive integer, say f(0) = 1 − p with p ∈ ℕ, then the two roots of the characteristic equation of

(36) are λ1 = 0 and λ2 = p. In this case, the general solution of the equation (36) is [15]

\[
y(x) = c_1\, x^p\, y_1(x) + c_2\,\bigl[ y_2(x) + c\, x^p\, y_1(x)\,\log x \bigr], \tag{39}
\]
where y₁(x) and y₂(x) are regular functions at x = 0 satisfying y₁(0) = y₂(0) = 1 and c₁, c₂, c ∈ ℝ. The derivative of this function has the form

\[
y'(x) = c_1\, x^p\, y_1'(x) + c_1\, p\, x^{p-1}\, y_1(x)
+ c_2\,\Bigl[ y_2'(x) + c\,\bigl( p\, x^{p-1}\, y_1(x)\,\log x + x^p\, y_1'(x)\,\log x + x^{p-1}\, y_1(x) \bigr) \Bigr].
\]

For p = 1 (f(0) = 0) we find that y'(x) is not regular at x = 0 unless c₂ = 0. But in this case y(x) = c₁ x y₁(x) and then y(0) = 0; y(x) is not a solution of (38) unless y₀ = 0. For p > 1 we have that y'(x) is regular at x = 0 for any c₂ ∈ ℝ and y(x) is a solution of (38) with c₂ = y₀, for any value of c₁ ∈ ℝ. Hence the singular initial value problem (38) has an infinite number of solutions when f(0) = 1 − p with p > 1, that is, for f(0) = −1, −2, ... In general, the singular initial value problem (38) does not have a unique solution when f(0) ∈ ℤ⁻.
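To close this section, here is the short, standard calculation behind the characteristic equation λ(λ − 1 + f(0)) = 0 used throughout (a sketch of a well-known fact, included only for completeness). Substituting a Frobenius series y = x^λ Σ_{n≥0} a_n x^n with a₀ ≠ 0 into (36), with f and g analytic at 0, the lowest-order power x^{λ−1} gives
\[
\lambda(\lambda-1)\,a_0\, x^{\lambda-1} + f(0)\,\lambda\, a_0\, x^{\lambda-1} + O\!\left(x^{\lambda}\right) = 0
\quad\Longrightarrow\quad \lambda\bigl(\lambda - 1 + f(0)\bigr) = 0,
\]
so λ₁ = 0 and λ₂ = 1 − f(0), as stated above.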

6. Final Remarks

In this paper we have introduced the idea of quasi-factorization of second-order linear differential equations to find families of integral equations equivalent to the differential equations. From these families of integral equations we have obtained new Liouville-Neumann algorithms that approximate the solutions of the equation. The method is valid for either regular or regular singular equations. These families of integral equations depend on a certain regular function B(x), so they constitute indeed a function-parametric family of integral equations. For the particular case of B(x) being a solution of the Riccati equation, the integral representation of the differential equation associated to this B(x) provides an expression of the solution in terms of admissible functions that corresponds with a factorization, and not a quasi-factorization, of the differential equation. However, such a particular function is difficult to find in most of the cases.

The Liouville-Neumann algorithms obtained in the regular case for y'' + f(x)y' + g(x)y = 0 generate a sequence of functions that converges uniformly on compacts to the unique solution of the problem, whereas in the regular singular case for xy'' + f(x)y' + g(x)y = 0 the iterative sequences of functions obtained with the new Liouville-Neumann algorithms converge uniformly on compacts when f(0) > 0. This last condition is better than the one obtained in the regular singular case [12], 0 < f(0) < 4, where a Liouville-Neumann algorithm is obtained directly from the second order differential equation, without using a quasi-factorization of the equation. The condition f(0) > 0 in the regular singular case has also been explained in the frame of the Frobenius theory.

A slight modification of these new Liouville-Neumann algorithms has allowed us to obtain modified Liouville-Neumann algorithms that simplify the computation of the iterations. From some particular choices of B(x) in these algorithms, it has been shown that the standard Liouville-Neumann algorithms can be considered as a particular case of these new families. Among the new Liouville-Neumann algorithms and for some special cases of B(x), other algorithms have been derived. The new algorithms have been applied to differential equations whose solutions are special functions, obtaining either known or new expansions. The new expansions have been compared with the existing ones, Taylor expansions or the ones obtained from the standard Liouville-Neumann algorithms.

7. Acknowledgments

E. Pérez Sinusía thanks the Department of Mathematics of Baylor University for its hospitality during the completion of this work. The Ministerio de Ciencia y Tecnología (REF. MTM2007-63772) is acknowledged for its financial support.

References

[1] M. Abramowitz and I.A. Stegun, Handbook of mathematical functions, Dover, New York, 1970.

[2] P. B. Acosta, La teoría de Morales-Ramis y el algoritmo de Kovacic, Lecturas Matemáticas, Volumen Especial, (2006), 21-56.

[3] L. M. Berkovich, Method of factorization of ordinary differential operators and some of its applications, Applicable Analysis and Discrete Mathematics. 1 (2007) 122-149.

[4] A. Bernardini and P. Natalini, Factorization of differential operators for some special functions, Soochow J. Math. 29 n. 2 (2003) 147-156.

[5] M. Bronstein, An improved algorithm for factoring ordinary differential operators, Proceedings of ISSAC'94, Oxford, UK, ACM Press, 1994, 336-340.

[6] L. Debnath and P. Mikusiński, Introduction to Hilbert Spaces with Applications, Academic Press, New York, 1999.

[7] M. Foupouagnigni, W. Koepf and A. Ronveaux, Factorization of fourth-order differential equations for perturbed classical orthogonal polynomials, J. Comput. Appl. Math. 162 (2004) 299-326.

[8] M. X. He and P. E. Ricci, Differential equation of Appell polynomials via the factorization method, J. Comput. Appl. Math. 139 (2002) 231-237.

[9] H. Hochstadt, Integral Equations, Wiley and Sons, New York, 1996.

[10] J. Kovacic, An algorithm for solving second order linear homogeneous differential equations. J Symbolic Computation. 2, (1986), 3-43.

[11] L. Littlejohn and J. L. Lopez, Variation of parameters and solutions of composite products of linear differential equations, to be published in J. Math. Anal. Appl.

[12] J. L. Lopez, The Liouville-Neumann expansion at a regular singular point, J. Diff. Equat. Appl. 15 n. 2 (2009) 119-132.

[13] J. L. Lopez, Initial value problems for second order linear differential equations with a regular singular point. Submitted.

[14] C. Ferreira and J. L. Lopez, Initial value problems for linear differential equations with a regular singular point. In preparation.

[15] A. D. Polyanin and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations, Chapman & Hall/CRC, New York, 2002.

[16] A. Magid, Lectures on Differential Galois Theory, University Lecture Series 7, American Mathematical Society, Providence, RI, 1994.

[17] A. Ronveaux, Factorization of Heun's differential operator, Appl. Math. Comp. 141 (2003), 177-184.

[18] W. Robin, Operator factorization and the solution of second-order linear ordinary differential equations, Int. J. Math. Education Sci. Tec. 38 n. 2 (2007) 189-211.

[19] J. Rotman, Galois Theory, Springer-Verlag, New York, 1998.

[20] F. Schwarz, A factorization algorithm for linear ordinary differential equations, Proceedings of ISSAC’89, ACM Press, 1989, 17-25.

[21] N.M. Temme, Special Functions: An Introduction to the Classical Functions of Mathematical Physics, Wiley and Sons, New York, 1989.

[22] S. P. Tsarev, An algorithm for complete enumeration of all factorizations of a linear ordinary differential operator, Proceedings of ISSAC'96, ACM Press, 1996, 176-189.

[23] M. van der Put and M. Singer, Galois Theory of Linear Differential Equations, Springer-Verlag, New York, 2003.

[24] M. Van Hoeij, Factorization of differential operators with rational functions coefficients, J. Symb. Comput. 24 (1997) 537-561.