
Asymptotic Expansions

V 5.9 2020/10/15

The interest in asymptotic expansions originated from the necessity of searching for approximations to functions close to the point(s) of interest. Suppose we have a function f(x) of a single real variable x and we are interested in an approximation to f(x) for x "close to" x0. The function f(x) may be undefined

at x = x0; however, the limit lim_{x→x0} f(x) = A must exist (finite, 0, or ∞) as x0 is approached from below or from above. An approximation to f(x) for x close to x0 is a function g(x) which is close to f(x) as x → x0. The formal definition is:

Definition. Let D ⊆ R, x0 a condensation point of D, and f(x), g(x) and η(x) real functions defined on D \ x0, i.e., on D excluding x0. The function g(x) is an approximation¹ to the function f(x) to order η(x) as x → x0 in D if

lim_{x→x0} [f(x) − g(x)] / η(x) = 0,

i.e., if for any small positive number δ there exists a punctured neighborhood Uδ(x0) of x0 such that

|f(x) − g(x)| ≤ δ η(x) for x ∈ Uδ(x0) ∩ D.
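As a small numerical illustration (not part of the formal development): g(x) = x is an approximation to f(x) = sin x to order η(x) = x² as x → 0, and the defining ratio can be watched going to zero:

```python
import math

# Numerical check that g(x) = x approximates f(x) = sin(x)
# to order eta(x) = x^2 as x -> 0: the ratio |f - g| / eta -> 0.
def ratio(x):
    return abs(math.sin(x) - x) / x**2

ratios = [ratio(10.0**(-k)) for k in range(1, 6)]
print(ratios)  # shrinks roughly like x/6

# the ratio decreases monotonically as x -> 0
assert all(a > b for a, b in zip(ratios, ratios[1:]))
assert ratios[-1] < 1e-5
```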

The function η(x) is called a gauge function and intuitively measures how close g(x) is to f(x) as x → x0. Searching for an approximation to f(x) as x → x0 actually requires knowing the rate at which f(x) → A as x → x0, i.e., how the function f(x) asymptotically approaches the value A as x → x0 along the chosen direction; in other words, the asymptotic behavior of f(x) as x → x0.
The above example is a particular case of a function of a single real variable. In many problems we deal with functions f(x1, . . . , xn; ε1, . . . , εm) depending on two sets of arguments: the set (x1, . . . , xn), called variables, and the set (ε1, . . . , εm), called parameters. The distinction between the two sets is not intrinsic and depends on the context. For functions of more than one argument the above definition must be slightly modified. Suppose we look for an approximation to the function f(x; ε) for x

¹ The limit is evaluated in the domain D \ x0. This will always be understood in the following.

Preprint submitted to Elsevier October 15, 2020

in some region D as the parameter ε approaches the value 0, ε → 0+. Without loss of generality we assume 0+ as the distinguished value of the parameters. An approximation to f(x; ε) is a function g(x; ε) which is uniformly close to f(x; ε) as ε → 0+ for all x in D. Thus the above definition becomes:

Definition. Given the real functions f(x; ε) and g(x; ε) defined for x in the domain D ⊆ R and ε in the interval I : 0 < ε ≤ ε0, and the real function η(ε) defined for ε ∈ I. The function g(x; ε) is an approximation to the function f(x; ε) to order η(ε) for all x ∈ D as ε → 0+ if

lim_{ε→0+} [f(x; ε) − g(x; ε)] / η(ε) = 0 uniformly in D,

i.e., if for any small positive number δ there exists a punctured neighborhood Iδ : 0 < ε ≤ εδ of ε = 0 such that for all x ∈ D

|f(x; ε) − g(x; ε)| ≤ δ η(ε) for ε ∈ Iδ ∩ I,

with δ and εδ independent of x.
These definitions not only make clear that searching for approximations close to points of interest naturally leads to an asymptotic analysis. More importantly, they show that the asymptotic behavior of functions can be studied by comparing them with known gauge functions. The simplest gauge functions are powers, for example for ε → 0+

. . . , ε⁻², ε⁻¹, 1, ε, ε², . . .

but other choices are possible. The corollary is that determining the relative order of functions is fundamental to asymptotic analysis.

1. The Order Relations

The concept of order of magnitude of functions is central in asymptotic analysis. The precise mathematical definition of the intuitive idea of same order of magnitude, smaller order of magnitude and equivalent is provided by the Bachmann-Landau symbols “O”, “o” and “∼”.

1.1. The O and ∼ symbols

The formal definition of the O-symbol used in asymptotic analysis is:

Definition. Let D ⊆ R, x0 a condensation point of D and f, g : D \ x0 → R real functions defined on D \ x0. Then f = O(g) as x → x0 in D if there exist a positive constant C and a punctured neighborhood UC(x0) of x0 such that

|f(x)| ≤ C |g(x)| for all x ∈ UC(x0) ∩ D. (1)

We say that f(x) is asymptotically bounded by g(x), or simply f(x) is "order big O" of g(x), as x → x0 in D. If the ratio f/g is defined then (1) implies that |f/g| is bounded from above by C in UC(x0) ∩ D, or, equivalently, that

lim_{x→x0} |f(x)/g(x)| < ∞.

These definitions do not inquire into the fate of f and g at x0; asymptotic analysis is concerned with the asymptotic behavior of the functions in the neighborhood of the point x0. At x0 the functions f and g may or may not be defined.
If f = O(g) as x → x0 for all x0 in D then f is bounded by g everywhere in D. In this case we say that f = O(g) in D.
The O-symbol provides a one-sided bound: the statement f = O(g) as x → x0 does not necessarily imply that f and g are of the same order of magnitude. Consider for example the functions f = x^α and g = x^β, where α and β are constants with α > β. Then x^α = O(x^β) as x → 0+ because the choice C = δ^{α−β} and UC(0) : 0 < x < δ satisfies the requirement (1). Clearly, the reverse x^β = O(x^α) as x → 0+ is not true. Notice that x^α/x^β → 0 while x^β/x^α → +∞ as x → 0+.
Two functions f and g are strictly of the same order of magnitude if f = O(g) and g = O(f) as x → x0. To stress this point one sometimes introduces the notation

f = Os(g), as x → x0

for f strictly of order g, to indicate f = O(g) and g = O(f) as x → x0. In this case lim_{x→x0} |f/g| exists and is neither zero nor infinity. We shall not emphasize the distinction and use only the symbol "O". However, the possibly one-sided nature of the O-symbol should always be kept in mind.
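The one-sided nature of the bound is easy to see numerically; a small Python sketch with f = x² and g = x as x → 0+:

```python
# x^2 = O(x) as x -> 0+: the ratio x^2/x = x stays bounded (by 1 on 0 < x < 1),
# while the reverse ratio x/x^2 = 1/x is unbounded, so x = O(x^2) fails.
xs = [10.0**(-k) for k in range(1, 8)]

forward = [x**2 / x for x in xs]   # bounded, in fact -> 0
reverse = [x / x**2 for x in xs]   # 1/x, blows up

assert max(forward) <= 1.0
assert reverse[-1] > 1e6
print(forward[-1], reverse[-1])
```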

If lim_{x→x0} (f/g) exists and is equal to 1 we say that f(x) is asymptotically equal to g(x) as x → x0, or simply f(x) "goes as" g(x) as x → x0, and write

f ∼ g, as x → x0.

For example, as x → 0

sin x ∼ x, sin(7x) = O(x), 1 − cos x = O(x²),
log(1 + x) ∼ x, e^x ∼ 1, sin(1/x) = O(1).

The assertion 1 = O(sin(1/x)) as x → 0 is not true because sin(1/x) vanishes in every neighborhood of x = 0. Similarly as x → +∞

cosh(3x) = O(e^{3x}), log(1 + 7x) ∼ log x, Σ_{n=1}^{N} an x^n = O(x^N).

In the case of functions f(x; ε), defined for x in a domain D and ε in the interval I : 0 < ε ≤ ε0, we say that

f(x; ε) = O(g(x; ε)) as ε → 0+ in D,

if for each x in D there exist a positive number Cx and a punctured neighborhood Iδ(x) : 0 < ε ≤ εδ(x) of ε = 0, such that

|f(x; ε)| ≤ Cx |g(x; ε)| for all ε ∈ Iδ(x) ∩ I. (2)

If the ratio f/g exists then from (2) it follows that

lim_{ε→0+} |f(x; ε)/g(x; ε)| < ∞, for all x ∈ D.

The limit may depend on x. If the positive constant Cx and εδ(x) do not depend on x for all x ∈ D then f(x; ε) = O(g(x; ε)) as ε → 0+ uniformly in D. For example

cos(x + ε) = O(1) as ε → 0 uniformly in R,
e^{εx} − 1 = O(ε) as ε → 0 nonuniformly in R.

A relation that is not uniform in D can be uniform in a subdomain of D. For example, the second relation above is uniform in the subdomain [0, 1] ⊂ R. As in the case of a function of a single variable, if lim_{ε→0+} (f/g) = 1 for all x ∈ D we say that f is asymptotically equal to g in D and write

f(x; ε) ∼ g(x; ε) as ε → 0+ for all x ∈ D.
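The difference between uniform and nonuniform validity can be probed numerically; a Python sketch for the example f(x; ε) = e^{εx} − 1 = O(ε) discussed above (the bound constant is ours, chosen for the illustration):

```python
import math

def ratio(x, eps):
    return abs(math.exp(eps * x) - 1.0) / eps

eps = 1e-3
# On D = [0, 1] the ratio is bounded by a constant independent of x:
# (e^{eps x} - 1)/eps <= x e^{eps x} <= e for eps <= 1 and 0 <= x <= 1.
on_unit = [ratio(x / 10.0, eps) for x in range(11)]
assert max(on_unit) < math.e

# On all of R the ratio grows without bound with x: no single C works.
assert ratio(1e4, eps) > 1e3
print(max(on_unit), ratio(1e4, eps))
```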

1.2. The o-symbol

The formal definition of the o-symbol is:

Definition. Let D ⊆ R, x0 a condensation point of D and f, g : D \ x0 → R real functions defined on D \ x0. We say f = o(g) as x → x0 in D if for every δ > 0 there exists a punctured neighborhood Uδ(x0) of x0 such that

|f(x)| ≤ δ |g(x)| for all x ∈ Uδ(x0) ∩ D. (3)

This inequality indicates that |f| becomes arbitrarily small compared to |g| as x → x0 in D. Thus, if the ratio f/g is defined, (3) implies that

lim_{x→x0} f(x)/g(x) = 0.

Notice that f = o(g) as x → x0 always implies f = O(g) as x → x0, although not in the strict sense. The converse is not true. Asymptotic equivalence f(x) ∼ g(x) as x → x0 means that f(x) = g(x) + o(g(x)) as x → x0.
If f = o(g) as x → x0 we say that f(x) is asymptotically smaller than g(x) as x → x0, or simply f(x) is "order little o" of g(x) as x → x0. Often the alternative notation f ≪ g as x → x0, or f "much smaller than" g as x → x0, is used in place of f = o(g).

For example, as x → 0

x³ = o(x), sin(7x) = o(1), cos x = o(x⁻¹), log(1 + x²) = o(x),
e^x − 1 = o(x^{−1/3}), e^{−1/x²} = o(x^n) for all n ∈ N,

while as x → +∞

x² = o(x³), log x = o(x), Σ_{n=1}^{N} an x^n = o(x^{N+1}).

Generalization to functions f(x; ε), defined for x in a domain D and ε in the interval I : 0 < ε ≤ ε0, is immediate. The statement

f(x; ε) = o(g(x; ε)) as ε → 0+ for x ∈ D,

means that for each x in D and any given δ > 0 there exists a punctured neighborhood Iδ(x) : 0 < ε ≤ εδ(x) of ε = 0, such that

|f(x; ε)| ≤ δ |g(x; ε)| for all ε in Iδ(x) ∩ I.

This inequality implies that if the ratio f/g exists then

lim_{ε→0+} f(x; ε)/g(x; ε) = 0, for all x ∈ D.

If εδ is independent of x we say that f(x; ε) = o(g(x; ε)) as ε → 0+ uniformly in D. For example

cos(x + ε) = o(ε⁻¹) as ε → 0+ uniformly in R,
e^{εx} − 1 = o(ε^{1/2}) as ε → 0+ nonuniformly in R,
e^{εx} − 1 = o(ε^{1/2}) as ε → 0+ uniformly in D : 0 ≤ x ≤ 1.
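Similarly for the o-relation; a Python sketch showing that (e^{εx} − 1)/ε^{1/2} tends to zero for each fixed x (and uniformly on [0, 1]), but not uniformly over all of R, where x may be taken as large as 1/ε:

```python
import math

def ratio(x, eps):
    return abs(math.exp(eps * x) - 1.0) / math.sqrt(eps)

# pointwise the ratio tends to 0 with eps (here at x = 1, where it ~ sqrt(eps)):
for eps in (1e-2, 1e-4, 1e-6):
    assert ratio(1.0, eps) <= 2.0 * math.sqrt(eps)

# but at x = 1/eps the ratio is (e - 1)/sqrt(eps), which blows up as eps -> 0+:
assert ratio(1e6, 1e-6) > 1e3
print(ratio(1.0, 1e-6), ratio(1e6, 1e-6))
```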

The uniform validity of the order relation is more important for the O-symbol. In asymptotic analysis the O (in the strict sense) and ∼ symbols are usually more relevant than the o-symbol, because the latter hides much information. We are more interested in knowing how a function behaves close to some point than in merely knowing that it is much smaller than some other function. For example, sin x = x + o(x²) as x → 0 tells us that sin x − x vanishes faster than x² as x → 0, while sin x = x + O(x³) as x → 0 (in the strict sense) tells us that sin x − x vanishes as x³ as x → 0.

2. Asymptotic Expansion

An asymptotic expansion describes the asymptotic behavior of a function close to a point in terms of some reference functions, the gauge functions.

The definition was introduced by Poincaré (1886) and provides a mathematical framework for the use of many divergent series. We shall call the set of functions ϕn : D \ x0 → R, n = 0, 1, . . . , an asymptotic sequence as x → x0 in D \ x0 if for each n we have

ϕ_{n+1}(x) = o(ϕn(x)) as x → x0 in D.

Some examples of asymptotic sequences are:

Example 2.1. The functions ϕn(x) = (x − x0)^n form an asymptotic sequence as x → x0.

Example 2.2. The functions ϕn(x) = x^{n/2}, ϕn(x) = (log x)^{−n} and ϕn(x) = (sin x)^n are examples of asymptotic sequences as x → 0+, while the functions ϕn(x) = x^{−n} form an asymptotic sequence as x → +∞.

Example 2.3. The sequence {log x, x², x³, . . . } forms an asymptotic sequence as x → 0+.

Example 2.4. The sequence {. . . , x⁻², x⁻¹, 1, x, x², . . . } forms an asymptotic sequence as x → 0+, while the sequence {. . . , x², x, 1, x⁻¹, x⁻², . . . } forms an asymptotic sequence as x → +∞.

The asymptotic expansion of a function f(x) is defined as:

Definition. Let the set of functions ϕn(x), n = 0, 1, 2, . . . , defined on the domain D \ x0, form an asymptotic sequence as x → x0 in D, and let an be a sequence of numbers. The formal series Σ_{n=0}^{∞} an ϕn(x) is an asymptotic expansion of f(x) as x → x0 in D with gauge functions ϕn(x) if, and only if, for any N:

f(x) − Σ_{n=0}^{N} an ϕn(x) = o(ϕN(x)) as x → x0 in D,

or, equivalently,

f(x) − Σ_{n=0}^{N−1} an ϕn(x) = O(ϕN(x)) as x → x0 in D. (4)

In this case we shall write

f(x) ∼ Σ_{n=0}^{∞} an ϕn(x), as x → x0 in D.

Asymptotic expansions of this form are also called asymptotic series.
The definition of the asymptotic expansion of a function of a single variable is not general enough to handle functions of more than one variable. For them we need the more general definition:

Definition. Let the functions ϕn(ε), n = 0, 1, 2, . . . , with ε ∈ I : 0 < ε ≤ ε0, form an asymptotic sequence as ε → 0+. We say that the sequence of functions gn(x; ε) forms a uniform asymptotic sequence of approximations to f(x; ε) in the domain x ∈ D with gauge functions ϕn(ε) as ε → 0+ if

lim_{ε→0+} [f(x; ε) − gn(x; ε)] / ϕn(ε) = 0, for all n, (5)

uniformly in D.

The boundary of the domain D where the approximation is uniformly valid may depend on ε, and also on n. Domains with ε-dependent boundaries are essential in asymptotic analysis because the gauge functions are not unique.
The definition (5) is not an asymptotic expansion of f(x; ε); to find an asymptotic expansion of f(x; ε) one constructs a sequence of functions an(x; ε) such that

gn(x; ε) = Σ_{m=0}^{n} am(x; ε), (6)

and (5) holds. The formal series Σ_{n=0}^{∞} an(x; ε) is a generalized uniform asymptotic expansion of f(x; ε) and we write

f(x; ε) ∼ Σ_{n=0}^{∞} an(x; ε) as ε → 0+, (7)

uniformly in D. The formal expression (7) is just a restatement of the definition (5) with the particular choice (6). The leading term a0(x; ε) of the series is called the asymptotic representation of f(x; ε) as ε → 0+.
Notice that nothing is said about the convergence of the series. The series (7) is just formal and generally does not converge; if it converges, it may not converge to f(x; ε). In most cases the functions an(x; ε) do not have a simple expression in terms of ϕn(ε), and finding them may be rather difficult. To overcome this problem, Poincaré introduced a definition in terms of the functions ϕn(ε).

Definition. The uniform asymptotic expansion of the function f(x; ε) in the domain x ∈ D and ε in the interval I : 0 < ε ≤ ε0, as ε → 0+, in the sense of Poincaré, or simply of Poincaré type, is an asymptotic expansion of the form

f(x; ε) ∼ Σ_{n=0}^{∞} an(x) ϕn(ε), as ε → 0+ uniformly for x ∈ D. (8)

The functions an(x) are independent of ε and play the role of coefficients of the asymptotic expansion.
Poincaré type asymptotic expansions have useful properties not shared by generalized asymptotic expansions, for example they can be added or integrated term by term; hence they are largely used in asymptotic analysis. Their form is, however, not general enough to handle all problems that can be studied with

perturbation techniques. Nevertheless, they can generally be used as a starting point to construct more suitable generalized asymptotic expansions.
Since Poincaré type expansions simply extend the definition of the asymptotic expansion of functions f(x) to functions f(x; ε), they are usually introduced using the following equivalent definition:

Definition. Let the functions ϕn(ε), n = 0, 1, 2, . . . , with ε ∈ I : 0 < ε ≤ ε0, form an asymptotic sequence as ε → 0+, and let an(x) be a sequence of functions independent of ε defined on the domain D. Then the formal series Σ_{n=0}^{∞} an(x) ϕn(ε) is a uniform asymptotic expansion of Poincaré type of f(x; ε) in the domain x ∈ D with gauge functions ϕn(ε) as ε → 0+ if, and only if, for any N:

f(x; ε) − Σ_{n=0}^{N} an(x) ϕn(ε) = o(ϕN(ε)), as ε → 0+,

or, equivalently,

f(x; ε) − Σ_{n=0}^{N−1} an(x) ϕn(ε) = O(ϕN(ε)), as ε → 0+,

uniformly for x ∈ D.

2.1. Uniqueness of Poincaré type Expansions and Subdominance

The asymptotic expansion of a function is not unique because there exist infinitely many asymptotic sequences that can be used in the expansion. However, given an asymptotic sequence of gauge functions ϕn, the Poincaré type expansion in terms of ϕn is unique. Consider for simplicity a function f(x) of a single variable; the result holds for Poincaré type expansions of functions f(x; ε) as well. From the definition (4) it follows that

lim_{x→x0} [f(x) − Σ_{n=0}^{N−1} an ϕn(x)] / ϕN(x) = aN.

Thus the coefficients an are uniquely determined by the gauge functions ϕn; if the function f(x) admits an asymptotic expansion with respect to the sequence of gauge functions ϕn(x), then the expansion is unique.
The converse is not true. For any asymptotic sequence ϕn(x) as x → x0 there always exists a function ψ(x) which is transcendentally small with respect to the sequence, i.e.,

ψ = o(ϕn), ∀n, as x → x0.

One may then add to f a sequence of transcendentally small terms, say positive powers of ψ. As a consequence an asymptotic expansion does not determine the function f uniquely. Different functions may have the same asymptotic expansion with respect to a given asymptotic sequence of gauge functions ϕn.
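The successive-limits characterization of the coefficients can be illustrated numerically. The following Python sketch recovers the first coefficients of the expansion of f(x) = 1/(1 − x) with gauge functions ϕn(x) = x^n; it uses exact rational arithmetic, a small x standing in for the limit x → 0, and rounding to the nearest integer (an implementation trick valid here only because the coefficients of this particular example are integers):

```python
from fractions import Fraction

# Recover a_N from a_N = lim_{x->0} (f(x) - sum_{n<N} a_n x^n) / x^N
# for f(x) = 1/(1 - x), whose expansion is 1 + x + x^2 + ...
def f(x):
    return 1 / (1 - x)

coeffs = []
for N in range(4):
    x = Fraction(1, 10**6)           # small x stands in for x -> 0
    partial = sum(a * x**n for n, a in enumerate(coeffs))
    estimate = (f(x) - partial) / x**N
    coeffs.append(round(estimate))   # integer coefficients in this example

print(coeffs)
assert coeffs == [1, 1, 1, 1]
```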

Example 2.5. For any constant c ≠ 0, the functions

f(x) = 1/(1 − x) and g(x) = 1/(1 − x) + c e^{−1/x},

have the same asymptotic expansion as x → 0+ in terms of the gauge functions ϕn(x) = x^n:

f(x) ∼ g(x) ∼ 1 + x + x² + . . . as x → 0+,

because e^{−1/x} is transcendentally small with respect to x^n: e^{−1/x} = o(x^n) as x → 0+ for every n ∈ N.
An asymptotic expansion establishes an equivalence relation among functions: if for all n

f(x) − g(x) = o(ϕn(x)) as x → x0,

then the functions f and g are asymptotically equivalent as x → x0 with respect to the gauge functions ϕn. The function h(x) = f(x) − g(x) is said to be subdominant to the asymptotic expansion with gauge functions ϕn as x → x0. An asymptotic expansion is, thus, asymptotic to a class of functions that differ from one another by subdominant functions transcendentally small with respect to the gauge functions ϕn as x → x0.
Notice that the concept of transcendental smallness is an asymptotic one. Transcendentally small terms might be numerically important for values of x not too close to x0, even for relatively small non-zero values of x − x0.
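Both points, the transcendental smallness of e^{−1/x} and the warning about its numerical importance away from x0, can be illustrated in Python:

```python
import math

# e^{-1/x} is transcendentally small with respect to every power x^n as x -> 0+:
# the ratio e^{-1/x} / x^n shrinks as x decreases, for each fixed n.
def ratio(x, n):
    return math.exp(-1.0 / x) / x**n

for n in (1, 5, 10):
    assert ratio(0.02, n) < ratio(0.05, n)   # decreasing toward 0

# Yet for moderate x the "transcendentally small" term is not negligible:
# at x = 0.5, e^{-1/x} = e^{-2} ~ 0.135 dwarfs x^10 ~ 0.00098.
print(math.exp(-1.0 / 0.5), 0.5**10)
assert math.exp(-1.0 / 0.5) > 0.5**10
```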

2.2. Uniformly vs Nonuniformly valid Asymptotic Expansions

In perturbation calculations one often encounters situations where the quantity to be expanded is a function f(x; ε), where ε is a (small) perturbation parameter and x is a variable independent of ε varying in a domain D. In these cases one generally constructs an asymptotic expansion of the Poincaré type (8) with some gauge functions ϕn(ε). The functions ϕn(ε) are not uniquely determined by the problem; rather, they are usually found in the course of the solution of the problem. Due to this arbitrariness different asymptotic approximations can be constructed. Only asymptotic expansions uniform in the domain D, however, lead to useful approximations to f(x; ε).
An asymptotic expansion of f(x; ε), e.g. Σ_{n=0}^{∞} an(x) ϕn(ε), is only formal. However, the partial sum of the first N terms is a well defined object for any finite N and gives an approximation to the function f(x; ε). Thus, we can write

f(x; ε) − Σ_{n=0}^{N−1} an(x) ϕn(ε) = RN(x; ε),

where RN(x; ε) is the remainder, or error, of the approximation. The asymptotic properties of the gauge functions ϕn(ε) ensure that for any fixed N and x ∈ D:

RN(x; ε) = O(ϕN(ε)), as ε → 0. (9)

The asymptotic expansion gives both an approximation to f(x; ε) as ε → 0 and an estimate of the error of the approximation. The error RN(x; ε) is in general a function of both ε and x. For a useful approximation, however, the error must not depend on the value of x; hence (9) must be valid uniformly in D. Useful asymptotic expansions of f(x; ε) are only those which are uniform in D.
Notice that since ϕn(ε) = o(ϕ_{n−1}(ε)) as ε → 0, the requirement of uniform validity implies that each coefficient an(x) of a Poincaré type expansion cannot be more singular than a_{n−1}(x) for all x in the domain D. In other words, as ε → 0 each term an(x) ϕn(ε) must be a small correction to the preceding term irrespective of the value of x.

Example 2.6. The expansion

sin(x + ε) = (1 − ε²/2 + ...) sin x + (ε − ε³/3! + ...) cos x
= sin x + ε cos x − (ε²/2) sin x − (ε³/3!) cos x + ..., as ε → 0,

is a uniform asymptotic expansion in R. The coefficients of the powers ε^n are bounded for all x ∈ R, thus an(x) is not more singular than a_{n−1}(x) and the expansion is uniform in R.

Example 2.7. Consider the function

f(x; ε) = 1/(x + ε),

defined for all x ≥ 0 and ε > 0. If x > 0, then

f(x; ε) ∼ (1/x) (1 − ε/x + ε²/x² − ...), as ε → 0+. (10)

Each term of this expansion is singular at x = 0, and more singular than the preceding one. The expansion is not uniform in the domain D : x ≥ 0. It is, nevertheless, uniform in the domain D1 : 0 < x0 ≤ x, where x0 is a positive number independent of ε. Uniformity breaks down as we get too close to x = 0, signaling that some change must occur near x = 0. Indeed for x = 0 the expansion has a completely different form:

f(0; ε) ∼ 1/ε, as ε → 0+. (11)

Generally speaking, the uniform validity of the expansion (10) breaks down when two successive terms become of the same order, i.e.,

ε/x = O(1) ⇒ x = O(ε) as ε → 0+.

If we introduce the variable ξ = x/ε and rewrite f(x; ε) in terms of ξ, a different asymptotic expansion emerges:

f(ξ; ε) ∼ (1/ε) · 1/(ξ + 1), as ε → 0+. (12)

The expansion (12) is uniform in the domain D2 : 0 ≤ ξ ≤ ξ0 < 1/ε, with ξ0 a constant independent of ε. Notice that D2 differs from D1. Nevertheless the expansion (12) matches (11) in the limit ξ → 0+ and (10) as ξ ≫ 1. We shall give a precise meaning to the vague word "matches" later, when discussing asymptotic matching.
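The breakdown of uniformity in Example 2.7 is easy to observe numerically; a Python sketch comparing the three-term outer expansion (10) with the exact function:

```python
# Example 2.7: f(x; eps) = 1/(x + eps) against the three-term
# outer expansion (1/x)(1 - eps/x + (eps/x)^2) from (10).
def f(x, eps):
    return 1.0 / (x + eps)

def outer3(x, eps):
    r = eps / x
    return (1.0 - r + r * r) / x

eps = 1e-2
# Away from x = 0 the expansion is excellent:
rel_far = abs(outer3(1.0, eps) - f(1.0, eps)) / f(1.0, eps)
assert rel_far < 1e-5

# For x = O(eps) it fails completely (here it is off by a factor of 2):
rel_near = abs(outer3(eps, eps) - f(eps, eps)) / f(eps, eps)
assert rel_near > 0.5
print(rel_far, rel_near)
```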

2.3. Asymptotic versus Convergent series

Convergence of a series is an intrinsic property of the coefficients of the series: one can indeed prove the convergence of a series without knowing the function to which it converges. Being asymptotic is, instead, a relative concept. It concerns both the coefficients of the series and the function to which the series is asymptotic. To prove that a series is asymptotic to the function f(x), both the coefficients of the series and the function f(x) must be considered. As a consequence, a convergent series need not be asymptotic, and an asymptotic series need not be convergent.
To understand the difference between convergence and asymptoticity, consider the asymptotic series Σ_{n=0}^{∞} an ϕn(x), where an are some coefficients and ϕn(x) an asymptotic sequence as x → x0 in a domain D. The series is in general only formal because it may not converge. The partial sum

SN(x) = Σ_{n=0}^{N−1} an ϕn(x)

is, nevertheless, well defined for any N. The convergence of the series is concerned with the behavior of SN(x) as N → ∞ for fixed x, whereas the asymptotic property as x → x0 is concerned with the behavior of SN(x) as x → x0 for fixed N.
A convergent series defines for each x a unique limiting sum S∞(x). However, in contrast to an asymptotic expansion, it provides no information on how rapidly it converges nor on how well SN(x) approximates S∞(x) for fixed N. An asymptotic series, on the contrary, does not define a unique limiting function f(x), but rather a class of asymptotically equivalent functions. Hence it does not provide an arbitrarily accurate approximation to f(x) for x ≠ x0. Still, the partial sums SN(x) provide good approximations to the value of f(x) for x sufficiently close to x0. The quality of the approximation, however, depends on how close x is to x0. This leads to the problem of the optimal truncation of the asymptotic expansion, i.e., of the optimal number of terms to be retained in the partial sum.² The following examples illustrate these differences.

² The optimal truncation can be affected by the presence of transcendentally small subdominant functions because they may give finite contributions.

Example 2.8. Consider the error function, or Gauss error function, defined as

erf(x) = (2/√π) ∫_0^x e^{−t²} dt. (13)

Since the Maclaurin series of e^z is absolutely convergent for all z ∈ C, expanding e^{−t²} in powers of −t² and integrating term by term, we obtain the convergent power series expansion of erf(x):

erf(x) = (2/√π) (x − x³/3 + x⁵/(2·5) − ...) = (2/√π) Σ_{n=0}^{∞} (−1)^n x^{2n+1} / ((2n + 1) n!).

The series converges for all x ∈ R because the ratio between the (n + 1)-th and the n-th term,

|u_{n+1}/u_n| = [(2n + 1)/(2n + 3)] · x²/(n + 1),

is smaller than 1 for all x ∈ R and n large enough. For moderate values of x the convergence is, however, very slow. For example, 31 terms are needed to approximate erf(3) with an accuracy of 10⁻⁵.
A better result can be obtained using an asymptotic expansion. To derive an asymptotic expansion of erf(x) we rewrite (13) as

erf(x) = 1 − (2/√π) ∫_x^∞ e^{−t²} dt
= 1 − (1/√π) ∫_{x²}^∞ s^{−1/2} e^{−s} ds.

Integrating by parts twice we get:

" 2 # 1 e−x 1 Z ∞ erf(x) = 1 − √ − s−3/2e−s ds π x 2 x2 " 2 2 # 1 e−x e−x 3 Z ∞ √ −5/2 −s = 1 − − 3 + s e ds . π x 2x 4 x2

The procedure can easily be iterated to any order N. Introducing the functions

Fn(x) = ∫_{x²}^∞ s^{−n−1/2} e^{−s} ds
= [−s^{−n−1/2} e^{−s}]_{x²}^∞ − (n + 1/2) ∫_{x²}^∞ s^{−n−3/2} e^{−s} ds
= e^{−x²}/x^{2n+1} − (n + 1/2) F_{n+1}(x),

the error function erf(x) can be written as

erf(x) = 1 − (1/√π) F0(x)
= 1 − (1/√π) [e^{−x²}/x − (1/2) F1(x)]
= 1 − (1/√π) [e^{−x²} (1/x − 1/(2x³)) + (1·3/2²) F2(x)]
= ...

= 1 − (e^{−x²}/√π) Σ_{n=0}^{N−1} (−1)^n (2n − 1)!!/(2^n x^{2n+1}) + RN(x),

where (2n − 1)!! = 1 · 3 · · · · · (2n − 1), and

RN(x) = (−1)^{N+1} (1/√π) ((2N − 1)!!/2^N) FN(x).

The series diverges for any value of x because the ratio between the (n + 1)-th and the n-th term,

|u_{n+1}/u_n| = [(2n + 1)!!/(2^{n+1} x^{2n+3})] × [2^n x^{2n+1}/(2n − 1)!!] = (2n + 1)/(2x²), (14)

is larger than 1 whenever 2n + 1 > 2x². In spite of that,

|RN(x)| = CN ∫_{x²}^∞ s^{−N−1/2} e^{−s} ds
≤ (CN/x^{2N+1}) ∫_{x²}^∞ e^{−s} ds
= CN e^{−x²}/x^{2N+1},

where CN = (2N − 1)!!/(2^N √π). Thus,

" 2 N−1 # 2 e−x X (2n − 1)!! 1  e−x  erf(x) − 1 − √ (−1)n = O , as x → ∞, π 2n x2n+1 x2N+1 n=0 and the series gives an asymptotic expansion of erf(x) for large x:

erf(x) ∼ 1 − (e^{−x²}/√π) Σ_{n=0}^{∞} (−1)^n (2n − 1)!!/(2^n x^{2n+1}), as x → ∞. (15)

Only the first two terms of the expansion (15) suffice to approximate erf(3) with an accuracy of 10⁻⁵. The series, nevertheless, does not provide an arbitrarily accurate approximation to erf(x) and we cannot improve

the approximation at will by adding more terms. From (14) it follows that the ratio between two successive terms of the series becomes larger than 1 as n ≳ x². Hence, the asymptotic series (15) has an optimal truncation at N equal to the largest integer smaller than x².
In this example we have encountered an asymptotic expansion of the form

f(x) ∼ g(x) Σ_{n=0}^{∞} an ϕn(x), as x → x0,

where the function g(x) is bounded as x → x0. In the example g(x) ∝ e^{−x²}, clearly bounded as x → ∞. This reflects the non-uniqueness of gauge functions: since g(x) is bounded as x → x0, the functions ϕn(x) and ϕ̃n(x) = g(x) ϕn(x) form two different asymptotic sequences as x → x0. This provides additional flexibility, but it should be used with care because the differences might be numerically relevant for x not too close to x0.

Example 2.9. The asymptotic expansion of the function f(x) = 1/(1 − x) as x → 0 with gauge functions ϕn(x) = x^n is:

1/(1 − x) ∼ Σ_{n=0}^{∞} x^n, as x → 0. (16)

The series with gauge functions ϕ̃n(x) = (1 + x) x^n is also an asymptotic expansion of f(x) as x → 0 because the function g(x) = 1 + x is bounded as x → 0:

1/(1 − x) ∼ Σ_{n=0}^{∞} (1 + x) x^n, as x → 0. (17)

Both series (16) and (17) provide an asymptotic expansion of f(x) = 1/(1 − x) as x → 0, but the approximations to f(x) for x close to 0 are different. The difference between the two approximations decreases as x approaches 0, but for small finite values of x it might be numerically relevant. For example, for x = 0.1 the first three terms of (16) give the approximation 1.11 to the exact value 1.11111 . . . , while the first three terms of (17) give 1.221.
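Returning to Example 2.8, the numerical claims made there can be checked directly; a Python sketch using math.erf as the reference value (the term count asserted below is only bracketed, since "term" can be counted in slightly different ways):

```python
import math

x = 3.0
exact = math.erf(x)

# Convergent Maclaurin series: count terms until the error drops below 1e-5.
term = 2.0 / math.sqrt(math.pi) * x          # n = 0 term
s, n = term, 0
while abs(s - exact) >= 1e-5:
    n += 1
    term *= -x * x * (2 * n - 1) / ((2 * n + 1) * n)   # u_n / u_{n-1}
    s += term
n_convergent = n + 1                         # number of terms used
assert 25 <= n_convergent <= 35              # around 31, as stated in the text

# Asymptotic series (15): two terms of the sum already reach that accuracy.
pre = math.exp(-x * x) / math.sqrt(math.pi)
two_terms = 1.0 - pre * (1.0 / x - 1.0 / (2.0 * x**3))
assert abs(two_terms - exact) < 1e-5
print(n_convergent, abs(two_terms - exact))
```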

2.4. Asymptotic power expansion Asymptotic expansions of the form

f(x) ∼ Σ_{n=0}^{∞} an (x − x0)^n, as x → x0,

are called asymptotic power expansions, or asymptotic power series, and are among the most common and useful asymptotic expansions.

One of the fundamental notions of real analysis is the Taylor expansion of infinitely differentiable (C∞) functions. If f(x) is a smooth, infinitely differentiable C∞ function in a neighborhood of x = x0, then the Taylor theorem ensures that

f(x) = Σ_{n=0}^{N} [f^{(n)}(x0)/n!] (x − x0)^n + o(|x − x0|^N), as x → x0.

Hence, f(x) has the asymptotic power expansion

f(x) ∼ Σ_{n=0}^{∞} [f^{(n)}(x0)/n!] (x − x0)^n as x → x0. (18)

If, and only if, f(x) is analytic at x = x0 does the asymptotic power series converge to f(x) in a neighborhood |x − x0| < r of x0 with r > 0. In this case the series gives the usual representation of the function about x0. Nevertheless, the Taylor expansion of f(x) about the point x = x0 does not provide an asymptotic expansion of f(x) as x → x1 ≠ x0, even if it converges for |x1 − x0| < r; partial sums of the Taylor expansion do not usually give good approximations to f(x) as x → x1 ≠ x0.
If f(x) is C∞ but not analytic at x = x0, the series (18) need not converge in any neighborhood of x0, or, if it does converge, its sum need not be f(x). See Example 2.5, where the function g(x) is C∞ but not analytic at x = 0 for any c ≠ 0.
For asymptotic power expansions the following theorem, due to Borel, provides, in some sense, the inverse of the Taylor theorem. The Borel theorem states that for any sequence of real numbers an there exists an infinitely differentiable C∞ function f(x) such that an = f^{(n)}(x0)/n! and

f(x) ∼ Σ_{n=0}^{∞} an (x − x0)^n, as x → x0.

The function f(x) need not be unique. Hence, unlike for convergent Taylor series, there are no restrictions on the growth rate of the coefficients an of an asymptotic power series, as shown in the following example.

Example 2.10. Consider the integral (Euler, 1754):

f(x) = ∫_0^∞ e^{−t}/(1 + xt) dt, (19)

defined for x ≥ 0. Integrating by parts we get

f(x) = − ∫_0^∞ [1/(1 + xt)] (d/dt) e^{−t} dt = 1 − x ∫_0^∞ e^{−t}/(1 + xt)² dt.

One more integration by parts gives,

f(x) = 1 − x + 2x² ∫_0^∞ e^{−t}/(1 + xt)³ dt,

and one more,

f(x) = 1 − x + 2x² − 2 · 3 x³ ∫_0^∞ e^{−t}/(1 + xt)⁴ dt.

Thus, after N integrations by parts:

f(x) = Σ_{n=0}^{N−1} (−1)^n n! x^n + RN(x),

where

RN(x) = (−1)^N N! x^N ∫_0^∞ e^{−t}/(1 + xt)^{N+1} dt.

The coefficients of the power series grow as the factorial of n, and the ratio between two successive terms, |un/u_{n−1}| = nx, becomes larger than 1 for n > 1/x. The series does not converge for any x > 0; nevertheless, for any N

|RN(x)| ≤ N! x^N ∫_0^∞ e^{−t}/(1 + xt)^N dt ≤ N! x^N ∫_0^∞ e^{−t} dt = N! x^N. (20)
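Bound (20), and the optimal truncation discussed below, can be illustrated numerically. In the following Python sketch the integral (19) is evaluated by a simple trapezoidal quadrature, an implementation choice made only for this illustration:

```python
import math

x = 0.1

# f(x) = int_0^infty e^{-t}/(1 + x t) dt by trapezoidal quadrature;
# the integrand decays like e^{-t}, so truncating at t = 60 is harmless.
n_steps, t_max = 60000, 60.0
h = t_max / n_steps
f_num = 0.0
for i in range(n_steps + 1):
    t = i * h
    w = 0.5 if i in (0, n_steps) else 1.0
    f_num += w * math.exp(-t) / (1.0 + x * t)
f_num *= h

def partial_sum(N, x):
    s, term = 0.0, 1.0
    for n in range(N):
        s += term            # term = (-1)^n n! x^n
        term *= -(n + 1) * x
    return s

errors = [abs(partial_sum(N, x) - f_num) for N in range(1, 31)]
best = 1 + errors.index(min(errors))

# The error first decreases, is smallest near N ~ 1/x = 10, and then
# the divergence takes over and the error grows again.
assert 7 <= best <= 13
assert errors[24] > 10 * min(errors)   # N = 25 is far worse than the optimum
print(best, min(errors))
```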

Thus RN(x) = o(x^{N−1}) as x → 0+, and

∫_0^∞ e^{−t}/(1 + xt) dt ∼ Σ_{n=0}^{∞} (−1)^n n! x^n, as x → 0+.

The power series is not convergent because the integrand in (19) has a nonintegrable singularity at t = −1/x and the integral is not defined for x < 0. A convergent power series about x = 0 would converge in a finite interval centred at x = 0, hence also for negative x.
Since the asymptotic power series does not converge, its partial sums do not provide an arbitrarily accurate approximation to the integral (19) for any fixed x > 0. However, from (20) it follows that for any fixed N the error is polynomial in x. What is the optimal truncation of the series, that is, the one which gives the best approximation? The ratio between two successive terms, |un/u_{n−1}| = nx, grows with n and becomes larger than 1 for n > 1/x. The best approximation occurs when N ∼ [1/x], for which RN ∼ (1/x)! x^{1/x}. Using Stirling's formula N! ∼ √(2πN) N^N e^{−N}, valid for N ≫ 1, the error of the optimal truncation reads:

RN(x) ∼ √(2π/x) e^{−1/x}, as x → 0+.

Thus, while for arbitrary N the partial sum is polynomially accurate in x, the optimally truncated sum is exponentially accurate.

2.5. Stokes Phenomenon

The asymptotic expansion as z → z0 of a complex function f(z) with an essential singularity at z = z0 is typically valid only in wedge-shaped regions

α < arg(z − z0) < β, and the function has different asymptotic expansions in different wedges. For definiteness we shall discuss the case z0 = ∞; the same scenario, however, appears for essential singularities at any z0 ∈ C. The change of the form of the asymptotic expansion across the boundaries of the wedges is called the Stokes phenomenon.
The origin of the Stokes phenomenon can be understood as follows. If the function g(z) is asymptotically equal to the function f(z) as z → ∞, this means that f(z) = g(z) + h(z) with h(z) subdominant compared with g(z) as z → ∞. As z approaches the boundary of the wedge of validity of the expansion, the subdominant h(z) grows in magnitude compared to the dominant term g(z). At the boundary of the wedge the functions h(z) and g(z) are of the same order of magnitude and the distinction between dominant and subdominant disappears. As z crosses the boundary, the functions h(z) and g(z) exchange their roles and on the other side g(z) ≪ h(z) as z → ∞. This exchange is the Stokes phenomenon, and it is typical of exponential functions.

\[
f(z) = \sinh z = \frac{e^z - e^{-z}}{2}.
\]
In the half plane Re z > 0 both sinh z and e^z grow exponentially as z → ∞, and
\[
\sinh z \sim \tfrac{1}{2}\, e^z, \quad \text{as } z \to \infty, \ \operatorname{Re} z > 0.
\]
In this region the difference h(z) = −e^{−z}/2 between sinh z and g(z) = e^z/2 is transcendentally small as z → ∞. In the half plane Re z < 0 the function h(z) = −e^{−z}/2 is no longer transcendentally small, and
\[
\sinh z \sim -\tfrac{1}{2}\, e^{-z}, \quad \text{as } z \to \infty, \ \operatorname{Re} z < 0.
\]

In the region Re z < 0 the function g(z) = e^z/2 is subdominant. The two leading asymptotic behaviors e^z/2 and −e^{−z}/2 switch from being dominant to subdominant across the line Re z = 0 (anti-Stokes line), where they are purely oscillatory and equal in magnitude. They are most unequal in magnitude on the line Im z = 0 (Stokes line), where they are purely real and either exponentially growing or exponentially decaying as z → ∞.
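The role exchange in Example 2.11 is easy to observe numerically. The following Python sketch is purely illustrative (the sample points are arbitrary choices):

```python
import cmath

def rel_error(exact, approx):
    """Relative error |exact - approx| / |exact|."""
    return abs(exact - approx) / abs(exact)

# Deep in Re z > 0 the term e^z/2 dominates and -e^{-z}/2 is negligible ...
zp = 10 + 3j
err_right = rel_error(cmath.sinh(zp), cmath.exp(zp) / 2)

# ... while deep in Re z < 0 the roles are exchanged.
zm = -10 + 3j
err_left = rel_error(cmath.sinh(zm), -cmath.exp(-zm) / 2)
print(err_right, err_left)   # both of order e^{-2|Re z|}

# On the anti-Stokes line Re z = 0 the two exponentials are purely
# oscillatory and equal in magnitude: neither dominates.
z0 = 5j
print(abs(cmath.exp(z0) / 2), abs(cmath.exp(-z0) / 2))  # 0.5 and 0.5
```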

The Stokes and anti-Stokes lines are not unique and do not really have a precise definition, because the region where the function f(z) has some asymptotic expansion is rather vague, reflecting the non-uniqueness of gauge functions. In the above example, we could equally well take as Stokes line any line Im z = a. As a general definition, the anti-Stokes lines are roughly where some term in the asymptotic expansion changes from increasing to decreasing; hence, anti-Stokes lines bound regions where the function has different asymptotic behaviors. The Stokes lines are lines along which some term approaches infinity or zero fastest.

3. Elementary Operations on Poincaré-type Asymptotic Expansions

In singular perturbation methods we usually assume some form of the expansions, substitute them into the equations, and perform elementary operations such as addition, subtraction, multiplication, exponentiation, integration and differentiation. Although the expansions are only formal and can be divergent, these operations are generally carried out without justification. Nevertheless, conditions under which these operations are justified can be given. In the following we shall discuss some elementary properties of the Poincaré-type expansion (8) of functions f(x; ε). If not specified otherwise, the gauge functions ϕ_n(ε) are generic.

3.1. Equating Coefficients. It is not strictly correct to write

\[
\sum_{n=0}^{\infty} a_n(x)\,\varphi_n(\epsilon) \sim \sum_{n=0}^{\infty} b_n(x)\,\varphi_n(\epsilon), \quad \text{as } \epsilon \to 0, \qquad (21)
\]
because asymptotic series can only be asymptotic to functions. However, what we mean by (21) is that

\[
\sum_{n=0}^{\infty} a_n(x)\,\varphi_n(\epsilon) \quad \text{and} \quad \sum_{n=0}^{\infty} b_n(x)\,\varphi_n(\epsilon)
\]
are asymptotic to functions which are asymptotically equivalent with respect to the gauge functions ϕ_n(ε) as ε → 0; that is, they differ by terms o[ϕ_n(ε)] as ε → 0 for all n. The uniqueness of Poincaré-type expansions, given the gauge functions ϕ_n(ε), then implies that the coefficients of ϕ_n(ε) in the two series must agree. Hence we can equate the coefficients of the two expansions in (21) and conclude that a_n(x) = b_n(x).

3.2. Addition and Subtraction. Addition and subtraction are in general justified, thus

\[
\alpha f(x;\epsilon) + \beta g(x;\epsilon) \sim \sum_{n=0}^{\infty} \big[\alpha\, a_n(x) + \beta\, b_n(x)\big]\,\varphi_n(\epsilon), \quad \text{as } \epsilon \to 0,
\]

where α and β are constants and a_n(x) and b_n(x) are the expansion coefficients of f and g, respectively. The proof is simple.

3.3. Multiplication and Division. Multiplication of asymptotic expansions is justified only if the result is still an asymptotic expansion. The formal product of Σ_n a_n ϕ_n and Σ_n b_n ϕ_n generates all products ϕ_n ϕ_m of gauge functions, and in general it is not possible to rearrange them into an asymptotic sequence. Multiplication is justified only for asymptotic sequences ϕ_n such that the products ϕ_n ϕ_m lead to an asymptotic

sequence, or possess asymptotic expansions. An important class of asymptotic expansions with this property are asymptotic power expansions, for which the multiplication is straightforward. Suppose f(x;ε) ∼ Σ_{n=0}^∞ a_n(x) ε^n and g(x;ε) ∼ Σ_{n=0}^∞ b_n(x) ε^n as ε → 0; then

\[
f(x;\epsilon)\, g(x;\epsilon) \sim \sum_{n=0}^{\infty} c_n(x)\,\epsilon^n, \quad \text{as } \epsilon \to 0,
\]
where
\[
c_n(x) = \sum_{m=0}^{n} a_m(x)\, b_{n-m}(x). \qquad (22)
\]

Proof. Using (22) and rearranging the sums, the remainder fg − Σ_{n=0}^{N} c_n ε^n takes the form,

\[
\begin{aligned}
fg - \sum_{n=0}^{N}\sum_{m=0}^{n} a_m b_{n-m}\,\epsilon^n
&= fg - \sum_{m=0}^{N} a_m\,\epsilon^m \sum_{n=m}^{N} b_{n-m}\,\epsilon^{n-m} \\
&= g\Big[f - \sum_{m=0}^{N} a_m\,\epsilon^m\Big]
 + \sum_{m=0}^{N} a_m\,\epsilon^m \Big[g - \sum_{p=0}^{N-m} b_p\,\epsilon^p\Big].
\end{aligned}
\]

Since as ε → 0
\[
f - \sum_{n=0}^{N} a_n\,\epsilon^n = o(\epsilon^N), \qquad
g - \sum_{n=0}^{N-m} b_n\,\epsilon^n = o(\epsilon^{N-m}),
\]
and moreover g(x;ε) − b_0(x) = O(ε), both terms in the last equality are o(ε^N) as ε → 0. Thus,

\[
f(x;\epsilon)\, g(x;\epsilon) - \sum_{n=0}^{N} c_n(x)\,\epsilon^n = o(\epsilon^N), \quad \text{as } \epsilon \to 0,
\]
concluding the proof.

The division of asymptotic expansions can be handled by treating it as the inverse of multiplication; hence division is not justified in general. In the particular case of asymptotic power expansions, assuming b_0(x) ≠ 0 and using (22), it follows that:

\[
\frac{f(x;\epsilon)}{g(x;\epsilon)} \sim \sum_{n=0}^{\infty} d_n(x)\,\epsilon^n, \quad \text{as } \epsilon \to 0,
\]

with d_0(x) = a_0(x)/b_0(x) and

\[
d_n(x) = \frac{1}{b_0(x)} \Big[ a_n(x) - \sum_{m=0}^{n-1} d_m(x)\, b_{n-m}(x) \Big], \qquad n \ge 1.
\]
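For asymptotic power expansions, formula (22) and the division recursion are simply the Cauchy product of coefficient lists and its inverse. A minimal Python sketch (the function names and the convergent test series are our illustrative choices):

```python
def multiply(a, b):
    """Cauchy product, eq. (22): c_n = Σ_{m=0}^{n} a_m b_{n-m}."""
    N = min(len(a), len(b))
    return [sum(a[m] * b[n - m] for m in range(n + 1)) for n in range(N)]

def divide(a, b):
    """Division recursion: d_0 = a_0/b_0 and, for n ≥ 1,
    d_n = (a_n - Σ_{m<n} d_m b_{n-m}) / b_0; requires b_0 ≠ 0."""
    d = [a[0] / b[0]]
    for n in range(1, min(len(a), len(b))):
        d.append((a[n] - sum(d[m] * b[n - m] for m in range(n))) / b[0])
    return d

# 1/(1-ε) has coefficients [1, 1, 1, ...]; its square 1/(1-ε)^2 has c_n = n+1.
ones = [1.0] * 5
print(multiply(ones, ones))               # [1.0, 2.0, 3.0, 4.0, 5.0]
# Division undoes the product: (1/(1-ε)^2) / (1/(1-ε)) = 1/(1-ε).
print(divide(multiply(ones, ones), ones)) # [1.0, 1.0, 1.0, 1.0, 1.0]
```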

If the expansion coefficients a_n(x) and/or b_n(x) are functions of x, the division of uniformly valid asymptotic expansions might lead to a non-uniformly valid expansion. When this occurs the division is no longer justified. Example 2.7, with f(x;ε) ∼ 1 and g(x;ε) ∼ x + ε as ε → 0⁺, illustrates this point.

3.4. Exponentiation. Exponentiation of asymptotic expansions is generally not justified. As for multiplication, exponentiation produces all products of gauge functions. Thus exponentiation can be justified only for asymptotic expansions such that the products ϕ_n ϕ_m form an asymptotic sequence, or possess asymptotic expansions. Moreover, if the coefficients of the asymptotic expansion are functions of x, the exponentiation might result in a non-uniform expansion. For instance, Example 2.7 shows that the asymptotic expansion of (x+ε)^{−1} as ε → 0⁺ is not uniformly valid for x ≥ 0.

Example 3.1. Consider the function f(x;ε) with the uniform asymptotic expansion f(x;ε) ∼ x + ε as ε → 0⁺, in the domain D : x ≥ 0. Then

\[
\sqrt{f(x;\epsilon)} \sim \sqrt{x + \epsilon} \sim \sqrt{x}\,\Big(1 + \frac{\epsilon}{2x} - \frac{\epsilon^2}{8x^2} + \dots\Big), \quad \text{as } \epsilon \to 0^+.
\]

This expansion is valid uniformly in the domain D_0 : 0 < x_0 ≤ x, where x_0 is a positive constant, but not in the domain D. The uniform validity of the asymptotic expansion breaks down when x/ε = O(1); indeed, when x = 0 the expansion has a completely different form:
\[
\sqrt{f(0;\epsilon)} \sim \sqrt{\epsilon}, \quad \text{as } \epsilon \to 0^+.
\]
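The breakdown of uniformity can be observed numerically; in this sketch the three-term truncation and the sample values of x and ε are illustrative choices:

```python
import math

def sqrt_three_terms(x, eps):
    """√x (1 + ε/(2x) - ε²/(8x²)): the first three terms of the
    expansion of √(x+ε), meaningful only when ε/x is small."""
    return math.sqrt(x) * (1 + eps / (2 * x) - eps ** 2 / (8 * x ** 2))

eps = 1e-3
# At x = 1 (x >> ε) the truncated expansion is excellent ...
err_far = abs(math.sqrt(1 + eps) - sqrt_three_terms(1.0, eps))
# ... but at x = ε the "small correction" ε/(2x) equals 1/2 and the
# truncation error is comparable to O(√ε) rather than being negligible.
err_near = abs(math.sqrt(eps + eps) - sqrt_three_terms(eps, eps))
print(err_far, err_near)
```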

3.4.1. Integration. Asymptotic expansions of Poincaré type can generally be integrated term-by-term with respect to the expansion parameter ε and/or the variable x. If f(x;ε) and the gauge functions ϕ_n(ε) are integrable functions of ε in the interval I : ε ∈ [0, ε_0], where ε_0 is constant, then term-by-term integration is justified and

\[
\int_0^{\epsilon} d\epsilon'\, f(x;\epsilon') \sim \sum_{n=0}^{\infty} a_n(x) \int_0^{\epsilon} d\epsilon'\, \varphi_n(\epsilon'), \quad \text{as } \epsilon \to 0.
\]

Proof. The proof is straightforward. Since f − Σ_{n=0}^{N} a_n ϕ_n = o(ϕ_N) as ε → 0, then:

\[
\begin{aligned}
\int_0^{\epsilon} d\epsilon'\, f(x;\epsilon') - \sum_{n=0}^{N} a_n(x) \int_0^{\epsilon} d\epsilon'\, \varphi_n(\epsilon')
&= \int_0^{\epsilon} d\epsilon'\, \Big[ f(x;\epsilon') - \sum_{n=0}^{N} a_n(x)\, \varphi_n(\epsilon') \Big] \\
&= \int_0^{\epsilon} d\epsilon'\, o\big(\varphi_N(\epsilon')\big) \\
&= o\Big( \int_0^{\epsilon} d\epsilon'\, \varphi_N(\epsilon') \Big), \quad \text{as } \epsilon \to 0^+.
\end{aligned}
\]

In the case of asymptotic power expansions

\[
f(x;\epsilon) \sim \sum_{n=0}^{\infty} a_n(x)\,\epsilon^n, \quad \text{as } \epsilon \to 0,
\]
termwise integration is always justified and leads to

\[
\int_0^{\epsilon} d\epsilon'\, f(x;\epsilon') \sim \sum_{n=0}^{\infty} \frac{a_n(x)}{n+1}\,\epsilon^{n+1}, \quad \text{as } \epsilon \to 0.
\]
On the contrary, the first two terms of the asymptotic expansion

\[
f(x;\epsilon) \sim \sum_{n=0}^{\infty} a_n(x)\,\epsilon^{-n}, \quad \text{as } \epsilon \to +\infty,
\]
are not integrable at ε = +∞, and termwise integration of the series is not justified. Nevertheless,

\[
\int_{\epsilon}^{\infty} dt\, \Big[ f(x;t) - a_0(x) - a_1(x)\, t^{-1} \Big] \sim \sum_{n=2}^{\infty} \frac{a_n(x)}{n-1}\,\epsilon^{1-n}, \quad \text{as } \epsilon \to +\infty.
\]
The proof is straightforward and is left as an exercise. The extension to asymptotic expansions with gauge functions ϕ_n(ε) = ε^{α_n} as ε → 0⁺, or ϕ_n(ε) = ε^{−α_n} as ε → +∞, with α_{n+1} > α_n, is trivial. Similarly, if f(x;ε) and a_n(x) are integrable functions of x in some interval I : x ∈ [x_0, x_1], then for any x ∈ I

\[
\int_{x_0}^{x} dx'\, f(x';\epsilon) \sim \sum_{n=0}^{\infty} \varphi_n(\epsilon) \int_{x_0}^{x} dx'\, a_n(x'), \quad \text{as } \epsilon \to 0.
\]
The proof is straightforward.
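Termwise integration can be checked on an explicit case. Below, f(ε) = 1/(1+ε) ∼ Σ (−1)^n ε^n is our illustrative choice; its termwise integral must approximate ∫₀^ε dε′/(1+ε′) = ln(1+ε) with an error of the order of the first omitted term:

```python
import math

def integrated_partial_sum(eps, N):
    """Termwise integral of Σ_{n=0}^{N} (-1)^n ε'^n over [0, ε]:
    Σ_{n=0}^{N} (-1)^n ε^{n+1}/(n+1)."""
    return sum((-1) ** n * eps ** (n + 1) / (n + 1) for n in range(N + 1))

eps = 0.1
errs = [abs(math.log(1 + eps) - integrated_partial_sum(eps, N)) for N in (1, 3, 5)]
print(errs)   # each error is of the order of the first omitted term ε^{N+2}/(N+2)
```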

3.5. Differentiation. Termwise differentiation of uniform asymptotic expansions, either with respect to the perturbation parameter ε or with respect to the variable x, might lead to non-uniformly valid asymptotic expansions. When this occurs, term-by-term differentiation of the asymptotic expansion is not justified.

Example 3.2. Consider the function f(x;ε) with the uniform asymptotic expansion
\[
f(x;\epsilon) \sim 1 + \epsilon\,\sqrt{x}, \quad \text{as } \epsilon \to 0^+,
\]
in the domain D : x ≥ 0. Then
\[
\frac{\partial}{\partial x} f(x;\epsilon) \sim \frac{\epsilon}{2\sqrt{x}}, \quad \text{as } \epsilon \to 0^+.
\]

This expansion is valid uniformly in the domain D_0 : 0 < x_0 ≤ x, where x_0 is a positive constant, but not in the domain D. In addition, subdominant terms might become relevant under differentiation, as illustrated by the following example.

Example 3.3. The functions f(x) and

\[
g(x) = f(x) + e^{-x^{-2}} \sin\!\big(e^{x^{-2}}\big),
\]
have the same asymptotic power expansion as x → 0, with gauge functions ϕ_n(x) = x^n, because e^{−x^{−2}} is transcendentally small with respect to any power x^n as x → 0 and the last term is subdominant. However, their derivatives f′(x) and
\[
g'(x) = f'(x) + \frac{2}{x^3}\, e^{-x^{-2}} \sin\!\big(e^{x^{-2}}\big) - \frac{2}{x^3} \cos\!\big(e^{x^{-2}}\big),
\]
do not have the same power expansion as x → 0, because x^{−3} is not subdominant to x^n as x → 0. To differentiate term-by-term the asymptotic expansion

\[
f(x;\epsilon) \sim \sum_{n=0}^{\infty} a_n(x)\,\varphi_n(\epsilon), \quad \text{as } \epsilon \to 0, \qquad (23)
\]
some additional conditions are needed. For instance, assume it is known that the partial derivative ∂f(x;ε)/∂ε exists and possesses an asymptotic expansion as ε → 0 with some gauge functions ψ_n(ε) and expansion coefficients a_n(x). If ∂f(x;ε)/∂ε and ψ_n(ε) are integrable functions of ε in the interval I : 0 ≤ ε ≤ ε_0, then the asymptotic expansion can be integrated term-by-term with respect to ε, leading to the asymptotic expansion (23) of f(x;ε) with ϕ_n(ε) = ∫_0^ε dε′ ψ_n(ε′). Under these conditions, termwise differentiation with respect to ε of the asymptotic expansion (23) is justified and leads to:

\[
\frac{\partial f(x;\epsilon)}{\partial \epsilon} \sim \sum_{n=0}^{\infty} a_n(x)\,\varphi_n'(\epsilon), \quad \text{as } \epsilon \to 0.
\]

Similarly, suppose it is known that ∂f(x;ε)/∂x exists and possesses an asymptotic expansion as ε → 0 with gauge functions ϕ_n(ε) and expansion coefficients b_n(x), and, moreover, that ∂f(x;ε)/∂x and b_n(x) are integrable functions of x in the interval I : x_0 ≤ x ≤ x_1. Then the asymptotic expansion can be integrated term-by-term with respect to x, obtaining the asymptotic expansion (23) of f(x;ε) with a_n(x) = ∫_{x_0}^{x} dx′ b_n(x′) for x ∈ I. Thus, if these conditions are satisfied, termwise differentiation of the asymptotic expansion (23) with respect to x is justified and leads to:
\[
\frac{\partial f(x;\epsilon)}{\partial x} \sim \sum_{n=0}^{\infty} a_n'(x)\,\varphi_n(\epsilon), \quad \text{as } \epsilon \to 0.
\]
We shall come back to this point when discussing the asymptotic expansion of the solutions of differential equations.
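Example 3.3 can be probed numerically. Here we take only the extra term (f drops out of the comparison), and its derivative is evaluated from the exact formula given in the example; the sample interval is an arbitrary choice:

```python
import math

def extra(x):
    """The subdominant term e^{-x^{-2}} sin(e^{x^{-2}})."""
    return math.exp(-x ** -2) * math.sin(math.exp(x ** -2))

def extra_prime(x):
    """Its exact derivative: (2/x^3) e^{-x^{-2}} sin(E) - (2/x^3) cos(E),
    with E = e^{x^{-2}}; the second piece is O(1/x^3), not small."""
    E = math.exp(x ** -2)
    return (2 / x ** 3) * (math.exp(-x ** -2) * math.sin(E) - math.cos(E))

xs = [0.25 + 0.01 * k for k in range(10)]
small = max(abs(extra(x)) for x in xs)        # transcendentally small
big = max(abs(extra_prime(x)) for x in xs)    # order 1/x^3: not small at all
print(small, big)
```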

4. Asymptotic Expansion of Integrals

Asymptotic expansion of integrals is a collection of techniques originally developed because many functions used in Physics and Applied Mathematics have integral representations. Nowadays, asymptotic expansion of integrals finds applications in different fields, such as Statistical Mechanics and Field Theory. We shall not discuss the asymptotic expansion of integrals in general here; rather, we discuss two examples to illustrate the use of asymptotic expansions.

4.1. Generalized Stieltjes Series and Integrals. Consider the function f(x) defined by the Stieltjes integral:

\[
f(x) = \int_0^{\infty} dt\, \frac{\rho(t)}{1+xt}, \qquad x \ge 0, \qquad (24)
\]
where ρ(t) is a generic non-negative function:

ρ(t) ≥ 0, t ≥ 0.

We have already encountered this type of integral in Example 2.10, with ρ(t) = e^{−t}. There we obtained the asymptotic power expansion of f(x) as x → 0⁺ by successive integration by parts. For a generic function ρ(t) this procedure cannot be applied. Nevertheless, an asymptotic power expansion of f(x) as x → 0⁺ can be obtained by substituting into the integrand the asymptotic power expansion

\[
\frac{1}{1+xt} \sim \sum_{n=0}^{\infty} (-xt)^n, \quad \text{as } x \to 0^+,
\]
and integrating term-by-term. Thus, if the moments
\[
a_n = \int_0^{\infty} dt\, \rho(t)\, t^n, \qquad n \ge 0, \qquad (25)
\]

exist and are finite for every n, the function f(x) as x → 0⁺ has the asymptotic expansion:
\[
f(x) \sim \sum_{n=0}^{\infty} (-1)^n a_n x^n, \quad \text{as } x \to 0^+, \qquad (26)
\]
called a Stieltjes series. The series does not converge for any function ρ(t): a convergent power series in x would converge inside a disk of finite radius centered at x = 0, but the Stieltjes integral (24) is not defined for negative x, because the singularity at t = −1/x is not integrable; thus, the Stieltjes series (26) cannot be convergent.

Proof. The proof that the series (26) is an asymptotic power expansion as x → 0⁺ is simple. By using (25) and the identity

\[
\sum_{n=0}^{N} z^n = \frac{1 - z^{N+1}}{1 - z},
\]
valid for any N, we have

\[
\begin{aligned}
f(x) - \sum_{n=0}^{N} (-1)^n a_n x^n
&= \int_0^{\infty} dt\, \rho(t) \Big[ \frac{1}{1+xt} - \sum_{n=0}^{N} (-xt)^n \Big] \\
&= \int_0^{\infty} dt\, \rho(t) \Big[ \frac{1}{1+xt} - \frac{1 - (-xt)^{N+1}}{1+xt} \Big] \\
&= \int_0^{\infty} dt\, \frac{\rho(t)}{1+xt}\, (-xt)^{N+1}.
\end{aligned}
\]
Then

\[
\Big| f(x) - \sum_{n=0}^{N} (-1)^n a_n x^n \Big| \le x^{N+1} \int_0^{\infty} dt\, \rho(t)\, t^{N+1} = a_{N+1}\, x^{N+1},
\]
and
\[
f(x) - \sum_{n=0}^{N} (-1)^n a_n x^n = o(x^N), \quad \text{as } x \to 0^+,
\]
which concludes the proof.
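Both the error bound |R_N| ≤ a_{N+1} x^{N+1} and the optimal-truncation behavior discussed in Section 2 can be verified numerically for ρ(t) = e^{−t}, for which a_n = n!. The quadrature routine and its cutoffs below are ad-hoc illustrative choices:

```python
import math

def stieltjes_f(x, upper=60.0, steps=200000):
    """Trapezoidal evaluation of ∫_0^∞ e^{-t}/(1+xt) dt; the integrand
    decays like e^{-t}, so truncating the domain at `upper` is harmless."""
    h = upper / steps
    s = 0.5 * (1.0 + math.exp(-upper) / (1.0 + x * upper))
    for k in range(1, steps):
        t = k * h
        s += math.exp(-t) / (1.0 + x * t)
    return s * h

def partial_sum(x, N):
    """Σ_{n=0}^{N} (-1)^n n! x^n  (a_n = n! for ρ(t) = e^{-t})."""
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(N + 1))

x = 0.1
f = stieltjes_f(x)
errs = [abs(f - partial_sum(x, N)) for N in range(25)]
# error bound |R_N| <= a_{N+1} x^{N+1}, checked here for N = 4
print(errs[4] <= math.factorial(5) * x ** 5)
# optimal truncation near N ~ 1/x = 10, error ~ sqrt(2*pi/x) e^{-1/x}
best = min(range(25), key=lambda N: errs[N])
print(best, errs[best], math.sqrt(2 * math.pi / x) * math.exp(-1 / x))
```

Note how the error first decreases with N, reaches its exponentially small minimum near N ≈ 1/x, and then grows again as the divergence of the series takes over.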

4.2. The Gamma Function. The gamma function Γ(ν) is defined for all complex numbers ν with positive real part as
\[
\Gamma(\nu) = \int_0^{\infty} dt\, t^{\nu-1} e^{-t}, \qquad \operatorname{Re} \nu > 0. \qquad (27)
\]
The gamma function can be extended via analytic continuation to the whole complex plane, except the non-positive integers, where it has simple poles. For simplicity we consider the case of real, positive, large ν.

By using the functional relation νΓ(ν) = Γ(1 + ν) and introducing the new variable t = ν(y + 1), Γ(ν) can be written as
\[
\Gamma(\nu) = \frac{1}{\nu}\,\Gamma(1+\nu) = \nu^{\nu} e^{-\nu} \int_{-1}^{\infty} dy\, e^{\nu h(y)},
\]
where h(y) = ln(1 + y) − y. The function h(y) has a global maximum at y = 0, see Figure 1. When ν ≫ 1

Figure 1: Function h(y) = ln(1 + y) − y.

only the neighborhood of y = 0 contributes significantly to the integral, because contributions from y away from 0 are exponentially suppressed. Then Γ(ν) can be estimated for ν ≫ 1 by restricting the integration to the neighborhood of y = 0, where h(y) attains its maximum value (Laplace's method). We thus divide the interval of integration into three regions:
\[
\int_{-1}^{\infty} dy\, e^{\nu h(y)} = \int_{-1}^{-\epsilon} dy\, e^{\nu h(y)} + \int_{-\epsilon}^{\epsilon} dy\, e^{\nu h(y)} + \int_{\epsilon}^{\infty} dy\, e^{\nu h(y)},
\]
where ε ≪ 1 is arbitrary, but sufficiently small so that in the interval −ε ≤ y ≤ ε the function h(y) can be replaced by its Maclaurin expansion:
\[
h(y) = -\frac{1}{2} y^2 + \frac{1}{3} y^3 - \frac{1}{4} y^4 + O(y^5). \qquad (28)
\]
The parameter ε may depend on ν, but for the moment it is sufficient to know that ε ≪ 1, so that (28) can be used. The functional dependence of ε on ν will be fixed later to make the whole calculation consistent. Consider the first integral:
\[
\Big| \int_{-1}^{-\epsilon} dy\, e^{\nu h(y)} \Big| \le \int_{-1}^{-\epsilon} dy \max_{-1 \le y \le -\epsilon} e^{\nu h(y)} = (1-\epsilon)\, e^{\nu h(-\epsilon)},
\]

thus
\[
\int_{-1}^{-\epsilon} dy\, e^{\nu h(y)} = O\big(e^{\nu h(-\epsilon)}\big), \quad \text{as } \nu \to +\infty.
\]
The analysis of the third integral is a bit more complex. Rewrite it as
\[
\int_{\epsilon}^{\infty} dy\, e^{\nu h(y)} = e^{\nu h(\epsilon)} \int_{\epsilon}^{\infty} dy\, e^{\nu [h(y) - h(\epsilon)]}.
\]
The first derivative of h(y), h′(y) = 1/(1 + y) − 1, is negative and monotonically increasing in modulus for y > ε; hence

\[
h(y) - h(\epsilon) \le h'(\epsilon)\,(y - \epsilon), \qquad y \ge \epsilon.
\]

Then
\[
e^{\nu h(\epsilon)} \int_{\epsilon}^{\infty} dy\, e^{\nu [h(y)-h(\epsilon)]} \le e^{\nu h(\epsilon)} \int_{\epsilon}^{\infty} dy\, e^{\nu h'(\epsilon)(y-\epsilon)} = \frac{e^{\nu h(\epsilon)}}{-\nu h'(\epsilon)},
\]
and
\[
\int_{\epsilon}^{\infty} dy\, e^{\nu h(y)} = O\big(e^{\nu h(\epsilon)}\big), \quad \text{as } \nu \to +\infty.
\]
We can use the arbitrariness of ε and take ε = ν^{−a} with 1/3 < a < 1/2. This choice ensures that νε² ≫ 1 but νε^n ≪ 1 for n ≥ 3 as ν → +∞, so that

\[
\int_{-1}^{-\epsilon} dy\, e^{\nu h(y)} = O\big(e^{-\nu\epsilon^2/2}\big), \quad \text{as } \nu \to +\infty,
\]

\[
\int_{\epsilon}^{\infty} dy\, e^{\nu h(y)} = O\big(e^{-\nu\epsilon^2/2}\big), \quad \text{as } \nu \to +\infty,
\]
and their contribution is exponentially small for ν ≫ 1. Since ε ≪ 1, in the second integral h(y) can be replaced by the expansion (28). Introducing the variable x = y√ν, this leads to:
\[
\int_{-1}^{\infty} dy\, e^{\nu h(y)} \sim \nu^{-1/2} \int_{-\epsilon\sqrt{\nu}}^{\epsilon\sqrt{\nu}} dx\, e^{-\frac{1}{2}x^2 + \frac{1}{3\sqrt{\nu}}x^3 - \frac{1}{4\nu}x^4 + O(x^5/\nu^{3/2})} + O\big(e^{-\nu\epsilon^2/2}\big). \qquad (29)
\]
The choice ε = ν^{−a} with 1/3 < a < 1/2 ensures that

\[
\frac{x^m}{\nu^{m/2-1}} \ll \frac{x^n}{\nu^{n/2-1}} \ll 1, \qquad m > n, \quad n, m = 3, 4, 5, \dots,
\]
for |x| ≤ ε√ν. The exponential in the integral in (29) is then of the form exp[−x²/2 + g_ν(x)], with g_ν(x) ≪ 1 as ν → +∞ in the whole domain of integration [−ε√ν, ε√ν]. Expanding the exponential exp[g_ν(x)] in powers of g_ν(x) we obtain:
\[
\int_{-1}^{\infty} dy\, e^{\nu h(y)} \sim \nu^{-1/2} \int_{-\epsilon\sqrt{\nu}}^{\epsilon\sqrt{\nu}} dx\, e^{-\frac{1}{2}x^2} \Big[ 1 + g_\nu(x) + \frac{1}{2}\, g_\nu^2(x) + \dots \Big] + O\big(e^{-\nu\epsilon^2/2}\big).
\]

The integration limits ±ε√ν can be safely extended to ±∞ with an error of order O(e^{−νε²/2}), exponentially small as ν → +∞. Thus, introducing the Gaussian average
\[
\langle f \rangle = \int_{-\infty}^{+\infty} \frac{dx}{\sqrt{2\pi}}\, e^{-\frac{1}{2}x^2} f(x),
\]
the asymptotic expansion of the function Γ(ν) as ν → +∞ can be written as
\[
\Gamma(\nu) \sim \nu^{\nu-1/2}\, e^{-\nu}\, \sqrt{2\pi}\, \Big[ 1 + \langle g_\nu \rangle + \frac{1}{2} \langle g_\nu^2 \rangle + \frac{1}{3!} \langle g_\nu^3 \rangle + \dots \Big],
\]
where
\[
g_\nu(x) = \frac{1}{3\sqrt{\nu}}\, x^3 - \frac{1}{4\nu}\, x^4 + O(x^5/\nu^{3/2}).
\]
The averages can be evaluated using the well-known result for the moments of the Gaussian distribution of zero mean and unit variance: ⟨x^{2n}⟩ = (2n − 1)!! and ⟨x^{2n+1}⟩ = 0. To leading order we have
\[
\langle g_\nu \rangle = -\frac{1}{4\nu} \langle x^4 \rangle + O(1/\nu^2) = -\frac{3}{4\nu} + O(1/\nu^2),
\]
\[
\langle g_\nu^2 \rangle = \frac{1}{9\nu} \langle x^6 \rangle + O(1/\nu^2) = \frac{5 \cdot 3}{9\nu} + O(1/\nu^2),
\]
and ⟨g_ν³⟩ = O(1/ν²), so that
\[
\Gamma(\nu) \sim \nu^{\nu-1/2}\, e^{-\nu}\, \sqrt{2\pi}\, \Big[ 1 + \frac{1}{12\nu} + O(1/\nu^2) \Big], \quad \text{as } \nu \to +\infty, \qquad (30)
\]
known as Stirling's formula, after James Stirling, even though it was first stated by Abraham de Moivre.

The proof that (30) is an asymptotic expansion of Γ(ν) as ν → +∞ is straightforward: it is enough to compute the next terms in the expansion and estimate the error when the series is truncated to the first N terms. The contributions of order O(e^{−νε²/2}) are transcendentally small with respect to any power ν^{−n} as ν → +∞ and can be ignored.

The asymptotic expansion (30) of Γ(ν) has been derived using the integral representation (27) and Laplace's method; however, (27) is not the only possible definition of Γ(ν). For example, an asymptotic expansion of Γ(ν) as ν → +∞ can also be obtained from the integral representation
\[
\ln \Gamma(\nu) = \Big(\nu - \frac{1}{2}\Big) \ln \nu - \nu + \ln \sqrt{2\pi} + 2 \int_0^{\infty} dt\, \frac{\operatorname{arctg}(t/\nu)}{e^{2\pi t} - 1}.
\]
Performing the change of variable t = νy, one sees that as ν → +∞ only values of y up to y = O(1/ν) contribute significantly to the integral; the others give just a correction O(e^{−2πν}) to the value of the integral. Then, since t/ν ≪ 1 in the significant region, we can replace arctg(x) by the asymptotic expansion (Taylor series)

\[
\operatorname{arctg}(x) \sim \sum_{n=1}^{\infty} (-1)^{n-1}\, \frac{x^{2n-1}}{2n-1}, \qquad x \to 0,
\]

and integrate termwise the resulting asymptotic expansion:

\[
\int_0^{\infty} dt\, \frac{\operatorname{arctg}(t/\nu)}{e^{2\pi t} - 1} \sim \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{(2n-1)\,\nu^{2n-1}} \int_0^{\infty} dt\, \frac{t^{2n-1}}{e^{2\pi t} - 1}. \qquad (31)
\]

We have neglected the O(e^{−2πν}) terms because they are transcendentally small with respect to any power ν^{−n} as ν → +∞. Termwise integration is justified because the coefficients of the expansion are integrable functions of t. The integral in (31) is equal to
\[
\int_0^{\infty} dt\, \frac{t^{2n-1}}{e^{2\pi t} - 1} = \frac{(-1)^{n-1}}{4n}\, B_{2n}, \qquad (32)
\]

where Bn are the Bernoulli numbers given by the coefficients of the expansion

\[
\frac{z}{e^z - 1} = \sum_{n=0}^{\infty} B_n\, \frac{z^n}{n!}, \qquad 0 < |z| < 2\pi.
\]
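The Bernoulli numbers entering (32) and (33) can be generated in exact arithmetic with the standard recurrence obtained by multiplying the generating function by e^z − 1 and equating coefficients; a short sketch:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(nmax):
    """B_0 = 1 and, for n >= 1, Σ_{k=0}^{n} C(n+1, k) B_k = 0, i.e.
    B_n = -(Σ_{k=0}^{n-1} C(n+1, k) B_k) / (n+1); this recurrence follows
    from multiplying z/(e^z - 1) = Σ B_n z^n/n! by e^z - 1."""
    B = [Fraction(1)]
    for n in range(1, nmax + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

B = bernoulli_numbers(6)
print(B[2], B[4], B[6])   # 1/6 -1/30 1/42
```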

Substituting (32) into (31), we have

\[
\ln \Gamma(\nu) \sim \Big(\nu - \frac{1}{2}\Big) \ln \nu - \nu + \ln \sqrt{2\pi} + \sum_{n=1}^{\infty} \frac{B_{2n}}{2n(2n-1)\,\nu^{2n-1}}, \quad \text{as } \nu \to +\infty. \qquad (33)
\]

The first terms of the series, using B_2 = 1/6, B_4 = −1/30, B_6 = 1/42, are
\[
\ln \Gamma(\nu) \sim \Big(\nu - \frac{1}{2}\Big) \ln \nu - \nu + \ln \sqrt{2\pi} + \frac{1}{12\nu} - \frac{1}{360\nu^3} + \frac{1}{1260\nu^5} + O(\nu^{-7}), \quad \text{as } \nu \to +\infty.
\]
The series (33) is not convergent, but when truncated it gives an approximation of ln Γ(ν) as ν → +∞ with an error of the order of the first omitted term. The asymptotic expansion (33) differs from the expansion (30) derived using the integral representation (27); nevertheless, they are asymptotically equivalent as ν → +∞. From (33) we have

\[
\Gamma(\nu) \sim \nu^{\nu-1/2}\, e^{-\nu}\, \sqrt{2\pi}\, \exp\Big( \frac{1}{12\nu} + O(\nu^{-3}) \Big), \qquad (34)
\]

which differs from (30) by terms O(1/ν²) as ν → ∞, because

\[
\exp\Big( \frac{1}{12\nu} + O(\nu^{-3}) \Big) - 1 - \frac{1}{12\nu} = O(1/\nu^2), \quad \text{as } \nu \to +\infty.
\]

For finite values of ν, however, they give different approximations. For example, the approximation (30) gives Γ(5) = 23.99..., Γ(6) = 119.99... and Γ(7) = 719.95..., with an error O(1/ν²) = O(10⁻²). By contrast, the approximation (34) gives Γ(5) = 24.000..., Γ(6) = 120.001... and Γ(7) = 720.005..., with a smaller error O(1/ν³) = O(10⁻³).
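This numerical comparison is easy to reproduce; for integer ν, Γ(ν) = (ν − 1)! provides the exact value:

```python
import math

def gamma30(nu):
    """Truncated expansion (30): sqrt(2*pi) nu^(nu-1/2) e^(-nu) (1 + 1/(12 nu))."""
    return math.sqrt(2 * math.pi) * nu ** (nu - 0.5) * math.exp(-nu) * (1 + 1 / (12 * nu))

def gamma34(nu):
    """Expansion (34): the 1/(12 nu) correction kept inside the exponential."""
    return math.sqrt(2 * math.pi) * nu ** (nu - 0.5) * math.exp(-nu) * math.exp(1 / (12 * nu))

for nu in (5, 6, 7):
    exact = math.factorial(nu - 1)   # Γ(ν) = (ν-1)! for integer ν
    print(nu, exact, round(gamma30(nu), 3), round(gamma34(nu), 3))
```

The exponentiated form (34) happens to reproduce the 1/(288ν²) term of the Stirling series exactly, which is why its error is one order smaller.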
