1. Gamma Function

We will prove that the improper integral
$$\Gamma(x) = \int_0^\infty e^{-t} t^{x-1}\,dt$$
exists for every $x > 0$. The function $\Gamma(x)$ is called the Gamma function. Let us recall the comparison test for improper integrals.

Theorem 1.1 (Comparison Test for Improper Integrals of Type I). Let $f(x), g(x)$ be two continuous functions on $[a, \infty)$ such that $0 \le f(x) \le g(x)$ for all $x \ge a$. Then
(1) If $\int_a^\infty g(x)\,dx$ is convergent, so is $\int_a^\infty f(x)\,dx$.
(2) If $\int_a^\infty f(x)\,dx$ is divergent to infinity, so is $\int_a^\infty g(x)\,dx$.

Theorem 1.2 (Limit Comparison Test). Let $f(x), g(x)$ be two nonnegative continuous functions on $[a, \infty)$. Suppose that
$$\lim_{x\to\infty} \frac{f(x)}{g(x)} = L \quad \text{with } L > 0.$$
Then $\int_a^\infty g(x)\,dx$ and $\int_a^\infty f(x)\,dx$ are either both convergent or both divergent.

Lemma 1.1. For every $s > 0$, the improper integral $\int_0^\infty e^{-st}\,dt$ converges.

Proof. Let us compute
$$\int_0^N e^{-st}\,dt = \frac{e^{-sN} - 1}{-s}.$$
By definition,
$$\int_0^\infty e^{-st}\,dt = \lim_{N\to\infty} \int_0^N e^{-st}\,dt = \lim_{N\to\infty} \frac{e^{-sN} - 1}{-s} = \frac{1}{s}.$$
The limit exists and hence the improper integral converges. ∎

Now let us study the case when $x \ge 1$.
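The computation in Lemma 1.1 can be illustrated numerically (a sketch of ours, outside the notes' argument; the helper name `truncated_integral` is our own): a midpoint-rule approximation of $\int_0^N e^{-st}\,dt$ approaches $1/s$ as $N$ grows.

```python
import math

def truncated_integral(s, N, n=100_000):
    """Midpoint-rule approximation of the integral of e^(-s*t) over [0, N]."""
    h = N / n
    return sum(math.exp(-s * (i + 0.5) * h) for i in range(n)) * h

s = 3.0
approx = truncated_integral(s, N=20.0)
print(approx, 1 / s)  # the two values agree closely, as Lemma 1.1 predicts
```

Taking a larger $N$ only shrinks the (already tiny) tail $e^{-sN}/s$.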

Lemma 1.2. Let $n$ be a positive integer. Then
$$\lim_{t\to\infty} \frac{t^{n-1}}{e^{\frac{1}{2}t}} = 0.$$

Proof. By L'Hôpital's rule,
$$\lim_{t\to\infty} \frac{t^{n-1}}{e^{\frac{1}{2}t}} = \lim_{t\to\infty} \frac{(n-1)t^{n-2}}{\frac{1}{2}e^{\frac{1}{2}t}}.$$
Since $t^{n-1}$ is a polynomial of degree $n-1$, we know
$$\frac{d^n}{dt^n} t^{n-1} = 0.$$
Inductively, we find
$$\lim_{t\to\infty} \frac{t^{n-1}}{e^{\frac{1}{2}t}} = \lim_{t\to\infty} \frac{0}{\frac{1}{2^n}e^{\frac{1}{2}t}} = 0. \qquad ∎$$

By the definition of limit, choosing $\varepsilon = 1$, there exists $M > 0$ such that for all $t \ge M$,
$$\frac{t^{n-1}}{e^{\frac{1}{2}t}} < \varepsilon = 1.$$

Hence for $t \ge M$, $0 \le t^{n-1} < e^{\frac{1}{2}t}$. This implies that for $t \ge M$,

$$(1.1)\qquad 0 \le e^{-t} t^{n-1} \le e^{-t} \cdot e^{\frac{1}{2}t} = e^{-\frac{1}{2}t}.$$
Lemma 1.1 implies that $\int_0^\infty e^{-\frac{1}{2}t}\,dt$ is convergent. By (1.1) and the comparison test, $\int_0^\infty e^{-t} t^{n-1}\,dt$ is convergent for every $n \in \mathbb{N}$.

Let $x \ge 1$ be any real number. Let $[x]$ be the largest integer so that $[x] \le x < [x]+1$. Then for $t \ge 1$,
$$(1.2)\qquad 0 \le e^{-t} t^{x-1} \le e^{-t} t^{[x]}.$$
Since $\int_0^\infty e^{-t} t^{[x]}\,dt$ is convergent, by the comparison test and (1.2), $\int_1^\infty e^{-t} t^{x-1}\,dt$ is convergent; on $[0,1]$ the integrand is continuous (since $x \ge 1$), hence $\int_0^\infty e^{-t} t^{x-1}\,dt$ is convergent.

Now let us study the case when $0 < x < 1$. For $t \ge 1$ we have $t^{-1} \le t^{x-1} \le t$, so
$$\frac{1}{t\,e^{\frac{1}{2}t}} \le \frac{t^{x-1}}{e^{\frac{1}{2}t}} \le \frac{t}{e^{\frac{1}{2}t}}.$$
We know that
$$\lim_{t\to\infty} \frac{t}{e^{\frac{1}{2}t}} = \lim_{t\to\infty} \frac{1}{t\,e^{\frac{1}{2}t}} = 0.$$
By the sandwich principle,
$$\lim_{t\to\infty} \frac{t^{x-1}}{e^{\frac{1}{2}t}} = 0$$
for $0 < x < 1$. (Of course, this statement is true for all $x > 0$.) This implies that

$$0 \le e^{-t} t^{x-1} \le e^{-\frac{1}{2}t}, \qquad t \ge 1.$$
By the comparison test, $\int_1^\infty e^{-t} t^{x-1}\,dt$ is convergent for $0 < x < 1$. Now, we need to verify that
$$\int_0^1 e^{-t} t^{x-1}\,dt = \int_0^1 \frac{e^{-t}}{t^{1-x}}\,dt$$
is convergent. Notice that $\lim_{t\to 0^+} \frac{e^{-t}}{t^{1-x}} = \infty$. Hence the integral $\int_0^1 e^{-t} t^{x-1}\,dt$ is a type II improper integral.

Theorem 1.3 (Comparison Test for Improper Integrals of Type II). Let $f, g$ be continuous functions on $(a, b]$ with $0 \le f(x) \le g(x)$ for all $a < x \le b$. Then
(1) If $\int_a^b g(x)\,dx$ is convergent, so is $\int_a^b f(x)\,dx$.
(2) If $\int_a^b f(x)\,dx$ is divergent to infinity, so is $\int_a^b g(x)\,dx$.

Note that $0 < e^{-t} t^{x-1} \le t^{x-1}$ for $0 < t \le 1$. We know that
$$\int_0^1 t^{x-1}\,dt = \lim_{b\to 0^+} \int_b^1 t^{x-1}\,dt = \lim_{b\to 0^+} \left.\frac{t^x}{x}\right|_b^1 = \lim_{b\to 0^+} \left( \frac{1}{x} - \frac{b^x}{x} \right) = \frac{1}{x}.$$
By the comparison test, $\int_0^1 e^{-t} t^{x-1}\,dt$ is convergent.

Theorem 1.4. For every $x > 0$, the improper integral
$$\Gamma(x) = \int_0^\infty e^{-t} t^{x-1}\,dt$$
is convergent.

Proposition 1.1. For $x > 0$, $\Gamma(x+1) = x\Gamma(x)$.

Proof. For every $N > 0$, using integration by parts,
$$\int_0^N e^{-t} t^x\,dt = \left. -t^x e^{-t} \right|_0^N + x \int_0^N e^{-t} t^{x-1}\,dt = -N^x e^{-N} + x \int_0^N e^{-t} t^{x-1}\,dt.$$
We have seen that $\lim_{N\to\infty} N^x e^{-N} = \lim_{N\to\infty} \frac{N^x}{e^N} = 0$ for all $x > 0$. Then
$$\Gamma(x+1) = \lim_{N\to\infty} \int_0^N e^{-t} t^x\,dt = \lim_{N\to\infty} \left( -N^x e^{-N} + x \int_0^N e^{-t} t^{x-1}\,dt \right) = x \lim_{N\to\infty} \int_0^N e^{-t} t^{x-1}\,dt = x\Gamma(x). \qquad ∎$$

Corollary 1.1. For every integer $n \ge 0$, $\Gamma(n+1) = n!$.

Proof. We know $\int_0^\infty e^{-t}\,dt = 1$. Hence $\Gamma(1) = 1$. We can prove the formula by induction. Assume that the statement is true for some nonnegative integer $k$, i.e. $\Gamma(k+1) = k!$. By the previous proposition,
$$\Gamma(k+2) = \Gamma((k+1)+1) = (k+1)\Gamma(k+1) = (k+1) \cdot k! = (k+1)!. \qquad ∎$$
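Both facts are easy to sanity-check with Python's standard library, whose `math.gamma` implements the same Gamma function (a numerical illustration of ours, not part of the notes):

```python
import math

# Proposition 1.1: the functional equation Γ(x + 1) = x·Γ(x) at sample points
for x in [0.5, 1.3, 4.2]:
    assert math.isclose(math.gamma(x + 1), x * math.gamma(x))

# Corollary 1.1: Γ(n + 1) = n! for nonnegative integers n
for n in range(10):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

print("ok")
```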

Proposition 1.2. For any $s > 0$, $x > 0$, we have
$$\int_0^\infty e^{-st} t^{x-1}\,dt = \frac{\Gamma(x)}{s^x}.$$

Proof. This formula can be proved by using the substitution $u = st$. ∎

Example 1.1.
$$\int_0^\infty e^{-st} (2 - 3t + 5t^2)\,dt = \frac{2}{s} - \frac{3}{s^2} + \frac{10}{s^3}.$$
This can be proved by the previous proposition.
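Proposition 1.2 transforms any polynomial term by term: $t^k$ corresponds to $x = k+1$ and contributes $k!/s^{k+1}$. A small sketch (the helper `laplace_poly` is our own name) reproduces Example 1.1:

```python
import math

def laplace_poly(coeffs, s):
    """Transform sum_k c_k t^k term by term, using
    ∫_0^∞ e^(-st) t^(x-1) dt = Γ(x)/s^x with x = k + 1."""
    return sum(c * math.factorial(k) / s ** (k + 1) for k, c in enumerate(coeffs))

s = 2.0
lhs = laplace_poly([2, -3, 5], s)      # transform of 2 - 3t + 5t^2
rhs = 2 / s - 3 / s**2 + 10 / s**3     # closed form from Example 1.1
print(lhs, rhs)  # both equal 1.5 at s = 2
```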

Here comes a question: if $f(t)$ is a continuous function on $[0, \infty)$ such that
$$\int_0^\infty e^{-st} f(t)\,dt = \frac{2}{s} - \frac{3}{s^2} + \frac{10}{s^3},$$
what can you say about $f(t)$? Let us introduce the notion of the Laplace transformation.

2. Laplace Transformation, its Inverse Transformation, and Linear Homogeneous O.D.E. of Constant Coefficients

Definition 2.1. Let $f(t)$ be a continuous function on $[0, \infty)$. Suppose $\int_0^\infty e^{-st} f(t)\,dt$ converges for $s \ge a$ for some $a \in \mathbb{R}$. Denote
$$L(f)(s) = \int_0^\infty e^{-st} f(t)\,dt,$$
called the Laplace transform of $f$.

Now, let us assume that the Laplace transforms of $f_1, f_2$ exist on some domain $D \subset \mathbb{R}$, where $f_1, f_2$ are continuous functions on $[0, \infty)$. For any real numbers $a_1, a_2$ and any $s \in D$,
$$\begin{aligned}
L(a_1 f_1 + a_2 f_2)(s) &= \int_0^\infty (a_1 f_1(t) + a_2 f_2(t)) e^{-st}\,dt \\
&= \lim_{M\to\infty} \int_0^M (a_1 f_1(t) + a_2 f_2(t)) e^{-st}\,dt \\
&= \lim_{M\to\infty} \left( a_1 \int_0^M f_1(t) e^{-st}\,dt + a_2 \int_0^M f_2(t) e^{-st}\,dt \right) \\
&= a_1 \lim_{M\to\infty} \int_0^M f_1(t) e^{-st}\,dt + a_2 \lim_{M\to\infty} \int_0^M f_2(t) e^{-st}\,dt \\
&= a_1 L(f_1)(s) + a_2 L(f_2)(s).
\end{aligned}$$

Proposition 2.1. Suppose that $f_1, f_2$ are two continuous functions on $[0, \infty)$ such that their Laplace transforms exist on some domain $D \subset \mathbb{R}$. Then

$$L(a_1 f_1 + a_2 f_2)(s) = a_1 L(f_1)(s) + a_2 L(f_2)(s), \qquad s \in D,$$
for any $a_1, a_2 \in \mathbb{R}$. (We say that the Laplace transform is a linear transformation.)
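Linearity can also be observed numerically. Below is a rough sketch of ours (the truncation point `T` and the helper name `laplace_num` are our choices, not from the notes) comparing $L(2f_1 + 3f_2)$ with $2L(f_1) + 3L(f_2)$ for $f_1 = \cos$, $f_2 = \sin$:

```python
import math

def laplace_num(f, s, T=60.0, n=200_000):
    """Truncated midpoint-rule approximation of L(f)(s) = ∫_0^∞ e^(-st) f(t) dt."""
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

s = 1.5
combo = laplace_num(lambda t: 2 * math.cos(t) + 3 * math.sin(t), s)
split = 2 * laplace_num(math.cos, s) + 3 * laplace_num(math.sin, s)
print(combo, split)  # agree up to quadrature error
```

Both values also match the closed form $2s/(s^2+1) + 3/(s^2+1)$ that appears later in Example 2.2.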

Theorem 2.1. Suppose $f_1, f_2$ are two continuous functions on $[0, \infty)$ such that their Laplace transforms exist. If $L(f_1) = L(f_2)$, then $f_1 = f_2$. Hence if $g(s) = L(f)(s)$, we denote $f(t) = L^{-1}(g)(t)$, called the inverse Laplace transform.

Proposition 2.2. Suppose $f(t)$ is a smooth function on $[0, \infty)$ ($f^{(k)}(t)$ exists for all $k \ge 1$; set $f^{(0)}(t) = f(t)$). Assume that $\lim_{t\to\infty} f^{(k)}(t) e^{-st} = 0$ for all $k \ge 0$. Then
$$L(f')(s) = -f(0) + sL(f)(s).$$

Proof. Using integration by parts,
$$\int_0^N e^{-st} f'(t)\,dt = \left. f(t) e^{-st} \right|_0^N + s \int_0^N e^{-st} f(t)\,dt = f(N) e^{-sN} - f(0) + s \int_0^N e^{-st} f(t)\,dt.$$

By assumption, $\lim_{N\to\infty} f(N) e^{-sN} = 0$ (consider $k = 0$). Hence
$$\begin{aligned}
\int_0^\infty e^{-st} f'(t)\,dt &= \lim_{N\to\infty} \int_0^N e^{-st} f'(t)\,dt \\
&= \lim_{N\to\infty} \left( f(N) e^{-sN} - f(0) + s \int_0^N e^{-st} f(t)\,dt \right) \\
&= -f(0) + s \lim_{N\to\infty} \int_0^N e^{-st} f(t)\,dt \\
&= -f(0) + s \int_0^\infty e^{-st} f(t)\,dt. \qquad ∎
\end{aligned}$$

Corollary 2.1. Under the same assumption as above, with $F(s) = L(f)(s)$, we have
$$L(f^{(n)})(s) = s^n F(s) - \sum_{k=0}^{n-1} s^{n-k-1} f^{(k)}(0).$$
When $n = 2$, we have $L(f'')(s) = s^2 F(s) - sf(0) - f'(0)$.

Now let us use the Laplace transform to solve ordinary differential equations with constant coefficients.

Example 2.1. Solve $y' + 2y = 0$ with initial condition $y(0) = 1$. Taking the Laplace transform, we obtain
$$L(y' + 2y)(s) = L(y')(s) + 2L(y)(s) = 0.$$
Let $g(s) = L(y)(s)$. Using the previous proposition,
$$L(y')(s) = -y(0) + sL(y)(s) = -1 + sg(s).$$
Hence $(-1 + sg(s)) + 2g(s) = 0$. This implies that $g(s) = \frac{1}{s+2}$. Then
$$y = L^{-1}(g)(t) = e^{-2t}.$$

Example 2.2. Solve $y'' + y = 0$ with initial conditions $y(0) = a$ and $y'(0) = b$. Let $g(s)$ be the Laplace transform of $y$. Then $L(y'')(s) + g(s) = 0$. On the other hand,
$$L(y'')(s) = s^2 g(s) - sy(0) - y'(0) = s^2 g(s) - as - b.$$
This implies that
$$(s^2 + 1) g(s) = as + b \implies g(s) = a\,\frac{s}{s^2+1} + b\,\frac{1}{s^2+1}.$$
We know $\frac{s}{s^2+1} = L(\cos t)(s)$ and $\frac{1}{s^2+1} = L(\sin t)(s)$. Hence $y(t) = a\cos t + b\sin t$.

In general, we can solve the linear homogeneous differential equation of constant coefficients through the Laplace transformation. A linear homogeneous differential equation of constant coefficients is a differential equation
$$(2.1)\qquad a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0,$$
where $a_0, \cdots, a_n \in \mathbb{R}$. If $a_n \ne 0$, we say that the equation is of order $n$. Let us study the case when $n = 2$:
$$(2.2)\qquad ay'' + by' + cy = 0.$$

Here $a \ne 0$. Taking the Laplace transformation of the equation, we have
$$a(s^2 F(s) - sy(0) - y'(0)) + b(sF(s) - y(0)) + cF(s) = 0,$$
where $F(s) = L(y)(s)$. Therefore $F(s)$ is a rational function in $s$, given by
$$F(s) = \frac{As + B}{as^2 + bs + c},$$
where $A = ay(0)$ and $B = ay'(0) + by(0)$. We can use partial fraction expansion to decompose $F(s)$.

Definition 2.2. The polynomial $\chi(s) = as^2 + bs + c$ is called the characteristic polynomial of (2.2).

We assume that $a = 1$ and let $D = b^2 - 4c$ be the discriminant of the characteristic polynomial.

Case 1: If $D > 0$, $\chi(s)$ has two distinct real roots, denoted $\lambda_1$ and $\lambda_2$. Thus we may write
$$F(s) = C_1 \frac{1}{s - \lambda_1} + C_2 \frac{1}{s - \lambda_2}$$
for some $C_1, C_2 \in \mathbb{R}$. We know that
$$L(e^{at})(s) = \frac{1}{s - a}.$$
Using the inverse transform and the linearity of $L^{-1}$, we see that
$$y(t) = L^{-1}(F)(t) = C_1 L^{-1}\left( \frac{1}{s - \lambda_1} \right) + C_2 L^{-1}\left( \frac{1}{s - \lambda_2} \right) = C_1 e^{\lambda_1 t} + C_2 e^{\lambda_2 t}.$$
Hence $y$ is a linear combination of $e^{\lambda_1 t}$ and $e^{\lambda_2 t}$.

Case 2: If $D = 0$, $\chi(s)$ has one repeated root, called $\lambda$. Then we write
$$F(s) = C_1 \frac{1}{s - \lambda} + C_2 \frac{1}{(s - \lambda)^2}.$$
Recall that
$$L(t^n e^{\lambda t})(s) = \int_0^\infty e^{-st} t^n e^{\lambda t}\,dt = \frac{\Gamma(n+1)}{(s - \lambda)^{n+1}} = \frac{n!}{(s - \lambda)^{n+1}}.$$
Thus we obtain
$$y(t) = L^{-1}(F)(t) = C_1 L^{-1}\left( \frac{1}{s - \lambda} \right) + C_2 L^{-1}\left( \frac{1}{(s - \lambda)^2} \right) = C_1 e^{\lambda t} + C_2 t e^{\lambda t} = (C_1 + C_2 t) e^{\lambda t}.$$

Case 3: If $D < 0$, by completing the square, we write $\chi(s) = (s - \alpha)^2 + \beta^2$. We write
$$F(s) = \frac{As + B}{(s - \alpha)^2 + \beta^2} = C_1 \frac{s - \alpha}{(s - \alpha)^2 + \beta^2} + C_2 \frac{\beta}{(s - \alpha)^2 + \beta^2}.$$

Notice that
$$L(e^{\alpha t} \cos\beta t)(s) = \frac{s - \alpha}{(s - \alpha)^2 + \beta^2}, \qquad L(e^{\alpha t} \sin\beta t)(s) = \frac{\beta}{(s - \alpha)^2 + \beta^2}.$$
Thus
$$y(t) = L^{-1}(F)(t) = C_1 L^{-1}\left( \frac{s - \alpha}{(s - \alpha)^2 + \beta^2} \right) + C_2 L^{-1}\left( \frac{\beta}{(s - \alpha)^2 + \beta^2} \right) = C_1 e^{\alpha t} \cos\beta t + C_2 e^{\alpha t} \sin\beta t = e^{\alpha t}(C_1 \cos\beta t + C_2 \sin\beta t).$$

Remark. Let $V$ be the subset of $C^2[0, \infty)$ consisting of $y$ such that (2.2) holds. The above observation implies that $V$ is in fact a two dimensional real vector (sub)space (of $C^2[0, \infty)$).
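As an independent sanity check on the three cases and on Examples 2.1 and 2.2 (a numerical sketch of ours, not part of the notes), central finite differences confirm that the formulas above satisfy their equations; the sample parameters below are our own choices.

```python
import math

h = 1e-5

def d1(f, t):
    """Central-difference first derivative."""
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t):
    """Central-difference second derivative."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

y1 = lambda t: math.exp(-2 * t)               # Example 2.1: y' + 2y = 0
y2 = lambda t: 2 * math.cos(t) - math.sin(t)  # Example 2.2: y'' + y = 0 (a=2, b=-1)
y3 = lambda t: math.exp(t) * (math.cos(2 * t) + math.sin(2 * t))  # case 3: alpha=1, beta=2

for t in [0.3, 1.0, 2.5]:
    assert abs(d1(y1, t) + 2 * y1(t)) < 1e-5                  # y1' + 2 y1 = 0
    assert abs(d2(y2, t) + y2(t)) < 1e-3                      # y2'' + y2 = 0
    assert abs(d2(y3, t) - 2 * d1(y3, t) + 5 * y3(t)) < 1e-3  # chi(s) = (s-1)^2 + 4
print("ok")
```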

This method can be applied to (2.1) for any $n \in \mathbb{N}$.

Definition 2.3. The characteristic polynomial of (2.1) is defined to be
$$\chi(s) = a_n s^n + \cdots + a_1 s + a_0.$$

Taking the Laplace transformation of (2.1) and using Corollary 2.1, we find that
$$F(s) = \frac{P(s)}{\chi(s)}$$
for some polynomial $P(s) \in \mathbb{R}[s]$. Thus $F(s)$ is a rational function and we can solve for $y$ using the partial fraction decomposition of $F(s)$. Thus we reduce our problem to the cases when $\chi(s) = (s-a)^n$ or $\chi(s) = ((s-\alpha)^2 + \beta^2)^m$.

When $\chi(s) = (s-a)^n$, we write
$$F(s) = \frac{A_1}{s-a} + \frac{A_2}{(s-a)^2} + \cdots + \frac{A_n}{(s-a)^n} = C_0 \frac{1}{s-a} + C_1 \frac{1!}{(s-a)^2} + \cdots + C_{n-1} \frac{(n-1)!}{(s-a)^n},$$
where $C_{i-1} = A_i/(i-1)!$ for $i = 1, \cdots, n$. By taking the inverse transform, we obtain
$$y(t) = (C_0 + C_1 t + \cdots + C_{n-1} t^{n-1}) e^{at}.$$
When $\chi(s) = ((s-\alpha)^2 + \beta^2)^m$, we write
$$F(s) = \frac{A_1 s + B_1}{(s-\alpha)^2 + \beta^2} + \cdots + \frac{A_m s + B_m}{((s-\alpha)^2 + \beta^2)^m} = \frac{C_1(s-\alpha) + D_1\beta}{(s-\alpha)^2 + \beta^2} + \cdots + \frac{C_m(s-\alpha) + D_m\beta}{((s-\alpha)^2 + \beta^2)^m}.$$

Here $C_i, D_j$ are constants. By finding the inverse transform of $(C_i(s-\alpha) + D_i\beta)/((s-\alpha)^2 + \beta^2)^i$, we obtain $y$.
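For the second-order case ($a = 1$), the three discriminant cases can be summarized in a small classifier (a sketch of our own; the function name and output format are hypothetical, not from the notes):

```python
import cmath

def general_solution(b, c):
    """Describe the general solution of y'' + b y' + c y = 0, following the
    sign of the discriminant D = b^2 - 4c of chi(s) = s^2 + b s + c."""
    D = b * b - 4 * c
    r1 = (-b + cmath.sqrt(complex(D))) / 2
    if D > 0:
        r2 = (-b - cmath.sqrt(complex(D))) / 2
        return f"C1 exp({r1.real:g} t) + C2 exp({r2.real:g} t)"
    if D == 0:
        return f"(C1 + C2 t) exp({r1.real:g} t)"
    alpha, beta = r1.real, abs(r1.imag)
    return f"exp({alpha:g} t) (C1 cos({beta:g} t) + C2 sin({beta:g} t))"

print(general_solution(-3, 2))  # D > 0: C1 exp(2 t) + C2 exp(1 t)
print(general_solution(2, 1))   # D = 0: (C1 + C2 t) exp(-1 t)
print(general_solution(0, 1))   # D < 0: exp(0 t) (C1 cos(1 t) + C2 sin(1 t))
```

The last call reproduces Example 2.2: $y(t) = C_1 \cos t + C_2 \sin t$.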

3. Beta Function

In this section, $x, y$ are positive real numbers. Let us define
$$B(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}.$$
The function $B(x, y)$ is called the Beta function. It follows immediately from the definition that
$$B(x, y) = B(y, x), \qquad \forall x, y > 0.$$
Using the property of the Gamma function $\Gamma(x+1) = x\Gamma(x)$ for $x > 0$, we obtain:

Proposition 3.1. Suppose $p, q > 0$. Then
(1) $B(p, q) = B(p+1, q) + B(p, q+1)$.
(2) $B(p, q+1) = \frac{q}{p} B(p+1, q) = \frac{q}{p+q} B(p, q)$.

Theorem 3.1. For $x, y > 0$,
$$B(x, y) = \int_0^\infty \frac{t^{x-1}}{(1+t)^{x+y}}\,dt.$$
Let us postpone the proof of this equation.
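The symmetry and Proposition 3.1 follow directly from the definition; here is a quick numerical confirmation using `math.gamma` (an illustration of ours, not part of the notes, with arbitrary sample values of $p, q$):

```python
import math

def beta(x, y):
    """B(x, y) = Γ(x) Γ(y) / Γ(x + y), as defined above."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

p, q = 2.3, 0.7
assert math.isclose(beta(p, q), beta(q, p))                        # symmetry
assert math.isclose(beta(p, q), beta(p + 1, q) + beta(p, q + 1))   # Prop 3.1 (1)
assert math.isclose(beta(p, q + 1), (q / p) * beta(p + 1, q))      # Prop 3.1 (2)
assert math.isclose(beta(p, q + 1), (q / (p + q)) * beta(p, q))
print("ok")
```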

Let us consider the substitution $t = \tan^2\theta$ with $0 \le \theta < \pi/2$. Then $dt = 2\tan\theta\sec^2\theta\,d\theta$. Using $\sec^2\theta = \tan^2\theta + 1$, the Beta function $B(x, y)$ can be rewritten as
$$\begin{aligned}
B(x, y) &= \int_0^{\pi/2} \frac{(\tan^2\theta)^{x-1}}{(1 + \tan^2\theta)^{x+y}} \cdot 2\tan\theta\sec^2\theta\,d\theta \\
&= 2\int_0^{\pi/2} \frac{\tan^{2x-2}\theta}{\sec^{2x+2y}\theta} \cdot \tan\theta\sec^2\theta\,d\theta \\
&= 2\int_0^{\pi/2} \tan^{2x-1}\theta \cdot \frac{1}{\sec^{2x+2y-2}\theta}\,d\theta \\
&= 2\int_0^{\pi/2} \left( \frac{\sin\theta}{\cos\theta} \right)^{2x-1} \cos^{2x+2y-2}\theta\,d\theta \\
&= 2\int_0^{\pi/2} \sin^{2x-1}\theta\cos^{2y-1}\theta\,d\theta.
\end{aligned}$$
Using $B(y, x) = B(x, y)$, we also obtain
$$B(x, y) = 2\int_0^{\pi/2} \cos^{2x-1}\theta\sin^{2y-1}\theta\,d\theta.$$
This is the second form of the Beta function. Consider the substitution $t = \sin^2\theta$ for $0 \le \theta \le \pi/2$. Then $dt = 2\sin\theta\cos\theta\,d\theta$, and we obtain the third form of the Beta function:
$$B(x, y) = \int_0^1 t^{x-1}(1-t)^{y-1}\,dt.$$
We conclude that

Theorem 3.2. The Beta function has the following forms:
(1) $B(x, y) = \int_0^\infty \frac{t^{x-1}}{(1+t)^{x+y}}\,dt$,

(2) $B(x, y) = 2\int_0^{\pi/2} \cos^{2x-1}\theta\sin^{2y-1}\theta\,d\theta$,
(3) $B(x, y) = \int_0^1 t^{x-1}(1-t)^{y-1}\,dt$.

Example 3.1. Compute $\int_0^\infty e^{-x^2}\,dx$.
Let us make the change of variable $t = x^2$. Then $dt = 2x\,dx$ and thus $dx = \frac{t^{-\frac{1}{2}}\,dt}{2}$. The integral can be rewritten as
$$\int_0^\infty e^{-x^2}\,dx = \frac{1}{2}\int_0^\infty e^{-t} t^{-\frac{1}{2}}\,dt = \frac{\Gamma\left(\frac{1}{2}\right)}{2}.$$
Now, we only need to compute $\Gamma(1/2)$. Using the Beta function,
$$B\left( \frac{1}{2}, \frac{1}{2} \right) = \frac{\Gamma\left(\frac{1}{2}\right)^2}{\Gamma\left(\frac{1}{2} + \frac{1}{2}\right)} = \Gamma\left( \frac{1}{2} \right)^2.$$
On the other hand,
$$B\left( \frac{1}{2}, \frac{1}{2} \right) = 2\int_0^{\pi/2} \cos^{2\cdot\frac{1}{2}-1}\theta\sin^{2\cdot\frac{1}{2}-1}\theta\,d\theta = 2\int_0^{\pi/2} d\theta = \pi.$$
Hence $\Gamma(1/2) = \sqrt{\pi}$. We obtain that
$$\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}.$$

Example 3.2. Compute $\int_0^{2\pi} \sin^4\theta\,d\theta$.
We know
$$\int_0^{2\pi} \sin^4\theta\,d\theta = 4\int_0^{\pi/2} \sin^4\theta\,d\theta = 2 \cdot 2\int_0^{\pi/2} \sin^{2\cdot\frac{5}{2}-1}\theta\cos^{2\cdot\frac{1}{2}-1}\theta\,d\theta = 2B\left( \frac{5}{2}, \frac{1}{2} \right).$$
We compute
$$B\left( \frac{5}{2}, \frac{1}{2} \right) = \frac{\Gamma\left(\frac{5}{2}\right)\Gamma\left(\frac{1}{2}\right)}{\Gamma(3)}.$$
Using $\Gamma(x+1) = x\Gamma(x)$, we obtain
$$\Gamma\left( \frac{5}{2} \right) = \frac{3}{2} \cdot \frac{1}{2} \cdot \Gamma\left( \frac{1}{2} \right) = \frac{3}{4}\sqrt{\pi}.$$
Hence $B\left( \frac{5}{2}, \frac{1}{2} \right) = \frac{\frac{3}{4}\sqrt{\pi} \cdot \sqrt{\pi}}{2!} = \frac{3}{8}\pi$. Thus $\int_0^{2\pi} \sin^4\theta\,d\theta = \frac{3}{4}\pi$.

Now, let us go back to the proof of Theorem 3.1.
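Both examples can be double-checked numerically (a sketch of ours; the midpoint rule below is not part of the notes):

```python
import math

# Example 3.1: Γ(1/2) = √π
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))

# Example 3.2: ∫_0^{2π} sin^4 θ dθ = 3π/4, via a midpoint rule
n = 100_000
h = 2 * math.pi / n
integral = sum(math.sin((i + 0.5) * h) ** 4 for i in range(n)) * h
print(integral, 3 * math.pi / 4)  # the two agree to high accuracy
```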

Proof (of Theorem 3.1).
$$\Gamma(x)\Gamma(y) = \left( \int_0^\infty e^{-t} t^{x-1}\,dt \right)\left( \int_0^\infty e^{-s} s^{y-1}\,ds \right) = \int_0^\infty \int_0^\infty e^{-(t+s)} t^{x-1} s^{y-1}\,dt\,ds.$$
Let us compute $\int_0^\infty e^{-(t+s)} t^{x-1}\,dt$. Consider $t = su$; we rewrite
$$\int_0^\infty e^{-(t+s)} t^{x-1}\,dt = s^x \int_0^\infty e^{-(u+1)s} u^{x-1}\,du.$$
Hence the integral becomes
$$\Gamma(x)\Gamma(y) = \int_0^\infty \left( \int_0^\infty e^{-(u+1)s} s^{x+y-1}\,ds \right) u^{x-1}\,du = \int_0^\infty \frac{\Gamma(x+y)}{(u+1)^{x+y}} \cdot u^{x-1}\,du = \Gamma(x+y) \int_0^\infty \frac{u^{x-1}}{(1+u)^{x+y}}\,du.$$
Here we use Proposition 1.2. This shows that
$$\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} = \int_0^\infty \frac{u^{x-1}}{(1+u)^{x+y}}\,du. \qquad ∎$$

Example 3.3. Compute $\int_1^3 (x-1)^{10}(x-3)^3\,dx$.
Let $t = (x-1)/2$. Then the integral becomes
$$\int_1^3 (x-1)^{10}(x-3)^3\,dx = \int_0^1 (2t)^{10}(2t-2)^3 \cdot 2\,dt = -2^{14} \int_0^1 t^{10}(1-t)^3\,dt.$$
We obtain
$$\int_1^3 (x-1)^{10}(x-3)^3\,dx = -2^{14} B(11, 4) = -2^{14} \cdot \frac{10!\,3!}{14!}.$$
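A direct numerical evaluation of the integral in Example 3.3 (a check of ours, not part of the notes) matches the closed form $-2^{14} \cdot \frac{10!\,3!}{14!}$:

```python
import math

exact = -2**14 * math.factorial(10) * math.factorial(3) / math.factorial(14)

# Midpoint-rule evaluation of ∫_1^3 (x-1)^10 (x-3)^3 dx
n = 200_000
h = 2 / n
numeric = sum(((1 + (i + 0.5) * h) - 1) ** 10 * ((1 + (i + 0.5) * h) - 3) ** 3
              for i in range(n)) * h
print(exact, numeric)  # both are about -4.0919
```

Note $\frac{10!\,3!}{14!} = \frac{1}{4004}$, so the exact value is $-2^{14}/4004$.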