
The Interpolation Problem; the Gamma Function

The interpolation problem: given a function whose values are known on some discrete set, such as the positive integers, how should the function be defined for all real numbers? Euler wanted to do this for the factorial function. He concluded that
\[
n! = \int_0^1 (-\ln x)^n \, dx.
\]
Observe that (in modern calculation)

\[
\int_0^1 -\ln x \, dx = \Bigl[\, -x\ln x + x \,\Bigr]_0^1
= 1 + \lim_{x\to 0^+} x\ln x
= 1 + \lim_{x\to 0^+} \frac{\ln x}{\frac{1}{x}}
= 1 + \lim_{x\to 0^+} \frac{\frac{1}{x}}{-\frac{1}{x^2}}
= 1 + \lim_{x\to 0^+} (-x) = 1 - 0 = 1,
\]
which is consistent with \(1! = 1\). Now consider the following integral:
\[
\int_0^1 (-\ln x)^n \, dx.
\]
We apply the (Leibniz version of the) formula \(\int u \, dv = uv - \int v \, du\):
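As a quick numerical sanity check of this value (a minimal sketch using only the Python standard library; the helper name and step count are my own choices):

```python
import math

def midpoint_integral(f, a, b, steps=100_000):
    """Midpoint Riemann sum approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

# The midpoints stay strictly inside (0, 1), avoiding the singularity at x = 0.
approx = midpoint_integral(lambda x: -math.log(x), 0.0, 1.0)
print(approx)  # close to 1, consistent with 1! = 1
```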

\[
u = (-\ln x)^n, \qquad dv = dx,
\]
\[
du = n(-\ln x)^{n-1}\left(-\frac{1}{x}\right)dx, \qquad v = x.
\]

\[
\int_0^1 (-\ln x)^n \, dx = \Bigl[\, x(-\ln x)^n \,\Bigr]_0^1 + n \int_0^1 (-\ln x)^{n-1} \, dx.
\]
Since (in modern notation and using the same limit techniques as above)

\[
\lim_{x\to 0^+} x(\ln x)^n = 0
\]

(Euler would probably have said that \(0(\ln 0)^n = 0\).) We have:

\[
\int_0^1 (-\ln x)^n \, dx = n \int_0^1 (-\ln x)^{n-1} \, dx,
\]
which is consistent with our inductive definition of the factorial function: \(n! = n \cdot (n-1)!\); so for positive integers the formula clearly works. Euler then used the transformation \(t = -\ln x\), so that \(x = e^{-t}\), to obtain the following. Note that when \(x = 1\), \(t = 0\), and as \(x \to 0^+\), \(t \to \infty\).
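The claim that this integral reproduces the factorial can be spot-checked numerically for small \(n\); a sketch with a midpoint Riemann sum (the helper name and step count are my own choices):

```python
import math

def euler_factorial(n, steps=200_000):
    """Approximate n! as the integral of (-ln x)**n over (0, 1),
    using a midpoint Riemann sum (the midpoints avoid the
    singularity of ln at x = 0)."""
    h = 1.0 / steps
    return h * sum((-math.log((i + 0.5) * h)) ** n for i in range(steps))

for n in range(1, 5):
    print(n, euler_factorial(n), math.factorial(n))
```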

\[
\int_0^1 (-\ln x)^n \, dx = \int_\infty^0 t^n \, d(e^{-t})
= \int_\infty^0 -t^n e^{-t} \, dt
= \int_0^\infty t^n e^{-t} \, dt,
\]
which is the modern formulation and leads us to the definition of the gamma function. For historical reasons (the fact that Gauss's \(\pi\) function was \(\pi(n) = (n-1)!\)), the gamma function \(\Gamma\) is defined by

\[
\Gamma(n+1) = \int_0^\infty t^n e^{-t} \, dt = n!.
\]
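Python's standard library happens to expose the gamma function as `math.gamma`, so the identity \(\Gamma(n+1) = n!\) can be spot-checked directly:

```python
import math

# Gamma(n + 1) should agree with n! for small non-negative integers n.
for n in range(6):
    print(n, math.gamma(n + 1), math.factorial(n))
```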

Let's calculate \(\Gamma(\tfrac{1}{2})\):
\[
\Gamma(n+1) = \int_0^\infty t^n e^{-t} \, dt
\qquad\Longrightarrow\qquad
\Gamma\!\left(\tfrac{1}{2}\right) = \int_0^\infty t^{-1/2} e^{-t} \, dt.
\]

Consider the change of variable \(t = x^2\):
\[
\int_0^\infty t^{-1/2} e^{-t} \, dt
= \int_0^\infty (x^2)^{-1/2} e^{-x^2} \, 2x \, dx
= \int_0^\infty \frac{1}{x} e^{-x^2} \, 2x \, dx
= 2 \int_0^\infty e^{-x^2} \, dx
= \int_{-\infty}^\infty e^{-x^2} \, dx,
\]
so
\[
\Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}.
\]
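A quick numerical sketch of this value: after the substitution the integral is \(2\int_0^\infty e^{-x^2}\,dx\), which we can truncate and approximate, then compare against `math.gamma(0.5)` (the truncation point 10 and step count are arbitrary choices of mine):

```python
import math

# After t = x**2, Gamma(1/2) = 2 * integral of exp(-x**2) over (0, inf).
# Truncate at x = 10, where the tail is negligibly small.
steps, upper = 100_000, 10.0
h = upper / steps
approx = 2 * h * sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(steps))

print(approx, math.gamma(0.5), math.sqrt(math.pi))
```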

Calculation of \(\int_{-\infty}^\infty e^{-x^2} \, dx\): We will use a change of variable from rectangular to polar coordinates, so we will need \(|J|\), the absolute value of the Jacobian determinant.

\[
x = r\cos\theta, \qquad y = r\sin\theta
\]

\[
J = \begin{pmatrix} \frac{\partial x}{\partial r} & \frac{\partial y}{\partial r} \\[2pt] \frac{\partial x}{\partial \theta} & \frac{\partial y}{\partial \theta} \end{pmatrix}
= \begin{pmatrix} \cos\theta & \sin\theta \\ -r\sin\theta & r\cos\theta \end{pmatrix},
\qquad |J| = r\cos^2\theta + r\sin^2\theta = r.
\]

\[
\left( \int_{-\infty}^\infty e^{-x^2} \, dx \right)^2
= \left( \int_{-\infty}^\infty e^{-x^2} \, dx \right)\left( \int_{-\infty}^\infty e^{-y^2} \, dy \right)
= \int_{-\infty}^\infty \int_{-\infty}^\infty e^{-x^2 - y^2} \, dx \, dy
= \int_0^{2\pi} \int_0^\infty e^{-r^2} \, |J| \, dr \, d\theta
= \int_0^{2\pi} \int_0^\infty e^{-r^2} \, r \, dr \, d\theta
= \int_0^{2\pi} \Bigl[ -\tfrac{1}{2} e^{-r^2} \Bigr]_0^\infty d\theta
= \int_0^{2\pi} \tfrac{1}{2} \, d\theta
= \tfrac{1}{2} \cdot 2\pi = \pi.
\]
Hence
\[
\int_{-\infty}^\infty e^{-x^2} \, dx = \sqrt{\pi}.
\]
In probability and statistics we will want to calculate
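The squaring trick can be sanity-checked numerically: approximate the one-dimensional integral and confirm that its square is close to \(\pi\) (a minimal sketch; the truncation at \(|x| = 10\) and the step count are arbitrary choices of mine):

```python
import math

# Midpoint approximation of the Gaussian integral over [-10, 10];
# the tails beyond |x| = 10 contribute essentially nothing.
steps = 200_000
a, b = -10.0, 10.0
h = (b - a) / steps
gauss = h * sum(math.exp(-(a + (i + 0.5) * h) ** 2) for i in range(steps))

print(gauss ** 2, math.pi)        # the square should be close to pi
print(gauss, math.sqrt(math.pi))
```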

\[
\int_{-\infty}^\infty e^{-\frac{(x-\mu)^2}{2\sigma^2}} \, dx
\]

Using the change of variable \(u = \frac{x-\mu}{\sqrt{2}\,\sigma}\), so that \(du = \frac{dx}{\sqrt{2}\,\sigma}\), we get

\[
\int_{-\infty}^\infty e^{-\frac{(x-\mu)^2}{2\sigma^2}} \, dx
= \int_{-\infty}^\infty e^{-u^2} \, \sqrt{2}\,\sigma \, du
= \sqrt{2}\,\sigma\sqrt{\pi} = \sigma\sqrt{2\pi}.
\]
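This value can also be checked numerically for particular parameters (a sketch; the parameter values, truncation at ten standard deviations, and step count are arbitrary choices of mine):

```python
import math

# Hypothetical sample parameters, chosen only for this check.
mu, sigma = 1.5, 2.0

# Midpoint approximation over [mu - 10*sigma, mu + 10*sigma];
# beyond ten standard deviations the integrand is negligible.
steps = 200_000
a, b = mu - 10 * sigma, mu + 10 * sigma
h = (b - a) / steps
integral = h * sum(
    math.exp(-((a + (i + 0.5) * h) - mu) ** 2 / (2 * sigma ** 2))
    for i in range(steps)
)

print(integral, sigma * math.sqrt(2 * math.pi))
```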

This gives us the probability density function:

\[
f_{\mu,\sigma}(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.
\]
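The standard library's `statistics.NormalDist` implements this same density, so a direct transcription of the formula can be compared against it (the parameter values and function name below are arbitrary choices of mine):

```python
import math
import statistics

mu, sigma = 1.5, 2.0  # arbitrary parameters for the comparison
dist = statistics.NormalDist(mu, sigma)

def density(x):
    """Direct transcription of the density derived above."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

for x in (-1.0, 0.0, 1.5, 4.0):
    print(x, density(x), dist.pdf(x))
```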

Factorial-like Property of the Gamma Function

From the definition of the gamma function we have the following:

\[
\Gamma(n+1) = n!, \qquad \Gamma(n+2) = (n+1)! = (n+1)\,\Gamma(n+1).
\]

So the choice of the gamma function as a generalization of the factorial would be particularly nice if we could show this for all \(x\) and not just for positive integers. We'd like to prove that \(\Gamma(x) = (x-1)\Gamma(x-1)\). In the following, assume \(x\) is a particular positive real number (we'll see why we need positivity below). From our definition of \(\Gamma(x)\) we have
\[
\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \, dt.
\]
We proceed with integration by parts in Leibniz notation, treating \(x\) as a constant:
\[
\int u \, dv = uv - \int v \, du,
\]
\[
u = t^{x-1}, \qquad dv = e^{-t} \, dt,
\]
\[
du = (x-1)t^{x-2} \, dt, \qquad v = -e^{-t},
\]
\[
\int_0^\infty t^{x-1} e^{-t} \, dt
= \Bigl[\, -t^{x-1} e^{-t} \,\Bigr]_0^\infty - \int_0^\infty (x-1)t^{x-2}(-e^{-t}) \, dt.
\]

It's a simple exercise to evaluate \(t^{x-1}e^{-t}\big|_0^\infty\) as long as \(x \ge 1\). When \(x\) is less than 1, the lower integration limit yields an infinite value, so the evaluation will only be finite for \(x \ge 1\). In these cases we have:

\[
\int_0^\infty t^{x-1} e^{-t} \, dt
= \Bigl[\, -t^{x-1} e^{-t} \,\Bigr]_0^\infty - \int_0^\infty (x-1)t^{x-2}(-e^{-t}) \, dt
= 0 + (x-1)\int_0^\infty t^{x-2} e^{-t} \, dt,
\]
so
\[
\Gamma(x) = (x-1)\,\Gamma(x-1).
\]
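Again `math.gamma` lets us spot-check the recurrence at non-integer points (the sample values of \(x\) below are arbitrary):

```python
import math

# Check Gamma(x) == (x - 1) * Gamma(x - 1) at some non-integer x > 1.
for x in (1.5, 2.25, 3.7, 5.5):
    print(x, math.gamma(x), (x - 1) * math.gamma(x - 1))
```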
