
1 Taylor-Maclaurin

Writing $x = x_0 + n\Delta x$, $x - x_1 = (n-1)\Delta x$, \ldots, we get $(\Delta^2 y_0) = y_0''(\Delta x)^2$, \ldots, and letting $n \to \infty$, a leap of logic reduces the interpolation formula to:
$$y = y_0 + (x - x_0)y_0' + \frac{1}{2!}(x - x_0)^2 y_0'' + \cdots$$
Definition 1.0.1. A function $f$ is said to be a $C^n$ function on $(a,b)$ if $f$ is $n$-times differentiable and the $n$th derivative $f^{(n)}$ is continuous on $(a,b)$; $f$ is said to belong to $C^\infty$ if every derivative of $f$ exists (and is continuous) on $(a,b)$.

Taylor's Formula: Suppose $f$ belongs to $C^\infty$ on $(-R,R)$. Then for every $n \in \mathbb{N}$ and $x \in (-R,R)$ we have:
$$f(x) = f(0) + f'(0)x + \frac{1}{2!}f^{(2)}(0)x^2 + \cdots + \frac{1}{n!}f^{(n)}(0)x^n + R_n(x)$$

$f$ is said to be analytic at $0$ if the remainder $R_n(x) \to 0$ as $n \to \infty$.

There are two standard forms for the remainder.

1. Integral form:
$$R_n(x) = \frac{1}{n!}\int_0^x f^{(n+1)}(t)(x-t)^n\,dt$$
Proof. Integrating by parts,
$$\frac{1}{n!}\int_0^x f^{(n+1)}(t)(x-t)^n\,dt = -\frac{1}{n!}f^{(n)}(0)x^n + \frac{1}{(n-1)!}\int_0^x f^{(n)}(t)(x-t)^{n-1}\,dt$$
The proof is completed by induction on $n$, observing that the result holds for $n = 0$ by the Fundamental Theorem of Calculus.

2. Lagrange's form: There exists $c \in (0,x)$ such that:
$$R_n(x) = \frac{1}{(n+1)!}f^{(n+1)}(c)x^{n+1}$$
This form is easily derived from the integral form using the intermediate value theorem of integral calculus.

Lemma 1.0.2. If $h\,(\ge 0)$ and $g$ are continuous on $[a,b]$, then there exists $c \in (a,b)$ such that

$$\int_a^b h(t)g(t)\,dt = g(c)\int_a^b h(t)\,dt$$
Proof. If $m$ and $M$ denote the min and max of $g$ on $[a,b]$, then $h \ge 0$ implies (assuming $h$ is not identically zero),
$$m \le \frac{\int_a^b h(t)g(t)\,dt}{\int_a^b h(t)\,dt} \le M$$
Now the intermediate value theorem for continuous functions assures the existence of $c$.

Examples:

1. Let $f(x) = \sin(x)$. Then given $x > 0$, there exists $\xi \in (0,x)$ such that $R_n(x) = \frac{1}{(n+1)!}f^{(n+1)}(\xi)x^{n+1}$ and hence $|R_n(x)| \le \frac{1}{(n+1)!}x^{n+1}$. Thus $R_n(x) \to 0$ as $n \to \infty$ (uniformly on any finite subinterval of $\mathbb{R}$). Consequently, $\sin(x) = x - \frac{x^3}{3!} + \cdots$. Note that even if the series on the r.h.s. converges for every $x$, to show that it converges to $\sin(x)$ it is necessary to prove that $R_n \to 0$ as $n \to \infty$. (A numerical check of this example appears after Example 3 below.)

2. Let $f(x) = \log(1+x)$, $x > -1$. Then $f^{(n)}(x) = (-1)^{n+1}(n-1)!(1+x)^{-n}$. Using the integral form (and assuming $0 < x < 1$) we get:
$$|R_n(x)| \le \int_0^x \Big(\frac{|x-t|}{1+t}\Big)^n \frac{1}{1+t}\,dt \le \int_0^x \frac{|x|^n}{1+t}\,dt \le x^n\log(1+x)$$

Using Lagrange's form, $R_n(1) = \frac{(-1)^n}{n+1}(1+\xi)^{-n-1}$. Thus $R_n(x) \to 0$ as $n \to \infty$ if $x \in (0,1]$ (the same integral estimate also handles $-1 < x < 0$), and
$$\log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots \quad \text{for } x \in (-1,1].$$

3. Let $f(x) = (1+x)^\alpha$ where $\alpha \in \mathbb{R}\setminus\mathbb{N}$, $\alpha \ne 0$, and $|x| < 1$. Then
$$\frac{1}{k!}f^{(k)}(x) = \binom{\alpha}{k}(1+x)^{\alpha-k}, \quad k = 1, 2, \ldots$$

Thus,
$$R_n(x) = \frac{1}{n!}[\alpha(\alpha-1)\cdots(\alpha-n)]\int_0^x \Big(\frac{x-t}{1+t}\Big)^n (1+t)^{\alpha-1}\,dt$$
If $r_n = \frac{1}{n!}[\alpha(\alpha-1)\cdots(\alpha-n)]$, then $|r_{n+1}/r_n| \to 1$ as $n \to \infty$. Now if $|x| < 1$, choose $\varrho > 1$ such that $\varrho|x| < 1$ and observe that $|r_n||x|^n \le c(\varrho|x|)^n \to 0$ as $n \to \infty$. Consequently, since $\big|\frac{x-t}{1+t}\big| \le |x|$ for $t \in (0,x)$, as $n \to \infty$,
$$|R_n(x)| \le |r_n||x|^n\Big|\int_0^x (1+t)^{\alpha-1}\,dt\Big| \Rightarrow |R_n(x)| \to 0$$
Thus,
$$(1+x)^\alpha = 1 + \sum_{k=1}^\infty \binom{\alpha}{k}x^k$$
Example: Let $\alpha = -\frac{1}{2}$. Then for $|x| < 1$,

$$\binom{-\frac{1}{2}}{k} = (-1)^k\frac{(2k)!}{2^{2k}(k!)^2}$$
Hence
$$(1+x)^{-1/2} = 1 + \sum_{k=1}^\infty (-1)^k\frac{(2k)!}{2^{2k}(k!)^2}x^k$$
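As a quick numerical illustration of Examples 1 and 3 (a minimal sketch; the truncation orders and sample points are arbitrary choices, not part of the original text), one can compare the partial sums with the functions they represent and with the remainder bound:

```python
import math

# Example 1: Taylor partial sums of sin(x), with the bound |R_m(x)| <= |x|^(m+1)/(m+1)!
def sin_partial(x, n):
    # polynomial of degree 2n+1
    return sum((-1)**k * x**(2*k+1) / math.factorial(2*k+1) for k in range(n+1))

x, n = 2.0, 10
print(abs(sin_partial(x, n) - math.sin(x)), x**(2*n+2) / math.factorial(2*n+2))

# Example 3 with alpha = -1/2: binomial partial sums versus (1+x)^(-1/2), |x| < 1
def binom_partial(x, N):
    s, term = 1.0, 1.0
    for k in range(1, N+1):
        term *= (-0.5 - (k-1)) / k * x   # term = C(-1/2, k) x^k, built recursively
        s += term
    return s

x = 0.4
print(binom_partial(x, 50), (1+x)**(-0.5))
```

The first line prints an error below the stated bound; the second shows the binomial partial sum agreeing with the closed form to machine precision.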

2 Uniform Convergence and Power Series

2.1 Prerequisites

Definition 2.1.1. (a) If $\{f_n\}$ is a sequence of functions defined on a common domain $A$, we say that $\{f_n\}$ converges (pointwise) to a function $f$ defined on $A$ if for every $x \in A$, $f_n(x) \to f(x)$ as $n \to \infty$.

(b) We say that $f_n$ converges uniformly to $f$ on $A$ if

$$\sup\{|f_n(x) - f(x)| : x \in A\} \to 0 \ \text{ as } n \to \infty.$$
What are the advantages of having uniform convergence? Recall that $g$ is said to be continuous at $x$ if $g(x + \Delta x) \to g(x)$ as $\Delta x \to 0$.

Theorem 2.1.2. Suppose $f_n \to f$ uniformly on $A \subseteq \mathbb{R}$. The following hold.

1. If each $f_n$ is continuous on $A$, then so is $f$.

2. If $A = [a,b]$, then $\int_A f_n \to \int_A f$.

Proof. 1. Fix $x_0 \in A$. Given $\varepsilon > 0$, choose $N$ such that $|f(x) - f_N(x)| < \varepsilon/3$ for every $x \in A$. In particular, $|f(x_0) - f_N(x_0)| < \varepsilon/3$ and $|f(x_0 + \Delta x_0) - f_N(x_0 + \Delta x_0)| < \varepsilon/3$. By hypothesis, $f_N$ is continuous, so there exists $\delta$ such that if $|\Delta x_0| < \delta$, then

$$|f_N(x_0 + \Delta x_0) - f_N(x_0)| < \varepsilon/3$$

Now, if $|\Delta x_0| < \delta$,

$$|f(x_0 + \Delta x_0) - f(x_0)| \le |f(x_0 + \Delta x_0) - f_N(x_0 + \Delta x_0)| + |f_N(x_0 + \Delta x_0) - f_N(x_0)| + |f_N(x_0) - f(x_0)| < \varepsilon/3 + \varepsilon/3 + \varepsilon/3 = \varepsilon$$

i.e. f is continuous at x0.

2. Simply observe that
$$\Big|\int_a^b (f_n - f)\,dx\Big| \le (b-a)\max\{|f_n(x) - f(x)| : a \le x \le b\} \to 0$$

Examples:

1. Let $A = [0,1]$ and $f_n(x) = x^n$. If $f(1) = 1$ and $f(x) = 0$ on $[0,1)$, then $f_n \to f$ pointwise, but not uniformly (see the numerical sketch after these examples).

2. Let $A = [0,\infty)$ and let $f_n$ be the (isosceles) triangular graph with base $[0,n]$ and height $2/n$ at $x = n/2$. Then $f_n(x) \to 0$ uniformly on $A$.

3. Let $A = [0,\infty)$ and let $f_n$ be the (isosceles) triangular graph with base $[0,1/n]$ and height $1$ at $x = 1/(2n)$. Then $f_n(x) \to 0$ on $A$, but not uniformly.
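To illustrate Example 1 numerically (a minimal sketch; the grid and the exponents are arbitrary choices), the sup-distance of $x^n$ from its pointwise limit on $[0,1]$ does not tend to $0$:

```python
# sup-distance of x^n from its pointwise limit on [0,1]: it does not tend to 0
grid = [i/10000 for i in range(10001)]
limit = [0.0]*10000 + [1.0]            # pointwise limit: 0 on [0,1), 1 at x = 1
for n in (5, 50, 500):
    print(n, max(abs(x**n - fx) for x, fx in zip(grid, limit)))
```

By contrast, in Example 2 the sup-distance is exactly $2/n \to 0$.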

2.2 Power Series

A power series centered at $a$ is an infinite series of the form $\sum_0^\infty c_n(x-a)^n$ where $\{c_n\} \subseteq \mathbb{R}$. For our purpose it suffices to set $a = 0$, though the results are true for any $a \in \mathbb{R}$. There exists a number $R$ such that the series converges whenever $|x| < R$ and diverges whenever $|x| > R$. For clear reasons, $R$ is called the radius of convergence. If $\frac{c_{n+1}}{c_n} \approx 1/R$, or equivalently $|c_n|^{1/n} \approx 1/R$, then
$$|c_i x^i| \approx \Big(\frac{|x|}{R}\Big)^i$$
If $|x| < R$, then by comparison with the geometric series, $\sum c_k x^k$ is convergent. Let

$$f(x) = c_0 + c_1 x + c_2 x^2 + \cdots$$

What makes power series very useful is that they are (almost) as easy to manipulate as polynomials. The principal reason is the following.

Theorem. If $[a,b] \subseteq (-R,R)$ and $x \in [a,b]$, then $s_n(x) = \sum_0^n c_i x^i$ converges uniformly to $f(x)$.

Proof: As before, $|c_i x^i| \lesssim (M/R)^i$ where $M = \max(|a|,|b|) < R$.

The following corollary is now immediate.

Corollary. If $[a,b] \subseteq (-R,R)$, then

$$\int_a^b f(x)\,dx = \sum_{i=0}^\infty c_i\int_a^b x^i\,dx$$

Also, $|nc_n|^{1/n} \approx |c_n|^{1/n} \approx 1/R$ and hence for $|x| < R$, $\sum_{i=n+1}^\infty i c_i x^{i-1} \to 0$ as $n \to \infty$. Hence
$$s_n' \to g \Rightarrow \int s_n' \to \int g$$

But $\int s_n' = s_n \to f$ and thus $f = \int g$, or $g = f'$. Thus, $s_n' \to f'$. It follows by induction that $f$ is in $C^\infty$ on $(-R,R)$.

Discussing the behavior of the power series at $x = \pm R$ takes work.

Abel's Theorem: If $f(x) = \sum_0^\infty c_n x^n$ is convergent on $(-1,1)$ and $\sum c_n$ is a convergent series of numbers, then
$$\lim_{x\to 1^-} f(x) = \sum c_n$$
Example: Recall that $\frac{1}{1+x^2} = 1 - x^2 + x^4 - \cdots$. Integrating term-by-term we have:
$$\tan^{-1}(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \cdots \quad \text{if } |x| < 1.$$
By the alternating series theorem, we know that $1 - \frac{1}{3} + \frac{1}{5} - \cdots$ is convergent. So by Abel's theorem, using the fact that $\tan^{-1}(1) = \frac{\pi}{4}$, we get
$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \cdots$$
Question: Let $f(x) = e^{-1/x}$ if $x > 0$ and $0$ otherwise. Show that $f \in C^\infty$ on $\mathbb{R}$. Is $f$ analytic at $0$?
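Returning to the Leibniz series above, a small numerical sketch (the number of terms is an arbitrary choice): the alternating partial sums straddle $\pi/4$, with error bounded by the next omitted term.

```python
import math

# Partial sums of 1 - 1/3 + 1/5 - ... compared with pi/4
s = 0.0
for n in range(8):
    s += (-1)**n / (2*n + 1)
    print(n, s, s - math.pi/4)   # error alternates in sign, bounded by 1/(2n+3)
```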

3 Applications

3.1 Differential Equations

Consider the second order linear differential equation

$$y'' + p(x)y' + q(x)y = 0$$

Definition 3.1.1. A point $x_0$ in $\mathbb{R}$ is said to be a regular point if $p(x)$ and $q(x)$ have power-series expansions in an interval around $x_0$. Otherwise, it is a singular point.

Example 1: $y'' + 2xy' + 2y = 0$

We look for a solution of the form $y = \sum_{k=0}^\infty c_k x^k$ for $x \in (-R,R)$. Differentiate and substitute:

$$y' = \sum_{k=1}^\infty k c_k x^{k-1}, \qquad y'' = \sum_{k=2}^\infty k(k-1)c_k x^{k-2}$$
Thus,
$$\sum_{k=0}^\infty (k+1)(k+2)c_{k+2}x^k + 2\sum_{k=0}^\infty k c_k x^k + 2\sum_{k=0}^\infty c_k x^k = 0$$
Equivalently,
$$(k+1)(k+2)c_{k+2} + 2(k+1)c_k = 0 \Rightarrow c_{k+2} = -\frac{2}{k+2}c_k, \quad k \ge 0$$
In general, for $k \ge 0$,

$$c_{2k} = \frac{(-1)^k}{k!}c_0 \quad\text{and}\quad c_{2k+1} = \frac{(-1)^k 2^k}{1\cdot 3\cdot 5\cdots(2k+1)}c_1$$
Thus the general solution is:

$$y = c_0\sum_{k=0}^\infty \frac{(-1)^k}{k!}x^{2k} + c_1\sum_{k=0}^\infty \frac{(-1)^k 2^k}{1\cdot 3\cdot 5\cdots(2k+1)}x^{2k+1}$$
Example 2: $xy'' + y' + xy = 0$

Here $x_0 = 0$ is a singular point. Nonetheless, a power-series solution is possible. In this case we get $c_1 = 0$ and, for $k \ge 0$,
$$c_{k+2} = -\frac{1}{(k+2)^2}c_k$$
Thus, the solution to Bessel's equation is:
$$J_0(x) = 1 - \frac{x^2}{2^2} + \frac{x^4}{2^2\cdot 4^2} - \frac{x^6}{2^2\cdot 4^2\cdot 6^2} + \cdots$$
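A quick numerical check (a minimal sketch; the truncation order and sample points are arbitrary choices): build the truncated series from the recursion $c_{k+2} = -c_k/(k+2)^2$ and verify that $xy'' + y' + xy$ is numerically negligible.

```python
def j0_series_coeffs(N=40):
    # coefficients of the truncated series: c_0 = 1, c_1 = 0, c_{k+2} = -c_k/(k+2)^2
    c = [0.0]*(N+1)
    c[0] = 1.0
    for k in range(N-1):
        c[k+2] = -c[k]/(k+2)**2
    return c

c = j0_series_coeffs()
def y(x):   return sum(ck*x**k for k, ck in enumerate(c))
def yp(x):  return sum(k*ck*x**(k-1) for k, ck in enumerate(c) if k >= 1)
def ypp(x): return sum(k*(k-1)*ck*x**(k-2) for k, ck in enumerate(c) if k >= 2)

for x in (0.5, 1.0, 2.0):
    print(x, y(x), x*ypp(x) + yp(x) + x*y(x))   # residual near machine precision
```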

3.2 Cauchy's derivation of Newton's binomial series

Given $\alpha \in \mathbb{R}$ and $x \in (-1,1)$, let $r_n(\alpha) = \frac{\alpha(\alpha-1)\cdots(\alpha-n+1)}{n!}$.

Lemma 3.2.1. For a fixed $x \in (-1,1)$ and any $\alpha \in \mathbb{R}$,

$$\frac{|r_{n+1}(\alpha)|}{|r_n(\alpha)|} = \frac{|\alpha - n|}{n+1} \to 1$$

and further the convergence is uniform over α in any closed subinterval of R. The following fact is well-known.

• If {tn} is a sequence of positive numbers, then

$$t_{n+1}/t_n \to l \Rightarrow (t_n)^{1/n} \to l$$

Consequently,
$$|r_n(\alpha)|^{1/n} \to 1 \Rightarrow \big[|r_n(\alpha)||x|^n\big]^{1/n} \to |x|$$
and the convergence is uniform over $\alpha$ in any closed subinterval of $\mathbb{R}$.

Corollary 3.2.2. If $R_n(\alpha,x) = \sum_{k=n+1}^\infty r_k(\alpha)x^k$, then for a fixed $x \in (-1,1)$,

$$R_n(\alpha, x) \to 0$$

and the convergence is uniform over α in any closed subinterval of R.

Proof. Choose $\varrho \in (|x|,1)$ such that $|r_n(\alpha)||x|^n \le \varrho^n$ for $n \ge N$ and $\alpha \in [-M,M]$.

$$|R_n(\alpha,x)| \le \sum_{k=n+1}^\infty \varrho^k \le \frac{\varrho^{n+1}}{1-\varrho} \to 0$$

Now define a family of functions $f(\alpha,x)$ by the following convergent series and observe its properties.
$$f(\alpha,x) = 1 + \alpha x + \frac{\alpha(\alpha-1)}{2!}x^2 + \cdots$$

1. $f(\alpha_1 + \alpha_2, x) = f(\alpha_1,x)\cdot f(\alpha_2,x)$ for all $x \in (-1,1)$.

2. $f(n,x) = [f(1,x)]^n = (1+x)^n$, if $n \in \mathbb{N}$.

3. $f(\alpha,x) = [f(1,x)]^\alpha$, if $\alpha \in \mathbb{Q}$.

4. For a fixed $x \in (-1,1)$, $f(\alpha,x)$ is a continuous function of $\alpha$.

6 Proof. (1) follows from a simple computation.

(2) It follows by induction on n that,

$$f(n,x) = f^n(1,x)$$

By the same argument, $f(\beta n, x) = f^n(\beta, x)$ for any $\beta \in \mathbb{R}$.

(3) If $\alpha = \frac{m}{n}$, then $f(m,x) = f(\alpha n, x) = [f(\alpha,x)]^n$. However, $f(m,x) = [f(1,x)]^m$ by (2). Thus, $f(\alpha,x) = [f(1,x)]^{m/n}$.

(4) If $S_n(\alpha) = \sum_{k=0}^{n-1} r_k(\alpha)x^k$, then $S_n(\alpha) \to f(\alpha,x)$, uniformly over $\alpha$ in any closed interval (Corollary 3.2.2). Thus, for every fixed $x \in (-1,1)$, $f(\alpha,x)$ is a continuous function of $\alpha$, since $\alpha$ belongs to the closed interval $[\alpha - 1, \alpha + 1]$.

Proposition 3.2.3. For $\alpha \in \mathbb{R}$ and $x \in (-1,1)$ we have: $f(\alpha,x) = (1+x)^\alpha$

Proof. If $\beta \in \mathbb{Q}$, then using $f(1,x) = 1+x$ and Property (3) we get $f(\beta,x) = (1+x)^\beta$.

Now, given α ∈ R\Q, choose {βn} ⊆ Q which converges to α. By continuity,

$$f(\alpha,x) = \lim_{n\to\infty} f(\beta_n,x) = \lim_{n\to\infty}[f(1,x)]^{\beta_n} = \lim_{n\to\infty}(1+x)^{\beta_n} = (1+x)^\alpha$$

Thus, for $x \in (-1,1)$ and $\alpha \in \mathbb{R}$,
$$(1+x)^\alpha = 1 + \alpha x + \frac{\alpha(\alpha-1)}{2!}x^2 + \cdots$$

Note: Observe that $\big|\binom{\alpha}{k}\big| \le M/k^{1+\alpha}$ implies that for $\alpha > 0$, $\sum_n |r_n(\alpha)| < \infty$. Hence $R_n(\alpha,x) \to 0$ uniformly on $[-1,1]$. Thus $f(\alpha,x)$ converges on $[-1,1]$.

Historical note: This is basically Cauchy's proof of Newton's expression of the binomial series. However, Cauchy had simply assumed the continuity of the function $f(\alpha,x)$ without realizing that a non-uniform limit of continuous functions can fail to be continuous. The introduction of the notion of uniform convergence and its application here are attributed to a young Norwegian mathematician named Abel. Abel called Cauchy "a bigoted Catholic" but praised his mathematical contributions.

3.3 Evaluation of ζ(2)

Now consider the special case of the binomial series with $x = -t^2$ and $\alpha = -\frac{1}{2}$. Then

$$(1-t^2)^{-1/2} = 1 + \sum_{k=1}^\infty a_k t^{2k}, \quad |t| < 1$$

where $a_k = \frac{(2k)!}{2^{2k}(k!)^2}$. By Wallis's product formula,
$$a_k \sim \frac{1}{\sqrt{\pi k}}, \quad k \to \infty$$
Integrating the power series term-by-term we have:

$$\sin^{-1}(x) = x + \sum_{k=1}^\infty \frac{a_k}{2k+1}x^{2k+1}, \quad |x| < 1$$

Now $\sum_{k=1}^\infty a_k/(2k+1) \sim \sum_{k=1}^\infty 1/k^{3/2} < \infty$. Hence by Weierstrass's M-test, the r.h.s. is continuous on $[-1,1]$. If $\theta = \sin^{-1}(x)$, then

$$\theta = \sin\theta + \sum_{k=1}^\infty \frac{a_k}{2k+1}(\sin\theta)^{2k+1}, \quad |\theta| \le \frac{\pi}{2}$$

and the series converges uniformly on [−π/2, π/2]. Integrate term-by-term,

$$\int_0^{\pi/2}\theta\,d\theta = \int_0^{\pi/2}\sin\theta\,d\theta + \sum_{k=1}^\infty\frac{a_k}{2k+1}\int_0^{\pi/2}(\sin\theta)^{2k+1}\,d\theta$$
Now recall that
$$I_{2k+1} = \int_0^{\pi/2}(\sin\theta)^{2k+1}\,d\theta = \frac{2^{2k}(k!)^2}{(2k+1)!}$$
Hence
$$\frac{a_k}{2k+1}\,I_{2k+1} = \frac{1}{(2k+1)^2}$$
Term-by-term integration yields

$$\frac{\pi^2}{8} = 1 + \sum_{k=1}^\infty\frac{1}{(2k+1)^2} = \sum_{k=0}^\infty\frac{1}{(2k+1)^2}$$
Now observe that
$$\sum_{n=1}^\infty\frac{1}{n^2} = \frac{1}{4}\sum_{n=1}^\infty\frac{1}{n^2} + \sum_{k=0}^\infty\frac{1}{(2k+1)^2}$$
Thus,
$$\zeta(2) = \frac{\pi^2}{6}$$
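A numerical sanity check of the two sums above (a minimal sketch; the cutoff is an arbitrary choice):

```python
import math

N = 100000
odd_sum  = sum(1.0/(2*k+1)**2 for k in range(N))     # should approach pi^2/8
full_sum = sum(1.0/n**2 for n in range(1, N+1))      # should approach pi^2/6
print(odd_sum, math.pi**2/8)
print(full_sum, math.pi**2/6)
```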

4 Infinite Products

This is a take-off on the concept of infinite sums and uses many of the same ideas. We do not explore it in full generality, but deal with cases that provide sufficient structure for examples that interest us.

Definition 4.0.1. Given $\{a_n\} \subseteq \mathbb{R}$ with $a_n \ne -1$, we say that the infinite product $\prod(1+a_k)$ is convergent if the sequence of partial products $p_n = \prod_{k=1}^n(1+a_k)$ converges to a non-zero limit $p$ in $\mathbb{R}$. Otherwise, the infinite product is said to diverge.

When the infinite product $\prod_{k=1}^\infty(1+a_k)$ converges, then $a_k \to 0$.

Theorem 4.0.2. (a) If $a_n \ge 0$ for all $n$ (or $a_n \le 0$ for all $n$), then $\prod_{k=1}^\infty(1+a_k)$ is convergent if and only if the series $\sum a_n$ is convergent.

(b) If $a_k > -1$, then the infinite product $\prod_{k=1}^\infty(1+a_k)$ is convergent if and only if $\sum\log(1+a_k)$ is convergent.

Proof. (a) If $a_n \ge 0$, then $p_n \le p_{n+1}$ and $s_n \le s_{n+1}$ (where $s_n = \sum_{k=1}^n a_k$). Also, it follows by induction that $p_n \ge 1 + s_n$. Moreover, $1 + x \le e^x \Rightarrow p_n \le e^{s_n}$. It now follows from the monotone convergence theorem that $\{p_n\}$ is convergent if and only if $\{s_n\}$ is.

Corollary 4.0.3. $\prod_{k=1}^\infty(1 - x^2/k^2)$ converges for all $x \in \mathbb{R}$.

4.1 Application to ζ(2)

Recall that if $\{a_1, a_2, \ldots, a_n\}$ are the roots of a polynomial $p(x)$ and $p(0) = 1$, then we may write:
$$p(x) = \Big(1 - \frac{x}{a_1}\Big)\cdots\Big(1 - \frac{x}{a_n}\Big) = 1 - x\Big(\frac{1}{a_1} + \cdots + \frac{1}{a_n}\Big) + \cdots$$
Consider the series for $\frac{\sin(x)}{x}$ and recall that $\lim_{x\to 0}\frac{\sin(x)}{x} = 1$.
$$\frac{\sin(x)}{x} = 1 - \frac{x^2}{3!} + \cdots$$
is a representation of a "polynomial of infinite degree" with roots $x = \pm n\pi$, $n \in \mathbb{N}$. In analogy with a polynomial (and ignoring many questions of rigor, which were resolved much later), Euler claimed:
$$\frac{\sin(x)}{x} = \Big(1 - \frac{x^2}{\pi^2}\Big)\Big(1 - \frac{x^2}{4\pi^2}\Big)\cdots$$
Now comparing the coefficients of $x^2$ in the factored form and the infinite series, we have:
$$\frac{1}{\pi^2}\sum_{n=1}^\infty\frac{1}{n^2} = \frac{1}{3!}$$
Note: If we substitute $x = \pi/2$ in Euler's factorisation formula and then reciprocate both sides, we get:
$$\frac{\pi}{2} = \prod_{n=1}^\infty\Big(\frac{2n}{2n-1}\Big)\cdot\Big(\frac{2n}{2n+1}\Big)$$
which is Wallis's product formula.
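A numerical sketch of Euler's factorisation at $x = \pi/2$ (the number of factors is an arbitrary choice): the reciprocated partial products approach $\pi/2$, i.e. Wallis's product.

```python
import math

p = 1.0
for n in range(1, 100001):
    p *= (2*n/(2*n - 1)) * (2*n/(2*n + 1))   # partial Wallis product
print(p, math.pi/2)                           # converges slowly (error of order 1/n)
```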

5 Weierstrass's Approximation Theorem

Theorem 5.0.1. If $f$ is a continuous function on $[a,b]$, there exists a sequence of polynomials $\{p_n\}$ which converges to $f$ uniformly on $[a,b]$.

Bernstein's Proof: Let $[a,b] = [0,1]$. The tools involve the following combinatorial formulae.

1. $\sum_{k=0}^n\binom{n}{k}x^k(1-x)^{n-k} = (x + 1 - x)^n = 1$.

2. $\sum_{k=0}^n\big(x - \frac{k}{n}\big)^2\binom{n}{k}x^k(1-x)^{n-k} = \frac{1}{n}x(1-x)$.

Proof. Write $M = \max\{|f(x)| : 0 \le x \le 1\}$ and define the Bernstein polynomials:
$$B_n(x) = \sum_{k=0}^n f\Big(\frac{k}{n}\Big)\binom{n}{k}x^k(1-x)^{n-k}$$
Given $\varepsilon > 0$, choose $\delta > 0$ such that whenever $|x-y| < \delta$, then $|f(x)-f(y)| < \varepsilon$, which is possible since $f$ is continuous on a closed sub-interval of $\mathbb{R}$. Now
$$B_n(x) - f(x) = \sum_{k=0}^n\Big[f\Big(\frac{k}{n}\Big) - f(x)\Big]\binom{n}{k}x^k(1-x)^{n-k}$$
Let $S_1 = \{k : |\frac{k}{n} - x| < \delta\}$ and $S_2 = \{k : |\frac{k}{n} - x| \ge \delta\}$. Now
$$\sum_{k\in S_1}\Big|f\Big(\frac{k}{n}\Big) - f(x)\Big|\binom{n}{k}x^k(1-x)^{n-k} \le \varepsilon\sum_{k\in S_1}\binom{n}{k}x^k(1-x)^{n-k} \le \varepsilon\sum_{k=0}^n\binom{n}{k}x^k(1-x)^{n-k} \le \varepsilon$$

Moreover, if $k \in S_2$, then $(x - \frac{k}{n})^2 \ge \delta^2$, and if $n \ge \delta^{-4}$, then
$$\sum_{k\in S_2}\Big|f\Big(\frac{k}{n}\Big) - f(x)\Big|\binom{n}{k}x^k(1-x)^{n-k} \le 2M\sum_{k\in S_2}\binom{n}{k}x^k(1-x)^{n-k} \le 2M\delta^{-2}\sum_{k\in S_2}\Big(x - \frac{k}{n}\Big)^2\binom{n}{k}x^k(1-x)^{n-k}$$
$$\le 2M\delta^{-2}\sum_{k=0}^n\Big(x - \frac{k}{n}\Big)^2\binom{n}{k}x^k(1-x)^{n-k} \le 2M\delta^{-2}\,\frac{1}{n}x(1-x) \le 2M\sqrt{n}\,\frac{1}{n}x(1-x) \le \frac{2M}{\sqrt{n}}$$

Thus $|B_n(x) - f(x)| \le \varepsilon + \frac{2M}{\sqrt{n}}$ for every $x \in [0,1]$, and $\max\{|B_n(x) - f(x)| : x \in [0,1]\} \to 0$ as $n \to \infty$.
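A sketch of Bernstein's construction in Python (the test function, degrees, and grid are arbitrary choices): the sup-error decreases, though only at the slow rate suggested by the proof.

```python
import math

def bernstein(f, n, x):
    # B_n(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)
    return sum(f(k/n) * math.comb(n, k) * x**k * (1-x)**(n-k) for k in range(n+1))

f = lambda x: abs(x - 0.5)              # continuous but not differentiable at 1/2
grid = [i/400 for i in range(401)]
for n in (10, 50, 200):
    print(n, max(abs(bernstein(f, n, x) - f(x)) for x in grid))   # sup-error decreases
```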

5.1 Landau's proof

This is similar in character to the techniques used in Fourier series.

Step 1: Given $f \in C([0,1])$, let $g(x) = f(x) - f(0) - [f(1)-f(0)]x$ and write $|g(x)| \le M$ for all $x$. Now $g \in C([0,1])$ and $g(0) = g(1) = 0$. Extend $g$ by setting $g \equiv 0$ outside $[0,1]$.

Step 2: Define the functions $K_n$ by
$$K_n(x) = c_n(1-x^2)^n, \qquad \int_{-1}^1 K_n(x)\,dx = 1$$
By Bernoulli's inequality, $(1-x^2)^n \ge 1 - nx^2$ whenever $x \in (-1,1)$. Now,
$$\frac{1}{c_n} = 2\int_0^1(1-x^2)^n\,dx \ge 2\int_0^{1/\sqrt{n}}(1-x^2)^n\,dx \ge 2\int_0^{1/\sqrt{n}}(1-nx^2)\,dx = \frac{4}{3\sqrt{n}}$$
so that $c_n \le \sqrt{n}$.

It follows that if 0 < δ < 1, then for |x| ∈ [δ, 1],

$$K_n(x) \le \sqrt{n}\,(1-\delta^2)^n$$

Thus $K_n(x) \to 0$ uniformly on $\delta \le |x| \le 1$ and hence, as $n \to \infty$,
$$\Big[\int_{-1}^{-\delta} + \int_{\delta}^{1}\Big]K_n(t)\,dt \to 0$$
Step 3: Now for $x \in [0,1]$, define the polynomial $P_n$ by
$$P_n(x) = \int_0^1 K_n(x-t)g(t)\,dt$$
If $\varepsilon > 0$, choose $\delta$ such that if $t \in [-\delta,\delta]$ and $x \in [0,1]$, then $|g(x+t) - g(x)| < \varepsilon$. This is possible since $g$ is continuous (and hence uniformly continuous) on $[0,1]$. Then
$$|P_n(x) - g(x)| = \Big|\int_{-1}^1 K_n(t)[g(x+t) - g(x)]\,dt\Big| \le 2M\Big[\int_{-1}^{-\delta} + \int_{\delta}^{1}\Big]K_n(t)\,dt + \varepsilon\int_{-\delta}^{\delta}K_n(t)\,dt \le 2M\Big[\int_{-1}^{-\delta} + \int_{\delta}^{1}\Big]K_n(t)\,dt + \varepsilon$$

Thus Pn(x) + f(0) + [f(1) − f(0)]x → f(x) uniformly on [0, 1]

6 Bernoulli Numbers

Observe that $\zeta(2) = \int_0^\infty \frac{t}{e^t - 1}\,dt$, indicating the integrand is a function of interest. Now write:

$$\frac{x}{e^x - 1} = \sum_{n=0}^\infty \frac{1}{n!}B_n x^n$$
It follows that
$$x = \Big[\sum_{n=0}^\infty\frac{1}{n!}B_n x^n\Big]\cdot\Big[\sum_{m=1}^\infty\frac{1}{m!}x^m\Big]$$
By comparing coefficients, we conclude that:

$$B_0 = 1, \quad B_1 = -\frac{1}{2}, \quad \sum_{k=0}^{n-1}\binom{n}{k}B_k = 0 \ \ \forall n \ge 2$$
Exercise: Show that $\frac{x}{e^x - 1} + \frac{1}{2}x$ is an even function and hence $B_{2n+1} = 0$ for $n \ge 1$.

However,
$$\zeta(2n) = \frac{(-1)^{n-1}(2\pi)^{2n}B_{2n}}{2(2n)!}$$
Note: Mathematica displays the Bernoulli numbers under the code BernoulliB[n]. As a point of historical interest, Lady Lovelace, who was Byron's daughter and a contemporary of Queen Victoria, is credited with the computation of $B_6$.
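The recursion $\sum_{k=0}^{n-1}\binom{n}{k}B_k = 0$ determines the Bernoulli numbers one at a time; a minimal sketch with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli(N):
    # B_0, ..., B_N from sum_{k=0}^{n-1} C(n,k) B_k = 0 for n >= 2, with B_0 = 1
    B = [Fraction(1)]
    for n in range(2, N + 2):
        B.append(-sum(comb(n, k) * B[k] for k in range(n - 1)) / n)
    return B

print(bernoulli(8))   # B_0..B_8 = 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
```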

6.1 Extension to Bernoulli polynomials

Given $t \in \mathbb{R}$, write
$$\frac{xe^{tx}}{e^x - 1} = \sum_{n=0}^\infty \frac{b_n(t)}{n!}x^n$$
Clearly, $b_n(0) = B_n$ and $b_0(t) = 1$, and for $n \ge 1$ (by comparing coefficients),

$$\frac{b_n(t)}{n!} = \sum_{k=0}^n \frac{B_{n-k}}{(n-k)!\,k!}t^k$$

It is now easy to check that the Bernoulli polynomials $\{b_n(t)\}$ satisfy the following properties for $n \ge 1$:
$$b_n'(t) = nb_{n-1}(t), \qquad \int_0^1 b_n(t)\,dt = 0$$
These polynomials are of great interest in Fourier analysis and we will revisit them later.

6.2 Sums of positive powers

The following result is due to Jacob Bernoulli. For $p = 1, 2, \ldots$,

$$\sum_{k=1}^n k^p = \frac{1}{p+1}\sum_{j=1}^{p+1}\binom{p+1}{j}B_{p-j+1}(n+1)^j$$

Proof: For $x \ne 0$,

$$\sum_{k=0}^n e^{kx} = \sum_{k=0}^n\sum_{p=0}^\infty\frac{1}{p!}(kx)^p = \sum_{p=0}^\infty\Big(\sum_{k=0}^n k^p\Big)\frac{x^p}{p!}$$

However, the l.h.s. reduces to:

$$(e^{(n+1)x} - 1)/(e^x - 1) = [x/(e^x - 1)]\cdot[(e^{(n+1)x} - 1)/x]$$

Now,

$$\sum_{p=0}^\infty\Big(\sum_{k=0}^n k^p\Big)\frac{x^{p+1}}{p!} = [x/(e^x - 1)]\cdot[e^{(n+1)x} - 1] = \Big[\sum_{i=0}^\infty\frac{B_i}{i!}x^i\Big]\cdot\Big[\sum_{j=1}^\infty\frac{(n+1)^j}{j!}x^j\Big] = \sum_{p=0}^\infty a_p x^{p+1}$$

Collecting coefficients we see that:

$$a_p = \sum_{j=1}^{p+1}\frac{B_{p-j+1}}{(p-j+1)!\,j!}(n+1)^j$$

Equating the coefficients of $x^{p+1}$, we have:

$$\sum_{k=0}^n k^p = p!\,a_p = \frac{1}{p+1}\,[(p+1)!]\,a_p = \frac{1}{p+1}\sum_{j=1}^{p+1}\binom{p+1}{j}B_{p-j+1}(n+1)^j$$
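A quick check of the formula (a minimal sketch; the values of $p$ and $n$ are arbitrary small choices, and the Bernoulli numbers are regenerated from the recursion with $B_1 = -\frac{1}{2}$):

```python
from fractions import Fraction
from math import comb

def bernoulli(N):
    B = [Fraction(1)]                       # B_0 = 1, then the recursion of Section 6
    for n in range(2, N + 2):
        B.append(-sum(comb(n, k) * B[k] for k in range(n - 1)) / n)
    return B

def faulhaber(p, n):
    # (1/(p+1)) sum_{j=1}^{p+1} C(p+1,j) B_{p-j+1} (n+1)^j
    B = bernoulli(p)
    return sum(comb(p+1, j) * B[p-j+1] * (n+1)**j for j in range(1, p+2)) / (p+1)

for p in (1, 2, 3, 4):
    print(p, faulhaber(p, 10), sum(k**p for k in range(1, 11)))   # the two agree
```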

7 Fourier Series

As the preceding sections suggest, the study of functions was more or less restricted to $C^\infty$ functions. Polynomials, and more generally analytic functions (including transcendental functions like the exponential and logarithmic functions), have this property. Fourier's notion that a discontinuous function could be represented as a sum of sines and cosines was inconceivable and received with skepticism. Even the arguments presented in the "Analytic Theory of Heat" (1822) were not rigorous. But the paradoxes contained in it led Cauchy and others to make precise many concepts contained therein. We make our task easier by restricting ourselves to functions which are piecewise-$C^1$ on $[-\pi,\pi]$ and assume that $f(-\pi) = f(\pi)$. The Fourier series of $f$ is defined by:

$$F(\theta) = \frac{a_0}{2} + \sum_{k=1}^\infty a_k\cos(k\theta) + \sum_{k=1}^\infty b_k\sin(k\theta)$$
where
$$a_k = \frac{1}{\pi}\int_{-\pi}^\pi f(\theta)\cos(k\theta)\,d\theta; \qquad b_k = \frac{1}{\pi}\int_{-\pi}^\pi f(\theta)\sin(k\theta)\,d\theta$$
Write

$$s_n(f,\theta) = \frac{a_0}{2} + \sum_{k=1}^n a_k\cos(k\theta) + \sum_{k=1}^n b_k\sin(k\theta) \quad\text{and}\quad \sigma_N(f,\theta) = \frac{1}{N}\sum_{n=0}^{N-1}s_n(f,\theta)$$

This raises the question: When does $F(\theta)$ converge? When it does, how do $f(\theta)$ and $F(\theta)$ relate? We collect and state basic results.

Theorem 7.0.1. Suppose $f$ is piecewise-$C^1$ on $[-\pi,\pi]$ and assume that $f(-\pi) = f(\pi)$.

1. If $f$ is continuous at $\theta$, then $s_n(f,\theta) \to f(\theta)$.

2. If $f$ is continuous at every point of $[-\pi,\pi]$, then the convergence is uniform.

3. The sequence $\frac{a_0}{2}\theta + \sum_{k=1}^n \frac{a_k}{k}\sin(k\theta) + \sum_{k=1}^n \frac{b_k}{k}\big(1 - \cos(k\theta)\big)$ converges to $\int_0^\theta f(t)\,dt$ uniformly on $[-\pi,\pi]$.

Caution: Mere continuity of $f$ does not imply $s_n(f,\theta) \to f(\theta)$ for every $\theta \in [-\pi,\pi]$. There exists a function $f \in C([0,2\pi])$ whose Fourier series diverges at the point $0$. This example is due to du Bois-Reymond.

Corollary 7.0.2. If p is a polynomial, then for every t ∈ (−π, π),

$$p(t) = \sum_{k=0}^\infty\big[a_k\cos(kt) + b_k\sin(kt)\big]$$

7.1 Examples

1. Let $f(x) = 1$ on $(0,\pi)$ and $f(x) = -1$ on $(-\pi,0)$ (the square-wave). Then,

$$f(x) = \frac{4}{\pi}\sum_{k=0}^\infty\frac{1}{2k+1}\sin(2k+1)x$$

For what values of $x$ does the derived series converge? Graph $f(x)$, $s_{10}(x)$ and $s_{20}(x)$.

2. Let f(x) = |x|, −π ≤ x ≤ π.

$$f(x) = \frac{\pi}{2} - \frac{4}{\pi}\sum_{k=0}^\infty\frac{1}{(2k+1)^2}\cos(2k+1)x$$

Graph f(x) and s5(x).

3. Let f(x) = x, −π < x < π. Then

$$x = 2\sum_{n=1}^\infty(-1)^{n+1}\frac{\sin(nx)}{n}$$

Derive Leibniz’s series for π/4.

4. Integrate the previous series term-by-term to show that for x ∈ [−π, π],

$$x^2 = \frac{\pi^2}{3} + 4\sum_{n=1}^\infty(-1)^n\frac{\cos(nx)}{n^2}$$

(a) Derive the formula for ζ(2). (b) Graph f and s5.
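A numerical sketch of Example 4 (the truncation is an arbitrary choice): the partial sums reproduce $x^2$, and setting $x = \pi$ recovers $\zeta(2) = \pi^2/6$.

```python
import math

def s(x, N):
    # partial sum of pi^2/3 + 4 sum_{n>=1} (-1)^n cos(nx)/n^2
    return math.pi**2/3 + 4*sum((-1)**n * math.cos(n*x)/n**2 for n in range(1, N+1))

for x in (0.0, 1.0, math.pi):
    print(x, s(x, 2000), x**2)

# at x = pi:  pi^2 = pi^2/3 + 4*zeta(2), i.e. zeta(2) = pi^2/6
print(sum(1/n**2 for n in range(1, 2001)), math.pi**2/6)
```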

7.2 Euler's Theorem

We start by finding Fourier series for the Bernoulli polynomials. Let $b_1(x) = x - 1/2$ for $x \in [0,1]$ and observe that $\int_0^1 b_1(t)\,dt = 0$. Then
$$b_1(x) = \sum_{k=1}^\infty \alpha_k\cos(2\pi kx) + \sum_{k=1}^\infty \beta_k\sin(2\pi kx), \quad 0 < x < 1$$
since $\int_0^1 b_1(t)\,dt = 0$ implies $\alpha_0 = 0$. A simple computation shows that for $k \ne 0$,
$$\alpha_k = 2\int_0^1\Big(t - \frac{1}{2}\Big)\cos(2\pi kt)\,dt = 0$$

$$\beta_k = 2\int_0^1 b_1(t)\sin(2\pi kt)\,dt = 2\int_0^1\Big(t - \frac{1}{2}\Big)\sin(2\pi kt)\,dt = -\frac{1}{\pi k}\Big[\Big(t - \frac{1}{2}\Big)\cos(2\pi kt)\Big]_0^1 + \frac{1}{\pi k}\int_0^1\cos(2\pi kt)\,dt = -\frac{1}{\pi k}$$
Thus,
$$b_1(x) = -\frac{1}{\pi}\sum_{n=1}^\infty\frac{\sin(2\pi nx)}{n}, \quad 0 < x < 1$$
Integrating term-by-term results in a series which converges uniformly on $[0,1]$ (by the M-test).

$$\frac{b_{2n+1}(x)}{(2n+1)!} = (-1)^{n+1}\frac{2}{(2\pi)^{2n+1}}\sum_{m=1}^\infty\frac{\sin(2\pi mx)}{m^{2n+1}}$$

Conclude that B2n+1 = 0 if n ≥ 1. Of greater importance is the next case.

$$\frac{b_{2k}(x)}{(2k)!} = (-1)^{k-1}\frac{2}{(2\pi)^{2k}}\sum_{n=1}^\infty\frac{\cos(2\pi nx)}{n^{2k}}$$
To derive Euler's theorem, plug in $x = 0$. Thus,

$$\frac{B_{2k}}{(2k)!} = (-1)^{k-1}\frac{2}{(2\pi)^{2k}}\sum_{n=1}^\infty\frac{1}{n^{2k}}$$
Consequently,
$$\zeta(2k) = (-1)^{k-1}\frac{(2\pi)^{2k}}{2}\frac{B_{2k}}{(2k)!}$$
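A sketch checking Euler's formula for $k = 1, 2, 3$ against truncated sums (the values $B_2 = \frac{1}{6}$, $B_4 = -\frac{1}{30}$, $B_6 = \frac{1}{42}$ come from the recursion of Section 6; the cutoff is arbitrary):

```python
import math

B = {2: 1/6, 4: -1/30, 6: 1/42}
for k in (1, 2, 3):
    euler  = (-1)**(k-1) * (2*math.pi)**(2*k) / 2 * B[2*k] / math.factorial(2*k)
    direct = sum(1.0/n**(2*k) for n in range(1, 100001))
    print(2*k, euler, direct)   # zeta(2) = pi^2/6, zeta(4) = pi^4/90, zeta(6) = pi^6/945
```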

7.3 Orthogonality and Plancherel's Theorem

At heart is the theory of orthogonal vectors in a vector space with a dot-product and the fundamental identities governing it. Recall that:

Definition 7.3.1. Two vectors $w_1$ and $w_2$ are said to be orthogonal if their dot-product is zero, and the norm of a vector $w$ is the square-root of the dot-product of $w$ with itself.

It is known that if $w_1, \ldots, w_n$ are pairwise orthogonal, i.e. $w_i \cdot w_j = 0$ whenever $i \ne j$, then their linear combination satisfies

$$\Big\|\sum_{k=1}^n a_k w_k\Big\|^2 = \sum|a_k|^2\,\|w_k\|^2$$

An example of an infinite-dimensional inner-product space is the space of sequences $\{\alpha = \{a_n\} : \sum|a_n|^2 < \infty\}$, and the inner-product (dot-product) of $\alpha$ and $\beta$ is defined as

$$\langle\alpha,\beta\rangle = \sum_{n=1}^\infty a_n b_n$$


Now consider the vector-space $V$ of piecewise-$C^1$ functions on $[-\pi,\pi]$ which are periodic of period $2\pi$. Next, if $f, g \in V$, we define the inner-product as follows:
$$\langle f,g\rangle = \frac{1}{\pi}\int_{-\pi}^\pi fg\,dx \quad\text{and}\quad \|f\|^2 = \frac{1}{\pi}\int_{-\pi}^\pi |f|^2\,dx$$
Using basic trigonometric identities we prove the following: $\{\cos(nx), \sin(mx) : n, m \ge 0\}$ is an orthogonal family of vectors in $V$ and
$$\int_{-\pi}^\pi\cos^2(nx)\,dx = \int_{-\pi}^\pi\sin^2(nx)\,dx = \pi, \quad n \ge 1$$

So, $\Big\{\frac{1}{\sqrt{2\pi}}, \frac{\cos(kx)}{\sqrt{\pi}}, \frac{\sin(kx)}{\sqrt{\pi}}\Big\}$ is an orthogonal family in $V$, and $s_n(f,t)$ is the finite-dimensional projection of $f$ onto the span of $\{\cos(kx), \sin(kx) : 0 \le k \le n\}$. An extension of the finite-dimensional identity is known as Plancherel's Theorem:

$$\frac{1}{\pi}\int_{-\pi}^\pi|f|^2\,dx = \frac{a_0^2}{2} + \sum_{k=1}^\infty\big(|a_k|^2 + |b_k|^2\big)$$

If $x \in [-\pi,\pi]$ and $f$ is periodic of period $2\pi$, and

$$f(x) = a_0/2 + \sum_{k=1}^\infty a_k\cos(kx) + b_k\sin(kx), \quad \text{a.e.}$$
then orthogonality implies that:
$$\int_{-\pi}^\pi f(x)\cos(kx)\,dx = \pi a_k, \qquad \int_{-\pi}^\pi f(x)\sin(kx)\,dx = \pi b_k$$
If
$$f(x) = \sum_{k=-\infty}^\infty c_k e^{ikx}$$
then orthogonality of $e^{ikx}$, $-\pi \le x \le \pi$, implies that

$$\int_{-\pi}^\pi f(x)e^{-ikx}\,dx = 2\pi c_k$$
Plancherel's theorem now reads,

$$\frac{1}{2\pi}\int_{-\pi}^\pi|f(x)|^2\,dx = \sum_{k=-\infty}^\infty|c_k|^2$$

Thinking of $e^{it}$ as the basic independent variable on the unit circle $\mathbb{T}$, by De Moivre's theorem,

$$T_n(t) = \sum_{k=-n}^n c_k e^{ikt}$$

is called a trigonometric polynomial. In real form, assuming $b_0 = 0$,

$$T_n(t) = \sum_{k=0}^n a_k\cos(kt) + b_k\sin(kt)$$
Every Fourier series of a piecewise-continuous function is a trigonometric series. But the converse is false, even if the class of piecewise-continuous functions is expanded considerably.

Example:

$\sum_{n=2}^\infty \sin(nt)/\log n$ cannot be the Fourier series of a (Lebesgue-)integrable function. But $\sum_{n=2}^\infty \cos(nt)/\log n$ is.

Exercise: Use Plancherel's theorem to show that if $f$ is piecewise-$C^1$, then as $n \to \infty$,
$$\int_{-\pi}^\pi|f(x) - s_n(x)|^2\,dx \to 0$$
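A sketch of Plancherel's identity for $f(x) = x$ (Example 3 of Section 7.1, with $a_k = 0$ and $b_n = 2(-1)^{n+1}/n$; the cutoff is arbitrary):

```python
import math

# (1/pi) * integral of x^2 over [-pi, pi] equals 2*pi^2/3; it should match sum b_n^2
lhs = (1/math.pi) * (2*math.pi**3/3)
rhs = sum((2*(-1)**(n+1)/n)**2 for n in range(1, 100001))
print(lhs, rhs)    # rhs -> 4*zeta(2) = 2*pi^2/3
```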

7.4 Dirichlet and Fejer kernels

The function known as the Dirichlet kernel is defined as:
$$D_n(x) = \frac{\sin\big(n + \frac{1}{2}\big)x}{2\sin\big(\frac{x}{2}\big)}$$
Then,

• $D_n(x) = \frac{1}{2} + \sum_{k=1}^n\cos(kx) \Rightarrow \int_{-\pi}^\pi D_n(t)\,dt = \pi$.

• $s_n(x) = \frac{1}{\pi}\int_{-\pi}^\pi D_n(t)f(x+t)\,dt$.

Recall that
$$\sigma_n(x) = \frac{1}{n+1}\sum_{k=0}^n s_k(x)$$
The function known as the Fejer kernel is defined as:
$$F_n(x) = \frac{1}{n+1}\,\frac{\sin^2\big(\frac{(n+1)x}{2}\big)}{2\sin^2\big(\frac{x}{2}\big)}$$
Then,

• $F_n(x) = \frac{1}{n+1}\sum_{k=0}^n D_k(x) \Rightarrow \int_{-\pi}^\pi F_n(t)\,dt = \pi$.

• $\sigma_n(x) = \frac{1}{\pi}\int_{-\pi}^\pi F_n(t)f(x+t)\,dt$, if $f$ is periodic of period $2\pi$.

• If $\delta \in (0,\pi)$, then $\int_{-\pi}^{-\delta}F_n(t)\,dt + \int_{\delta}^{\pi}F_n(t)\,dt \to 0$ as $n \to \infty$.

We can now prove Fejer's theorem, which is a trigonometric version of Weierstrass's theorem.

Theorem 7.4.1. If $f$ is continuous on $[-\pi,\pi]$ and periodic of period $2\pi$, then $\sigma_n(x) \to f(x)$ uniformly for $x \in [-\pi,\pi]$.

Proof. By hypothesis, $f$ is uniformly continuous and bounded. Let $M = \max\{|f(x)| : -\pi \le x \le \pi\}$. Also, given $\varepsilon > 0$, we can choose $\delta \in (0,\pi)$ so that for all $t \in (-\delta,\delta)$ and $x \in [-\pi,\pi]$,
$$|f(x+t) - f(x)| < \varepsilon$$
Now choose $N$ (depending on $\delta$) such that for $n \ge N$,
$$\int_{-\pi}^{-\delta}F_n(t)\,dt + \int_{\delta}^{\pi}F_n(t)\,dt < \varepsilon$$
Now for $x \in [-\pi,\pi]$ and $n \ge N$,
$$|\sigma_n(x) - f(x)| \le \frac{1}{\pi}\int_{-\pi}^\pi F_n(t)|f(x+t) - f(x)|\,dt \le \frac{1}{\pi}\Big[\int_{-\pi}^{-\delta} + \int_{\delta}^{\pi}\Big]F_n(t)|f(x+t) - f(x)|\,dt + \frac{1}{\pi}\int_{-\delta}^{\delta}F_n(t)|f(x+t) - f(x)|\,dt$$
$$\le \frac{2M\varepsilon}{\pi} + \frac{\varepsilon}{\pi}\int_{-\pi}^{\pi}F_n(t)\,dt = \Big(\frac{2M}{\pi} + 1\Big)\varepsilon$$
Since $\varepsilon$ was arbitrary, $\sigma_n \to f$ uniformly.
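A numerical sketch of Fejer's theorem for $f(x) = |x|$ (a minimal sketch; it uses the standard fact that $\sigma_n$ damps the $k$-th Fourier coefficient by the factor $1 - \frac{k}{n+1}$, together with the coefficients $a_0 = \pi$, $a_k = 2((-1)^k - 1)/(\pi k^2)$ of Example 2 in Section 7.1):

```python
import math

def sigma(x, n):
    # sigma_n(x) = a_0/2 + sum_{k=1}^{n} (1 - k/(n+1)) a_k cos(kx) for f(x) = |x|
    total = math.pi/2
    for k in range(1, n + 1):
        a_k = 2*((-1)**k - 1)/(math.pi*k**2)
        total += (1 - k/(n + 1)) * a_k * math.cos(k*x)
    return total

grid = [-math.pi + i*2*math.pi/400 for i in range(401)]
for n in (5, 20, 80):
    print(n, max(abs(sigma(x, n) - abs(x)) for x in grid))   # sup-error decreases with n
```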

7.5 Applications

1. If $g$ is piecewise-continuous on $[-\pi,\pi]$, then Plancherel's theorem implies that its Fourier coefficients are square-summable. In particular, we have the Riemann-Lebesgue lemma:

• As $n \to \infty$, $\int_{-\pi}^\pi g(t)\cos(nt)\,dt \to 0$ and $\int_{-\pi}^\pi g(t)\sin(nt)\,dt \to 0$.

• Consequently, as $n \to \infty$, $\int_{-\pi}^\pi g(t)\sin\big((n + \frac{1}{2})t\big)\,dt \to 0$.

2. Use the Dirichlet kernel to show that:
$$\int_0^\infty\frac{\sin x}{x}\,dx = \frac{\pi}{2}$$
Solution: Write
$$\int_0^\pi D_n(x)\,dx = \int_0^\pi\sin\Big(\big(n + \tfrac{1}{2}\big)x\Big)\Big(\frac{1}{2\sin(x/2)} - \frac{1}{x}\Big)\,dx + \int_0^\pi\frac{\sin\big((n + \frac{1}{2})x\big)}{x}\,dx$$
Now the first integral goes to $0$ by the Riemann-Lebesgue lemma (the factor in parentheses extends continuously to $[0,\pi]$), and $\int_0^\pi D_n(x)\,dx = \pi/2$ since $D_n$ is even. Hence
$$\frac{\pi}{2} = \lim_{n\to\infty}\int_0^\pi\frac{\sin\big((n + \frac{1}{2})x\big)}{x}\,dx$$

Make the change of variable $t = (n + \frac{1}{2})x$. It follows that

$$\int_0^\infty\frac{\sin x}{x}\,dx = \lim_{n\to\infty}\int_0^{(n+\frac{1}{2})\pi}\frac{\sin t}{t}\,dt = \frac{\pi}{2}$$

3. However, as $n \to \infty$,
$$\int_{-\pi}^\pi|D_n(t)|\,dt \to \infty$$
Observe that for $t \in [0,\pi/2]$, $\sin t \le t \le (\pi/2)\sin t$. Hence

Substituting $t = 2s$ gives $\int_{-\pi}^\pi|D_n(t)|\,dt = 2\int_0^{\pi/2}\frac{|\sin(2n+1)s|}{\sin s}\,ds$, and
$$\int_0^{\pi/2}\frac{|\sin(2n+1)t|}{|\sin t|}\,dt \ge \int_0^{\pi/2}\frac{|\sin(2n+1)t|}{|t|}\,dt \ge \sum_{k=0}^{2n}\int_{\frac{k\pi}{2(2n+1)}}^{\frac{(k+1)\pi}{2(2n+1)}}\frac{|\sin(2n+1)t|}{|t|}\,dt$$
$$\ge \sum_{k=0}^{2n}\frac{2(2n+1)}{(k+1)\pi}\int_{\frac{k\pi}{2(2n+1)}}^{\frac{(k+1)\pi}{2(2n+1)}}|\sin(2n+1)t|\,dt \ge c\sum_{k=0}^{2n}\frac{1}{k+1},$$
which diverges like $\log n$.
