
Lecture Notes in Harmonic Analysis

Lectures by Dr Charles Moore

Throughout these notes, □ signifies the end of a proof, N signifies the end of an example, and ⋆ marks the end of an exercise.

Table of Contents

Lecture 1 Fourier Analysis
  1.1 Preliminaries and definitions
  1.2 Elementary facts

Lecture 2 L2 Results
  2.1 Convolutions
  2.2 L2 results
  2.3 Cesàro mean

Lecture 3 Fejér Kernels
  3.1 More about Cesàro means

Lecture 4 Dirichlet Kernel
  4.1 The Fejér kernels are nice
  4.2 Dirichlet kernels
  4.3 Pointwise convergence of S_N f(x)

Lecture 5 The Principle of Uniform Boundedness
  5.1 Some functional analysis
  5.2 The Baire category theorem
  5.3 The Principle of uniform boundedness
  5.4 A Tauberian theorem

Lecture 6 Hardy's Tauberian Theorem
  6.1 Proof of Hardy's Tauberian theorem

Lecture 7 A Covering Lemma
  7.1 The Principle of localisation
  7.2 Almost everywhere convergence

Lecture 8 Almost Everywhere Convergence
  8.1 Weak type results

Notes by Jakob Streipel. Last updated April 22, 2020.


Lecture 9 More Almost Everywhere Convergence
  9.1 Generalising a theme

Lecture 10 Herglotz's Theorem
  10.1 Making Fourier coefficients out of a sequence

Lecture 11 Harmonic Functions
  11.1 Finishing Herglotz's theorem
  11.2 Harmonic functions and kernels

Lecture 12 Harmonic Functions, continued
  12.1 Harmonic conjugates and such
  12.2 Boundary values

Lecture 13 The Dirichlet Problem
  13.1 The Classical Dirichlet problem

Lecture 14 Converse Problem
  14.1 Converse to this problem

Index

Lecture 1 Fourier Analysis

1.1 Preliminaries and definitions

Definition 1.1.1 (Fourier coefficients, Fourier series). Let f ∈ L¹([−π, π)), i.e.,
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi} |f(x)|\,dx < \infty. \]
For n ∈ Z, we define the nth Fourier coefficient of f to be
\[ \hat f(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)e^{-int}\,dt. \]
The formal sum
\[ \sum_{n=-\infty}^{\infty} \hat f(n)e^{int} \]
is called the Fourier series of f. When we write
\[ f \sim \sum_{n=-\infty}^{\infty} \hat f(n)e^{int} \]
we mean nothing more or less than that the coefficients in the series are the Fourier coefficients of f; we do not imply any sort of convergence (hence formal sum). We will be interested, amongst other things, in
\[ S_n f(x) = \sum_{k=-n}^{n} \hat f(k)e^{ikx}, \]
called the nth partial sum of f. We want to study when S_n f → f, and in what sense this converges. For instance,

(i) does Snf(x) → f(x) for all x, i.e., pointwise?

(ii) does S_nf(x) → f(x) almost everywhere, i.e., does there exist a set E with |E| = 0 such that S_nf(x) → f(x) for all x ∉ E?

(iii) does Snf → f uniformly? That is, given ε > 0 there exists some N such that for n ≥ N, we have |Snf(x) − f(x)| < ε for all x ∈ [−π, π).

(iv) does Snf → f almost uniformly? By this we mean, given ε > 0, there exists a set E ⊂ [−π, π) such that |E| < ε and Snf(x) → f(x) uniformly on [−π, π) \ E.

(v) does S_nf → f in L^p norm? I.e.,
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi} |S_nf(x) - f(x)|^p\,dx \to 0 \]
as n → ∞?

Date: January 21st, 2020.

(vi) does S_nf → f in measure? I.e., given ε > 0, does there exist some N such that for n ≥ N,
\[ |\{\, x \in [-\pi,\pi) : |S_nf(x) - f(x)| > \varepsilon \,\}| < \varepsilon? \]

All by way of saying: there are many modes of convergence, and one can investigate any and all of them (though we likely will not). Other questions one can ask of these objects are, for instance:

(i) Given f ∈ L¹, what can we say about the sequence (f̂(n))? That is, what properties of f lead to properties of (f̂(n))? For instance, if f > 0, what about f̂(n)? What if f is continuous, or twice differentiable?

(ii) Conversely, given a sequence (a_n), does there exist a function f with f̂(n) = a_n?

We can also ask these same questions in more general settings. For instance:

Definition 1.1.2 (Fourier coefficients of a measure). Suppose µ is a positive, finite measure on [−π, π). Define
\[ \hat\mu(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-int}\,d\mu(t), \]
the Fourier coefficients of the measure µ. What we have above is a special case of this, since if f is a function in L¹, then f(t) dt is a measure, namely by defining
\[ \mu(E) = \int_E f(t)\,dt. \]
It is possible to abstract this much further, too. Let G be a locally compact abelian group, often abbreviated LCA group. (In other words, G is a topological group, i.e., a group endowed with a topology in which the group operations G × G → G, (a, b) ↦ a + b, and G → G, a ↦ −a, are continuous, and in particular this topology is locally compact.) Then there exists a (unique up to scalar) measure µ on G such that µ(E + x) = µ(E) for all Borel sets E and x ∈ G (i.e., the measure is translation invariant). This measure is called the Haar measure on G.

Definition 1.1.3 (Character). A mapping γ : G → C is called a character if |γ(x)| = 1 for all x ∈ G and γ(x + y) = γ(x)γ(y) (i.e., γ is a homomorphism).

Example 1.1.4. On [−π, π), dx/2π is a Haar measure, and the (continuous) characters are γ(x) = e^{inx}. N

Definition 1.1.5 (Dual group). Let G be a locally compact abelian group. We define the dual group Γ of G to be the group of all continuous characters on G. Given f ∈ L¹(G) and γ ∈ Γ, set
\[ \hat f(\gamma) = \int_G f(x)\gamma(-x)\,d\mu(x). \]
So f̂ is a function on Γ.

Example 1.1.6. In the case where G = [−π, π), we therefore have Γ ≅ Z, since we can identify each e^{inx} (uniquely) by n. N

1.2 Elementary facts

Proposition 1.2.1 (Riemann–Lebesgue lemma). If f ∈ L¹([−π, π)), then |f̂(n)| → 0 as n → ±∞.

Proof. We prove this mostly because it demonstrates a useful strategy: prove something for characteristic functions of intervals; show that it works also for linear combinations of such (i.e., step functions), and use the fact that step functions are (hopefully) dense in whatever space we are concerned with. In particular: let f(x) = χ(a,b)(x). Then

\[ \hat f(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \chi_{(a,b)}(x)e^{-inx}\,dx = \frac{1}{2\pi}\int_a^b e^{-inx}\,dx = \frac{1}{2\pi}\Bigl[\frac{e^{-inx}}{-in}\Bigr]_a^b = \frac{1}{2\pi}\,\frac{e^{-inb}-e^{-ina}}{-in}, \]
which, since the numerator is bounded and the denominator grows, goes to 0 as n → ±∞. Hence the proposition is true for characteristic functions of intervals, and since every step in this calculation is linear, it is true for linear combinations thereof, meaning step functions. Since step functions are dense in L¹, if f ∈ L¹([−π, π)) there exists for any ε > 0 some step function s such that ‖f − s‖₁ < ε. Since s is a step function and the proposition holds for those, there exists some N such that for |n| ≥ N, |ŝ(n)| < ε. Hence for |n| ≥ N,

\[ |\hat f(n)| \le |\hat f(n)-\hat s(n)| + |\hat s(n)| < \Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi}(f(x)-s(x))e^{-inx}\,dx\Bigr| + \varepsilon \le \frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x)-s(x)|\,dx + \varepsilon = \|f-s\|_1 + \varepsilon < \varepsilon + \varepsilon = 2\varepsilon. \]
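To make the decay concrete, here is a small numerical sketch (Python, not part of the lecture) approximating the Fourier coefficients of a characteristic function; the endpoints a, b and the grid size are arbitrary choices.

```python
import numpy as np

# Approximate f_hat(n) = (1/2pi) int_{-pi}^{pi} f(t) e^{-int} dt for f = chi_(a,b)
# by a Riemann sum and watch |f_hat(n)| decay, as the Riemann-Lebesgue lemma predicts.
M = 20000
dt = 2 * np.pi / M
t = -np.pi + np.arange(M) * dt
a, b = -1.0, 2.0
f = ((t > a) & (t < b)).astype(float)     # characteristic function of (a, b)

def coeff(n):
    return np.sum(f * np.exp(-1j * n * t)) * dt / (2 * np.pi)

for n in [1, 10, 100, 1000]:
    print(n, abs(coeff(n)))
# The values shrink roughly like 1/n; indeed the exact coefficient here is
# (e^{-inb} - e^{-ina}) / (-2*pi*i*n), whose modulus is at most 1/(pi*n).
```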

Consider f ∈ L¹([−π, π)). We can extend this periodically to a function f* on R. That is, for x ∈ R, there exists some n ∈ Z such that x − 2nπ ∈ [−π, π), and so we define f*(x) = f(x − 2πn). Generally speaking, we will abuse notation and just call f* by f. In particular, this means that

\[ \frac{1}{2\pi}\int_{c-\pi}^{c+\pi} f(t)\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\,dt \]
for any c, since a function thus extended is 2π-periodic. This immediately gives us the following:

Proposition 1.2.2. Let f ∈ L1([−π, π)).

(i) If y ∈ R and g(x) = f(x − y), then gˆ(n) = e−iynfˆ(n).

(ii) If m ∈ Z and g(x) = e^{imx} f(x), then ĝ(n) = f̂(n − m).

Proof. Both of these are direct computations.

Definition 1.2.3 (Convolution). Let f, g ∈ L¹([−π, π)). Define the convolution of f and g by
\[ f * g(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-y)g(y)\,dy. \]

Proposition 1.2.4. If f, g ∈ L¹([−π, π)), then \(\widehat{f*g}(n) = \hat f(n)\hat g(n)\).

Lecture 2 L2 Results

First let us remark that the Riemann–Lebesgue lemma is not true for measures: consider δ₀ on [−π, π), i.e., δ₀(E) = 1 if 0 ∈ E, else δ₀(E) = 0. Then
\[ \hat\delta_0(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-int}\,d\delta_0(t) = \frac{1}{2\pi} \]
uniformly for all n, which clearly does not go to 0.

2.1 Convolutions

Let us prove Proposition 1.2.4 from last time.

Proof. It is a direct computation:
\[ \widehat{f*g}(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f*g(t)e^{-int}\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{2\pi}\int_{-\pi}^{\pi} f(t-y)g(y)\,dy\,e^{-int}\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{2\pi}\int_{-\pi}^{\pi} f(t-y)g(y)e^{-in(t-y)}e^{-iny}\,dy\,dt. \]
Switching the order of integration (everything is finite, so no worries) we get
\[ \widehat{f*g}(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} g(y)e^{-iny}\,\frac{1}{2\pi}\int_{-\pi}^{\pi} f(t-y)e^{-in(t-y)}\,dt\,dy = \hat f(n)\,\frac{1}{2\pi}\int_{-\pi}^{\pi} g(y)e^{-iny}\,dy = \hat f(n)\hat g(n). \]

Proposition 2.1.1 (Young's inequality). Let 1 ≤ p ≤ ∞. If f ∈ L^p([−π, π)) and g ∈ L¹([−π, π)), then ‖f ∗ g‖_p ≤ ‖f‖_p ‖g‖₁.

Proof. First note that if ‖g‖₁ = 0, then both sides of the inequality are trivially 0, so let us assume ‖g‖₁ ≠ 0. First, let us take 1 ≤ p < ∞. Computing, we have
\[ \|f*g\|_p^p = \frac{1}{2\pi}\int_{-\pi}^{\pi}|f*g(x)|^p\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-y)g(y)\,dy\Bigr|^p dx \le \frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl(\int_{-\pi}^{\pi}|f(x-y)|\,\frac{|g(y)|\,dy}{2\pi\|g\|_1}\Bigr)^p\|g\|_1^p\,dx. \]
Notice how \(\frac{|g(y)|\,dy}{2\pi\|g\|_1}\) is a measure of total mass 1, so by Jensen's inequality (saying, for µ(Ω) = 1 and φ convex, \(\varphi(\int_\Omega f\,d\mu) \le \int_\Omega \varphi(f)\,d\mu\)), since φ(s) = s^p is convex for 1 ≤ p < ∞, we have
\[ \|f*g\|_p^p \le \frac{1}{2\pi}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}|f(x-y)|^p\,\frac{|g(y)|\,dy}{2\pi\|g\|_1}\,\|g\|_1^p\,dx = \int_{-\pi}^{\pi}\Bigl(\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x-y)|^p\,dx\Bigr)\frac{|g(y)|}{2\pi\|g\|_1}\,dy\,\|g\|_1^p = \|f\|_p^p\|g\|_1^p. \]
Taking pth roots we are done. This leaves p = ∞. We have |f(x)| ≤ ‖f‖_∞ for almost every x, so
\[ |f*g(x)| = \Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-y)g(y)\,dy\Bigr| \le \frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x-y)||g(y)|\,dy \le \|f\|_\infty\|g\|_1. \]

Date: January 23rd, 2020.

Proposition 2.1.2. If f ∈ L¹([−π, π)) and f′ ∈ L¹([−π, π)), then
\[ \widehat{f'}(n) = in\hat f(n). \]

Proof. Again straightforward computation, this time with some integration by parts thrown in:

\[ \widehat{f'}(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f'(x)e^{-inx}\,dx = \frac{1}{2\pi}\Bigl[f(x)e^{-inx}\Bigr]_{-\pi}^{\pi} - \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)(-in)e^{-inx}\,dx = 0 + in\,\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx = in\hat f(n), \]
where in the middle step we have used the periodic extension of f to see that f(−π) = f(π).
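As a sanity check, here is a small numerical sketch (Python, not from the lecture) verifying the convolution identity of Proposition 1.2.4 on a discretised circle; the particular test functions are arbitrary.

```python
import numpy as np

# Sample 2pi-periodic functions on a uniform grid (by periodicity, integrating over
# [0, 2pi) gives the same Fourier coefficients as over [-pi, pi)), approximate
#   f*g(x)   = (1/2pi) int f(x - y) g(y) dy,
#   f_hat(n) = (1/2pi) int f(t) e^{-int} dt,
# by Riemann sums, and compare (f*g)_hat(n) with f_hat(n) g_hat(n).
M = 512
dt = 2 * np.pi / M
t = np.arange(M) * dt

f = np.exp(np.cos(t))           # arbitrary smooth periodic test function
g = np.sign(np.sin(3 * t))      # a step-like periodic test function

def coeff(h, n):
    return np.sum(h * np.exp(-1j * n * t)) * dt / (2 * np.pi)

def conv(f, g):
    # (f*g)(t_j) = (1/2pi) sum_k f(t_{(j-k) mod M}) g(t_k) dt
    k = np.arange(M)
    return np.array([np.sum(f[(j - k) % M] * g) * dt / (2 * np.pi)
                     for j in range(M)])

fg = conv(f, g)
for n in [0, 1, 2, 5, 10]:
    print(n, abs(coeff(fg, n) - coeff(f, n) * coeff(g, n)))
    # each difference should be tiny (discretisation error only)
```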

2.2 L2 results

First note that L²([−π, π)) ⊂ L¹([−π, π)). We model much of this discussion on the intuition gained from finite dimensional inner product spaces:

Example 2.2.1. Let V be a finite dimensional inner product space, and let e₁, e₂, ..., e_n be an orthonormal basis for V. In other words, we have an inner product ⟨·, ·⟩ with the property that ⟨e_i, e_j⟩ = δ_{ij} (where δ is the Kronecker delta). Like all inner products this induces a norm, ‖v‖ = ⟨v, v⟩^{1/2}. For any v ∈ V we can write v = a₁e₁ + a₂e₂ + ... + a_ne_n for some a₁, a₂, ..., a_n, and we have
\[ \langle v, e_j\rangle = a_1\langle e_1, e_j\rangle + a_2\langle e_2, e_j\rangle + \dots + a_n\langle e_n, e_j\rangle = a_j, \]
so
\[ v = \sum_{j=1}^{n}\langle v, e_j\rangle e_j. \]
N

Now as it happens L²([−π, π)) is an inner product space (though infinite dimensional), with the inner product
\[ \langle f, g\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\overline{g(t)}\,dt, \]
and the characters (e^{inx})_{n∈Z} happen to be orthonormal, i.e., ⟨e^{inx}, e^{imx}⟩ = δ_{mn}.

Proposition 2.2.2. If (a_n) ⊂ C, then
\[ \Bigl\|\sum_{n=-N}^{N} a_n e^{inx}\Bigr\|_2^2 = \sum_{n=-N}^{N}|a_n|^2. \]

Proof. Straightforward computation—expand the square of the sum as a prod- uct of conjugates; mixed terms disappear.

Proposition 2.2.3. Let f ∈ L²([−π, π)). Then for any N,
\[ \Bigl\|\sum_{n=-N}^{N}\hat f(n)e^{inx} - f\Bigr\|_2^2 = \|f\|_2^2 - \sum_{n=-N}^{N}|\hat f(n)|^2. \]

Proof. By writing the norm as the inner product and expanding, we get

\[ \Bigl\|\sum_{n=-N}^{N}\hat f(n)e^{inx} - f\Bigr\|_2^2 = \Bigl\|\sum_{n=-N}^{N}\hat f(n)e^{inx}\Bigr\|_2^2 + \|f\|_2^2 - 2\operatorname{Re}\Bigl\langle\sum_{n=-N}^{N}\hat f(n)e^{inx}, f\Bigr\rangle. \]
Now
\[ 2\operatorname{Re}\Bigl\langle\sum_{n=-N}^{N}\hat f(n)e^{inx}, f\Bigr\rangle = 2\operatorname{Re}\Bigl(\frac{1}{2\pi}\int_{-\pi}^{\pi}\sum_{n=-N}^{N}\hat f(n)e^{inx}\overline{f(x)}\,dx\Bigr) = 2\operatorname{Re}\sum_{n=-N}^{N}\hat f(n)\overline{\hat f(n)} = 2\sum_{n=-N}^{N}|\hat f(n)|^2. \]
Combining these with Proposition 2.2.2 finishes the proof.

Corollary 2.2.4 (Bessel's inequality). If f ∈ L²([−π, π)), then for all N
\[ \sum_{n=-N}^{N}|\hat f(n)|^2 \le \|f\|_2^2, \]
and moreover
\[ \sum_{n=-\infty}^{\infty}|\hat f(n)|^2 \le \|f\|_2^2. \]
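To see Bessel's inequality in action, here is a small numerical sketch (Python, not part of the notes); the test function and grid size are arbitrary choices.

```python
import numpy as np

# The partial sums of |f_hat(n)|^2 stay below ||f||_2^2 = (1/2pi) int |f|^2.
M = 8192
dt = 2 * np.pi / M
t = np.arange(M) * dt - np.pi
f = np.where(np.abs(t) < 1.0, 1.0, 0.0) + 0.3 * t   # an arbitrary L^2 function

norm_sq = np.sum(np.abs(f) ** 2) * dt / (2 * np.pi)

def coeff(n):
    return np.sum(f * np.exp(-1j * n * t)) * dt / (2 * np.pi)

for N in [1, 5, 20, 100]:
    bessel = sum(abs(coeff(n)) ** 2 for n in range(-N, N + 1))
    print(N, bessel, "<=", norm_sq)
# The left column increases with N but never exceeds ||f||_2^2; in fact it
# converges to it (Parseval's identity), though only the inequality is used here.
```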

2.3 Cesàro mean

As discussed, we wish to discuss in what sense S_nf(x) → f(x). This, as it happens, is a hard question; it is true almost everywhere for f ∈ L^p with p > 1, but it is not at all a simple proof. An easier question is to consider

\[ \sigma_N f(x) = \frac{S_0 f(x) + S_1 f(x) + \dots + S_N f(x)}{N+1}, \]
i.e., the average. The motivation for this is that the Cesàro means of a sequence (a_n), i.e.,
\[ s_1 = a_1,\quad s_2 = \frac{a_1 + a_2}{2},\quad s_3 = \frac{a_1 + a_2 + a_3}{3},\ \dots, \]
are quite well-behaved. For instance:

Theorem 2.3.1. If \(\lim_{n\to\infty} a_n = a\) exists, then \(\lim_{n\to\infty} s_n = a\).

The converse is not true (for instance, take a_n = (−1)^n).
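For concreteness (a small worked example, not in the original notes): with a_n = (−1)^n and the indexing above,
\[ s_n = \frac{(-1) + 1 + \dots + (-1)^n}{n} = \begin{cases} 0, & n \text{ even},\\ -\tfrac{1}{n}, & n \text{ odd}, \end{cases} \]
so s_n → 0 even though (a_n) itself has no limit.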

Lecture 3 Fejér Kernels

3.1 More about Cesàro means

Notice how, since σ_nf(x) is the mean of S_0f(x), S_1f(x), ..., S_nf(x), we can view σ_nf(x) as
\[ \sigma_n f(x) = \frac{1}{n+1}\hat f(-n)e^{-inx} + \dots + \frac{n-1}{n+1}\hat f(-2)e^{-i2x} + \frac{n}{n+1}\hat f(-1)e^{-ix} + \hat f(0) + \frac{n}{n+1}\hat f(1)e^{ix} + \frac{n-1}{n+1}\hat f(2)e^{i2x} + \dots + \frac{1}{n+1}\hat f(n)e^{inx}. \]

In other words, we can view S_nf(x) as weighting each of the Fourier coefficients f̂(−n), ..., f̂(n) with the uniform weight 1, whereas σ_nf(x) weights the kth Fourier coefficient by 1 − |k|/(n+1). For these so-called Fejér means, the natural question to ask is what happens as n → ∞. A similar question is to consider all Fourier coefficients, but weight them by something rapidly decaying, say for 0 < r < 1, consider

\[ a_r f(x) = \sum_{n=-\infty}^{\infty}\hat f(n)r^{|n|}e^{inx}, \]
called Abel means. Here, of course, the natural question to ask is what happens as r → 1. Both of these are special cases of a more general theory of so-called Fourier multipliers.

Date: January 28th, 2020.

With this goal in mind, particularly studying σn, let us formulate a more useful, closed form way of expressing this quantity. First notice how

\[ S_N f(x) = \sum_{n=-N}^{N}\hat f(n)e^{inx} = \sum_{n=-N}^{N} e^{inx}\,\frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)e^{-int}\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\sum_{n=-N}^{N} e^{in(x-t)}\,dt = f * D_N(x), \]
where
\[ D_N(t) = \sum_{n=-N}^{N} e^{int} \]
is called the Dirichlet kernel. The bad news is that whilst this is a very pretty way of representing S_N f(x), the Dirichlet kernel is a terrible kernel, in that it is in practice not easy to work with.

If instead we take the Cesàro mean of the Dirichlet kernels, i.e., K_N := σ_N D_N (meaning K_N(t) = (D_0(t) + ... + D_N(t))/(N+1)), then we get the more well-behaved
\[ \sigma_N f(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)K_N(x-t)\,dt, \]
where K_N(t) is called the Fejér kernel. As it stands, thinking of the Fejér kernel as the Cesàro mean of the Dirichlet kernels is not very easy to work with either, so we will endeavour to rewrite it in a simple closed form. First, consider the cleverly telescoping sum

\[ e^{ix/2}\sum_{n=-N}^{N} e^{inx} - e^{-ix/2}\sum_{n=-N}^{N} e^{inx} = \sum_{n=-N}^{N} e^{i(n+\frac12)x} - \sum_{n=-N}^{N} e^{i(n-\frac12)x} = e^{i(N+\frac12)x} - e^{-i(N+\frac12)x} = 2i\sin\Bigl(\Bigl(N+\frac12\Bigr)x\Bigr). \]

On the other hand, this says that
\[ \bigl(e^{ix/2} - e^{-ix/2}\bigr)\sum_{n=-N}^{N} e^{inx} = 2i\sin\Bigl(\Bigl(N+\frac12\Bigr)x\Bigr), \]
which means
\[ D_N(x) = \sum_{n=-N}^{N} e^{inx} = \frac{\sin((N+\frac12)x)}{\sin\frac{x}{2}}. \]

Hence the Fejér kernel is
\[ K_N(x) = \frac{1}{N+1}\Bigl(1 + \frac{\sin(\frac32 x)}{\sin\frac{x}{2}} + \frac{\sin(\frac52 x)}{\sin\frac{x}{2}} + \dots + \frac{\sin((N+\frac12)x)}{\sin\frac{x}{2}}\Bigr) = \frac{1}{N+1}\,\frac{1}{\sin(\frac{x}{2})^2}\sum_{k=0}^{N}\sin\Bigl(\Bigl(k+\frac12\Bigr)x\Bigr)\sin\frac{x}{2} = \frac{1}{N+1}\,\frac{1}{\sin(\frac{x}{2})^2}\sum_{k=0}^{N}\frac{\cos(kx)-\cos((k+1)x)}{2} = \frac{1}{N+1}\,\frac{1-\cos((N+1)x)}{2\sin(\frac{x}{2})^2}, \]
where we have used the product-to-sum formula for sine in the third step and noticed that the sum telescopes in the last step. Now the remaining expression, by the half-angle formula for sine, becomes
\[ \frac{1}{N+1}\,\frac{1}{\sin(\frac{x}{2})^2}\,\sin\Bigl(\frac{N+1}{2}x\Bigr)^2, \]
so in all we have the closed form
\[ K_N(x) = \frac{1}{N+1}\Bigl(\frac{\sin(\frac{N+1}{2}x)}{\sin\frac{x}{2}}\Bigr)^2 \]
for the Fejér kernel.
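As a quick numerical check (a sketch in Python, not from the notes), the closed form agrees with the Cesàro mean of the Dirichlet kernels; the grid and the value of N are arbitrary.

```python
import numpy as np

# Compare (1/(N+1)) * sum of D_0,...,D_N with the closed form of the Fejer kernel.
def dirichlet(N, x):
    n = np.arange(-N, N + 1)
    return np.real(np.exp(1j * np.outer(x, n)).sum(axis=1))

def fejer_closed_form(N, x):
    return (np.sin((N + 1) * x / 2) / np.sin(x / 2)) ** 2 / (N + 1)

x = np.linspace(0.01, np.pi, 500)   # avoid x = 0, where the formula is a limit
N = 7
cesaro = sum(dirichlet(k, x) for k in range(N + 1)) / (N + 1)
print(np.max(np.abs(cesaro - fejer_closed_form(N, x))))   # ~ machine precision
```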

Proposition 3.1.1. The Fejér kernel KN has the following properties:

(i) K_N(t) ≥ 0 for all N and t;

(ii) \(\frac{1}{2\pi}\int_{-\pi}^{\pi} K_N(t)\,dt = 1\) for all N; and

(iii) if I is any open interval containing 0, then \(\lim_{N\to\infty}\|K_N\chi_{I^c}\|_\infty = 0\).

Proof. (i) This is obvious.

(ii) Viewing K_N again as σ_N D_N, notice how only the n = 0 term from each part contributes when integrating from −π to π, so we get
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi} K_N(t)\,dt = \frac{1 + 1 + \dots + 1}{N+1} = 1. \]

(iii) Suppose (−δ, δ) ⊂ I for some δ > 0. For x ∈ [−π, π) \ (−δ, δ),

\[ K_N(x) = \frac{1}{N+1}\,\frac{\sin(\frac{N+1}{2}x)^2}{\sin(\frac{x}{2})^2} \le \frac{1}{N+1}\,\frac{\sin(\frac{N+1}{2}x)^2}{\sin(\frac{\delta}{2})^2}, \]

since in the range x ∈ [−π, π) \ (−δ, δ) we have |sin(δ/2)| ≤ |sin(x/2)|. The denominator is fixed and the numerator is bounded, so as N → ∞ this vanishes.

Remark 3.1.2. Any family of functions satisfying conditions (i), (ii), and (iii) is called an approximate identity.

These approximate identities enjoy a couple of powerful features:

Theorem 3.1.3. (i) Suppose f ∈ L^p([−π, π)) where 1 ≤ p < ∞. Then σ_nf → f in L^p.

(ii) If f is continuous and f(π) = f(−π), then σnf(x) → f(x) uniformly.

(iii) If f ∈ L¹([−π, π)) and if x is a point of continuity of f, then σ_nf(x) → f(x).

Remark 3.1.4. As we shall see, all of these fail spectacularly for the Dirichlet kernel.

Remark 3.1.5. As hinted at, this theorem is not particular to σn coming from the Fejér kernel—indeed the theorem is true for any approximate identity.

Proof. (i) This is a mostly straightforward, but at one crucial step slightly careful, computation. We have
\[ \|\sigma_N f - f\|_p^p = \frac{1}{2\pi}\int_{-\pi}^{\pi}|\sigma_N f(x) - f(x)|^p\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)K_N(x-t)\,dt - f(x)\Bigr|^p dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)K_N(t)\,dt - f(x)\Bigr|^p dx, \]
where we have shifted the t integral. Since K_N(t) has mass 1, we can factor this like
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi}(f(x-t)-f(x))K_N(t)\,dt\Bigr|^p dx \le \frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl(\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x-t)-f(x)|K_N(t)\,dt\Bigr)^p dx. \]
Notice how \(\frac{K_N(t)\,dt}{2\pi}\) is a measure of mass 1, so by Jensen's inequality this becomes
\[ \le \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x-t)-f(x)|^p K_N(t)\,dt\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x-t)-f(x)|^p\,dx\,K_N(t)\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi}\|f_t - f\|_p^p K_N(t)\,dt. \]
Here we are letting f_t(x) = f(x − t), a shift. We know that
\[ \lim_{t\to 0}\|f_t - f\|_p^p = 0, \]
so let us write the above as
\[ \frac{1}{2\pi}\int_{|t|<\delta}\|f_t - f\|_p^p K_N(t)\,dt + \frac{1}{2\pi}\int_{|t|>\delta}\|f_t - f\|_p^p K_N(t)\,dt = I + II. \]
Choosing δ such that |t| < δ implies ‖f_t − f‖_p^p < ε, we see immediately that I < ε. On the other hand, with δ fixed, there exists some M such that for N ≥ M, K_N(t) ≤ ε on [−π, π) \ (−δ, δ). Hence if N ≥ M we have II < (2‖f‖_p)^p ε. Finally, then, if δ is small enough and N ≥ M, then
\[ \|\sigma_N f - f\|_p^p < \varepsilon + \varepsilon(2\|f\|_p)^p. \]

Lecture 4 Dirichlet Kernel

4.1 The Fejér kernels are nice

Proof continued. (ii) Let x ∈ [−π, π). Then

\[ |\sigma_N f(x) - f(x)| = \Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)K_N(t)\,dt - \frac{f(x)}{2\pi}\int_{-\pi}^{\pi} K_N(t)\,dt\Bigr| \le \frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x-t)-f(x)|K_N(t)\,dt = \frac{1}{2\pi}\int_{|t|<\delta} + \frac{1}{2\pi}\int_{|t|>\delta} = I + II. \]

By assumption f is continuous on [−π, π], which is closed, so f is uniformly continuous. Hence given ε > 0 there exists δ > 0 such that |t| < δ implies |f(x − t) − f(x)| < ε for all x. Hence

\[ I < \frac{1}{2\pi}\int_{|t|<\delta}\varepsilon K_N(t)\,dt \le \varepsilon. \]

With δ fixed, choose M so large that if N ≥ M, then
\[ \|\chi_{(-\delta,\delta)^c} K_N\|_\infty < \varepsilon. \]

Therefore, since f is bounded because it is continuous,

\[ II < \frac{1}{2\pi}\int_{|t|>\delta} 2\|f\|_\infty K_N(t)\,dt < 2\|f\|_\infty\varepsilon. \]

This means

\[ |\sigma_N f(x) - f(x)| = I + II < \varepsilon(1 + 2\|f\|_\infty), \]
independent of x, so the convergence is uniform.

(iii) We play a similar game, only with a slightly different approach at the very end. For the same I and II, we deal with I by noting that since x is a point of continuity of f, there exists for every ε > 0 some δ > 0 so that |f(x−t)−f(x)| < ε for |t| < δ, and so I < ε just as before.

Date: January 30th, 2020.

For II we again choose M large to ensure that KN (t) is small, but this time we bound

\[ |f(x-t) - f(x)| \le |f(x-t)| + |f(x)|, \]
which, upon integrating against K_N(t), contributes at most ε(‖f‖₁ + |f(x)|); here the L¹ norm is finite by assumption and |f(x)| is a constant since x is fixed. The convergence follows.

Definition 4.1.1 (Trigonometric polynomial). Any finite sum of the form
\[ \sum_{n=-M}^{N} a_n e^{inx} \]
is called a trigonometric polynomial.

Corollary 4.1.2. Trigonometric polynomials are dense in L^p([−π, π)), 1 ≤ p < ∞.

Proof. Let f ∈ L^p([−π, π)). Then σ_nf → f in L^p([−π, π)) as n → ∞. Since σ_nf is a trigonometric polynomial, the corollary follows.
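Here is a small numerical sketch (Python, not from the notes) of Theorem 3.1.3(i) with p = 1: the Fejér means of a step function converge to it in L¹. It uses the weighted-coefficient form of σ_N from Lecture 3; the test function and grid are arbitrary.

```python
import numpy as np

# Watch ||sigma_N f - f||_1 shrink as N grows, for a discontinuous f in L^1.
M = 4096
dt = 2 * np.pi / M
t = np.arange(M) * dt - np.pi
f = np.where(t > 0, 1.0, -1.0)          # a step function, certainly in L^1

def sigma(N, f):
    # sigma_N f = sum over |k| <= N of (1 - |k|/(N+1)) f_hat(k) e^{ikx}
    out = np.zeros(M, dtype=complex)
    for k in range(-N, N + 1):
        fk = np.sum(f * np.exp(-1j * k * t)) * dt / (2 * np.pi)
        out += (1 - abs(k) / (N + 1)) * fk * np.exp(1j * k * t)
    return out.real

for N in [2, 8, 32, 128]:
    err = np.sum(np.abs(sigma(N, f) - f)) * dt / (2 * np.pi)
    print(N, err)
# The L^1 error decreases toward 0, as Theorem 3.1.3(i) predicts.
```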

4.2 Dirichlet kernels

Let f ∈ L¹([−π, π)). As we have previously discussed, S_N f(x) = f ∗ D_N(x), and we derived previously that

\[ D_N(x) = \frac{\sin((N+\frac12)x)}{\sin\frac{x}{2}}. \]

We mentioned at the time that the Dirichlet kernel DN is not a very good kernel. One large reason for this is this: DN is not an approximate identity. To see this, let us show that its L1 norm is not bounded. In particular,

\[ \|D_N\|_1 = \frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|\frac{\sin((N+\frac12)t)}{\sin\frac t2}\Bigr|\,dt \ge \frac{1}{\pi}\int_{-\pi}^{\pi}\frac{|\sin((N+\frac12)t)|}{|t|}\,dt = \frac{2}{\pi}\int_0^{\pi}\frac{|\sin((N+\frac12)t)|}{t}\,dt \]

since |sin x| ≤ |x|. Changing variables u = (N + ½)t, this becomes

\[ = \frac{2}{\pi}\int_0^{(N+\frac12)\pi}\frac{|\sin u|}{u}\,du. \]
Splitting this integral up into integrals over intervals of length π (throwing away the last half) we get

\[ \|D_N\|_1 \ge \frac{2}{\pi}\sum_{k=0}^{N-1}\int_{k\pi}^{(k+1)\pi}\frac{|\sin u|}{u}\,du \ge \frac{2}{\pi}\sum_{k=0}^{N-1}\frac{1}{(k+1)\pi}\int_{k\pi}^{(k+1)\pi}|\sin u|\,du = \frac{4}{\pi^2}\sum_{k=0}^{N-1}\frac{1}{k+1} = \frac{4}{\pi^2}\sum_{k=1}^{N}\frac{1}{k} \ge \frac{4}{\pi^2}\log(N+1). \]

Hence ‖D_N‖₁ → ∞ as N → ∞.
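The growth is easy to see numerically; a sketch (Python, not from the notes) integrating |D_N| on a grid and comparing with the lower bound above:

```python
import numpy as np

# The Lebesgue constants ||D_N||_1 grow like log N.
t = np.linspace(-np.pi, np.pi, 200000)   # even count, so the grid avoids t = 0 exactly

def dirichlet_norm(N):
    d = np.sin((N + 0.5) * t) / np.sin(t / 2)
    return np.trapz(np.abs(d), t) / (2 * np.pi)

for N in [1, 4, 16, 64, 256]:
    print(N, dirichlet_norm(N), 4 / np.pi**2 * np.log(N + 1))
# The first column keeps growing (roughly like (4/pi^2) log N plus a constant),
# which is exactly why D_N fails to be an approximate identity.
```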

4.3 Pointwise convergence of SN f(x)

For f ∈ L¹([−π, π)), set T_N f = S_N f(0). Then T_N : L¹([−π, π)) → C, and since the sum in S_N is linear, T_N is a linear functional (on L¹([−π, π)), for those keeping count). Consider in particular f ∈ C([−π, π]). Then

\[ |T_N f| = |S_N f(0)| = \Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi} f(0-t)D_N(t)\,dt\Bigr| \le \frac{1}{2\pi}\int_{-\pi}^{\pi}\|f\|_\infty|D_N(t)|\,dt \le \|f\|_\infty\|D_N\|_1. \]

Hence for every N, T_N : C([−π, π]) → C is a bounded linear functional, with ‖T_N‖ ≤ ‖D_N‖₁. Fixing an N, define g(t) = sgn D_N(t). Using this we create continuous functions g_j on [−π, π) with |g_j| ≤ 1 and g_j → g almost everywhere. Then

\[ \lim_{j\to\infty} T_N g_j = \lim_{j\to\infty} S_N g_j(0) = \lim_{j\to\infty}\frac{1}{2\pi}\int_{-\pi}^{\pi} g_j(0-t)D_N(t)\,dt. \]

Now since gj is bounded by construction, and since DN (t) is bounded because N is fixed, the integrand is bounded and so by Lebesgue’s dominated convergence theorem we can bring the limit inside the integral. Hence

\[ \lim_{j\to\infty} T_N g_j = \frac{1}{2\pi}\int_{-\pi}^{\pi}\lim_{j\to\infty} g_j(-t)D_N(t)\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} g(t)D_N(t)\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi}|D_N(t)|\,dt = \|D_N\|_1, \]
where we have noted that g(−t) = g(t) is even by construction. Hence on C([−π, π]) we have a family T_N : C([−π, π]) → C of bounded (‖T_N‖ ≤ ‖D_N‖₁) linear functionals with ‖T_N‖ ≥ (4/π²) log(N + 1). By the Principle of uniform boundedness we must therefore have

\[ \sup_N|T_N f| = \infty \]
for all f in a dense G_δ set of C([−π, π]). In other words, for all f in a dense G_δ set (that is, a countable intersection of dense open sets) in C([−π, π]) we have S_N f(0) growing without bound, and so for all those f we cannot have S_nf(0) → f(0). Given any x₀ ∈ [−π, π] we can repeat this argument, and so there exists a dense set of functions in C([−π, π]) such that S_N f(x₀) ̸→ f(x₀).

Lecture 5 The Principle of Uniform Boundedness

5.1 Some functional analysis

We wish to make clearer the statements at the end of last lecture, where we invoked the Principle of uniform boundedness.

Date: February 4th, 2020.

Let X be a normed vector space, say the norm is ‖·‖, meaning that we can give it a natural metric d(x, y) = ‖x − y‖. Hence we have a topology, and so on. We say that (X, ‖·‖) is a Banach space if it is complete (i.e., Cauchy sequences converge in the space). For T : X → C, T linear, we define
\[ \|T\| = \sup_{x\ne 0}\frac{|Tx|}{\|x\|}. \]

Equipped with such an operator norm, we can define X*, the set of all such linear functionals T that are bounded, i.e., ‖T‖ < ∞. This makes (X*, ‖·‖) a normed space too; in fact it, too, is a Banach space.

Theorem 5.1.1. Let X be a Banach space and let T : X → C be linear. The following are equivalent:

(i) T is bounded.

(ii) T is continuous.

(iii) T is continuous at a point x ∈ X.

Proof. We show the implications in order. First, (i) implies (ii): We have

\[ |Tx_1 - Tx_2| = |T(x_1 - x_2)| \le \|T\|\,\|x_1 - x_2\|. \]

Since ‖T‖ < ∞ by assumption, this is a Lipschitz relation, whence T is continuous. Next, (ii) implies (iii): This is trivial, since being continuous everywhere certainly includes being continuous at a point. Finally, (iii) implies (i): Suppose T is continuous at x₀ ∈ X. In other words, there exists δ > 0 such that ‖x₀ − x‖ < δ implies |Tx − Tx₀| < 1 (i.e., take ε = 1 in the continuity definition). Then for x ∈ X, x ≠ 0, we have

\[ |Tx| = \frac{2\|x\|}{\delta}\Bigl|T\Bigl(\frac{\delta x}{2\|x\|}\Bigr)\Bigr| = \frac{2\|x\|}{\delta}\Bigl|T\Bigl(\frac{\delta x}{2\|x\|}\Bigr) + Tx_0 - Tx_0\Bigr| = \frac{2\|x\|}{\delta}\Bigl|T\Bigl(\frac{\delta x}{2\|x\|} + x_0\Bigr) - Tx_0\Bigr|. \]

Since the two arguments of T are close (notice how δx/(2‖x‖) by construction has norm less than δ), this means this is less than (2/δ)‖x‖. Hence ‖T‖ ≤ 2/δ, so T is bounded.

5.2 The Baire category theorem

Theorem 5.2.1 (Baire category theorem). Let X be a Banach space. Then the intersection of a countable number of dense open sets in X is dense in X (and by definition it is a G_δ set).

Proof. Let O₁, O₂, ... be open dense sets in X. Let B(z, r) be an open ball in X of radius r around z. We need to find some
\[ y \in B(z, r) \cap\Bigl(\bigcap_{n=1}^{\infty} O_n\Bigr). \]
The idea here is fairly simple: we want to somehow construct a good Cauchy sequence, since we know it must converge in X because X is complete. However, we also wish to remain inside all the sets we are intersecting, which at the moment is a problem: they're open, not closed. Hence, shrink the B(z, r) ball a bit, say halving its radius, so that the closure of the smaller ball is contained in B(z, r). Since O₁ is dense in X, there must exist some point x₁ in this smaller ball with x₁ ∈ O₁. In particular, since O₁ is open, there exists a whole open neighbourhood of x₁ contained in O₁. As with the original point, halve the radius here, take a closure, and there must exist some new point x₂ in this new ball with x₂ ∈ O₂. Repeat this for all O_n, and we get a Cauchy sequence (x_n) with, say, x_n → y. Since x_n lies in all the half-radius balls, which are closed, this limit lies inside the closed balls, and so we are done. Note how this proof works in any complete metric space.

5.3 The Principle of uniform boundedness

We are now equipped to prove the titular result of this lecture, also known as the Banach–Steinhaus theorem.

Theorem 5.3.1 (Principle of uniform boundedness). Suppose X is a Banach space. Let T_α, α ∈ Λ, be a collection of bounded linear functionals. Then either

(i) there exists some M such that kTαk ≤ M for all α, or

(ii) sup_α |T_αx| = ∞ for all x ∈ X belonging to a dense G_δ set.

Proof. Let φ(x) = sup_α |T_αx|. For n = 1, 2, 3, ..., set V_n = { x ∈ X : φ(x) > n }. Note that if x₀ ∈ V_n, then φ(x₀) > n, so there exists some α₀ such that
\[ |T_{\alpha_0} x_0| > n. \]

Since T_{α₀} is continuous, there exists a neighbourhood U of x₀ such that x ∈ U implies |T_{α₀} x| > n. Hence V_n is open for all n. Certainly it is true that either all V_n are dense in X, or at least one of them is not. Let us study each of these two possibilities in turn. Suppose not all V_n are dense. In other words, there exists some V_N that is not dense, which in turn means there exist some x₀ and r > 0 such that B(x₀, r) ⊂ V_N^c. If ‖x‖ < r, then φ(x₀ + x) ≤ N, so for every α, ‖x‖ < r implies |T_α(x₀ + x)| ≤ N. Hence for ‖x‖ < r, we have

\[ |T_\alpha x| = |T_\alpha(x_0 + x) - T_\alpha x_0| \le 2N, \]

so that ‖T_α‖ ≤ 2N/r, whence the T_α are uniformly bounded, and (i) holds.

Else, if all V_n are dense in X, then by the Baire category theorem,
\[ V = \bigcap_{n=1}^{\infty} V_n \]
is a dense G_δ set in X, so for x ∈ V, φ(x) = ∞ (since it is greater than all n ∈ N), so sup_α |T_αx| = ∞, i.e., (ii) holds.

Our short aside into functional analysis is now over.

5.4 A Tauberian theorem

Theorem 5.4.1 (Hardy's Tauberian theorem). Suppose f ∈ L¹([−π, π)) is such that f̂(n) = O(1/|n|) as n → ±∞. Then S_nf(x) and σ_nf(x) converge for the same values of x and to the same limits. Also, if σ_nf(x) converges uniformly on some set, so does S_nf(x).

Remark 5.4.2. We know already that if Snf(x) → L, then σnf(x) → L, even without the additional hypothesis on the growth of the Fourier coefficients (this is just saying that if the limit of a sequence exists, the limit of its Cesàro means is the same). In this light, this theorem is a kind of Tauberian theorem, giving a partial converse to the statement.

Recall moreover that we proved σ_nf(x) → f(x) at a point of continuity of f, giving immediately:

Corollary 5.4.3. If f ∈ L¹([−π, π)) and f̂(n) = O(1/|n|), then S_nf(x) → f(x) at any point of continuity of f.

This tells us in particular that the functions f in the dense G_δ set from last time must not have f̂(n) = O(1/|n|).

Lecture 6 Hardy’s Tauberian Theorem

6.1 Proof of Hardy’s Tauberian theorem

Proof. As per the remark we already know that S_nf(x) → L implies σ_nf(x) → L, whence we focus our attention on the converse. Let M and N be positive integers with M > N. We compute
\[ \frac{M+1}{M-N}\sigma_M f(x) - \frac{N+1}{M-N}\sigma_N f(x) = \sum_{j=-M}^{M}\frac{M+1}{M-N}\Bigl(1-\frac{|j|}{M+1}\Bigr)\hat f(j)e^{ijx} - \sum_{j=-N}^{N}\frac{N+1}{M-N}\Bigl(1-\frac{|j|}{N+1}\Bigr)\hat f(j)e^{ijx} \]
\[ = \sum_{j=-N}^{N}\frac{(M+1-|j|)-(N+1-|j|)}{M-N}\hat f(j)e^{ijx} + \sum_{N<|j|\le M}\frac{M+1}{M-N}\Bigl(1-\frac{|j|}{M+1}\Bigr)\hat f(j)e^{ijx} = S_N f(x) + \sum_{N<|j|\le M}\frac{M+1}{M-N}\Bigl(1-\frac{|j|}{M+1}\Bigr)\hat f(j)e^{ijx}. \]

Date: February 6th, 2020.

We need to get a handle on the error at the end. Notice how

\[ \Bigl|\sum_{N<|j|\le M}\frac{M+1}{M-N}\Bigl(1-\frac{|j|}{M+1}\Bigr)\hat f(j)e^{ijx}\Bigr| \le \frac{M+1}{M-N}\sum_{N<|j|\le M}\Bigl(1-\frac{N+1}{M+1}\Bigr)\frac{c}{|j|} \]
by bounding |j| below by N + 1 and using f̂(j) = O(1/|j|) by assumption. This then is bounded by
\[ \sum_{N<|j|\le M}\frac{c}{|j|} \le C\log\frac{M}{N}. \]

Let ε > 0. Then for N large, take M > N such that log(M/N) < ε, i.e., M ≤ Ne^ε. Then we have

\[ \Bigl|\frac{M+1}{M-N}\sigma_M f(x) - \frac{N+1}{M-N}\sigma_N f(x) - S_N f(x)\Bigr| \le C\varepsilon. \]

Now suppose σ_N f(x) → L, meaning that we can choose N large enough so that for all n ≥ N, |σ_nf(x) − L| < ε². In particular, |σ_N f(x) − L| < ε² and |σ_M f(x) − L| < ε². This gives us
\[ |S_N f(x) - L| = \Bigl|S_N f(x) - \Bigl(\frac{M+1}{M-N} - \frac{N+1}{M-N}\Bigr)L\Bigr| = \Bigl|S_N f(x) - \frac{M+1}{M-N}\bigl((L-\sigma_M f(x)) + \sigma_M f(x)\bigr) + \frac{N+1}{M-N}\bigl((L-\sigma_N f(x)) + \sigma_N f(x)\bigr)\Bigr| \]
\[ \le \Bigl|S_N f(x) - \frac{M+1}{M-N}\sigma_M f(x) + \frac{N+1}{M-N}\sigma_N f(x)\Bigr| + \Bigl(\frac{M+1}{M-N} + \frac{N+1}{M-N}\Bigr)\varepsilon^2 < C\varepsilon + \frac{M+N+2}{M-N}\,\varepsilon^2. \]

Since N ≤ M ≤ eεN, we have

\[ \le C\varepsilon + \frac{e^\varepsilon N + N + 2}{e^\varepsilon N - N}\,\varepsilon^2. \]

The 2 is negligible in context of N large, so we are left with estimating

\[ \frac{e^\varepsilon + 1}{e^\varepsilon - 1}\,\varepsilon^2. \]

Now we can choose ε > 0 small enough that ε²/(e^ε − 1) < Cε, since this means ε ≤ C(e^ε − 1), which for C = 1 is just saying e^x lies above its tangent line. This finishes the proof.

Lemma 6.1.1. Suppose f ∈ L1([−π, π)) and assume

\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|\frac{f(t)}{t}\Bigr|\,dt < \infty. \]

Then \(\lim_{n\to\infty} S_nf(0) = 0\).

Lecture 7 A Covering Lemma

7.1 The Principle of localisation

We start by proving the lemma at the end of last lecture.

Proof. We compute:
\[ S_nf(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\,\frac{\sin((n+\frac12)t)}{\sin\frac t2}\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{f(t)}{\sin\frac t2}\Bigl(\sin\frac t2\cos(nt) + \cos\frac t2\sin(nt)\Bigr)\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\cos(nt)\,dt + \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{f(t)}{\sin\frac t2}\cos\frac t2\sin(nt)\,dt. \]
Notice how the first integral is almost f̂(n); in particular it is the real part of this Fourier coefficient. This means that since f ∈ L¹([−π, π)), and the Riemann–Lebesgue lemma says its Fourier coefficients must go to 0, the first integral also goes to zero. In the same manner, since sin(t/2) ≈ t/2 close to 0, the second integral is also (essentially) the imaginary part of a Fourier coefficient of an L¹([−π, π)) function, since f(t)/t is assumed to be in L¹([−π, π)), whence it, too, goes to 0.

Theorem 7.1.1 (Principle of localisation). Let f ∈ L1([−π, π)) and assume f vanishes on an open interval I. Then SN f(x) → 0 for all x ∈ I.

Proof. Let x ∈ I. Then
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|\frac{f(t)}{x-t}\Bigr|\,dt < \infty \]
since f vanishes near x (I is open), and hence by Lemma 6.1.1 (applied to the translate of f) S_N f(x) → 0.

Theorem 7.1.2 (Dini's test). If f ∈ L¹([−π, π)) and if
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|f(t_0 + t) - f(t_0)|}{|t|}\,dt < \infty, \]
then S_N f(t₀) → f(t₀).

Proof. Set g(t) = f(t + t₀) − f(t₀), which then means that by construction g satisfies the hypothesis of Lemma 6.1.1, whence S_N g(0) → 0. Now let us translate this back to f. First, for n ≠ 0 we have ĝ(n) = e^{int₀} f̂(n), and crucially for n = 0 we have ĝ(0) = f̂(0) − f(t₀), since only the 0th Fourier coefficient detects the constant. Hence
\[ S_N g(0) = \sum_{j=-N}^{N}\hat g(j)e^{ij\cdot 0} = \sum_{j=-N}^{N}\hat f(j)e^{ijt_0} - f(t_0) = S_N f(t_0) - f(t_0). \]

Since this goes to 0, we see that S_N f(t₀) → f(t₀).

Date: February 6th, 2020.

7.2 Almost everywhere convergence

A question we have been itching to answer for a while is this: does S_N f(x) → f(x) almost everywhere? We know already that it's definitely not true pointwise; we showed that given any point there exists a dense set of continuous functions all of which do not converge at the point at hand. It gets even worse: in 1925 Kolmogorov showed that there exists some f ∈ L¹([−π, π)) such that S_N f(x) diverges almost everywhere, and later went on to show that in fact there exists f ∈ L¹([−π, π)) such that S_N f(x) diverges for all x. But fear not! There is partial hope. For any p > 1, we have L^p([−π, π)) ⊊ L¹([−π, π)), and in such spaces we have better control. More on this in the future. Recall

Theorem 7.2.1 (Lebesgue differentiation theorem). If f ∈ L¹(Rⁿ), then for almost every x ∈ Rⁿ,
\[ \lim_{r\to 0}\frac{1}{|B(x,r)|}\int_{B(x,r)} f(y)\,dy = f(x). \]

This famous result from measure theory can be rephrased in terms that look more familiar to us here and now. Let
\[ L_r(t) = \frac{\chi_{B(0,r)}(t)}{|B(0,r)|}. \]
Then the integral in the theorem can be rewritten as
\[ \frac{1}{|B(x,r)|}\int_{B(x,r)} f(y)\,dy = \int_{\mathbb{R}^n} f(y)L_r(x-y)\,dy. \]
In other words, the average we are interested in can be realised as the convolution of f against a kernel L_r. As it happens, and this is not a coincidence, this is a pretty nice kernel. Notice how
\[ \int_{\mathbb{R}^n} L_r(t)\,dt = 1, \]
how L_r(t) ≥ 0, and how for any neighbourhood U of 0, L_r(t) → 0 on Rⁿ \ U. In other words, L_r, known as a box kernel, is an approximate identity!

This suggests that we might be able to prove results similar to the Lebesgue differentiation theorem for other approximate identities, and indeed we will do this in order to show that σ_N f → f almost everywhere for f ∈ L¹([−π, π)). In order to accomplish this we need some tools.

Definition 7.2.2 (Maximal function). For f ∈ L¹([−π, π)), we define the maximal function
\[ Mf(x) = \sup_{h>0}\frac{1}{2h}\int_{x-h}^{x+h}|f(t)|\,dt. \]
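As an aside, a rough numerical sketch (Python, not part of the notes) of the maximal function makes the definition concrete; it treats f as periodic and takes the supremum over a finite family of radii, so it is only an approximation.

```python
import numpy as np

# Discretised Mf(x) = sup_{h>0} (1/2h) int_{x-h}^{x+h} |f(t)| dt.
M = 2000
dt = 2 * np.pi / M
t = np.arange(M) * dt - np.pi
f = np.where(np.abs(t) < 0.5, 1.0, 0.0)       # an arbitrary test function

def periodic_window_mean(F, k):
    """Mean of F over the 2k+1 grid points centred at each index (periodically)."""
    ext = np.concatenate([F, F, F])           # crude periodic extension
    c = np.cumsum(np.concatenate([[0.0], ext]))
    i = np.arange(len(F)) + len(F)
    return (c[i + k + 1] - c[i - k]) / (2 * k + 1)

def maximal(f, num_radii=100):
    F = np.abs(f)
    out = np.zeros_like(F)
    for k in np.unique(np.linspace(1, M // 2, num_radii, dtype=int)):
        out = np.maximum(out, periodic_window_mean(F, k))
    return out

Mf = maximal(f)
print(f.max(), Mf.max(), Mf.min())
# Mf dominates |f| (up to discretisation) but is everywhere positive: even far
# from the bump, averages over large windows pick up some mass, which is the
# reason Mf of an L^1 function need not itself be integrable.
```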

Lemma 7.2.3. From any family of open intervals Ω = { I_α } in [−π, π), we can extract a sequence of pairwise disjoint intervals I₁, I₂, I₃, ..., such that

\[ \Bigl|\bigcup_{n=1}^{\infty} I_n\Bigr| \ge \frac14\Bigl|\bigcup_{\alpha} I_\alpha\Bigr|. \]

Proof. The proof of this is standard. Set

\[ a_1 = \sup_{I\in\Omega}|I|, \]
which by virtue of living in [−π, π) must be finite (and less than 2π). This means there must exist some I₁ ∈ Ω such that |I₁| > (3/4)a₁. (We could have picked any constant between 0 and 1, of course, but this is enough for our purposes.) Now let Ω₂ be the subfamily of Ω consisting of intervals disjoint from I₁, and play the same game: let
\[ a_2 = \sup_{I\in\Omega_2}|I|, \]
choose an I₂ ∈ Ω₂ such that |I₂| > (3/4)a₂, and so on, getting a sequence I₁, I₂, I₃, ..., I_n, all disjoint. In general let Ω_{n+1} be the intervals from Ω disjoint from I₁, I₂, ..., I_n, set
\[ a_{n+1} = \sup_{I\in\Omega_{n+1}}|I|, \]
and pick I_{n+1} ∈ Ω_{n+1} such that |I_{n+1}| > (3/4)a_{n+1}. This process might eventually stop, if we run out of intervals, but if it does just let I_n = ∅ for n large, so that we can still talk about a countably infinite family below without splitting into cases. Notice how
\[ \infty > \Bigl|\bigcup_{k=1}^{\infty} I_k\Bigr| = \sum_{k=1}^{\infty}|I_k|, \]
since they are disjoint, and hence |I_k| → 0 as k → ∞. This means that
\[ \bigcap_{k=1}^{\infty}\Omega_k = \emptyset, \]
for if there were some interval J in this intersection, then |J| ≤ a_k for all k, but the a_k must go to zero by the above argument.

Because of this, consider I_{α₀} ∈ Ω, and let k be the first index such that I_{α₀} ∉ Ω_k. Then I_{α₀} ∩ I_{k−1} ≠ ∅, and since
\[ |I_{k-1}| > \frac34 a_{k-1} = \frac34\sup_{I\in\Omega_{k-1}}|I|, \]

we have in particular |I_{k−1}| > (3/4)|I_{α₀}|, since I_{α₀} ∈ Ω_{k−1}. Now consider 4I_{k−1}, by which we mean the interval with the same centre

(which we'll denote c) as I_{k−1}, but four times the length. Then 4I_{k−1} ⊇ I_{α₀}.

This is easy to see: pick z ∈ I_{k−1} ∩ I_{α₀}. Then for any y ∈ I_{α₀}, we have
\[ d(c, y) \le d(y, z) + d(z, c) \le |I_{\alpha_0}| + \tfrac12|I_{k-1}| \le \tfrac43|I_{k-1}| + \tfrac12|I_{k-1}| < 2|I_{k-1}|, \]
which is exactly what is needed for y to lie in 4I_{k−1}. Then finally
\[ \Bigl|\bigcup_\alpha I_\alpha\Bigr| \le \Bigl|\bigcup_{k=1}^{\infty} 4I_k\Bigr| \le \sum_{k=1}^{\infty}|4I_k| = 4\sum_{k=1}^{\infty}|I_k| = 4\Bigl|\bigcup_{k=1}^{\infty} I_k\Bigr|, \]
since the I_k are disjoint.
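Here is a small sketch (Python, not from the notes) of the greedy selection in the lemma for a finite family of intervals; the particular family is an arbitrary example.

```python
# Repeatedly pick an interval at least 3/4 as long as the longest one still
# disjoint from everything chosen so far; with finitely many intervals the
# process terminates, and the chosen intervals are pairwise disjoint.

def select_disjoint(intervals):
    """intervals: list of (a, b) open intervals.  Returns a disjoint subfamily."""
    remaining = list(intervals)
    chosen = []
    while remaining:
        longest = max(b - a for a, b in remaining)
        # pick any interval longer than (3/4) * longest (the longest one qualifies)
        pick = next(iv for iv in remaining if iv[1] - iv[0] > 0.75 * longest)
        chosen.append(pick)
        remaining = [iv for iv in remaining
                     if iv[1] <= pick[0] or iv[0] >= pick[1]]  # keep only disjoint ones
    return chosen

family = [(-3.0, -1.0), (-1.5, 0.5), (0.0, 0.25), (0.2, 2.2), (1.0, 3.0)]
picked = select_disjoint(family)
total = sum(b - a for a, b in family)          # crude upper bound on |union of family|
print(picked, sum(b - a for a, b in picked), total)
# The lemma guarantees the picked intervals cover at least 1/4 of the union
# of the whole family (here one can check it directly).
```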

Lecture 8 Almost Everywhere Convergence

8.1 Weak type results

Theorem 8.1.1. For f ∈ L¹([−π, π)), we have
\[ |\{\, t : Mf(t) > \lambda \,\}| \le \frac{8\pi}{\lambda}\|f\|_1 \]
for every λ > 0.

Proof. For each s ∈ T_λ := { t : Mf(t) > λ }, we can choose an interval I_s centred at s such that
\[ \frac{1}{|I_s|}\int_{I_s}|f(t)|\,dt > \lambda. \]
In doing so we create a collection { I_s : s ∈ T_λ }, and from such a family we can, by Lemma 7.2.3, select a subfamily of disjoint I₁, I₂, ... such that
\[ \Bigl|\bigcup_{n=1}^{\infty} I_n\Bigr| \ge \frac14\Bigl|\bigcup_{s\in T_\lambda} I_s\Bigr|. \]
Then
\[ |T_\lambda| = |\{\, t : Mf(t) > \lambda \,\}| \le \Bigl|\bigcup_{s\in T_\lambda} I_s\Bigr| \le 4\Bigl|\bigcup_{n=1}^{\infty} I_n\Bigr| = 4\sum_{n=1}^{\infty}|I_n| < \frac{4}{\lambda}\sum_{n=1}^{\infty}\int_{I_n}|f(t)|\,dt \le \frac{8\pi}{\lambda}\|f\|_1. \]

Remark 8.1.2. It is not true that ‖Mf‖₁ ≤ c‖f‖₁. Notice how, if it were, we would have
\[ |\{\, t : Mf(t) > \lambda \,\}| \le \frac{\|Mf\|_1}{\lambda} \le \frac{c\|f\|_1}{\lambda} \]
by the Chebyshev inequality. This, then, is not true, but only in the tricky sense that the middle step is false; we just showed that in spite of this, the first and last expressions are indeed ordered thusly. This is known as a weak type (1, 1) inequality, because it does not quite give an inequality of 1-norms, but it does give such an inequality in measure, in some sense.

Notice how in the proof of the above theorem, when we pass from the integrals over I_n to ‖f‖₁, we might, in principle, lose a fair amount of precision, in the event that the I_n don't cover much of [−π, π). We can make this a little more precise:

Date: February 13th, 2020.

Lemma 8.1.3. For f ∈ L¹([−π, π)), we have
\[ |\{\, t : Mf(t) > 2\lambda \,\}| \le \frac{4}{\lambda}\int_{\{\, t : |f(t)| > \lambda \,\}}|f(t)|\,dt \]
for all λ > 0.

Proof. Fix λ > 0 and write f = g + b where
\[ g(x) = \begin{cases} f(x), & \text{if } |f(x)| \le \lambda,\\ 0, & \text{otherwise} \end{cases} \]
is the good function, and
\[ b(x) = f(x) - g(x) = \begin{cases} f(x), & \text{if } |f(x)| > \lambda,\\ 0, & \text{otherwise} \end{cases} \]
is the bad function. Then for any interval I centred at x, we have
\[ \frac{1}{|I|}\int_I|f(t)|\,dt \le \frac{1}{|I|}\int_I|g(t)|\,dt + \frac{1}{|I|}\int_I|b(t)|\,dt \le Mg(x) + Mb(x) \le \lambda + Mb(x). \]
Taking the supremum over all such I, we consequently have
\[ Mf(x) \le \lambda + Mb(x). \]
Hence
\[ |\{\, t : Mf(t) > 2\lambda \,\}| \le |\{\, t : Mg(t) > \lambda \,\}| + |\{\, t : Mb(t) > \lambda \,\}| = |\{\, t : Mb(t) > \lambda \,\}| \le \frac{8\pi\|b\|_1}{\lambda} = \frac{4}{\lambda}\int_{-\pi}^{\pi}|b(t)|\,dt = \frac{4}{\lambda}\int_{\{\, t : |f(t)| > \lambda \,\}}|f(t)|\,dt, \]
the first set on the right being empty since Mg ≤ λ.

As discussed, for 1-norms we only have these weak type results; however, for p-norms, p > 1, we have stronger results:

Corollary 8.1.4. For 1 < p ≤ ∞, there exists a constant cp such that

\[ \|Mf\|_p \le c_p\|f\|_p. \]

Proof. First let us handle p = ∞, for it is much quicker. We have
\[ \frac{1}{|I|}\int_I|f(t)|\,dt \le \|f\|_\infty, \]
and so Mf(x) ≤ ‖f‖_∞, and we are done.

Now let 1 < p < ∞. We start with a classic trick, a consequence of Fubini's theorem:
\[ \|Mf\|_p^p = \frac{1}{2\pi}\int_{-\pi}^{\pi} Mf(t)^p\,dt = \frac{1}{2\pi}\int_0^{\infty} p\lambda^{p-1}\,|\{\, t : Mf(t) > \lambda \,\}|\,d\lambda. \]

It then becomes an exercise in using Lemma 8.1.3 and switching the order of integration with Fubini's theorem strategically. In particular, setting T_λ = { t : |f(t)| > λ/2 },
\[ \|Mf\|_p^p \le \frac{1}{2\pi}\int_0^{\infty} p\lambda^{p-1}\,\frac{8}{\lambda}\int_{T_\lambda}|f(t)|\,dt\,d\lambda = \frac{4}{\pi}\int_0^{\infty}\int_{-\pi}^{\pi} p\lambda^{p-2}\chi_{T_\lambda}(t)|f(t)|\,dt\,d\lambda = \frac{4}{\pi}\int_{-\pi}^{\pi}\int_0^{\infty} p\lambda^{p-2}\chi_{T_\lambda}(t)|f(t)|\,d\lambda\,dt \]
\[ = \frac{4}{\pi}\int_{-\pi}^{\pi}\int_0^{2|f(t)|} p\lambda^{p-2}|f(t)|\,d\lambda\,dt = \frac{4}{\pi}\int_{-\pi}^{\pi}|f(t)|\,\frac{p}{p-1}\Bigl[\lambda^{p-1}\Bigr]_{\lambda=0}^{\lambda=2|f(t)|}\,dt = \frac{4}{\pi}\int_{-\pi}^{\pi}\frac{p}{p-1}|f(t)|(2|f(t)|)^{p-1}\,dt = \frac{8\cdot 2^{p-1}p}{p-1}\,\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(t)|^p\,dt = c_p^p\|f\|_p^p. \]

Hence, taking pth roots, ‖Mf‖_p ≤ c_p‖f‖_p.

Remark 8.1.5. Notice how the constant depending on p blows up as p → 1; this illustrates exactly why we cannot have the strong type (1, 1) inequality discussed above.

Corollary 8.1.6. Let (L_N(t))_{N∈N} be a sequence of approximate identities that are all even. For f ∈ L¹([−π, π)), set

\[ f^*(x) = \sup_N|L_N * f(x)|. \]
Then f*(x) ≤ Mf(x).

Proof. The idea is the standard wedding cake decomposition, by which we mean dominating a given L_N by a finite sum of characteristic functions of nested intervals B_k centred at 0. Taking, without loss of generality, x = 0, something like

\[ |L_N * f(0)| = \Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi} L_N(t)f(t)\,dt\Bigr| \le \frac{1}{2\pi}\int_{-\pi}^{\pi}\sum_{k=1}^{m} a_k\chi_{B_k}(t)|f(t)|\,dt = \frac{1}{2\pi}\sum_{k=1}^{m} a_k\int_{B_k}|f(t)|\,dt = \frac{1}{2\pi}\sum_{k=1}^{m} a_k|B_k|\,\frac{1}{|B_k|}\int_{B_k}|f(t)|\,dt \le \frac{Mf(0)}{2\pi}\sum_{k=1}^{m} a_k|B_k| \le Mf(0)\,\|L_N\|_1\,\gamma, \]
where γ > 1 measures how far off the wedding cake is from L_N. Taking the supremum over N we get f*(0) ≤ Mf(0)γ for all γ > 1, and taking the infimum over γ we get f*(0) ≤ Mf(0) as desired.

Corollary 8.1.7. If f ∈ L1([−π, π)), then for almost every x ∈ [−π, π),

\[ \lim_{h\to 0}\frac{1}{2h}\int_{x-h}^{x+h} f(t)\,dt = f(x). \]

Proof. Note first of all that the result is definitely true for continuous f. Define

\[ Tf(x) = \limsup_{h\to 0^+}\frac{1}{2h}\int_{x-h}^{x+h}|f(t)-f(x)|\,dt. \]

We want to show that Tf(x) = 0 almost everywhere, because then the lim inf is also zero, what with the integrand being nonnegative, and so the limit is 0. Let ε > 0. Choose k ∈ N and a continuous g such that ‖f − g‖₁ < 1/k, which is doable since continuous functions are dense in L¹. Then for every h > 0,

\[ \frac{1}{2h}\int_{x-h}^{x+h}|f(t)-g(t)-(f(x)-g(x))|\,dt \le \frac{1}{2h}\int_{x-h}^{x+h}|f(t)-g(t)|\,dt + |f(x)-g(x)| \le M(f-g)(x) + |f(x)-g(x)|. \]

Notice how

\[ \frac{1}{2h}\int_{x-h}^{x+h}|f(t)-f(x)|\,dt \le \frac{1}{2h}\int_{x-h}^{x+h}|f(t)-g(t)-(f(x)-g(x))|\,dt + \frac{1}{2h}\int_{x-h}^{x+h}|g(t)-g(x)|\,dt. \]
Taking lim sup, this tells us that

T f(x) ≤ M(f − g)(x) + |f(x) − g(x)|.

Then by Chebyshev's inequality we have
\[ |\{\, x : Tf(x) > \varepsilon \,\}| \le \bigl|\{\, x : M(f-g)(x) > \tfrac\varepsilon2 \,\}\bigr| + \bigl|\{\, x : |f(x)-g(x)| > \tfrac\varepsilon2 \,\}\bigr| \le \frac{C}{\varepsilon}\|f-g\|_1 + \frac{c}{\varepsilon}\|f-g\|_1 \le \frac{C'}{k\varepsilon}. \]
Letting k → ∞, this means |{ x : Tf(x) > ε }| = 0, and so, taking a sequence of ε → 0, this gives us Tf(x) = 0 almost everywhere.

Lecture 9 More Almost Everywhere Convergence

9.1 Generalising a theme

The key steps in the proof at the end of last lecture were to (a) know the result for a dense class and (b) have a maximal function that gives us control in a weak or strong type sense. We can play this game in many more situations.

Theorem 9.1.1. Let f ∈ L¹([−π, π)). Then σ_N f(x) → f(x) almost everywhere.

Date: February 18th, 2020.

Recall how we have
\[ \sigma_N f(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)K_N(x-t)\,dt, \]
where
\[ K_N(x) = \frac{1}{N+1}\Bigl(\frac{\sin(\frac{N+1}{2}x)}{\sin\frac x2}\Bigr)^2 \]
is the Fejér kernel. To prove this we will make use of the following lemmata.

Lemma 9.1.2. For all N we have

\[ |K_N(t)| \le L_N(t) = \begin{cases} N+1, & \text{if } -\frac{\pi}{N+1} < t < \frac{\pi}{N+1},\\[3pt] \dfrac{\pi^2}{(N+1)t^2}, & \text{if } \frac{\pi}{N+1} \le |t| \le \pi. \end{cases} \]

Proof. As it happens, both bounds are true for all t; we’ve just taken the best one where applicable. Hence we compute

\[ |K_N(t)| = \Bigl|\sum_{j=-N}^{N}\Bigl(1-\frac{|j|}{N+1}\Bigr)e^{ijt}\Bigr| \le 1 + 2\sum_{j=1}^{N}\Bigl(1-\frac{j}{N+1}\Bigr) = 1 + 2\Bigl(N - \frac{1}{N+1}\,\frac{N(N+1)}{2}\Bigr) = N+1, \]

whence the first part holds. Next, notice how for x ∈ [−π/2, π/2], |sin x| ≥ (2/π)|x|, and therefore we have |sin(x/2)| ≥ (2/π)·(|x|/2) = |x|/π for x ∈ [−π, π]. Therefore

\[ K_N(x) = \frac{1}{N+1}\Bigl(\frac{\sin(\frac{N+1}{2}x)}{\sin\frac x2}\Bigr)^2 \le \frac{1}{N+1}\,\frac{1}{(|x|/\pi)^2} = \frac{\pi^2}{(N+1)x^2}. \]

This LN (t) that bounds KN (t) is almost, but not quite, an approximate identity of the kind we studied in the wedding cake argument last time.

Lemma 9.1.3. L_N(t) is even, ‖L_N‖₁ ≤ 2 for all N, and for δ > 0, L_N(t) → 0 uniformly on [−π, π) \ (−δ, δ).

Proof. That LN is even and that it goes to 0 uniformly away from the origin is clear. For the L1 bound, we compute

\[ \|L_N\|_1 = \frac{1}{2\pi}\int_{-\pi}^{\pi} L_N(t)\,dt = \frac{1}{2\pi}\Bigl((N+1)\,\frac{2\pi}{N+1} + 2\int_{\pi/(N+1)}^{\pi}\frac{\pi^2}{(N+1)t^2}\,dt\Bigr) = 1 + \frac{\pi}{N+1}\Bigl[-\frac1t\Bigr]_{\pi/(N+1)}^{\pi} = 1 + \frac{\pi}{N+1}\Bigl(\frac{N+1}{\pi} - \frac1\pi\Bigr) = 2 - \frac{1}{N+1} \le 2. \]
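Both lemmata are easy to verify numerically; a sketch (Python, not from the notes), with the grid chosen to avoid t = 0:

```python
import numpy as np

# Check that the Fejer kernel K_N is dominated by L_N from Lemma 9.1.2 and that
# (1/2pi) int L_N stays below 2, as in Lemma 9.1.3.
def fejer(N, x):
    return (np.sin((N + 1) * x / 2) / np.sin(x / 2)) ** 2 / (N + 1)

def L(N, x):
    return np.where(np.abs(x) < np.pi / (N + 1),
                    N + 1.0,
                    np.pi ** 2 / ((N + 1) * x ** 2))

x = np.linspace(-np.pi, np.pi, 100000)   # even count, so 0 is not a grid point
for N in [3, 10, 50]:
    print(N, bool(np.all(fejer(N, x) <= L(N, x) + 1e-9)),
          np.trapz(L(N, x), x) / (2 * np.pi))
# Prints True and a value just under 2 for each N.
```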

Based on the same argument as last time, we therefore have

Corollary 9.1.4. Define f*(x) = sup_N |L_N ∗ f(x)|. Then ‖f*‖_p ≤ 2c_p‖f‖_p and
\[ |\{\, x : f^*(x) > \lambda \,\}| \le \frac{c\|f\|_1}{\lambda} \]
for every λ > 0.

Correspondingly, we get

Corollary 9.1.5. Define
\[ Mf(x) = \sup_N|\sigma_N f(x)| = \sup_N\Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi} K_N(x-t)f(t)\,dt\Bigr|. \]

Then ‖Mf‖_p ≤ c_p‖f‖_p and
\[ |\{\, x : Mf(x) > \lambda \,\}| \le \frac{c\|f\|_1}{\lambda} \]
for all λ > 0.

Proof. This follows since
\[ \Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi} K_N(x-t)f(t)\,dt\Bigr| \le L_N * |f|(x), \]
since K_N is dominated by L_N, a nice even, wedding-cake sort of kernel.

With this under our belt, we are equipped to prove Theorem 9.1.1.

Proof of Theorem 9.1.1. The idea is identical to the proof last time. Let ε > 0 and k ∈ N. Pick a trigonometric polynomial g so that ‖f − g‖₁ < 1/k, since the trigonometric polynomials are dense in L¹([−π, π)). Set

\[ Tf(x) = \limsup_{N\to\infty}|f(x) - \sigma_N f(x)|. \]
As before, we want to show that Tf(x) = 0 almost everywhere. Of course we have

\[ |f(x) - \sigma_N f(x)| \le |f(x) - g(x)| + |g(x) - \sigma_N g(x)| + |\sigma_N g(x) - \sigma_N f(x)|, \]
where the middle term goes to 0 as N → ∞ since σ_N g(x) is a trigonometric polynomial approximating the trigonometric polynomial g(x). Taking lim sup, we get
\[ Tf(x) \le |f(x) - g(x)| + 0 + M(g-f)(x), \]
and so by Chebyshev's inequality
\[ |\{\, x : Tf(x) > 2\varepsilon \,\}| \le |\{\, x : |f(x)-g(x)| > \varepsilon \,\}| + |\{\, x : M(g-f)(x) > \varepsilon \,\}| \le \frac{\|f-g\|_1}{\varepsilon} + \frac{c\|f-g\|_1}{\varepsilon} = \frac{C\|f-g\|_1}{\varepsilon} < \frac{C}{k\varepsilon}. \]
This is true for every k, and the left-hand side is independent of k, so let k go to infinity and we have |{ x : Tf(x) > 2ε }| = 0. Now taking a sequence of epsilons going to 0, this tells us |{ x : Tf(x) > 0 }| = 0, so Tf(x) = 0 almost everywhere.

This is the same strategy Carleson used in 1966 to settle Lusin’s conjecture, showing

Theorem 9.1.6 (Carleson's theorem). If f ∈ L²([−π, π)), then S_N f(x) → f(x) almost everywhere.

Using this, Hunt showed a year later that if f ∈ Lp([−π, π)) for 1 < p < ∞, then SN f(x) → f(x) almost everywhere as well. This is much, much harder since, as we have discussed, the Dirichlet kernel DN giving rise to SN f(x) is not an approximate identity. In fact, its norm grows like log N without bound. Hence the main result of Carleson and Hunt to accomplish this was

Lemma 9.1.7. For f ∈ L1([−π, π)), set

\[ Mf(x) = \sup_N\Bigl|\frac{1}{2\pi}\int_{-\pi}^{\pi} D_N(x-t)f(t)\,dt\Bigr|. \]

Then for 1 < p < ∞ there exists some c_p depending only on p such that ‖Mf‖_p ≤ c_p‖f‖_p.

Carleson did p = 2, and Hunt adapted Carleson's argument and did 1 < p < ∞. Carleson's own proof of this is very terse, but to give a sense of the delicacy of the arguments involved, when it was later written up in a book the proof occupied pages 24 through 121. All this said: assuming this lemma, Carleson's theorem is straightforward; it follows in exactly the same way as our two previous almost everywhere results, for exactly the same reason.

Lecture 10 Herglotz’s Theorem

10.1 Making Fourier coefficients out of a sequence

A natural question to ask is whether, given some sequence (a_n)_{n∈Z}, we can find a function f ∈ L¹([−π, π)) such that a_n = f̂(n). Or, whether a_n = μ̂(n) for some measure μ.

Definition 10.1.1. A sequence (a_n) is called positive definite if for any sequence of complex numbers (z_j) with only a finite number of nonzero elements we have
\[ \sum_{m,n} a_{n-m} z_n\overline{z_m} \ge 0. \]

Theorem 10.1.2 (Herglotz's theorem). Let (a_n) ⊂ C. Then (a_n) is positive definite if and only if there exists a positive measure μ on [−π, π) such that a_n = μ̂(n) for all n ∈ Z.

Date: February 25th, 2020.

Proof. Let us first prove the converse direction, because that is the easy part. Suppose a_n = μ̂(n) for some positive measure μ. Consider (z_j) with only finitely many nonzero elements. Then
\[ 0 \le \frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|\sum_{m\in\mathbb{Z}} z_m e^{-imt}\Bigr|^2 d\mu(t) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\sum_{m,n} z_n\overline{z_m}\,e^{imt}e^{-int}\,d\mu(t) = \sum_{m,n} z_n\overline{z_m}\,\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-i(n-m)t}\,d\mu(t) = \sum_{m,n} z_n\overline{z_m}\,\hat\mu(n-m) = \sum_{m,n} a_{n-m}\,z_n\overline{z_m}, \]
where switching the order of summation and integration is justified by only finitely many z_j being nonzero, so the sum is finite.

For the forward direction we need much more machinery, and we will get back to it once the machinery is in hand.

Lemma 10.1.3. Given (a_n), we have a_n = μ̂(n) for some positive measure μ if
\[ \sum_{j=-n}^{n}\Bigl(1-\frac{|j|}{n+1}\Bigr)a_j e^{ijt} \ge 0 \]
for all n and t. In other words, a_n = μ̂(n) for some positive measure μ if
\[ K_n * \sum_{j=-n}^{n} a_j e^{ijt} \ge 0 \]
for all n and t.

To prove this, let us go on a brief functional analysis tangent. Suppose X is a normed space, and consider a linear mapping T : X → C. Set
\[ \|T\| = \sup_{x\ne 0}\frac{|Tx|}{\|x\|} = \sup_{\|x\|=1}|Tx|. \]
If ‖T‖ < ∞, then T is a bounded linear functional, which is equivalent to T being continuous. Let X* be the space of all bounded linear functionals on X. Then (X*, ‖·‖) is in fact a normed space, called the dual space of X. There is no need to stop there: X was a normed space and from it we made X*, but now X* is a normed space, so we can make the dual space X** of X*, consisting of the bounded linear functionals from X* to C.

Now consider (X, ‖·‖) again, and the family of bounded linear functionals T ∈ X*. Then we can also equip X with the weak topology from this family, i.e., the weakest (or coarsest) topology on X such that all T ∈ X* are continuous. That is to say, the weak topology is generated by { T^{-1}(U) : T ∈ X* and U ⊂ C open }. This topology has fewer open sets than (X, ‖·‖), hence "weak".

Now X* also has two topologies: the norm topology from ‖·‖ and the weak topology from X**. Given x ∈ X, define x* ∈ X** by
\[ x^*(T) = Tx \]
for T ∈ X*, i.e., the evaluation functional. This gives us yet another kind of topology: on X*, the weak-* topology is the weakest topology which makes all x* continuous. This topology is then generated by { (x*)^{-1}(U) : x ∈ X and U ⊂ C open }, which is a subset of the generators of the weak topology on X*, so it is indeed weaker (since it makes potentially fewer maps continuous). With this we can make sense of the following tool we need to prove the lemma:

Theorem 10.1.4 (Banach–Alaoglu theorem). The closed unit ball of X*, i.e., { T : ‖T‖ ≤ 1 }, is compact in the weak-* topology.

Proof of Lemma 10.1.3. Set

\[ \sigma_n(t) = \sum_{j=-n}^{n}\Bigl(1-\frac{|j|}{n+1}\Bigr)a_j e^{ijt}. \]

By assumption σn(t) ≥ 0. Then

\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}|\sigma_n(t)|\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi}\sigma_n(t)\,dt = a_0, \]

so ‖σ_n‖₁ = a₀ for all n. Hence σ_n ∈ B(0, a₀) ⊂ L¹([−π, π)) for all n. But L¹([−π, π)) is a proper subset of the space of finite measures on [−π, π), which is equal to C([−π, π))*. Recall how T : C([−π, π)) → C being bounded and linear means there exists some μ such that
\[ Tf = \frac{1}{2\pi}\int_{-\pi}^{\pi} f\,d\mu. \]
By the Banach–Alaoglu theorem there exists a subsequence (σ_{n_j}) and a measure μ such that σ_{n_j} → μ in the weak-* sense, so

\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}\sigma_{n_j}(t)g(t)\,dt \to \frac{1}{2\pi}\int_{-\pi}^{\pi} g(t)\,d\mu(t) \]
for all g ∈ C([−π, π)). Note that if g ≥ 0, then so is
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}\sigma_{n_j}(t)g(t)\,dt \ge 0 \]
for all n_j. Hence for all g ≥ 0,
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi} g(t)\,d\mu(t) \ge 0, \]
so μ is a positive measure. For each k ∈ Z, e^{ikx} ∈ C([−π, π)), so
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}\sigma_{n_j}(t)e^{-ikt}\,dt \to \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-ikt}\,d\mu(t). \]

But for nj large, with k fixed,

\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}\sigma_{n_j}(t)e^{-ikt}\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi}\sum_{\ell=-n_j}^{n_j}\Bigl(1-\frac{|\ell|}{n_j+1}\Bigr)a_\ell e^{i\ell t}e^{-ikt}\,dt = \Bigl(1-\frac{|k|}{n_j+1}\Bigr)a_k, \]
because only the ℓ = k term survives. Let n_j → ∞: the right-hand side tends to a_k, while by the previous display the left-hand side tends to μ̂(k), so μ̂(k) = a_k.

Lecture 11 Harmonic Functions

11.1 Finishing Herglotz's theorem

With Lemma 10.1.3 in place we are equipped to finally prove the forward direction of Herglotz's theorem.

Proof. Assume (a_n)_{n∈Z} is positive definite. We need to show that (a_n) satisfies the lemma, i.e.,
\[ \sum_{j=-n}^{n}\Bigl(1-\frac{|j|}{n+1}\Bigr)a_j e^{ijt} \ge 0 \]
for all t and n, whence there exists a positive measure μ with μ̂(n) = a_n. Fix N and t and consider the sequence

\[ e^{-iNt},\ e^{-i(N-1)t},\ \dots,\ e^{-it},\ 1,\ e^{it},\ \dots,\ e^{iNt}. \]

Then by definition of an being positive definite,

\[ 0 \le \sum_{-N\le n,m\le N} a_{n-m} e^{int}e^{-imt} = \sum_j a_j c_{j,N} e^{ijt}, \]
where c_{j,N} counts the number of ways to write j = n − m with |n| ≤ N and |m| ≤ N. In other words,
\[ c_{j,N} = \begin{cases} 2N + 1 - |j|, & \text{if } |j| \le 2N,\\ 0, & \text{otherwise.} \end{cases} \]

This means that for all N and t,

\[ 0 \le \sum_{j=-2N}^{2N}\bigl((2N+1) - |j|\bigr)a_j e^{ijt}, \]
which when divided by 2N + 1 becomes

\[ 0 \le \sum_{j=-2N}^{2N}\Bigl(1 - \frac{|j|}{2N+1}\Bigr)a_j e^{ijt}. \]

Note how no part of this argument requires N to be an integer; taking half-integers as well we recover the required inequality.

Date: February 27th, 2020.
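As a small numerical aside (a sketch, not from the notes), the easy direction of Herglotz's theorem can be seen by checking that the Toeplitz matrix [a_{n−m}] built from the Fourier coefficients of a positive measure is positive semidefinite; the measure below is an arbitrary example.

```python
import numpy as np

# mu = (normalised) Lebesgue measure + a point mass at t0, so that
# mu_hat(n) = [n == 0] + e^{-i n t0}/(2 pi).  All eigenvalues of the Toeplitz
# matrix [a_{n-m}] should be >= 0 up to rounding error.
t0 = 1.3                                        # location of the point mass (arbitrary)

def a(n):
    return (1.0 if n == 0 else 0.0) + np.exp(-1j * n * t0) / (2 * np.pi)

N = 20
T = np.array([[a(n - m) for m in range(-N, N + 1)] for n in range(-N, N + 1)])
eigs = np.linalg.eigvalsh(T)                    # T is Hermitian since a(-n) = conj(a(n))
print(eigs.min())                               # nonnegative up to rounding
```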

11.2 Harmonic functions and kernels

Let
\[ f(z) = \sum_{n=0}^{\infty} a_n z^n \]
be analytic on the closed disc D̄ (by which we mean it is analytic on some open neighbourhood of D̄). Let 0 < r < 1, and set

\[ f_r(\theta) = f(re^{i\theta}) = \sum_{n=0}^{\infty} a_n r^n e^{in\theta}. \]

These are of course C^∞ functions; the series are absolutely convergent, so we can differentiate termwise. By Cauchy's theorem,

\[ f_r(\theta) = \frac{1}{2\pi i}\int_{|\xi|=1}\frac{f(\xi)}{\xi - re^{i\theta}}\,d\xi = \frac{1}{2\pi i}\int_{-\pi}^{\pi}\frac{f(e^{it})}{e^{it} - re^{i\theta}}\,ie^{it}\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{f(e^{it})}{1 - re^{i(\theta-t)}}\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(e^{it})C_r(\theta-t)\,dt, \]
where
\[ C_r(s) = \frac{1}{1 - re^{is}} = \sum_{n=0}^{\infty}(re^{is})^n = \sum_{n=0}^{\infty} r^n e^{isn} \]
is the Cauchy kernel. In a slight abuse of notation we will write things like

\[ f(\theta) := f(e^{i\theta}) = \sum_{n=0}^{\infty} a_n e^{in\theta}, \]

and so viewed in this way f̂(n) = a_n, and (per above) Ĉ_r(n) = r^n. In this view things work out very nicely:

\[ \hat f_r(n) = \widehat{f * C_r}(n) = \hat f(n)\hat C_r(n) = a_n r^n, \]
as expected.

Suppose u is harmonic on D, meaning

\[ \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0. \]

From complex variables we know that u = Re f for some analytic function f = u + iv, with u and v satisfying the Cauchy–Riemann equations u_x = v_y and u_y = −v_x. Hence
\[ u(z) = \frac12\bigl(f(z) + \overline{f(z)}\bigr). \]
Now if, as above, we have
\[ f(z) = \sum_{n=0}^{\infty} a_n z^n, \]
we get
\[ u(z) = \frac12\Bigl(\sum_{n=1}^{\infty} a_n z^n + \sum_{n=1}^{\infty}\overline{a_n}\,\overline{z}^n + 2\operatorname{Re} a_0\Bigr). \]
Evaluating this at z = re^{iθ}, we write

\[ u(re^{i\theta}) = \frac12\Bigl(\sum_{n=1}^{\infty} a_n r^n e^{in\theta} + \sum_{n=1}^{\infty}\overline{a_n} r^n e^{-in\theta} + 2\operatorname{Re} a_0\Bigr) = \sum_{n=-\infty}^{\infty} c_n r^{|n|}e^{in\theta}, \]
where
\[ c_n = \begin{cases} \frac{a_n}{2}, & \text{if } n = 1, 2, 3, \dots,\\[2pt] \frac{\overline{a_{-n}}}{2}, & \text{if } n = -1, -2, -3, \dots,\\[2pt] \operatorname{Re} a_0, & \text{if } n = 0. \end{cases} \]
Since we are viewing u as the real part of a complex analytic function, consider u real-valued. The above is true also for r = 1, so we have

\[ u(\theta) = \sum_{n=-\infty}^{\infty} c_n e^{in\theta}, \]

and so û(n) = c_n. Hence u(re^{iθ}) = u_r(θ) = u ∗ P_r(θ), where

\[ P_r(\theta) = \sum_{n=1}^{\infty} r^n e^{in\theta} + \sum_{n=-\infty}^{-1} r^{-n} e^{in\theta} + 1 \]

Pr(θ) = Cr(θ) − 1 + Cr(θ) − 1 + 1 = Cr(θ) + Cr(θ) − 1  2   2 1 − reiθ  = Re(2C (θ) − 1) = Re − 1 = Re − r 1 − reiθ 1 − reiθ 1 − reiθ 1 + reiθ  1 + reiθ 1 − re−iθ  = Re = Re · 1 − reiθ 1 − reiθ 1 − re−iθ 1 + reiθ − re−iθ − r2  1 − r2 + r(2i sin θ) 1 − r2 = Re = Re = . 1 − re−iθ − reiθ + r2 1 − 2r cos θ + r2 1 − 2r cos θ + r2

Hence, for u harmonic in D, Z π 1 it ur(θ) = u(e )Pr(θ − t) dt = u ∗ Pr(θ). 2π −π

Lecture 12 Harmonic Functions, continued

12.1 Harmonic conjugates and such

Proposition 12.1.1. If f = u + iv is analytic on D, and if f(0) is real, then
\[ f(re^{i\theta}) = \frac{1}{2\pi}\int_{-\pi}^{\pi} u(t)H_r(\theta-t)\,dt, \]

where H_r(s) = 2C_r(s) − 1 is the Herglotz kernel.

Date: March 3rd, 2020.

Proof. We have u = ½(f + f̄), so
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi} u(t)H_r(\theta-t)\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac12\bigl(f(t)+\overline{f(t)}\bigr)\bigl(2C_r(\theta-t)-1\bigr)\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)C_r(\theta-t)\,dt + \frac{1}{2\pi}\int_{-\pi}^{\pi}\overline{f(t)}C_r(\theta-t)\,dt - \frac{1}{4\pi}\int_{-\pi}^{\pi}\bigl(f(t)+\overline{f(t)}\bigr)\,dt. \]

The first integral is simply f(re^{iθ}) by previous calculations, and the last integral is Re f(0) by Cauchy's mean-value theorem. The middle integral is the trickier part: let
\[ g(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\overline{f(t)}\,C_r(\theta-t)\,dt. \]
Then ĝ(n) = \(\hat{\bar f}(n)\hat C_r(n)\), but
\[ \overline{f(\theta)} = \sum_{n=0}^{\infty}\overline{\hat f(n)}\,e^{-in\theta}, \]
so this has only nonpositive indices, whereas
\[ C_r(\theta) = \sum_{n=0}^{\infty} r^n e^{in\theta} \]
has only nonnegative indices. Hence ĝ(n) = 0 except when n = 0, in which case ĝ(0) = \(\overline{\hat f(0)}\,r^0 = \overline{f(0)}\). Hence when f(0) is real, we get the desired result.

Notice how
\[ H_r(\theta) = 2C_r(\theta) - 1 = \frac{2}{1-re^{i\theta}} - 1 = \frac{2 - (1-re^{i\theta})}{1-re^{i\theta}} = \frac{1+re^{i\theta}}{1-re^{i\theta}}. \]
Hence
\[ H_r(\theta-t) = \frac{1 + re^{i(\theta-t)}}{1 - re^{i(\theta-t)}} = \frac{e^{it} + re^{i\theta}}{e^{it} - re^{i\theta}}. \]
Hence if z = re^{iθ}, we have the real and imaginary parts

\[ f(z) = \frac{1}{2\pi}\int_{-\pi}^{\pi} u(t)\,\frac{e^{it}+re^{i\theta}}{e^{it}-re^{i\theta}}\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} u(t)\,\frac{e^{it}+re^{i\theta}}{e^{it}-re^{i\theta}}\cdot\frac{e^{-it}-re^{-i\theta}}{e^{-it}-re^{-i\theta}}\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} u(t)\,\frac{1-r^2}{1-2r\cos(\theta-t)+r^2}\,dt + \frac{i}{2\pi}\int_{-\pi}^{\pi} u(t)\,\frac{2r\sin(\theta-t)}{1-2r\cos(\theta-t)+r^2}\,dt. \]
In other words,
\[ v(re^{i\theta}) = \frac{1}{2\pi}\int_{-\pi}^{\pi} u(t)\,\frac{2r\sin(\theta-t)}{1-2r\cos(\theta-t)+r^2}\,dt = Q_r * u(\theta), \]
where
\[ Q_r(s) = \frac{2r\sin s}{1-2r\cos s+r^2} \]
is the conjugate kernel. Hence in summary, if f(z) = u(z) + iv(z) is analytic on D, then

(a) f(re^{iθ}) = C_r ∗ f(e^{iθ}), where \(C_r(s) = \dfrac{1}{1-re^{is}}\);

(b) u(re^{iθ}) = P_r ∗ u(e^{iθ}), where \(P_r(s) = \dfrac{1-r^2}{1-2r\cos s+r^2}\);

(c) f(re^{iθ}) = H_r ∗ u(e^{iθ}) if f(0) is real, where H_r(s) = 2C_r(s) − 1; and

(d) v(re^{iθ}) = Q_r ∗ u(e^{iθ}), where \(Q_r(s) = \dfrac{2r\sin s}{1-2r\cos s+r^2}\).

In other words, knowing the values of the real part of an analytic function on the boundary of D is enough to determine everything about the function in the interior (maybe requiring f(0) real).

12.2 Boundary values

Now for sort of the converse problem; we know from above that if we have an analytic function we can understand it by studying it on the unit circle. Now we ask, given a function defined on the unit circle, can we extend it in some sensible way to a function on the interior?

Let f be defined on |z| = 1 or, equivalently, on [−π, π). Given such an f, can we extend it to a harmonic function f in D such that f(re^{iθ}) → f(e^{iθ}) in some sense? Can you make such an extended f continuous up to the boundary? (Probably not in general; the boundary function itself needn't be continuous to start with.) Can you make
\[ \lim_{r\to 1} f(re^{i\theta}) = f(e^{i\theta}) \]
for every θ? For almost all θ? Or, perhaps, if we consider f_r(θ) = f(re^{iθ}) as a family of functions, can we make f_r → f in L^p? All of these are variants of so-called Dirichlet problems. We will study several of them.

Theorem 12.2.1. Suppose f ∈ L¹([−π, π)). Define
\[ f(re^{i\theta}) = f * P_r(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)P_r(\theta-t)\,dt. \]
Then

(i) f(re^{iθ}) is harmonic in D;

(ii) if 1 ≤ p < ∞ and if f ∈ L^p([−π, π)), then f_r → f in L^p;

(iii) if θ is a point of continuity of f, then

\[ \lim_{r\to 1} f(re^{i\theta}) = f(\theta); \]

(iv) if f is continuous, then fr(θ) → f(θ) uniformly. Most of this is a consequence of the following fact:

Lemma 12.2.2. The Poisson kernel P_r(t) is an approximate identity (as r → 1).

Proof. First,
\[ P_r(t) = \frac{1-r^2}{1-2r\cos t + r^2} \ge 0, \]
since 2r cos t ≤ r² + cos(t)² ≤ r² + 1 by the arithmetic and geometric mean inequality. Secondly,
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(t)\,dt = P_r * 1(0) = 1 \]
per our previous calculation. Finally, if 0 < δ < π, then for δ ≤ |θ| ≤ π,

\[ P_r(\theta) \le \frac{1-r^2}{1-2r\cos\delta + r^2}, \]
so as r → 1, P_r(θ) → 0 uniformly for δ ≤ |θ| ≤ π.

Lecture 13 The Dirichlet Problem

13.1 The Classical Dirichlet problem

Proof continued. For the second part, let ε > 0. We have by Jensen's inequality that
\[ |f(re^{i\theta}) - f(e^{i\theta})|^p \le \frac{1}{2\pi}\int_{-\pi}^{\pi}|f(e^{i(\theta-t)}) - f(e^{i\theta})|^p P_r(t)\,dt = \frac{1}{2\pi}\int_{|t|<\delta}|f(e^{i(\theta-t)}) - f(e^{i\theta})|^p P_r(t)\,dt + \frac{1}{2\pi}\int_{\delta<|t|<\pi}|f(e^{i(\theta-t)}) - f(e^{i\theta})|^p P_r(t)\,dt, \]
with δ > 0 to be determined. Taking the notation f_{(t)}(e^{iθ}) = f(e^{i(θ−t)}) for a shift, integrating in θ this means
\[ \|f_r - f\|_p^p \le \frac{1}{2\pi}\int_{|t|<\delta}\|f_{(t)} - f\|_p^p P_r(t)\,dt + \frac{1}{2\pi}\int_{\delta<|t|<\pi}\|f_{(t)} - f\|_p^p P_r(t)\,dt = I + II. \]

It is a fact that for $f \in L^p([-\pi, \pi))$, $f_{(t)} \to f$ in $L^p$ as $t \to 0$ (it is obviously true for continuous functions, and they are dense in $L^p$), so to estimate $I$ we can choose $\delta > 0$ small enough that $\|f_{(t)} - f\|_p^p < \varepsilon$ for $|t| < \delta$, whence
$$I < \frac{1}{2\pi}\int_{|t| < \delta} \varepsilon P_r(t)\, dt \le \varepsilon.$$
For $II$,
$$II \le (2\|f\|_p)^p \frac{1}{2\pi}\int_{\delta < |t| \le \pi} P_r(t)\, dt = \frac{2^p \|f\|_p^p}{2\pi}\int_{\delta < |t| \le \pi} P_r(t)\, dt.$$


Now the last integral can be made as small as we like by choosing $r$ sufficiently close to $1$, since $P_r(t)$ is an approximate identity, and so we are done.

For the third part, if $\theta$ is a point of continuity of $f$, let $\varepsilon > 0$ and choose $\delta > 0$ so that $|t| < \delta$ implies $|f(e^{i(\theta - t)}) - f(e^{i\theta})| < \varepsilon$. Then
$$|f(re^{i\theta}) - f(e^{i\theta})| \le \frac{1}{2\pi}\int_{|t| < \delta} |f(e^{i(\theta - t)}) - f(e^{i\theta})| P_r(t)\, dt + \frac{1}{2\pi}\int_{\delta < |t| \le \pi} |f(e^{i(\theta - t)}) - f(e^{i\theta})| P_r(t)\, dt$$
$$\le \frac{1}{2\pi}\int_{|t| < \delta} \varepsilon P_r(t)\, dt + \frac{1}{2\pi}\int_{\delta < |t| \le \pi} |f(e^{i(\theta - t)}) - f(e^{i\theta})| \varepsilon\, dt,$$
by choosing $r$ close enough to $1$ that $P_r(t) < \varepsilon$ on $\delta \le |t| \le \pi$ (possible by the bound in Lemma 12.2.2). Hence

$$|f(re^{i\theta}) - f(e^{i\theta})| \le \varepsilon + \big(\|f\|_1 + |f(e^{i\theta})|\big)\varepsilon,$$
which we can make arbitrarily small.

Finally, for the fourth part: if $f$ is continuous then it is uniformly continuous, so $f$ is bounded, we can bound $|f(e^{i\theta})| \le \|f\|_\infty$ above, and we can take the same $\delta$ for every $\theta$.
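As an illustration of part (iii) (a sketch of my own, not from the notes): take the step function $f = \chi_{[0,\pi)}$ on the boundary and watch its Poisson extension approach $f(\theta)$ at a point of continuity as $r \to 1$.

# Illustration (not from the lecture) of Theorem 12.2.1 (iii): the Poisson
# extension of a boundary function converges, as r -> 1, to the boundary
# value at a point of continuity.
import numpy as np

def poisson(r, t):
    return (1 - r**2) / (1 - 2 * r * np.cos(t) + r**2)

t = np.linspace(-np.pi, np.pi, 40001)
dt = t[1] - t[0]
f = (t >= 0).astype(float)                  # step function, discontinuous only at 0 and +-pi

def poisson_extension(r, theta):
    # f(re^{i theta}) = (1/2pi) * integral of f(t) P_r(theta - t) dt, as a Riemann sum
    return np.sum(f * poisson(r, theta - t)) * dt / (2 * np.pi)

theta = 1.0                                 # a point of continuity, f(theta) = 1
for r in (0.5, 0.9, 0.99):
    print(r, poisson_extension(r, theta))   # approaches 1 as r -> 1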

Remark 13.1.1. The classical Dirichlet problem on $D$ is: assume $f$ is continuous on $[-\pi, \pi)$ and periodic, and assume $\Delta u = 0$ on $D$ and $u|_{\partial D} = f$. Then

$$\lim_{r \to 1} u(re^{i\theta}) = f(e^{i\theta}),$$
which we proved: the Poisson kernel to the rescue!

Now we want to try to do sort of the opposite: if we have a harmonic function inside $D$, can we say that it is the Poisson integral of something on the boundary?

Theorem 13.1.2. Suppose $f$ is a complex-valued harmonic function on $D$. Set $f_r(\theta) = f(re^{i\theta})$. Then

(i) for $1 < p \le \infty$, $f$ is the Poisson integral of an $L^p$ function if and only if $\sup_{0 < r < 1} \|f_r\|_p < \infty$;

(ii) $f$ is the Poisson integral of a finite (complex) measure if and only if $\sup_{0 < r < 1} \|f_r\|_1 < \infty$;

(iii) $f$ is the Poisson integral of a positive measure if and only if $f \ge 0$ on $D$.

Remark 13.1.3. The last theorem already gives one of the directions of (i) and (ii), and the forward direction of (iii) is obvious.

Remark 13.1.4. To see that the measure consideration in (ii) is strictly necessary, notice how
$$P_r(\theta) = \frac{1 - r^2}{1 - 2r\cos\theta + r^2}$$
is harmonic as a function of $z = re^{i\theta}$ (easy way: it is the real part of the analytic function $(1 + z)/(1 - z)$, i.e. of the Herglotz kernel above). Now we have

$$\lim_{r \to 1} P_r(\theta) = 0$$
for all $\theta \ne 0$, while at $\theta = 0$ the limit is $\infty$. Hence

$$P_r(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t)\, d\delta(t)$$
is the Poisson integral of the Dirac measure.
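To spell the remark out (a one-line computation of my own): since $P_r \ge 0$ and $\frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(t)\, dt = 1$ by Lemma 12.2.2, we have
$$\|P_r\|_1 = \frac{1}{2\pi}\int_{-\pi}^{\pi} |P_r(\theta)|\, d\theta = 1 \quad\text{for every } 0 < r < 1,$$
so $\sup_{0 < r < 1}\|P_r\|_1 < \infty$ and the hypothesis of (ii) holds; yet the boundary object produced is the Dirac measure rather than an $L^1$ function.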

Lecture 14 Converse Problem

Date: March 12th, 2020.

14.1 Converse to this problem

Proof. First let us handle the easy half. If $f(re^{i\theta}) = P_r * f(e^{i\theta})$ for some boundary function $f \in L^p$, then $f_r \to f$ in $L^p$ by Theorem 12.2.1, and so $\sup_r \|f_r\|_p < \infty$.

Similarly, if $f(re^{i\theta}) = P_r * \mu(e^{i\theta})$ for a finite measure $\mu$, then

$$\frac{1}{2\pi}\int_{-\pi}^{\pi} |f(re^{i\theta})|\, d\theta = \frac{1}{2\pi}\int_{-\pi}^{\pi} \Big|\frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t)\, d\mu(t)\Big|\, d\theta \le \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t)\, d\theta\, d|\mu|(t) = \frac{1}{2\pi}|\mu|([-\pi, \pi)),$$
where $|\mu|([-\pi, \pi))$ is the total variation of $\mu$.

Finally, if $\mu$ is a positive measure, then

$$f(re^{i\theta}) = \frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t)\, d\mu(t) \ge 0$$
since the integrand and the measure are nonnegative.

Now for the hard half. First, for $1 < p \le \infty$: without loss of generality we can assume (by normalising appropriately) that $\sup_r \|f_r\|_p < 1$. So $\{f_r\}_r$ is in the unit ball of $L^p$.

Now since $L^p = (L^q)^*$, where $\frac{1}{p} + \frac{1}{q} = 1$ (by which we mean that every bounded linear functional $T \colon L^q \to \mathbb{C}$ is of the form

$$Tf = \frac{1}{2\pi}\int_{-\pi}^{\pi} f g\, d\theta$$
for some $g \in L^p$, or similar representation theorems).

Hence the $f_r$ are also in the unit ball of $(L^q)^*$. By the Banach–Alaoglu theorem there exist a subsequence $f_{r_j}$ and a $g \in (L^q)^* = L^p$ such that $f_{r_j} \to g$ weak-$*$. So

$$\int_{-\pi}^{\pi} f_{r_j}(\theta) h(\theta)\, d\theta \to \int_{-\pi}^{\pi} g(\theta) h(\theta)\, d\theta$$

for all $h \in L^q$. In particular, fixing $0 < r < 1$ and $\theta$,
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t) f_{r_j}(t)\, dt \to \frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t) g(t)\, dt$$

since $P_r(\theta - t)$ is a perfectly innocent $L^q$ function of $t$ for $r$ and $\theta$ fixed. In other words (the Poisson integral of $f_{r_j}$ at $re^{i\theta}$ being $f(r r_j e^{i\theta})$), on the one hand
$$f(r r_j e^{i\theta}) \to \frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t) g(t)\, dt$$
as $j \to \infty$, and on the other hand

$$f(r r_j e^{i\theta}) \to f(re^{i\theta})$$
as $r_j \to 1$, so $f(re^{i\theta}) = P_r * g(e^{i\theta})$, finishing the first part.

For the second part, the $p = 1$ case, again normalise so that $\sup_r \|f_r\|_1 < 1$. Then $\{f_r\}$ is in the unit ball of $L^1$, which is contained in the unit ball of finite measures, which is equal to the unit ball of $C([-\pi, \pi])^*$.

So there exist a subsequence $f_{r_j}$ and a measure $\mu$ such that $f_{r_j} \to \mu$ weak-$*$, i.e.,

$$\int_{-\pi}^{\pi} f_{r_j}(\theta) h(\theta)\, d\theta \to \int_{-\pi}^{\pi} h(\theta)\, d\mu(\theta)$$
for all $h \in C([-\pi, \pi])$. Again fixing $r$ and $\theta$, we have
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t) f_{r_j}(t)\, dt = f(r r_j e^{i\theta}),$$
which on the one hand converges to
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t)\, d\mu(t)$$
as $j \to \infty$ by weak-$*$ convergence, and on the other hand converges to $f(re^{i\theta})$ as $r_j \to 1$, finishing the second part.

Finally, if $f \ge 0$ on $D$, then the measure is positive. To see this, notice how from the second part we know that
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} f_{r_j}(\theta) h(\theta)\, d\theta \to \frac{1}{2\pi}\int_{-\pi}^{\pi} h(\theta)\, d\mu(\theta)$$
for all $h \in C([-\pi, \pi])$. In particular this is true for $h \ge 0$, and even more particularly it is true for $h \ge 0$ approximating the characteristic functions of open intervals, and hence for combinations of the same; so we see that $\mu$ is nonnegative on any set, i.e., $\mu \ge 0$.

Playing the same classic $\varepsilon/3$ game we have played many times before, we then easily get:

Theorem 14.1.1. If $f(re^{i\theta}) = P_r * h(e^{i\theta})$ with $h \in L^1([-\pi, \pi))$, then

$$\lim_{r \to 1} f(re^{i\theta}) = h(e^{i\theta})$$
almost everywhere.

Proof. Let $\varepsilon > 0$ and $k \in \mathbb{N}$. Choose $g$ continuous such that $\|h - g\|_1 < \frac{1}{k}$. Then

$$|f(re^{i\theta}) - h(e^{i\theta})| = |P_r * h(e^{i\theta}) - h(e^{i\theta})| \le |P_r * (h - g)(e^{i\theta})| + |P_r * g(e^{i\theta}) - g(e^{i\theta})| + |g(e^{i\theta}) - h(e^{i\theta})|.$$

Recall how we know, because Pr(θ) is an approximate identity and using a wedding cake argument, that

$$\sup_{0 < r < 1} \frac{1}{2\pi}\Big|\int_{-\pi}^{\pi} P_r(\theta - t)\varphi(t)\, dt\Big| \le C M\varphi(\theta),$$
where $M$ is the Hardy–Littlewood maximal function. Then we have

$$Tf(\theta) := \limsup_{r \to 1}\,|P_r * h(e^{i\theta}) - h(e^{i\theta})| \le C M(h - g)(e^{i\theta})$$
almost everywhere (the middle term above vanishes in the limit since $g$ is continuous, and the last term satisfies $|g - h| \le M(h - g)$ almost everywhere, so it can be absorbed into the constant). Hence

$$\big|\{\theta : Tf(\theta) > \varepsilon\}\big| \le \Big|\Big\{\theta : M(h - g)(e^{i\theta}) > \frac{\varepsilon}{C}\Big\}\Big| \le \frac{C\|h - g\|_1}{\varepsilon/C} \le \frac{C^2}{\varepsilon k},$$
using the type of weak type-1 inequalities we have established in the past. Now let $k \to \infty$ and we see that
$$\big|\{\theta : Tf(\theta) > \varepsilon\}\big| = 0,$$
so $Tf(\theta) = 0$ almost everywhere.

This theorem is not true for measures: the limit does not recover the measure, but instead recovers the absolutely continuous part of the measure, as was proved by Fatou in 1904 (though in less generality):

Theorem 14.1.2. Let $\mu$ be a finite complex-valued measure on the unit circle. Suppose $d\mu = g(\theta)\, d\theta + d\nu$ where $g \in L^1([-\pi, \pi))$, $d\nu$ and $d\theta$ have disjoint support, and $d\nu$ is singular, meaning it is supported on a set of measure $0$. If
$$f(re^{i\theta}) = \frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t)\, d\mu(t),$$
then
$$\lim_{r \to 1} f(re^{i\theta}) = g(\theta)$$
almost everywhere.

We have already explored a particular example of this:

Example 14.1.3. We have

$$P_r(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(\theta - t)\, d\delta(t),$$
where the limit as $r \to 1$ is $0$ if $\theta \ne 0$ and infinity if $\theta = 0$. In other words, we do not recover the delta measure in the limit. N
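To see Fatou's theorem in action numerically, here is a sketch of my own (not from the lecture): take $d\mu = \cos(t)\, dt + d\nu$ with $\nu$ the same (suitably normalised) point mass at $0$ as in the example above. Away from $\theta = 0$ the Poisson integral recovers the density $\cos\theta$, and the singular part contributes nothing in the limit.

# Illustration (not from the lecture) of Theorem 14.1.2: the Poisson integral
# of dmu = cos(t) dt + (point mass at 0) converges to cos(theta) for theta != 0.
import numpy as np

def poisson(r, t):
    return (1 - r**2) / (1 - 2 * r * np.cos(t) + r**2)

t = np.linspace(-np.pi, np.pi, 100001)
dt = t[1] - t[0]
g = np.cos(t)                                # absolutely continuous density

def poisson_integral(r, theta):
    ac = np.sum(g * poisson(r, theta - t)) * dt / (2 * np.pi)  # the g(t) dt part
    singular = poisson(r, theta)             # point mass at 0, normalised as in Example 14.1.3
    return ac + singular

theta = 2.0                                  # away from the support of the singular part
for r in (0.9, 0.99, 0.999):
    print(r, poisson_integral(r, theta), np.cos(theta))   # first value approaches cos(theta)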

Index

A
Abel mean ...... 7
absolutely continuous part ...... 39
approximate identity ...... 9, 19

B
Baire category theorem ...... 14
Banach space ...... 14
Banach–Alaoglu theorem ...... 29
Banach–Steinhaus theorem, see principle of uniform boundedness ...... 15
Bessel's inequality ...... 6
box kernel ...... 19

C
Cauchy kernel ...... 31
Cauchy's theorem ...... 31
Cauchy–Riemann equations ...... 31
Cesàro mean ...... 7
character ...... 2
conjugate kernel ...... 34
convergence
  L^p ...... 1
  almost everywhere ...... 1
  almost uniform ...... 1
  in measure ...... 2
  pointwise ...... 1
  uniform ...... 1
convolution ...... 4

D
Dini's test ...... 18
Dirichlet kernel ...... 8
Dirichlet problem ...... 34
dual group ...... 2
dual space ...... 28

F
Fejér kernel ...... 8
Fejér mean ...... 7
Fourier coefficient ...... 1
  of measure ...... 2
Fourier multipliers ...... 7
Fourier series ...... 1

H
Haar measure ...... 2
harmonic function ...... 31
Herglotz kernel ...... 33
Herglotz's theorem ...... 27

I
inner product ...... 5

J
Jensen's inequality ...... 4

K
Kronecker delta ...... 5

L
LCA group, see locally compact abelian group
Lebesgue differentiation theorem ...... 19
Lebesgue's dominated convergence theorem ...... 13
locally compact abelian group ...... 2

M
maximal function ...... 19

P
partial sum ...... 1
Poisson kernel ...... 32
positive definite ...... 27
principle of localisation ...... 18
principle of uniform boundedness ...... 13, 15

R
Riemann–Lebesgue lemma ...... 3

S
singular measure ...... 39

T
Tauberian theorem ...... 16
total variation ...... 37
trigonometric polynomial ...... 12

W
weak topology ...... 28
weak type inequality ...... 21
weak-∗ topology ...... 29
wedding cake decomposition ...... 23

Y
Young's inequality ...... 4