Class Notes for Harmonic Analysis MATH 693
by S. W. Drury

Copyright © 2001–2005, by S. W. Drury.

Contents

1 Differentiation and the Maximal Function
  1.1 Conditional Expectation Operators
  1.2 Martingale Maximal Functions
  1.3 Fundamental Theorem of Calculus
  1.4 Homogeneous operators with nonnegative kernel
  1.5 The distribution function and the decreasing rearrangement
2 Harmonic Analysis — the LCA setting
  2.1 Gelfand's Theory of Commutative Banach Algebras
  2.2 The non-unital case
  2.3 Finding the Maximal Ideal Space
  2.4 The Spectral Radius Formula
  2.5 Haar Measure
  2.6 Translation and Convolution
  2.7 The Dual Group
  2.8 Summability Kernels
  2.9 Convolution of Measures
  2.10 Positive Definite Functions
  2.11 The Plancherel Theorem
  2.12 The Pontryagin Duality Theorem

1 Differentiation and the Maximal Function

We start with the remarkable Vitali Covering Lemma. The action takes place on $\mathbb{R}$, but there are corresponding statements in $\mathbb{R}^d$. We denote by $\lambda$ the Lebesgue measure.

LEMMA 1 (Vitali Covering Lemma)  Let $\mathcal{S}$ be a family of bounded open intervals in $\mathbb{R}$ and let $S$ be a Lebesgue subset of $\mathbb{R}$ with $\lambda(S) < \infty$ and such that
$$S \subseteq \bigcup_{I \in \mathcal{S}} I.$$
Then there exist $N \in \mathbb{N}$ and pairwise disjoint intervals $I_1, I_2, \ldots, I_N$ of $\mathcal{S}$ such that
$$\lambda(S) \leq 4 \sum_{n=1}^{N} \lambda(I_n). \qquad (1.1)$$

Proof. Since $\lambda(S) < \infty$, there exists a compact set $K \subseteq S$ with $\lambda(K) > \frac{3}{4}\lambda(S)$. Now, since $K \subseteq \bigcup_{I \in \mathcal{S}} I$, there are just finitely many intervals $J_1, J_2, \ldots, J_M$ of $\mathcal{S}$ with $K \subseteq \bigcup_{m=1}^{M} J_m$. Let these intervals be arranged in order of decreasing length, so that $1 \leq m_1 < m_2 \leq M$ implies $\lambda(J_{m_1}) \geq \lambda(J_{m_2})$. We will call this the J list. We proceed algorithmically. If the J list is empty, then $N = 0$ and we stop. Otherwise, let $I_1$ be the first element of the J list (in this case $J_1$). Now remove from the J list all intervals that meet $I_1$ (including $I_1$ itself). If the J list is empty, then $N = 1$ and we stop.
Otherwise, let $I_2$ be the first remaining element of the J list. Now remove from the J list all intervals that meet $I_2$ (including $I_2$ itself). If the J list is empty, then $N = 2$ and we stop. Otherwise, let $I_3$ be the first remaining element of the J list and remove from the J list all intervals that meet $I_3$. Eventually the process must stop, because there are only finitely many elements in the J list to start with.

Clearly the $I_n$ are pairwise disjoint: if $I_{n_1}$ meets $I_{n_2}$ and $1 \leq n_1 < n_2 \leq N$, then, immediately after $I_{n_1}$ was chosen, all those intervals of the J list which meet $I_{n_1}$ were removed. Since $I_{n_2}$ was eventually chosen from this list, it must be that $I_{n_1} \cap I_{n_2} = \emptyset$.

Now we claim that every $J_m$ is contained in an interval $I_n^\star$, which is our notation for the interval with the same centre as $I_n$ but three times the length. To see this, suppose that $J_m$ was removed from the J list immediately after the choice of $I_n$. Then $J_m$ was in the J list immediately prior to the choice of $I_n$, and we must have $\operatorname{length}(J_m) \leq \operatorname{length}(I_n)$, for otherwise $J_m$ would be strictly longer than $I_n$ and $I_n$ would not have been chosen as a longest interval at that stage. Also, $J_m$ must meet $I_n$, because it was removed immediately after the choice of $I_n$. It therefore follows that $J_m \subseteq I_n^\star$.

So $K \subseteq \bigcup_{m=1}^{M} J_m \subseteq \bigcup_{n=1}^{N} I_n^\star$ and $\lambda(K) \leq \sum_{n=1}^{N} \lambda(I_n^\star) = 3 \sum_{n=1}^{N} \lambda(I_n)$. It follows that $\lambda(S) < \frac{4}{3}\lambda(K) \leq 4 \sum_{n=1}^{N} \lambda(I_n)$, which is (1.1).

We now obtain an estimate for the Hardy–Littlewood maximal function. Let us define, for $f \in L^1(\mathbb{R}, \mathcal{L}, \lambda)$,
$$Mf(x) = \sup_{\substack{I \text{ open interval} \\ x \in I}} \frac{1}{\lambda(I)} \int_I |f(t)| \, d\lambda(t).$$

THEOREM 2  We have $\lambda(\{x \,;\, Mf(x) > s\}) \leq 4 s^{-1} \|f\|_1$.

The result says that $Mf$ satisfies a Tchebychev-type inequality for $L^1$. It is easy to see that we do not necessarily have $Mf \in L^1$. For example, if $f = \mathbf{1}_{[-1,1]}$, then we have
$$Mf(x) = \begin{cases} 1 & \text{if } |x| \leq 1, \\[4pt] \dfrac{2}{|x| + 1} & \text{if } |x| \geq 1, \end{cases}$$
and $Mf$ is not integrable even though $f$ is.
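The greedy selection in the proof of Lemma 1 is genuinely algorithmic, and it can be sketched in a few lines of Python. This is a sketch of my own (the notes contain no code, and the function name and sample intervals are illustrative): intervals are modelled as open `(a, b)` pairs.

```python
def vitali_select(intervals):
    """Greedy selection from the proof of the Vitali Covering Lemma.

    Sort the finitely many intervals by decreasing length (the 'J list'),
    then repeatedly keep the first remaining interval and discard every
    interval that meets it.  The kept intervals are pairwise disjoint, and
    each discarded interval is no longer than the kept interval it met, so
    it lies inside that interval's threefold dilation.
    """
    j_list = sorted(intervals, key=lambda iv: iv[1] - iv[0], reverse=True)
    selected = []
    while j_list:
        a, b = j_list.pop(0)
        selected.append((a, b))
        # Open intervals (c, d) and (a, b) meet exactly when c < b and a < d.
        j_list = [(c, d) for (c, d) in j_list if not (c < b and a < d)]
    return selected

print(vitali_select([(0, 3), (2, 4), (5, 6), (5.5, 8)]))  # → [(0, 3), (5.5, 8)]
```

In the example, $(2,4)$ is discarded because it meets the longer $(0,3)$, and $(5,6)$ because it meets $(5.5,8)$; both discarded intervals sit inside the threefold dilations of the survivors, exactly as the proof requires.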
Note that we may also define the centred maximal function
$$M_c f(x) = \sup_{h > 0} \frac{1}{2h} \int_{x-h}^{x+h} |f(t)| \, dt.$$

Proof. First of all, there is no loss of generality in assuming that $f \geq 0$. Let $a \in \mathbb{N}$ and let $S = [-a, a] \cap \{x \,;\, Mf(x) > s\}$. Let $\mathcal{S}$ be the set of intervals $I$ such that
$$\frac{1}{\lambda(I)} \int_I f(t) \, d\lambda(t) > s. \qquad (1.2)$$
Then if $x \in S$ there is some open interval $I$ containing $x$ such that (1.2) holds, and so $I$ is in $\mathcal{S}$. Hence the hypotheses of the Vitali Covering Lemma are satisfied. We can then find disjoint intervals $I_1, \ldots, I_N$ of $\mathcal{S}$ such that $\lambda(S) \leq 4 \sum_{n=1}^{N} \lambda(I_n)$. But
$$s \sum_{n=1}^{N} \lambda(I_n) \leq \sum_{n=1}^{N} \int_{I_n} f \, d\lambda = \int \left( \sum_{n=1}^{N} \mathbf{1}_{I_n} \right) f \, d\lambda \leq \|f\|_1.$$
Note that the disjointness of the intervals is key here: it is used to show that $\sum_{n=1}^{N} \mathbf{1}_{I_n} \leq 1$. It follows that $\lambda(S) \leq 4 s^{-1} \|f\|_1$. Now it suffices to let $a \longrightarrow \infty$ to obtain the desired conclusion.

The results of this section are easily generalized to $\mathbb{R}^d$, and in a variety of ways. In the Vitali Covering Lemma, we can replace intervals either by cubes with sides parallel to the coordinate axes or by balls. In fact, you can easily find in the literature many papers generalizing this lemma by replacing intervals with various families of sets satisfying various sets of conditions. Let us take balls. The key fact about balls that one is using is the following:

If $r \geq r' > 0$, if $U$ and $U'$ are open balls of radius $r$ and $r'$ respectively, and if $U \cap U' \neq \emptyset$, then $U' \subseteq 3U$, where $3U$ denotes the ball with the same centre as $U$ and radius $3r$.

Then we may define the corresponding maximal function by
$$Mf(x) = \sup_{\substack{U \text{ open ball} \\ x \in U}} \frac{1}{\lambda(U)} \int_U |f(t)| \, d\lambda(t),$$
and the estimate that will be obtained is
$$\lambda(\{x \,;\, Mf(x) > s\}) \leq C_d s^{-1} \|f\|_1,$$
where $C_d$ is a constant depending only on the dimension. There are also corresponding estimates for the maximal function on $\mathbb{T}^d$, the $d$-dimensional torus.

1.1 Conditional Expectation Operators

As an example of orthogonal projections, we can look at conditional expectation operators. These arise when we have two nested σ-fields on the same set.
So, let $(X, \mathcal{F}, \lambda)$ be a measure space and suppose that $\mathcal{G} \subseteq \mathcal{F}$. Then $(X, \mathcal{G}, \lambda|_{\mathcal{G}})$ is an equally good measure space. It is easy to see that $L^2(X, \mathcal{G}, \lambda|_{\mathcal{G}})$ is a closed linear subspace of $L^2(X, \mathcal{F}, \lambda)$. It should be pointed out that trivialities can arise even when we might not expect them. For example, let $X = \mathbb{R}^2$, let $\mathcal{F}$ be the Borel σ-field of $\mathbb{R}^2$ and let $\mathcal{G}$ consist of the sets which depend only on the first coordinate. Then unfortunately $L^2(X, \mathcal{G}, \lambda|_{\mathcal{G}})$ consists just of the zero vector.

The situation is very significant in probability theory, where the σ-fields $\mathcal{F}$ and $\mathcal{G}$ encode which events are available to different "observers". For example, $\mathcal{G}$ might encode outcomes based on the first 2 rolls of the dice, while $\mathcal{F}$ might encode outcomes based on the first 4 rolls.

A useful example is the case where $X = [0, 1[$ and $\mathcal{F}$ is the Borel σ-field of $X$. Then partition $[0, 1[$ into $n$ intervals and let $\mathcal{G}$ be the σ-field generated by these intervals. We take $\lambda$ to be the linear measure on the interval. In this case $E_{\mathcal{G}}$ will turn out to be the mapping which replaces a function with its average value on each of the given intervals.

The orthogonal projection operator is denoted $E_{\mathcal{G}}$. We view it as a map
$$E_{\mathcal{G}} : L^2(X, \mathcal{F}, \lambda) \longrightarrow L^2(X, \mathcal{G}, \lambda) \subseteq L^2(X, \mathcal{F}, \lambda).$$
We usually understand $E_{\mathcal{G}}$ in terms of its properties. These are:

• $E_{\mathcal{G}}(f) \in L^2(X, \mathcal{G}, \lambda)$.
• $\int (f - E_{\mathcal{G}}(f)) g \, d\lambda = 0$ whenever $g \in L^2(X, \mathcal{G}, \lambda)$.

The probabilists will write this last condition as $E\big((f - E_{\mathcal{G}}(f)) g\big) = 0$ whenever $g \in L^2(X, \mathcal{G}, \lambda)$, where $E$ is the scalar-valued expectation.

To get much further we will need the additional assumption that $(X, \mathcal{G}, \lambda)$ is σ-finite. So we are assuming the existence of an increasing sequence of sets $G_n \in \mathcal{G}$ with $X = \bigcup_{n=1}^{\infty} G_n$ and $\lambda(G_n) < \infty$. As an exercise, the reader should check that $E_{\mathcal{G}}(\mathbf{1}_G f) = \mathbf{1}_G E_{\mathcal{G}}(f)$ for $G \in \mathcal{G}$.
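For the partition example on $[0,1[$, $E_{\mathcal{G}}$ is just block-averaging, and its two defining properties can be checked numerically. The following Python sketch is mine, not from the notes: it discretizes a function by equally spaced samples and models $E_{\mathcal{G}}$ as averaging over $n$ equal blocks of samples.

```python
import numpy as np

def cond_expectation(f_samples, n_blocks):
    """Block-averaging model of E_G on [0, 1[.

    f_samples is a fine, equally spaced discretization of f on [0, 1[, and
    G is generated by a partition of [0, 1[ into n_blocks equal intervals.
    E_G(f) takes the value (1/λ(I)) ∫_I f dλ on each partition interval I,
    i.e. the average of the samples falling in that block.
    """
    blocks = np.split(np.asarray(f_samples, dtype=float), n_blocks)
    return np.concatenate([np.full(len(b), b.mean()) for b in blocks])

x = np.linspace(0.0, 1.0, 8, endpoint=False)
f = x**2
ef = cond_expectation(f, 4)

# E_G is an orthogonal projection: applying it twice changes nothing, and
# f - E_G(f) is orthogonal to every G-measurable g, i.e. every vector
# that is constant on blocks.
g = np.repeat([1.0, -2.0, 3.0, 0.5], 2)   # an arbitrary G-measurable g
```

In this discrete model the second bullet above reads $\langle f - E_{\mathcal{G}}(f),\, g\rangle = 0$ for every block-constant $g$, which is exactly the orthogonality condition defining the projection.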