The good, the bad and the ugly of kernels: why the Dirichlet kernel is not a good kernel

Peter Haggstrom
www.gotohaggstrom.com
[email protected]

May 24, 2016

1 Background

Even though Dirichlet's name is often associated with number theory, he also did fundamental work on the convergence of Fourier series. Dirichlet's rigorous insights into the many subtle issues surrounding Fourier theory laid the foundation for what students study today. In the early part of the 19th century Fourier advanced the stunning, even outrageous, idea that an arbitrary function defined on $(-\pi, \pi)$ could be represented by an infinite trigonometric series of sines and cosines thus:

\[
f(x) = a_0 + \sum_{k=1}^{\infty} \big[ a_k \cos(kx) + b_k \sin(kx) \big] \tag{1}
\]

He did this in his theory of heat published in 1822, although in the mid 1750s Daniel Bernoulli had also conjectured that the shape of a vibrating string could be represented by a trigonometric series. Fourier's insights predated electrical and magnetic theory by several years, and yet one of the widest applications of Fourier theory is in electrical engineering.

The core of Fourier theory is to establish the conditions under which (1) is true. It is a complex and highly subtle story requiring some sophisticated analysis. Applied users of Fourier theory will rarely descend into the depths of analytical detail devoted to rigorous convergence proofs. Indeed, in some undergraduate courses on Fourier theory, the Sampling Theorem is proved on a "faith" basis using distribution theory.

In what follows I have used Professor Elias Stein and Rami Shakarchi's book [Stein-Shakarchi] as a foundation for fleshing out the motivations, properties and uses of "good" kernels. The reason for this is simple - Elias Stein is the best communicator of the whole edifice of this part of analysis. I have left no stone unturned in terms of detail in the proofs of various properties, and while some students who are sufficiently "in the zone" can gloss over the detail, others may well benefit from it. An example is the nuts and bolts of the basic Tauberian style proofs which are often ignored in undergraduate analysis courses.

2 Building blocks

Using properties of the sine and cosine functions such as the following (where $k$ and $m$ are integers):

\[
\int_{-\pi}^{\pi} \sin(kx)\sin(mx)\,dx =
\begin{cases}
0 & \text{if } k \neq m \text{ or } k = m = 0\\
\pi & \text{if } k = m \neq 0
\end{cases}
\]

\[
\int_{-\pi}^{\pi} \sin(kx)\cos(mx)\,dx = 0
\]

\[
\int_{-\pi}^{\pi} \cos(kx)\cos(mx)\,dx =
\begin{cases}
0 & \text{if } k \neq m\\
2\pi & \text{if } k = m = 0\\
\pi & \text{if } k = m \neq 0
\end{cases}
\]

the coefficients of the Fourier series expansion can be recovered as:

\[
a_0 = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\,dx \tag{2}
\]

\[
a_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\cos(kx)\,dx \qquad k \geq 1 \tag{3}
\]

\[
b_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\sin(kx)\,dx \qquad k \geq 1 \tag{4}
\]

If you have forgotten how to derive the basic sine and cosine formulas set out above, just recall that:

\[
\int_{-\pi}^{\pi} \sin(kx)\,dx = \int_{-\pi}^{\pi} \cos(kx)\,dx = 0 \quad \text{for } k = 1, 2, 3, \dots
\]

You also need:

\[
\cos(kx)\cos(mx) = \tfrac{1}{2}\big[\cos((k-m)x) + \cos((k+m)x)\big],
\]

\[
\sin(kx)\sin(mx) = \tfrac{1}{2}\big[\cos((k-m)x) - \cos((k+m)x)\big],
\]

and

\[
\sin(kx)\cos(mx) = \tfrac{1}{2}\big[\sin((k-m)x) + \sin((k+m)x)\big].
\]

The partial sums of the Fourier series of $f$ can be expressed as follows:

\[
f_n(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx + \sum_{k=1}^{n}\left[\left(\frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\cos(kt)\,dt\right)\cos(kx) + \left(\frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\sin(kt)\,dt\right)\sin(kx)\right] \tag{5}
\]

\[
= \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx + \frac{1}{\pi}\int_{-\pi}^{\pi}\left[\sum_{k=1}^{n} \cos(kt)\cos(kx) + \sin(kt)\sin(kx)\right] f(t)\,dt \tag{6}
\]

The exchange of summation and integration is justified because the sums are finite.
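Before going further, it may help to see (2)-(5) in action. The Python sketch below is purely illustrative: the test function $f(x) = |x|$ (extended $2\pi$ periodically), the truncation order and the use of scipy quadrature are my own choices, not anything prescribed above. It recovers the coefficients via (2)-(4) and evaluates the partial sum (5) at a few points.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative test function on (-pi, pi); any 2*pi periodic integrable f will do.
f = lambda x: np.abs(x)

def fourier_coeffs(f, n):
    """Compute a_0 and a_k, b_k for k = 1..n via equations (2)-(4)."""
    a0 = quad(f, -np.pi, np.pi)[0] / (2 * np.pi)
    a = np.array([quad(lambda x: f(x) * np.cos(k * x), -np.pi, np.pi)[0] / np.pi
                  for k in range(1, n + 1)])
    b = np.array([quad(lambda x: f(x) * np.sin(k * x), -np.pi, np.pi)[0] / np.pi
                  for k in range(1, n + 1)])
    return a0, a, b

def partial_sum(x, a0, a, b):
    """Evaluate the partial sum f_n(x) of equation (5)."""
    k = np.arange(1, len(a) + 1)
    return a0 + np.sum(a * np.cos(k * x) + b * np.sin(k * x))

a0, a, b = fourier_coeffs(f, n=25)
for x in (0.5, 1.0, 2.0):
    print(x, f(x), partial_sum(x, a0, a, b))  # f_25(x) is already close to |x|
```

For this continuous, piecewise smooth choice of $f$ the partial sums settle down quickly; the interesting convergence questions arise for rougher functions, which is precisely where the behaviour of the Dirichlet kernel matters.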
Using the addition formula $\cos(kt)\cos(kx) + \sin(kt)\sin(kx) = \cos(k(t - x))$ in (6), we have:

\[
f_n(x) = \frac{1}{\pi}\int_{-\pi}^{\pi}\left[\frac{1}{2} + \sum_{k=1}^{n}\cos(k(t-x))\right] f(t)\,dt \tag{7}
\]

The simplification of $\sum_{k=1}^{n}\cos(k(t-x))$ leads to the Dirichlet kernel, so we need to find a nice closed expression for $\frac{1}{2} + \sum_{k=1}^{n}\cos(ku)$, and what better way to search for a closed form than to simply experiment with a couple of low order cases. Thus for $n = 1$ we have to find a nice expression for $\frac{1}{2} + \cos u$. We know that $\cos u = \sin(u + \frac{\pi}{2})$, so in analogy with that why not investigate $\sin(u + \frac{u}{2})$ and see what emerges?

\[
\begin{aligned}
\sin(u + \tfrac{u}{2}) &= \sin u \cos(\tfrac{u}{2}) + \sin(\tfrac{u}{2})\cos u\\
&= 2\sin(\tfrac{u}{2})\cos^2(\tfrac{u}{2}) + \cos u \sin(\tfrac{u}{2})\\
&= \sin(\tfrac{u}{2})\big(2\cos^2(\tfrac{u}{2}) + \cos u\big)\\
&= \sin(\tfrac{u}{2})(\cos u + 1 + \cos u)\\
&= \sin(\tfrac{u}{2})(2\cos u + 1)
\end{aligned} \tag{8}
\]

Hence we have that:

\[
\frac{1}{2} + \cos u = \frac{\sin(u + \frac{u}{2})}{2\sin(\frac{u}{2})} \tag{9}
\]

With this little building block we gamely extrapolate as follows:

\[
\frac{1}{2} + \cos u + \cos 2u + \dots + \cos nu = \frac{\sin\big((2n+1)\frac{u}{2}\big)}{2\sin(\frac{u}{2})} \tag{10}
\]

To prove that the formula is valid for all $n$, all we need to do is apply a standard induction argument. We have already established the base case of $n = 1$ since $\frac{1}{2} + \cos u = \frac{\sin(u + \frac{u}{2})}{2\sin(\frac{u}{2})} = \frac{\sin(\frac{3u}{2})}{2\sin(\frac{u}{2})}$. As usual we assume the formula holds for any $n$, so that:

\[
\frac{1}{2} + \cos u + \cos 2u + \dots + \cos nu + \cos((n+1)u) = \frac{\sin\big((2n+1)\frac{u}{2}\big)}{2\sin(\frac{u}{2})} + \cos((n+1)u) = \frac{TOP}{2\sin(\frac{u}{2})} \tag{11}
\]

where

\[
\begin{aligned}
TOP &= \sin(nu + \tfrac{u}{2}) + 2\sin(\tfrac{u}{2})\cos\big((nu + \tfrac{u}{2}) + \tfrac{u}{2}\big)\\
&= \sin(nu)\cos(\tfrac{u}{2}) + \cos(nu)\sin(\tfrac{u}{2}) + 2\sin(\tfrac{u}{2})\cos(nu + \tfrac{u}{2})\cos(\tfrac{u}{2}) - 2\sin(\tfrac{u}{2})\sin(nu + \tfrac{u}{2})\sin(\tfrac{u}{2})\\
&= \sin(nu)\cos(\tfrac{u}{2}) + \cos(nu)\sin(\tfrac{u}{2}) + 2\sin(\tfrac{u}{2})\cos(\tfrac{u}{2})\cos(nu)\cos(\tfrac{u}{2}) - 2\sin(\tfrac{u}{2})\cos(\tfrac{u}{2})\sin(nu)\sin(\tfrac{u}{2})\\
&\qquad - 2\sin^2(\tfrac{u}{2})\sin(nu)\cos(\tfrac{u}{2}) - 2\sin^2(\tfrac{u}{2})\cos(nu)\sin(\tfrac{u}{2})\\
&= \big(1 - 2\sin^2(\tfrac{u}{2})\big)\sin(nu)\cos(\tfrac{u}{2}) + \big(1 - 2\sin^2(\tfrac{u}{2})\big)\cos(nu)\sin(\tfrac{u}{2}) + \sin u\cos(nu)\cos(\tfrac{u}{2}) - \sin u\sin(nu)\sin(\tfrac{u}{2})\\
&= \cos u\sin(nu)\cos(\tfrac{u}{2}) + \cos u\cos(nu)\sin(\tfrac{u}{2}) + \sin u\cos(nu)\cos(\tfrac{u}{2}) - \sin u\sin(nu)\sin(\tfrac{u}{2})\\
&= \cos u\sin(nu + \tfrac{u}{2}) + \sin u\cos(nu + \tfrac{u}{2}) = \sin(u + nu + \tfrac{u}{2})\\
&= \sin\big((2n+3)\tfrac{u}{2}\big)
\end{aligned} \tag{12}
\]

Hence we do get $\frac{1}{2} + \cos u + \cos 2u + \dots + \cos nu + \cos((n+1)u) = \frac{\sin((2n+3)\frac{u}{2})}{2\sin(\frac{u}{2})}$. Thus the formula is true for $n + 1$.

If you find that derivation tedious you could start with:

\[
\cos(ku)\sin(\tfrac{u}{2}) = \tfrac{1}{2}\big\{\sin\big((k + \tfrac{1}{2})u\big) - \sin\big((k - \tfrac{1}{2})u\big)\big\} \tag{13}
\]

Then you get:

\[
\begin{aligned}
\sin(\tfrac{u}{2})\sum_{k=1}^{n}\cos(ku) &= \tfrac{1}{2}\sum_{k=1}^{n}\big\{\sin\big((k + \tfrac{1}{2})u\big) - \sin\big((k - \tfrac{1}{2})u\big)\big\}\\
&= \tfrac{1}{2}\Big[\big(\sin(\tfrac{3u}{2}) - \sin(\tfrac{u}{2})\big) + \big(\sin(\tfrac{5u}{2}) - \sin(\tfrac{3u}{2})\big) + \dots + \big(\sin\big((n + \tfrac{1}{2})u\big) - \sin\big((n - \tfrac{1}{2})u\big)\big)\Big]\\
&= \tfrac{1}{2}\Big[-\sin(\tfrac{u}{2}) + \sin\big((n + \tfrac{1}{2})u\big)\Big]
\end{aligned} \tag{14}
\]

Hence on dividing both sides of (14) by $\sin(\frac{u}{2})$ we have that:

\[
\cos u + \cos 2u + \dots + \cos nu = -\frac{1}{2} + \frac{\sin\big((n + \frac{1}{2})u\big)}{2\sin(\frac{u}{2})}
\]

Finally we have that

\[
\frac{1}{2} + \cos u + \cos 2u + \dots + \cos nu = \frac{\sin\big((n + \frac{1}{2})u\big)}{2\sin(\frac{u}{2})}
\]

So going back to (7) we have:

\[
f_n(x) = \frac{1}{\pi}\int_{-\pi}^{\pi}\frac{\sin\big(\frac{(2n+1)(t-x)}{2}\big)}{2\sin\big(\frac{t-x}{2}\big)}\, f(t)\,dt \tag{15}
\]

\[
f_n(x) = \frac{1}{\pi}\int_{-\pi+x}^{\pi+x}\frac{\sin\big(\frac{(2n+1)(t-x)}{2}\big)}{2\sin\big(\frac{t-x}{2}\big)}\, f(t)\,dt \tag{16}
\]

This works because $f(t + 2\pi) = f(t)$, i.e. $f$ is $2\pi$ periodic, as are $\sin$ and $\cos$. The product of two $2\pi$ periodic functions is also $2\pi$ periodic since $f(x + 2\pi)g(x + 2\pi) = f(x)g(x)$.
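The closed form (10), which is what makes (15) possible, is easy to sanity-check numerically. In the Python sketch below the order $n$ and the grid of $u$ values are arbitrary illustrative choices of mine; the grid deliberately stays away from the points where $\sin(\frac{u}{2}) = 0$, since there the quotient in (10) only has a removable singularity.

```python
import numpy as np

def cosine_sum(u, n):
    """Left-hand side of (10): 1/2 + sum_{k=1}^{n} cos(k*u), for an array of u values."""
    k = np.arange(1, n + 1)
    return 0.5 + np.sum(np.cos(np.outer(u, k)), axis=1)

def dirichlet_closed_form(u, n):
    """Right-hand side of (10): sin((2n+1)u/2) / (2 sin(u/2))."""
    return np.sin((2 * n + 1) * u / 2) / (2 * np.sin(u / 2))

n = 7
u = np.linspace(0.01, 2 * np.pi - 0.01, 500)   # avoid u = 0 and u = 2*pi where sin(u/2) = 0
print(np.max(np.abs(cosine_sum(u, n) - dirichlet_closed_form(u, n))))
```

The printed maximum discrepancy sits at floating point round-off level, which is what we expect if (10) holds identically.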
Comment on integrals of $2\pi$ periodic functions

A point on a circle can be represented by $e^{i\theta}$ and is unique up to integer multiples of $2\pi$. If $F$ is a function "on the circle" then for each real $\theta$ we define $f(\theta) = F(e^{i\theta})$. Thus $f$ is $2\pi$ periodic since $f(\theta) = f(\theta + 2\pi)$. All the qualities of $f$ such as continuity, integrability and differentiability apply on any interval of length $2\pi$. There are some fundamental manipulations you can do with $2\pi$ periodic functions.

If we assume that $f$ is $2\pi$ periodic and is integrable on any finite interval $[a, b]$ where $a$ and $b$ are real, we have:

\[
\int_{a}^{b} f(x)\,dx = \int_{a+2\pi}^{b+2\pi} f(x)\,dx = \int_{a-2\pi}^{b-2\pi} f(x)\,dx \tag{17}
\]

Noting that $f(x) = f(x \pm 2\pi)$ because of the periodicity and making the substitution $u = x \pm 2\pi$, we see that (using $u = x + 2\pi$ as our substitution to illustrate):

\[
\int_{a}^{b} f(x)\,dx = \int_{a}^{b} f(x + 2\pi)\,dx = \int_{a+2\pi}^{b+2\pi} f(u)\,du \tag{18}
\]

The substitution $u = x - 2\pi$ leads to $\int_{a-2\pi}^{b-2\pi} f(x)\,dx$.

The following relationships also prove useful:

\[
\int_{-\pi}^{\pi} f(x + a)\,dx = \int_{-\pi}^{\pi} f(x)\,dx = \int_{-\pi+a}^{\pi+a} f(x)\,dx \tag{19}
\]

The substitution $u = x + a$ turns $\int_{-\pi}^{\pi} f(x + a)\,dx$ into $\int_{-\pi+a}^{\pi+a} f(u)\,du$, which is the right-hand integral in (19). That this also equals $\int_{-\pi}^{\pi} f(x)\,dx$ follows because the integral of a $2\pi$ periodic function over any interval of length $2\pi$ is the same: split $\int_{-\pi+a}^{\pi+a}$ as $\int_{-\pi+a}^{\pi} + \int_{\pi}^{\pi+a}$ and shift the second piece back by $2\pi$ using (17) to recover $\int_{-\pi}^{\pi} f(x)\,dx$.
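A quick numerical illustration of (17) and (19): in the Python sketch below the $2\pi$ periodic test function g, the endpoints and the shift are all arbitrary choices of mine, made purely to exercise the identities.

```python
import numpy as np
from scipy.integrate import quad

# Any 2*pi periodic integrable function works; this particular g is purely illustrative.
g = lambda x: np.cos(3 * x) + 0.5 * np.sin(x) ** 2

a, b, shift = -1.3, 2.4, 0.7

# Equation (17): shifting the interval of integration by 2*pi changes nothing.
print(quad(g, a, b)[0], quad(g, a + 2 * np.pi, b + 2 * np.pi)[0])

# Equation (19): translating the integrand, or the interval, by 'shift' changes nothing either.
print(quad(lambda x: g(x + shift), -np.pi, np.pi)[0],
      quad(g, -np.pi, np.pi)[0],
      quad(g, -np.pi + shift, np.pi + shift)[0])
```

All three values printed on the last line agree to quadrature accuracy, as (19) predicts.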