Lecture 7: Inversion, Plancherel and Convolution
Johan Thim ([email protected])
April 2, 2021

"You should not drink and bake" -- Mark Kaminski

1 Inversion of the Fourier Transform

So suppose that we have $u \in G(\mathbb{R})$ and have calculated the Fourier transform $\mathcal{F}u(\omega)$. Can we recover the function we started with from $\mathcal{F}u(\omega)$? Considering that the Fourier transform is constructed by multiplication with $e^{-i\omega x}$ followed by integration, what would happen if we multiplied by $e^{i\omega x}$ and integrated again? Formally,
$$
\int_{-\infty}^{\infty} \mathcal{F}u(\omega)\, e^{i\omega x}\,d\omega
= \lim_{R\to\infty} \int_{-R}^{R} \int_{-\infty}^{\infty} u(t)\, e^{-i\omega t} e^{i\omega x}\,dt\,d\omega
= \lim_{R\to\infty} \int_{-R}^{R} \int_{-\infty}^{\infty} u(t)\, e^{-i\omega(x-t)}\,dt\,d\omega
$$
$$
= \lim_{R\to\infty} \int_{-\infty}^{\infty} u(t) \left( \int_{-R}^{R} e^{-i\omega(x-t)}\,d\omega \right) dt,
$$
where we changed the order of integration (this can be motivated), but we're left with something kind of weird in the inner parenthesis and we would probably like to move the limit inside the outer integral. First, let's look at the expression in the inner parenthesis:
$$
\int_{-R}^{R} e^{-i\omega(x-t)}\,d\omega
= \left[ \frac{e^{-i\omega(x-t)}}{-i(x-t)} \right]_{\omega=-R}^{R}
= -\frac{e^{-iR(x-t)}}{i(x-t)} + \frac{e^{iR(x-t)}}{i(x-t)}
= \frac{2\sin\bigl(R(x-t)\bigr)}{x-t}, \qquad x \neq t.
$$

The Dirichlet kernel (on the real line)

Definition. We define the Dirichlet kernel for the Fourier transform by
$$
D_R(x) = \frac{\sin(Rx)}{\pi x}, \qquad x \neq 0,\ R > 0,
$$
and $D_R(0) = R/\pi$.

Note that we changed the normalization of the function. There's a reason for this and we'll get to that soon. For now, observe that
$$
\frac{1}{2\pi} \int_{-\infty}^{\infty} \mathcal{F}u(\omega)\, e^{i\omega x}\,d\omega
= \lim_{R\to\infty} \int_{-\infty}^{\infty} u(t)\, D_R(x-t)\,dt
= \lim_{R\to\infty} \int_{-\infty}^{\infty} u(t+x)\, D_R(t)\,dt.
$$
You probably recall the sinc function; the Dirichlet kernel on the real line is such a function, and its graph is shown below for a couple of values of $R$.

[Figure: graphs of $D_R$ for $R = 1, 2, 5, 7$ on $-9 \le x \le 9$.]

The Fourier inversion formula

Theorem. If $u \in G(\mathbb{R})$ has right- and left-hand derivatives at $x$, then
$$
\lim_{R\to\infty} \frac{1}{2\pi} \int_{-R}^{R} \mathcal{F}u(\omega)\, e^{i\omega x}\,d\omega = \frac{u(x^+) + u(x^-)}{2}.
$$

Proof. First, we write
$$
\frac{1}{2\pi} \int_{-R}^{R} \mathcal{F}u(\omega)\, e^{i\omega x}\,d\omega
= \int_{-\infty}^{0} u(t+x)\, D_R(t)\,dt + \int_{0}^{\infty} u(t+x)\, D_R(t)\,dt
$$
and claim that
$$
\int_{-\infty}^{0} u(t+x)\, D_R(t)\,dt \to \frac{u(x^-)}{2}
\qquad\text{and}\qquad
\int_{0}^{\infty} u(t+x)\, D_R(t)\,dt \to \frac{u(x^+)}{2},
$$
as $R \to \infty$. We prove the second identity (the first is proved analogously). To this end, we split the integral into two parts:
$$
\int_{0}^{\infty} u(t+x)\, D_R(t)\,dt = \int_{0}^{\pi} u(t+x)\, D_R(t)\,dt + \int_{\pi}^{\infty} u(t+x)\, D_R(t)\,dt.
$$
The reason for this is that we need to exploit different properties of $u$ to prove the desired result. First, let $x$ be fixed. Then the function $t \mapsto t^{-1} u(t+x)$ is in $G(\mathbb{R})$, so the Riemann-Lebesgue lemma implies that
$$
\lim_{R\to\infty} \int_{\pi}^{\infty} u(t+x)\, D_R(t)\,dt
= \lim_{R\to\infty} \frac{1}{\pi} \int_{\pi}^{\infty} \frac{u(t+x)}{t}\,\sin(Rt)\,dt = 0.
$$
Turning our attention to the first integral, we write
$$
\int_{0}^{\pi} u(t+x)\, D_R(t)\,dt
= \int_{0}^{\pi} \bigl( u(t+x) - u(x^+) \bigr) D_R(t)\,dt + \int_{0}^{\pi} u(x^+)\, D_R(t)\,dt
$$
$$
= \int_{0}^{\pi} \bigl( u(t+x) - u(x^+) \bigr) D_R(t)\,dt + u(x^+) \int_{0}^{\pi} D_R(t)\,dt.
$$
Since $D^+u(x)$ exists (by assumption), it is clear that the difference quotient
$$
\frac{u(t+x) - u(x^+)}{t}
$$
is bounded and that this expression belongs to $E([0, \pi])$. Therefore, the Riemann-Lebesgue lemma (again!) implies that
$$
\lim_{R\to\infty} \int_{0}^{\pi} \bigl( u(t+x) - u(x^+) \bigr) D_R(t)\,dt
= \lim_{R\to\infty} \frac{1}{\pi} \int_{0}^{\pi} \frac{u(t+x) - u(x^+)}{t}\,\sin(Rt)\,dt = 0.
$$
Finally, we observe that
$$
\int_{0}^{\pi} D_R(t)\,dt
= \frac{1}{\pi} \int_{0}^{\pi} \frac{\sin(Rt)}{t}\,dt
= \{\, x = Rt \,\}
= \frac{1}{\pi} \int_{0}^{R\pi} \frac{\sin x}{x}\,dx
\to \frac{1}{\pi} \cdot \frac{\pi}{2} = \frac{1}{2}, \quad \text{as } R \to \infty,
$$
due to the following result.

Theorem. $\displaystyle \int_{0}^{\infty} \frac{\sin x}{x}\,dx = \frac{\pi}{2}$.

We defer the proof of this until the end of the lecture.

Uniqueness

Corollary. If $u, v \in G(\mathbb{R})$ and $\mathcal{F}u(\omega) = \mathcal{F}v(\omega)$ for every $\omega \in \mathbb{R}$, then $u(x) = v(x)$ for all $x \in \mathbb{R}$ where $u$ and $v$ are continuous and $D^\pm u(x)$ and $D^\pm v(x)$ exist.
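To see how the truncated inverse behaves near a jump, here is a small numerical sketch (in Python, assuming NumPy is available; the example function and the grid parameters are illustrative choices, not part of the lecture). It uses $u(x) = e^{-x}$ for $x \ge 0$ and $u(x) = 0$ for $x < 0$, whose Fourier transform with the convention above is $\mathcal{F}u(\omega) = 1/(1 + i\omega)$, and evaluates $\frac{1}{2\pi}\int_{-R}^{R} \mathcal{F}u(\omega)e^{i\omega x}\,d\omega$ at the jump $x = 0$ and at $x = 1$.

```python
import numpy as np

# Truncated inverse transform of Fu(w) = 1 / (1 + i w), the transform of
# u(x) = e^{-x} for x >= 0 and u(x) = 0 for x < 0.  At the jump x = 0 the
# inversion theorem predicts the limit (u(0+) + u(0-)) / 2 = 1/2, and at
# x = 1 it predicts u(1) = e^{-1}.

def truncated_inverse(x, R, n=400_000):
    """Approximate (1/2pi) * int_{-R}^{R} Fu(w) e^{iwx} dw by the trapezoidal rule."""
    w = np.linspace(-R, R, n)
    integrand = np.exp(1j * w * x) / (1 + 1j * w)
    dw = w[1] - w[0]
    integral = np.sum((integrand[:-1] + integrand[1:]) / 2) * dw
    # The imaginary part cancels by symmetry of the grid, so we return the real part.
    return integral.real / (2 * np.pi)

for R in (10, 100, 1000):
    print(f"R = {R:5d}:  x=0 -> {truncated_inverse(0.0, R):.4f} (limit 0.5)"
          f"   x=1 -> {truncated_inverse(1.0, R):.4f} (limit {np.exp(-1):.4f})")
```

At $x = 0$ the values approach $1/2$, the mean of the one-sided limits, and at $x = 1$ they approach $e^{-1}$, in line with the theorem.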
An Airy equation

Find a (formal) expression for a nonzero solution to $u''(x) - x\,u(x) = 0$.

Solution. Assuming that $u \in G(\mathbb{R})$ is twice differentiable with $u', u'' \in G(\mathbb{R})$, we can take the Fourier transform and obtain that
$$
(i\omega)^2 U(\omega) - i\,U'(\omega) = 0
\;\Leftrightarrow\;
U'(\omega) - i\omega^2 U(\omega) = 0
\;\Leftrightarrow\;
\frac{d}{d\omega}\Bigl( e^{-i\omega^3/3}\, U(\omega) \Bigr) = 0
\;\Leftrightarrow\;
U(\omega) = C e^{i\omega^3/3},
$$
where $C$ is an arbitrary constant (and we used an integrating factor to solve the differential equation). Therefore,
$$
u(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} C e^{i\omega^3/3}\, e^{i\omega x}\,d\omega
= D \int_{-\infty}^{\infty} e^{i(\omega^3/3 + \omega x)}\,d\omega,
$$
where $D$ is some constant, might be an expression for a solution. Now the question is of course whether this integral is convergent. Certainly it is not absolutely integrable (why?), and we cannot claim that the expression solves the equation by previous results. This is an instance where we would like to extend the Fourier transform to a larger class of functions.

2 The Fourier Transform of the Fourier Transform

Looking at the inverse Fourier transform, it is almost the same as the Fourier transform. Indeed, the only differences are the sign in the exponent of the exponential and the factor in front of the integral. This means that the inverse transform has pretty much the same properties as the Fourier transform. It also means the following useful result.

Theorem. If $u, U \in G(\mathbb{R})$ and $U(\omega) = \mathcal{F}(u)(\omega)$, then
$$
\mathcal{F}^{-1}(U)(x) = \frac{1}{2\pi}\,\mathcal{F}\bigl(\mathcal{F}u(-\omega)\bigr)(x)
\qquad\text{and}\qquad
\mathcal{F}\bigl(\mathcal{F}u(\omega)\bigr)(x) = 2\pi\, u(-x),
$$
for every $x$ where $u$ is continuous and $D^\pm u(x)$ exist.

This follows immediately from the definitions of the transforms and the result above. The assumption that $D^\pm u(x)$ exist is superfluous, but we do not know that at this point (we'll show it next lecture). If $u$ is discontinuous, but still in $G(\mathbb{R})$, then the equalities still hold if we view the results as elements of $L^1(\mathbb{R})$, meaning that the difference has $L^1$-norm zero.

Example

Find the Fourier transform of $\dfrac{1}{1+x^2}$.

Solution. Let $u = \dfrac{1}{2} e^{-|x|}$. We know from before that $\mathcal{F}(u)(\omega) = \mathcal{F}\bigl(e^{-|x|}/2\bigr)(\omega) = \dfrac{1}{1+\omega^2}$, and since both $u$ and $x \mapsto \dfrac{1}{1+x^2}$ belong to $G(\mathbb{R})$ and are continuous with right- and left-hand derivatives at every point, we find that
$$
2\pi\, u(-x) = \mathcal{F}\bigl(\mathcal{F}u(\omega)\bigr)(x) = \mathcal{F}\left( \frac{1}{1+\omega^2} \right)(x),
$$
so since $2\pi\, u(-x) = 2\pi \cdot \dfrac{1}{2} e^{-|-x|} = 2\pi \cdot \dfrac{1}{2} e^{-|x|}$, it is clear that $\mathcal{F}\left( \dfrac{1}{1+x^2} \right)(\omega) = \pi e^{-|\omega|}$.

3 Convolution

A useful type of "product" of two functions is the convolution (sv. faltning), defined as follows.

Convolution

Definition. The convolution $u * v \colon \mathbb{R} \to \mathbb{C}$ of two functions $u \colon \mathbb{R} \to \mathbb{C}$ and $v \colon \mathbb{R} \to \mathbb{C}$ is defined by
$$
(u * v)(x) = \int_{-\infty}^{\infty} u(t)\, v(x-t)\,dt, \qquad x \in \mathbb{R},
$$
whenever this integral exists.

So when does this integral exist?

Theorem. If $u, v \in L^1(\mathbb{R})$, then $u * v \in L^1(\mathbb{R})$.

Proof. We first prove that $u * v$ is absolutely integrable:
$$
\int_{-\infty}^{\infty} |u * v(x)|\,dx
= \int_{-\infty}^{\infty} \left| \int_{-\infty}^{\infty} u(t)\, v(x-t)\,dt \right| dx
\le \{\text{monotonicity}\} \le \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |u(t)\, v(x-t)|\,dt\,dx
$$
$$
= \{\text{Fubini}\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |u(t)\, v(x-t)|\,dx\,dt
= \int_{-\infty}^{\infty} |u(t)| \left( \int_{-\infty}^{\infty} |v(x-t)|\,dx \right) dt.
$$
Note now that
$$
\int_{-\infty}^{\infty} |v(x-t)|\,dx = \{\, s = x-t \,\} = \int_{-\infty}^{\infty} |v(s)|\,ds,
$$
so
$$
\int_{-\infty}^{\infty} |u(t)| \left( \int_{-\infty}^{\infty} |v(x-t)|\,dx \right) dt
= \int_{-\infty}^{\infty} |u(t)|\,dt \int_{-\infty}^{\infty} |v(s)|\,ds < \infty.
$$
A more compact way of stating this result is that
$$
\| u * v \|_{L^1(\mathbb{R})} \le \| u \|_{L^1(\mathbb{R})} \| v \|_{L^1(\mathbb{R})}.
$$
The right-hand side is finite by assumption. This does ensure that $u * v$ exists as an element of $L^1(\mathbb{R})$, but we need to be a bit more precise in this course.
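The norm inequality is easy to test numerically. The following is a minimal sketch (in Python, assuming NumPy is available; the test functions, the grid and the cut-off at $|x| = 20$ are arbitrary illustrative choices) that approximates the convolution integral by a Riemann sum via np.convolve and compares the approximate $L^1$ norms.

```python
import numpy as np

# Discrete check of  ||u*v||_1 <= ||u||_1 ||v||_1.
# Both functions are sampled on a uniform grid and the convolution integral
# is approximated by a Riemann sum, i.e. np.convolve(...) * dx.

dx = 0.01
x = np.arange(-20, 20, dx)                 # coarse grid: np.convolve is O(n^2)

u = np.exp(-np.abs(x))                     # ||u||_1 = 2 (up to truncation at |x| = 20)
v = np.where(np.abs(x) <= 1, x, 0.0)       # sign-changing cut-off, ||v||_1 = 1

conv = np.convolve(u, v, mode="same") * dx # Riemann-sum approximation of (u*v)(x)

def l1_norm(f):
    """Riemann-sum approximation of the L^1 norm on the grid."""
    return np.sum(np.abs(f)) * dx

print(f"||u||_1   ~ {l1_norm(u):.4f}")
print(f"||v||_1   ~ {l1_norm(v):.4f}")
print(f"||u*v||_1 ~ {l1_norm(conv):.4f}  <=  {l1_norm(u) * l1_norm(v):.4f}")
```

With a sign-changing $v$ as above the inequality is strict; for nonnegative $u$ and $v$ it becomes an equality, which follows from the same Fubini computation without the absolute values.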
Theorem. If $u, v \in G(\mathbb{R})$ and either $u$ or $v$ is bounded, then $u * v \in L^1(\mathbb{R})$ is continuous and bounded.

Proof. Assume that $u$ is bounded. Note that
$$
|u * v(x)| = \left| \int_{-\infty}^{\infty} u(t)\, v(x-t)\,dt \right|
\le \| u \|_{\infty} \int_{-\infty}^{\infty} |v(x-t)|\,dt
= \| u \|_{\infty} \| v \|_{L^1(\mathbb{R})},
$$
so clearly $u * v$ is bounded. Moreover, using the same type of estimate,
$$
|u * v(x+h) - u * v(x)| \le \| u \|_{\infty} \int_{-\infty}^{\infty} |v(x+h-t) - v(x-t)|\,dt, \tag{1}
$$
which would prove that $u * v$ is continuous, provided that the integral in the right-hand side tends to zero as $h \to 0$. That this integral tends to zero, however, is not obvious.
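By the substitution $s = x - t$, the integral in (1) equals $\int_{-\infty}^{\infty} |v(s+h) - v(s)|\,ds$, which does not depend on $x$. As a rough numerical illustration of why one can still hope that it vanishes as $h \to 0$ even for a discontinuous $v$ (a sketch in Python assuming NumPy, not a proof; the step function below is just an illustrative choice), take $v = 1_{[0,1]}$, for which the integral equals $2|h|$ when $|h| \le 1$.

```python
import numpy as np

# Riemann-sum approximation of  w(h) = int |v(s + h) - v(s)| ds
# for the step function v = 1_[0,1].  The exact value is 2|h| for |h| <= 1,
# so w(h) -> 0 as h -> 0 even though v has jumps.

dx = 1e-5
s = np.arange(-2, 3, dx)
v = ((s >= 0) & (s <= 1)).astype(float)

def w(h):
    """Approximate int |v(s + h) - v(s)| ds on the grid."""
    shifted = ((s + h >= 0) & (s + h <= 1)).astype(float)  # v(s + h), evaluated exactly
    return np.sum(np.abs(shifted - v)) * dx

for h in (0.5, 0.1, 0.01, 0.001):
    print(f"h = {h:6.3f}:  w(h) ~ {w(h):.4f}   (exact 2|h| = {2 * h:.4f})")
```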