ELLIPTIC REGULARITY Analysis 3 Report
Maxim Jeffs
December 12, 2016

Introduction

In applications of Mathematics, elliptic partial differential equations typically describe stationary or steady-state processes. Laplace's equation, for instance, ubiquitous in Mathematical Physics, describes phenomena as diverse as steady fluid flows, static charge distributions and gravitating systems. In these applications, the physical interpretation would lead one to expect solutions to be extremely smooth. Mathematically, ellipticity is defined as a seemingly innocuous condition on the coefficients of the highest-order partial derivatives. This condition has remarkable consequences, implying for instance that all weak solutions are in fact infinitely differentiable. Informally, this arises because the ellipticity condition allows one to construct a parametrix, a suitable substitute for an inverse operator. In order to carry this out precisely, it is necessary to make the notion of a partial differential operator somewhat more robust, leading to the generalised notion of a pseudodifferential operator, which we introduce in the third section and use to prove the central results for elliptic partial differential equations.

The Symbol and Ellipticity

A partial differential operator of order $m$ is a linear map $P$ on smooth functions from $\mathbb{R}^N$ to $\mathbb{C}^n$ of the form
\[ P = \sum_{|\alpha| \le m} A_\alpha(x)\,\partial^\alpha \]
where the $A_\alpha$ are smooth matrix-valued functions. The most important part of a partial differential operator $P$ is its principal symbol. To motivate this definition, let $u : \mathbb{R}^N \to \mathbb{C}^n$ be a Schwartz function. The Fourier inversion theorem then tells us that
\[ (Pu)(x) = (2\pi)^{-N/2} \int_{\mathbb{R}^N} e^{ix\cdot p} \sum_{|\alpha| \le m} i^{|\alpha|}\, A_\alpha(x)\,\hat{u}(p)\,p^\alpha \, dp \tag{1} \]
so that the action of $P$ is largely determined by its principal symbol
\[ \tilde{\sigma}(P)(x,p) := i^m \sum_{|\alpha| = m} A_\alpha(x)\,p^\alpha \]
which is simply a matrix-valued polynomial. The elliptic operators are defined to be those for which the principal symbol is invertible (as a matrix) for all $x$ and all $p \neq 0$. For instance, the scalar Laplacian
\[ \Delta = -\sum_{i=1}^{N} \frac{\partial^2}{\partial x_i^2} \]
has principal symbol $\tilde{\sigma}(x,p) = |p|^2$, so that it is indeed elliptic.

Admittedly, the significance of the ellipticity condition may well seem somewhat obscure. The idea is that we can attempt to invert an elliptic partial differential operator by defining an operator
\[ (Qu)(x) = (2\pi)^{-N/2} \int_{\mathbb{R}^N} e^{ix\cdot p}\,[\tilde{\sigma}(P)(x,p)]^{-1}\,\hat{u}(p) \, dp \]
Unfortunately, this does not actually work: the composition of operators is quite a bit more subtle. However, what one can do is consider a formal series of generalised integral operators that gives an inverse for the operator, up to some compact, infinitely smoothing operator. This shall be our programme for the subsequent sections.

Pseudodifferential Operators

Equation (1) should be suggestive: what if we were to replace the polynomial expression by a more general function? We ought to impose some growth requirements, since we would not want our function to grow too much faster than a polynomial. Formally, we say that a smooth matrix-valued function $\sigma(x,p)$ is a (total) symbol of order $m$ if for all multi-indices $\alpha, \beta$ there exists a constant $C$ with $|D_x^\alpha D_p^\beta \sigma(x,p)| \le C(1+|p|)^{m-|\beta|}$. We then define the associated pseudodifferential operator of order $m$ by
\[ (Pu)(x) = (2\pi)^{-N/2} \int_{\mathbb{R}^N} e^{ix\cdot p}\,\sigma(x,p)\,\hat{u}(p) \, dp \]
It should at least be clear that if $u$ is a Schwartz function and $\sigma(x,p)$ has compact $x$-support, then the resulting function $Pu$ will also be Schwartz, so in this respect it will still behave like a differential operator.
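As a quick concrete check of this definition (a worked example of our own, not taken from the report), consider the scalar function $\sigma(x,p) = (1+|p|^2)^{-1}$, which is the kind of total symbol one would hope to attach to an inverse of $1+\Delta$. The estimates below show that it is a symbol of order $-2$.

% Our worked estimates for \sigma(x,p) = (1+|p|^2)^{-1}; C denotes a constant
% depending only on the number of derivatives taken.
\begin{align*}
|\sigma(x,p)| &= (1+|p|^2)^{-1} \le 2\,(1+|p|)^{-2},\\
|D_{p_j}\sigma(x,p)| &= 2\,|p_j|\,(1+|p|^2)^{-2} \le C\,(1+|p|)^{-3},\\
|D_{p_k}D_{p_j}\sigma(x,p)| &= \bigl|\,8p_jp_k - 2\delta_{jk}(1+|p|^2)\,\bigr|\,(1+|p|^2)^{-3} \le C\,(1+|p|)^{-4}.
\end{align*}
% Each further p-derivative improves the decay by one power, and x-derivatives
% vanish, so |D_x^\alpha D_p^\beta \sigma| \le C_{\alpha\beta}(1+|p|)^{-2-|\beta|},
% i.e. \sigma is a symbol of order -2.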
At this point, the reader may wonder what happens if we formally rewrite $P$ as an integral operator using
\[ (Pu)(x) = (2\pi)^{-N/2} \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} e^{i(x-y)\cdot p}\,\sigma(x,p)\,u(y) \, dp \, dy \;\;\text{``}{=}\text{''}\;\; \int_{\mathbb{R}^N} K(x,y)\,u(y) \, dy \]
where we have set
\[ K(x,y) = (2\pi)^{-N/2} \int_{\mathbb{R}^N} e^{i(x-y)\cdot p}\,\sigma(x,p) \, dp \tag{2} \]
This, of course, is complete nonsense: $K(x,y)$ does not even converge in a meaningful sense if $\sigma(x,p)$ is a polynomial. However, being analysts, we are not deterred: the original expression for $P$ still makes sense, and we may regard $K$ as a formal kernel. Equation (2) is an example of an oscillatory integral, and the theory of pseudodifferential operators has deep connections to harmonic analysis and the theory of distributions, which provide the language in which it is properly formulated (see [Nic10]). We will follow a more lowbrow route; instead of directly trying to give meaning to the kernel $K$, we will cleverly avoid the matter entirely by working formally with the symbols of the operators rather than with the operators themselves!

Firstly, note that if we take $m$ above to be negative, then we would expect a symbol of order $m$ to yield an operator that is 'smoothing' in some sense (to be made precise later), with the amount of smoothing increasing as $m \to -\infty$. This corresponds to the observation that when $m$ is sufficiently negative, the corresponding formal kernel will actually be integrable, and we will be able to take a number of derivatives before it becomes singular again. Because of this, we say that
\[ \sigma \sim \sum_{j=1}^{\infty} \sigma_j \]
is a formal development for a symbol $\sigma$ if each $\sigma_j$ is a symbol of order $m_j$ with $m_j \to -\infty$, and if for every $m \in \mathbb{N}$ the symbol $\sigma - \sum_{j=1}^{n} \sigma_j$ is of order $-m$ for all sufficiently large $n$. Returning to the idea of kernels, this may be regarded as an appropriate substitute for 'convergence' of the kernels. Although this analogy is somewhat difficult to make precise, it is the central idea of the symbol calculus for pseudodifferential operators. To get the theory off the ground, it is necessary to develop some fairly technical lemmata, which we sketch in the next sections, omitting the routine verifications and illustrating the main ideas that make the theory work. In the sequel, we will often omit factors of $(2\pi)^{-N}$, with the understanding that the arguments are essentially unchanged.

The Symbol Calculus

We begin with the fundamental:

Theorem 1. Every formal series is the formal development of some pseudodifferential operator.

Proof. The proof of this theorem is not difficult, but rather unenlightening. The idea is to take a smooth cutoff function $\varphi : [0,\infty) \to [0,1]$ with $\varphi \equiv 0$ on $[0,1]$ and $\varphi \equiv 1$ on $[2,\infty)$, and then 'glue' the symbols together using
\[ \sigma(x,p) = \sum_{j=1}^{\infty} \varphi\!\left(\frac{|p|}{r_j}\right) \sigma_j(x,p) \]
where the $r_j$ are real constants tending to infinity, to be chosen appropriately. There is no problem with convergence here, since the sum is finite at any given point. The problem comes in managing the slightly tricky estimates to make sure that this does not grow too fast. ■
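To illustrate the definition of a formal development, and the role of the cutoff in the proof just sketched, here is a small example of our own (not taken from the report): with $\varphi$ as above, the symbol $\sigma(x,p) = (1+|p|^2)^{-1}$ considered earlier can be expanded as a geometric series in $|p|^{-2}$, with the cutoff taming the singularity at the origin.

% Our illustration: a formal development of (1+|p|^2)^{-1}, using the cutoff
% \varphi from the proof of Theorem 1 (\varphi \equiv 0 on [0,1], \equiv 1 on [2,\infty)).
\begin{align*}
\sigma_j(x,p) &:= (-1)^{j-1}\,\varphi(|p|)\,|p|^{-2j},
  \qquad\text{a symbol of order } m_j = -2j,\\
\sigma(x,p) - \sum_{j=1}^{n}\sigma_j(x,p)
  &= \frac{(-1)^{n}\,|p|^{-2n}}{1+|p|^2}
  = O\bigl((1+|p|)^{-2n-2}\bigr) \qquad\text{for } |p| \ge 2,
\end{align*}
% while on |p| \le 2 the difference is smooth with compact p-support. Analogous
% estimates hold for all derivatives, so \sigma \sim \sum_j \sigma_j in the sense above.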
The next important question to answer is when we can regard formal kernels as pseudodifferential operators. This is answered by

Lemma 1. Suppose $A(x,y,p)$ is a smooth matrix-valued function with compact $x,y$-support such that, for a fixed $m$ and any given multi-indices $\alpha, \beta, \gamma$, there is some constant $C$ such that $|D_x^\alpha D_y^\beta D_p^\gamma A(x,y,p)| \le C(1+|p|)^{m-|\gamma|}$. Then the operator
\[ (Tu)(x) = (2\pi)^{-N} \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} e^{i(x-y)\cdot p}\,A(x,y,p)\,u(y) \, dy \, dp \]
is a pseudodifferential operator with symbol
\[ \sigma \sim \sum_{\alpha} \frac{i^{|\alpha|}}{\alpha!}\,\bigl(D_p^\alpha D_y^\alpha A\bigr)(x,x,p) \]

Proof. As usual, what we really want to say here is that the kernel
\[ K(x,y) = \int_{\mathbb{R}^N} e^{i(x-y)\cdot p}\,A(x,y,p) \, dp \]
can be expanded as a 'Taylor series'. We get around the possible singularity of the kernel $K$ by an ingenious calculation, defining a symbol by
\[ \sigma(x,q) = \int_{\mathbb{R}^N} e^{ix\cdot(p-q)}\,\hat{A}(x,p-q,p) \, dp = \int_{\mathbb{R}^N} e^{ix\cdot p'}\,\hat{A}(x,p',p'+q) \, dp' \]
where the Fourier transform of $A$ is taken in the second variable, so that
\[ (Tu)(x) = \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} e^{i(x-y)\cdot p}\,A(x,y,p)\,u(y) \, dy \, dp = \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} e^{ix\cdot p}\,\hat{A}(x,p-q,p)\,\hat{u}(q) \, dq \, dp = \int_{\mathbb{R}^N} e^{ix\cdot q}\,\sigma(x,q)\,\hat{u}(q) \, dq \]
using the convolution theorem and the fact that the changes in the order of integration can actually be justified this time. Now we can simply expand $\hat{A}$ using Taylor's formula!
\[ \hat{A}(x,p',p'+q) = \sum_{|\alpha| \le \ell} \frac{i^{|\alpha|}}{\alpha!}\,\bigl(D_q^\alpha \hat{A}\bigr)(x,p',q)\,(p')^\alpha + R_\ell(x,p',p'+q) \]
for some remainder function $R_\ell$, and then use the properties of the Fourier transform to conclude that
\[ \sigma(x,q) = \sum_{|\alpha| \le \ell} \frac{i^{|\alpha|}}{\alpha!}\,\bigl(D_p^\alpha D_y^\alpha A\bigr)(x,x,q) + r_\ell(x,q) \]
for a remainder term $r_\ell(x,q)$ that we can easily show to have small order as $\ell \to \infty$, using the bounds on $A$. ■

For 'genuine' differential operators, the support of $Pu$ is always a subset of the support of $u$. One might ask whether this holds for pseudodifferential operators also. The answer is negative in general, but we do have

Proposition 1. Suppose $P$ is a pseudodifferential operator with symbol $\sigma(x,p)$ that has compact $x$-support. Then for all $\varepsilon > 0$, there exists a pseudodifferential operator $P'$ with a symbol that is a formal development of $\sigma$, such that any point in the support of $P'u$ is a distance of at most $\varepsilon$ from the support of $u$, for any Schwartz function $u$.
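The excerpt ends before a proof of Proposition 1 appears. As an illustration of how Lemma 1 can be brought to bear on it, here is a sketch of the standard construction; this is our own addition and should not be read as the report's argument, and the cutoff $\chi$, the amplitude $A$ and the symbol $\sigma_{P'}$ are notation introduced only for this sketch.

% A sketch, under the hypotheses of Proposition 1: fix \varepsilon > 0 and choose
% \chi \in C_c^\infty(\mathbb{R}^N) with \chi \equiv 1 on \{|z| \le \varepsilon/2\}
% and \chi \equiv 0 outside \{|z| < \varepsilon\}. Set
\begin{align*}
A(x,y,p) := \chi(x-y)\,\sigma(x,p), \qquad
(P'u)(x) := (2\pi)^{-N}\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}
  e^{i(x-y)\cdot p}\,A(x,y,p)\,u(y)\,dy\,dp.
\end{align*}
% The y-integral vanishes whenever dist(x, supp u) > \varepsilon, which gives the
% support condition, and A has compact x,y-support and satisfies the estimates of
% Lemma 1. Since \chi \equiv 1 near the diagonal, (D_y^\alpha A)(x,x,p) = 0 for
% \alpha \neq 0, so Lemma 1 gives
\begin{align*}
\sigma_{P'}(x,p) \;\sim\; \sum_{\alpha}\frac{i^{|\alpha|}}{\alpha!}
  \bigl(D_p^\alpha D_y^\alpha A\bigr)(x,x,p) \;=\; A(x,x,p) \;=\; \sigma(x,p).
\end{align*}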