Partial Differential Equations
Mollification (With one colorful figure)

1 Lp-spaces, local integrability
I will assume you know what it means for a function to be measurable, integrable, etc. In this course, measurability always means Lebesgue measurability, and integrability means Lebesgue integrability. One very nice feature of Lebesgue integration is that if $E$ is a measurable subset of $\mathbb{R}^n$ of positive measure¹ and $f : E \to \mathbb{R}$ is measurable and non-negative ($f(x) \ge 0$ for all $x \in E$), then

$$\int_E f(x)\,dx$$

is always defined, possibly equal to $\infty$. A function $f : E \to \mathbb{R}$ is integrable over $E$, in symbols $f \in L^1(E)$, if and only if

$$\int_E |f(x)|\,dx < \infty.$$

More generally, one defines $L^p(E)$ for $1 \le p < \infty$ by: $f : E \to \mathbb{R}$ is in $L^p(E)$ if and only if

$$\int_E |f(x)|^p\,dx < \infty.$$

One also defines $f \in L^\infty(E)$ iff $\operatorname{ess\,sup}_{x \in E} |f(x)| < \infty$. The essential supremum, or ess sup, of a function $f$ over a set $E$ can be defined as the infimum (strangely enough) of the suprema of all functions $g$ that are equal a.e. to $f$:

$$\operatorname{ess\,sup}_{x \in E} f = \inf\Big\{ \sup_{x \in E} g(x) : g(x) = f(x) \text{ for a.e. } x \in E \Big\}.$$

This is notation I'll be using on occasion: Assume $p \in [1, \infty]$. The conjugate index of $p$ will be denoted by $p'$; it is defined to be the number in $[1, \infty]$ so that the equation

$$\frac{1}{p} + \frac{1}{p'} = 1$$

holds. If $p = 1$, one interprets $1/p' = 0$ as meaning $p' = \infty$; if $p = \infty$, one interprets $1/p = 0$, so $p' = 1$. It would perhaps have been easier to simply say

$$p' = \frac{p}{p-1},$$

but the equation $1/p + 1/p' = 1$ is usually preferred; it makes clear at once that $p, p'$ play symmetric roles and that $(p')' = p$. A few things to notice: $2' = 2$; if $1 \le p < 2$, then $2 < p' \le \infty$, and vice versa: $2 < p \le \infty$ implies $1 \le p' < 2$.

One defines a norm on the spaces $L^p(E)$ by

$$\|f\|_p = \|f\|_{L^p(E)} = \begin{cases} \left( \int_E |f(x)|^p\,dx \right)^{1/p}, & \text{if } p < \infty, \\ \operatorname{ess\,sup}_{x \in E} |f(x)|, & \text{if } p = \infty. \end{cases}$$

Seeing that this is a norm is not totally trivial. That is, it is trivially a norm (if we interpret, as one does, $f = 0$ to mean $f = 0$ a.e.) for $p = 1$ and for $p = \infty$. Otherwise it is not so obvious.
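The conjugate-index bookkeeping above is easy to check numerically. Here is a small sketch (the helper name `conjugate` is my own, not from the notes) implementing $p' = p/(p-1)$ with the conventions $1' = \infty$ and $\infty' = 1$:

```python
import math

def conjugate(p: float) -> float:
    """Return the conjugate index p' satisfying 1/p + 1/p' = 1, for p in [1, inf]."""
    if p == 1:
        return math.inf          # 1/p' = 0 is read as p' = infinity
    if p == math.inf:
        return 1.0               # symmetrically, 1/p = 0 gives p' = 1
    return p / (p - 1)           # the closed form p' = p/(p-1)

# 2 is self-conjugate, and conjugation is an involution: (p')' = p
assert conjugate(2) == 2.0
assert conjugate(conjugate(1.25)) == 1.25
```

The two assertions mirror the remarks in the text: $2' = 2$ and $(p')' = p$.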
To prove it is a norm also for $1 < p < \infty$, one first proves a fundamental inequality: Hölder's inequality. It states: Let $E$ be a measurable subset of $\mathbb{R}^n$ and let $f \in L^p(E)$, $g \in L^{p'}(E)$; then $fg \in L^1(E)$ and

$$\int_E |f(x)g(x)|\,dx \le \|f\|_{L^p(E)}\,\|g\|_{L^{p'}(E)}. \tag{1}$$

The proof of (1) is based on the following result, which could be a nice exercise in a Calculus course: Assume $a, b$ are two non-negative real numbers. Then, if $1 < p < \infty$ and $p' = p/(p-1)$,

$$ab \le \frac{a^p}{p} + \frac{b^{p'}}{p'}.$$

The hint one can give for this exercise is that it suffices to prove it for $a, b > 0$, to prove first that $\varphi(x) = x^p/p + x^{-p'}/p' \ge 1$ for all $x > 0$, and to apply it with $x = a^{1/p'} b^{-1/p}$. Once one has this inequality, given now $f \in L^p(E)$, $g \in L^{p'}(E)$, one applies it with $a = |f(x)|/\|f\|_p$, $b = |g(x)|/\|g\|_{p'}$ to get

$$\frac{|f(x)g(x)|}{\|f\|_p\,\|g\|_{p'}} \le \frac{1}{p}\,\frac{|f(x)|^p}{\|f\|_p^p} + \frac{1}{p'}\,\frac{|g(x)|^{p'}}{\|g\|_{p'}^{p'}}$$

for a.e. $x \in E$. Integrating over $E$ gives

$$\frac{1}{\|f\|_p\,\|g\|_{p'}} \int_E |f(x)g(x)|\,dx \le \frac{1}{p} + \frac{1}{p'} = 1.$$

This proves (1) if $1 < p < \infty$; the proof when $p = 1$ (or, equivalently, when $p = \infty$) is trivial. If $p = 2$, (1) is known as the Cauchy-Schwarz inequality in the Western part of the world. The Russians prefer to call it the Bunyakovsky inequality, since Viktor Bunyakovsky (1804-1889) had it some 25 years before Cauchy or Schwarz. With (1) it is now easy to prove, also for $1 < p < \infty$, that

$$\|f+g\|_{L^p(E)} \le \|f\|_{L^p(E)} + \|g\|_{L^p(E)} \tag{2}$$

for all $f, g \in L^p(E)$; in other words, the main step in seeing that $\|\cdot\|_{L^p(E)}$ is a norm.

¹ Lebesgue integration over null sets is a rather stupid endeavor, done only by people who love the number zero so much they can't keep their hands off it.
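The calculus exercise $ab \le a^p/p + b^{p'}/p'$ behind Hölder's inequality can be spot-checked numerically. The sketch below (the function name `young_gap` is mine, not from the notes) samples random $a, b, p$ and verifies the gap is non-negative:

```python
import random

def young_gap(a: float, b: float, p: float) -> float:
    """Right side minus left side of ab <= a**p/p + b**q/q, where q = p/(p-1)."""
    q = p / (p - 1)
    return a**p / p + b**q / q - a * b

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    p = random.uniform(1.1, 5.0)
    # small slack absorbs floating-point rounding near the equality case a**p == b**q
    assert young_gap(a, b, p) >= -1e-9
```

Equality holds exactly when $a^p = b^{p'}$; for instance `young_gap(1.0, 1.0, 2.0)` is `0.0`.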
The cases $p = 1, \infty$ being, as mentioned, trivial (I hope that if I say a sufficient number of times that it is trivial you will believe it), one assumes $1 < p < \infty$ and proceeds as follows (notice that $(p-1)p' = p$):

$$\|f+g\|_p^p = \int_E |f+g|^p\,dx = \int_E |f+g|\,|f+g|^{p-1}\,dx \le \int_E |f|\,|f+g|^{p-1}\,dx + \int_E |g|\,|f+g|^{p-1}\,dx$$
$$\le \left( \int_E |f|^p\,dx \right)^{1/p} \left( \int_E |f+g|^{(p-1)p'}\,dx \right)^{1/p'} + \left( \int_E |g|^p\,dx \right)^{1/p} \left( \int_E |f+g|^{(p-1)p'}\,dx \right)^{1/p'}$$
$$= \|f\|_p\,\|f+g\|_p^{p/p'} + \|g\|_p\,\|f+g\|_p^{p/p'}.$$

Dividing both sides by $\|f+g\|_p^{p/p'}$, (2) follows, since $p - (p/p') = p - (p-1) = 1$.

FACT: The spaces $L^p(E)$ are complete in the metric defined by the $\|\cdot\|_{L^p(E)}$ norm, thus Banach spaces.

Definition 1 Let $U$ be open in $\mathbb{R}^n$ and let $f : U \to \mathbb{R}$ be measurable. We say $f$ is locally integrable in $U$, and write $f \in L^1_{loc}(U)$, iff

$$\int_K |f(x)|\,dx < \infty$$

for all compact sets $K \subset U$. Notice that if the restriction of $f$ to a compact set $K$ satisfies $\|f\|_{L^p(K)} < \infty$, then

$$\int_K |f(x)|\,dx < \infty.$$

This is clear if $p = 1$, since then both statements say the same thing. Otherwise, with $\chi_K$ denoting the characteristic function of $K$ and $|K|$ the Lebesgue measure of $K$,

$$\int_K |f(x)|\,dx = \int_K \chi_K(x)|f(x)|\,dx \le \|\chi_K\|_{L^{p'}(K)}\,\|f\|_{L^p(K)} = |K|^{1/p'}\,\|f\|_{L^p(K)} < \infty.$$

2 Convolutions

Let $f, g : \mathbb{R}^n \to \mathbb{R}$ be measurable. Then it is not too hard to show that for almost all $x \in \mathbb{R}^n$, the function $y \mapsto f(x-y)g(y)$ is measurable. If (and only if) it is also integrable for almost all $x \in \mathbb{R}^n$, one defines a function $f * g : \mathbb{R}^n \to \mathbb{R}$ by

$$f * g(x) = \int_{\mathbb{R}^n} f(x-y)g(y)\,dy$$

for almost all $x \in \mathbb{R}^n$. I do not know (and I don't really care to know) the EXACT conditions on $f, g$ so that this product is defined. But it is useful to know some basic conditions under which the convolution product is defined.

• If $f \in L^p(\mathbb{R}^n)$, $g \in L^q(\mathbb{R}^n)$ and $1/p + 1/q \ge 1$, then $f * g$ is defined and $f * g \in L^r(\mathbb{R}^n)$, where $\frac{1}{r} = \frac{1}{p} + \frac{1}{q} - 1$. Moreover,

$$\|f * g\|_{L^r(\mathbb{R}^n)} \le \|f\|_{L^p(\mathbb{R}^n)}\,\|g\|_{L^q(\mathbb{R}^n)}. \tag{3}$$

Important subcases of this fact are: The case $p = q = 1$.
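Inequality (2) also holds for finite sequences (counting measure), which makes it easy to sanity-check. This sketch (the helper `lp_norm` is my own, not from the notes) tests the triangle inequality on random vectors:

```python
import random

def lp_norm(v, p):
    """Discrete l^p norm (counting measure): (sum |v_i|^p)^(1/p)."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

random.seed(1)
for _ in range(500):
    n = random.randint(1, 20)
    f = [random.uniform(-5, 5) for _ in range(n)]
    g = [random.uniform(-5, 5) for _ in range(n)]
    p = random.uniform(1.0, 8.0)
    # Minkowski: ||f + g||_p <= ||f||_p + ||g||_p, with small slack for rounding
    lhs = lp_norm([a + b for a, b in zip(f, g)], p)
    assert lhs <= lp_norm(f, p) + lp_norm(g, p) + 1e-9
```

Note that for $p < 1$ the assertion would fail for suitable vectors: the triangle inequality, and with it the norm structure, genuinely requires $p \ge 1$.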
In this case $r = 1$ and one sees that $L^1(\mathbb{R}^n)$ is an algebra under the convolution product. It is also frequently used with $p = 1$. Then $r = q$ and it becomes

$$\|f * g\|_{L^q(\mathbb{R}^n)} \le \|f\|_{L^1(\mathbb{R}^n)}\,\|g\|_{L^q(\mathbb{R}^n)}.$$

The case $p = q = 2$. In this case $r = \infty$, so $f * g$ is bounded. One can prove a bit more: in this case, $f * g$ is continuous. More generally, if $q = p'$, then $r = \infty$ and one can prove also that $f * g$ is continuous.

• If $f, g \in L^1_{loc}(\mathbb{R}^n)$, and at least one of them vanishes outside of a compact set, then $f * g$ is defined. If both vanish outside of compact sets, then $f * g$ is locally integrable and vanishes outside of a compact set. Generally speaking, if $f(x) = 0$ for $x \notin A$ and $g(x) = 0$ for $x \notin B$, then $f * g(x) = 0$ for $x \notin A + B$. Specifically,

$$\operatorname{supp} f * g \subset \operatorname{supp} f + \operatorname{supp} g.$$

Proof of (3). Inequality (3) is known as Young's inequality (for convolutions). Its proof consists in a moderately clever application of Hölder's inequality, more precisely of a simple generalization of Hölder's inequality: Let $p_1, \dots, p_n \in [1, \infty]$ be such that

$$\frac{1}{p_1} + \cdots + \frac{1}{p_n} = 1.$$

Then

$$\int_E |f_1 f_2 \cdots f_n|\,dx \le \|f_1\|_{L^{p_1}(E)}\,\|f_2\|_{L^{p_2}(E)} \cdots \|f_n\|_{L^{p_n}(E)}. \tag{4}$$

This is easily proved by induction on $n$, the case $n = 2$ being just regular Hölder. We need the case $n = 3$. Assume thus that $p, q \in [1, \infty]$, that $\frac{1}{p} + \frac{1}{q} \ge 1$, and set $\frac{1}{r} = \frac{1}{p} + \frac{1}{q} - 1$ (notice, incidentally, that $1/p + 1/q \le 2$, so that $0 \le 1/r \le 1$ and $r \in [1, \infty]$). Let $f \in L^p(\mathbb{R}^n)$, $g \in L^q(\mathbb{R}^n)$; we may assume that $f, g$ only assume non-negative values. Now

$$\frac{1}{q'} + \frac{1}{p'} + \frac{1}{r} = 2 - \frac{1}{q} - \frac{1}{p} + \frac{1}{r} = 1$$

and we can use these as exponents for an application of Hölder.
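The discrete analogue of (3), with counting measure on $\mathbb{Z}$, can likewise be spot-checked. In this sketch (`conv` and `lp_norm` are my own helpers, not from the notes) the exponents are kept in $[1, 1.8]$ so that the Young exponent $r$ stays moderate:

```python
import random

def conv(f, g):
    """Full discrete convolution: (f*g)[n] = sum_k f[k] * g[n-k]."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def lp_norm(v, p):
    """Discrete l^p norm: (sum |v_i|^p)^(1/p)."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

random.seed(2)
for _ in range(200):
    f = [random.uniform(-1, 1) for _ in range(random.randint(1, 15))]
    g = [random.uniform(-1, 1) for _ in range(random.randint(1, 15))]
    # 1/p + 1/q >= 1 holds automatically for p, q in [1, 1.8]
    p, q = random.uniform(1.0, 1.8), random.uniform(1.0, 1.8)
    r = 1.0 / (1.0 / p + 1.0 / q - 1.0)   # Young exponent; here 1 <= r <= 9
    # Young's inequality: ||f*g||_r <= ||f||_p * ||g||_q, with rounding slack
    assert lp_norm(conv(f, g), r) <= lp_norm(f, p) * lp_norm(g, q) + 1e-9
```

The support statement above is also visible here: convolving a length-$m$ list with a length-$n$ list gives a length-$(m+n-1)$ list, the discrete version of $\operatorname{supp} f * g \subset \operatorname{supp} f + \operatorname{supp} g$.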
