LAPLACE’S EQUATION, THE NONLINEAR POISSON EQUATION AND THE EFFECTS OF GAUSSIAN WHITE NOISE ON THE BOUNDARY

by

Karim Khader

A dissertation submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of

Doctor of Philosophy

Department of Mathematics

The University of Utah

May 2010

Copyright © Karim Khader 2010

All Rights Reserved

STATEMENT OF DISSERTATION APPROVAL

The dissertation of Karim Khader has been approved by the following supervisory committee members:

Davar Khoshnevisan, Chair (Date Approved: 3/17/2010)

Klaus Schmitt, Member (Date Approved: 3/19/2010)

Firas Rassoul-Agha, Member (Date Approved: 3/15/2010)

Peter Trapa, Member (Date Approved: 3/15/2010)

Alexander Balk, Member (Date Approved: 3/17/2010)

and by Aaron Bertram, Chair of the Department of Mathematics,

and by Charles A. Wight, Dean of The Graduate School.

ABSTRACT

Elliptic partial differential equations (PDE’s) and corresponding boundary value problems are well understood with a variety of boundary data. Over the past 25 years, an abundance of research has been done in stochastic PDE’s (SPDE’s), with an emphasis on equations having a time parameter on domains with low spatial dimension and whose boundary is smooth. The meaning of a solution to a class of elliptic SPDE’s on a domain D ⊂ R^d, d ≥ 2, with Lipschitz boundary ∂D is described. For this class of SPDE’s, the randomness appears as a Gaussian white noise on the boundary of the domain. Existence, uniqueness and regularity results are obtained, and it is shown that these solutions are almost surely classical. For the Laplacian and the Helmholtz operator, the behavior of the solution near the boundary of the unit ball is described. In the case of the Laplacian, the solution is simply the harmonic extension of white noise, and so many of the well-known properties of harmonic functions hold.

To my beautiful wife Heather and my inspiring daughter Ansley Kate.

CONTENTS

ABSTRACT
NOTATION AND SYMBOLS
ACKNOWLEDGEMENTS

CHAPTERS

1. INTRODUCTION

2. ELLIPTIC PDE’S AND HARMONIC FUNCTIONS
2.1 Basic Identities
2.2 Harmonic Functions
2.3 Connection to Laplace’s Equation
2.4 Boundary Behavior of Harmonic Functions

3. THE SPECTRAL THEOREMS
3.1
3.2 Eigenvalue Problem for the Dirichlet Laplacian
3.2.1 Dirichlet eigenvalue problem on unit ball B
3.3 Harmonic Steklov Eigenproblem
3.3.1 The Steklov eigenfunctions
3.3.2 Sobolev spaces and related trace spaces
3.3.3 Distribution of the Steklov eigenvalues
3.3.4 Growth rate of generalized harmonic functions on the unit ball

4. PROBABILITY
4.1 Random Variables and Distributions
4.2 Conditional Probability
4.3 Random Fields
4.3.1 Existence and regularity
4.3.2 Gaussian random field
4.3.3 Reproducing kernel
4.4 Markov Random Fields

5. WHITE NOISE
5.1 Construction
5.2 White Noise as a Random Distribution

6. STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS AND RELATED RESEARCH
6.1 Elliptic SPDE’s Driven by White Noise
6.2 Nonlinear Elliptic Equation Driven by White Noise
6.3 Elliptic PDE’s with Distribution-valued Boundary Data

7. LAPLACE’S EQUATION WITH GAUSSIAN WHITE NOISE ON THE BOUNDARY
7.1 On a General Domain
7.1.1 Existence, uniqueness and distributional properties
7.1.2 Regularity
7.1.3 Markov property
7.2 When the Domain Is a Ball
7.2.1 Distributional results
7.2.2 Boundary behavior of the solution
7.2.3 The in the unit ball

8. NONLINEAR POISSON EQUATION WITH WHITE NOISE ON THE BOUNDARY
8.1 Existence, Uniqueness and Regularity
8.2 The Helmholtz Equation with White Noise on the Boundary
8.2.1 Boundary behavior

REFERENCES

NOTATION AND SYMBOLS

We begin by briefly introducing some notation and formulas from calculus that will be important in helping to develop the material that follows. The descriptions below may be a bit loose; the intention is not to develop the material in full, but simply to recall the material that we need.

- N denotes the set of natural numbers, Z the integers and Z+ = N ∪ {0}.

- R^d will denote Euclidean space in dimension d ≥ 2, with points labeled as x = (x_1, . . . , x_d), the components x_i ∈ R.

- For real sequences a_n and b_n, we write a_n ∼ b_n to mean that for some constant C > 0,

lim_{n→∞} a_n / b_n = C.

- D will typically denote an open, bounded subset of R^d, and the boundary of D will be denoted by ∂D.

- The unit ball in R^d, centered at zero, will be denoted by B. The ball of radius r centered at zero will be denoted by B_r and the ball of radius r centered at a will be denoted by B_{a,r}.

- σ will denote surface measure on the boundary, ∂D.

- The volume (surface area) of the unit ball will be denoted by ω_d (σ_d).

- The total derivative or gradient of a real-valued function will be denoted by Du = ∇u = (∂u/∂x_1, . . . , ∂u/∂x_d).

- If α = (α_1, . . . , α_d), α_i ∈ Z+, is a multi-index, the derivative D^α is given by

D^α = ∂^{|α|} / (∂x_1^{α_1} ··· ∂x_d^{α_d}),

where |α| = α_1 + ··· + α_d.

- The Leibniz rule for differentiating products is given by the following formula. For functions f and g differentiable on the same domain,

D^α(fg) = Σ_{β+γ=α} (α!/(β!γ!)) D^β f D^γ g.

- A function f on R is Hölder continuous with exponent α on R if for all x, y ∈ R,

|f(x) − f(y)| ≤ A|x − y|^α

for some A > 0. If a function f is Hölder continuous on R with exponent α, its α-Hölder coefficient L^α is defined as

L^α(f) := sup_{x≠y} |f(x) − f(y)| / |x − y|^α.   (0.1)

If (0.1) is finite for α = 1, f is said to be Lipschitz continuous with Lipschitz coefficient given by L(f) := L^1(f).

- C(D) will denote the set of continuous functions on D.

- C^k(D) will denote the set of functions having all derivatives of order up to k continuous on D.

- C_0(D), C_0^k(D) denote those functions in C(D), C^k(D), respectively, with support contained in D.

- C^α(D) = C^{0,α}(D) denotes the space of Hölder continuous functions on D with Hölder exponent α. If α = 1, they are Lipschitz.

- C^{k,α}(D) denotes the space of functions whose derivatives up to order k exist and are Hölder continuous with Hölder exponent α. If α = 1, they are Lipschitz.

- W^{k,p}(D) and W_0^{k,p}(D) are the spaces of functions on D having weak derivatives up to order k belonging to L^p(D). The functions in W_0^{k,p}(D) are zero a.e. on the boundary ∂D.

- H^k(D) = W^{k,2}(D) and H_0^k(D) = W_0^{k,2}(D).
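As a small numerical illustration of the coefficient (0.1), the sketch below estimates the Lipschitz coefficient of f(x) = 3x from a finite sample of points. The test function and sample size are illustrative choices, and a finite sample can only give a lower bound for the supremum.

```python
import random

def holder_coefficient(f, points, alpha=1.0):
    """Estimate the alpha-Hölder coefficient sup |f(x)-f(y)| / |x-y|^alpha
    over a finite sample of points (a lower bound for the true supremum)."""
    best = 0.0
    for x in points:
        for y in points:
            if x != y:
                best = max(best, abs(f(x) - f(y)) / abs(x - y) ** alpha)
    return best

random.seed(0)
pts = [random.uniform(-1.0, 1.0) for _ in range(200)]

# f(x) = 3x is Lipschitz with L(f) = 3; the sampled estimate recovers it here
# because the difference quotient is constant.
est = holder_coefficient(lambda x: 3.0 * x, pts, alpha=1.0)
assert abs(est - 3.0) < 1e-6
```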

ACKNOWLEDGEMENTS

I would like to thank Davar Khoshnevisan for proposing this research project, and for his patience, encouragement and the many thoughtful discussions about the problem. I would like to thank Klaus Schmitt for his thoughts and his elegant teaching style and Firas Rassoul-Agha for the seminar-style classes and exposing me to many current works in probability theory. Thanks to Peter Trapa for teaching me that difficult problems are exciting and ensuring that I learn at least some representation theory before finishing complex analysis. Thanks to Alexander Balk for the great course on fluid dynamics together with the variety of demonstrations that helped me understand some of the basic principles of fluids. Thanks to Ken Smith for exposing me to many interesting problems in applied mathematics and statistics, his motivating discussions and support with the statistical projects. I would like to thank my parents Badawe and Joy Khader for their constant support and encouragement in my life, and my wife Heather for her love, patience and interest in my happiness. Lastly I would like to thank my family and friends for their encouragement during this time in my life, their interest in my work and for distracting me from school when it was beneficial.

CHAPTER 1

INTRODUCTION

One of the most comprehensive introductions to Stochastic Partial Differential Equations (SPDE’s) is that by J. Walsh [60]. Walsh’s introduction to SPDE’s was developed from a collection of lecture notes given in Saint-Flour. In [60], Walsh describes the standard methods of analyzing SPDE’s, including making sense of the meaning of solutions and verifying existence and uniqueness. The most common SPDE’s include mathematical objects such as white noise Ẇ, for which there was no standard way of interpretation within the context of PDE’s. Walsh developed the notions of mild solutions, weak solutions and distribution-valued solutions, each of which appeared among the numerous examples. Walsh’s lecture notes include basic results on existence, uniqueness and regularity for SPDE’s from many families of operators, including hyperbolic, parabolic and elliptic. There are also some basic results on nonlinear SPDE’s where the white noise is an additive term in the equation. These lecture notes helped to stimulate a considerable amount of research in SPDE’s, some of which I will describe shortly.

Due to the irregularity of the fundamental solutions for several linear operators, particularly as the spatial dimension increases, there are issues when seeking solutions to SPDE’s that are function valued. This has had an effect on the types of problems that have been commonly studied and on the conditions that are assumed. Many of the equations studied have been limited to low spatial dimensions or have imposed smoothing assumptions on the noise term. I will now provide a brief review of some of the results that have been obtained in the area of SPDE’s. While issues of existence and uniqueness are certainly important, the results presented will attempt to highlight the different types of questions that can be asked in SPDE’s, in part to make clear some of the distinctions between SPDE’s and PDE’s.

Mueller [48, 46] studied some variations on a family of nonlinear heat equations with a multiplicative white noise term. Formally, if we use the notation Ẇ(t, x) to denote white noise in time and space, where Ẇ is actually thought of as a random distribution, he studied the equation

u_t = u_xx + u^γ Ẇ,   t > 0, 0 ≤ x ≤ J,

u(0, x) = u_0(x),   (1.1)

u(t, 0) = u(t, J) = 0.
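For intuition only, a naive explicit finite-difference discretization of (1.1) can be sketched as follows. The grid sizes, γ = 1 and the initial condition are illustrative assumptions, and nothing here reproduces the blow-up analysis discussed below.

```python
import math, random

random.seed(1)

# Explicit finite-difference scheme for u_t = u_xx + u^gamma * Wdot on [0, J],
# with zero Dirichlet boundary data.  Space-time white noise is approximated
# on the grid by independent N(0, 1/(dt*dx)) variables.
J, nx = 1.0, 50
dx = J / nx
dt = 0.2 * dx * dx          # explicit scheme needs dt on the order of dx^2
gamma = 1.0
u = [math.sin(math.pi * i * dx / J) for i in range(nx + 1)]  # u0 >= 0

for _ in range(200):
    noise_scale = 1.0 / math.sqrt(dt * dx)
    new = [0.0] * (nx + 1)
    for i in range(1, nx):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
        wdot = random.gauss(0.0, noise_scale)
        new[i] = u[i] + dt * (lap + (max(u[i], 0.0) ** gamma) * wdot)
    u = new                  # new[0] = new[nx] = 0 keeps the boundary condition

assert all(math.isfinite(v) for v in u)
```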

In [48], Mueller showed that for u_0 a nonnegative measure on [0, J], there is a unique nonnegative solution u to (1.1) for 0 ≤ t < τ, where τ is the blowup time. Additionally, it was shown that for 1 ≤ γ < 3/2, τ ≡ ∞ and hence blow-up does not occur, but that for γ large enough, there is a positive probability that τ is finite. Improving upon this result, in [46] it is shown that for γ > 3/2, there is a positive probability that τ < ∞ and hence that the solution has a positive probability of blowing up in a finite amount of time. A similar equation, a nonlinear version of the wave equation with a multiplicative noise term, was also studied. The equation is given by

u_tt = ∆u + a(u)G(t, x),   t > 0, x ∈ R^d,

u(0, x) = u_0(x),   (1.2)

u_t(0, x) = u_1(x),

where d = 1, 2. In the case d = 1, G(t, x) is the space-time white noise, while for d = 2, G(t, x) is a Gaussian noise term, white in time but correlated in space. This means that G is smoother than white noise in the spatial variable. In [47] it is shown that if a(u) = O(u[log u]^{1/2−}), then a solution to (1.2) exists for all time t > 0. Further parabolic equations studied include a class of parabolic SPDE’s driven by white noise and having a term that forces repulsion away from zero,

∂u/∂t = (1/2) ∂²u/∂x² + c/u³ + ∂²W/∂t∂x,   c > 0,   (1.3)

or reflection at zero,

∂u/∂t = (1/2) ∂²u/∂x² + ∂²W/∂t∂x + η.   (1.4)

The nonlinearity in (1.3) drives the repulsion, and η in (1.4) is a measure that ensures reflection at 0. In fact, it is known [26] that (1.4) can be viewed as the limit of (1.3) as c ↓ 0 and is therefore considered as a special case of (1.3) with c = 0. In [26], hitting probabilities for the 0 set are studied and are characterized in terms of c. In fact, Dalang,

Mueller and Zambotti show that the solution hits 0 at most 4 times and that for c > 15/8, the level 0 is never hit. Due to the irregularity of the Green kernel for the wave operator in higher dimensions, an extension of the martingale measure integral was developed [18] and has been used to study such equations, particularly in higher dimensions. This extension allows the integral to accommodate certain distributions as integrands. Existence and uniqueness with some mild regularity results are obtained for a class of nonlinear hyperbolic equations with a multiplicative noise term in [20], where the equation studied is given by

∂²u/∂t²(t, x) + (−1)^k ∆^(k) u(t, x) = α(u(t, x)) Ḟ(t, x),   (1.5)

u(0, x) = v_0(x),   ∂u/∂t(0, x) = v̄_0(x).

Here v_0 ∈ L²(R^d), v̄_0 ∈ H^{−k}(R^d), α satisfies a Lipschitz condition, and the noise term is assumed to satisfy assumptions on its spatial correlation that provide for the existence of a function-valued solution. One of the key features in these results is the use of weighted L² spaces for the wave equation when k = 1. The regularity results are improved in [24], where it is shown that the solutions are actually Hölder continuous in time and space. Additionally, in [22, 23, 25], second order hyperbolic equations are studied for which the Gaussian noise is a boundary noise and the domain has a smooth boundary. These equations are given by

Lu(t, x) = Ḟ(t, x) δ_S(x),   (t, x) ∈ R_+ × D,

u(0, x) = ∂u/∂t(0, x) = 0,   x ∈ D,   (1.6)

∂u/∂ν(t, x) = 0,   (t, x) ∈ R_+ × S.

Here the operator L is given by

L = ∂²/∂t² + 2a ∂/∂t + b − ∆,   a, b ∈ R.

The special cases when the domain is a ball or half-space are discussed, in which case the noise is spatially homogeneous. When the boundary noise is on the sphere, a characterization of the possible types of noise is given for existence and regularity of function-valued solutions.

More recently, systems of nonlinear heat equations have been studied with both a multiplicative noise term [28]

∂u_i/∂t(t, x) = ∂²u_i/∂x²(t, x) + Σ_{j=1}^d σ_{i,j}(u(t, x)) Ẇ^j(t, x) + b_i(u(t, x)),   (1.7)

and an additive noise term [27]

∂u_i/∂t(t, x) = ∂²u_i/∂x²(t, x) + Σ_{j=1}^d σ_{i,j} Ẇ^j(t, x) + b_i(u(t, x)),   (1.8)

for 0 ≤ t ≤ T and x ∈ [0, 1]. Both equations impose initial conditions and Neumann boundary conditions identically zero. Basic results obtained include bounds on the hitting probabilities in terms of Hausdorff measure and Newtonian capacity in a way that depends on the number of equations d. Additionally, the Hausdorff measure of the level sets is obtained for the solutions to both of these equations, as is the Hausdorff measure of the range of the solution to (1.7). In [40], Dalang et al. study the solution to a system of linear wave equations, each driven by a noise term:

∂²u_i/∂t²(t, x) = ∂²u_i/∂x²(t, x) + L̇_i(t, x),   t ≥ 0, x ∈ R.   (1.9)

This noise L̇_i is not a Gaussian noise but more generally a Lévy noise. Among the results given in [40] is a characterization of the types of Lévy noise for which the corresponding solutions have nonvoid zero sets. This characterization is given in terms of the characteristic exponent of the Lévy noise. For the solutions whose zero sets are nonvoid, a description of the Hausdorff measure of the zero set is also provided. Some work has also been done on elliptic SPDE’s. These will be described in some detail in Chapter 6 and will be summarized briefly here as well. In [60], it is shown that the solution to Poisson’s equation driven by white noise

∆u = Ẇ,   (1.10)

and more generally, any even order elliptic operator driven by white noise

Pu = Ẇ,   (1.11)

where P is an elliptic operator of even order, has a solution. Depending on the dimension and the order of the operator P, in some cases these solutions are in fact distributions.
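A one-dimensional analogue of (1.10) can be sketched by expanding the noise in the Dirichlet eigenbasis of the Laplacian on (0, 1). The truncation level is an illustrative assumption; in this low-dimensional setting the series converges to a continuous function.

```python
import math, random

random.seed(2)

# Sketch of u'' = Wdot on (0, 1) with u(0) = u(1) = 0.  Expanding white noise
# in the Dirichlet eigenbasis e_k(x) = sqrt(2) sin(k pi x), Wdot = sum xi_k e_k
# with iid standard Gaussians xi_k, the solution is u = -sum xi_k / (k pi)^2 e_k.
K = 500
xi = [random.gauss(0.0, 1.0) for _ in range(K)]

def u(x):
    return -sum(xi[k - 1] / (k * math.pi) ** 2 * math.sqrt(2.0) * math.sin(k * math.pi * x)
                for k in range(1, K + 1))

# The boundary conditions hold termwise, so the truncated sum vanishes at 0 and 1.
assert abs(u(0.0)) < 1e-9 and abs(u(1.0)) < 1e-9
```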

The remaining results on elliptic SPDE’s described below are given only for dimensions 1 ≤ d ≤ 3. Both [30] and [12] give some results on a nonlinear equation

−∆u(x) + f(u(x)) = Ẇ(x),   (1.12)

including existence, uniqueness and the fact that the solution is continuous. In [50], Nualart and Tindel show that there exists a solution to Poisson’s equation driven by white noise and having reflection at 0, given by

−∆u(x) + f(u(x)) = Ẇ(x) + η,   (1.13)

where η is a random measure ensuring that u remains positive. Also, in [59], it is shown that the solution to (1.13) has a density that is absolutely continuous with respect to Lebesgue measure. In [31], Donati-Martin and Nualart show that the solution to (1.12) is in fact a Markov random field (as a distribution) if and only if the nonlinearity f is an affine transformation.

While I have discussed some results on SPDE’s of parabolic, hyperbolic and elliptic type, an abundance of the research done in SPDE’s deals with equations that involve time. Because of the nature of the martingale measure, there is a significant difference in the way that equations with time are dealt with, which allows for addressing equations having multiplicative noise. Also, in all of the research that has been discussed so far, none of the solutions are classical. Instead, they are at best Hölder continuous of some order. To my knowledge, there have not been any SPDE’s studied whose solutions are classical. Recall that SPDE’s driven by boundary noise were considered in [22, 23, 25], where the noise was smoother than white in the spatial variable. In these notes, the SPDE’s to be discussed are Laplace’s equation, the nonlinear Poisson equation and the Helmholtz equation. The stochastic nature of these equations will appear as white noise on the boundary. The solutions to these equations will be shown to be very different from the solutions of most of the SPDE’s that have been considered. They will have nice regularity properties, with behavior becoming chaotic near the boundary. In fact, in the case of Laplace’s equation with white noise, the solution will be related to work that has been done outside of SPDE’s. This connection is in large part due to the fact that the solution will have a representation as a Gaussian series in terms of spherical harmonics.
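To make the two-dimensional analogue of such a representation concrete, the sketch below samples a truncated random Fourier series on the circle with independent standard Gaussian coefficients and extends it harmonically to the unit disc by multiplying the n-th mode by r^n. The truncation level is an illustrative assumption.

```python
import math, random

random.seed(3)

# Harmonic extension of a formal white-noise Fourier series on the circle:
# u(r, theta) = sum_{n>=1} r^n (a_n cos(n theta) + b_n sin(n theta)),
# with iid standard Gaussian coefficients a_n, b_n.
N = 500
a = [random.gauss(0.0, 1.0) for _ in range(N + 1)]
b = [random.gauss(0.0, 1.0) for _ in range(N + 1)]

def u(r, theta):
    return sum(r ** n * (a[n] * math.cos(n * theta) + b[n] * math.sin(n * theta))
               for n in range(1, N + 1))

# Each term is harmonic, so the truncated sum is a random harmonic polynomial;
# interior values are finite even though the boundary series is only a distribution.
vals = [u(0.9, 2 * math.pi * k / 12) for k in range(12)]
assert all(math.isfinite(v) for v in vals)
```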

When the dimension d = 2, the solution to this problem has been studied in a very different context in the subject of Gaussian Analytic Functions (GAF’s). In [52], random power series on C with coefficients given by independent standard complex-valued Gaussian random variables are studied. Among the results described in [52] is that for a simply connected, bounded planar domain D ⊂ C with smooth boundary, the joint intensities of the zero set of these GAF’s in D, which is a point process, can be described in terms of the Bergman kernel. This result concerning the joint intensities leads to a specific expression in the case that D is the unit disc, which in turn leads to a detailed understanding of the precise probabilistic behavior of the zero set. The zero sets of these GAF’s are determinantal processes [39], of which there are numerous examples. Determinantal processes were originally studied in 1975 by Macchi within the context of quantum mechanics.

When the dimension d = 3, these solutions are related to some mathematical models that are currently used in astronomy, in particular with respect to the Cosmic Microwave Background (CMB). In order to describe this connection, I will briefly describe what the CMB is, which requires an understanding of the Big Bang model in cosmology. This description uses [36, 41] for many of the details. One of the dominant models for the evolution of our universe is known as the hot Big Bang model and views our current universe as an expansion of an initial hot and dense mass of subatomic particles. For about the first 300,000 years of the existence of the universe, the universe was dense and hot enough that all of the particles were continually scattering, while expansion and cooling were occurring at an exponential rate. At about 300,000 years of age, the universe was cool enough to allow for the beginning of the formation of atoms. This time period is termed the time of recombination.
During recombination, electrons joined the orbits of stable atomic nuclei and photons were allowed to move freely through the universe. The density of the universe was small enough that most of the photons are assumed to have been moving freely with no interaction up until the present time. This radiation was predicted in the late 1960s by a number of physicists and was first observed by the National Aeronautics and Space Administration (NASA) satellite mission COBE in the early 1990s; it is the so-called Cosmic Microwave Background. The CMB is therefore thought of as a picture (in the microwave spectrum of electromagnetism) of the universe at the moment of recombination and therefore provides information on the density distribution of the early universe.

It turns out that for the current models for the development of the universe, it is of interest to know whether the CMB follows a Gaussian distribution or not. A Gaussian distribution gives more evidence for the Big Bang model, while a non-Gaussian distribution provides more evidence for other models, including topological defect or nonstandard inflationary models [41]. Because the CMB is understood as a picture of the early universe on a sphere, the CMB is modeled as a random Fourier series in terms of spherical harmonics. Much work has been done [36, 13, 41, 42, 43, 44] in understanding the distribution of the CMB, including developing nonparametric estimation techniques, statistical tests for non-Gaussianity and an improved understanding of the statistical properties of such Gaussian Fourier series.

CHAPTER 2

ELLIPTIC PDE’S AND HARMONIC FUNCTIONS

Harmonic-function theory will play a major role in many of the main results contained in these notes. Consequently, we begin with a basic review of harmonic functions together with some basic results concerning elliptic PDE’s. The last section in this chapter will focus on the behavior of harmonic functions near the boundary of the domain. These results on harmonic functions will be of interest for contrasting with the new results described in Chapters 7 and 8 and will be used directly in some of the proofs. We will begin, however, with the first two sections, which are devoted to notation, formulas and some basic identities and theorems from calculus.

2.1 Basic Identities

We will now describe the basic identities that are instrumental in studying harmonic functions. First recall the divergence theorem, from which many of the other important formulas can be derived. Let D be a bounded domain in R^d with Lipschitz boundary ∂D, and let the outward unit normal vector, which is possibly only defined σ-almost everywhere, be denoted by n. Let v = (v_1, v_2, . . . , v_d) be a vector field such that v ∈ C^0(D̄) ∩ C^1(D). The divergence theorem describes the relationship between the integral over a domain and a boundary integral. In particular, the divergence theorem states that for such a vector field v,

∫_D div v dx = ∫_{∂D} v · n dσ.   (2.1)
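A quick numerical sanity check of (2.1) on the unit disc, using the vector field v(x, y) = (x, y); the quadrature grids are illustrative choices.

```python
import math

# For v(x, y) = (x, y): div v = 2, so the left side of (2.1) is 2 * area = 2 pi,
# and on the boundary v . n = 1, so the right side is the circumference 2 pi.
n = 400
h = 2.0 / n
volume_integral = 0.0
for i in range(n):
    for j in range(n):
        x = -1.0 + (i + 0.5) * h
        y = -1.0 + (j + 0.5) * h
        if x * x + y * y <= 1.0:
            volume_integral += 2.0 * h * h   # div v = 2 everywhere

m = 1000
boundary_integral = sum(
    (math.cos(t) * math.cos(t) + math.sin(t) * math.sin(t)) * (2 * math.pi / m)
    for t in (2 * math.pi * k / m for k in range(m))
)

assert abs(volume_integral - 2 * math.pi) < 0.1
assert abs(boundary_integral - 2 * math.pi) < 1e-9
```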

The Laplacian in R^d, denoted by ∆, is the differential operator defined as the trace of the Hessian,

∆ = ∂²/∂x_1² + ∂²/∂x_2² + ··· + ∂²/∂x_d².

If φ and ψ are C1(D¯)∩C2(D) real-valued functions, we can consider the vector field given by v = φ∇ψ where ∇ψ is the gradient or total derivative of ψ. Then by applying the divergence theorem, we obtain Green’s first identity:

∫_D φ∆ψ dx + ∫_D ∇φ · ∇ψ dx = ∫_{∂D} φ (∂ψ/∂n) dσ,   (2.2)

where

∂ψ/∂n = ∇ψ · n

is the outward normal derivative of ψ. By interchanging the roles of φ and ψ in the above equation and subtracting the resulting identities, we obtain Green’s second identity:

∫_D (φ∆ψ − ψ∆φ) dx = ∫_{∂D} (φ ∂ψ/∂n − ψ ∂φ/∂n) dσ.   (2.3)

The following theorem, the Rellich-Kondrachov theorem, provides results on continuous and compact imbeddings from Sobolev spaces to either L^p spaces or spaces of continuous or differentiable functions.

Theorem 2.1 ([37, Theorem 7.26, p. 171]) Let D be a C^{0,1} domain in R^d. Then:

(i) If kp < d, then the space W^{k,p}(D) is continuously imbedded in L^{p*}(D), p* = dp/(d − kp), and compactly imbedded in L^q(D) for any q < p*;

(ii) If 0 ≤ m < k − d/p < m + 1, then the space W^{k,p}(D) is continuously imbedded in C^{m,α}(D̄), α = k − d/p − m, and compactly imbedded in C^{m,β}(D̄) for any β < α.

2.2 Harmonic Functions

With the basic identities from calculus discussed, we can now go ahead and present the relevant material on harmonic function theory and classical PDE’s. There is more than one way to define a harmonic function, but we will start with the most simply stated definition.

Definition 2.2 Let D ⊂ R^d be an open set. Then a function u ∈ C^2(D) is called harmonic on D if

∆u = 0 in D.   (2.4)

Example 2.3 (Fundamental solution) One of the most prominent examples of a harmonic function is given by the so-called fundamental solution to Laplace’s equation. For fixed y, the function Γ defined by

Γ(x, y) = −(1/(2π)) log |x − y|   for d = 2,
Γ(x, y) = (1/(d(d − 2)ω_d)) |x − y|^{2−d}   for d ≥ 3,   (2.5)

is harmonic on R^d \ {y}. As will be shown, Γ has an indirect role in generating many harmonic functions.
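A finite-difference sanity check that Γ is harmonic away from y, in the case d = 3, where d(d − 2)ω_d = 4π; the evaluation point and step size are illustrative choices.

```python
import math

# The d = 3 fundamental solution is Gamma(x, y) = 1 / (4 pi |x - y|).
def gamma3(x, y):
    r = math.dist(x, y)
    return 1.0 / (4.0 * math.pi * r)

def laplacian(f, x, h=1e-3):
    """Central second differences in each coordinate, summed."""
    total = 0.0
    for i in range(3):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        total += (f(xp) + f(xm) - 2.0 * f(x)) / h ** 2
    return total

y = (0.0, 0.0, 0.0)
x = [1.0, 0.5, -0.3]          # a point away from the singularity at y
assert abs(laplacian(lambda p: gamma3(p, y), x)) < 1e-4
```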

Harmonic functions have many interesting and beautiful properties, with interpretations in many areas of mathematics that have deep connections with harmonic function theory, including PDE’s, complex analysis and probability. We will begin by describing some of the basic properties of harmonic functions that will be used in future results. From the identities described earlier come a number of important results concerning harmonic functions. The first we will discuss is the mean value property. There are two basic mean-value properties that are often discussed together. The main idea behind the mean-value properties is the following. If u is a function that is harmonic in D, then for every a ∈ D, the value of u at a is its integral average over any ball or sphere centered at a and contained in D. This is a remarkable result, describing both the local way in which harmonic functions change as well as their dependence on values far away from one another. In fact, in physics the steady-state temperature measured at a point in space is defined to be the average energy of the particles in the neighborhood of that point, and temperature has a deep mathematical connection to the Laplacian and harmonic functions. The precise statement of the mean-value property is given in the following theorem, see [37].

Theorem 2.4 (Mean value property, [37, Theorem 2.1, p. 14]) Let u ∈ C^2(D) ∩ C(D̄) be harmonic in D. Then for any ball B_{a,r} ⊂ D,

u(a) = (1/(r^{d−1} σ_d)) ∫_{∂B_{a,r}} u(z) dσ_r(z)   and   u(a) = (1/(r^d ω_d)) ∫_{B_{a,r}} u(x) dx.   (2.6)

In fact, not only do harmonic functions have the mean-value property of Theorem 2.4, but they are the only continuous functions that have this property. In other words, the mean-value property characterizes harmonic functions among continuous functions, as stated in the following theorem, also found in [37].

Theorem 2.5 ([37, Theorem 2.7, p. 21]) A C^0(D) function u is harmonic if and only if for every ball B_{a,r} ⊂ D, it satisfies the mean value property

u(a) = (1/(r^{d−1} σ_d)) ∫_{∂B_{a,r}} u(z) dσ_r(z).   (2.7)
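A quadrature check of the mean value property (2.7) for the harmonic function u(x, y) = x² − y²; the center, radius and number of quadrature nodes are illustrative choices.

```python
import math

# u(x, y) = x^2 - y^2 is harmonic, so its average over any circle contained
# in the domain equals its value at the center.
def u(x, y):
    return x * x - y * y

a, r, m = (0.3, -0.2), 0.5, 2000
average = sum(
    u(a[0] + r * math.cos(t), a[1] + r * math.sin(t))
    for t in (2 * math.pi * k / m for k in range(m))
) / m

assert abs(average - u(*a)) < 1e-9
```

The trapezoidal rule on a periodic integrand converges extremely fast here, so the agreement is essentially to machine precision.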

A number of classifications of sets play a major role in the study of harmonic functions and, more generally, in potential theory. It would not make sense to go into depth discussing these various types of sets, but we need at least a basic understanding of two of them. The first notion to define, in contrast with harmonic functions, is that of a superharmonic function.

Definition 2.6 An extended real-valued function u on a domain D is superharmonic on D if

(i) u is not identically +∞ on any connected component of D,

(ii) u > −∞ on D,

(iii) u is lower semicontinuous on D, and

(iv) u is super-mean-valued on D, that is,

u(a) ≥ (1/(r^{d−1} σ_d)) ∫_{∂B_{a,r}} u(z) dσ_r(z)

for all balls B_{a,r} ⊂ D. The function u is called subharmonic on D if −u is superharmonic on D.

Example 2.7 For fixed y, the fundamental solution Γ(·, y) from Example 2.3 is superharmonic on R^d. Note that for d ≥ 3, Γ is strictly positive on R^d, but not so when d = 2.

Note that a superharmonic function is not required to be continuous, but if a superharmonic function is twice differentiable, there is an alternative characterization, given by the following lemma.

Lemma 2.8 If u ∈ C^2(D), then u is superharmonic on D if and only if ∆u ≤ 0 on D.
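A numerical check of the super-mean-value inequality for u(x) = 1/|x| in d = 3 (a constant multiple of Γ from Example 2.7): by Newton's theorem, the average of u over a sphere of radius r centered at a equals 1/max(|a|, r), so it never exceeds u(a). The center, radius and quadrature grid are illustrative choices.

```python
import math

# Average u(x) = 1/|x| over the sphere of radius r centered at a, where
# |a| = 0.2 < r = 0.5, so the sphere encloses the singularity and the
# average should be 1/r = 2, which is <= u(a) = 5.
a, r = (0.2, 0.0, 0.0), 0.5
n = 400
avg = 0.0
for i in range(n):
    phi = math.pi * (i + 0.5) / n
    for j in range(n):
        theta = 2 * math.pi * j / n
        x = (a[0] + r * math.sin(phi) * math.cos(theta),
             a[1] + r * math.sin(phi) * math.sin(theta),
             a[2] + r * math.cos(phi))
        avg += (1.0 / math.hypot(*x)) * math.sin(phi) * (math.pi / n) * (2 * math.pi / n)
avg /= 4 * math.pi

u_at_a = 1.0 / 0.2
assert avg <= u_at_a                    # super-mean-value inequality
assert abs(avg - 1.0 / r) < 1e-2        # Newton's theorem value
```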

With the notion of a superharmonic function in hand, the definition of the first type of set with which we need to be familiar, a Greenian set, can be given.

Definition 2.9 An open set D ⊂ R^d is called a Greenian set if it supports a positive, nonconstant superharmonic function.

From Example 2.7, it is clear that every open set in R^d, d ≥ 3, is a Greenian set. It turns out that when d = 2, the Greenian sets are those sets D that are not dense in R^2. Just as a Greenian set was defined in terms of being able to support a certain type of function, the next special set that we define is defined in the same way. In this case, the type of function that is required to be supported is a refinement of a superharmonic function. This function is called a barrier and is defined next.

Definition 2.10 A function w is a local barrier at x ∈ ∂D if w is defined on Λ ∩ D for some neighborhood Λ of x and

(i) w is superharmonic on Λ ∩ D,

(ii) w > 0 on Λ ∩ D, and

(iii) lim_{y→x, y∈Λ∩D} w(y) = 0.

The function w is a barrier at x ∈ ∂D on D if it satisfies (i), (ii), (iii) with Λ = D, and a strong barrier if, in addition, inf{w(z) : z ∈ D \ Λ} > 0 for each neighborhood Λ of x.

We can now give the definition of a regular set.

Definition 2.11 For a bounded open subset D ⊂ R^d, a point x ∈ ∂D is a regular boundary point if and only if there is a barrier at x. A domain D is called regular if every boundary point is regular.

2.3 Connection to Laplace’s Equation

Now that we have discussed some of the basic properties of harmonic functions, a natural question arises: where are harmonic functions found, and how can they be generated? The answer to this question lies in the connection between harmonic functions and PDE’s. Of course, the definition given of harmonic functions was in terms of a solution to a PDE. So we now introduce the boundary value problem described classically by Laplace’s equation:

∆u = 0 in D,
u = g on ∂D.   (2.8)

For simplicity, suppose g is a continuous function on ∂D and the domain D ⊆ R^d is bounded with locally Lipschitz boundary ∂D. If we apply Green’s second identity to a function u ∈ C^2(D) ∩ C^1(D̄) and the Green’s function G^D(x, y) on the domain D \ B_{x,ε} for ε > 0 small, and let ε ↓ 0, we obtain the integral representation for u:

u(x) = ∫_D G^D(x, y) ∆u(y) dy + ∫_{∂D} u(z) p^D(x, z) dσ(z),   (2.9)

where p^D is the Poisson kernel given by the outward normal derivative of G^D. That is, for fixed x ∈ D,

(∂/∂n_y) G^D(x, y) = p^D(x, y).

Recall that the Green’s function G^D is given by

G^D(x, y) = Γ(x, y) + h^D(x, y),   (2.10)

where for x ∈ D fixed, h^D solves the equation

∆_y h^D(x, y) = 0 in D,   (2.11)
h^D(x, y) = −Γ(x, y), y ∈ ∂D.

It is known [33] that this construction works and hence Green’s functions exist on Greenian domains. The Poisson kernel, on the other hand, does not exist on such a general domain. We will mention later what we will require of the domain. With the assistance of (2.9), a solution to (2.8) can be written as the boundary integral

u(x) = ∫_{∂D} p^D(x, z) g(z) dσ(z).   (2.12)

It is not always possible to obtain an explicit formula for the functions G^D and p^D, but they are well known for some simple domains such as the half-space, the unit ball and the annulus. We will use in detail the explicit formula for the Poisson kernel for the ball. Similarly, Green’s representation formula gives an explicit formula for solutions to Poisson’s equation. A basic summary of existence and uniqueness for Poisson’s equation is given in the following theorem.

Theorem 2.12 ([37, Theorem 4.3, p. 56]) Let D be a bounded, regular domain. If f is bounded and locally Hölder continuous in D and g is continuous on ∂D, then the problem

∆u = f in D,
u = g on ∂D    (2.13)

is uniquely solvable.

If the domain D has Lipschitz boundary, then the solution to (2.13) has the integral representation

u(x) = ∫_D f(y) G^D(x, y) dy + ∫_{∂D} g(z) p^D(x, z) dσ(z).    (2.14)

In Chapter 8, we will be interested in regularity results for the nonlinear Poisson equation. It turns out that the issues that will be faced there are very similar to those with Poisson's

equation. If we restrict our attention to the case in which the domain D has Lipschitz boundary ∂D, then the regularity of the solution is ultimately determined by the so-called Newtonian potential. For a function f on D, the Newtonian potential of f is defined as

Nf(x) = ∫_D Γ(x, y) f(y) dy.    (2.15)

The reason is that Nf is the term in (2.14) having the least amount of regularity. We briefly summarize how the regularity of the Newtonian potential is related to the regularity of the function f in the following two lemmas [37].
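As a concrete sanity check of the definition (2.15) — our illustration, not from the text — in d = 2 one has Γ(x, y) = log|x − y|/(2π), and the Newtonian potential of f ≡ 1 on the unit disk, evaluated at the origin, reduces in polar coordinates to a one-dimensional integral with known value −1/4:

```python
import math

# A quick check (ours) of the Newtonian potential (2.15) in d = 2, where
# Γ(x, y) = log|x - y|/(2π).  For f ≡ 1 on the unit disk B, evaluating at
# the origin and switching to polar coordinates gives
#     N1(0) = (1/2π) ∫_B log|y| dy = ∫_0^1 r log r dr = -1/4.
n = 200_000
h = 1.0 / n
total = sum((k + 0.5) * h * math.log((k + 0.5) * h) for k in range(n)) * h
assert abs(total - (-0.25)) < 1e-6
```

The midpoint rule handles the (integrable) logarithmic singularity at r = 0 without special treatment, since r log r → 0 there.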

Lemma 2.13 ([37, Lemma 4.1, p. 54]) Let f be bounded and integrable in D. Then Nf ∈ C¹(R^d) and, for any x ∈ D,

∂Nf(x)/∂x_i = ∫_D ∂Γ(x, y)/∂x_i f(y) dy.    (2.16)

So, already, with very little regularity assumed on f, the Newtonian potential is differentiable. With slightly more regularity on f, we have the following result.

Lemma 2.14 ([37, Lemma 4.2, p. 55]) Let f be bounded and locally Hölder continuous in D. Then Nf ∈ C²(D) and ∆Nf = f in D.

The last result that we will mention concerning the regularity of the Newtonian potential is the Calderón–Zygmund theorem. It gives regularity results when the only assumptions on f are integrability conditions, and shows that the Newtonian potential in that case belongs to certain Sobolev spaces.

Theorem 2.15 (Calderón–Zygmund, [37, Theorem 9.9, p. 230]) Let f ∈ L^p(D), 1 < p < ∞, and let Nf be the Newtonian potential of f. Then Nf ∈ W^{2,p}(D), ∆Nf = f a.e. and

||D²Nf||_{L^p(D)} ≤ C ||f||_{L^p(D)},    (2.17)

where C = C(n, p). Furthermore, when p = 2 we have

∫_{R^d} |D²Nf(x)|² dx = ∫_D f²(x) dx.    (2.18)

We now move on to the issues that will be of ultimate interest when we describe the recent work of Chapter 7, namely the boundary behavior of harmonic functions.

2.4 Boundary Behavior of Harmonic Functions

Among the questions of interest concerning the behavior of harmonic functions is how they behave near the boundary. These questions have often been addressed in special domains such as the unit ball B = {x ∈ R^d : |x| < 1} or the half-space R^d_+ = {x = (x₁, ..., x_d) ∈ R^d : x_d > 0}, where there are explicit formulas for the Poisson kernel. However, there are also results concerning the boundary behavior in more general domains. Another issue that has been of interest is how the behavior near the boundary is related to the regularity of the boundary data. In this section, the focus will be on the case when the domain D is the unit ball B ⊂ R^d, and we summarize some known results on boundary behavior in terms of existence of limits at the boundary, weak convergence at the boundary, and upper bounds on the growth of harmonic functions. These results will all be related to the regularity of the boundary data. The Poisson kernel for the unit ball will be denoted simply by p(x, θ). Let M(∂B) denote the space of complex Borel measures on ∂B; M(∂B) is a Banach space with the total variation norm ||·|| given by ||µ|| = |µ|(∂B), where |µ| denotes the total variation of µ.

For a function u defined on B, the r-dilate, denoted by u_r for r ∈ [0, 1], is defined to be

u_r(x) = u(rx).    (2.19)

The following results give bounds on the growth of various Lp(∂B) norms of the r-dilate as r approaches 1 in terms of norms of the boundary data.

Theorem 2.16 ([7, Theorem 6.4, p. 113]) We have the following growth estimates.

(a) If µ ∈ M(∂B) and

u(x) = ∫_{∂B} p(x, θ) dµ(θ),    (2.20)

then ||u_r||_{L¹(∂B)} ≤ ||µ|| for all r ∈ [0, 1).

(b) If 1 ≤ p ≤ ∞, f ∈ L^p(∂B) and

u(x) = ∫_{∂B} p(x, θ) f(θ) dσ(θ),    (2.21)

then ||u_r||_{L^p(∂B)} ≤ ||f||_{L^p(∂B)} for all r ∈ [0, 1).

The results given in Theorem 2.16 should not be too surprising, since it is known that nonconstant harmonic functions attain both their supremum and infimum on the boundary of the domain. So, roughly speaking, moving out along concentric spheres from the origin of B should result in increasing norms, with the limit being some norm of the boundary data. There is also a sense in which the dilates themselves converge at the boundary. This is made precise in the following theorem, which says that for boundary data from a class of L^p(∂B) spaces, the dilates converge to the boundary data in the respective L^p(∂B) space.

Theorem 2.17 ([7, Theorem 6.7, p. 114]) Suppose that 1 ≤ p < ∞. If f ∈ L^p(∂B) and

u(x) = ∫_{∂B} p(x, θ) f(θ) dσ(θ),    (2.22)

then ||u_r − f||_{L^p(∂B)} −→ 0 as r −→ 1.
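This convergence can be checked numerically on the circle (d = 2) through Parseval's identity; the sketch below is ours, not part of the text. For the square wave f(θ) = sign(cos θ), the Fourier cosine coefficients are a_k = (4/π)(−1)^{(k−1)/2}/k for odd k, the dilate u_r has coefficients r^k a_k, and hence ||u_r − f||²_{L²(∂B)} = π Σ_{k odd} a_k²(1 − r^k)², which tends to 0 as r → 1:

```python
import math

# Parseval check (ours) of Theorem 2.17 on the circle (d = 2), for the
# square wave f(θ) = sign(cos θ) with cosine coefficients
# a_k = (4/π)(-1)^((k-1)/2)/k, k odd.  Since the dilate u_r has
# coefficients r^k a_k,
#     ||u_r - f||²_{L²(∂B)} = π Σ_{k odd} a_k² (1 - r^k)²  →  0  as r → 1.
def dilate_error(r, kmax=200_001):
    s = 0.0
    for k in range(1, kmax, 2):
        s += (4 / (math.pi * k)) ** 2 * (1 - r ** k) ** 2
    return math.sqrt(math.pi * s)

errors = [dilate_error(r) for r in (0.9, 0.99, 0.999)]
assert errors[0] > errors[1] > errors[2] > 0
assert errors[2] < 0.1
```

The error decays slowly here because f is discontinuous; for smooth boundary data the decay is much faster.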

Theorem 2.17 fails when p = ∞: one can show that ||u_r − f||_{L^∞(∂B)} −→ 0 as r −→ 1 for f ∈ L^∞(∂B) if and only if f ∈ C(∂B). Also, concerning a similar result with the boundary data a measure, it turns out that it is not true for every measure µ ∈ M(∂B) that u_r converges to µ in the space M(∂B); it is true only when µ is absolutely continuous with respect to σ. To obtain a general convergence result at the boundary when the boundary data consists of L^∞(∂B) functions or Borel measures in M(∂B), we need to consider a weaker type of convergence. The appropriate sense of convergence is called weak-∗ convergence. If X is a normed linear space with dual space X*, then we say that a family {Λ_r : r ∈ [0, 1)} ⊂ X* converges to Λ weak-∗ if Λ_r(x) −→ Λ(x) for each x ∈ X. With this notion of convergence, we have a theorem giving weak-∗ convergence in the spaces M(∂B) and L^∞(∂B). Keep in mind that the space of Borel measures is the dual of the continuous functions, C(∂B)* = M(∂B), and the space of essentially bounded functions is the dual of the integrable functions, L¹(∂B)* = L^∞(∂B), so the duality product is simply the canonical integral.

Theorem 2.18 ([7, Theorem 6.9, p. 116]) (a) If µ ∈ M(∂B) and

u(x) = ∫_{∂B} p(x, θ) dµ(θ),    (2.23)

then u_r −→ µ weak-∗ in M(∂B) as r −→ 1.

(b) If f ∈ L^∞(∂B) and

u(x) = ∫_{∂B} p(x, θ) f(θ) dσ(θ),    (2.24)

then u_r −→ f weak-∗ in L^∞(∂B) as r −→ 1.

Next we introduce a family of harmonic function spaces, called Hardy spaces, which, as will be seen, can be identified with their boundary data. A description of growth estimates for functions in the Hardy spaces will be given, where the growth rates depend on the boundary data; an analogous result will be presented in a later chapter. The main theorem in this section is Fatou's theorem, which gives results on limiting behavior almost everywhere at the boundary. This will end our preliminary discussion of the behavior of harmonic functions near the boundary of their domain. We define the harmonic Hardy spaces h^p(B) for 1 ≤ p ≤ ∞ to be the collection of

harmonic functions on B with finite ||·||_{h^p} norm, where the ||·||_{h^p} norm is defined in terms of the r-dilate as

||u||_{h^p} = sup_{0≤r<1} ||u_r||_{L^p(∂B)}.    (2.25)

There is clearly a connection between these spaces and the harmonic functions whose boundary behavior we have discussed up until this point. Endowed with the norm ||·||_{h^p}, the h^p(B) spaces are Banach spaces. A first basic observation about the Hardy spaces is that if u ∈ h^∞(B), then

||u||_{h^∞} = sup_{x∈B} |u(x)|,

so that h^∞(B) is the collection of bounded harmonic functions on B. Also, because the ball B has finite measure, we have the following nesting of the Hardy spaces:

hp(B) ⊂ hq(B) for 1 ≤ q < p ≤ ∞. (2.26)

The following theorem describes the precise relationship between the Hardy spaces h^p(B) and the boundary functions (or boundary measures) identified with them.

Theorem 2.19 ([7, Theorem 6.13, p. 119]) The Poisson integral gives the following surjective isometries:

(a) The map µ ↦ ∫_{∂B} p(·, θ) dµ(θ) is a linear isometry of M(∂B) onto h¹(B).

(b) For 1 < p ≤ ∞, the map f ↦ ∫_{∂B} f(θ) p(·, θ) dσ(θ) is a linear isometry of L^p(∂B) onto h^p(B).

We will see later that for a whole family of Sobolev spaces and distribution spaces, consisting of harmonic functions and generalized harmonic functions respectively, there is an analogous isometry between the spaces of harmonic functions and the boundary spaces that generate them. These spaces will be constructed in terms of Steklov eigenfunction expansions, and we will even be able to describe growth rate estimates similar to those found in the next proposition. For functions that belong to Hardy spaces, there are upper bounds on the rates of growth near the boundary in terms of the h^p(B) norms. They are given in the following proposition.

Proposition 2.20 ([7, Proposition 6.16, p. 120]) Suppose 1 ≤ p < ∞. If u ∈ h^p(B), then

|u(x)| ≤ ((1 + |x|)/(1 − |x|)^{d−1})^{1/p} ||u||_{h^p}    (2.27)

for all x ∈ B.

In fact, the bounds in Proposition 2.20 are not optimal. It can be shown, for instance, that when 1 < p < ∞,

(1 − |x|)^{(d−1)/p} |u(x)| −→ 0 as |x| → 1.    (2.28)

We are now ready to discuss the final result concerning the boundary behavior of harmonic functions, Fatou's theorem. In order to discuss Fatou's theorem, we need to describe briefly the notion of nontangential convergence, which relies on a particular geometric structure. To begin, we describe the basic geometric object for this definition: a cone with vertex x ∈ R^d, angle of aperture cos^{−1}(1 − α/2), and axis of symmetry in the direction ξ, denoted by K_α(x, ξ), where 0 < α < 2. Because we are dealing with the unit ball as the domain, some modifications need to be made to the basic cone, since it is not contained within the unit ball. The cone K_α(0, ξ) is given by

K_α(0, ξ) = {x ∈ R^d : |ξ_x − ξ|² < α},    (2.29)

where ξ_x = x/|x| denotes the spherical coordinate of x.

We will assume that this cone does not contain the vertex 0, as the spherical coordinate of 0 is not defined. We truncate this cone, obtaining the basic cone for the purposes of

nontangential convergence, C_α(0, ξ) = K_α(0, ξ) ∩ B_{0,1/2}, in the unit ball. Through rigid motions, we can obtain any cone C_α(x, ξ). We will be interested only in cones with vertex on the sphere and axis of symmetry through the vertex, and will write C_α(ξ) for C_α(ξ, −ξ). We are now ready to define nontangential convergence.

Definition 2.21 A function u defined on B has a nontangential limit L at ξ ∈ ∂B if u(x) −→ L as x −→ ξ, with x remaining in Cα(ξ) for every 0 < α < 2.

Fatou's theorem basically says that an h¹(B) function has a nontangential limit at almost every boundary point. Keeping in mind the embeddings (2.26) of the Hardy spaces, the Fatou theorem then holds for every h^p(B), 1 ≤ p ≤ ∞. The proof relies on several other operators, termed maximal functions. We will not go into the details of these operators, as we leave the proofs outside of these notes. The statement of the Fatou theorem will be made with two theorems followed by a corollary.

Theorem 2.22 ([7, Theorem 6.39, p. 135]) For f ∈ L¹(∂B),

u(x) = ∫_{∂B} p(x, θ) f(θ) dσ(θ)    (2.30)

has nontangential limit f(ξ) for almost all ξ ∈ ∂B.

Theorem 2.22 gives a much stronger notion of convergence than any that we have discussed. In fact, we will not find any analogues of the Fatou theorem in the new results described in later chapters.

Theorem 2.23 ([7, Theorem 6.42, p. 136]) If µ ⊥ σ, then

u(x) = ∫_{∂B} p(x, θ) dµ(θ)    (2.31)

has nontangential limit 0 almost everywhere on ∂B.

Combining the results from Theorem 2.22 and Theorem 2.23, we have the following simple Corollary.

Corollary 2.24 ([7, Corollary 6.4, p. 137]) Suppose µ ∈ M(∂B) and dµ = f dσ + dµ_s is the Lebesgue decomposition of µ with respect to σ. Then

u(x) = ∫_{∂B} p(x, θ) dµ(θ)    (2.32)

has nontangential limit f(ξ) for almost all ξ ∈ ∂B.

In fact, in [9] it is shown that Fatou's theorem also holds on domains with Lipschitz boundary. All of the convergence results described in this section have analogues on the half-space R^d_+. These results can be found, for instance, in [8].

CHAPTER 3

THE SPECTRAL THEOREMS

The general theory of eigenvalue problems is well developed for many classes of differential operators, but there is no need to introduce the general theory to describe the results in these notes. We will need some basic understanding of some specific eigenvalue problems, though, so the main focus of this chapter will be to review some of the details concerning those problems. The three eigenvalue problems that we are primarily interested in are for the Dirichlet Laplacian, the Laplace–Beltrami operator on the sphere, and the harmonic Steklov eigenvalue problem.

3.1 Spherical Harmonics

Before describing the basic results concerning the eigenvalue problem for the Laplace–Beltrami operator on the sphere, we will take some time to describe the function spaces that form its foundation, along with some important identities. While this is not the most efficient way of describing the solution to the eigenvalue problem, there are real benefits in approaching the subject in this way. These functions will be among the most important for the purposes of calculations appearing later in these notes, and they will serve as important examples for the Steklov eigenvalue problem. We will describe some formulas that will be the backbone of many of the calculations to be obtained later on, and that will add a level of intuition about these functions, useful for our understanding of and appreciation for harmonic function theory as well as the new results. This discussion will parallel the presentation given in [49]. We start by describing the basic type of function of interest, a homogeneous polynomial. A homogeneous polynomial p of order n in d dimensions is a polynomial in d variables such that for each t ∈ R,

p(tx) = t^n p(x).

It is well known that among the homogeneous polynomials there are harmonic polynomials. We will use Y_n^{*d} to denote the collection of homogeneous harmonic polynomials, also called homogeneous harmonics, of order n in d dimensions. We will identify elements of Y_n^{*d} having certain symmetry properties, so we next describe the basic symmetries that we are interested in.

We will use O_d to represent the orthogonal d × d matrices and, for fixed α ∈ ∂B, define the isotropy group J_{d,α} by

J_{d,α} = {A ∈ O_d : Aα = α}.

The isotropy group J_{d,α} contains all matrices that keep the direction α fixed. The spaces Y_n^{*d} are invariant with respect to the orthogonal transformations in J_{d,α}, meaning that for a homogeneous harmonic H ∈ Y_n^{*d} and an orthogonal transformation A ∈ J_{d,α}, the polynomial HA ∈ Y_n^{*d}. It turns out that there is a unique homogeneous harmonic L_{d,n}, called the Legendre harmonic, that satisfies the following three conditions:

• L_{d,n} ∈ Y_n^{*d}

• L_{d,n}A = L_{d,n} for all A ∈ J_{d,α}

• L_{d,n}(α) = 1.

With the homogeneous harmonics defined, we now define Y_n^d to be the space of homogeneous harmonics restricted to the sphere ∂B. These functions will be called

spherical harmonics, and the Legendre harmonic L_{d,n}, when restricted to the sphere, is

called the Legendre polynomial and is denoted by P_{d,n}. The term spherical harmonic is sometimes used in the literature for homogeneous harmonics, generalizations of the spherical harmonics, and other related functions, but we will use it only in regard to functions in Y_n^d. The space Y_n^d is a finite-dimensional vector space with dimension denoted by N^{d,n} = dim(Y_n^d), where it is known that N^{d,0} = 1, N^{d,1} = d and, for n ≥ 2,

N^{d,n} = ((2n + d − 2)(n + d − 3)!)/(n!(d − 2)!).    (3.1)

It can be shown, using Stirling's formula for instance, that N^{d,n} ∼ n^{d−2}.
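Formula (3.1) is easy to evaluate directly; as a quick check (ours, with a helper name of our choosing), it reproduces the classical counts N^{2,n} = 2 and N^{3,n} = 2n + 1:

```python
from math import factorial

# Evaluating the dimension formula (3.1); the special cases n = 0, 1
# are handled separately as in the text.
def dim_spherical_harmonics(d, n):
    if n == 0:
        return 1
    if n == 1:
        return d
    return (2 * n + d - 2) * factorial(n + d - 3) // (factorial(n) * factorial(d - 2))

# Classical counts: 2 independent harmonics of each order on the circle,
# 2n + 1 on the 2-sphere.
assert all(dim_spherical_harmonics(2, n) == 2 for n in range(1, 25))
assert all(dim_spherical_harmonics(3, n) == 2 * n + 1 for n in range(1, 25))
```

The growth N^{d,n} ∼ n^{d−2} is visible here: constant in d = 2, linear in d = 3.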

An orthonormal basis for Y_n^d under the usual L²(∂B) inner product

⟨f, g⟩_{∂B} = ∫_{∂B} f g dσ

will always be denoted by Y_1^{d,n}, Y_2^{d,n}, ..., Y_{N^{d,n}}^{d,n}, and the functions H_1^{d,n}, H_2^{d,n}, ..., H_{N^{d,n}}^{d,n} will denote the corresponding homogeneous harmonics. It is straightforward to see that a basis for Y_n^d remains a basis under transformations by J_{d,α}. This in part leads to the Addition Theorem, the first of two main theorems that will be used numerous times.

Theorem 3.1 (Addition theorem, [49, Theorem 2, p. 18]) For the Legendre polynomial P_{d,n} and for ξ, η ∈ ∂B,

Σ_{j=1}^{N^{d,n}} Y_j^{d,n}(ξ) Y_j^{d,n}(η) = (N^{d,n}/σ_d) P_{d,n}(ξ · η).    (3.2)

We will make use of some estimates on the bounds of the homogeneous harmonic polynomials and their derivatives. We start by stating Theorem 4 from [57]. It gives bounds on the L²(∂B) norm of the derivatives of homogeneous harmonics, as well as a uniform bound on the derivatives of the spherical harmonics. While the theorem does not specify conditions on the multi-index α, it is still true in the trivial case |α| = 0.

Theorem 3.2 ([57, Theorem 4, p. 119]) (a) There are constants B_{k,d} such that for every H ∈ Y_n^{*d},

∫_{∂B} |D^α H|² dσ ≤ (B_{|α|,d})² n^{2|α|} ∫_{∂B} |H|² dσ.    (3.3)

(b) There are constants C_{k,d} such that for every spherical harmonic Y ∈ Y_n^d,

|D^α Y|² ≤ (C_{|α|,d})² n^{2|α|−2+d} ∫_{∂B} |Y|² dσ.    (3.4)

We will use Theorem 3.2 to prove the following simple lemma.

Lemma 3.3 For a multi-index α with |α| ≥ 0, there is a constant A_{|α|,d} such that for each homogeneous harmonic H ∈ Y_n^{*d},

|D^α H(x)|² ≤ A_{|α|,d} |x|^{2(n−|α|)} n^{2|α|+d−2} ∫_{∂B} H² dσ.    (3.5)

Proof. Because H is a homogeneous harmonic of order n, D^α H is a homogeneous harmonic of order n − |α|. So, by Theorem 3.1,

D^α H(x) = |x|^{n−|α|} (N^{d,n−|α|}/σ_d) ∫_{∂B} P_{d,n−|α|}(ξ_x · θ) D^α H(θ) dσ(θ).

A simple application of the Cauchy–Schwarz inequality yields the inequality

|D^α H(x)|² ≤ |x|^{2(n−|α|)} (N^{d,n−|α|}/σ_d)² ∫_{∂B} P_{d,n−|α|}²(ξ_x · θ) dσ(θ) ∫_{∂B} |D^α H(θ)|² dσ(θ),    (3.6)

and, since ∫_{∂B} P_{d,m}² dσ = σ_d/N^{d,m}, Theorem 3.2 (a) gives

|D^α H(x)|² ≤ (|x|^{2(n−|α|)} N^{d,n−|α|}/σ_d) (B_{|α|,d})² n^{2|α|} ∫_{∂B} H² dσ.    (3.7)

So finally, using the fact that N^{d,n−|α|} ∼ (n − |α|)^{d−2}, we obtain the result. The other formula that will be useful is the Poisson identity, which gives a closed-form expression for a power series whose coefficients are given by the Legendre polynomials and the dimensions of the respective spaces of spherical harmonics.

Lemma 3.4 (Poisson identity, [49, Lemma 2, p. 46]) For r ∈ [0, 1), t ∈ [−1, 1] and d ≥ 2,

(1 − r²)/(1 + r² − 2rt)^{d/2} = Σ_{n=0}^∞ N^{d,n} r^n P_{d,n}(t).    (3.8)

At this point, we have covered the basics of the homogeneous harmonics and spherical harmonics, on which some important estimates and formulas of later results depend. We will now discuss the eigenvalue problem for the Laplace–Beltrami operator on the sphere. The Laplace–Beltrami operator is a differential operator that can be extended to differentiable manifolds. The precise way this is done is not important to describe, as it is straightforward to obtain on the sphere from the ordinary Laplacian. In short, the Laplace–Beltrami operator is the angular part of the Laplacian when decomposed into radial and angular components.
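Returning to the Poisson identity (3.8): it can be checked numerically. The sketch below (ours, not from [49]) takes d = 3, where P_{3,n} is the classical Legendre polynomial P_n, generated by the Bonnet recurrence (n + 1)P_{n+1}(t) = (2n + 1)t P_n(t) − n P_{n−1}(t), and N^{3,n} = 2n + 1:

```python
import math

# Numerical check (ours) of the Poisson identity (3.8) in d = 3:
#   (1 - r²)/(1 + r² - 2rt)^{3/2} = Σ_n (2n + 1) r^n P_n(t).
def poisson_series_3d(r, t, nmax=400):
    p0, p1 = 1.0, t                      # P_0(t), P_1(t)
    total = p0 + 3.0 * r * p1            # n = 0 and n = 1 terms
    rn = r
    for n in range(1, nmax):
        # Bonnet recurrence: after this line p1 = P_{n+1}(t)
        p0, p1 = p1, ((2 * n + 1) * t * p1 - n * p0) / (n + 1)
        rn *= r                          # rn = r^{n+1}
        total += (2 * (n + 1) + 1) * rn * p1
    return total

r, t = 0.5, 0.3
lhs = (1 - r * r) / (1 + r * r - 2 * r * t) ** 1.5   # left side of (3.8)
assert abs(poisson_series_3d(r, t) - lhs) < 1e-12
```

Since |P_n(t)| ≤ 1 on [−1, 1], the series converges geometrically for r < 1, so a few hundred terms are ample.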

We will denote the Laplace–Beltrami operator by ∆_θ. Then the main result is that −∆_θ has a sequence of eigenvalues

λ_n^θ = n(n + d − 2), n = 0, 1, 2, ...,    (3.9)

each having multiplicity N^{d,n} and with eigenspace identical to Y_n^d. Hence,

∆_θ Y_j^{d,n} = −n(n + d − 2) Y_j^{d,n}, 1 ≤ j ≤ N^{d,n},    (3.10)

and the eigenfunctions can be taken to be the basis elements of Y_n^d. When taken over all n, the eigenfunctions form a basis for L²(∂B). That is,

L²(∂B) = ⊕_{n=0}^∞ Y_n^d.

3.2 Eigenvalue Problem for the Dirichlet Laplacian

The results in this section will be used primarily in the analysis done in Chapter 8. We will go straight into the description of the main results for the eigenvalue problem of the Dirichlet Laplacian. Formally, the problem is to find nontrivial solutions to

−∆φ = λφ in D,
φ = 0 on ∂D.    (3.11)

It is well known [37, 45] that if D is a bounded domain, there is a sequence of eigenvalues

0 < λ1 < λ2 ≤ λ3 ≤ · · · ,

with λ_n −→ ∞, and a corresponding sequence of eigenfunctions {e_n}_{n=1}^∞ in H¹(D) satisfying

−∆e_n = λ_n e_n in D,
e_n = 0 on ∂D    (3.12)

for each n = 1, 2, .... Additionally, the sequence {e_n}_{n=1}^∞ of eigenfunctions can be chosen to form an orthonormal basis for L²(D). In the case that D has Lipschitz boundary ∂D, using Green's representation formula, we see that the eigenfunctions satisfy the integral equation

e_n(x) = −λ_n ∫_D e_n(y) G^D(x, y) dy.    (3.13)
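A standard illustration of this spectral structure — ours, not from the text — is the unit square (0, 1)², where separation of variables gives the Dirichlet eigenvalues λ_{m,n} = π²(m² + n²) with eigenfunctions sin(mπx) sin(nπy); sorting them exhibits the simple principal eigenvalue followed by eigenvalues repeated according to multiplicity:

```python
import math

# Dirichlet eigenvalues of the unit square (0,1)² (classical example):
# λ_{m,n} = π²(m² + n²), eigenfunctions sin(mπx) sin(nπy).  Truncating
# at m, n < 30 is enough to get the leading eigenvalues right.
eigs = sorted(math.pi ** 2 * (m * m + n * n)
              for m in range(1, 30) for n in range(1, 30))

assert abs(eigs[0] - 2 * math.pi ** 2) < 1e-9    # λ1 = 2π² is simple
assert abs(eigs[1] - 5 * math.pi ** 2) < 1e-9    # λ2 = λ3 = 5π²
assert abs(eigs[2] - 5 * math.pi ** 2) < 1e-9    # (multiplicity 2)
assert eigs[0] < eigs[1]                         # 0 < λ1 < λ2 ≤ λ3 ≤ ···
```

The principal eigenfunction sin(πx) sin(πy) is strictly positive on the square, in line with Theorem 3.5 below.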

Equation (3.13) will be used in Chapter 9, as will, with the notation λ_d := λ_1 and w := e_1, the fact that the principal eigenfunction w can be used as a weight. What allows us to use w as a weight is the following theorem.

Theorem 3.5 ([37, Theorem 8.38, p. 214]) The eigenfunction e_1 can be chosen such that e_1 > 0 on D.

3.2.1 Dirichlet eigenvalue problem on the unit ball B

When the domain is the unit ball B, the eigenfunctions are well known and can be obtained using separation of variables. The equation

−∆h = h in B (3.14)

is solvable by separation of variables using spherical coordinates, with h(x) = h(r, θ) = f(r)a(θ). This is because the Laplacian decomposes into a radial and an angular part, giving

∆h(x) = (∂²/∂r² + ((d − 1)/r) ∂/∂r + (1/r²) ∆_θ) f(r)a(θ),    (3.15)

where ∆_θ is the Laplace–Beltrami operator on the sphere ∂B. With the Laplacian written in radial and angular coordinates, (3.14) then becomes

−(∂²/∂r² + ((d − 1)/r) ∂/∂r + (1/r²) ∆_θ) f(r)a(θ) = f(r)a(θ).    (3.16)

We recall from the previous section that the spherical harmonics are solutions to the eigenvalue problem

∆_θ Y_j^{d,n} = −n(n + d − 2) Y_j^{d,n}.    (3.17)

If we then replace a in (3.16) with Y_j^{d,n}, the equation simplifies to the well-known ordinary differential equation

(∂²/∂r² + ((d − 1)/r) ∂/∂r + 1 − n(n + d − 2)/r²) f(r) = 0.    (3.18)

This is Bessel's equation, which has been studied extensively and has solutions given by the so-called

Bessel functions J_{d,n} of order n in dimension d. Therefore, putting these results all together, we have that

∆[J_{d,n} Y_j^{d,n}] = −J_{d,n} Y_j^{d,n}.    (3.19)

Because the Y_j^{d,n} are orthogonal on the sphere ∂B, as j and n vary, the family of functions J_{d,n} Y_j^{d,n} is orthogonal on B. In order to solve (3.11), we need to find a corresponding sequence of eigenvalues and satisfy the boundary conditions. This, it turns out, requires a minor modification of these solutions to (3.14). Note that for any λ > 0,

∆[J_{d,n}(√λ r) Y_j^{d,n}(θ)] = −λ J_{d,n}(√λ r) Y_j^{d,n}(θ).    (3.20)

It is known that for each fixed n, there is an increasing sequence of positive zeros {µ_{n,m}}_{m=1}^∞ of the Bessel function J_{d,n}. Therefore, it is straightforward to see that with

λ_{n,m} = µ_{n,m}²,    (3.21)

the family of functions given by

e_{j,n,m}(x) = J_{d,n}(√λ_{n,m} r) Y_j^{d,n}(θ) / ||J_{d,n}(√λ_{n,m} ·) Y_j^{d,n}||_{L²(B)}    (3.22)

solve the equation

−∆e_{j,n,m} = λ_{n,m} e_{j,n,m} in B,
e_{j,n,m} = 0 on ∂B    (3.23)

and form an orthonormal basis for L²(B). These functions will be used in Chapter 9 when we study the Helmholtz equation.
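As a numerical illustration (ours) of (3.21) in the classical case d = 2, n = 0: the radial factor is the usual Bessel function J_0, and the smallest Dirichlet eigenvalue of the unit disk is the square of J_0's first positive zero µ_{0,1} ≈ 2.4048, computed here from the power series by bisection:

```python
import math

# The smallest Dirichlet eigenvalue of the unit disk is
# λ_{0,1} = µ_{0,1}², where µ_{0,1} is the first positive zero of J_0.
def J0(x):
    # power series J_0(x) = Σ_k (-1)^k (x/2)^{2k} / (k!)²
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x / 2) ** 2 / (k * k)
        total += term
    return total

a, b = 2.0, 3.0                 # J_0(2) > 0 > J_0(3), so a zero lies between
for _ in range(60):             # bisection
    mid = 0.5 * (a + b)
    a, b = (mid, b) if J0(mid) > 0 else (a, mid)
mu = 0.5 * (a + b)

assert abs(mu - 2.4048256) < 1e-6        # first zero of J_0
assert abs(mu ** 2 - 5.7831860) < 1e-5   # λ_{0,1} = µ² ≈ 5.783
```

For higher n and d, the same scheme applies with the appropriate Bessel function in (3.22), though one would normally use a library routine rather than a hand-rolled series.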

3.3 Harmonic Steklov Eigenproblem

This section outlines many of the results coming from the papers [4, 5, 6], which describe the formulation of and the solution to some Steklov eigenvalue problems. Although [6] contains the basic results for Steklov eigenvalue problems for the Schrödinger operator and a family of elliptic operators, our main interest is in the Steklov eigenvalue problem for the Laplacian. This is called the harmonic Steklov eigenvalue problem. In [5, 4], Auchmuty constructs various Sobolev and trace spaces of functions and their corresponding distributions. These spaces are characterized in terms of series representations involving the Steklov eigenvalues and their corresponding eigenfunctions. After describing these spaces, we will use some known estimates on the eigenvalues to give a precise characterization of the elements of the function spaces as well as their distributions. We will finish the section by giving an upper bound on the growth rate of generalized harmonic functions and their derivatives on the unit ball.

3.3.1 The Steklov eigenfunctions To begin with, we make the following assumption (A) on the domain D.

(A) D is a bounded domain for which, together with its boundary ∂D, the Gauss–Green theorem, the Rellich theorem and the compact trace theorem hold.

The compact trace theorem, as we will need it, says that the trace mapping Γ : H¹(D) −→ L²(∂D) is a compact operator. The assumption that the Gauss–Green theorem holds enables the use of the variational principles used to solve the Steklov eigenvalue problem. Rellich's theorem and the compact trace theorem are needed in order to obtain convergence in the right spaces

and to make sense of boundary value problems. For us, the simplest setting in which all of these conditions hold is when the domain D has a Lipschitz boundary ∂D; so throughout this chapter, we will take D to be such a domain. A function s is called a Steklov eigenfunction with eigenvalue µ and weight ρ ∈ L^∞(∂D) if it satisfies

∆s = 0 in D,
∇s · n = µρs on ∂D.    (3.24)

The following weaker formulation gives a definition of a Steklov eigenfunction that is useful when using the calculus of variations to solve the eigenvalue problem. Namely, with the aid of Green's second identity, a function s ∈ H¹(D) is called a Steklov eigenfunction for the Laplacian with eigenvalue µ and weight ρ if it satisfies the equation

∫_D ∇s · ∇v dx = µ ∫_{∂D} ρ s v dσ for all v ∈ H¹(D).    (3.25)

The inner product that will be used on the space H¹(D) is not the usual inner product

but an equivalent one, denoted by ⟨·, ·⟩_∂ and defined as

⟨u, v⟩_∂ := ∫_D ∇u · ∇v dx + ∫_{∂D} ρ u v dσ,    (3.26)

for u, v ∈ H¹(D). The corresponding norm will be denoted by ||·||_∂, and functions that are orthogonal (orthonormal) with respect to the inner product in (3.26) will be called ∂-orthogonal (∂-orthonormal). A detailed construction of the eigenfunctions and eigenvalues is given in [4] using methods of the calculus of variations, but we briefly summarize the results. It is well known that there is a sequence of ∂-orthonormal eigenfunctions s_0, s_1, s_2, ... ∈ H¹(D) with corresponding eigenvalues 0 = µ_0 < µ_1 ≤ µ_2 ≤ ···. The Steklov eigenfunctions generate a Hilbert space, which will be denoted by H(D), the space of harmonic functions on D that are also in H¹(D). Because the Steklov eigenfunctions

are ∂-orthonormal, (3.25) together with (3.26) shows that their traces Γs_k are orthogonal in L²(∂D, ρdσ). Specifically, the calculation yields that

δ_{i,j} = ⟨s_i, s_j⟩_∂ = (1 + µ_j) ∫_{∂D} Γs_i Γs_j ρ dσ,    (3.27)

where δ_{i,j} is the Kronecker delta. The following elementary result concerning the behavior of the Steklov eigenvalues is given by Theorem 7.2 in [6].

Theorem 3.6 ([6, Theorem 7.2, p. 337]) Each eigenvalue µ_j has finite multiplicity and µ_j −→ ∞ as j → ∞.

Remark 3.7 In [4], it is assumed unnecessarily that the weight function ρ satisfies

∫_{∂D} ρ dσ = 1.    (3.28)

The justification for this assumption is that it makes the bilinear form in (3.26) an inner product. However, assuming ρ ∈ L^∞(∂D) is enough for that to be true. As a consequence of this assumption, there are minor errors in the examples and formulas in [6, 5, 4], and some unnecessary notation is introduced in [5, 6]. In fact, it is common to take ρ ≡ 1 [10, 53], and that will be our assumption throughout these notes. We will now look at an example that extends the example in [5, 4] on Steklov eigenfunctions on the unit ball to arbitrary dimension.

Example 3.8 (Steklov eigenvalue problem on B) It is straightforward to show that when the domain is the unit ball B, the homogeneous harmonics H_j^{d,n} are the Steklov eigenfunctions, with corresponding eigenvalues given by µ_j^n = n for 1 ≤ j ≤ N^{d,n}. The fact that they are ∂-orthogonal follows from the orthogonality of the spherical harmonics in L²(∂B). To normalize them, we simply need

to compute their ||·||_∂ norm. The square of the norm can be computed by (3.25) and (3.26), and is given by

||H_j^{d,n}||_∂² = (n + 1) ∫_{∂B} |H_j^{d,n}(y)|² dσ(y) = n + 1.

Therefore, the functions

s_j^{d,n} := H_j^{d,n}/√(1 + n)    (3.29)

are the ∂-orthonormal harmonic Steklov eigenfunctions for the unit ball.
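A quick numerical confirmation of the computation ||H_j^{d,n}||_∂² = n + 1 — our sketch, with a helper name of our choosing — in d = 2: take the normalized degree-n harmonic H(r, θ) = r^n cos(nθ)/√π, whose trace has unit L²(∂B) norm; then |∇H|² = n² r^{2n−2}/π in polar coordinates, and with weight ρ ≡ 1 the two integrals in (3.26) should sum to n + 1:

```python
import math

# ||H||²_∂ = ∫_B |∇H|² dx + ∫_{∂B} H² dσ for H = r^n cos(nθ)/√π on the
# unit disk: the Dirichlet term equals n, the boundary term equals 1.
def steklov_norm_sq(n, m=4000):
    h = 1.0 / m
    # ∫_B |∇H|² dx = ∫_0^{2π}∫_0^1 (n² r^{2n-2}/π) r dr dθ = ∫_0^1 2 n² r^{2n-1} dr
    dirichlet = sum(2 * n ** 2 * ((k + 0.5) * h) ** (2 * n - 1)
                    for k in range(m)) * h
    # ∫_{∂B} H² dσ = ∫_0^{2π} cos²(nθ)/π dθ = 1
    boundary = sum(math.cos(n * 2 * math.pi * k / m) ** 2 / math.pi
                   for k in range(m)) * (2 * math.pi / m)
    return dirichlet + boundary

assert abs(steklov_norm_sq(1) - 2) < 1e-3
assert abs(steklov_norm_sq(5) - 6) < 1e-3
```

This matches (3.29): dividing H by √(1 + n) makes the ∂-norm equal to one.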

3.3.2 Sobolev spaces and related trace spaces

We are now prepared to construct some harmonic Sobolev spaces, trace spaces and the distribution spaces with which they are in duality. These spaces will be important, mainly because our ultimate interest is in addressing boundary value problems with distributions on the boundary. We will also see that, similar to Theorem 2.19, there is a one-to-one correspondence between elements of the harmonic Sobolev spaces and the trace spaces.

Denote by S the set of Steklov eigenfunctions and let H_F(D) denote the set of finite linear combinations of functions in S. Then clearly H_F(D) ⊂ C^∞(D) ∩ H¹(D). For

each s ∈ R, we define the s-inner product on H_F(D) as follows. If u, v ∈ H_F(D) with series representations u = Σ_{n=0}^∞ b̂_n s_n and v = Σ_{n=0}^∞ ĉ_n s_n respectively, then

⟨u, v⟩_s := Σ_{n=0}^∞ (1 + µ_n)^{2(s−1)} b̂_n ĉ_n,    (3.30)

with induced norm denoted by ||·||_s. For each such s, let H^s(D) denote the completion of H_F(D) with respect to the norm ||·||_s. Then each of the spaces H^s(D) is a Hilbert space having an orthonormal basis given by the sequence of functions

{s_n (1 + µ_n)^{1−s}}_{n=0}^∞.    (3.31)

We will refer to the sequences {b̂_n}_{n≥0} and {ĉ_n}_{n≥0} as the Steklov Fourier coefficients. Note that for s = 1, it is clear from the definition (3.30) of the inner product that H¹(D) = H(D). Of course, these spaces are nested, with

H^{s₁}(D) ⊂ H^{s₂}(D), s₁ > s₂.    (3.32)

For s < 1, we will sometimes refer to the elements of H^s(D) as generalized harmonic functions or harmonic distributions. The duality pairing between the various spaces H^s(D) is given in the following theorem, which is Theorem 5.1 in [4].

Theorem 3.9 ([4, Theorem 5.1, p. 8]) Assume that D satisfies (A) and that F is a continuous linear functional on H^{1+θ}(D), for θ > 0. Then there is a unique generalized harmonic function f ∈ H^{1−θ}(D) with Steklov Fourier coefficients {f̂_n}_{n≥0} such that

F(u) = Σ_{n=0}^∞ f̂_n ĉ_n,    (3.33)

where the Steklov Fourier coefficients of u are {ĉ_n}_{n≥0}. Moreover, the dual norm of F is ||f||_{1−θ}.

It will also be helpful, and not terribly difficult, to develop a related family of Sobolev spaces for functions on the boundary ∂D, or trace spaces. This is straightforward due to the orthogonality of the traces of the Steklov eigenfunctions.

First, we give special notation to the orthonormal basis for L2(∂D). We set

s̄_n(x) = √(1 + µ_n) Γs_n(x), for x ∈ ∂D and n ≥ 0.    (3.34)

Then from (3.27), it is easy to show that the collection S̄ = {s̄_n}_{n≥0} forms an orthonormal basis of L²(∂D). We define the space H_F(∂D) to be the space of all finite linear combinations of functions in S̄, and define for each s ∈ R an inner product ⟨·, ·⟩_{s,∂D}

on HF (∂D). For f, g ∈ HF (∂D) with series expansions given by

f = Σ_{n=0}^∞ â_n s̄_n, g = Σ_{n=0}^∞ b̂_n s̄_n,    (3.35)

the (s, ∂D)-inner product is given by

⟨f, g⟩_{s,∂D} = Σ_{n=0}^∞ (1 + µ_n)^{2s} â_n b̂_n,    (3.36)

with the corresponding norm given by ||·||_{s,∂D}. Then for each s ∈ R, we define the spaces H^s(∂D) to be the completions of H_F(∂D) with respect to the norm ||·||_{s,∂D}. We will

refer to the sequences {â_n}_{n≥0} and {b̂_n}_{n≥0} as the Fourier coefficients of f and g. It is clear from (3.36) that H⁰(∂D) = L²(∂D), and the family of spaces H^s(∂D) is nested:

H^{s₁}(∂D) ⊂ H^{s₂}(∂D), s₁ > s₂.    (3.37)

These trace spaces for s < 0 contain distributions, or generalized boundary functions. For s < 0 and f ∈ H^s(∂D), even though the series representation for f may not converge, we may still write the formal identity

f = Σ_{n=0}^∞ f̂_n s̄_n,

where the sequence {f̂_n}_{n≥0} gives the generalized Fourier coefficients of f. The meaning, of course, is just that the generalized function is identified with its generalized Fourier coefficients. The term generalized will often be dropped, so knowing the context will be important in distinguishing between series that converge in L²(∂D) and those that do not. The duality pairing between the various spaces is made explicit in the following theorem, Theorem 5.3 in [5].

Theorem 3.10 ([5, Theorem 5.3, p. 8]) Assume that D satisfies hypothesis (A), H^s(∂D) is defined as above for some s > 0, and F is a continuous linear functional on H^s(∂D). Then there is a unique generalized function f ∈ H^{−s}(∂D) such that

F(g) = Σ_{n=0}^∞ f̂_n ĝ_n, (3.38)

where the sequences {f̂_n}_{n≥0}, {ĝ_n}_{n≥0} are the Fourier coefficients of f and g respectively. Moreover, the dual norm of F is given by ‖f‖_{−s,∂D}.

We now briefly discuss the connection suggested between the trace spaces and the Sobolev spaces: that of an isometry. First, consider for fixed s ∈ R the map E_s : H^s(∂D) → H^{s+1/2}(D) given by

E_s g(x) := Σ_{n=0}^∞ (1 + µ_n)^{1/2} ĝ_n s_n(x), x ∈ D, g ∈ H^s(∂D). (3.39)

E_s is an operator that takes boundary data and returns the harmonic extension onto D, and so has the same action as the Poisson integral. A close look at the action of the operator E_s in fact suggests there is at least a formal series expansion in terms of Steklov eigenfunctions for the Poisson kernel. Keeping in mind that the Fourier coefficients ĝ_n are given by

ĝ_n = ⟨g, s̄_n⟩_∂D = √(1 + µ_n) ⟨g, s_n⟩_∂D, (3.40)

we see that for s ≥ 0,

E_s g(x) = ∫_{∂D} p^D(x, z) g(z) dσ(z), (3.41)

where the Poisson kernel p^D(x, z) has series representation given by

p^D(x, z) = Σ_{n=0}^∞ (1 + µ_n) s_n(x) Γs_n(z), x ∈ D, z ∈ ∂D. (3.42)

The main result concerning this mapping provides an analogue to Theorem 2.19, as it identifies boundary data with the harmonic extension. The following theorem is Theorem 6.3 in [4].

Theorem 3.11 ([4, Theorem 6.3, p. 10]) Assume D satisfies hypothesis (A). Then given s ∈ R, the mapping E_s is an isometric isomorphism of H^s(∂D) onto H^{s+1/2}(D).

We now look at an example for which we can evaluate the series in (3.42) and confirm that it indeed gives the Poisson kernel: the case in which the domain is the unit ball B.

Example 3.12 Recall from Example 3.8 that the Steklov eigenfunctions for the unit ball were given by

H_j^{d,n}(x) / √(1 + n), (3.43)

with eigenvalues µ_j^{d,n} = n, 1 ≤ j ≤ N^{d,n}. Note that the trace operator simply gives ΓH_j^{d,n}(θ) = Y_j^{d,n}(θ), the spherical harmonics. Taking these computations and using (3.42) together with Theorem 3.1 and Lemma 3.4, we see that this series representation agrees with the well-known formula for the Poisson kernel on the unit ball. That is, using polar coordinates x = rξ^x, we obtain

p(x, θ) = Σ_{n=0}^∞ Σ_{j=1}^{N^{d,n}} (1 + n) [H_j^{d,n}(x)/√(1 + n)] [ΓH_j^{d,n}(θ)/√(1 + n)] (3.44)

= Σ_{n=0}^∞ Σ_{j=1}^{N^{d,n}} |x|^n Y_j^{d,n}(ξ^x) Y_j^{d,n}(θ) (3.45)

= (1/σ_d) Σ_{n=0}^∞ N^{d,n} |x|^n P_{d,n}(ξ^x · θ) (3.46)

= (1 − |x|²) / (σ_d |x − θ|^d). (3.47)

A series representation of a solution u having boundary data f ∈ H^s(∂B) can be obtained and is given by

u(x) = Σ_{n=0}^∞ Σ_{j=1}^{N^{d,n}} |x|^n Y_j^{d,n}(ξ^x) f̂_j^n. (3.48)
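The collapse of the series (3.46) into the closed form (3.47) can be checked numerically in the planar case d = 2, where σ₂ = 2π, N^{2,0} = 1, N^{2,n} = 2 for n ≥ 1, and the zonal sum reduces to 1 + 2 Σ_{n≥1} rⁿ cos(nφ). A minimal sketch, assuming only the standard closed form of the disk's Poisson kernel:

```python
import math

def poisson_series(r, phi, terms=200):
    # Partial sum of the expansion (3.46) in d = 2:
    # p(r, phi) = (1/(2*pi)) * (1 + 2 * sum_{n>=1} r^n cos(n*phi))
    s = 1.0
    for n in range(1, terms):
        s += 2.0 * r**n * math.cos(n * phi)
    return s / (2.0 * math.pi)

def poisson_closed_form(r, phi):
    # Closed form (3.47) in d = 2, using |x - theta|^2 = 1 + r^2 - 2r cos(phi)
    return (1.0 - r**2) / (2.0 * math.pi * (1.0 + r**2 - 2.0 * r * math.cos(phi)))

for r in (0.3, 0.7):
    for phi in (0.0, 1.0, 2.5):
        assert abs(poisson_series(r, phi) - poisson_closed_form(r, phi)) < 1e-10
```

The agreement to machine precision reflects the fact that in d = 2 the Gegenbauer sum in (3.46) is just a geometric series in r e^{iφ}.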

3.3.3 Distribution of the Steklov eigenvalues

In this section, we will look at the rate of growth of the Steklov eigenvalues. These rates of growth will allow us to give very precise characterizations of the elements of the trace spaces H^s(∂D) in terms of the Fourier coefficients. To my knowledge, the characterizations of the elements of the trace spaces described in Lemma 3.13 have not been studied before, so although this is not the main focus of these notes, it may be a new result and is indeed helpful in describing some of the later work. The growth rate can be estimated from the counting function N(µ, D), which is known.

First, let us recall the definition of the counting function. If µ0, µ1, µ2,... is the list of eigenvalues, then the counting function

N(µ, D) = #{i ∈ Z+ : µi ≤ µ}.

It is well known [10, 53] that for a domain D with Lipschitz boundary, the counting function for the Steklov eigenvalues is asymptotically given by

N(µ, D) = C_d σ(∂D) µ^{d−1} + o(µ^{d−1}) as µ → ∞. (3.49)

This asymptotic identity can be inverted to obtain growth estimates of the eigenvalues.

In particular, we obtain the asymptotic identity for the eigenvalues µn to be given as

µ_n = C(d, D) n^{1/(d−1)} + o(n^{1/(d−1)}) as n → ∞. (3.50)
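The inversion of (3.49) into (3.50) can be illustrated concretely on the unit disk (d = 2), where the Steklov spectrum 0, 1, 1, 2, 2, ... is known explicitly; there N(µ) = 2⌊µ⌋ + 1 ≈ 2µ, so inverting gives µ_n ≈ n/2, matching (3.50) with exponent 1/(d − 1) = 1. A small sketch, assuming that explicit spectrum:

```python
# Unit disk (d = 2): Steklov spectrum is 0, 1, 1, 2, 2, 3, 3, ...
# (each positive eigenvalue has multiplicity 2), so N(mu) ~ 2*mu and
# mu_n ~ n/2, i.e. (3.50) with C(d, D) = 1/2.
def disk_eigenvalues(count):
    eigs = [0]
    k = 1
    while len(eigs) < count:
        eigs.extend([k, k])  # multiplicity 2
        k += 1
    return eigs[:count]

eigs = disk_eigenvalues(10001)
# counting function at mu = 50: the eigenvalue 0 plus 1..50 twice -> 101
assert sum(1 for m in eigs if m <= 50) == 101
# mu_n / n -> 1/2, consistent with inverting N(mu) ~ 2*mu
assert abs(eigs[10000] / 10000 - 0.5) < 1e-3
```

The same inversion works for any exponent: a counting function growing like µ^{d−1} forces the nth eigenvalue to grow like n^{1/(d−1)}.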

With the estimate in (3.50), we can obtain more precise estimates on the asymptotics of the Fourier coefficients for functions f ∈ H^s(∂D). From (3.36), we know that f ∈ H^s(∂D) if and only if

Σ_{n=0}^∞ (1 + µ_n)^{2s} |f̂_n|² < ∞. (3.51)

By (3.50), we can replace 1 + µ_n with n^{1/(d−1)}, obtaining the equivalent condition

Σ_{n=0}^∞ n^{2s/(d−1)} |f̂_n|² < ∞. (3.52)

Thus the asymptotics of the Fourier coefficients must satisfy, for every ε > 0,

n^{2s/(d−1)} |f̂_n|² = O(1/n^{1+ε}). (3.53)

What we have just shown are the basic estimates proving the following lemma.

Lemma 3.13 A function f is in H^s(∂D) if and only if, for each ε > 0, the Fourier coefficients f̂_n satisfy the growth condition

|f̂_n|² = O(1/n^{[2s+(d−1)(1+ε)]/(d−1)}). (3.54)

3.3.4 Growth rate of generalized harmonic functions on the unit ball

If we consider the case in which the domain is the unit ball B, we can obtain growth estimates of elements u ∈ H^{s+1/2}(B) near the boundary ∂B. We will focus our attention on the generalized harmonic spaces for which s is negative. The analysis will be similar to that of the previous section. I am not aware of any estimates like the one described in Theorem 3.15, so this particular behavior may not have been studied before. Although in the last section we developed asymptotics for general Steklov eigenvalues, in this section we will simply use the known eigenvalues for the unit ball to obtain the convergence condition for the Fourier coefficients and hence the growth rate estimates. The estimates we will obtain rely on a power series having a known closed form expression. In fact, there are multiple formulas that one could use. We will use the following formula, a generalization of the Poisson identity in Lemma 3.4, which can be found in [49].

Lemma 3.14 ([49, Lemma 1, p. 46]) For r ∈ [0, 1) and t ∈ [−1, 1],

Σ_{n=0}^∞ r^n C_n^ν(t) = 1/(1 + r² − 2rt)^ν. (3.55)

The functions C_n^ν(t), ν ≥ 0, are called ultraspherical harmonics or Gegenbauer polynomials. By evaluating the expression in (3.55) at t = 1, we see that the formula simplifies, giving

Σ_{n=0}^∞ r^n C_n^ν(1) = 1/(1 − r)^{2ν}. (3.56)

It is well known [49] that the coefficients C_n^ν(1) are given by the binomial coefficient

C_n^ν(1) = (n + 2ν − 1 choose n), (3.57)

and that these binomial coefficients grow at a polynomial rate in n. Specifically, they grow at a rate given by the asymptotic relationship

C_n^ν(1) ∼ n^{2ν−1}. (3.58)

We now have the basic estimates which will allow us to provide growth estimates for functions in H^s(B). Combining (3.58) and (3.56), we can show that

Σ_{n=0}^∞ r^n n^{2ν−1} ≤ C/(1 − r)^{2ν}, (3.59)

for some constant C > 0.
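The bound (3.59) is easy to probe numerically: the following sketch (with ν = 3/2, chosen arbitrarily) checks that (1 − r)^{2ν} Σ_n rⁿ n^{2ν−1} stays bounded as r approaches 1 from below.

```python
# Numerical check of (3.59): (1 - r)^(2*nu) * sum_n r^n * n^(2*nu - 1)
# should remain bounded as r -> 1-.
def weighted_sum(r, nu, tol=1e-12):
    total, n = 0.0, 1
    while True:
        term = r**n * n**(2 * nu - 1)
        total += term
        if term < tol and n > 10:  # past the peak and negligible
            break
        n += 1
    return (1 - r)**(2 * nu) * total

nu = 1.5
values = [weighted_sum(r, nu) for r in (0.9, 0.99, 0.999)]
assert all(v < 10 for v in values)  # bounded by a modest constant
```

For this ν the exact weighted sum Σ n² rⁿ = r(1 + r)/(1 − r)³ shows the normalized values tend to 2, so the observed bound is consistent with (3.59).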

Theorem 3.15 If u ∈ H^{s+1/2}(B) for s < (d − 1)/2, then there are constants C₁, C₂ > 0 such that for a multi-index α with |α| ≥ 0,

|u(x)| ≤ C₁/(1 − |x|)^{γ₁(ε,s,d)}, x ∈ B, (3.60)

|D^α u(x)| ≤ C₂/(1 − |x|)^{γ₂(ε,s,d,|α|)}, x ∈ B, (3.61)

for every ε > 0, where γ₁ and γ₂ are given by

γ₁(ε, s, d) = (d − 1 − 2s)/2 − ε (3.62)

and

γ₂(ε, s, d, |α|) = (2|α| + d − 1 − 2s)/2 − ε. (3.63)

Proof. If u ∈ H^{s+1/2}(B), there is an f ∈ H^s(∂B) with Fourier coefficients f̂_j^n such that E_s f = u and

Σ_{n=0}^∞ Σ_{j=1}^{N^{d,n}} n^{2s} |f̂_j^n|² < ∞. (3.64)

In order for the sum in (3.64) to be finite, the Fourier coefficients f̂_j^n should decay asymptotically like

Σ_{j=1}^{N^{d,n}} |f̂_j^n|² = O(n^{−2s−1−2ε}), (3.65)

for any ε > 0. Using spherical coordinates, we have the trivial bound based on the series representation (3.48) for u, given by

|u(x)| ≤ Σ_{n=0}^∞ r^n Σ_{j=1}^{N^{d,n}} |Y_j^{d,n}(θ^x)| |f̂_j^n|. (3.66)

Using the Cauchy–Schwarz inequality, we can bound from above the summation over j:

Σ_{j=1}^{N^{d,n}} |Y_j^{d,n}(θ^x)| |f̂_j^n| ≤ (Σ_{j=1}^{N^{d,n}} |Y_j^{d,n}(θ^x)|²)^{1/2} (Σ_{j=1}^{N^{d,n}} |f̂_j^n|²)^{1/2}

≤ C (N^{d,n})^{1/2} (n^{−2s−1−2ε})^{1/2} (3.67)

for some constant C > 0, which leads to

|u(x)| ≤ C Σ_{n=0}^∞ r^n (N^{d,n} n^{−2s−1−2ε})^{1/2}. (3.68)

Now, using the fact that the dimensions N^{d,n} are asymptotically

N^{d,n} = O(n^{d−2}), (3.69)

this gives the estimate

|u(x)| ≤ C′ Σ_{n=0}^∞ r^n n^{(d−1−2s−2ε)/2 − 1}, (3.70)

for a new constant C′ > 0. By the estimate in (3.59), we see that u can be estimated from above once more by

|u(x)| ≤ C′ Σ_{n=0}^∞ r^n n^{(d−1−2s−2ε)/2 − 1} ≤ C″/(1 − r)^{(d−1−2s−2ε)/2}, (3.71)

which concludes the proof of (3.60). For (3.61), we differentiate the series representation, which is permitted because the series will be shown to converge absolutely on compact subsets of B. We then obtain the bound

|D^α u(x)| ≤ Σ_{n=|α|}^∞ Σ_{j=1}^{N^{d,n}} |D^α H_j^{d,n}(x)| |f̂_j^n| (3.72)

and again, applying the Cauchy–Schwarz inequality to the summation over j, together with an application of Lemma 3.3 to the terms |D^α H_j^{d,n}(x)|, we obtain

|D^α u(x)| ≤ A_{|α|,d} Σ_{n=|α|}^∞ |x|^{n−|α|} n^{|α|+(d−2)/2} n^{−s−(1+2ε)/2}

= A_{|α|,d} Σ_{n=|α|}^∞ |x|^{n−|α|} n^{(2|α|+d−2s−1−2ε)/2 − 1}. (3.73)

By (3.59), there are constants C_{|α|,d} > 0, C′_{|α|,d} > 0 such that

|D^α u(x)| ≤ C_{|α|,d} Σ_{k=1}^∞ |x|^k k^{(2|α|+d−2s−1−2ε)/2 − 1} (3.74)

≤ C′_{|α|,d}/(1 − |x|)^{(2|α|+d−2s−1−2ε)/2}. (3.75)

This finishes the proof of (3.61) and of the theorem.

Note that the previous theorem is analogous to Proposition 2.20, which gave upper bounds on the growth rates of functions belonging to the Hardy spaces h^p(B). Additionally, the Hardy spaces were characterized by their boundary data, just as the spaces H^s(B) are characterized by their boundary data. The importance of this section is due to the fact that our interest will lie in random boundary data. As will be shown in Chapter 4, the boundary data actually belong to certain generalized trace spaces, and so, provided we can identify the precise trace spaces for our distributions, we will be able to give precise bounds on the rate of growth.

CHAPTER 4

PROBABILITY

In this chapter, we begin with a brief introduction to the notation and some of the basic definitions that will be used from probability theory. This will include random variables, the multivariate normal distribution, moment generating functions, cumulant generating functions and conditional probability. In addition, we will describe the notion of a random field along with the key theorems related to random fields such as Kolmogorov’s consistency theorem and Kolmogorov’s continuity theorem. These theorems describe conditions for which there are existence and regularity results for random fields. We will finish the chapter by discussing the Markov property and how it is defined in terms of arbitrary random fields.

4.1 Random Variables and Distributions

A probability space (Ω, F, P) is a measure space with P(Ω) = 1. A random variable X is defined to be a measurable function X : Ω → R. The expectation of the random variable X is defined as the integral E[X] = ∫_Ω X(ω) P(dω), when it exists, and we denote the spaces L^p(Ω) for p ≥ 1 simply by L^p. With this notation, L^p consists of all random variables X with finite absolute pth moment E[|X|^p] < ∞, with associated norm ‖X‖_{L^p} = (E[|X|^p])^{1/p}.

Given d random variables X₁, X₂, ..., X_d, we call

X = (X₁, X₂, ..., X_d)^⊥

a vector-valued random variable, or random vector. When they exist, the mean vector and covariance matrix of X are given by

µ = E[X], C = E[(X − µ)(X − µ)^⊥], (4.1)

where the expectations are taken entry by entry.

A random vector X is said to have the d-dimensional multivariate normal distribution with mean vector µ and covariance matrix C if for each Borel set A ⊂ R^d,

P(X ∈ A) = ∫_A (2π)^{−d/2} |C|^{−1/2} exp(−(1/2)(x − µ)^⊥ C^{−1} (x − µ)) dx, (4.2)

where |C| denotes the determinant of C. The integrand in (4.2) is called the density of the multivariate normal distribution. Of course, random variables can take their values in many other spaces, not just Euclidean space, but we will not need much more than this in these notes, as the random fields that we will encounter are based on the notion of a random vector.

For a real-valued random variable X, the moment generating function of X, denoted

M_X(t), when it exists, is defined as

M_X(t) := E[e^{tX}]. (4.3)

When the moment generating function exists, it typically exists for some range of t, say |t| ≤ t₀ for some t₀ > 0. In all cases of interest here, the following series formula for the moment generating function will hold:

M_X(t) = Σ_{n=0}^∞ E[X^n] t^n/n!. (4.4)

From (4.4), we can easily see the basic fact that the kth derivative at 0 returns the kth moment: M_X^{(k)}(0) = E[X^k]. We will actually be more interested in a closely related generating function called the cumulant generating function. The cumulant generating function will be denoted C_X(t) and is related to the moment generating function by the formula C_X(t) := log M_X(t). The cumulant generating function can be written as a series

C_X(t) = Σ_{k=1}^∞ c_k t^k/k!, (4.5)

and the coefficients c_k are called the cumulants of X. It is easy to show, based on the relationship between the moment generating function and the cumulant generating function, that c₁ and c₂ give the mean and variance of X respectively. Also, we will use the fact that if a ∈ R,

C_{aX}(t) = Σ_{k=1}^∞ a^k c_k t^k/k!. (4.6)
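The fact that c₁ and c₂ are the mean and variance can be checked numerically. The sketch below uses a Bernoulli(p) variable, whose MGF 1 − p + pe^t is elementary, and approximates C′_X(0) and C″_X(0) by central differences:

```python
import math

# X ~ Bernoulli(p): M_X(t) = 1 - p + p*e^t, so C_X(t) = log M_X(t).
# The first two cumulants should be the mean p and the variance p(1-p).
p = 0.3
C = lambda t: math.log(1 - p + p * math.exp(t))

h = 1e-5
c1 = (C(h) - C(-h)) / (2 * h)             # central difference for C'(0)
c2 = (C(h) - 2 * C(0.0) + C(-h)) / h**2   # central difference for C''(0)

assert abs(c1 - p) < 1e-6           # first cumulant = mean
assert abs(c2 - p * (1 - p)) < 1e-4 # second cumulant = variance
```

The same finite-difference check works for any distribution with an explicit MGF; only the one-line definition of C changes.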

4.2 Conditional Probability

Because a complete introduction to conditional probability would be rather technical, we will describe just the basic ideas behind conditional probability without worrying about technical issues. Given a probability space (Ω, F, P) and a random variable X defined on it, we first define conditional expectation. For a sub-σ-algebra A ⊂ F, the expected value of X given the σ-algebra A is denoted by E[X|A] and is the A-measurable random variable that satisfies

∫_A X P(dω) = ∫_A E[X|A] P(dω) for all A ∈ A. (4.7)

In words, E[X|A] is the projection of X from L²(F) onto L²(A) and is therefore the closest random variable to X among all A-measurable random variables. It is often thought of as the best prediction for X given knowledge of all possible events in A. With conditional expectation defined, we can now define conditional probability. For an event F ∈ F, the probability of F given A is defined as

P (F |A ) := E[IF |A ], (4.8) where IF is the indicator for the event F . These definitions will be the foundation upon which the topic of the Markov property for various random fields will be discussed. We are now ready to discuss the meaning of a random field.

4.3 Random Fields

Random fields can essentially be thought of as random functions. The parameter space indexing them need only be a topological space, so there is a wide variety of random fields available. The random fields we will be interested in take their values in R and so are real-valued random fields. In this section, we will discuss existence results for random fields through Kolmogorov's consistency theorem, which basically says that under certain conditions on the finite dimensional distributions (fdd's) of the random field, we can build suitable infinite products of probability spaces on which the random field lives. The conditions are very convenient, as they allow one to be concerned only with specifying the fdd's, and hence it is particularly easy to construct Gaussian random fields. The final topic of this section, continuity of the random field, makes sense to discuss since random fields are random functions from one topological space to another. This is the content of Kolmogorov's continuity theorem.

4.3.1 Existence and regularity

Definition 4.1 Given a complete probability space (Ω, F, P) and a topological space T, a measurable mapping X : Ω × T → R is called a real-valued random field.

We think of X as a random function where for fixed ω, X(ω, ·) is a random path indexed by T and for fixed t, X(·, t) is a real-valued random variable. Before going straight into the existence theorem, we need to discuss a bit about fdd’s and the conditions that are expected of them in order to obtain existence of a random field. The fdd’s of a random field are given by a family of probabilities

P{X(t₁) ∈ A₁, ..., X(t_n) ∈ A_n} = F_{t₁,...,t_n}(A₁, ..., A_n). (4.9)

The following definition and theorem can be found in [58], and although we are only concerned with real-valued random fields, the setting there is a bit more abstract, so we will retain the abstraction and assume that the random field takes its values in a measurable space (S, S).

Definition 4.2 A family of finite dimensional distributions is said to be consistent if

(C1) F_{t₁,...,t_n}(A₁, ..., A_{k−1}, ·, A_{k+1}, ..., A_n), for 1 ≤ k ≤ n, is a measure on (S, S).

(C2) F_{t₁,...,t_n}(A₁, ..., A_n) = F_{t_{i₁},...,t_{i_n}}(A_{i₁}, ..., A_{i_n}) for any permutation (i₁, ..., i_n) of (1, ..., n).

(C3) F_{t₁,...,t_{n−1},t_n}(A₁, ..., A_{n−1}, S) = F_{t₁,...,t_{n−1}}(A₁, ..., A_{n−1}).

Theorem 4.3 (Kolmogorov's consistency theorem [58, p. 8]) Assume that T is a topological space and S is a complete separable metric space with Borel σ-algebra S. If a family of functions F_{t₁,...,t_n}(A₁, ..., A_n) satisfies the consistency conditions, then there exist a probability space (Ω, F, P) and a random function X : T × Ω → S such that

P{X(t₁) ∈ A₁, ..., X(t_n) ∈ A_n} = F_{t₁,...,t_n}(A₁, ..., A_n) (4.10)

for all n ≥ 1, t₁, ..., t_n ∈ T and A₁, ..., A_n ∈ S.

Next we look at the conditions that establish continuity of random fields, given by Kolmogorov's continuity theorem. Kolmogorov's continuity theorem is a kind of Sobolev embedding theorem, saying that if a random field X is continuous in L^p for p large enough, then X has a modification that is almost surely continuous.

Theorem 4.4 (Kolmogorov's continuity theorem [29, Theorem 4.3, p. 9]) Suppose that X is a stochastic process on a compact, convex set K ⊂ R^d and there are constants C > 0, p > 0 and γ > d such that for all t, s ∈ K,

E[|X(t) − X(s)|^p] ≤ C|t − s|^γ. (4.11)

Then X has a continuous modification X¯, which satisfies

sup_{t≠s} |X̄(t) − X̄(s)| / |t − s|^θ < ∞, (4.12)

for 0 ≤ θ < (γ − d)/p.

We now discuss Gaussian random fields as these will be our primary interest for the remainder of the notes.

4.3.2 Gaussian random field

A Gaussian random field is a random field for which all finite dimensional distributions have the multivariate normal distribution (4.2).

Definition 4.5 A random field X :Ω × T −→ R is a Gaussian random field if for each collection t1, t2, . . . , tn ∈ T the random vector

(X(t1),X(t2),...,X(tn)) (4.13)

is a Gaussian random vector.

In fact, the construction of a Gaussian random field can be accomplished quite easily with the aid of Kolmogorov's consistency theorem. Indeed, because of Theorem 4.3, all that is needed to specify the fdd's is a mean function m : T → R and a positive definite function C : T × T → R giving the covariance structure of the random field. Once this is done, it is easy to see that the fdd's satisfy the consistency conditions (C1)-(C3).
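This recipe, a mean function plus a positive definite covariance, translates directly into a way of sampling such a field at finitely many points: form C_{ij} = C(t_i, t_j), factor C = LL^⊥, and set X = m + LZ with Z a vector of iid N(0, 1) variables. A minimal sketch, using the Brownian-motion covariance C(s, t) = min(s, t) purely as an example kernel:

```python
import random

# Hand-rolled Cholesky factorization of a small positive definite matrix.
def cholesky(C):
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (C[i][i] - s) ** 0.5
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    return L

ts = [0.5, 1.0, 2.0]
C = [[min(s, t) for t in ts] for s in ts]  # example covariance kernel
L = cholesky(C)

# sanity check: L L^T reproduces the covariance matrix exactly
for i in range(3):
    for j in range(3):
        assert abs(sum(L[i][k] * L[j][k] for k in range(3)) - C[i][j]) < 1e-12

# one sample of the field at the points ts (mean function m = 0)
z = [random.gauss(0, 1) for _ in range(3)]
x = [sum(L[i][k] * z[k] for k in range(3)) for i in range(3)]
```

By construction X = LZ has covariance E[XX^⊥] = LL^⊥ = C, so the fdd's of the sampled vectors agree with the prescribed Gaussian fdd's.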

4.3.3 Reproducing kernel Hilbert space

There is a specific structure that describes a lot about Gaussian processes: the corresponding reproducing kernel Hilbert space. Given a domain D ⊂ R^d, a Hilbert space H(D) of functions with inner product and norm given by

⟨f, g⟩, ‖f‖ = √⟨f, f⟩, f, g ∈ H(D), (4.14)

is called a Reproducing Kernel Hilbert Space (RKHS) if for each x ∈ D, there is a function K(x, ·) ∈ H(D) such that

f(x) = hf(·),K(x, ·)i for all f ∈ H(D). (4.15)

The function K(x, ·) is the so-called reproducing kernel for H(D). Because the reproducing kernel itself belongs to H(D), it satisfies the equation

K(x, y) = hK(y, ·),K(x, ·)i. (4.16)

From the identity in (4.16), we have the following properties for the kernel K.

(1) K is symmetric, K(x, y) = K(y, x) for all x, y ∈ D.

(2) K is positive definite: for x₁, ..., x_k ∈ D and a₁, ..., a_k ∈ R,

Σ_{i,j=1}^k K(x_i, x_j) a_i a_j ≥ 0. (4.17)

(3) K satisfies the inequality

|K(x, y)|2 ≤ K(x, x)K(y, y). (4.18)

Two important points remain to be addressed: the existence and uniqueness of the reproducing kernel K. By the Riesz representation theorem, if the linear functional δ_x : H(D) → R defined by

δ_x(f) = f(x) (4.19)

is continuous, then such a kernel exists. For uniqueness, suppose that K and K′ both satisfy the reproducing kernel property. Then, for each x ∈ D,

‖K(x, ·) − K′(x, ·)‖² = ⟨K(x, ·) − K′(x, ·), K(x, ·)⟩ − ⟨K(x, ·) − K′(x, ·), K′(x, ·)⟩ = 0,

since, by the reproducing property, each inner product equals the value of K(x, ·) − K′(x, ·) at the point x.

Therefore, given a Hilbert space H(D), we know under what conditions it is a RKHS. Conversely, if there is a function K : D × D → R satisfying properties (1)-(3), then there exists a RKHS H(D) with K as its reproducing kernel. This is essentially accomplished by completing a Hilbert space generated by K. Specifically, consider the space of functions

F = { f : D → R : f(·) = Σ_{i=1}^n a_i K(x_i, ·), a_i ∈ R, x_i ∈ D, n ≥ 1 }. (4.20)

Then by defining an inner product on F by

⟨f, g⟩ = ⟨Σ_{i=1}^n a_i K(x_i, ·), Σ_{j=1}^m b_j K(y_j, ·)⟩

:= Σ_{i=1}^n Σ_{j=1}^m a_i b_j K(x_i, y_j), (4.21)

one can check that it satisfies the reproducing kernel property; that is, for f ∈ F,

f(x) = ⟨f(·), K(x, ·)⟩. (4.22)

By completing the space F with the norm ‖·‖ induced by the inner product defined in (4.21), we obtain a Hilbert space of functions H(D) that is a RKHS with reproducing kernel K. It is straightforward to see that the covariance kernel C of a Gaussian process X indexed by a set T satisfies properties (1)-(3), so that to each Gaussian process there corresponds a RKHS H(T). In the context of Gaussian processes, H(T) is called the Cameron–Martin space, and it identifies the process with the elements of H(T). We will identify the Cameron–Martin space for the Gaussian field of interest in Chapter 7.

4.4 Markov Random Fields

Our final topic in this chapter is the Markov property for random fields. The Markov property is probably best known when the random field is defined on the parameter space R₊. In this case, the parameter is often thought of as representing time, and the statement of the Markov property is that, given the “present value” of the random field, the “future” values are independent of the “past.” Another way to say this is that the “future” values of the random field given the “past” actually depend only on the “present” value. We will describe mathematically what is meant by both of the above statements, of course generalizing them to random fields defined on unordered parameter sets. This material, along with the details that are not included here, can be found in [56].

Let A1, B, A2 be σ-algebras of events having the following property: Given the knowledge of all events from B, events A2 ∈ A2 are independent of events A1 ∈ A1. Mathematically, this is described by saying that

P (A1 ∩ A2|B) = P (A1|B) · P (A2|B) (4.23) for all events A1 ∈ A1,A2 ∈ A2. If (4.23) holds, then we say that the σ-algebra B

splits A1 and A2. Using a standard approximation argument, one can show that (4.23) is equivalent to saying that

E[ξ1 · ξ2|B] = E[ξ1|B]E[ξ2|B], a.s. (4.24)

for all ξ₁ ∈ L²(A₁) and ξ₂ ∈ L²(A₂). Conditions (4.23) and (4.24) are basically the statement that A₁ and A₂ are conditionally independent given B. An alternative formulation of this property provides a more direct understanding of the use of the word Markov in the way that it is commonly understood.

Theorem 4.6 ([56, p. 56]) B splits the algebras A1, A2 if and only if the sequence

A1, B, A2 is Markov; that is

P (A|A1 ∨ B) = P (A|B) for A ∈ A2. (4.25)

If we think of A₁, B, A₂ as the events of the “past,” “present” and “future” respectively, then Theorem 4.6 says that, given knowledge of all events up until the present, the future depends only on the present, which is close to the classical notion of the Markov property. We briefly discuss the notions of a random field and a continuous random field as given in [56]. The notion of continuity described below should not be confused with the almost sure continuity coming from Kolmogorov's continuity theorem. The reason for introducing these definitions is that the notation accompanying them is extremely useful in providing clear and concise statements regarding Markov random fields. We denote a family of σ-algebras by A(S) for domains S ⊂ T, where T is a locally compact metric space.

Definition 4.7 Given all open domains S ⊂ T , we call the family A (S), S ⊂ T a random field if it has the following property:

A(S₁ ∪ S₂) = A(S₁) ∨ A(S₂). (4.26)

The random field is said to be continuous if

A(∪_n S_n) = ∨_n A(S_n), S = ∪_n S_n, (4.27)

where the open domains S_n form a countable collection of monotonically increasing domains.

Example 4.8 Let {ξ(t), t ∈ T } be a random function and A (S) be the σ-algebra of events generated by the variables ξ(t), t ∈ S. Then the collection {A (S),S ⊂ T }, forms a continuous random field.

We can now begin to describe what it means to be a Markov random field in a “local” sense. For open sets S ⊂ T, we say that S splits S₁ and S₂ if the σ-algebra A(S) splits A(S₁) and A(S₂). We extend this notion to splitting by a closed set by saying that a closed set Γ splits the domains S₁ and S₂ if S₁ and S₂ are split by every sufficiently small ε-neighborhood Γ^ε of Γ. We will let G denote a system of open domains S ⊂ T, and for each such S denote

S₁ = S, Γ, S₂ = T \ S̄, (4.28)

where Γ is a closed set containing the topological boundary ∂S of the domain S. S₂ is called the complementary domain for S₁, and Γ is a boundary between S₁ and S₂.

Definition 4.9 We call a random field A (S),S ⊂ T , Markov with respect to the system

G , if for every domain S ∈ G , the boundary Γ splits S1 = S and S2 = T \ S¯.

It can be shown that if

A(Γ^ε) = A(S₁ ∩ Γ^ε) ∨ A₊(Γ) ∨ A(S₂ ∩ Γ^ε), (4.29)

where

A₊(Γ) = ∩_{ε>0} A(Γ^ε), (4.30)

then a random field is Markov with respect to a system G if and only if the sequence

A (S1), A+(Γ), A (S2) (4.31)

is Markov for every domain S ∈ G.

The distinction soon to be made between the “local” Markov property defined above and the Markov property given in the next definition is that, so far, the Markov property depends on the system of domains given. We will next extend this definition to the largest system of domains for which the Markov property makes sense. The largest such system is called a complete system of domains and is defined next.

Definition 4.10 A complete system G of open domains is a system that satisfies the following properties:

• It contains all domains that are relatively compact or have compact complements,

• It contains all sufficiently small ε-neighborhoods of the boundary Γ = ∂S, S ∈ G.

And now the Markov property on a complete system is the following.

Definition 4.11 A random field A(S), S ⊂ T, is called Markov if it is Markov with respect to a complete system G of open domains S ⊂ T.

CHAPTER 5

WHITE NOISE

In this chapter, we provide a sequence of steps that allow us to construct the Gaussian white noise integral (or simply, white noise integral) on an L²(X) function space. Because of Kolmogorov's consistency theorem, it is possible to construct the white noise integral in just a couple of lines by specifying its mean function and covariance structure. Instead, we will take a bit more time to construct it in a way analogous to the construction of the abstract integral from measure theory, starting with the white noise “random measure.” The reason for this is that it highlights many of the intuitive features of white noise and the white noise integral, hopefully enhancing an appreciation for its purpose, especially in SPDE's.

5.1 Construction

Consider a σ-finite measure space (X, M, µ) and let F ⊆ M consist of all sets of finite µ-measure. The function C : F × F → R defined as

C(A, B) = µ(A ∩ B), A, B ∈ F,

is positive definite because for A₁, A₂, ..., A_n ∈ F and a₁, a₂, ..., a_n ∈ R,

Σ_{i,j=1}^n a_i a_j C(A_i, A_j) = ∫_X (Σ_{i=1}^n a_i 1_{A_i}(x))² dµ(x) ≥ 0. (5.1)

Because of Theorem 4.3, there is a centered (mean zero) Gaussian process W indexed by F with covariance given by C. Namely, for A, B ∈ F, W has the following distributional properties:

(i) W (A) ∼ N(0, µ(A))

(ii) E[W(A)W(B)] = µ(A ∩ B).

For Gaussian random variables, uncorrelated is equivalent to independent, so (ii) implies that W is independent on disjoint sets. Also, (ii) together with a simple L² calculation shows that W is finitely additive on disjoint sets a.s.; that is, for disjoint sets A₁, A₂, ..., A_n,

W(∪_{j=1}^n A_j) = Σ_{j=1}^n W(A_j) a.s., (5.2)

and so W is almost surely a random measure on algebras of sets. W is not, however, a random measure, as it is not countably additive; but it is an L²-valued measure, and that is all we need to define the white noise integral, or stochastic integral. We start by defining the integral of an indicator function. For the function 1_A : X → R, where A ∈ F, we define the integral of 1_A with respect to W as

∫_X 1_A(x) W(dx) = W(1_A) := W(A). (5.3)

We then extend the definition of the integral to simple functions by linearity. Let

E = { f : f = Σ_{i=1}^n a_i 1_{A_i}, a_i ∈ R, A_i ∈ F, A_i ∩ A_j = ∅ if i ≠ j } (5.4)

be the set of simple functions.

For f ∈ E, we define the integral of f with respect to W as

∫_X f(x) W(dx) = W(f) := Σ_{i=1}^n a_i W(A_i). (5.5)

E[W (f)] = 0 (5.6) and

E[W(f)W(g)] = ∫_X f(x)g(x) dµ(x). (5.7)

We can now view W as a map W : E ⊂ L²(X) → L²(Ω), and because of the identity in (5.7), W is an isometry between E and its image W(E). Because L²(X) and L²(Ω) are complete and E is dense in L²(X), W(f) can be defined for any f ∈ L²(X). In particular, if f_n → f in L²(X), then we define

2 W (f) = lim W (fn) in L (Ω). (5.8) n

The limit is again a Gaussian random variable and we have the following basic properties of the white noise integral.

Proposition 5.1 For f, g ∈ L²(X) and a, b ∈ R, W(f) is a centered Gaussian random variable with covariance structure given by

E[W(f)W(g)] = ∫_X f(x)g(x) dµ(x). (5.9)

Additionally, it is almost surely linear:

W(af + bg) = aW(f) + bW(g) a.s. (5.10)

As suggested earlier in the definitions of the white noise integral, the following notations will have the same meaning:

W(f) = ∫_X f(x) W(dx) for f ∈ L²(X), (5.11)

and will be called the Gaussian white noise integral of f, or the integral of f with respect to white noise.
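The isometry (5.7) can be checked by hand on simple functions. The sketch below, on X = [0, 1] with Lebesgue measure and two hypothetical step functions, compares the covariance predicted by bilinearity and µ(A ∩ B) with a direct Riemann sum of ∫ fg dµ:

```python
# Simple functions on X = [0, 1], represented as (coefficient, interval) pairs.
f_pieces = [(2.0, (0.0, 0.5)), (-1.0, (0.5, 1.0))]   # f = 2 on [0,.5), -1 on [.5,1)
g_pieces = [(3.0, (0.0, 0.25)), (1.0, (0.25, 1.0))]  # g = 3 on [0,.25), 1 on [.25,1)

def overlap(I, J):
    # Lebesgue measure of the intersection of two intervals
    lo, hi = max(I[0], J[0]), min(I[1], J[1])
    return max(0.0, hi - lo)

# covariance from bilinearity and (ii): E[W(f)W(g)] = sum a_i b_j mu(A_i ∩ B_j)
cov = sum(a * b * overlap(I, J) for a, I in f_pieces for b, J in g_pieces)

def step(pieces, x):
    return next((a for a, (lo, hi) in pieces if lo <= x < hi), 0.0)

# direct midpoint Riemann sum of the integral of f*g
n = 100000
integral = sum(step(f_pieces, (k + 0.5) / n) * step(g_pieces, (k + 0.5) / n)
               for k in range(n)) / n

assert abs(cov - integral) < 1e-6   # both equal 1.5 here
```

The two computations agree because (5.7) on simple functions is nothing more than the expansion of ∫ fg dµ over the rectangles A_i ∩ B_j.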

5.2 White Noise as a Random Distribution

We will often make use of Fourier-series estimates in the L²(X) spaces, and so we begin this section by obtaining some basic results investigating the action of white noise on Fourier series. First, because L²(X) is a Hilbert space, it has an orthonormal basis, which we denote by b₁, b₂, b₃, ....

For f ∈ L²(X), we have f = Σ_{n=1}^∞ f̂_n b_n in L²(X), where the Fourier coefficients f̂_n are given by

f̂_n = ⟨f, b_n⟩_{L²(X)} = ∫_X f(x) b_n(x) dµ(x). (5.12)

If g is also in L²(X), then it is known that

∫_X f g dµ = Σ_{n=1}^∞ f̂_n ĝ_n. (5.13)

The way white noise acts through Fourier series will be used in later results, so it will be important to understand convergence, and in particular under what conditions there is an analogue of (5.13) such as

∫_X f(x) W(dx) = Σ_{n=1}^∞ f̂_n Ŵ_n, (5.14)

where Ŵ_n := W(b_n) are the generalized Fourier coefficients of white noise. The sense in which (5.14) holds is described in the following proposition.

Proposition 5.2 For f ∈ L2(X), the white noise integral of f can be written as

W(f) = Σ_{n=1}^∞ f̂_n Ŵ_n, (5.15)

where the convergence of the series is in L².

Proof. By the properties of the white noise integral,

E[W²(f)] = ∫_X f²(x) dµ(x), (5.16)

E[W(f) Σ_{n=1}^∞ f̂_n W(b_n)] = Σ_{n=1}^∞ f̂_n² = ∫_X f²(x) dµ(x), (5.17)

and

E[(Σ_{n=1}^∞ f̂_n W(b_n))²] = Σ_{n=1}^∞ f̂_n² = ∫_X f²(x) dµ(x). (5.18)

Equations (5.16)-(5.18) together imply that

E[(W(f) − Σ_{n=1}^∞ f̂_n W(b_n))²] = 0, (5.19)

and the proof is complete.

Because {b_n}_{n≥1} is orthonormal in L²(X), the random Fourier coefficients {Ŵ_n}_{n≥1} are iid N(0, 1). The law of the iterated logarithm (LIL) gives almost sure bounds on

the growth rate of Ŵ_n, which together with (5.15) suggests that if the Fourier coefficients f̂_n decay fast enough, the convergence in (5.15) will hold almost surely. Indeed, for a space of functions f whose Fourier coefficients decay fast enough, white noise would almost surely be a distribution on that space. We now focus on making this precise, obtaining specific distribution spaces to which white noise belongs. In [60], a rigorous and complete description is given of what is meant by a random distribution and how to show that a given random linear functional is almost surely a random distribution. It is argued in [60] that white noise on a domain D ⊂ R^d belongs to the standard Sobolev space of negative index; the specific result is that W ∈ H^{−n}(D) for n > d/2. We can avoid going into the details of the descriptions given in [60] on random distributions. Instead, what we have developed in Sections 3.3 and 5.2 is enough to describe the trace spaces of negative index to which white noise belongs. The result described in Theorem 5.3 is actually stronger than that given in [60], and follows from very simple arguments. Our notation for white noise will now change from that used up until this point. We will take as our L²-function space, upon which the white noise is defined, the space of boundary functions L²(∂D). We will denote surface white noise by S. The main result that we will need here is the LIL for an iid sequence of N(0, 1) random variables. This is well known and can be found for instance in [29]; it says that for an

iid sequence Ŝ_n of N(0,1) random variables,

lim sup_{n→∞} Ŝ_n/√(2 ln n) = − lim inf_{n→∞} Ŝ_n/√(2 ln n) = 1 a.s.        (5.20)

From this result, the statement

Ŝ_n² = O(ln n) a.s.        (5.21)

follows immediately. From the definition of the ‖·‖_{s,∂D} norm induced by the inner product in (3.36), the white noise S belongs to H^s(∂D) if and only if it has finite norm ‖S‖_{s,∂D} < ∞. If ‖S‖_{s,∂D} is finite, its square is given by

‖S‖²_{s,∂D} = Σ_{n=0}^∞ (1 + µ_n)^{2s} Ŝ_n²,        (5.22)

which in light of (3.50) is finite if and only if

Σ_{n=1}^∞ n^{2s/(d−1)} Ŝ_n² < ∞.        (5.23)

Finally, because of (5.21), we arrive at the condition that S ∈ H^s(∂D) almost surely if

Σ_{n=1}^∞ n^{2s/(d−1)} ln n < ∞.        (5.24)

What we have just shown by combining the estimates (5.21)-(5.24) is that the white noise is almost surely a distribution, belonging to some of the trace spaces H^s(∂D). The details are summarized in the following theorem, with the remainder of the proof following it.

Theorem 5.3 The white noise S based on L²(∂D, σ) is in H^{−s}(∂D) a.s. if and only if s > (d − 1)/2.
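The dichotomy in Theorem 5.3 is easy to see numerically. The sketch below (an illustration, not part of the proof) simulates the partial sums P(s, N) = Σ_{n=1}^N n^{−2s/(d−1)} Ŝ_n² of (5.25) for d = 3, where the critical index is (d − 1)/2 = 1; the seed and the sample sizes are arbitrary choices.

```python
import numpy as np

def partial_norm(s, d, N, rng):
    """Partial sum P(s, N) = sum_{n=1}^N n^(-2s/(d-1)) * S_n^2 from (5.25),
    with iid standard-normal coefficients S_n."""
    n = np.arange(1, N + 1)
    S = rng.standard_normal(N)
    return np.sum(n ** (-2.0 * s / (d - 1)) * S ** 2)

d = 3  # critical index (d - 1)/2 = 1
for N in (10**2, 10**3, 10**4, 10**5):
    rng = np.random.default_rng(0)            # same noise for both indices
    p_super = partial_norm(1.5, d, N, rng)    # s > (d-1)/2: partial sums stay bounded
    rng = np.random.default_rng(0)
    p_crit = partial_norm(1.0, d, N, rng)     # s = (d-1)/2: partial sums grow like ln N
    print(N, round(p_super, 3), round(p_crit, 3))
```

With the same coefficients, the supercritical sums stabilize as N grows while the critical sums keep growing at the logarithmic rate predicted by (5.26).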

Proof. In fact, the estimates (5.21)-(5.24) show that if s > (d − 1)/2, then S ∈ H^{−s}(∂D) a.s. It remains to prove the other direction: if S ∈ H^{−s}(∂D) a.s., then s > (d − 1)/2. This will be accomplished by showing that if s ≤ (d − 1)/2, then S ∉ H^{−s}(∂D). Consider the random partial sum defined by

P(s, N) := Σ_{n=1}^N n^{−2s/(d−1)} Ŝ_n².        (5.25)

The proof will be complete provided that lim_{N→∞} P(s, N) = ∞ a.s. for s ≤ (d − 1)/2. A simple calculation shows that, because the Ŝ_n are iid N(0,1),

E[P(s, N)] = Σ_{n=1}^N n^{−2s/(d−1)}        (5.26)

and

Var[P(s, N)] = 2 Σ_{n=1}^N n^{−4s/(d−1)}.        (5.27)

From (5.26) and (5.27) follow the estimates

E[P(s, N)] ∼ ln N if s = (d − 1)/2,  E[P(s, N)] ∼ N^{(−2s+d−1)/(d−1)} if s < (d − 1)/2,        (5.28)

and

Var[P(s, N)] ∼ N^{(−4s+d−1)/(d−1)}.        (5.29)

It is clear that lim_{N→∞} E[P(s, N)] = ∞ for s ≤ (d − 1)/2. An application of Chebyshev's inequality for a > 0 gives

P(|P(s, N)/E[P(s, N)] − 1| > a) ≤ Var[P(s, N)]/(a² E²[P(s, N)]) ≤ C/N.        (5.30)

Therefore, P(s, N)/E[P(s, N)] → 1 as N → ∞ in probability, and because P(s, N) is increasing in N while E[P(s, N)] tends to infinity, P(s, N) converges to infinity almost surely. The proof is finished.

While Walsh shows that the white noise W belongs to H^{−n}(D) for n > d/2, he does not provide the reverse implication that W ∈ H^{−n}(D) only for n > d/2. Consequently, this is a stronger result than that discussed in [60], and it relies only on the eigenvalue estimates together with the LIL behavior of an iid sequence of Gaussian random variables.

CHAPTER 6

STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS AND RELATED RESEARCH

We begin the chapter on stochastic partial differential equations (SPDE's) by discussing briefly how they have been approached. One of the most comprehensive introductions to the subject is that of John Walsh [60]. It describes an approach to SPDE's that many today consider the standard, treating many different types of operators and many detailed examples. Following Walsh's introduction to SPDE's came a number of results, and the area has grown rather quickly with the discovery of more applications. As there are significant differences when compared with deterministic PDE's, there has been plenty of work toward understanding them. In this chapter, I will summarize briefly some of the results that have been obtained in SPDE's, as this will provide an understanding of the methods used in the later chapters. In order to discuss either parabolic or hyperbolic SPDE's with white noise, it would be necessary to describe the martingale measure, which is derived from the standard white noise. Essentially, the martingale measure allows one to split a white noise integral into a double integral over a "time" parameter and a "space" parameter. Since we make no use of the martingale measure in the rest of these notes, the focus of this chapter will be on previous work on elliptic SPDE's, for which all of the necessary tools have already been developed.

6.1 Elliptic SPDE's Driven by White Noise

We will begin by looking at a fairly elementary example, which is treated in [12, 60, 51]. Consider the equation

−∆u = Ẇ in D,  u = 0 on ∂D,        (6.1)

where D is a regular domain. We would like to formulate various types of solutions and then discuss them. If Ẇ were a nice continuous function on D, then the solution would be given by

u(x) = ∫_D G^D(x, y)Ẇ(y) dy        (6.2)

by Green's representation formula. With the use of integration by parts, we can reformulate the problem to define a weaker notion of solution. Take a function ϕ ∈ C_0^∞(D), multiply the equation in (6.1) by ϕ and integrate over D to obtain

−∫_D ∆u(x)ϕ(x) dx = ∫_D ϕ(x)Ẇ(x) dx,        (6.3)

which is equivalent to

−∫_D u(x)∆ϕ(x) dx = ∫_D ϕ(x)Ẇ(x) dx.        (6.4)

However, because Ẇ is not even a function on D, none of the integrals on the right-hand sides of (6.2)-(6.4) are defined. We simply make distributional sense of the integrals by replacing the random "measure" Ẇ(x)dx by what has already been defined, the white noise W(dx). Doing this, we say that a mild solution of (6.1) is given by

u(x) = ∫_D G^D(x, y) W(dy)        (6.5)

provided that it exists, whereas a weak solution is a random function u that satisfies

−∫_D u(x)∆ϕ(x) dx = ∫_D ϕ(x) W(dx) for all ϕ ∈ C_0^∞(D).        (6.6)

Equation (6.5) is well-defined provided G^D(x, ·) ∈ L²(D). Because the fundamental solution Γ of Laplace's equation from Example 2.3 is in L²_loc(R^d) only when d ≤ 3, together with the relationship in (2.10), G^D(x, ·) ∈ L²(D) only when d ≤ 3. Therefore, there is no mild solution in dimensions d ≥ 4. The following lemma summarizes existence and regularity of the mild solution for d = 2 and d = 3.

Lemma 6.1 ([12, Lemma 2.1]) A unique mild solution to (6.1) exists for d ≤ 3 and is given by

u(x) = ∫_D G^D(x, y) W(dy).        (6.7)

For d = 2, u is Hölder (1 − ε)-continuous for every ε > 0, while for d = 3, u is Hölder (3/8 − ε)-continuous for every ε > 0.
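For intuition, the mild solution (6.5) can be simulated by discretizing the white noise. The sketch below (an illustration under stated assumptions, not taken from [12]) uses the simplest case D = (0, 1) in d = 1, where the Green's function of −d²/dx² with zero boundary values is G(x, y) = min(x, y)(1 − max(x, y)), and checks the variance identity Var[u(x)] = ∫_D G(x, y)² dy implied by the white noise isometry; the grid sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1000                                   # noise grid cells in D = (0, 1)
h = 1.0 / m
y = (np.arange(m) + 0.5) * h               # cell midpoints
x = np.linspace(0.0, 1.0, 101)             # evaluation points

# Green's function of -d^2/dx^2 on (0, 1) with zero boundary values
G = np.minimum(x[:, None], y[None, :]) * (1.0 - np.maximum(x[:, None], y[None, :]))

K = 5000                                   # Monte Carlo sample paths
dW = rng.standard_normal((K, m)) * np.sqrt(h)   # W(cell_i) ~ N(0, h), independent
U = dW @ G.T                               # K samples of u(x) = int G(x, y) W(dy)

emp_var = U.var(axis=0)
exact_var = (G ** 2).sum(axis=1) * h       # int_0^1 G(x, y)^2 dy
print(np.max(np.abs(emp_var - exact_var)))
```

Each row of U is one sample path; the paths vanish at the endpoints, as the zero boundary condition requires, and the empirical variances match the Green's function integral up to Monte Carlo error.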

It is a straightforward exercise to show that the mild solution is a weak solution. In order to make progress on the equation in dimensions d ≥ 4, we need to formulate what is meant by a distribution-valued solution. The following result on the existence of a distribution-valued solution can be found in [60], and is actually a special case of the example provided there.

Proposition 6.2 ([60, Theorem 9.1, p. 417]) If n > d, then equation (6.1) has a unique H^{−n}-valued solution defined by

u(ϕ) = ∫_D G^D(ϕ, y) W(dy) for ϕ ∈ H^n.        (6.8)

One final result to mention here can be found in [56]: when understood as distributions, the solutions to (6.1) are Markov random fields.

6.2 Nonlinear Elliptic Equation Driven by White Noise

An extension of the previous problem was studied in [12], with the main interest in the nonlinear equation driven by white noise,

−∆u + f(u) = g + Ẇ in D,  u = 0 on ∂D,        (6.9)

with D a bounded, regular domain in R^d for dimensions 1 ≤ d ≤ 3, and g ∈ L²(D). The nonlinearity f was a function

f(u)(x) = f(x, u(x)),

with f measurable and locally bounded on D × R; f(x, ·) was also assumed to be continuous and nondecreasing for each x ∈ D. There are two formulations of a solution: a weak form and an integral form. The weak form is obtained by multiplying (6.9) by φ ∈ C²(D) ∩ C(D̄) with φ|_{∂D} ≡ 0 and integrating over D, formally using integration by parts. Doing this, one obtains

−∫_D u(x)∆φ(x) dx + ∫_D f(u)(x)φ(x) dx = ∫_D g(x)φ(x) dx + ∫_D φ(x) W(dx).        (6.10)

An a.s. bounded function u is said to be a weak solution of (6.9) if it satisfies the weak equation (6.10). An a.s. bounded function u from D to R is said to solve the integral form of equation (6.9) if

u(x) + ∫_D G^D(x, y)f(u)(y) dy = ∫_D G^D(x, y)g(y) dy + ∫_D G^D(x, y) W(dy) for x ∈ D.        (6.11)

It can be shown that the two formulations are equivalent; that is, an a.s. bounded function u satisfies (6.11) if and only if it satisfies (6.10). The main result contained in [12] regarding the solution to (6.9) is given in the following theorem.

Theorem 6.3 ([12, Theorem 2.5]) Let D be a bounded domain of R^d, 1 ≤ d ≤ 3, with regular boundary. Then (6.9) possesses a unique solution, which is a.s. continuous on D.
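A hedged sketch of how the integral form (6.11) can be treated numerically: again on the hypothetical one-dimensional domain D = (0, 1), with the illustrative choices f(u) = u³ (continuous and nondecreasing) and g = 0, a simple Picard iteration on the discretized integral equation converges, since the nonlinear term is a small perturbation of the stochastic term here. This iteration is only an illustration; [12] proves existence by monotonicity arguments, not by this scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 400
h = 1.0 / m
z = (np.arange(m) + 0.5) * h
G = np.minimum.outer(z, z) * (1.0 - np.maximum.outer(z, z))   # Green's matrix on the grid

stoch = G @ (rng.standard_normal(m) * np.sqrt(h))   # int G(x, z) W(dz), one sample

# Picard iteration for u(x) + int G(x, z) u(z)^3 dz = int G(x, z) W(dz)
u = stoch.copy()
for _ in range(200):
    u_next = stoch - (G * h) @ (u ** 3)
    done = np.max(np.abs(u_next - u)) < 1e-13
    u = u_next
    if done:
        break

residual = np.max(np.abs(u + (G * h) @ (u ** 3) - stoch))
print(residual)   # the discretized integral equation is satisfied
```

The contraction here comes from the smallness of ∫ G(x, z) dz ≤ 1/8 together with the moderate size of the sample path; for a general nondecreasing f one would instead rely on the monotonicity structure used in [12].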

Just as in the linear case, there is a result on the Markov property in the nonlinear case. This was studied in [31, 32], and the main theorem says the following.

Theorem 6.4 ([31, Theorem 3.1, p. 109]) Suppose that f ∈ C² and that f′ > 0. Then, as a distribution, the solution to (6.9) is a Markov random field if and only if f is an affine function.

6.3 Elliptic PDE's with Distribution-valued Boundary Data

In this section, I will describe briefly some work done on PDE's with distributional boundary data. This research used purely analytic methods and is related to my work mostly through its formulation with distributions on the "boundary." As will soon be seen, the problems were not formulated with distributions literally on the boundary of the domain, but on the boundary in a looser sense: it is essentially the study of a family of PDE's whose solutions are represented by distributions on open sets contained in the domain, these open sets possibly extending out to the boundary. Although these are not overtly viewed as random PDE's, it is noted that the distribution space used can be viewed as a probability space, and the solutions have a representation as a conditional expectation. In [55], Röckner studies a linear PDE with a distribution-valued boundary condition. The setup is as follows. For a domain D ⊂ R^d, d ≥ 2, and an open set Λ ⊂ D, the PDE studied is

Lu = f on Λ,  u = Φ on Λ^c := D \ Λ,        (6.12)

where L is an elliptic operator, f ∈ W_0^{1,2}(Λ)′ and Φ is a distribution. Specifically, L has the form

Lu = L_0 u + u · µ,        (6.13)

where

L_0 u = −Σ_{i,j=1}^d ∂/∂x_i (a_{ij} ∂u/∂x_j),        (6.14)

with the coefficients a_{ij} ∈ L^∞_loc(D), a_{ij} = a_{ji}, and µ a positive Radon measure on D. In what follows, U (respectively U_c) represents the set of all open (relatively compact open) subsets of D, and a function h is L-harmonic if it satisfies Lh = 0. The σ-algebra σ(Λ^c) is generated by the duality products

⟨Φ, v⟩_{D′,D},  Φ ∈ D′,  v ∈ C_0^∞(Λ^c).        (6.15)

We will now state the main result of the paper [55]. The version below is given in [54].

Theorem 6.5 ([54, Theorem 2.3, p. 308]) Let Λ ∈ U. Then there exist a family of probability measures P_Λ, Λ ∈ U, on D′, a linear subspace Ω(Λ) of D′ and a linear map H̄_Λ : Ω(Λ) → D′ such that for each Φ ∈ Ω(Λ):

(i) Ω(Λ) ∈ σ(Λ^c).

(ii) P_{Λ′}(Ω(Λ)) = 1 for every Λ′ ∈ U, Λ ⊂ Λ′.

(iii) H̄_Λ(Φ) is represented on Λ by an L-harmonic function.

(iv) H̄_Λ(Φ) = Φ on the interior int(Λ^c).

(v) If Λ is relatively compact and regular, and Φ is represented by a continuous function, then H̄_Λ(Φ) is the ordinary solution of the Dirichlet problem with boundary data Φ.

(vi) If Φ is represented by an L-harmonic function on Λ, then Φ ∈ Ω(Λ) and H̄_Λ(Φ) = Φ. In particular,

H̄_Λ(H̄_{Λ′}(Φ)) = H̄_{Λ′}(Φ)        (6.16)

for Λ′ ∈ U, Λ ⊂ Λ′, Φ ∈ Ω(Λ′).

(vii) For every v ∈ D, the map Φ ↦ ⟨H̄_Λ(Φ), v⟩_{D′,D} is σ(Λ^c)-measurable if we set H̄_Λ(Φ) = 0 for Φ ∈ D′ \ Ω(Λ), and

⟨H̄_Λ(Φ), v⟩_{D′,D} = E_{Λ′}[Φ(v) | σ(Λ^c)](Φ)        (6.17)

P_{Λ′}-a.s. in Φ ∈ D′ for every Λ′ ∈ U, Λ ⊂ Λ′.

In a subsequent paper [54], Röckner and Zegarlinski studied a class of quasi-linear PDE's with boundary data given by a distribution, using the same type of analysis as in the linear case. Here it was assumed that the domain D is a subset of R². The class of differential operators studied was of the form

L_V u = Lu + V(u),        (6.18)

where V ∈ C¹(R) and L is the elliptic operator given by (6.13). Let Λ ⊂ D be an open set. Then, formally, the equation studied is

L_V u = f on Λ,  u = Ψ on D \ Λ,        (6.19)

where Ψ ∈ D′ and f ∈ W_0^{1,2}(Λ)′.

The rigorous formulation is similar to that of equation (6.12): for Λ ∈ U_c, h an L-harmonic function on Λ and f ∈ W_0^{1,2}(Λ)′, find Ξ ∈ D′ such that

Ξ = u + H̄_Λ(Ψ),  u ∈ W_0^{1,2}(Λ),        (6.20)

and u satisfies

Lu + V(u + h) = f.        (6.21)

Some preliminary assumptions on V that will be used in the main theorem are:

(i) There exist a function g ≥ 0 on R and constants α > 0, p > 1 such that g ∘ h ∈ L^p(Λ) and

|V′(s + t)| ≤ g(t)e^{α|s|} for all s, t ∈ R.        (6.22)

(ii) There exist constants γ ≥ 0 and 0 ≤ β < 1 such that

V(s) − V(t) ≥ −γ((s − t)^β + 1) for all s, t ∈ R, s ≥ t.        (6.23)

Under the assumptions described above, we have the main result of [54].

Theorem 6.6 ([54, Theorem 3.6, p. 317]) Assume that V(h) ∈ W_0^{1,2}(Λ)′. Then for every f ∈ W_0^{1,2}(Λ)′, the problem (6.20)-(6.21) has a solution. Furthermore, this solution is unique if, in addition, there exists a constant γ_1 > 0 such that V(s) − V(t) ≥ −γ_1(s − t) for all s, t ∈ R, s ≥ t, and µ − γ_1 dx is a positive Radon measure.

By assuming a bit more regularity on V and f, we can improve the regularity of the solution u.

Theorem 6.7 ([54, Theorem 3.7, p. 317]) Let V ∈ C¹(R) fulfill (6.22)-(6.23) and let f ∈ L^p_loc(Λ) for some p > 1. Assume that there exist f_1, f_2 ∈ L^q_loc(Λ), q > 2, and f_3 ∈ L^t(Λ), t > 1, such that in the sense of distributions

µ = ∂f_1/∂x_1 + ∂f_2/∂x_2 + f_3.        (6.24)

Let u ∈ W^{1,2}_loc(Λ) be a solution of

Lu + V(u + h) = f;        (6.25)

then u is locally Hölder continuous on Λ.

The following theorem follows from standard results that are found in [37].

Theorem 6.8 ([54, Theorem 3.8, p. 318]) Assume that a_{ij}, f ∈ C^∞(Λ) and that µ has a C^∞ Radon-Nikodym density with respect to dx on Λ. Let V ∈ C^∞(R) fulfill (6.22)-(6.23) and let u ∈ W^{1,2}_loc(Λ) be a solution of

Lu + V(u + h) = f;        (6.26)

then u ∈ C^∞(Λ).

CHAPTER 7

LAPLACE’S EQUATION WITH GAUSSIAN WHITE NOISE ON THE BOUNDARY

7.1 On a General Domain

This chapter describes Laplace's equation with Gaussian white noise on the boundary of a domain D with Lipschitz boundary ∂D. We discuss some of the basic distributional and regularity properties of the solution and see that it has the Markov random field property. We then consider a special case, the unit ball B ⊂ R^d. In this case, we can derive specific formulas for the covariance function and describe different ways to analyze the behavior near the boundary; some of these results parallel the boundary behavior of harmonic functions discussed in Section 2.4. We finish by looking at the derivatives of these solutions to Laplace's equation. We shall see that those derivatives satisfy the same type of SPDE, but with a different boundary noise.

7.1.1 Existence, uniqueness and distributional properties

The main reasons we have for restricting attention to a domain D with Lipschitz boundary are technical. First, many of the results obtained in the analysis of the Steklov eigenvalue problem will be used, and for those results a Lipschitz boundary was sufficient. Also, it is an important fact that the Poisson kernel exists and belongs to L^{2+ε}(σ) for some ε > 0 on domains with Lipschitz boundary; this is [9, Theorem 5.1]. Formally, we are interested in the equation

∆u = 0 in D,  u = S on ∂D,        (7.1)

where S is white noise on the boundary ∂D. Following the formulation of Walsh, we can develop the notion of a weak solution through multiplication of (7.1) by test functions followed by a formal application of integration by parts, removing derivatives of u. In

this case, the integration by parts comes from applying Green's second identity (2.3). The mild solution will be a solution to a modification of Green's representation formula (2.9), as described below. Suppose that u satisfies (7.1) and is C², and that S is a continuous function on ∂D. Note that for ϕ ∈ C_0^∞(D), multiplication by ϕ and integration over D with Green's second identity gives

∫_D u(y)∆ϕ(y) dy = 0.        (7.2)

The problem with this formulation is that it does not depend on the boundary data at all. As was shown in Section 5.2, the boundary white noise belongs to the Sobolev spaces of negative index H^{−s}(∂D) for s > (d − 1)/2, and so we expect the solution u on the boundary to be a distribution in H^{−s}(∂D) for s > (d − 1)/2, agreeing with the given white noise. We will see when we discuss the boundary behavior that this is indeed the case. Still supposing that the boundary white noise S is a nice, continuous function, the notion of a mild solution comes simply from Green's representation formula

u(x) = ∫_{∂D} p^D(x, z)S(z) dσ(z).        (7.3)

As was mentioned earlier, the harmonic measure on D has a Poisson kernel density with respect to the surface measure when the domain has Lipschitz boundary; however, because S is not a function at all, the expression in (7.3) is not well-defined and needs to be modified. The interpretation of S(z)dσ(z) on the right-hand side of (7.3) is as a random "measure" or distribution, and we have already rigorously constructed white noise as a random linear functional on L²(∂D). Therefore, we simply interpret S(z)dσ(z) as the Gaussian white noise S(dz). Making this change leads to the equations

∫_D u(y)∆ϕ(y) dy = 0 a.s. for ϕ ∈ C_0^∞(D),        (7.4)

and

u(x) = ∫_{∂D} p^D(x, z) S(dz).        (7.5)

In fact, (7.5) is well-defined, so we do not need to consider solutions to (7.4): for fixed x, p^D(x, ·) is in L²(∂D), which is all that is needed for the integral to be defined. We take (7.5) to be the definition of the mild solution to (7.1), and so clearly a solution exists. Due to the basic properties of the Gaussian white noise integral, we have the following simple proposition, whose proof follows directly from Proposition 5.1.

Proposition 7.1 The solution u to (7.1) is a centered Gaussian process on D with covariance given by

C(x, y) := E[u(x)u(y)] = ∫_{∂D} p^D(x, z)p^D(y, z) dσ(z).        (7.6)

Additionally, the covariance has a series expansion in terms of the Steklov eigenfunctions,

C(x, y) = Σ_{n=0}^∞ (1 + µ_n)s_n(x)s_n(y).        (7.7)

The covariance kernel C is the reproducing kernel for the RKHS H(D) with inner product ⟨·, ·⟩_∂. We will also see, when we consider the equation on the unit ball, that there is a nice closed-form expression for C. Now, using Kolmogorov's continuity criterion, we can immediately address the issue of continuity of the solution.

7.1.2 Regularity

The facts to follow regarding the regularity of the solution are not common in the study of SPDE's. As has been discussed, much of the literature in SPDE's deals with solutions that are not classical and are at best continuous. These results are in part a consequence of the fact that the white noise commonly appears in the operator. In (7.1), the white noise is on the boundary and the operator is the standard Laplacian.

Theorem 7.2 u is a.s. continuous in D.

Proof. Fix ε > 0 and let x, y ∈ D_ε, where

D_ε = {z ∈ D : d(z, ∂D) > ε}.        (7.8)

Because p^D(·, z) is C^∞ in D and D_ε is bounded, ∇_x p^D(x, z) is uniformly bounded on the closure D̄_ε, so by the mean value theorem there is a constant A_ε > 0 such that

|p^D(x, z) − p^D(y, z)| ≤ A_ε|x − y|.        (7.9)

Therefore,

E[(u(x) − u(y))²] = ∫_{∂D} (p^D(x, z) − p^D(y, z))² dσ(z) ≤ A_ε²|x − y|².        (7.10)

Because u(x) − u(y) is also centered Gaussian, the higher-order moments can be estimated as

E[|u(x) − u(y)|^p] ≤ C_{ε,p}|x − y|^p.        (7.11)

For any p > d, Kolmogorov's continuity theorem implies

|u(x) − u(y)| ≤ K_{ε,p}|x − y|^{1−d/p} a.s.        (7.12)

Consequently, u is continuous a.s. in D_ε. Because a countable union of null sets is null, we can let ε go to 0 through the rationals to obtain the conclusion. In fact, not only is the solution to (7.1) continuous a.s., but the following theorem demonstrates that the mild solution is almost surely a classical solution to (7.1).
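The increment bound (7.10) can be checked by quadrature in the model case d = 2 with D the unit disk, where the Poisson kernel is p^D(x, θ) = (1 − |x|²)/(2π|x − θ|²). The sketch below (an illustration with arbitrarily chosen point pairs in D_ε) computes E[(u(x) − u(y))²]/|x − y|² and shows the ratio stays of moderate size away from ∂D.

```python
import numpy as np

M = 4000
t = 2 * np.pi * (np.arange(M) + 0.5) / M
bd = np.stack([np.cos(t), np.sin(t)], axis=1)    # quadrature nodes on the unit circle
ds = 2 * np.pi / M

def poisson(x):
    """Poisson kernel of the unit disk, p(x, theta) = (1-|x|^2)/(2 pi |x-theta|^2)."""
    x = np.asarray(x, dtype=float)
    return (1.0 - x @ x) / (2 * np.pi * np.sum((bd - x) ** 2, axis=1))

def incr_var(x, y):
    """E[(u(x)-u(y))^2] = int (p(x,theta)-p(y,theta))^2 dsigma, cf. (7.10)."""
    return np.sum((poisson(x) - poisson(y)) ** 2) * ds

for x, y in [((0.0, 0.0), (0.05, 0.0)),
             ((0.3, 0.2), (0.3, 0.21)),
             ((0.5, 0.0), (0.51, 0.0))]:
    d2 = np.sum((np.asarray(x) - np.asarray(y)) ** 2)
    print(x, y, incr_var(x, y) / d2)   # ratio stays bounded away from the boundary
```

The ratios would blow up if the points were pushed toward ∂D, reflecting the dependence of the constant A_ε on ε.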

Theorem 7.3 u is a.s. harmonic in D.

Proof. Let a ∈ D and r > 0 be such that the open ball B_{a,r} centered at a with radius r satisfies B̄_{a,r} ⊆ D. For x ∈ B_{a,r}, let the harmonic lifting of u in B_{a,r} be given by

v(x) = ∫_{∂B_{a,r}} u(ξ)p_r(x, ξ) dσ(ξ),        (7.13)

where p_r is the Poisson kernel for B_{a,r}. I will show that u(a) = v(a) a.s. by showing that E[(u(a) − v(a))²] = 0. Because

u(a) = ∫_{∂B_{a,r}} u(a)p_r(a, ξ) dσ(ξ),

together with (7.13),

E[(u(a) − v(a))²] = E[(∫_{∂B_{a,r}} (u(a) − u(ξ))p_r(a, ξ) dσ(ξ))²]        (7.14)

= K(r, d, a) E[∫_{∂B_{a,r}} ∫_{∂B_{a,r}} (u(a) − u(ξ))(u(a) − u(η)) dσ(ξ)dσ(η)].        (7.15)

However, by the definition of u in (7.5) and Fubini's theorem, the expectation in (7.15) becomes

E[∫_{∂B_{a,r}} ∫_{∂B_{a,r}} (∫_{∂D} (p^D(a, y) − p^D(ξ, y)) S(dy)) (∫_{∂D} (p^D(a, z) − p^D(η, z)) S(dz)) dσ(ξ)dσ(η)]

= ∫_{∂B_{a,r}} ∫_{∂B_{a,r}} ∫_{∂D} [p^D(a, y) − p^D(ξ, y)][p^D(a, y) − p^D(η, y)] dσ(y)dσ(ξ)dσ(η)        (7.16)

= ∫_{∂D} (∫_{∂B_{a,r}} (p^D(a, y) − p^D(ξ, y)) dσ(ξ))² dσ(y) = 0.        (7.17)

The last equality follows from the fact that p^D(·, y) is harmonic. Therefore, u(a) = v(a) a.s., and because of (7.13), u satisfies the averaging property of harmonic functions on ∂B_{a,r}. Because a and r were arbitrary, u satisfies the averaging property of harmonic functions on all spheres ∂B_{a,r} ⊆ D with r rational and a with rational coordinates. Because u is a.s. continuous, u is a.s. harmonic.

At this point, it is easy to show that the solution to (7.1) is a weak solution, as it satisfies (7.4). It turns out that we could have obtained the same regularity result by showing directly that u as defined in (7.5) is a weak solution. This involves a stochastic Fubini lemma. We will need this lemma anyway, so I will prove it and then show how it can be used.
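The vanishing of (7.17) rests on the mean value property of the harmonic function z ↦ p^D(z, y). The quadrature sketch below checks this on the unit disk (d = 2) for an arbitrarily chosen center a, radius r and boundary point y: the average of p(ξ, y) over the circle ∂B_{a,r} matches p(a, y).

```python
import numpy as np

def poisson(z, y):
    """p(z, y) = (1-|z|^2)/(2 pi |z-y|^2) for z in the unit disk, y on its boundary."""
    return (1.0 - z @ z) / (2 * np.pi * np.sum((z - y) ** 2))

a, r = np.array([0.3, -0.2]), 0.25               # circle contained in the disk
y = np.array([np.cos(1.0), np.sin(1.0)])         # an arbitrary boundary point
t = 2 * np.pi * (np.arange(10000) + 0.5) / 10000
circle = a[None, :] + r * np.stack([np.cos(t), np.sin(t)], axis=1)

avg = np.mean([poisson(z, y) for z in circle])   # average of p(., y) over the circle
print(avg, poisson(a, y))                        # the two values agree
```

Since the integrand is smooth and periodic in t, the trapezoidal average converges spectrally fast, so the agreement is essentially to machine precision.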

Lemma 7.4 Let (M, M, µ) and (S, Σ, σ) be σ-finite measure spaces. Let W be a white noise based on L²(σ) and let f : M × S → R be a function such that

∫_S (∫_M f(x, y) dµ(x))² dσ(y) < ∞.        (7.18)

Then a stochastic version of Fubini's theorem holds; that is,

∫_M (∫_S f(x, y) W(dy)) dµ(x) = ∫_S (∫_M f(x, y) dµ(x)) W(dy)        (7.19)

almost surely, and the stochastic integrals in (7.19) are centered Gaussian with variance given by

E[(∫_S ∫_M f(x, y) dµ(x) W(dy))²] = ∫_S (∫_M f(x, y) dµ(x))² dσ(y).        (7.20)

Proof. The condition in (7.18) guarantees the existence of the integrals in (7.19). To show that the integrals in (7.19) are equal a.s., it is enough to show that

E[(∫_M ∫_S f(x, y) W(dy) dµ(x) − ∫_S ∫_M f(x, y) dµ(x) W(dy))²] = 0.        (7.21)

This can be accomplished by expanding the product and taking the expectation in (7.21). I will simplify the computations by looking at the following three estimates. Estimates 1 and 2 follow from the classical Fubini theorem, while Estimate 3 is a consequence of the properties of the Gaussian white noise integral. A straightforward calculation yields

(∫_M ∫_S f(x, y) W(dy) dµ(x) − ∫_S ∫_M f(x, y) dµ(x) W(dy))² =: I_1 + I_2 + I_3,        (7.22)

where

I_1 = (∫_M ∫_S f(x, y) W(dy) dµ(x))²,        (7.23)

I_2 = −2 (∫_M ∫_S f(x, y) W(dy) dµ(x)) (∫_S ∫_M f(x, y) dµ(x) W(dy)),        (7.24)

and

I_3 = (∫_S ∫_M f(x, y) dµ(x) W(dy))².        (7.25)

So the three estimates to be obtained are the expectations of the terms I_1, I_2 and I_3.

Estimate 1:

E[I_1] = E[(∫_M W(f(x, ·)) dµ(x))²]        (7.26)

= E[(∫_M W(f(x_1, ·)) dµ(x_1)) (∫_M W(f(x_2, ·)) dµ(x_2))]        (7.27)

= E[∫_M ∫_M W(f(x_1, ·))W(f(x_2, ·)) dµ(x_1)dµ(x_2)]        (7.28)

= ∫_M ∫_M ∫_S f(x_1, y)f(x_2, y) dσ(y)dµ(x_1)dµ(x_2)        (7.29)

= ∫_S (∫_M f(x, y) dµ(x))² dσ(y).        (7.30)

Estimate 2:

E[I_2] = −2 E[(∫_M W(f(x, ·)) dµ(x)) W(∫_M f(x, ·) dµ(x))]        (7.31)

= −2 E[W(∫_M f(x, ·) dµ(x)) ∫_M W(f(x, ·)) dµ(x)]        (7.32)

= −2 ∫_M ∫_S ∫_M f(x_1, y)f(x_2, y) dµ(x_1)dσ(y)dµ(x_2)        (7.33)

= −2 ∫_S (∫_M f(x, y) dµ(x))² dσ(y).        (7.34)

Estimate 3:

E[I_3] = E[(W(∫_M f(x, ·) dµ(x)))²] = ∫_S (∫_M f(x, y) dµ(x))² dσ(y).        (7.35)

Putting Estimates 1-3 together while expanding and taking expectations of the left-hand side in (7.21) verifies the identity in (7.21): the three expectations sum to zero. The variance estimate follows from the properties of the Gaussian white noise integral found in Proposition 5.1. We are now in a position to show that u is a weak solution.
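Lemma 7.4 becomes a finite-dimensional identity once the white noise is discretized into independent N(0, σ(cell)) increments: the two iterated integrals in (7.19) are then finite sums and agree exactly, while the variance formula (7.20) can be checked by Monte Carlo. The kernel f below is an arbitrary smooth choice on M = S = (0, 1).

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 60                         # grid sizes for M and S
dmu, dsig = 1.0 / m, 1.0 / n          # cell measures on M = S = (0, 1)
xs = (np.arange(m) + 0.5) * dmu
ys = (np.arange(n) + 0.5) * dsig
f = np.exp(-xs)[:, None] * (1.0 + ys)[None, :]    # an arbitrary kernel f(x, y)

K = 10000
dW = rng.standard_normal((K, n)) * np.sqrt(dsig)  # K samples of the noise increments

inner = f.sum(axis=0) * dmu           # int_M f(x, y) dmu(x) for each y
lhs = (f @ dW.T).sum(axis=0) * dmu    # int_M ( int_S f W(dy) ) dmu(x), per sample
rhs = dW @ inner                      # int_S ( int_M f dmu ) W(dy), per sample

print(np.max(np.abs(lhs - rhs)))                  # (7.19): finite sums commute
print(rhs.var(), np.sum(inner ** 2) * dsig)       # Monte Carlo check of (7.20)
```

The first print is zero up to floating-point roundoff, since only the order of summation differs; the second pair agrees up to Monte Carlo error of order K^{-1/2}.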

Proposition 7.5 For any ϕ ∈ C_0^∞(D),

∫_D u(x)∆ϕ(x) dx = 0.        (7.36)

Proof. Because the function p^D(x, z)∆ϕ(x) satisfies condition (7.18) of Lemma 7.4 on D × ∂D, we can apply Lemma 7.4 and the fact that p^D(·, z) is harmonic to obtain

∫_D u(x)∆ϕ(x) dx = ∫_D ∫_{∂D} p^D(x, z) S(dz) ∆ϕ(x) dx,        (7.37)

which is almost surely equal to

∫_{∂D} ∫_D ∆_x p^D(x, z)ϕ(x) dx S(dz) = 0.        (7.38)

In fact, the previous proposition gives us a way to address the regularity of u without an application of Kolmogorov's continuity theorem. As a consequence of the general fact that the Laplacian is a hypoelliptic operator, together with an almost sure application of Weyl's lemma, the regularity of u follows; the only condition that needs to be checked is that u ∈ L¹_loc(D). We recall the statement of Weyl's lemma, which is found in [17].

Theorem 7.6 (Weyl's lemma, [17, Theorem 4.7, p. 118]) Let D ⊂ R^d be open and let u ∈ L¹_loc(D) satisfy

∫_D u(x)∆φ(x) dx = 0 for all φ ∈ C_0^∞(D);        (7.39)

then u ∈ C^∞(D) and ∆u = 0 in D.

It is well known that Poisson integrals of measures can be differentiated under the integral sign. Lemma 7.4 also shows that one can obtain the derivatives of u by differentiating through the stochastic integral, and because u is a classical solution to (7.1), each of the derivatives of u is harmonic in D.

Theorem 7.7 If α = (α_1, . . . , α_d) is a multi-index, then the |α|-th order derivative D^α u of u is given by

D^α u(x) = ∫_{∂D} D_x^α p^D(x, θ) S(dθ),        (7.40)

and D^α u is harmonic in D.

Proof. The fact that D^α u is harmonic follows easily from the fact that u is harmonic. To compute the derivative D^α u, we compute the distributional derivative and show that it agrees with (7.40). For ϕ ∈ C_0^∞(D), by Lemma 7.4, the integration by parts formula and the definition of the distributional derivative, we have

D^α u(ϕ) = (−1)^{|α|} ∫_D u(x) D^α ϕ(x) dx        (7.41)

= (−1)^{|α|} ∫_D ∫_{∂D} p^D(x, z) D^α ϕ(x) S(dz) dx        (7.42)

= ∫_D ∫_{∂D} D_x^α p^D(x, z) S(dz) ϕ(x) dx a.s.,        (7.43)

and that completes the proof.

7.1.3 Markov property

We will now show that the solution u is a Markov random field. Let us recall what it means to be a Markov random field in this particular context. If S ⊂ D is an open set, u(S) will denote the σ-algebra generated by u restricted to S, Sᶜ denotes the complement of S in D, and an ε-neighborhood of the boundary ∂S will be denoted by ∂S_ε. So u having the Markov property in this setting means that u(∂S_ε) splits u(S) and u(Sᶜ) for every ε > 0 and for every S in a complete system of sets as defined in Definition 4.10. In fact, the Markov property here is a bit stronger, as we will prove that u(∂S) splits u(S) and u(Sᶜ). The proof basically relies on the fact that harmonic measure exists on Greenian sets, which in this case are all open subsets of D. Consequently, values of u inside S depend only on the boundary values on ∂S.

Theorem 7.8 The solution u to equation (7.1) is a Markov random field.

Proof. Let S be an open set such that S̄ ⊂ D. Denote the harmonic measure with respect to x ∈ S by µ_S^h(x, ·). We will show that for all x ∈ S and y ∈ Sᶜ, u(x) and u(y) are conditionally independent given the σ-algebra σ(∂S). But u(x) is σ(∂S)-measurable because

u(x) = ∫_{∂S} u(z) µ_S^h(x, dz).        (7.44)

Therefore,

E[u(x)u(y) | σ(∂S)] = u(x)E[u(y) | σ(∂S)]        (7.45)

= E[u(x) | σ(∂S)]E[u(y) | σ(∂S)].        (7.46)

Using exactly the same argument, we see that for x_1, . . . , x_n ∈ S and y_1, . . . , y_m ∈ Sᶜ and two functions f : R^n → R, g : R^m → R, measurable on their respective Borel σ-algebras, we have the identity

E[f(u(x_1), . . . , u(x_n))g(u(y_1), . . . , u(y_m)) | σ(∂S)]        (7.47)

= E[f(u(x_1), . . . , u(x_n)) | σ(∂S)] · E[g(u(y_1), . . . , u(y_m)) | σ(∂S)].        (7.48)

If X and Y are respectively in L²(σ(S)) and L²(σ(Sᶜ)), then they can be approximated by functions of the form

f(u(x_1), . . . , u(x_n)) and g(u(y_1), . . . , u(y_m)).        (7.49)

Because the operator E[ · | σ(∂S)] is a closed operator, by the dominated convergence theorem it follows that

E[XY | σ(∂S)] = E[X | σ(∂S)] · E[Y | σ(∂S)].        (7.50)

This shows that u(S) and u(Sᶜ) are split by u(∂S), and consequently u(S) and u(Sᶜ) are split by u(∂S_ε) for every ε > 0. Because S was an arbitrary open set with S̄ ⊂ D, u is a Markov random field with respect to a complete system of open sets as in Definition 4.10.

We finish this section with an example that is trivial in the sense that the results follow directly from earlier work. The main reason for discussing it is that we will encounter solutions to this type of problem when discussing the solution to the nonlinear Poisson equation. What we are interested in here is making sense of Poisson's equation with white noise on the boundary. Consider the equation

∆u = f in D,  u = S on ∂D,        (7.51)

where f is Hölder continuous. A mild form of the solution is obtained from Green's representation formula,

u(x) = ∫_D f(y)G^D(x, y) dy + ∫_{∂D} p^D(x, θ) S(dθ),        (7.52)

which is well-defined and is simply the superposition of two solutions: one is the solution to Poisson's equation with zero boundary conditions, and the other is the solution to Laplace's equation with white noise on the boundary. Because the stochastic part is the one described earlier, the solution has similar distributional properties.

Proposition 7.9 The solution to (7.51) is normally distributed with mean

∫_D f(y)G^D(x, y) dy        (7.53)

and covariance structure

Cov[u(x), u(y)] = ∫_{∂D} p^D(x, z)p^D(y, z) dσ(z).        (7.54)

Because the stochastic term is almost surely harmonic, the regularity of this solution is straightforward and is determined by the regularity of the mean function in (7.53).

7.2 When the Domain Is a Ball

When the domain is the unit ball B, we can provide concrete formulas for the basic distributional computations. Additionally, the fact that the solution is isotropic on the unit ball adds a symmetry that can be used to give a number of different ways of understanding the behavior of the solution near the boundary. We are also able to use some known results relating the distribution of the supremum of a random field to various quantities involving the geometry of the domain and the statistical properties of the field. These calculations are normally very technical, and there are few general circumstances under which they can be carried out explicitly; one setting in which they are easily obtainable, however, is when the random field is isotropic. Finally, we will examine the derivative, obtaining bounds on its rate of growth, and provide some basic relationships between radial derivatives at different points within the domain.

7.2.1 Distributional results

Recall that by expanding the denominator of the Poisson kernel and then applying Poisson's identity, we can obtain a series representation for the Poisson kernel; applying the Addition theorem then gives another series representation, which is the series written in terms of the Steklov eigenfunctions:

p(x, θ) = (1/σ_d) (1 − |x|²)/(1 + |x|² − 2|x|(ξ_x · θ))^{d/2} = Σ_{n=0}^∞ |x|^n (N_{d,n}/σ_d) P_{d,n}(ξ_x · θ)        (7.55)

= Σ_{n=0}^∞ Σ_{j=1}^{N_{d,n}} |x|^n Y_j^{d,n}(ξ_x)Y_j^{d,n}(θ).        (7.56)

With this, we can now easily obtain a closed-form formula for the covariance of the solution.

Lemma 7.10 For x and y in B,

E[u(x)u(y)] = (1/σ_d) (1 − (|x||y|)²)/(1 + (|x||y|)² − 2|x||y|(ξ_x · ξ_y))^{d/2},        (7.57)

where ξ_x = x/|x| and ξ_y = y/|y|.

Proof. First, note that

E[u(0)u(y)] = ∫_{∂B} p(0, θ)p(y, θ) dσ(θ) = 1/σ_d        (7.58)

for all y ∈ B. If x and y are not zero, then by the orthogonality of the spherical harmonics, Poisson's identity and the Addition theorem, we get

E[u(x)u(y)] = ∫_{∂B} p(x, θ)p(y, θ) dσ(θ)        (7.59)

= Σ_{n=0}^∞ Σ_{j=1}^{N_{d,n}} (|x||y|)^n Y_j^{d,n}(ξ_x)Y_j^{d,n}(ξ_y) = Σ_{n=0}^∞ (|x||y|)^n (N_{d,n}/σ_d) P_{d,n}(ξ_x · ξ_y)        (7.60)

= (1/σ_d) (1 − (|x||y|)²)/[1 + (|x||y|)² − 2|x||y|(ξ_x · ξ_y)]^{d/2}.        (7.61)
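The closed form (7.57) can be verified against the defining integral (7.59) by quadrature in the case d = 2, where σ_d = 2π and the exponent d/2 equals 1. The points x and y below are arbitrary.

```python
import numpy as np

M = 20000
t = 2 * np.pi * (np.arange(M) + 0.5) / M
th = np.stack([np.cos(t), np.sin(t)], axis=1)    # quadrature nodes on the unit circle
ds = 2 * np.pi / M

def p(x):
    """Poisson kernel of the unit disk evaluated at the quadrature nodes."""
    return (1.0 - x @ x) / (2 * np.pi * np.sum((th - x) ** 2, axis=1))

def cov_closed(x, y):
    """Right-hand side of (7.57) with d = 2 and sigma_2 = 2 pi."""
    rr = np.linalg.norm(x) * np.linalg.norm(y)
    c = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return (1.0 - rr ** 2) / (2 * np.pi * (1.0 + rr ** 2 - 2.0 * rr * c))

x, y = np.array([0.4, 0.1]), np.array([-0.2, 0.5])
quad = np.sum(p(x) * p(y)) * ds                  # int p(x, theta) p(y, theta) dsigma
print(quad, cov_closed(x, y))                    # the two values agree
```

Taking y = x in `cov_closed` recovers the variance formula (7.62) for d = 2, namely (1 + |x|²)/(2π(1 − |x|²)).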

Since the covariance depends on x and y only through the lengths |x| and |y| and the inner product ξ_x · ξ_y, and since these terms are rotationally invariant, the solution is isotropic. This is not surprising, since the Poisson kernel is rotationally invariant and the white noise is based on the rotationally invariant spherical measure. We can make some simple and interesting observations from the covariance structure. First, if x = 0, then the covariance is 1/σ_d, independently of y ∈ B. Second, if we let x approach the boundary ∂B, then the covariance becomes the Poisson kernel p(y, ξ_x), which is positive and never zero. Therefore, at each point y ∈ B, the solution u has positive correlation with the values of u at every other point in the ball. If we now consider what happens as the field moves along a ray from the origin to the boundary of the sphere, we obtain a further simplification of the covariance as well as an expression for the variance.

Corollary 7.11 (i) The variance of u(x) is given by

E[u^2(x)] = \frac{1}{\sigma_d}\,\frac{1+|x|^2}{(1-|x|^2)^{d-1}}. \tag{7.62}

(ii) If x and y lie on the same radial path, that is, ξ_x = ξ_y, then

E[u(x)u(y)] = \frac{1}{\sigma_d}\,\frac{1+|x||y|}{(1-|x||y|)^{d-1}}. \tag{7.63}

When compared with the growth results for harmonic functions in the Hardy spaces h^p(B), we see that the standard deviation and the variance of u essentially match the upper bounds on the growth of h^2(B) and h^1(B) functions, respectively, given in Proposition 2.20. As we saw in Chapter 5, we cannot in general write a power series with iid N(0,1) coefficients and expect it to converge in any nice sense. However, u does have a series representation that converges a.s. For many of the calculations it is convenient to use this series representation, which we write in two different ways:

u(x) = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} |x|^n\, Y_j^{d,n}(\xi_x)\,\hat S_j^{d,n} \tag{7.64}

= \frac{1}{\sigma_d}\sum_{n=0}^{\infty} N_{d,n}\,|x|^n\, l_{d,n}(\xi_x), \tag{7.65}

where the \hat S_j^{d,n} are the Fourier coefficients of white noise,

\hat S_j^{d,n} = \int_{\partial B} Y_j^{d,n}(\theta)\, S(d\theta), \tag{7.66}

and the l_{d,n}(\xi_x) are given by

l_{d,n}(\xi_x) = \int_{\partial B} P_{d,n}(\xi_x\cdot\theta)\, S(d\theta). \tag{7.67}

Combining the fact that the spherical harmonics form an orthonormal basis of L^2(\partial B) with the Addition theorem, we obtain the following result concerning the random functions l_{d,n}(\xi_x) in the series representation (7.65).

Lemma 7.12 The family of coefficients \hat S_j^{d,n}, 1 ≤ j ≤ N_{d,n}, n ≥ 0, forms an iid sequence of standard normal random variables, and the sequence l_{d,n} forms an independent sequence of mean-zero Gaussian random functions on ∂B whose covariance structure, for fixed n, is given by

E[l_{d,n}(\xi)\, l_{d,n}(\eta)] = \frac{\sigma_d}{N_{d,n}}\, P_{d,n}(\xi\cdot\eta) \quad\text{for } \xi,\eta\in\partial B. \tag{7.68}
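Lemma 7.12 can be illustrated by simulation in d = 2: there l_{2,n}(ξ) = A cos(nθ_ξ) + B sin(nθ_ξ), where A and B are the independent N(0, π) white-noise integrals of cos(nθ) and sin(nθ), and the predicted covariance is (σ_2/N_{2,n})P_{2,n} = π cos(n(θ_ξ − θ_η)). A Monte Carlo sketch (names are ours):

```python
import math, random

random.seed(0)

def l_n(n, theta, A, B):
    # l_{2,n} at a boundary point with angle theta; A, B are the white-noise
    # integrals of cos(n.) and sin(n.), independent N(0, pi) variables
    return A * math.cos(n * theta) + B * math.sin(n * theta)

n, th1, th2 = 3, 0.4, 1.7
trials = 200_000
acc = 0.0
sd = math.sqrt(math.pi)
for _ in range(trials):
    A, B = random.gauss(0.0, sd), random.gauss(0.0, sd)
    acc += l_n(n, th1, A, B) * l_n(n, th2, A, B)
emp = acc / trials
exact = math.pi * math.cos(n * (th1 - th2))   # (sigma_2 / N_{2,n}) P_{2,n}
assert abs(emp - exact) < 0.05
```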

7.2.2 Boundary behavior of the solution

We describe the behavior of u near the boundary of the unit ball B. We begin by considering how the L^2(\partial B_r)-norms grow as r ↑ 1. They are certainly not bounded, for otherwise u would belong to h^2(B). They are also random but, as will be seen, their behavior can be described with almost-sure precision.

Consider the norms defined by

\|u\|_r := \|u\|_{L^2(\partial B_r)} = \left( \int_{\partial B_r} u^2(y)\, d\sigma_r(y) \right)^{1/2}. \tag{7.69}

We can use the series representation (7.64) for u to obtain a series representation for the square of the norm \|u\|_r^2 in (7.69). Specifically, by squaring, integrating over \partial B_r, and then applying the monotone convergence theorem, we obtain

\|u\|_r^2 = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\sum_{j=1}^{N_{d,n}}\sum_{k=1}^{N_{d,m}} \hat S_j^{d,n}\,\hat S_k^{d,m} \int_{\partial B_r} |x|^{n+m}\, Y_j^{d,n}(\xi_x)\, Y_k^{d,m}(\xi_x)\, d\sigma_r(x)

= r^{d-1}\sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} |\hat S_j^{d,n}|^2\, r^{2n}. \tag{7.70}

This is a weighted series of chi-squared random variables. In general, not much is known about the distribution of such a series, but as the next theorem shows, we can compute the moments of \|u\|_r^2 explicitly through its cumulant-generating function.

Theorem 7.13 The cumulant-generating function of \|u\|_r^2 is

C_{\|u\|_r^2}(t) = \log E[e^{t\|u\|_r^2}] = \sum_{k=1}^{\infty} c_k^u(r)\,\frac{t^k}{k!} \quad\text{for } |t| < 1/2, \tag{7.71}

where the cumulants c_k^u(r) are given by

c_k^u(r) = \frac{2^{k-1}(k-1)!\, r^{k(d-1)}\,(1+r^{2k})}{(1-r^{2k})^{d-1}}. \tag{7.72}

Proof. First note that for |t| < 1/2 and r < 1, the sum

\frac{1}{2} \sum_{n=0}^{\infty}\sum_{k=1}^{\infty} N_{d,n}\,\frac{(2t r^{2n+d-1})^k}{k} \tag{7.73}

is absolutely convergent, so the order of summation can be interchanged. This is because, for |t| < 1/2 and r < 1,

\frac{1}{2} \sum_{k=1}^{\infty}\sum_{n=0}^{\infty} N_{d,n}\,\frac{(2t r^{2n+d-1})^k}{k} \tag{7.74}

= \frac{1}{2} \sum_{k=1}^{\infty} \frac{(2t r^{d-1})^k}{k}\,\frac{1+r^{2k}}{(1-r^{2k})^{d-1}} < \infty. \tag{7.75}

Because the random variables |\hat S_j^{d,n}|^2 are independent with chi-squared distributions, an application of the monotone convergence theorem shows that

E[e^{t\|u\|_r^2}] = \prod_{n=0}^{\infty}\prod_{j=1}^{N_{d,n}} E\bigl[ e^{t r^{2n+d-1}\, |\hat S_j^{d,n}|^2} \bigr]. \tag{7.76}

Because the moment-generating function of a χ²(1) random variable is

M_{\chi^2}(t) = (1-2t)^{-1/2}, \tag{7.77}

we see that

E[e^{t\|u\|_r^2}] = \prod_{n=0}^{\infty}\prod_{j=1}^{N_{d,n}} (1-2r^{2n+d-1}t)^{-1/2} \tag{7.78}

= \prod_{n=0}^{\infty} (1-2r^{2n+d-1}t)^{-N_{d,n}/2}. \tag{7.79}

So by the continuity of log together with the Taylor series expansion of log(1−x), we get

\log E[e^{t\|u\|_r^2}] = \sum_{n=0}^{\infty} \frac{-N_{d,n}}{2}\,\log(1-2r^{2n+d-1}t) \tag{7.80}

= \sum_{n=0}^{\infty} \frac{N_{d,n}}{2} \sum_{k=1}^{\infty} \frac{(2r^{2n+d-1}t)^k}{k} \tag{7.81}

= \frac{1}{2} \sum_{k=1}^{\infty} \frac{(2t r^{d-1})^k}{k} \sum_{n=0}^{\infty} N_{d,n}\,(r^{2k})^n \tag{7.82}

= \frac{1}{2} \sum_{k=1}^{\infty} \frac{(2t r^{d-1})^k}{k}\, \frac{1+r^{2k}}{(1-r^{2k})^{d-1}} \tag{7.83}

= \sum_{k=1}^{\infty} \frac{2^{k-1}(k-1)!\, r^{k(d-1)}\,(1+r^{2k})}{(1-r^{2k})^{d-1}}\,\frac{t^k}{k!}. \tag{7.84}

This concludes the proof.

The simple connection described in Chapter 4 between the cumulant-generating function and the moment-generating function implies that the cumulant-generating function contains full information about the moments. In particular, the mean and variance are the first two cumulants.
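The closed form (7.72) can be checked against the defining weighted chi-squared series (7.70) in dimension d = 2, where N_{2,0} = 1 and N_{2,n} = 2: the k-th cumulant of a χ²(1) variable is 2^{k−1}(k−1)!, and scaling a variable by a weight w multiplies its k-th cumulant by w^k. A minimal numerical sketch (function names are ours):

```python
import math

def cumulant_closed(k, r, d=2):
    # (7.72): c_k^u(r) = 2^{k-1}(k-1)! r^{k(d-1)} (1 + r^{2k}) / (1 - r^{2k})^{d-1}
    return (2**(k - 1) * math.factorial(k - 1) * r**(k * (d - 1))
            * (1 + r**(2 * k)) / (1 - r**(2 * k))**(d - 1))

def cumulant_series(k, r, nmax=5000):
    # k-th cumulant of the weighted chi-squared series (7.70) in d = 2: the
    # k-th cumulant of chi^2(1) is 2^{k-1}(k-1)!, and weight w scales it by w^k
    base = 2**(k - 1) * math.factorial(k - 1)
    N = lambda n: 1 if n == 0 else 2                     # N_{2,n}
    return base * sum(N(n) * r**((2 * n + 1) * k) for n in range(nmax + 1))

for k in (1, 2, 3):
    for r in (0.5, 0.9):
        assert abs(cumulant_closed(k, r) - cumulant_series(k, r)) < 1e-9
```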

Corollary 7.14 For the process \|u\|_r^2 defined above, the expected value and variance are given by

(i)
E[\|u\|_r^2] = \frac{r^{d-1}\,(1+r^2)}{(1-r^2)^{d-1}}; \tag{7.85}

(ii)
Var[\|u\|_r^2] = \frac{2r^{2(d-1)}\,(1+r^4)}{(1-r^4)^{d-1}}. \tag{7.86}

In the following theorem, we show that the norm \|u\|_r^2 has a fixed growth rate with probability 1. This is demonstrated by first showing that the suitably normalized norms converge along a sequence r_n ↑ 1; a convexity argument then finishes the proof.

Theorem 7.15 For d ≥ 2,

\|u\|_r^2\,(1-r)^{d-1} \longrightarrow 2^{2-d} \quad\text{a.s. as } r\uparrow 1. \tag{7.87}

Proof. By Chebyshev's inequality and Corollary 7.14, for each ε > 0 there exists a constant C_{ε,d} > 0 such that

P\!\left( \left| \frac{\|u\|_r^2}{E[\|u\|_r^2]} - 1 \right| > \varepsilon \right) \le C_{\varepsilon,d}\,(1-r)^{d-1}. \tag{7.88}

With the sequence r_n = 1 - \frac{1}{n^2}, we can apply the Borel-Cantelli lemma together with the estimate (7.88) to see that

\frac{\|u\|_{r_n}^2}{E[\|u\|_{r_n}^2]} \longrightarrow 1 \quad\text{a.s. as } n\to\infty. \tag{7.89}

Together with (7.85), this implies that

\|u\|_{r_n}^2\,(1-r_n)^{d-1} \longrightarrow 2^{2-d} \quad\text{a.s. as } n\to\infty. \tag{7.90}

Because r \mapsto \|u\|_r^2 and r \mapsto (1-r)^{-(d-1)} are convex and increasing, (7.90) extends to the full limit r ↑ 1, and the proof is finished.
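A small Monte Carlo simulation, sampling the series (7.70) in d = 2 with a truncation of our choosing, illustrates the growth rate: the sample mean of the squared norm matches (7.85), and for r near 1 the normalized quantity is already close to the limit predicted by Theorem 7.15. This is an illustrative sketch, not part of the proof:

```python
import math, random

random.seed(1)

def sample_norm_sq(r, nmax=300):
    # one sample of ||u||_r^2 via the weighted chi-squared series (7.70) in
    # d = 2: r * sum_n r^{2n} * (N_{2,n} independent squared standard normals)
    total = random.gauss(0.0, 1.0)**2                  # n = 0, N_{2,0} = 1
    for n in range(1, nmax + 1):
        total += r**(2 * n) * (random.gauss(0.0, 1.0)**2 +
                               random.gauss(0.0, 1.0)**2)
    return r * total

r, d = 0.95, 2
exact_mean = r**(d - 1) * (1 + r**2) / (1 - r**2)**(d - 1)   # (7.85)
samples = [sample_norm_sq(r) for _ in range(400)]
mean = sum(samples) / len(samples)
assert abs(mean / exact_mean - 1) < 0.1
```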

Our next result asserts that, as a random distribution, the dilation u_r converges in L^2 to white noise.

Theorem 7.16 Suppose that f ∈ L^2(\partial B). Then the dilation u_r of u satisfies

\int_{\partial B} u_r(\theta)\, f(\theta)\, d\sigma(\theta) \longrightarrow S(f) \quad\text{in } L^2 \text{ as } r\to 1. \tag{7.91}

Proof. This follows from the following three identities:

E\!\left[\left( \int_{\partial B} u_r(\theta)\, f(\theta)\, d\sigma(\theta) \right)^2\right] = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} r^{2n}\, |\hat f_j^{d,n}|^2, \tag{7.92}

E\!\left[\left( \int_{\partial B} u_r(\theta)\, f(\theta)\, d\sigma(\theta) \right) S(f)\right] = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} r^{n}\, |\hat f_j^{d,n}|^2, \tag{7.93}

and

E[S^2(f)] = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} |\hat f_j^{d,n}|^2. \tag{7.94}

By the monotone convergence theorem, we can expand and take limits to show that

\lim_{r\to 1} E\!\left[\left( \int_{\partial B} u_r(\theta)\, f(\theta)\, d\sigma(\theta) - S(f) \right)^2\right] \tag{7.95}

= \lim_{r\to 1} \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} (r^n-1)^2\, |\hat f_j^{d,n}|^2 = 0. \tag{7.96}

The last boundary-behavior result we provide gives an almost-sure upper bound on the growth rate of the solution u as x tends to the boundary. It follows from an application of Theorem 3.15 and Theorem 5.3. The only observation needed is that the constant in Theorem 3.15 will be random, because by the law of the iterated logarithm,

|\hat S_j^{d,n}|^2 \le C' \ln n \quad (n \ge N), \tag{7.97}

for a random variable N. With this modification, the proof of Theorem 7.17 follows from the fact that, by Theorem 5.3, S ∈ H^{-s}(\partial D) for s > (d-1)/2. Therefore, the exponent γ from Theorem 3.15 is given by

\gamma(\varepsilon, s, d) = d - 1 - \frac{\varepsilon}{2}. \tag{7.98}

Note that this is analogous to Proposition 2.20, but the bounds there are not optimal, so this is an improvement.

Theorem 7.17 For every ε > 0, there exists a random variable C_ε > 0 such that u satisfies

|u(x)| \le \frac{C_\varepsilon}{(1-|x|)^{d-1-\varepsilon}} \quad\text{a.s. for all } x\in B. \tag{7.99}

7.2.3 The derivative in the unit ball

In this section we investigate some basic properties of the derivatives of u. It is difficult to describe the behavior of the derivatives in as much detail as we did for the solution u, for two reasons. First, the derivatives D_x^α p(x,θ) of the Poisson kernel do not in general have nice closed-form representations. Second, the derivatives D^α u are not in general isotropic random fields. Because of these two differences between u and D^α u, it is difficult to give formulas as explicit as those of the previous section. What we can say, however, is which spaces H^s(B) the derivatives D^α u belong to, and hence we can bound their growth rates. We will also look at radial derivatives, which do preserve the isotropic property and allow reasonable estimates of the covariance structure of the random field; in fact, we will estimate the covariance between radial derivatives of different orders and at different points of the ball. By Theorem 3.15 and Theorem 7.17, we have the following theorem giving bounds on the growth rate of the derivatives. The proof is essentially that of Theorem 3.15, since it is known that S ∈ H^{-s}(\partial B) for s > (d-1)/2 almost surely.

Theorem 7.18 For each multi-index α and each ε > 0, there exists a random variable C_ε > 0 such that D^α u satisfies

|D^\alpha u(x)| \le \frac{C_\varepsilon}{(1-|x|)^{|\alpha|+d-1-\varepsilon}} \quad\text{a.s. for all } x\in B. \tag{7.100}

We now consider some distributional results concerning the derivative D^α u for a multi-index α. First, a bit of notation: when clarification is necessary, the notation D_x^α indicates that the derivative is taken with respect to the x variable. Clearly, D^α u is a centered Gaussian random field. We are interested in its covariance, which is easily obtained as a series expansion, as the following lemma shows.

Lemma 7.19 For x, y ∈ B, the derivative D^α u has covariance

E[D^\alpha u(x)\, D^\alpha u(y)] = D_x^\alpha D_y^\alpha\, \frac{1-(|x||y|)^2}{\sigma_d\,(1+|x|^2|y|^2-2x\cdot y)^{d/2}}. \tag{7.101}

Proof. By the series expansion for the derivative, Lemma 3.4, Poisson's identity and the fact that the Fourier coefficients \hat S_j^{d,n} are iid with mean zero, we have

E[D^\alpha u(x)\, D^\alpha u(y)] = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} D_x^\alpha H_j^{d,n}(x)\, D_y^\alpha H_j^{d,n}(y) \tag{7.102}

= D_x^\alpha D_y^\alpha \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} H_j^{d,n}(x)\, H_j^{d,n}(y) \tag{7.103}

= D_x^\alpha D_y^\alpha\, \frac{1-(|x||y|)^2}{\sigma_d\,(1+|x|^2|y|^2-2x\cdot y)^{d/2}}. \tag{7.104}

Looking at the covariance of the derivative ∂u/∂x_1 given by Lemma 7.19, one sees that the derivatives are indeed not isotropic in general. For the rest of this section we focus on the radial derivatives, with the following notation: let D_r = D_r^1 denote the derivative with respect to the radial coordinate, and define higher-order derivatives iteratively as usual, so that D_r^k = D_r D_r^{k-1} for k = 1, 2, ....

Theorem 7.20 Assume l ≥ k and that ξ_x = ξ_y = ξ.

(i) For x, y ≠ 0,

E[D_r^l u(x)\, D_r^k u(y)] = \frac{|y|^{l-k}}{\sigma_d} \sum_{n=l}^{\infty} N_{d,n}\, \frac{(n!)^2}{(n-l)!\,(n-k)!}\, (|x||y|)^{n-l}, \tag{7.105}

and there exist constants c = c_{d,l,k} > 0 and C = C_{d,l,k} > 0 such that

\frac{c\,|y|^{l-k}}{(1-|x||y|)^{l+k+d-1}} \le E[D_r^l u(x)\, D_r^k u(y)] \le \frac{C\,|y|^{l-k}}{(1-|x||y|)^{l+k+d-1}}. \tag{7.106}

(ii) If y ≠ 0, then

E[D_r^l u(0)\, D_r^k u(y)] = \frac{l!\,l!\, N_{d,l}}{\sigma_d\,(l-k)!}\, |y|^{l-k}. \tag{7.107}

(iii) For all x ∈ B,

E[D_r^l u(x)\, D_r^k u(0)] = \begin{cases} 0 & \text{if } l > k, \\[4pt] \dfrac{l!\,l!\, N_{d,l}}{\sigma_d} & \text{if } l = k. \end{cases} \tag{7.108}

Proof. From the series expansions for D_r^l u and D_r^k u together with the orthogonality of the centered random variables \hat S_j^{d,n}, a straightforward calculation yields

E[D_r^l u(x)\, D_r^k u(y)] = \frac{1}{\sigma_d^2}\, E\!\left[ \sum_{n=l}^{\infty}\sum_{m=k}^{\infty} \frac{N_{d,n}\, n!\, m!\, N_{d,m}}{(n-l)!\,(m-k)!}\, |x|^{n-l}\, |y|^{m-k}\, l_{d,n}(\xi_x)\, l_{d,m}(\xi_y) \right]

= \frac{1}{\sigma_d} \sum_{n=l}^{\infty} \frac{N_{d,n}\, n!\, n!}{(n-l)!\,(n-k)!}\, |x|^{n-l}\, |y|^{n-k}\, P_{d,n}(\xi_x\cdot\xi_y)

= \frac{|y|^{l-k}}{\sigma_d} \sum_{n=l}^{\infty} \frac{N_{d,n}\,(n!)^2}{(n-l)!\,(n-k)!}\, (|x||y|)^{n-l}\, P_{d,n}(\xi_x\cdot\xi_y). \tag{7.109}

Because

\frac{N_{d,n}\,(n!)^2}{(n-l)!\,(n-k)!} \sim n^{d-2+k+l},

together with Lemma 3.14, if we take ξ_x = ξ_y, then there exist positive constants c_{d,l,k}, c'_{d,l,k}, C_{d,l,k} and C'_{d,l,k} such that the following inequalities hold.

E[D_r^l u(x)\, D_r^k u(y)] \ge \frac{c_{d,l,k}\,|y|^{l-k}}{\sigma_d} \sum_{n=l}^{\infty} n^{k+l+d-2}\, (|x||y|)^{n-l} \tag{7.110}

\ge \frac{c'_{d,l,k}\,|y|^{l-k}}{(1-|x||y|)^{l+k+d-1}} \tag{7.111}

and

E[D_r^l u(x)\, D_r^k u(y)] \le \frac{C_{d,l,k}\,|y|^{l-k}}{\sigma_d} \sum_{n=l}^{\infty} n^{k+l+d-2}\, (|x||y|)^{n-l} \tag{7.112}

\le \frac{C'_{d,l,k}\,|y|^{l-k}}{(1-|x||y|)^{l+k+d-1}}. \tag{7.113}

Turning back to (7.109), we see that if l > k and y = 0, then the covariance is zero. If l = k and y = 0, then we obtain just the leading coefficient. If x = 0, then we obtain the leading term together with the factor |y|^{l-k}.
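The coefficient asymptotics used in the proof can be checked numerically. For d = 3 one has N_{3,n} = 2n + 1, so the claim is that N_{d,n}(n!)²/((n−l)!(n−k)!) grows like n^{d−2+k+l}. A quick sketch (assuming only the standard dimension count N_{3,n} = 2n + 1):

```python
import math

def coeff(n, l, k):
    # N_{3,n} (n!)^2 / ((n-l)! (n-k)!) in dimension d = 3, where N_{3,n} = 2n + 1
    return (2 * n + 1) * math.factorial(n)**2 // (
        math.factorial(n - l) * math.factorial(n - k))

# the proof uses coeff(n) ~ n^{d-2+k+l}; doubling n should then scale the
# coefficient by roughly 2^{d-2+k+l} = 2^4 for d = 3, l = 2, k = 1
l, k, d = 2, 1, 3
for n in (200, 400, 800):
    growth = coeff(2 * n, l, k) / coeff(n, l, k)
    assert abs(growth / 2**(d - 2 + k + l) - 1) < 0.05
```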

We finish this section with a corollary that looks more closely at the result of the previous theorem when the orders of the derivatives are the same.

Corollary 7.21 • For x, y ≠ 0,

\frac{c}{(1-|x||y|)^{2k+d-1}} \le E[D_r^k u(x)\, D_r^k u(y)] \le \frac{C}{(1-|x||y|)^{2k+d-1}} \tag{7.114}

and

\frac{c}{(1-|x|^2)^{2k+d-1}} \le E[(D_r^k u(x))^2] \le \frac{C}{(1-|x|^2)^{2k+d-1}}. \tag{7.115}

• For all x ∈ B,

E[D_r^k u(x)\, D_r^k u(0)] = (k!)^2\, \frac{N_{d,k}}{\sigma_d}. \tag{7.116}

CHAPTER 8

NONLINEAR POISSON EQUATION WITH WHITE NOISE ON THE BOUNDARY

8.1 Existence, Uniqueness and Regularity

A mild solution to the nonlinear Poisson equation

-\Delta u = f(u) \ \text{in } D, \qquad u = S \ \text{on } \partial D, \tag{8.1}

can be formulated, just as has been done numerous times, by using Green's representation formula (2.9); it is given by the integral equation

u(x) = \int_D f(u(y))\, G^D(x,y)\, dy + \int_{\partial D} p^D(x,\xi)\, S(d\xi).

The regularity assumption that we use to obtain existence and uniqueness is that f is a Lipschitz function with Lipschitz constant L(f). Recall that we can and will assume that the first eigenfunction e_1 of the Laplacian is strictly positive in D; the principal eigenvalue will be denoted by λ_d. We will consider e_1 as a weight function and write w = e_1.

Theorem 8.1 If f in (8.1) is Lipschitz continuous with Lipschitz constant

L(f) < \lambda_d,

then there exists a unique solution u ∈ W^{2,1}_{loc}(D) a.s. to the equation

u(x) = \int_D f(u(y))\, G^D(x,y)\, dy + \int_{\partial D} p^D(x,z)\, S(dz). \tag{8.2}

Proof. Existence and uniqueness are established using the contraction mapping theorem in L^1(w\,dx), where w is the principal eigenfunction of the Laplacian. Recall that we can assume w > 0 on D, and we define the following sequence \{u_n\}_{n\ge 0} \subset L^1(w\,dx). Set

u_0(x) = \int_{\partial D} p^D(x,z)\, S(dz).

Then for n ≥ 1, we define u_n recursively by

u_n(x) = \int_D f(u_{n-1}(y))\, G^D(x,y)\, dy + \int_{\partial D} p^D(x,z)\, S(dz).

First, the functions \{u_n\}_{n=0}^{\infty} are well defined. Indeed, u_0 is a.s. harmonic, and for n ≥ 1 the u_n are solutions of the SPDE

-\Delta u_n = f(u_{n-1}) \ \text{in } D, \qquad u_n = S \ \text{on } \partial D, \tag{8.3}

and are almost surely in C^2(D). The next step is to show that the sequence belongs to L^1(w\,dx). This can be accomplished by showing that u_0 ∈ L^1(w\,dx) and then that the increments u_{n+1} - u_n also belong to L^1(w\,dx). The condition u_0 ∈ L^1(w\,dx) is the condition that

\int_D u_0(x)\, w(x)\, dx = \int_D \int_{\partial D} p(x,z)\, w(x)\, S(dz)\, dx < \infty \quad\text{a.s.} \tag{8.4}

By the stochastic Fubini theorem, Lemma 7.4, it is sufficient to check that

\int_{\partial D} \left( \int_D p(x,z)\, w(x)\, dx \right)^2 d\sigma(z) < \infty. \tag{8.5}

But by the classical Fubini theorem,

\int_{\partial D} \left( \int_D p(x,z)\, w(x)\, dx \right)^2 d\sigma(z) \tag{8.6}

= \int_D \int_D w(x_1)\, w(x_2)\, C(x_1, x_2)\, dx_1\, dx_2, \tag{8.7}

where C(x_1, x_2) is the covariance kernel (7.6), with expansion in terms of the Steklov eigenfunctions given by

C(x_1, x_2) = \sum_{n=0}^{\infty} (1+\mu_n)\, s_n(x_1)\, s_n(x_2). \tag{8.8}

In fact, by the dominated convergence theorem and Green's second identity, together with the fact that \{\sqrt{1+\mu_n}\,\Gamma s_n\}_{n\ge 0} is an orthonormal basis for L^2(\partial D), the integral in (8.7) has the series expansion

\frac{1}{\lambda_d^2} \sum_{n=0}^{\infty} \left\langle \frac{\partial w}{\partial n},\, \bar s_n \right\rangle_{\partial D}^2. \tag{8.9}

By (3.36), the integral in (8.7) is equal to the L^2(\partial D) norm

\frac{1}{\lambda_d^2}\, \|\nabla w \cdot n\|_{\partial D}^2.

Now that we have shown that u_0 ∈ L^1(w\,dx), we turn our attention to the increments u_{n+1} - u_n for n ≥ 0. For n ≥ 1, we can apply Fubini's theorem and the fact that f is Lipschitz to obtain the following sequence of inequalities:

\|u_{n+1} - u_n\|_{L^1(w\,dx)} = \int_D \left| \int_D \bigl( f(u_n(y)) - f(u_{n-1}(y)) \bigr)\, G^D(x,y)\, dy \right| w(x)\, dx

\le \int_D \int_D |f(u_n(y)) - f(u_{n-1}(y))|\, w(x)\, G^D(x,y)\, dy\, dx \tag{8.10}

\le L(f) \int_D \int_D |u_n(y) - u_{n-1}(y)|\, w(x)\, G^D(x,y)\, dy\, dx \tag{8.11}

= L(f) \int_D |u_n(y) - u_{n-1}(y)| \int_D w(x)\, G^D(x,y)\, dx\, dy \tag{8.12}

= \frac{L(f)}{\lambda_d} \int_D w(y)\, |u_n(y) - u_{n-1}(y)|\, dy \tag{8.13}

= \frac{L(f)}{\lambda_d}\, \|u_n - u_{n-1}\|_{L^1(w\,dx)}. \tag{8.14}

Similarly,

\|u_1 - u_0\|_{L^1(w\,dx)} \le \frac{L(f)}{\lambda_d}\, \|u_0\|_{L^1(w\,dx)}. \tag{8.15}

Therefore, we have shown that u_n ∈ L^1(w\,dx) for each n ≥ 0. Because L(f) < λ_d, the contraction mapping theorem can be applied, together with the sequence of inequalities (8.10)–(8.14), to obtain existence and uniqueness of a solution u ∈ L^1(w\,dx) to equation (8.2). Note, however, that since the sequence \{u_n\}_{n\ge 1} solves (8.3) and is C^2(D), we have the estimate

\|\Delta u_{n+1} - \Delta u_n\|_{L^1(w\,dx)} = \|f(u_n) - f(u_{n-1})\|_{L^1(w\,dx)} \tag{8.16}

\le \frac{L(f)}{\lambda_d}\, \|u_n - u_{n-1}\|_{L^1(w\,dx)}, \tag{8.17}

and so the sequence \{\Delta u_n\}_{n\ge 1} is Cauchy and has a limit. The fact that the limit is f(u) comes from the estimate

\|\Delta u_{n+1} - f(u)\|_{L^1(w\,dx)} \le \frac{L(f)}{\lambda_d}\, \|u_n - u\|_{L^1(w\,dx)}. \tag{8.18}

Therefore, u ∈ W^{2,1}_{loc}(D), and the proof is finished.

For regularity, we need some well-known results on Sobolev embeddings, as well as the Calderon-Zygmund inequalities that relate the regularity of an L^p(\mathbb{R}^d) function to that of its Newtonian potential. These were stated earlier as the Rellich-Kondrachov theorem (Theorem 2.1) and Theorem 2.15. The proof of the next theorem follows a basic bootstrapping argument.

Theorem 8.2 The solution u to (8.1) is in C^2(D). If, in addition to the Lipschitz condition, f ∈ C^k(\mathbb{R}), then u ∈ C^{k+2}(D); if f ∈ C^\infty(\mathbb{R}), then u ∈ C^\infty(D).

Proof. First, note that if D' is an open set with \overline{D'} \subset D, then for x ∈ D' we can write u as

u(x) = \int_{D'} f(u(y))\, G^{D'}(x,y)\, dy + \int_{\partial D'} u(z)\, p^{D'}(x,z)\, d\sigma(z). \tag{8.19}

That is,

u = N(f,u) + u_0, \tag{8.20}

where

N(f,u)(x) = \int_{D'} f(u(y))\, G^{D'}(x,y)\, dy \tag{8.21}

and

u_0(x) = \int_{\partial D'} u(z)\, p^{D'}(x,z)\, d\sigma(z). \tag{8.22}

Because u_0 is almost surely harmonic on D', it is C^\infty(D') a.s. Therefore, the regularity of u in D' depends on the regularity of N(f,u) in D'. The regularity of N(f,u), in turn, depends on that of the Newtonian potential N f(u) on D' because of (1.11)–(1.12). We will show that N f(u) ∈ C^2(D') by bootstrapping some well-known regularity results relating the L^p and W^{k,p} spaces, namely the Calderon-Zygmund and Rellich theorems.

The first observation is that, because f is Lipschitz, u and f(u) belong to the same L^p(D') spaces. The first step is to show that u ∈ W^{2,d}(D'), which implies by Rellich's theorem that u is continuous on \overline{D'}. We do this by induction, with the induction steps as follows:

(1a) Starting with u ∈ W^{2,p}(D') ⊂ W^{1,p}(D'), use Rellich's theorem to show that u ∈ L^{p^*}(D'), where p^* = pd/(d-p).

(1b) Apply the Calderon-Zygmund theorem to show that N f(u), and hence u, is in W^{2,p^*}(D') ⊂ W^{1,p^*}(D'). Set p = p^*.

Starting with p = 1, which we know to hold by Theorem 8.1, after repeating (1a)–(1b) d−1 times we conclude that u ∈ W^{2,d}(D'), and hence by Rellich's theorem u ∈ C^{0,α}(\overline{D'}) for α < 1. Finally, applying Lemma 2.14, we see that N f(u), and hence u, is C^2(D'). Because D' ⊂ D was arbitrary, u ∈ C^2(D). A number of theorems (Theorem 6.17 or Theorem 9.19 in [37]) can then be applied directly to finish the proof, showing that if f ∈ C^k(\mathbb{R}), then u ∈ C^{k+2}(D), and if f ∈ C^\infty(\mathbb{R}), then u ∈ C^\infty(D).
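The Picard iteration used in the proof of Theorem 8.1 can be illustrated in the simplest deterministic setting: d = 1, D = (0, 1), where the Green's function is G(x, y) = x(1−y) for x ≤ y and y(1−x) otherwise, the principal eigenvalue is π², and deterministic boundary values stand in for white noise. All names are ours, and the grid discretization is a plain sum; this is a sketch of the contraction scheme, not of the stochastic problem:

```python
import math

def greens(x, y):
    # Green's function of -d^2/dx^2 on (0, 1) with zero boundary values
    return x * (1.0 - y) if x <= y else y * (1.0 - x)

def picard_solve(f, a, b, m=121, iters=25):
    # fixed-point iteration u_{n+1}(x) = int_0^1 G(x,y) f(u_n(y)) dy + (harmonic
    # part), mirroring the recursion in the proof; the harmonic part here is
    # simply the line through the boundary values a and b
    xs = [i / (m - 1) for i in range(m)]
    h = 1.0 / (m - 1)
    u0 = [a + (b - a) * x for x in xs]
    u = u0[:]
    for _ in range(iters):
        u = [u0[i] + h * sum(greens(xs[i], xs[j]) * f(u[j]) for j in range(m))
             for i in range(m)]
    return xs, u

# f(u) = sin(u)/2 has Lipschitz constant 1/2 < pi^2 (the principal eigenvalue
# of -d^2/dx^2 on (0,1)), so the contraction condition of Theorem 8.1 holds
xs, u = picard_solve(lambda v: 0.5 * math.sin(v), a=0.0, b=1.0)

# the fixed point approximately satisfies -u'' = f(u) in the interior
h = xs[1] - xs[0]
for i in range(1, len(xs) - 1):
    residual = -(u[i - 1] - 2 * u[i] + u[i + 1]) / h**2 - 0.5 * math.sin(u[i])
    assert abs(residual) < 1e-6
```

Since the contraction factor is roughly L(f)/λ_1 ≈ 0.05 here, a couple of dozen iterations already reproduce the fixed point to machine precision.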

8.2 The Helmholtz Equation with White Noise on the Boundary

In this section we look at one final example: the Helmholtz equation on the unit ball B with white noise on the boundary. Existence, uniqueness and regularity do not follow directly from Theorem 8.1 and Theorem 8.2, because, as we will see, the Lipschitz condition is not met. The problem that we are interested in formulating is given by the equation

\Delta H = -\alpha H \ \text{in } B, \qquad H = S \ \text{on } \partial B. \tag{8.23}

There are two main reasons for being interested in the Helmholtz equation. One is that we have by now developed almost all of the necessary background to study it; the other is that its solution turns out to be remarkably similar to the solution of Laplace's equation with white noise on the boundary. Indeed, it is noted in [49] that solutions of the Helmholtz equation are sometimes called metaharmonic. Here we will see that, in fact, the results we found on the behavior near the boundary hold for the solution of (8.23) as well. The main difference is that the various analogues no longer have simple closed-form expressions. One issue that we must address before analyzing the solution is a slight technicality concerning the parameter α. In Chapter 3, we saw how to solve the PDE

\Delta h = -\alpha h \quad\text{on } B \tag{8.24}

for α > 0, but not for α < 0. Fortunately, one simply needs to modify the solution of the case α > 0 slightly to obtain what is needed. The effect of the sign of α is practically negligible for the analysis done here: the sign of α essentially indicates whether the solution is composed of the Bessel functions introduced in Section 3.2.1 or of the modified Bessel functions to be introduced shortly. The proofs in this section are carried out assuming α > 0; the case α < 0 is essentially the same. It is easy to show that the modified Bessel function

I_{d,n}(r) = (-i)^n\, J_{d,n}(ir)

satisfies the equation

\left[ \frac{\partial^2}{\partial r^2} + \frac{d-1}{r}\,\frac{\partial}{\partial r} - \left( 1 + \frac{n(n+d-2)}{r^2} \right) \right] I_{d,n} = 0. \tag{8.25}

Consequently, a solution to Δh = h is given by

\Delta\bigl( I_{d,n}\, Y_j^{d,n} \bigr) = I_{d,n}\, Y_j^{d,n}. \tag{8.26}

So, taking

h(x) = I_{d,n}(\sqrt{-\alpha}\, r)\, Y_j^{d,n}(\theta),

we obtain a solution to (8.24). This analysis then leads to the following simple theorem.

Theorem 8.3 Provided that α ∈ \mathbb{R} is such that √α is not a positive zero of a Bessel function, the equation

\Delta H = -\alpha H \ \text{in } B, \qquad H = S \ \text{on } \partial B, \tag{8.27}

has a solution H ∈ C^\infty(B) that is white noise on the boundary ∂B, given for x = rθ by

H(x) = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} \frac{J_{d,n}(\sqrt{\alpha}\, r)}{J_{d,n}(\sqrt{\alpha})}\, Y_j^{d,n}(\theta)\, \hat S_j^{d,n} \quad\text{if } \alpha > 0, \tag{8.28}

H(x) = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} \frac{I_{d,n}(\sqrt{-\alpha}\, r)}{I_{d,n}(\sqrt{-\alpha})}\, Y_j^{d,n}(\theta)\, \hat S_j^{d,n} \quad\text{if } \alpha < 0. \tag{8.29}

We will use Lemma 3 from [49] to prove Theorem 8.3.

Lemma 8.4 With r ∈ \mathbb{R}_+ fixed,

J_{d,n}(r) \sim \frac{\Gamma(\tfrac{d}{2})\,(\tfrac{r}{2})^n}{\Gamma(n+\tfrac{d}{2})} \quad\text{as } n\to\infty, \tag{8.30}

I_{d,n}(r) \sim \frac{\Gamma(\tfrac{d}{2})\,(\tfrac{r}{2})^n}{\Gamma(n+\tfrac{d}{2})} \quad\text{as } n\to\infty. \tag{8.31}

In fact, a slightly stronger statement is shown in [49]: Lemma 8.4 holds uniformly in r ∈ [0, r_0], for each fixed r_0 > 0. Because of this stronger fact, we have the following consequence of Lemma 8.4. For α > 0 and as n → ∞,

\frac{J_{d,n}(\sqrt{\alpha}\, r)}{J_{d,n}(\sqrt{\alpha})} \sim r^n \quad\text{and}\quad \frac{I_{d,n}(\sqrt{\alpha}\, r)}{I_{d,n}(\sqrt{\alpha})} \sim r^n. \tag{8.32}

Among other things, these estimates allow us to prove that the series representations (8.28) and (8.29) are almost surely analytic functions.

Proof [Theorem 8.3]: For α > 0, the series in (8.28) is analytic because, by (8.32), there exists a constant C > 0 such that

\sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} \frac{J_{d,n}(\sqrt{\alpha}\, r)}{J_{d,n}(\sqrt{\alpha})}\, |Y_j^{d,n}(\theta)|\, |\hat S_j^{d,n}| \tag{8.33}

\le C \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} r^n\, |Y_j^{d,n}(\theta)|\, |\hat S_j^{d,n}|, \tag{8.34}

which is almost surely analytic, as was shown in Chapter 6. Because (8.28) is analytic, it can be differentiated term by term, and it then becomes straightforward to show that (8.28) satisfies the Helmholtz equation. Formally taking r = 1, it is clear that the series becomes the series representation of the white noise S on the boundary ∂B, so (8.28) has the right boundary condition.
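The ratio estimate (8.32) can be checked numerically in d = 2, where the radial functions coincide (up to a normalization that cancels in the ratio) with the classical Bessel functions J_n. To avoid underflow at large n we compute the normalized series J_n(x)/((x/2)^n/n!), which tends to 1 as n → ∞; names are ours:

```python
import math

def bessel_reduced(n, x, terms=30):
    # J_n(x) / ((x/2)^n / n!) = sum_m (-1)^m (x/2)^{2m} n! / (m! (n+m)!)
    s, term = 0.0, 1.0
    for m in range(terms):
        s += term
        term *= -(x / 2.0)**2 / ((m + 1) * (n + m + 1))
    return s

def ratio(n, alpha, r):
    # J_n(sqrt(alpha) r) / J_n(sqrt(alpha)), computed as r^n times a ratio of
    # the normalized series so that large n causes no floating-point underflow
    a = math.sqrt(alpha)
    return r**n * bessel_reduced(n, a * r) / bessel_reduced(n, a)

alpha, r = 1.0, 0.6
for n in (30, 60, 120):
    # (8.32): the ratio behaves like r^n for large n
    assert abs(ratio(n, alpha, r) / r**n - 1) < 0.01
```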

We now briefly describe the distributional properties of the solution to the Helmholtz equation.

Proposition 8.5 The solution to (8.27) is a centered Gaussian random field and, for x, y ∈ B, with θ_z = z/|z|:

(a) If α > 0 and √α is not a zero of a Bessel function, then

E[H(x)H(y)] = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} \frac{J_{d,n}(\sqrt{\alpha}\,|x|)}{J_{d,n}(\sqrt{\alpha})}\, \frac{J_{d,n}(\sqrt{\alpha}\,|y|)}{J_{d,n}(\sqrt{\alpha})}\, Y_j^{d,n}(\theta_x)\, Y_j^{d,n}(\theta_y) \tag{8.35}

= \frac{1}{\sigma_d} \sum_{n=0}^{\infty} \frac{N_{d,n}}{J_{d,n}^2(\sqrt{\alpha})}\, J_{d,n}(\sqrt{\alpha}\,|x|)\, J_{d,n}(\sqrt{\alpha}\,|y|)\, P_{d,n}(\theta_x\cdot\theta_y).

(b) If α < 0, then

E[H(x)H(y)] = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} \frac{I_{d,n}(\sqrt{-\alpha}\,|x|)}{I_{d,n}(\sqrt{-\alpha})}\, \frac{I_{d,n}(\sqrt{-\alpha}\,|y|)}{I_{d,n}(\sqrt{-\alpha})}\, Y_j^{d,n}(\theta_x)\, Y_j^{d,n}(\theta_y) \tag{8.36}

= \frac{1}{\sigma_d} \sum_{n=0}^{\infty} \frac{N_{d,n}}{I_{d,n}^2(\sqrt{-\alpha})}\, I_{d,n}(\sqrt{-\alpha}\,|x|)\, I_{d,n}(\sqrt{-\alpha}\,|y|)\, P_{d,n}(\theta_x\cdot\theta_y).

Proof. The formulas (8.35)–(8.36) follow from the orthogonality of the iid Gaussian variables in the series representation. The fact that H is centered Gaussian follows from the dominated convergence theorem and the fact that limits of Gaussian random variables are again Gaussian.

The following corollary is a direct consequence of the previous proposition.

Corollary 8.6 For every x ∈ B:

(a) If α > 0 and √α is not a zero of a Bessel function, then

Var[H(x)] = \frac{1}{\sigma_d} \sum_{n=0}^{\infty} \frac{N_{d,n}}{J_{d,n}^2(\sqrt{\alpha})}\, J_{d,n}^2(\sqrt{\alpha}\,|x|). \tag{8.37}

(b) If α < 0, then

Var[H(x)] = \frac{1}{\sigma_d} \sum_{n=0}^{\infty} \frac{N_{d,n}}{I_{d,n}^2(\sqrt{-\alpha})}\, I_{d,n}^2(\sqrt{-\alpha}\,|x|). \tag{8.38}

From the estimates in (8.32), we know that there exist constants C_1, C_2, C_3, C_4 > 0 such that

\frac{C_1\,(1-|x||y|)}{(1+|x|^2|y|^2-2x\cdot y)^{d/2}} \le E[H(x)H(y)] \le \frac{C_2\,(1-|x||y|)}{(1+|x|^2|y|^2-2x\cdot y)^{d/2}},

and

\frac{C_3\,(1+|x|^2)}{(1-|x|^2)^{d-1}} \le Var[H(x)] \le \frac{C_4\,(1+|x|^2)}{(1-|x|^2)^{d-1}}.

This shows that the covariance structures of the solutions to Laplace's equation and to the Helmholtz equation with white noise on the boundary are very similar.

8.2.1 Boundary behavior

In this last section, we describe the boundary behavior of (8.28)–(8.29). Each of the results described here has an analogue in the section on the boundary behavior of the solution to Laplace's equation. As mentioned earlier, the proofs are given for α > 0; for α < 0 a similar argument holds. For H, too, there is an almost-sure growth rate for its L^2(\partial B_r) norms as r ↑ 1. Consider the norms defined by

\|H\|_r := \|H\|_{L^2(\partial B_r)} = \left( \int_{\partial B_r} H^2(y)\, d\sigma_r(y) \right)^{1/2}. \tag{8.39}

We can use the series representation for H to obtain a series representation for the square of the norm \|H\|_r^2. Recall that

H(x) = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} \frac{J_{d,n}(\sqrt{\alpha}\, r)}{J_{d,n}(\sqrt{\alpha})}\, Y_j^{d,n}(\theta)\, \hat S_j^{d,n}. \tag{8.40}

By the monotone convergence theorem, the square of the norm is then given by

\|H\|_r^2 = r^{d-1} \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} |\hat S_j^{d,n}|^2\, \frac{J_{d,n}^2(\sqrt{\alpha}\, r)}{J_{d,n}^2(\sqrt{\alpha})}. \tag{8.41}

This is a weighted series of chi-squared random variables. We first obtain the cumulant-generating function of \|H\|_r^2, from which we can derive expressions for the mean and variance.

Theorem 8.7 \|H\|_r^2 has cumulant-generating function C_{\|H\|_r^2} given by

C_{\|H\|_r^2}(t) = \sum_{k=1}^{\infty} c_k^H(r)\, \frac{t^k}{k!}, \tag{8.42}

∞ 2k √ X Jd,n( αr) cH (r) = 2k−1rk(d−1)(k − 1)! N d,n √ , (8.43) k J 2k ( α) n=0 d,n √ provided that α > 0 and α is not a zero of a Bessel function, and

∞ 2k √ X Id,n( −αr) cH (r) = 2k−1rk(d−1)(k − 1)! N d,n √ , if α < 0. (8.44) k I2k ( −α) n=0 d,n

Proof. By the monotone convergence theorem, and because the |\hat S_j^{d,n}|^2 are iid chi-squared random variables, we have

E[e^{t\|H\|_r^2}] = \prod_{n=0}^{\infty}\prod_{j=1}^{N_{d,n}} E\!\left[ \exp\!\left( t r^{d-1}\, \frac{J_{d,n}^2(\sqrt{\alpha}\, r)}{J_{d,n}^2(\sqrt{\alpha})}\, |\hat S_j^{d,n}|^2 \right) \right] \tag{8.45}

= \prod_{n=0}^{\infty}\prod_{j=1}^{N_{d,n}} \left( 1 - 2t r^{d-1}\, \frac{J_{d,n}^2(\sqrt{\alpha}\, r)}{J_{d,n}^2(\sqrt{\alpha})} \right)^{-1/2} \tag{8.46}

= \prod_{n=0}^{\infty} \left( 1 - 2t r^{d-1}\, \frac{J_{d,n}^2(\sqrt{\alpha}\, r)}{J_{d,n}^2(\sqrt{\alpha})} \right)^{-N_{d,n}/2}. \tag{8.47}

Therefore, the cumulant-generating function is given by

C_{\|H\|_r^2}(t) = \log E[e^{t\|H\|_r^2}].

By the monotone convergence theorem, the continuity of log and the Taylor series expansion of log(1−x), we see that

C_{\|H\|_r^2}(t) = -\sum_{n=0}^{\infty} \frac{N_{d,n}}{2}\, \log\!\left( 1 - 2t r^{d-1}\, \frac{J_{d,n}^2(\sqrt{\alpha}\, r)}{J_{d,n}^2(\sqrt{\alpha})} \right) \tag{8.48}

= \sum_{n=0}^{\infty}\sum_{k=1}^{\infty} \frac{N_{d,n}}{2k}\, \left( 2t r^{d-1}\, \frac{J_{d,n}^2(\sqrt{\alpha}\, r)}{J_{d,n}^2(\sqrt{\alpha})} \right)^k \tag{8.49}

= \sum_{k=1}^{\infty}\sum_{n=0}^{\infty} \frac{N_{d,n}}{2k}\, \left( 2t r^{d-1}\, \frac{J_{d,n}^2(\sqrt{\alpha}\, r)}{J_{d,n}^2(\sqrt{\alpha})} \right)^k \tag{8.50}

= \sum_{k=1}^{\infty} \frac{2^{k-1}\, r^{k(d-1)}\, t^k}{k} \sum_{n=0}^{\infty} N_{d,n}\, \frac{J_{d,n}^{2k}(\sqrt{\alpha}\, r)}{J_{d,n}^{2k}(\sqrt{\alpha})} \tag{8.51}

= \sum_{k=1}^{\infty} \frac{t^k}{k!} \left( 2^{k-1}\, r^{k(d-1)}\, (k-1)! \sum_{n=0}^{\infty} N_{d,n}\, \frac{J_{d,n}^{2k}(\sqrt{\alpha}\, r)}{J_{d,n}^{2k}(\sqrt{\alpha})} \right). \tag{8.52}

This finishes the proof.

Again, the connection between the cumulant-generating function and the moments is that the first two cumulants give the mean and the variance, respectively. This is summarized in the following corollary.

Corollary 8.8 For the process \|H\|_r^2 defined above, the expected value and variance are given by

(i)
E[\|H\|_r^2] = r^{d-1} \sum_{n=0}^{\infty} N_{d,n}\, \frac{J_{d,n}^2(\sqrt{\alpha}\, r)}{J_{d,n}^2(\sqrt{\alpha})} \quad\text{if } \alpha > 0, \tag{8.53}

E[\|H\|_r^2] = r^{d-1} \sum_{n=0}^{\infty} N_{d,n}\, \frac{I_{d,n}^2(\sqrt{-\alpha}\, r)}{I_{d,n}^2(\sqrt{-\alpha})} \quad\text{if } \alpha < 0. \tag{8.54}

(ii)
Var[\|H\|_r^2] = 2 r^{2(d-1)} \sum_{n=0}^{\infty} N_{d,n}\, \frac{J_{d,n}^4(\sqrt{\alpha}\, r)}{J_{d,n}^4(\sqrt{\alpha})} \quad\text{if } \alpha > 0, \tag{8.55}

Var[\|H\|_r^2] = 2 r^{2(d-1)} \sum_{n=0}^{\infty} N_{d,n}\, \frac{I_{d,n}^4(\sqrt{-\alpha}\, r)}{I_{d,n}^4(\sqrt{-\alpha})} \quad\text{if } \alpha < 0. \tag{8.56}

We are now in a position to describe the almost-sure rate of growth of \|H\|_r^2.

Theorem 8.9 For d ≥ 2, there exists a non-random constant C > 0 such that

\|H\|_r^2\,(1-r)^{d-1} \longrightarrow C \quad\text{in probability as } r\uparrow 1. \tag{8.57}

Proof. First, consider the cumulant-generating function of the variable \varphi^H(r) := \|H\|_r^2 / E[\|H\|_r^2], given by

C_{\varphi^H(r)}(t) = \sum_{k=1}^{\infty} c_k(r)\, \frac{t^k}{k!}, \tag{8.58}

with cumulants c_k(r). The cumulants c_k(r) can be expressed in terms of the cumulants c_k^H(r) and c_1^H(r) described in (4.6): because scaling a random variable scales the k-th cumulant by the k-th power of the scaling factor, we have

c_k(r) = \frac{c_k^H(r)}{[c_1^H(r)]^k}. \tag{8.59}

Clearly, c_1(r) = 1 for 0 < r < 1. Together with the estimates (8.32) and the Poisson identity, Lemma 3.4, it is straightforward to show that for k ≥ 1 there are constants C_1, C_2 > 0 such that

C_1\, \frac{2^{k-1}\, r^{k(d-1)}\, (k-1)!\, (1+r^{2k})}{(1-r^{2k})^{d-1}} \le c_k^H(r) \le C_2\, \frac{2^{k-1}\, r^{k(d-1)}\, (k-1)!\, (1+r^{2k})}{(1-r^{2k})^{d-1}},

so that for k ≥ 2,

\lim_{r\uparrow 1} c_k(r) = 0. \tag{8.60}

Therefore, by the monotone convergence theorem, we have

\lim_{r\uparrow 1} C_{\varphi^H(r)}(t) = t, \tag{8.61}

which implies that the variable \varphi^H(r) converges in distribution, and hence in probability, to 1.

This result shows that the L^2(\partial B_r) norms of the solutions of Laplace's equation and of the Helmholtz equation go to infinity at the same rate. The next result is analogous to Theorem 7.16, the weak convergence of the dilation H_r; it demonstrates that the solution converges in distribution to white noise on the boundary.

Theorem 8.10 Suppose that f ∈ L^2(\partial B). Then the dilation H_r of H satisfies

\int_{\partial B} H_r(\theta)\, f(\theta)\, d\sigma(\theta) \longrightarrow S(f) \quad\text{in } L^2 \text{ as } r\to 1. \tag{8.62}

Proof. From the identities

E\!\left[\left( \int_{\partial B} H_r(\theta)\, f(\theta)\, d\sigma(\theta) \right)^2\right] = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} \frac{J_{d,n}^2(\sqrt{\alpha}\, r)}{J_{d,n}^2(\sqrt{\alpha})}\, |\hat f_j^{d,n}|^2, \tag{8.63}

E\!\left[\left( \int_{\partial B} H_r(\theta)\, f(\theta)\, d\sigma(\theta) \right) S(f)\right] = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} \frac{J_{d,n}(\sqrt{\alpha}\, r)}{J_{d,n}(\sqrt{\alpha})}\, |\hat f_j^{d,n}|^2, \tag{8.64}

and

E[S^2(f)] = \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} |\hat f_j^{d,n}|^2, \tag{8.65}

we can expand and take limits to show that

\lim_{r\to 1} E\!\left[\left( \int_{\partial B} H_r(\theta)\, f(\theta)\, d\sigma(\theta) - S(f) \right)^2\right] = 0. \tag{8.66}

This final result concerning boundary behavior gives an almost-sure upper bound on the growth rate of the solution H in terms of the distance from the boundary. It is precisely the same growth rate as in Theorem 7.17.

Theorem 8.11 For each ε > 0, there exists a random variable C_ε > 0 such that H satisfies

|H(x)| \le \frac{C_\varepsilon}{(1-|x|)^{d-1-\varepsilon}} \quad\text{a.s. for all } x\in B. \tag{8.67}

Proof. The key to this proof is (8.32). From the series representation for H, we have the upper bound

|H(x)| \le \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} \frac{J_{d,n}(\sqrt{\alpha}\, r)}{J_{d,n}(\sqrt{\alpha})}\, |Y_j^{d,n}(\theta)|\, |\hat S_j^{d,n}|, \tag{8.68}

which by (8.32) is bounded above by

C \sum_{n=0}^{\infty}\sum_{j=1}^{N_{d,n}} r^n\, |Y_j^{d,n}(\theta)|\, |\hat S_j^{d,n}| \tag{8.69}

for some constant C > 0. By the law of the iterated logarithm for the \hat S_j^{d,n}, together with the fact from Theorem 3.2 that |Y_j^{d,n}(\theta)| = O(n^{d-2}), this is almost surely bounded above by

C' \sum_{n=0}^{\infty} r^n\, n^{d-2}\, \ln n \le C' \sum_{n=0}^{\infty} r^n\, n^{d-2+\varepsilon} \tag{8.70}

for a random variable C' > 0 and any ε > 0. This, together with the estimate found in Lemma 3.14, completes the proof.

REFERENCES

[1] R. J. Adler, The Geometry of Random Fields, Wiley Series in Probability and Mathematical Statistics, Wiley, 1981.

[2] R. J. Adler and J. E. Taylor, Random Fields and Geometry, Springer Monographs in Mathematics, Springer, 2007.

[3] N. Aronszajn, Theory of reproducing kernels, Transactions of the American Mathe- matical Society, 68, 337-404, 1950.

[4] G. Auchmuty, Reproducing kernels for Hilbert spaces of real harmonic functions, Submitted June, 2009

[5] G. Auchmuty, Spectral characterization of the trace spaces Hs(∂Ω), SIAM Journal on Mathematical Analysis, 2006

[6] G. Auchmuty, Steklov eigenproblems and the representation of solutions of elliptic boundary value problems, Numerical Functional Analysis and Optimization, 2004

[7] S. Axler, P. Bourdon and W. Ramey, Harmonic Function Theory, Springer-Verlag New York, Inc., 2001.

[8] R. Bañuelos and C. N. Moore, Probabilistic Behavior of Harmonic Functions, Progress in Mathematics, v. 175, Birkhäuser, 1999.

[9] R. F. Bass, Probabilistic Techniques in Analysis, Probability and its Applications, Springer, 1995.

[10] J. von Below and G. Francois, Spectral asymptotics for the Laplacian under an eigenvalue dependent boundary condition, Bull. Belg. Math. Soc. v. 12, pp. 505-519, 2005.

[11] P. Billingsley, Convergence of Probability Measures, Wiley Series in Probability and Mathematical Statistics, Wiley, 1981.

[12] A. Buckdahn and E. Pardoux, Monotonicity methods for white noise driven spde's, in M. Pinsky (ed.), Diffusion Processes and Related Problems in Analysis, v. 1, Birkhäuser, pp. 219-233, 1990.

[13] P. Cabella and D. Marinucci, Statistical challenges in the analysis of cosmic mi- crowave background radiation, The Annals of Applied Statistics, v. 3, no. 1, pp. 61-95, 2009.

[14] A. P. Calderon and A. Zygmund, Singular integral operators and differential equations, American Journal of Mathematics, v. 79, no. 4, October 1957, pp. 901-921.

[15] K. L. Chung and R. J. Williams, Introduction to Stochastic Integration, Probability and its Applications, Birkhäuser, 1990.

[16] D. Conus and R. C. Dalang, The non-linear stochastic wave equation in high dimensions, Electronic Journal of Probability, v. 13, pp. 629-670, 2008.

[17] B. Dacorogna, Introduction to the Calculus of Variations, Imperial College Press, 2004.

[18] R. C. Dalang, Extending martingale measure stochastic integral with applications to spatially homogeneous spde's, Electronic Journal of Probability, v. 4, no. 6, pp. 1-29, 1999.

[19] R. C. Dalang and J. B. Walsh, Time-reversal in hyperbolic spde's, Annals Prob., v. 30, no. 1, pp. 213-252, 2002.

[20] R. C. Dalang and C. Mueller, Some non-linear spde's that are second order in time, Electronic Journal of Probability, v. 8, no. 1, pp. 1-21, 2003.

[21] R. C. Dalang and E. Nualart, Potential theory for hyperbolic spde's, Annals Prob., v. 32, pp. 2099-2148, 2004.

[22] R. C. Dalang and O. Lévêque, Second-order linear hyperbolic spde's driven by isotropic gaussian noise on a sphere, Annals Prob., v. 32, pp. 1068-1099, 2004.

[23] R. C. Dalang and O. Lévêque, Second-order linear hyperbolic spde's driven by boundary noises, Seminar on Stochastic Analysis, Random Fields and Applications IV, Ascona, Switzerland 2002, Progress in Probability 58, Birkhäuser, pp. 83-93, 2004.

[24] R. C. Dalang and M. Sanz-Solé, Regularity of the sample paths of a class of second order spde's, J. Functional Analysis, v. 227, pp. 304-337, 2005.

[25] R. C. Dalang and O. Lévêque, Second-order linear hyperbolic spde's driven by homogeneous gaussian noise on a hyperplane, Transactions of the American Mathematical Society, v. 368, pp. 2123-2159, 2006.

[26] R. C. Dalang, C. Mueller and L. Zambotti, Hitting properties of parabolic spde's with reflection, Annals Prob., v. 34, 2006.

[27] R. C. Dalang, D. Khoshnevisan and E. Nualart, Hitting probabilities for systems of non-linear stochastic heat equations with additive noise, ALEA, v. 3, pp. 231-271, 2007.

[28] R. C. Dalang, D. Khoshnevisan and E. Nualart, Hitting probabilities for systems of non-linear stochastic heat equations with multiplicative noise, Probab. Theory and Rel. Fields, v. 144, pp. 371-427, 2009.

[29] R. Dalang, D. Khoshnevisan, D. Nualart, C. Mueller and Y. Xiao, A Minicourse on Stochastic Partial Differential Equations, Lecture Notes in Math, vol. 1962, Springer, Berlin, 2008.

[30] A. Dembo and O. Zeitouni, Maximum a-posteriori estimation of elliptic gaussian fields observed via a nonlinear channel, Journal of Multivariate Analysis, v. 35, pp. 151-167, 1990.

[31] C. Donati-Martin and D. Nualart, Markov property for elliptic stochastic partial differential equations, Stochastics Stochastics Rep. v. 46, pp. 107-115, 1994.

[32] C. Donati-Martin, Quasi-linear elliptic stochastic partial differential equation: Markov property, Stochastics, v. 41, pp. 107-115, 1992.

[33] J. L. Doob, Classical Potential Theory and Its Probabilistic Counterpart, A Series of Comprehensive Studies in Mathematics, Springer, 1984.

[34] M. Engliš, D. Lukkassen, J. Peetre and L. Persson, On the formula of Jacques-Louis Lions for reproducing kernels of harmonic and other functions, Journal für die reine und angewandte Mathematik, 2004.

[35] L. Evans, Partial Differential Equations, Graduate Studies in Mathematics, v. 19, American Mathematical Society, 1998.

[36] C. R. Genovese, C. J. Miller, R. C. Nichol, M. Arjunwadkar and L. Wasserman, Nonparametric inference for the cosmic microwave background, Statistical Science, v. 19, no. 2, pp. 308-321, 2004.

[37] D. Gilbarg and N. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer, 1977.

[38] L. L. Helms, Potential Theory, Universitext, Springer, 2009.

[39] J. B. Hough, M. Krishnapur, Y. Peres and B. Virág, Determinantal processes and independence, Probability Surveys, v. 3, pp. 206-229, 2006.

[40] D. Khoshnevisan and E. Nualart, Level sets of the stochastic wave equation driven by a symmetric Lévy noise, Bernoulli, v. 14, pp. 899-925, 2008.

[41] D. Marinucci, Testing for non-gaussianity on cosmic microwave background radia- tion: A review, Statistical Science, v. 19, no. 2, pp. 294-307, 2004.

[42] D. Marinucci and M. Piccioni, The Empirical process on gaussian spherical harmon- ics, The Annals of Statistics, v. 32, no. 3, pp. 1261-1288, 2004.

[43] D. Marinucci, High-resolution asymptotics for the angular bispectrum of spherical random fields, The Annals of Statistics, v. 34, no. 1, pp. 1-41, 2006.

[44] D. Marinucci and G. Peccati, Ergodicity and gaussianity for spherical random fields, To appear in the Journal of Mathematical Physics, 2010.

[45] D. Mitrović and D. Žubrinić, Fundamentals of Applied Functional Analysis, Pitman Monographs and Surveys in Pure and Applied Mathematics, Addison Wesley Longman, 1998.

[46] C. Mueller, The critical parameter for the heat equation with a noise term to blow up in finite time, The Annals of Probability, v. 28, no. 4, pp. 1735-1746, 2000.

[47] C. Mueller, Long time existence for the wave equation with a noise term, The Annals of Probability, v. 25, no. 1, pp. 133-151, 1997.

[48] C. Mueller, Singular initial conditions for the heat equation with a noise term, The Annals of Probability, v. 24, no. 1, pp. 377-398, 1996.

[49] C. Müller, Analysis of Spherical Symmetries in Euclidean Spaces, Applied Mathematical Sciences, v. 129, Springer, 1998.

[50] D. Nualart and S. Tindel, Quasilinear stochastic elliptic equations with reflection, Stochastic Processes and their Applications, v. 57, pp. 73-82, 1995.

[51] D. Nualart, The Malliavin Calculus and Related Topics, Probability and its Appli- cations, Springer-Verlag, 1995.

[52] Y. Peres and B. Virág, Zeros of the i.i.d. gaussian power series: a conformally invariant determinantal process, Acta Math, to appear.

[53] J. P. Pinasco and J. D. Rossi, Asymptotics of the spectral function for the Steklov problem in a family of sets with fractal boundaries, Applied Mathematics E-notes, v. 5, pp. 138-146, 2005.

[54] M. Röckner and B. Zegarlinski, The Dirichlet problem for quasi-linear partial differential operators with boundary data given by a distribution, Stochastic Processes and their Applications, pp. 301-326, 1990.

[55] M. Röckner, A Dirichlet Problem for Distributions and Specifications for Random Fields, Memoirs of the American Mathematical Society, v. 54, no. 324, March 1985.

[56] Y. A. Rozanov, Markov Random Fields, Springer-Verlag New York Inc, 1982.

[57] R. T. Seeley, Spherical harmonics, The American Mathematical Monthly, v. 73, no. 4, part 2: Papers in Analysis, 1966, pp. 115-121.

[58] A. V. Skorokhod, Lectures on the Theory of Stochastic Processes, VSP, 1996.

[59] S. Tindel, Quasilinear stochastic elliptic equations with reflection: the existence of a density, Bernoulli, v. 4, pp. 445-459, 1998.

[60] J. B. Walsh, An Introduction to Stochastic Partial Differential Equations, in: École d'été de probabilités de Saint-Flour XIV - 1984, Lecture Notes in Math, vol. 1180, Springer, Berlin, pp. 265-439, 1986.