Diffusion Processes with Reflection

submitted by Dipl. math. oec. Sebastian Andres from Berlin

Dissertation approved by the Faculty II – Mathematics and Natural Sciences of the Technische Universität Berlin in fulfilment of the requirements for the academic degree

Doktor der Naturwissenschaften – Dr. rer. nat. –

Doctoral committee. Chair: Prof. Dr. Reinhold Schneider. Examiners: Prof. Dr. Jean-Dominique Deuschel and Prof. Dr. Lorenzo Zambotti

Date of the scientific defence: 13 May 2009

Berlin 2009, D 83

Preface

There is nothing more practical than a good theory.
Immanuel Kant

In modern probability theory diffusion processes with reflection arise in various ways. As an example let us consider the reflected Brownian motion, that is, a Brownian motion on the positive real line with reflection at zero. There are several possibilities to construct this process; the simplest and most intuitive one is certainly the following: given a standard Brownian motion B, starting in some x ≥ 0, a reflected Brownian motion is obtained by taking the absolute value |B|. By the Itô-Tanaka formula we have

    |B_t| = x + ∫_0^t sign(B_s) dB_s + L_t,

where (L_t)_{t≥0} is the local time of B in zero, i.e. it is continuous, nondecreasing and supp(dL) ⊆ {t ≥ 0 : B_t = 0}. Note that β_t := ∫_0^t sign(B_s) dB_s is again a Brownian motion by Lévy's characterization theorem.

Another possibility to construct a reflected Brownian motion is to apply the Skorohod reflection principle. Given a standard Brownian motion W there exists a unique pair (X, L) such that

    X_t = x + W_t + L_t,   x ≥ 0, t ≥ 0,   (0.1)

where X_t ≥ 0 for all t, L starts in zero and is continuous, monotone nondecreasing and supp(dL) ⊆ {t ≥ 0 : X_t = 0}, i.e. it increases only at those times when X is zero. Moreover, the local time L is given by L_t = [−x − inf_{s≤t} W_s]^+. Here the reflected Brownian motion (X_t) is constructed by adding the local time L. Equation (0.1) is a simple example of a so-called Skorohod SDE. Since β defined above is a Brownian motion, one can easily check that

    X =^d |B|.

A much more abstract and analytic method to introduce the reflected Brownian motion is to define it via its Dirichlet form: the reflected Brownian motion is the diffusion associated with the Dirichlet form

    E(f,g) = (1/2) ∫_0^∞ f′(x) g′(x) dx,   f, g ∈ W^{1,2}([0,∞)).
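Equation (0.1) is easy to experiment with numerically. The sketch below (step size, horizon and seed are our own illustrative choices, not from the thesis) discretizes a Brownian path, applies the explicit formula L_t = [−x − inf_{s≤t} W_s]^+, and checks the defining properties of the pair (X, L):

```python
import numpy as np

rng = np.random.default_rng(0)
x, n, dt = 0.5, 10_000, 1e-4

# Discretized Brownian path W with W_0 = 0.
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# Skorohod formula: L_t = max(0, -x - inf_{s<=t} W_s).
L = np.maximum(0.0, -x - np.minimum.accumulate(W))
X = x + W + L

assert (X >= -1e-12).all()        # X stays nonnegative
assert (np.diff(L) >= 0).all()    # L starts in zero and is nondecreasing
# L increases only at those (discrete) times when X is zero.
incr = np.diff(L) > 0
assert np.allclose(X[1:][incr], 0.0, atol=1e-8)
```

The same three assertions are exactly the conditions imposed on (X, L) in (0.1).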


Another meaningful class of stochastic processes involving some kind of reflection are Bessel processes. The Bessel process ρ^δ with Bessel dimension δ > 0 is again a diffusion process taking nonnegative values, which is associated with the Dirichlet form

    E^δ(f,g) = C_δ ∫_0^∞ f′(x) g′(x) x^{δ−1} dx,   f, g ∈ W^{1,2}([0,∞), x^{δ−1} dx),

C_δ denoting a positive constant. Usually Bessel processes are introduced by defining the square of a Bessel process (see e.g. Chapter XI in [56]). For integer dimensions, i.e. for δ ∈ [1,∞) ∩ ℕ, ρ^δ can be obtained by taking the Euclidean norm of a δ-dimensional Brownian motion. In particular, the one-dimensional Bessel process ρ^1 is just the reflected Brownian motion discussed above. For δ > 1, the Bessel process ρ^δ starting in some x ≥ 0 can be described as the unique solution of the SDE

    ρ^δ_t = x + ((δ−1)/2) ∫_0^t (1/ρ^δ_s) ds + B_t.   (0.2)

The drift term appearing in this equation produces a repulsion which is strong enough to force the process to stay nonnegative, so that no further reflection term is needed. Finally, for 0 < δ < 1 the situation is much more complicated, because the drift term is now attracting and not repulsive. As a consequence, a representation of ρ^δ in terms of an SDE like (0.1) or (0.2) is not possible in this case, because ρ^δ is known not to be a semimartingale if δ < 1 (see e.g. Section 6 in [57]). Nevertheless, ρ^δ admits a Fukushima decomposition

    ρ^δ_t = B_t + (δ−1) H_t,   t ≥ 0,

as the sum of a Brownian motion and a zero-energy process (δ−1)H, which also produces the reflection in this case; see [13] for details.

In this thesis, diffusion processes with reflection will appear in nearly every fashion mentioned so far. The thesis consists of two parts. The first part contains some pathwise differentiability results for Skorohod SDEs. In the second part a particle approximation of the Wasserstein diffusion is established, where the approximating process can be interpreted as a system of interacting Bessel processes with small Bessel dimension. The second part is joint work with Max von Renesse, TU Berlin.
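For integer dimensions the identification of ρ^δ with the Euclidean norm of a δ-dimensional Brownian motion can be illustrated by simulation. The following sketch (sample size and seed are arbitrary choices of ours) uses that ‖B_1‖² is chi-squared distributed with δ degrees of freedom, so E[(ρ_1^δ)²] = δ when ρ_0^δ = 0:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 200_000

for delta in (1, 2, 3):
    # B_1 is a standard normal vector in R^delta; rho_1 = ||B_1||.
    B1 = rng.normal(size=(n_samples, delta))
    rho1 = np.linalg.norm(B1, axis=1)
    # E[rho_1^2] = delta (chi-squared with delta degrees of freedom).
    assert abs(np.mean(rho1**2) - delta) < 0.05

# delta = 1: rho^1 = |B| is exactly the reflected Brownian motion above.
```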

Pathwise Differentiability for SDEs with Reflection

Consider some closed bounded connected domain G in ℝ^d, d ≥ 2, and for any starting point x ∈ G the following SDE of Skorohod type:

    X_t(x) = x + ∫_0^t b(X_r(x)) dr + w_t + ∫_0^t γ(X_r(x)) dl_r(x),   t ≥ 0,

    X_t(x) ∈ G,   dl_t(x) ≥ 0,   ∫_0^∞ 1_{G\∂G}(X_t(x)) dl_t(x) = 0,   t ≥ 0,

where w is a d-dimensional Brownian motion and b is a continuously differentiable drift. Furthermore, for every x ∈ ∂G, γ(x) denotes the direction of reflection at x; for instance, in the case of normal reflection γ(x) is the inner normal field on ∂G. Finally, l(x) denotes the local time of X(x) in ∂G, i.e. it increases only at those times when X(x) is at ∂G.

In the first part of this thesis we investigate the question whether the mapping x ↦ X_t(x), t > 0, is differentiable almost surely and whether one can characterize the derivatives in order to derive a Bismut formula. Deuschel and Zambotti have solved this problem in [26] for the domain G = [0,∞)^d, and several related questions have already been considered in [6].

In Chapter 1 (cf. [8]), we consider the case where G is a convex polyhedron and where the directions of reflection along each face are constant but possibly oblique. The differentiability is obtained for every time t up to the first time when the process X hits two of the faces simultaneously. Similarly to the results in [26], the derivatives evolve according to a linear ordinary differential equation when the process X is in the interior of the domain, and they are projected to the tangent space when it hits the boundary. In particular, when X reaches the end of an excursion interval, the derivative process jumps in the direction of the corresponding direction of reflection. This evolution becomes rather non-trivial due to the complicated structure of the set of times when the process is at the boundary, which is known to be a.s. a closed set of zero Lebesgue measure without isolated points. Chapter 2 (cf.
[7]) deals with the case where G is a bounded smooth domain with normal reflection at the boundary. The additional difficulties appearing here are caused on the one hand by the presence of curvature and on the other hand by the fact that one can no longer apply the technical lemma on which the proofs in [26] and [8] are based, dealing with the time at which a Brownian path attains its minimum. As a result we obtain a time evolution for the derivatives analogous to the one described above. In contrast to the results in [26], this evolution does not give a complete characterization of the derivatives in this case. Therefore, a further SDE-like equation is established, characterizing the derivatives in coordinates w.r.t. a moving frame.
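For intuition about reflected SDEs in a domain, a normally reflected Brownian motion can be approximated by a simple Euler projection scheme: after each Gaussian increment, project back onto the domain along the inner normal. The sketch below uses the closed unit disk in ℝ² with zero drift; the scheme and all parameters are illustrative assumptions of ours, not the constructions analysed in this thesis.

```python
import numpy as np

def reflected_euler_disk(x0, n_steps=5_000, dt=1e-4, seed=1):
    """Euler projection scheme for Brownian motion normally reflected
    at the boundary of the closed unit disk in R^2."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        x = x + rng.normal(0.0, np.sqrt(dt), size=2)
        r = np.linalg.norm(x)
        if r > 1.0:        # left the domain: project back along the inner normal
            x = x / r
        path.append(x.copy())
    return np.array(path)

path = reflected_euler_disk([0.9, 0.0])
# Every iterate stays in the closed unit disk (up to rounding).
assert (np.linalg.norm(path, axis=1) <= 1.0 + 1e-12).all()
```

The accumulated projection displacements play the role of the local time term γ(X) dl in the Skorohod SDE.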

Particle Approximation of the Wasserstein Diffusion

The Wasserstein diffusion, recently constructed in [66] by abstract Dirichlet form methods, is roughly speaking a reversible Markov process, taking values in the set P of probability measures on the unit interval, whose intrinsic metric is given by the L²-Wasserstein distance. In order to improve the intuitive and mathematical understanding of that process, we establish a reversible particle system consisting of N diffusing particles on the unit interval, whose associated empirical measure process converges weakly to the Wasserstein diffusion in the high-density limit; here we have to assume Markov uniqueness for the Dirichlet form which induces the Wasserstein diffusion (cf. [9]). This approximating particle system can be interpreted as a system of coupled, two-sided Bessel processes with small Bessel dimension. In particular, for large N the drift term, which appears in the SDE formally describing the particle system, is attracting and not repulsive. In analogy to the Bessel process, the system is not contained in the class of Euclidean semimartingales. A detailed analysis of the approximating system, in particular its Feller properties, is the subject of Chapter 3 (cf. [10]), based on harmonic analysis on weighted Sobolev spaces. A crucial step here is to verify the Muckenhoupt condition for the reversible measure, which implies the doubling property and a uniform local Poincaré inequality. From this one can derive the Feller property using an abstract result by Sturm [59].

To obtain the convergence result one has to prove tightness and, in a second step, to identify the limit. While tightness easily follows from well-established tightness criteria, the identification of the limit is more difficult. The proof uses the parametrization of P in terms of right-continuous quantile functions. Then the convergence of the empirical measure process is equivalent to the convergence of the process of the corresponding quantile functions. This convergence is obtained by proving that the associated Dirichlet forms converge in the Mosco sense to the limiting Dirichlet form, where we use the generalized version of Mosco convergence with varying base spaces established by Kuwae and Shioya (cf. [50]). The verification of the Mosco conditions is based on an integration by parts formula for the invariant measure of the limiting process (Prop. 7.3 in [66]) and on the fact that the logarithmic derivatives of the equilibrium distributions converge in an appropriate L²-sense. Moreover, the assumption of Markov uniqueness is a crucial ingredient in the proof of the Mosco convergence.
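The quantile-function parametrization can be made concrete for empirical measures: on the real line the L²-distance of quantile functions equals the L²-Wasserstein distance, and for two empirical measures with the same number of atoms it reduces to matching sorted samples. A minimal sketch (the helper name and the data are our own illustrative choices):

```python
import numpy as np

def w2_empirical(xs, ys):
    """L2-Wasserstein distance between two empirical measures with the same
    number of atoms, via their (right-continuous) quantile functions:
    W2(mu, nu)^2 = integral_0^1 |F_mu^{-1}(u) - F_nu^{-1}(u)|^2 du."""
    xs, ys = np.sort(xs), np.sort(ys)
    return np.sqrt(np.mean((xs - ys) ** 2))

rng = np.random.default_rng(7)
xs = rng.uniform(0.0, 0.8, size=500)   # atoms inside the unit interval

assert w2_empirical(xs, xs) == 0.0
# Shifting every atom by c moves the measure by exactly c in W2.
assert np.isclose(w2_empirical(xs, xs + 0.1), 0.1)
```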

Zusammenfassung (Summary)

In the first part of this dissertation, solutions of stochastic differential equations with reflection are studied with respect to pathwise differentiability in the deterministic initial value. For a closed, bounded, connected domain G in ℝ^d, d ≥ 2, and a deterministic starting point x ∈ G, one considers the following stochastic differential equation of Skorohod type:

    X_t(x) = x + ∫_0^t b(X_r(x)) dr + w_t + ∫_0^t γ(X_r(x)) dl_r(x),   t ≥ 0,

where w is a d-dimensional Brownian motion and b a continuously differentiable drift function. Furthermore, γ(x), x ∈ ∂G, denotes the direction of reflection at the point x, and l(x) the local time of X(x) in ∂G, i.e. l(x) increases only at those times when X(x) is at the boundary. We then investigate whether the mapping x ↦ X_t(x), t > 0, is almost surely differentiable and how the derivatives can be characterized in order to derive a Bismut formula. Chapter 1 treats the case where G is a convex polyhedron, the directions of reflection along the faces being constant but possibly oblique. Chapter 2 then deals with the case where G is a smoothly bounded domain with reflection at the boundary in the normal direction.

In the second part of the thesis a particle approximation of the Wasserstein diffusion is proved. The Wasserstein diffusion is a reversible Markov process taking values in the set of probability measures on the unit interval. The aim is to define a particle system, consisting of N particles diffusing on the unit interval, such that the associated process of empirical measures converges weakly to the Wasserstein diffusion as N → ∞.

The approximating particle system can be interpreted as a system of interacting Bessel processes with small Bessel dimension. In particular, for large N the drift term, which appears in a formal SDE description of the system, has the effect that the particles attract each other. One consequence is that, in analogy to Bessel processes with dimension smaller than one, the system is not a semimartingale. A detailed analysis of the particle system, in particular its Feller properties, is contained in Chapter 3.

The weak convergence of the particle system to the Wasserstein diffusion is proved in Chapter 4. First tightness is shown, and in a second step the limit is identified. While tightness follows easily from established criteria, the identification of the limit is more difficult. Here the essential step of the proof is to establish Mosco convergence of the associated Dirichlet forms, using the generalized form of Mosco convergence due to Kuwae and Shioya. In this step one has to assume that the Dirichlet form generating the Wasserstein diffusion is Markov unique. This second part is a joint project with Max von Renesse, TU Berlin.

Acknowledgement

Foremost, I sincerely thank my advisor Prof. Jean-Dominique Deuschel for giving me the opportunity to write a PhD thesis in an exciting area and for his continual support during the last years. I am also very indebted to Max von Renesse, my co-author of the second part, for his patience and for innumerable instructive discussions. Moreover, I would like to thank Lorenzo Zambotti for accepting the task of being the co-examiner of this thesis, for his hospitality during my stay in Paris, as well as for many fruitful discussions and helpful hints and comments.

Many thanks to Erwin Bolthausen and the research groups at the University of Zurich and ETH Zurich for a very pleasant stay in spring 2008. I am grateful to many further mathematicians for useful comments, in particular Krzysztof Burdzy, Michael Röckner, Alain-Sol Sznitman and many others.

During my time as a PhD student, I very much appreciated the inspiring atmosphere in the stochastics community at the Technical University, Humboldt University and the Weierstrass Institute. I thank my friends and colleagues and all members of the group for the great time I spent here. Finally, I am very grateful to my family and friends for their steady support throughout the years.

Financial support by the Deutsche Forschungsgemeinschaft through the International Research Training Group "Stochastic Models of Complex Processes" is gratefully acknowledged.

Contents

I Pathwise Differentiability for SDEs with Reflection 1

1 SDEs in a convex Polyhedron with oblique Reflection  3
  1.1 Introduction  3
  1.2 Model and Main Result  4
  1.3 Martingale Problem and Neumann Condition  8
  1.4 Proof of the Main Result  12
    1.4.1 Continuity w.r.t. the Initial Condition  12
    1.4.2 Computation of the Local Times  13
    1.4.3 Computation of the Difference Quotients  15
    1.4.4 Proof of the Differentiability  18

2 SDEs in a Smooth Domain with normal Reflection  21
  2.1 Introduction  21
  2.2 Main Results and Preliminaries  22
    2.2.1 General Notation  22
    2.2.2 Skorohod SDE  23
    2.2.3 Main Results  24
    2.2.4 Localization  26
    2.2.5 Example: Processes in the Unit Ball  28
  2.3 Proof of the Main Result  28
    2.3.1 Lipschitz Continuity w.r.t. the Initial Datum  28
    2.3.2 Convergence of the Local Time Processes  32
    2.3.3 Proof of the Differentiability  36
    2.3.4 The Neumann Condition  46

II Particle Approximation of the Wasserstein Diffusion 49

3 The Approximating Particle System  51
  3.1 Introduction and Main Results  51
  3.2 Dirichlet Form and Integration by Parts Formula  54
  3.3 Feller Property  56
    3.3.1 Feller Properties of Y  57


    3.3.2 Feller Properties of Y inside a Box  60
    3.3.3 Feller Properties of X^N  65
  3.4 Semi-Martingale Properties  69

4 Weak Convergence to the Wasserstein Diffusion  73
  4.1 Introduction  73
  4.2 Main Result  76
  4.3 Tightness  76
  4.4 Identification of the Limit  78
    4.4.1 The G-Parameterization  78
    4.4.2 Finite Dimensional Approximation of Dirichlet Forms in Mosco Sense  80
    4.4.3 Condition Mosco II'  83
    4.4.4 Condition Mosco I  87
    4.4.5 Proof of Proposition 4.4  94
  4.5 A non-convex (1+1)-dimensional ∇φ-interface model  95

A Conditional Expectation of the  97

Bibliography  101

Part I

Pathwise Differentiability for SDEs with Reflection


Chapter 1

SDEs in a convex Polyhedron with oblique Reflection

1.1 Introduction

We consider a Markov process with continuous sample paths, characterized as the strong solution of a stochastic differential equation (SDE) of Skorohod type, where the domain G is a convex polyhedron in ℝ^d, i.e. G is the intersection of a finite number of half spaces. The process is driven by a d-dimensional standard Brownian motion and a drift term, whose coefficient function is supposed to be continuously differentiable and Lipschitz continuous. At the boundary of the polyhedron it reflects instantaneously, the possibly oblique direction of reflection being constant along each face.

Let G = ∩_{i=1}^N G_i, where each G_i is a closed half space with inward normal n_i. The direction of reflection on the face ∂G_i will be denoted by a constant vector v_i. As an example one might think of a Brownian motion in an infinite two-dimensional wedge, established by Varadhan and Williams in [64] (see Figure 1.1). Existence and uniqueness for solutions of SDEs with oblique reflecting boundary conditions on polyhedral domains are ensured by a result of Dupuis and Ishii in [29]. The study of such SDEs is motivated by several applications: for instance, these processes arise as diffusion approximations of storage systems or of single-server queues in heavy traffic (see e.g. Section 8.4 in [20] for details).

In this chapter we show that the solution of the Skorohod SDE is pathwise differentiable with respect to the deterministic initial value, and we characterize the pathwise derivatives up to the time τ at which at least two faces of G are hit simultaneously for the first time. This is an addition to the results of [26], where Deuschel and Zambotti considered such a differentiability problem for SDEs on the domain G = [0,∞)^d with normal reflection at the boundary. Our approach will be quite similar to that in [26]; in particular we shall use the same technical lemma dealing with the minimum of a Brownian path (see Lemma 1 in [26]).
The resulting derivatives are described in terms of an ODE-like equation. When the process is away from the boundary, they evolve according to a simple linear ordinary differential equation, and when it hits the boundary, they have a discontinuity; more precisely, they are projected to the tangent space and jump in the direction of the corresponding reflection vector (cf. Section 1.3 below). In addition, we provide



Figure 1.1: Two-dimensional Wedge with oblique Reflection

a Bismut-Elworthy formula for the gradient of the transition semigroup of the process stopped at τ (see Corollary 1.5 below).

A crucial step in the proof of the differentiability result is to show that the solution of the Skorohod SDE depends Lipschitz continuously on the initial value. To do this we shall apply a criterion given in [28]. In particular, we have to ensure that a certain static geometric property holds (cf. Assumption 2.1 in [28]), so that an additional restriction on the directions of reflection is needed.

Our result is similar to a system introduced by Airault in [1] in order to develop probabilistic representations for the solutions of linear PDE systems with mixed Dirichlet-Neumann conditions on a regular domain in ℝ^n. However, in contrast to [1], we study pathwise differentiability properties of a process with reflection following [26], but with possibly oblique reflection. The material presented in this chapter is contained in [8].

The chapter is organized as follows: in Section 1.2 we state the main result, and in Section 1.4 we prove it. In Section 1.3 we investigate the results in detail: we establish a martingale problem connected with the derivatives and we check the Neumann condition.

1.2 Model and Main Result

In this chapter we denote by ‖·‖ the Euclidean norm, by ⟨·,·⟩ the canonical scalar product and by e = (e^1,...,e^d) the standard basis of ℝ^d, d ≥ 2. We consider processes on the domain G, which is a convex polyhedron, i.e. G ⊆ ℝ^d takes the form G = ∩_{i=1}^N G_i, where each G_i := {x : ⟨x, n_i⟩ ≥ c_i} is a closed half space with inward normal n_i and intercept c_i. The boundary of the polyhedron consists of the sides ∂G_i = {x : ⟨x, n_i⟩ = c_i}, and with each side ∂G_i we associate a constant, possibly oblique direction of reflection v_i, pointing into the interior of the polyhedron. We always adopt the convention that the directions v_i are normalized such that ⟨v_i, n_i⟩ = 1. For every


Figure 1.2: Choice of n_i^⊥ and v_i^⊥

i ∈ {1,...,N}, let v_i^⊥, n_i^⊥ ∈ span{n_i, v_i} be such that

    ⟨v_i, v_i^⊥⟩ = ⟨n_i, n_i^⊥⟩ = 0,   ⟨n_i^⊥, v_i^⊥⟩ = ⟨n_i, v_i⟩ = 1,   ⟨n_i, v_i^⊥⟩ > 0,   (1.1)

which implies ⟨v_i, n_i^⊥⟩ = −⟨v_i^⊥, n_i⟩ (cf. Figure 1.2). Furthermore, for every i let n_i^1 := n_i, n_i^2 := n_i^⊥ and (n_i^k)_{k=3,...,d} be such that {n_i^1, ..., n_i^d} is an orthonormal basis of ℝ^d.

To ensure Lipschitz continuity (see Lemma 1.11 below) and pathwise existence and uniqueness, a further assumption on the directions of reflection is needed, namely that either

    n_i = v_i   or   a_i ⟨n_i, v_i⟩ > Σ_{j≠i} a_j |⟨n_i, v_j⟩|   (1.2)

for some positive constants a_i and for all i (cf. Theorem 2.1 in [28]).

The set of continuous real-valued functions on G is denoted by C(G), and C_b(G) denotes the set of those functions in C(G) that are bounded on G. For each k ∈ ℕ, C^k(G) denotes the set of real-valued functions that are k times continuously differentiable in some domain containing G, and C_b^k(G) denotes the set of those functions in C^k(G) that are bounded and have bounded partial derivatives up to order k. Furthermore, for f ∈ C^1(G) we denote by ∇f the gradient of f and, in the case where f is ℝ^d-valued, by Df the Jacobian matrix. Finally, Δ denotes the Laplace operator on C^2(G) and D_v := ⟨v, ∇⟩ the directional derivative operator associated with the direction v ∈ G.

For any starting point x ∈ G, we consider the following stochastic differential equation of Skorohod type:

    X_t(x) = x + ∫_0^t b(X_r(x)) dr + w_t + Σ_i v_i l_t^i(x),   t ≥ 0,
                                                                           (1.3)
    X_t(x) ∈ G,   dl_t^i(x) ≥ 0,   ∫_0^∞ 1_{G\∂G_i}(X_t(x)) dl_t^i(x) = 0,   t ≥ 0, i ∈ {1,...,N},

where w is a d-dimensional Brownian motion on a complete probability space (Ω, F, P). For every i, l^i(x) denotes the local time of X(x) in ∂G_i, i.e. it increases only at those times when X(x) is at the boundary ∂G_i. The components b^i : G → ℝ of b are supposed to be in C^1(G) and Lipschitz continuous. Then existence and uniqueness of strong solutions of (1.3) are guaranteed in the case of normal reflection by the results of [60], since G is convex, and in the case of oblique reflection by [29], since by condition (1.2) the assumptions of Case 2 in [29] are fulfilled (cf. Remark 3.1 in [29]).

Notice that there is one degree of freedom in defining the local times in the Skorohod SDE (1.3): setting l̃^i(x) = h_i l^i(x) for any real constants h_i > 0, l̃^i(x) satisfies the conditions in (1.3) as well. Thus it is possible to replace v_i l^i(x) by h_i^{-1} v_i l̃^i(x) in the Skorohod SDE. Consequently, the norm of the reflection vectors v_i does not affect the Skorohod equation, so that the vectors can be thought of as normalized. However, we shall use the normalization ⟨v_i, n_i⟩ = 1 chosen above to simplify the computations in the sequel. Furthermore, by the Girsanov theorem there exists a probability measure P̃(x), which is equivalent to P, such that the process

    W_t^i(x) := ∫_0^t b^i(X_r(x)) dr + w_t^i,   t ≥ 0, i ∈ {1,...,d},   (1.4)

is a d-dimensional Brownian motion under P̃(x). Next we define the stopping time τ by

    τ := inf{ t ≥ 0 : X_t(x) ∈ ∂G_i ∩ ∂G_j, i ≠ j },   x ∈ G,   (1.5)

the first time when the process hits at least two of the faces simultaneously. The following simple example shows that, even under the assumption in (1.2), τ can a.s. be infinite as well as a.s. finite.

Example 1.1. Let G = ℝ_+^2, i.e. G is a two-dimensional wedge with angle π/2 and inward normals n_i = e_i, i ∈ {1,2} (cf. Figure 1.1). We choose v_1 = n_1 and v_2 = (−tan θ, 1), where θ ∈ (−π/4, π/4) denotes the angle between n_2 and v_2, such that the vector v_2 points towards the corner if θ is positive. Then the assumption in (1.2) holds with a_1 = a_2 = 1. From Theorem 2.2 in [64] we know that

    P[τ < ∞] = 0 if θ ≤ 0,   and   P[τ < ∞] = 1 if θ > 0,

for any starting point x ∈ G \ {0}. Nevertheless, τ has infinite expectation for every θ ∈ (−π/2, π/2) (see Corollary 2.3 in [64]).
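The summation alternative in assumption (1.2) can be checked mechanically for this wedge. In the sketch below (function names are ours), with a_1 = a_2 = 1 the condition holds exactly for |θ| < π/4, consistent with the range of θ chosen in Example 1.1:

```python
import numpy as np

def condition_1_2(normals, reflections, a):
    """Check a_i <n_i, v_i> > sum_{j != i} a_j |<n_i, v_j>| for all i."""
    n, v, a = map(np.asarray, (normals, reflections, a))
    ok = True
    for i in range(len(n)):
        lhs = a[i] * (n[i] @ v[i])
        rhs = sum(a[j] * abs(n[i] @ v[j]) for j in range(len(n)) if j != i)
        ok &= lhs > rhs
    return bool(ok)

def wedge(theta):
    n1, n2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    v1, v2 = n1, np.array([-np.tan(theta), 1.0])
    return condition_1_2([n1, n2], [v1, v2], [1.0, 1.0])

assert wedge(np.pi / 6)       # |theta| < pi/4: condition holds
assert not wedge(np.pi / 3)   # |theta| > pi/4: condition fails
```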

We set

    C^i := {s ≥ 0 : X_s(x) ∈ ∂G_i},   r_i(t) := sup(C^i ∩ [0,t]),   i ∈ {1,...,N},

with the convention sup ∅ := 0, and furthermore C := ∪_{i=1}^N C^i and r(t) := max_{i=1,...,N} r_i(t). Then, for every i, C^i ∩ [0,τ) is known to be a.s. a closed set of zero Lebesgue measure without isolated points (closed relative to [0,τ)), and t ↦ r_i(t) is locally constant and right-continuous. For t ∈ [0,τ) we define

    s(t) := 0 if t < inf C,   and   s(t) := i if r(t) = r_i(t),

i.e. s(t) = i if the last hit of the boundary before time t was in ∂G_i, and s(t) = 0 if up to time t the process has not hit the boundary yet. Let (A_n)_n be the family of connected components of [0,τ) \ C. Each A_n is open, so that there exists q_n ∈ A_n ∩ ℚ, n ∈ ℕ. Let a_n := inf A_n be the starting points and b_n := sup A_n the endpoints of the excursion intervals. Finally, let τ_0 := 0 and τ_0 < τ_1 < τ_2 < ... be the jump times of t ↦ s(t) on [0,τ), i.e. at every time τ_ℓ, ℓ > 0, the process X has crossed the polyhedron from one face to another. The following theorem gives a representation of the derivatives of X in terms of an ODE-like equation:

Theorem 1.2. For all t ∈ [0,τ) and x ∈ G, a.s. the mapping y ↦ X_t(y) is differentiable at x and, setting η_t := D_v X_t(x) = lim_{ε→0} (X_t(x+εv) − X_t(x))/ε, v ∈ ℝ^d, there exists a right-continuous modification of η such that a.s.

    η_t = v + ∫_0^t Db(X_r(x)) · η_r dr,   if s(t) = 0,
                                                                                                  (1.6)
    η_t = ⟨η_{r(t)−}, v_i^⊥⟩ n_i^⊥ + Σ_{k=3}^d ⟨η_{r(t)−}, n_i^k⟩ n_i^k + ∫_{r(t)}^t Db(X_r(x)) · η_r dr,   if s(t) = i.

The proof of Theorem 1.2 is postponed to Section 1.4. If we consider the case G = ℝ_+^d and normal reflection at the boundary, i.e. v_i = n_i = e_i, the result corresponds to that of Theorem 1 in [26]. In the special case where N = d and the normals n_i form an orthonormal basis of ℝ^d, it is also possible to provide a representation for the derivatives which is very similar to that in [26], by using essentially the same arguments as in the proof of Theorem 1 and Proposition 1 in [26].

Remark 1.3. If x ∈ ∂G_i for some i, then t = 0 is a.s. an accumulation point of C and we have r(t) > 0 a.s. for every t > 0. Therefore, in that case η_0 = v and η_{0+} = ⟨v, v_i^⊥⟩ n_i^⊥ + Σ_{k=3}^d ⟨v, n_i^k⟩ n_i^k, i.e. there is a discontinuity at t = 0.

Remark 1.4. Equation (1.6) does not characterize the derivatives, since it does not admit a unique solution. Indeed, if the process (η_t) solves (1.6), then so does the process (1 + l_t(x)) η_t, t ≥ 0. A characterizing equation for the derivatives is given in Theorem 1.6 below.

As soon as pathwise differentiability is established, we can immediately provide a Bismut-Elworthy formula: define X_t^τ(x) := X_t(x) 1l_{t<τ} and for all f ∈ C_b(G) the associated transition semigroup P_t f(x) := E[f(X_t^τ(x))], x ∈ G, t > 0. Setting η_t^{ij} := ∂X_t^i(x)/∂x^j for t ∈ [0,τ) and η_t^{ij} := 0 on [τ,∞), i, j ∈ {1,...,d}, we get

Corollary 1.5. For all f ∈ C_b(G), t > 0 and x ∈ G:

    (∂/∂x^i) P_t f(x) = (1/t) E[ f(X_t^τ(x)) Σ_{k=1}^d ∫_0^t η_r^{ki} dw_r^k ],   i ∈ {1,...,d},   (1.7)

and if f ∈ C_b^1(G):

    (∂/∂x^i) P_t f(x) = E[ Σ_{k=1}^d (∂f/∂x^k)(X_t^τ(x)) η_t^{ki} ],   i ∈ {1,...,d}.   (1.8)

Proof. Formula (1.8) is straightforward from the differentiability statement in Theorem 1.2 and the chain rule. For formula (1.7) see the proof of Theorem 2 in [26].

Finally, we give another equation, which characterizes η locally. For that purpose, we set

    y_t^k := ⟨η_t, e^k⟩ if s(t) = 0,   and   y_t^k := ⟨η_t, n_i^k⟩ if s(t) = i,

    c_t(k,l) := ⟨e^k, Db(X_t(x)) · e^l⟩ if s(t) = 0,   and   c_t(k,l) := ⟨n_i^k, Db(X_t(x)) · n_i^l⟩ if s(t) = i.

Theorem 1.6. There exists a right-continuous modification of η and y, respectively, such that y is characterized as the unique solution of

    y_t^k = ⟨v, e^k⟩ + ∫_0^t Σ_{l=1}^d c_r(k,l) y_r^l dr,   k ∈ {1,...,d},  if t < inf C,   (1.9a)

and

    y_t^1 = ∫_{r_i(t)}^t Σ_{l=1}^d c_r(1,l) y_r^l dr,

    y_t^2 = ⟨v_i^⊥, n_i⟩ y_{τ_ℓ−}^1 + y_{τ_ℓ−}^2 + ∫_{τ_ℓ}^t Σ_{l=1}^d c_r(2,l) y_r^l dr + ⟨v_i^⊥, n_i⟩ ∫_{τ_ℓ}^{r_i(t)} Σ_{l=1}^d c_r(1,l) y_r^l dr,   (1.9b)

    y_t^k = y_{τ_ℓ−}^k + ∫_{τ_ℓ}^t Σ_{l=1}^d c_r(k,l) y_r^l dr,   k ∈ {3,...,d},

if t ≥ inf C, with ℓ such that t ∈ [τ_ℓ, τ_{ℓ+1}) and with s(t) = i. If x ∈ ∂G_i for some i, i.e. inf C = 0, we also need to specify y_0^k = y_{τ_0−}^k := ⟨v, n_i^k⟩, k = 1,...,d.

Again the proof is postponed to Section 1.4.

1.3 Martingale Problem and Neumann Condition

In this section we investigate in detail the derivatives of X established in Theorem 1.2. Let v be arbitrary but fixed. From the representation of the derivatives in (1.6) it is obvious that (η_t)_{0≤t<τ} evolves according to a linear differential equation when the process X is in the interior of the polyhedron, and that it is projected to the tangent space when X is at the boundary. Furthermore, if X hits the boundary ∂G_i at some time t_i and we have r(t_i−) ≠ r(t_i), i.e. t_i is the endpoint b_n of an excursion interval A_n for some n ∈ ℕ, then η also has a discontinuity at t_i and jumps as follows:

    η_{t_i} = ⟨η_{t_i−}, v_i^⊥⟩ n_i^⊥ + Σ_{k=3}^d ⟨η_{t_i−}, n_i^k⟩ n_i^k.

Recall that {n_i^k ; k = 1,...,d} is an orthonormal basis of ℝ^d, so that η_{t_i−} = Σ_{k=1}^d ⟨η_{t_i−}, n_i^k⟩ n_i^k and

    η_{t_i} − η_{t_i−} = ⟨η_{t_i−}, v_i^⊥⟩ n_i^⊥ − ⟨η_{t_i−}, n_i^⊥⟩ n_i^⊥ − ⟨η_{t_i−}, n_i⟩ n_i = −⟨η_{t_i−}, n_i⟩ v_i,   (1.10)

where the last equality follows from Lemma 1.7 below. Consequently, we observe that at each time when X reaches the boundary ∂G_i, η is projected to the tangent space, since ⟨η_{t_i}, n_i⟩ = 0, and jumps in the direction of v_i or −v_i, respectively. Finally, if X_{t_i}(x) ∈ ∂G_i and t ↦ r(t) is continuous at t = t_i, there is also a projection of η, but since in this case η_{t_i−} is already in the tangent space, the projection has no effect and η is continuous at time t_i.

Lemma 1.7. For all i ∈ {1,...,N} and η ∈ ℝ^d:

    ⟨η, v_i^⊥⟩ n_i^⊥ − ⟨η, n_i^⊥⟩ n_i^⊥ − ⟨η, n_i⟩ n_i = −⟨η, n_i⟩ v_i.

Proof. By the choice of v_i^⊥ and n_i^⊥ in (1.1) we have v_i = n_i + ⟨v_i, n_i^⊥⟩ n_i^⊥ and v_i^⊥ = ⟨v_i^⊥, n_i⟩ n_i + n_i^⊥, which is equivalent to

    ⟨v_i, n_i^⊥⟩ n_i^⊥ = v_i − n_i,   v_i^⊥ − n_i^⊥ = −⟨v_i, n_i^⊥⟩ n_i.

Hence,

    ⟨η, v_i^⊥⟩ n_i^⊥ − ⟨η, n_i^⊥⟩ n_i^⊥ − ⟨η, n_i⟩ n_i = −⟨η, n_i⟩ ⟨v_i, n_i^⊥⟩ n_i^⊥ − ⟨η, n_i⟩ n_i
        = −⟨η, n_i⟩ (v_i − n_i) − ⟨η, n_i⟩ n_i = −⟨η, n_i⟩ v_i.
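The identity of Lemma 1.7 can also be verified numerically. In the sketch below (a two-dimensional example with our own choice of oblique data), n^⊥ and v^⊥ satisfy the conditions in (1.1), and the identity is tested on random vectors η:

```python
import numpy as np

rng = np.random.default_rng(3)

# Oblique data in R^2 satisfying (1.1): <v,n> = 1, <v,v_perp> = <n,n_perp> = 0,
# <n_perp,v_perp> = 1, <n,v_perp> > 0.
n = np.array([0.0, 1.0])
v = np.array([-0.5, 1.0])       # oblique reflection direction with <v,n> = 1
n_perp = np.array([1.0, 0.0])
v_perp = np.array([1.0, 0.5])   # <v,v_perp> = 0, <n_perp,v_perp> = 1, <n,v_perp> = 0.5 > 0

for _ in range(100):
    eta = rng.normal(size=2)
    lhs = (eta @ v_perp) * n_perp - (eta @ n_perp) * n_perp - (eta @ n) * n
    rhs = -(eta @ n) * v
    assert np.allclose(lhs, rhs)
```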

From the observations above it becomes clear that the process (X_t(x), η_t)_{0≤t<τ} is Markovian with state space G × ℝ^d. Next we want to provide the infinitesimal generator for this Markov process. For that purpose we define the operator L as follows. Let the domain D(L) be the set of continuous bounded functions F on G × ℝ^d satisfying the following conditions:

i) For every η ∈ ℝ^d, F(·,η) ∈ C_b^2(G) and the Neumann boundary condition holds:

    D_{v_i} F(·,η)(x) = 0   for x ∈ ∂G_i, i ∈ {1,...,N}.   (1.11)

ii) For every x ∈ G, we have F(x,·) ∈ C_b^1(ℝ^d), i.e. bounded and continuously differentiable with bounded partial derivatives, satisfying the following boundary conditions: if x ∈ ∂G_i, then for all η ∈ ℝ^d:

    F(x, η) = F(x, η − ⟨η, n_i⟩ v_i),   D_{v_i} F(x,·)(η) = 0.   (1.12)

Note that by the jump behaviour of η, provided in (1.10), and by the boundary condition (1.12), we have for every F ∈ D(L) and t < τ:

    F(X_t, η_t) = F(X_t, η_{t−}).   (1.13)

Finally, the operator L is defined by

    L F(x,η) := L^1 F(·,η)(x) + L^2 F(x,·)(η),   F ∈ D(L),

where

    L^1 F(·,η)(x) := (1/2) ΔF(·,η)(x) + Σ_{i=1}^d b^i(x) (∂F/∂x^i)(·,η)(x),

    L^2 F(x,·)(η) := Σ_{i=1}^d ( Σ_{k=1}^d (∂b^i/∂x^k)(x) η^k ) (∂F/∂η^i)(x,·)(η).

Proposition 1.8. For F ∈ D(L),

    F(X_t(x), η_t) − F(x, η_0) − ∫_0^t L F(X_s, η_s) ds,   t < τ,

is a martingale.

Proof. Since (X_t(x), η_t)_{t<τ} is a semimartingale (see Proposition 1.9 below), we may apply Itô's formula for right-continuous semimartingales (see e.g. Section II.7 in [55]) to obtain

F(X_t(x), η_t) − F(x, η_0)
= ∫_0^t ∇_x F(X_s(x), η_s) dX_s(x) + ∫_0^t ∇_η F(X_s(x), η_{s−}) dη_s + (1/2) ∫_0^t ∆F(·, η_s)(X_s(x)) ds
+ Σ_{0<s≤t} { F(X_s(x), η_s) − F(X_s(x), η_{s−}) − ∇_η F(X_s(x), η_{s−}) · (η_s − η_{s−}) }.

By (1.13), F(X_s(x), η_s) = F(X_s(x), η_{s−}), so the sum reduces to

− Σ_{0<s≤t} Σ_{m=1}^N ∇_η F(X_s(x), η_{s−}) · (η_s − η_{s−}) 1l{X_s(x) ∈ ∂G_m}
= Σ_{0<s≤t} Σ_{m=1}^N ⟨η_{s−}, n_m⟩ D_{v_m} F(X_s(x), ·)(η_{s−}) 1l{X_s(x) ∈ ∂G_m},

where the equality follows from the jump formula (1.10) together with Lemma 1.7, and each summand vanishes by the boundary condition (1.12). Likewise, the boundary terms coming from the local times in dX_s(x) vanish by the Neumann condition (1.11). Hence F(X_t(x), η_t) − F(x, η_0) − ∫_0^t L F(X_s, η_s) ds reduces to a stochastic integral with respect to Brownian motion and is therefore a martingale.

Since (X_t(x), η_t)_{t<τ} is Markovian, we can conclude from Proposition 1.8 that the restriction of its generator to D(L) coincides with L. Note that Proposition 1.8 possibly does not give a complete description of the law of the process (X, η), because it only states existence, not uniqueness, for solutions of the associated martingale problem.

Proposition 1.9. (η_t)_{0≤t<τ} is a process of bounded variation.

Proof. It suffices to show that the sizes of the jumps of η on [0, t] are summable for every t < τ. On the one hand, η has a jump at time τ_ℓ for each ℓ. Since the number of crossings through the polyhedron from one face to another up to time t is a.s. finite, it is enough to show that

Σ_{r ∈ (τ_ℓ, τ_{ℓ+1} ∧ t)} ‖η_r − η_{r−}‖ < ∞, for every τ_ℓ ≤ t.

Let τ_ℓ ≤ t and i be such that s(r) = i for all r ∈ (τ_ℓ, τ_{ℓ+1}). Setting A_ℓ := {n ∈ N : A_n ⊂ (τ_ℓ, τ_{ℓ+1} ∧ t)}, we use (1.10) and (1.6) to obtain

Σ_{r ∈ (τ_ℓ, τ_{ℓ+1})} ‖η_r − η_{r−}‖ = Σ_{n ∈ A_ℓ} ‖η_{b_n} − η_{b_n−}‖ = Σ_{n ∈ A_ℓ} ‖⟨η_{b_n−}, n_i⟩ v_i‖
≤ ‖v_i‖ Σ_{n ∈ A_ℓ} ∫_{a_n}^{b_n} |⟨n_i, Db(X_r(x)) · η_r⟩| dr
≤ Σ_{n ∈ A_ℓ} c (b_n − a_n) ≤ c (t − τ_ℓ),

for some positive constant c. The uniform boundedness of η_r in r, which is used here, will follow from Lemma 1.11 below.

Finally, the following induction argument gives another confirmation of our results, namely they imply that the Neumann condition holds for X.

Corollary 1.10. Let again X_t^τ(x) := X_t(x) 1l{t < τ}. Then, for all f ∈ C_b(G) and t > 0, the transition semigroup P_t f(x) := E[f(X_t^τ(x))], x ∈ G, satisfies the Neumann condition at ∂G_i:

x ∈ ∂G_i ⟹ D_{v_i} P_t f(x) = 0.

Proof. Let x ∈ ∂G_i. By a density argument it is sufficient to consider bounded functions f which are continuously differentiable and have bounded derivatives. Then, for each t > 0 we obtain by dominated convergence and the chain rule:

D_{v_i} P_t f(x) = E[ ∇f(X_t(x)) · D_{v_i} X_t(x) 1l{t ∈ [0, τ)} ].

Thus, it suffices to show D_{v_i} X_t(x) = 0 for 0 ≤ t < τ. Thus, for every t ∈ [0, τ_1),

y_t^1 = ∫_{r_i(t)}^t Σ_{l=1}^d c_r(1, l) y_r^l dr,

y_t^2 = ∫_0^t Σ_{l=1}^d c_r(2, l) y_r^l dr + ⟨v_i⊥, n_i⟩ ∫_0^{r_i(t)} Σ_{l=1}^d c_r(2, l) y_r^l dr,

y_t^k = ∫_0^t Σ_{l=1}^d c_r(k, l) y_r^l dr, k ∈ {3,…,d},

and by Gronwall's Lemma it follows that y = 0 on [τ_0, τ_1). In order to prove y = 0 on [τ_ℓ, τ_{ℓ+1}) for some ℓ > 0, note that we have y_{τ_ℓ−} = 0 by the induction assumption, so the claim follows again by Theorem 1.6 and Gronwall's Lemma.

1.4 Proof of the Main Result

1.4.1 Continuity w.r.t. the Initial Condition

The first step to prove Theorem 1.2 is to show the Lipschitz continuity of x ↦ (X_t(x))_t w.r.t. the sup-norm topology on a finite time interval:

Lemma 1.11. For an arbitrary but fixed T > 0, let (X_t(x)) and (X_t(y)), 0 ≤ t ≤ T, be solutions of (1.3) for some x, y ∈ G. Then, there exists a positive constant K, only depending on T, such that a.s.

i) sup_{t ∈ [0,T]} ‖X_t(x) − X_t(y)‖ ≤ K ‖x − y‖,   ii) sup_{t ∈ [0,T]} |l_t^i(x) − l_t^i(y)| ≤ K ‖x − y‖ for all i.

Proof. By the assumption in (1.2), Theorem 2.1 in [28] ensures that Assumption 2.1 in [28] holds. Thus, we can apply Theorem 2.2 in [28] to obtain

sup_{t ∈ [0,T]} ‖X_t(x) − X_t(y)‖ ≤ K_1 ‖x − y‖ + K_1 sup_{t ∈ [0,T]} ‖ ∫_0^t [b(X_r(x)) − b(X_r(y))] dr ‖,

and by the Lipschitz continuity of b we get

sup_{t ∈ [0,T]} ‖X_t(x) − X_t(y)‖ ≤ K_1 ‖x − y‖ + K_2 ∫_0^T sup_{r ≤ s} ‖X_r(x) − X_r(y)‖ ds,

for some positive constants K_1 and K_2, and i) follows by the Gronwall Lemma. Using again Theorem 2.2 in [28], the Lipschitz property of b and i) we obtain ii).
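Gronwall's lemma, invoked here and repeatedly below, turns the integral inequality u(t) ≤ a + K ∫_0^t u(s) ds into the bound u(t) ≤ a e^{Kt}. A minimal numerical sketch of the extremal case u' = K u (plain Python, hypothetical constants) checks the bound on a grid:

```python
import math

a, K, T, n = 1.0, 2.0, 1.0, 10000
dt = T / n

# Extremal case u' = K u, u(0) = a, where the integral inequality
# u(t) <= a + K * int_0^t u(s) ds holds with equality.
u = [a]
for _ in range(n):
    u.append(u[-1] * (1.0 + K * dt))

# Gronwall bound u(t) <= a * exp(K t), using 1 + x <= exp(x) stepwise.
for i, ut in enumerate(u):
    assert ut <= a * math.exp(K * i * dt) + 1e-9
```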

1.4.2 Computation of the Local Times

Recall the definition of the P̃(x) Brownian motion W(x) in (1.4); the Skorohod SDE in (1.3) can be rewritten as follows:

X_t(x) = x + W_t(x) + Σ_i v_i l_t^i(x), t ≥ 0,   (1.14)

so that

⟨X_t(x), n_i⟩ = ⟨x, n_i⟩ + ⟨W_t(x), n_i⟩ + L̂_t^i(x) + l_t^i(x), t ≥ 0,

since ⟨v_i, n_i⟩ = 1, where

L̂_t^i(x) := Σ_{j≠i} ⟨v_j, n_i⟩ l_t^j(x), t ≥ 0.

Note that ⟨W(x), n_i⟩ is again a Brownian motion under P̃(x) by Lévy's characterization theorem, since n_i is a unit vector. The local time l^i(x) is carried by the set of times t when ⟨X_t(x), n_i⟩ − c_i = 0, so that it can be computed directly by Skorohod's Lemma (see e.g. Lemma VI.2.1 in [56]). This yields

l_t^i(x) = [ −⟨x, n_i⟩ + c_i − inf_{s≤t} ( ⟨W_s(x), n_i⟩ + L̂_s^i(x) ) ]^+, t ≥ 0.

Fix any q_n. Since ⟨X_{r_i(q_n)}(x), n_i⟩ − c_i = 0 and t ↦ l_t^i(x) is increasing, we have for all s ≤ r_i(q_n):

⟨W_{r_i(q_n)}(x), n_i⟩ + L̂_{r_i(q_n)}^i(x) = −⟨x, n_i⟩ + c_i − l_{r_i(q_n)}^i(x) ≤ −⟨x, n_i⟩ + c_i − l_s^i(x)
= −⟨X_s(x), n_i⟩ + c_i + ⟨W_s(x), n_i⟩ + L̂_s^i(x)   (1.15)
≤ ⟨W_s(x), n_i⟩ + L̂_s^i(x).

Therefore, for all t ∈ A_n:

l_t^i(x) = l_{r_i(q_n)}^i(x) = [ −⟨x, n_i⟩ + c_i − ⟨W_{r_i(t)}(x), n_i⟩ − L̂_{r_i(t)}^i(x) ]^+.   (1.16)

Next we compute the local times of the process with perturbed starting point. Set x_ε := x + εv, ε ∈ R, v ∈ R^d, where |ε| is always supposed to be sufficiently small, such that x_ε lies in G \ ∪_{i≠j} (∂G_i ∩ ∂G_j). We start with a preparatory lemma:

Lemma 1.12. Let i ∈ {1,…,N} and 0 ≤ s < t, and let ϑ denote the a.s. unique time at which ⟨W(x), n_i⟩ attains its minimum over [s, t]. Then there exists ∆ > 0 such that a.s. ϑ is the only time when

⟨W(x_ε), n_i⟩ = ⟨W(x), n_i⟩ + ⟨W(x_ε) − W(x), n_i⟩

attains its minimum over [s, t], for all |ε| < ∆.

Proof. Since ⟨W(x), n_i⟩ is a Brownian motion under P̃(x), by Lemma 1 in [26] there exists a random variable γ such that every γ-Lipschitz perturbation of ⟨W(x), n_i⟩ attains its minimum only at ϑ. Using Lemma 1.11 and the Lipschitz continuity of b we find a ∆ > 0 such that sup_{r∈[s,t]} |⟨b(X_r(x_ε)) − b(X_r(x)), n_i⟩| ≤ γ for all |ε| < ∆. This implies that h(r) := ⟨W_r(x_ε) − W_r(x), n_i⟩ = h(s) + ∫_s^r ⟨b(X_u(x_ε)) − b(X_u(x)), n_i⟩ du is a γ-Lipschitz perturbation for such ε, and the claim follows.
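Skorohod's lemma, used above to compute l_t^i(x), can be illustrated on a discretized one-dimensional path. The sketch below (plain Python, a hypothetical random-walk path standing in for x + W_t) builds the Skorohod map: the reflection term is l_t = [−inf_{s≤t} w_s]^+, the reflected path x_t = w_t + l_t stays nonnegative, and l increases only at the zeros of x:

```python
import random

random.seed(1)

x0 = 0.5
# Hypothetical driving path: x0 plus a scaled Gaussian random walk.
w = [x0]
for _ in range(1000):
    w.append(w[-1] + random.gauss(0.0, 0.05))

# Skorohod map: l_t = [ -inf_{s<=t} w_s ]^+ ,  x_t = w_t + l_t.
l, x = [], []
running_min = w[0]
for wt in w:
    running_min = min(running_min, wt)
    lt = max(0.0, -running_min)
    l.append(lt)
    x.append(wt + lt)

assert all(xt >= 0.0 for xt in x)              # reflected path stays in [0, infty)
assert all(b >= a for a, b in zip(l, l[1:]))   # l is nondecreasing
# l increases only at (numerical) zeros of x:
for (a, b), xt in zip(zip(l, l[1:]), x[1:]):
    if b > a:
        assert xt < 1e-9
```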

Lemma 1.13. For all i and q , n N, there exists a random ∆i > 0 such that for all ε < ∆i n ∈ n | | n a.s.: + i i l (xε) = xε, ni + ci W (xε), ni Lˆ (xε) . (1.17) qn −h i −h ri(qn) i − ri(qn) h i i Proof. We need only to consider the case s(qn) = i. Indeed, if qn < inf C we can use Lemma 1.11 to find a ∆i > 0, such that X (x ) ∂G for all t [0, q ] and for all ε < ∆i , which implies n t ε 6∈ i ∈ n | | n li (x ) = li (x)=0. If q > inf Ci and s(t) = i, we setq ˜ := sup q : q < q , s(q ) = i and qn ε qn n 6 n { k k n k } again by Lemma 1.11 there exists a ∆i > 0, such that X (x ) ∂G for all t [˜q , q ] and for n t ε 6∈ i ∈ n n all ε < ∆i , which implies li (x ) = li (x ). | | n qn ε q˜n ε Let now qn be such that s(qn) = i. Using again Skorohod’s Lemma, we obtain for all ε: + i ˆi lqn (xε) = xε, ni + ci inf Ws(xε), ni + Ls(xε) −h i − s qn h i  ≤   +  = xε, ni + ci inf (fε(s) + gε(s)) , −h i − s qn  ≤  where f (s) := W (x ), n + Lˆi (x) and g (s) := Lˆi (x ) Lˆi (x). From the calculation in (1.15) ε h s ε ii s ε s ε − s above we know that W (x), n + Lˆi (x) attains its minimum over [0, q ] at r (q ), and we have h ii s n i n to show that for sufficiently small ε : | | inf (fε(s) + gε(s)) = fε(ri(qn)) + gε(ri(qn)). (1.18) s qn ≤ Recall that qn < τ, i.e. the process X hits the faces of the polyhedron G only successively, i i and recall that C is the support of l (x). Thus, there exists a time qn− < qn such that 2d := i i lqn (x) l − (x) > 0 and Xs(x) j=i ∂Gj for all s [qn−, qn] (note that we might have qn− =0 in − qn 6∈ 6 ∈ the case where x ∂G and q < inf Cj). We apply Lemma 1.11 and find a ∆ > 0 such ∈ i n S j=i n′ that also X (x ) ∂G for all s 6[q , q ] and ε < ∆ . Hence, for such ε it follows that s ε 6∈ j=i j S∈ n− n | | n′ ˆi ˆi 6 L (x), L (xε) and gεSare constant on [qn−, qn], so that fε + gε attains its minimum over [qn−, qn] at the same time as W (x ), n . By Lemma 1.12, possibly after choosing a smaller ∆ , we know h ε ii n′ that this time is ri(qn), so that

inf (f (s) + g (s)) = f (r (q )) + g (r (q )), ε < ∆′ . (1.19) − ε ε ε i n ε i n n s [qn ,qn] ∀| | ∈ Proceeding as in (1.15), we get for all s q : ≤ n− i i i W (x), ni + Lˆ (x) = x, ni + ci l (x) = x, ni + ci l (x) h ri(qn) i ri(qn) −h i − ri(qn) −h i − qn i i = x, ni + ci l − (x) 2d x, ni + ci ls(x) 2d −h i − qn − ≤ −h i − − (1.20) = X (x), n + c + W (x), n + Lˆi (x) 2d −h s ii i h s ii s − W (x), n + Lˆi (x) 2d. ≤h s ii s − Using the Lipschitz continuity of b and Lemma 1.11, we find a ∆n′′ > 0 such that s d sup Ws(xε) Ws(x), ni = sup b(Xr(xε)) b(Xr(x)), ni dr , ε < ∆n′′, s qn |h − i| s qn 0 h − i ≤ 2 ∀| | ≤ ≤ Z

1.4 Proof of the Main Result 15 i.e. for such ε (1.20) implies

inf f (s) f (r (q ))= inf W (x ), n + Lˆi (x) W (x ), n Lˆi (x) − ε ε i n − s ε i s ri(qn) ε i ri(qn) s qn − s qn h i −h i − ≤ ≤   ˆi inf Ws(x), ni + Ls(x) sup Ws(xε) Ws(x), ni ≥ − h i − − |h − i| (1.21) s qn s qn ≤   ≤ i W (xε), ni Lˆ (x) −h ri(qn) i − ri(qn) 3 d W (x ) W (x), n d. ≥ 2 −h ri(qn) ε − ri(qn) ii ≥

By Lemma 1.11 ii) there exists a random ∆n′′′ > 0 such that a.s. d sup gε(s) < , ε < ∆n′′. (1.22) s qn | | 2 ∀| | ≤ Now using (1.21) and (1.22) we obtain for ε < min(∆ , ∆ ): | | n′′ n′′′ d inf (fε(s) + gε(s)) inf fε(s) sup gε(s) >d + fε(ri(qn)) − ≥ − − − | | − 2 s qn s qn s qn ≤ ≤ ≤ d = f (r (q )) + > f (r (q )) + g (r (q )), (1.23) ε i n 2 ε i n ε i n so that (1.18) follows from (1.19) and (1.23) for all ε < ∆i := min(∆ , ∆ , ∆ ). | | n n′ n′′ n′′′ 1.4.3 Computation of the Difference Quotients Next we compute the difference quotients of X. For fixed v Rd we set x = x + εv, ε = 0 and ∈ ε 6 1 η (ε) := (X (x ) X (x)) , t 0. t ε t ε − t ≥

Let now t [0, τ) and n and ℓ be such that t An and t [τℓ, τℓ+1). We choose ∆n > 0 such ∈ ∈ i i ∈ i that a.s. for all ε < ∆n we have for all i that lqn (x) = lqn (xε)=0if qn < inf C , and both of | | i them are strictly positive if qn > inf C , and finally that formula (1.17) holds. From (1.3) we deduce directly

N X (x ) X (x) = x x + W (x ) W (x) + v lj(x ) lj(x) . (1.24) t ε − t ε − t ε − t j t ε − t Xj=1   If s(t) = 0 we get immediately

1 t η (ε) = v + (b(X (x )) b(X (x))) dr. (1.25) t ε r ε − r Z0 Let us now consider the case s(t) = i, i 1,...,N . Then, possibly after choosing a smaller ∈ { } ∆ , we may suppose that lj(x ) is constant on [τ , q t] for every j = i since t < τ. In (1.24) n ε ℓ n ∨ 6 16 SDEs in a convex Polyhedron with oblique Reflection we use (1.16) and (1.17) to obtain

X (x ) X (x) t ε − t j j =xε x + Wt(xε) Wt(x) + vj l (xε) l (x) − − ri(qn) − ri(qn) j=i X6   i i + vi xε x, ni W (xε) W (x), ni Lˆ (xε) Lˆ (x) . −h − i−h ri(qn) − ri(qn) i − ri(qn) − ri(qn)    Hence,

X (x ) X (x), n h t ε − t ii i i = xε x, ni + Wt(xε) Wt(x), ni + Lˆ (xε) Lˆ (x) h − i h − i ri(qn) − ri(qn) i i xε x, ni W (xε) W (x), ni Lˆ (xε) Lˆ (x) −h − i−h ri(qn) − ri(qn) i − ri(qn) − ri(qn) t   = b(X (x )) b(X (x)), n dr h r ε − r ii Zri(qn) and we get 1 t η (ε), n = b(X (x )) b(X (x)), n dr. (1.26) h t ii ε h r ε − r ii Zri(qn) Recall the definition of v and n in (1.1). Since v , n = v , n , we have i⊥ i⊥ h i i⊥i −h i⊥ ii

X (x ) X (x), n⊥ t ε − t i D E j j = xε x, n⊥ + Wt(xε) Wt(x), n⊥ + vj, n⊥ l (xε) l (x) h − i i h − i i h i i ri(qn) − ri(qn) j=i X6   i i + v⊥, ni xε x, ni + W (xε) W (x), ni + Lˆ (xε) Lˆ (x) h i i h − i h ri(qn) − ri(qn) i ri(qn) − ri(qn)  j j  = xε x, n⊥ + Wr (q )(xε) Wr (q )(x), n⊥ + vj, n⊥ l (xε) l (x) h − i i h i n − i n i i h i i ri(qn) − ri(qn) j=i X6   + x x, v⊥, n n + W (x ) W (x), v⊥, n n ε − h i ii i ri(qn) ε − ri(qn) h i ii i D E D t E j j + vj, v⊥, ni ni l (xε) l (x) + b(Xr(xε)) b(Xr(x)), n⊥ dr. h i i ri(qn) − ri(qn) − i j=i ri(qn) X6 D E  Z D E By the choice of v and n , clearly v = v , n n + n , so that i⊥ i⊥ i⊥ h i⊥ ii i i⊥

X (x ) X (x), n⊥ t ε − t i D E j j = xε x + Wr (q )(xε) Wr (q )(x) + vj l (xε) l (x) , v⊥ − i n − i n ri(qn) − ri(qn) i * j=i + X6   t + b(Xr(xε)) b(Xr(x)), ni⊥ dr. r (qn) − Z i D E 1.4 Proof of the Main Result 17

Since W (x ) W (x) is continuous, v , v = 0 and lj(x ) and lj(x) are constant on [τ , r (q )] ε − h i i⊥i ε ℓ i n for all j = i by the choice of ∆ , we obtain 6 n 1 t ηt(ε), ni⊥ = ηri(qn) (ε), vi⊥ + b(Xr(xε)) b(Xr(x)), ni⊥ dr. (1.27) − ε r (qn) − D E D E Z i D E and on the other hand we use again v = v , n n + n to obtain i⊥ h i⊥ ii i i⊥ 1 t ηt(ε), ni⊥ = ητℓ (ε), vi⊥ + b(Xr(xε)) b(Xr(x)), ni⊥ dr − ε τ − D E D E Z ℓ D E 1 ri(qn) + v⊥, n b(X (x )) b(X (x)), n dr. (1.28) h i ii ε h r ε − r ii Zτℓ Finally, for every k 3,...,d , we have v , nk = 0, so that ∈{ } h i i i

k j j k Xt(xε) Xt(x), n = xε x + Wt(xε) Wt(x) + vj l (xε) l (x) , n . − i − − ri(qn) − ri(qn) i * j=i + D E X6   Thus,

t k k 1 k ηt(ε), ni = ηri(qn) (ε), ni + b(Xr(xε)) b(Xr(x)), ni dr (1.29) − ε r (qn) − D E D E Z i D E and t k k 1 k ηt(ε), ni = ητℓ (ε), ni + b(Xr(xε)) b(Xr(x)), ni dr. (1.30) − ε τ − D E D E Z ℓ D E Since nk; k =1,...,d is an orthonormal basis of Rd we obtain by (1.26), (1.27) and (1.29), { i } d k k η (ε) = η (ε), n n + η (ε), n⊥ n⊥ + η (ε), n n t h t ii i t i i t i i D E Xk=3 D E d t k k 1 = ηri(t) (ε), vi⊥ ni⊥ + ηri(t) (ε), ni ni + (b(Xr(xε)) b(Xr(x))) dr. (1.31) h − i h − i ε ri(t) − Xk=3 Z Then, from (1.25) and (1.31) we get

1 t η (ε) = η (ε) + [b(X (x )) b(X (x))] dr t an ε r ε − r Zan d t 1 ∂b = η (ε) + (Xα,ε) dα ηkj(ε) dr an ∂xk r r Zan k=1 Z0  t X1 = η (ε) + Db(Xα,ε) η (ε) dαdr, t A , an r · r ∈ n Zan Z0 α,ε where Xr := αX (x )+(1 α)X (x), α [0, 1]. r ε − r ∈ 18 SDEs in a convex Polyhedron with oblique Reflection

1.4.4 Proof of the Differentiability

Now we argue similar to Step 5 in the proof of Theorem 1 in [26]. Recall that sups [0,t] ηs(ε) c1 ∈ k k ≤ for some positive constant c and ε = 0 by Lemma 1.11. Let (ε ) be any sequence converging 1 6 ν ν to zero. By a diagonal procedure, we can extract a subsequence (νl)l such that ηr(qn)(ενl ) has a limitη ˆ Rd as l for all n N. Let nowη ˆ : [0, τ) C Rd be the unique solution of r(qn) ∈ →∞ ∈ \ → t ηˆ :=η ˆ + Db(X (x)) ηˆ dr, t A . t r(qn) r · r ∈ n Zr(qn) Then, we get for ε (0, ∆ ) and t A , | | ∈ n ∈ n α,ε ηt(ε) ηˆt ηr(qn)(ε) ηˆr(qn) + c1 sup Db(Xr ) Db(Xr(x)) k − k ≤k − k r An k − k ∈ t + c η (ε) ηˆ dr, 2 k r − rk Z0 α,ε c > 0 denoting the Lipschitz constant of b. Since η (ε ) ηˆ , X νl X (x) uniformly 2 r(qn) νl → r(qn) → r in r [0,t] and the derivatives of b are continuous, we obtain by Gronwall’s Lemma that η (ε ) ∈ t νl converges toη ˆ uniformly in t A for every n. Thus, since C has zero Lebesgue measure, η (ε ) t ∈ n t νl converges toη ˆ for all t [0, τ) C as l and by the dominated convergence theorem we get t ∈ \ →∞ t ηˆt = v + Db(Xr(x)) ηˆr dr, t [0, inf C), 0 · ∈ Z t ηˆ =η ˆ + Db(X (x)) ηˆ dr, t [inf C, τ), t r(t) r · r ∈ Zr(t) where we have extendedη ˆ on C such that it is right-continuous. Since C has zero Lebesgue measure this does not affect the value ofη ˆ , t [0, τ) C. The proof of Theorem 1.2 is now t ∈ \ complete once we have shown thatη ˆ does not depend on the chosen subsequence. To that aim we define yε =(y1,ε,...,yd,ε), t 0, by t t t ≥ η (ε),ek if s(t)=0, yk,ε := t t h ki ( ηt(ε), n if s(t) = i. h i i Furthermore, we set

1 ek, Db(Xα,ε) el dα if s(t)=0, cε(k, l) := 0 t t 1h k α,ε · li ( n , Db(Xt ) n dα if s(t) = i. R0 h i · ii Then, since R d 1 1 α,ε l [b(Xt(xε)) b(Xt(x))] = Dnl b(Xt ) dα ηt(ε), ni , ε − 0 i h i Xl=1 Z  we obtain on one hand from (1.25) that for t < inf C and ε small enough,

t d k,ε k ε l,ε yt = v,e + cr(k, l) yr dr, k 1,...,d . h i 0 ∈{ } Z Xl=1 1.4 Proof of the Main Result 19

On the other hand from (1.26), (1.28) and (1.30) we get for sufficiently small ε that yε satisfies for t [τ , τ ) and i such that s(t) = i: ∈ ℓ ℓ+1 t d 1,ε ε l,ε yt = cr(1, l) yr dr, ri(t) Z Xl=1 t d ri(t) d 2,ε 1,ε 2,ε ε l,ε ε l,ε yt = vi⊥, ni yτℓ + yτℓ + cr(2, l) yr dr + vi⊥, ni cr(2, l) yr dr, h i − − τℓ h i τℓ Z Xl=1 Z Xl=1 t d k,ε k,ε ε l,ε yt = yτℓ + cr(k, l) yr dr, k 3,...,d . − τℓ ∈{ } Z Xl=1 We set

We set

y_t^k := ⟨η̂_t, e^k⟩ if s(t) = 0, and y_t^k := ⟨η̂_t, n_i^k⟩ if s(t) = i,

and

c_t(k, l) := ⟨e^k, Db(X_t(x)) · e^l⟩ if s(t) = 0, and c_t(k, l) := ⟨n_i^k, Db(X_t(x)) · n_i^l⟩ if s(t) = i.

Since η_t(ε) converges along the chosen subsequence to η̂_t uniformly in t ∈ A_n for every n, we obtain by the dominated convergence theorem that y_t satisfies the system (1.9). In order to complete the proof of Theorem 1.2 we shall now prove that η̂ is characterized by the system (1.9).

Lemma 1.14. The system (1.9) admits a unique solution.
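The Picard-iteration mechanism behind this uniqueness statement can be illustrated on a scalar linear equation. The sketch below (plain Python, hypothetical bounded coefficient c(t) = cos(3t)) iterates the map ϕ ↦ I(ϕ) with I(ϕ)_t = y_0 + ∫_0^t c(r) ϕ_r dr and checks convergence to the explicit solution y_t = y_0 exp(∫_0^t c(r) dr):

```python
import math

n, T = 2000, 1.0
dt = T / n
ts = [i * dt for i in range(n + 1)]
c = [math.cos(3.0 * t) for t in ts]   # hypothetical bounded coefficient
y0 = 1.0

def I(phi):
    """One Picard step: I(phi)_t = y0 + int_0^t c(r) phi_r dr (left-point rule)."""
    out = [y0]
    acc = 0.0
    for i in range(n):
        acc += c[i] * phi[i] * dt
        out.append(y0 + acc)
    return out

# Iterating the contraction from an arbitrary starting function.
phi = [0.0] * (n + 1)
for _ in range(40):
    phi = I(phi)

# Explicit solution: y_t = y0 * exp(sin(3t)/3).
exact = [y0 * math.exp(math.sin(3.0 * t) / 3.0) for t in ts]
assert max(abs(p - e) for p, e in zip(phi, exact)) < 1e-2
```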

Proof. We shall prove uniqueness of the solution on every interval [τℓ, τℓ+1) by induction over ℓ. If inf C > 0, on [τ0, τ1) = [0, inf C) existence and uniqueness is clear. Otherwise, the initial condition is specified as in Theorem 1.9 and we get existence and uniqueness on [τ0, τ1) by the same argument as for general ℓ. For t [τ , τ ), ℓ 0, the system is given by ∈ ℓ ℓ+1 ≥ t d 1 1 l yt = yτℓ + cr(1, l) yr dr, ri(t) Z Xl=1 t d ri(t) d 2 2 l l yt = yτℓ + cr(2, l) yr dr + vi⊥, ni cr(2, l) yr dr, τℓ h i τℓ Z Xl=1 Z Xl=1 t d k k l yt = yτℓ + cr(k, l) yr dr, k 3,...,d , τℓ ∈{ } Z Xl=1 where the initial condition is uniquely specified by the induction assumption:

1 k 1 2 k k yτ =0, yτ = vi⊥, ni yτ + yτ , yτ = yτ . ℓ ℓ h i ℓ− ℓ− ℓ ℓ− d For any fixed T > 0, let H be the totality of R -valued adapted processes (ϕt), t [0, T ], whose 2 ∈ paths are a.s. c`adl`ag and which satisfy supt [0,T ] E[ ϕt ] < . On H we introduce the norm ∈ k k ∞ 2 1/2 ϕ H = sup E ϕt , k k t [0,T ] k k ∈   20 SDEs in a convex Polyhedron with oblique Reflection and for any ϕ H we define the process I(ϕ) by ∈ t d 1 1 l I(ϕ)t = 1l t [τ ,τ ) yτ + cr(1, l) ϕr dr , { ∈ ℓ ℓ+1 } ℓ ri(t) ! Z Xl=1 t d ri(t) d 2 2 l l I(ϕ)t = 1l t [τ ,τ ) yτ + cr(2, l) ϕr dr + vi⊥, ni cr(2, l) ϕr dr , { ∈ ℓ ℓ+1 } ℓ τℓ h i τℓ ! Z Xl=1 Z Xl=1 t d k k l I(ϕ)t = 1l t [τ ,τ ) yτ + cr(k, l) ϕr dr , k 3,...,d , { ∈ ℓ ℓ+1 } ℓ τℓ ! ∈{ } Z Xl=1 for t [0, T ]. Since the system is linear in y with uniformly bounded coefficients, one can easily ∈ verify that I(ϕ) H for every ϕ H and that for any ϕ, ψ H ∈ ∈ ∈ I(ϕ) I(ψ) c ϕ ψ , k − kH ≤ k − kH for some constant c not depending on ϕ and ψ. Hence, we obtain uniqueness on [τ , τ T ) by ℓ ℓ+1 ∧ standard arguments via Picard-iteration. Since T is arbitrary, we get uniqueness on [τℓ, τℓ+1). Chapter 2

SDEs in a Smooth Domain with normal Reflection

2.1 Introduction

This chapter deals with the pathwise differentiability for the solution (Xt(x))t 0 of a stochastic ≥ differential equation (SDE) of the Skorohod type in a smooth bounded domain G Rd, d 2, ⊂ ≥ with normal reflection at the boundary. Again, the process (Xt(x)) is driven by a d-dimensional standard Brownian motion and by a drift term, whose coefficients are supposed to be continuously differentiable and Lipschitz continuous, i.e. existence and uniqueness of the solution are ensured by the results of Lions and Sznitman in [51]. We prove that for every t > 0 the solution Xt(x) is differentiable w.r.t. the deterministic initial value x and we give a representation of the derivatives in terms of an ordinary differential equation. As an easy side result, we provide a Bismut-Elworthy formula for the gradient of the transition semigroup. The resulting derivatives evolve according to a simple linear ordinary differential equation, when the process is away from the boundary, and they have a discontinuity and are projected to the tangent space, when the process hits the boundary. This evolution becomes rather complicated because of the structure of the set of times, when the process hits the boundary, which is known to be a.s. a closed set with zero Lebesgue measure without isolated points. The system is similar to the one introduced by Airault in [1] in order to develop probabilistic representations for the solutions of linear PDE systems with mixed Dirichlet-Neumann conditions in a smooth domain in Rd. A further similar system appears in Section V.6 in [42], which deals with the heat equation for diffusion processes on manifolds with boundary conditions. In a sense, our result can be considered as a pathwise version of the results in [1, 42]. Again we shall use some of the techniques established in [26], where Deuschel and Zambotti proved such a pathwise differentiability result with respect to the initial data for diffusion processes in the domain G = [0, )d. 
The proof of the main result in [26] is based on the fact that a ∞ Brownian path, which is perturbed by adding a Lipschitz path with a sufficiently small Lipschitz constant, attains its minimum at the same time as the original path (see Lemma 1 in [26]). This is due to the fact that a Brownian path leaves its minimum faster than linearly. In [26] as well

21 22 SDEs in a Smooth Domain with normal Reflection as in the previous chapter this is used in order to provide an exact computation of the reflection term in the difference quotient via Skorohod’s lemma. In this chapter, the approach is quite similar: Using localization techniques introduced by Anderson and Orey (cf. [5]) we transform the SDE locally into an SDE on a halfspace (cf. Sec- tion 2.2.4 below). Then, in order to compute the local time we need to deal with the pathwise minimum of a continuous martingale in place of the standard Brownian motion. Since the pertur- bations are now no longer Lipschitz continuous, i.e. Lemma 1 of [26] does not apply, and because of the asymptotics of a Brownian path around its minimum (cf. Lemma 2.12 below) one cannot necessarily expect that an analogous statement to Lemma 1 in [26] holds true in this case. Never- theless, one can show that the minimum times converge sufficiently fast to obtain differentiability (see Proposition 2.13). Another crucial ingredient in the proof is the Lipschitz continuity of the solution with respect to the initial data. This was proven by Burdzy, Chen and Jones in Lemma 3.8 in [16] for the reflected Brownian motion without drift in planar domains, but the arguments can easily be transferred into our setting (see Proposition 2.8). This will give pathwise convergence of the difference quotients along a subsequence. In order to show that the limit does not depend on the chosen subsequence, we shall characterize the limit in coordinates with respect to a moving frame as the solution of an SDE-like equation (cf. Section 4 in [1]). A pathwise differentiability result w.r.t. the initial position of a reflected Brownian motion in smooth domains has also been proven by Burdzy in [14] using excursion theory. The resulting derivative is characterized as a linear map represented by a multiplicative functional for reflected Brownian motion, which has been introduced in Theorem 3.2 of [17]. 
In contrast to our main results, the SDE considered in [14] does not contain a drift term and the differentiability is shown for the trace process, while we consider the process on the original time-scale. However, we can recover the term which mainly characterizes the derivative in [14], describing the influence of the curvature of ∂G (cf. Remark 2.6 below). In another recent article [54] Pilipenko studies flow properties for SDEs with reflection and obtains Sobolev differentiability in the initial value. In general, reflected Brownian motions have been investigated in several articles, where the question of coalescence or non-coalescence of synchronous couplings is of particular interest. For planar convex domains this has been studied by Cranston and Le Jan in [22] and [23], for some classes of non-smooth domains by Burdzy and Chen in [15], and for two-dimensional smooth domains by Burdzy, Chen and Jones in [16], while the case of a multi-dimensional smooth domain is still an open problem. The material presented in this chapter is contained in [7]. The chapter is organized as follows: In Section 2.2 we give the precise setup and some further preliminaries, and we present the main results. Section 2.3 is devoted to the proof of the main results.

2.2 Main Results and Preliminaries

2.2.1 General Notation

As in the last chapter we denote by ‖·‖ the Euclidean norm, by ⟨·,·⟩ the canonical scalar product and by e = (e^1,…,e^d) the standard basis in R^d, d ≥ 2. Let G ⊂ R^d be a connected closed bounded domain with C³-smooth boundary and G_0 its interior, and let n(x), x ∈ ∂G, denote the inner normal field. For any x ∈ ∂G, let

π_x(z) := z − ⟨z, n(x)⟩ n(x), z ∈ R^d,

denote the orthogonal projection onto the tangent space. The closed ball in R^d with center x and radius r will be denoted by B_r(x). The transposition of a vector v ∈ R^d and of a matrix A ∈ R^{d×d} will be denoted by v* and A*, respectively. The set of continuous real-valued functions on G is denoted by C(G), and C_b(G) denotes the set of those functions in C(G) that are bounded on G. For each k ∈ N, C^k(G) denotes the set of real-valued functions that are k-times continuously differentiable in G, and C_b^k(G) denotes the set of those functions in C^k(G) that are bounded and have bounded partial derivatives up to order k. Furthermore, for f ∈ C¹(G) we denote by ∇f the gradient of f and, in the case where f is R^d-valued, by Df the Jacobi matrix. Finally, ∆ denotes the Laplace differential operator on C²(G) and D_v := ⟨v, ∇⟩ the directional derivative operator associated with the direction v ∈ R^d. The symbols c and c_i, i ∈ N, will denote constants whose value may only depend on some quantities specified in the particular context.

2.2.2 Skorohod SDE For any starting point x G, we consider the following stochastic differential equation of the ∈ Skorohod type:

t t X (x) = x + b(X (x)) dr + w + n(X (x)) dl (x), t 0, t r t r r ≥ Z0 Z0 (2.1) X (x) G, dl (x) 0, ∞ 1l (X (x)) dl (x)=0, t 0, t ∈ t ≥ G0 t t ≥ Z0 where w is a d-dimensional Brownian motion on a complete probability space (Ω, , P) and l(x) F denotes the local time of X(x) in ∂G, i.e. it increases only at those times, when X(x) is at the boundary of G. The components bi : G R of b are supposed to be in C1(G) and Lipschitz → continuous. Then, existence and uniqueness of strong solutions of (2.1) are guaranteed by the results in [60] in the case, where G is a convex set, and for arbitrary smooth G by the results in [51]. The local time l is carried by the set

C := s 0: X (x) ∂G . { ≥ s ∈ } We define r(t) := sup(C [0,t]) ∩ with the convention sup := 0. Then, C is known to be a.s. a closed set of zero Lebesgue ∅ measure without isolated points and t r(t) is locally constant and right-continuous. Let (A ) 7→ n n be the family of connected components of the complement of C. An is open, so that there exists q A Q, n N. Note that for each t > inf C we have X (x) ∂G. n ∈ n ∩ ∈ r(t) ∈ For later use we introduce now a moving frame. Let x O(x) be a mapping on G taking 7→ values in the space of orthogonal matrices, which is twice continuoulsly differentiable, such that for x ∂G the first row of O(x) coincides with n(x). Such a mapping, which exists locally, can ∈ 24 SDEs in a Smooth Domain with normal Reflection

be constructed on the whole domain G by using the partition of unity. Writing O_s = O(X_s(x)) we get by Itô's formula

dO_s = Σ_{k=1}^d α_k(X_s(x)) dw_s^k + β(X_s(x)) ds + γ(X_s(x)) dl_s(x),   (2.2)

for some coefficient functions α_k, β and γ.
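For intuition only, the reflected dynamics (2.1) can be mimicked by a projection Euler scheme: after each Euler step, a point that has left G is projected back onto ∂G along the normal, and the projection distances accumulate an approximation of the local time. The sketch below is a heuristic discretization with a hypothetical domain (the closed unit disk in R²) and drift b ≡ 0; it is not the construction of [51] or [60] used in the text:

```python
import math
import random

random.seed(7)

def simulate(x, steps=5000, dt=1e-4):
    """Projection Euler scheme for normally reflected Brownian motion
    in the unit disk: step, then project back onto the boundary if outside."""
    px, py = x
    local_time = 0.0
    for _ in range(steps):
        px += random.gauss(0.0, math.sqrt(dt))
        py += random.gauss(0.0, math.sqrt(dt))
        r = math.hypot(px, py)
        if r > 1.0:                      # left G: project back along n(x) = -x/|x|
            local_time += r - 1.0        # accumulated reflection ~ local time
            px, py = px / r, py / r
    return (px, py), local_time

endpoint, lt = simulate((0.9, 0.0))
assert math.hypot(*endpoint) <= 1.0 + 1e-12   # the scheme keeps the path in G
assert lt >= 0.0                              # dl >= 0, as in (2.1)
```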

2.2.3 Main Results

Theorem 2.1. For all t > 0 and x ∈ G, a.s. the mapping y ↦ X_t(y) is differentiable at x and, setting η_t := D_v X_t(x) = lim_{ε→0} (X_t(x + εv) − X_t(x))/ε, v ∈ R^d, there exists a right-continuous modification of η such that a.s. for all t > 0:

η_t = v + ∫_0^t Db(X_r(x)) · η_r dr, if t < inf C,
η_t = π_{X_{r(t)}(x)}(η_{r(t)−}) + ∫_{r(t)}^t Db(X_r(x)) · η_r dr, if t ≥ inf C.   (2.3)

Remark 2.2. If x ∈ ∂G, t = 0 is a.s. an accumulation point of C and we have r(t) > 0 a.s. for every t > 0. Therefore, in that case η_0 = v and η_{0+} = π_x(v), i.e. there is a discontinuity at t = 0.
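The content of such a pathwise differentiability statement can be seen directly in a hypothetical one-dimensional illustration with G = [0, ∞) and b ≡ 0 (the setting of [26] rather than the smooth domain of this chapter): there X_t(x) = x + w_t + [−inf_{s≤t}(x + w_s)]^+, and pathwise D_x X_t(x) equals 1 until the path first hits 0 and 0 afterwards, since the projection kills the normal component. On a discretized path the finite difference matches this indicator exactly (outside a negligible boundary case):

```python
import random

random.seed(3)

def reflected(x, incs):
    """Skorohod map on [0, infty): X_t = x + w_t + max(0, -min_{s<=t}(x + w_s))."""
    path, pos, running_min = [], x, x
    for dw in incs:
        pos += dw
        running_min = min(running_min, pos)
        path.append(pos + max(0.0, -running_min))
    return path

incs = [random.gauss(0.0, 0.05) for _ in range(500)]
x, eps = 0.4, 1e-8
fd = [(a - b) / eps
      for a, b in zip(reflected(x + eps, incs), reflected(x, incs))]

# Pathwise derivative: 1 before min_{s<=t}(x + w_s) drops below 0, then 0.
pos, running_min, hit = x, x, []
for dw in incs:
    pos += dw
    running_min = min(running_min, pos)
    hit.append(0.0 if running_min < 0.0 else 1.0)

assert all(abs(d - h) < 1e-6 for d, h in zip(fd, hit))
```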

Remark 2.3. The equation (2.3) does not characterize the derivatives, since it does not admit a unique solution. Indeed, if the process (η_t) solves (2.3), then the process (1 + l_t(x)) η_t, t ≥ 0, also does. A characterizing equation for the derivatives is given in Theorem 2.5 below.

Note that this result corresponds to that for the domain G = [0, ∞)^d in Theorem 1 in [26]. The proof of Theorem 2.1 as well as the proofs of Theorem 2.5 and Corollary 2.7 below are postponed to Section 2.3. As soon as pathwise differentiability is established, we can immediately provide a Bismut-Elworthy formula: Define for all f ∈ C_b(G) the transition semigroup P_t f(x) := E[f(X_t(x))], x ∈ G, t > 0, associated with X.

Corollary 2.4. Setting η_t^{ij} := ∂X_t^i(x)/∂x^j, we have for all f ∈ C_b(G), t > 0 and x ∈ G:

∂/∂x^i P_t f(x) = (1/t) E[ f(X_t(x)) Σ_{k=1}^d ∫_0^t η_r^{ki} dw_r^k ], i ∈ {1,…,d},   (2.4)

and if f ∈ C_b¹(G):

∂/∂x^i P_t f(x) = E[ Σ_{k=1}^d ∂f/∂x^k (X_t(x)) η_t^{ki} ], i ∈ {1,…,d}.   (2.5)

Proof. Formula (2.5) is straightforward from the differentiability statement in Theorem 2.1 and the chain rule. For formula (2.4) see the proof of Theorem 2 in [26].

From the representation of the derivatives in (2.3) it is obvious that (η_t)_t evolves according to a linear differential equation when the process X is in the interior of G, and that it is projected to the tangent space when X hits the boundary. Furthermore, if X is at the boundary at some time t_0 and we have r(t_0) ≠ r(t_0−), i.e. t_0 is the endpoint of an excursion interval, then η also has a discontinuity at t_0 and jumps as follows:

η_{t_0} = π_{X_{t_0}(x)}(η_{t_0−}).   (2.6)

Consequently, we observe that at each time t_0 as above, η is projected to the tangent space and jumps in direction of n(X_{t_0}(x)) or −n(X_{t_0}(x)), respectively. Finally, if X_{t_0}(x) ∈ ∂G and t ↦ r(t) is continuous in t = t_0, there is also a projection of η, but since in this case η_{t_0−} is in the tangent space, the projection has no effect and η is continuous at time t_0.

Set Y_t := O_t · η_t, t ≥ 0, where O_t denotes the moving frame introduced in Section 2.2.2. Let P = diag(e^1) and Q = Id − P and

Y_t^1 = P · Y_t and Y_t^2 = Q · Y_t

to decompose the space R^d into the direct sum Im P ⊕ Ker P. We define the coefficient functions c(t) and d(t) to be such that

d 1 2 1 2 ck(t) ck(t) k d (t) d (t) 3 4 dwt + 3 4 dt ck(t) ck(t) d (t) d (t) Xk=1     d 1 k 1 1 = α (X (x)) O− dw + O Db(X (x)) O− + β(X (x)) O− dt. k t · t t t · t · t t · t k=1 X   2 1 Furthermore, we set γ (t) := γ(X (x)) O− Q. t · t · Theorem 2.5. There exists a right-continuous modification of η and Y , respectively, such that Y is characterized as the unique solution of

d t t 1 1 1 1 2 2 k 1 1 2 2 Yt =1l t

Remark 2.6. Note that for all $t \in C = \operatorname{supp} dl(x)$,
\[ \Phi^2_t := Q \cdot O_t \cdot Dn(X_t(x)) \cdot O_t^{-1} \cdot Q = -\,Q \cdot O_t \cdot S(X_t(x)) \cdot O_t^{-1} \cdot Q, \]
where for every $x \in \partial G$, $S(x)$ denotes the symmetric linear endomorphism acting on the tangent space at $x$, which is known as the shape operator or the Weingarten map, characterized by the relation $S(x)v = -D_v n(x)$ for all $v$ in the tangent space at $x$. The eigenvalues of $S(x)$ are the principal curvatures of $\partial G$ at $x$, and its determinant is the Gaussian curvature. Hence, the linear term in the equation for the derivatives in [14] can be recovered in our results.

Finally, we give another confirmation of the results, namely they imply that the Neumann condition holds for $X$.

Corollary 2.7. For all $f \in C_b(G)$ and $t > 0$, the transition semigroup $P_t f(x) := E[f(X_t(x))]$, $x \in G$, satisfies the Neumann condition at $\partial G$:
\[ x \in \partial G \implies D_{n(x)} P_t f(x) = 0. \]

2.2.4 Localization

In order to prove Theorem 2.1 we shall use the localization technique introduced in [5]. Let $\{U_0, U_1, \dots\}$ be a countable or finite family of relatively open subsets of $G$ covering $G$. Every $U_m$ is attached with a coordinate system, i.e. with a mapping $u_m : U_m \to \mathbb{R}^d$, giving each point $x \in U_m$ the coordinates $u_m(x) = (u^1_m(x), \dots, u^d_m(x))$, such that:

i) $U_0 \subseteq G_0$ and the corresponding coordinates are the original Euclidean coordinates. If $m > 0$ the mapping $u_m$ is one-to-one and twice continuously differentiable and we have
\[ U_m \cap \partial G = \{x \in U_m : u^1_m(x) = 0\}, \qquad U_m \cap G_0 = \{x \in U_m : u^1_m(x) > 0\}. \]

ii) There is a positive constant $d_0$ such that for every $x \in G$ there exists an index $m(x) \in \mathbb{N}$ such that $B_{d_0}(x) \subseteq U_{m(x)}$.

iii) For every $m > 0$, $\langle \nabla u^i_m(x), n(x) \rangle = \delta_{1i}$ for all $x \in U_m \cap \partial G$.

iv) For every $m \geq 0$ and $i \in \{1, \dots, d\}$, the functions
\[ b^i_m : U_m \to \mathbb{R}, \quad x \mapsto \langle \nabla u^i_m(x), b(x) \rangle + \tfrac12 \Delta u^i_m(x), \]
\[ \sigma^i_m : U_m \to \mathbb{R}^d, \quad x \mapsto \nabla u^i_m(x), \]
satisfy
\[ \sup_m \sup_{x \in U_m} \big( |b^i_m(x)| + \|\sigma^i_m(x)\| \big) < \infty, \]
and there exists a constant $c$, not depending on $m$ and $i$, such that
\[ |b^i_m(x) - b^i_m(y)| + \|\sigma^i_m(x) - \sigma^i_m(y)\| \leq c\,\|x - y\|, \qquad \forall\, x, y \in U_m. \]
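To make condition iii) concrete: for the closed unit ball with inner normal $n(x) = -x$, the function $u(x) = \frac12(1 - \|x\|^2)$ from Section 2.2.5 can serve as a first coordinate $u^1_m$, and the boundary identity $\langle \nabla u^1_m(x), n(x)\rangle = 1$ can be verified numerically. A minimal sketch (the helper names are ours, not part of the thesis):

```python
import numpy as np

def u(x):
    # candidate first coordinate u^1(x) = (1 - |x|^2)/2 on the closed unit ball
    return 0.5 * (1.0 - np.dot(x, x))

def grad_u(x, h=1e-6):
    # central finite-difference gradient of u
    d = len(x)
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d); e[i] = h
        g[i] = (u(x + e) - u(x - e)) / (2 * h)
    return g

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=3)
    x /= np.linalg.norm(x)          # point on the boundary sphere
    n = -x                          # inner normal of the unit ball
    g = grad_u(x)
    # exact gradient is -x, so <grad u, n> = |x|^2 = 1 on the boundary
    assert np.allclose(g, -x, atol=1e-4)
    assert abs(np.dot(g, n) - 1.0) < 1e-4
```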

Note that these conditions imply $n(x) = \nabla u^1_m(x)$ for all $x \in U_m \cap \partial G$. Since $\partial G$ is supposed to be $C^3$, the functions $b^i_m$ and $\sigma^i_m$ are continuously differentiable. Fix any $\delta_0 \in (0, d_0)$. We define a sequence of stopping times $(\tau_\ell)_\ell$ by
\[ \tau_0 := 0, \qquad \tau_{\ell+1} := \inf\{t > \tau_\ell : \operatorname{dist}(X_t(x), \partial U_{m_\ell}) < \delta_0\}, \quad \ell \geq 0, \]
where, for every $\ell \geq 0$, $m_\ell := m(X_{\tau_\ell}(x)) \in \mathbb{N}$ is such that $B_{d_0}(X_{\tau_\ell}(x)) \subseteq U_{m_\ell}$. The dependence of $\tau_\ell$ on $x$ will be suppressed in the notation. Note that by construction $X_t(x) \in U_{m_\ell}$ for all $t \in [\tau_\ell, \tau_{\ell+1}]$ and $m_{\ell+1} \neq m_\ell$ for every $\ell$, since $\delta_0 < d_0$. For every $\ell \geq 0$ we define
\[
M^\ell_t(x) := \begin{cases}
0 & \text{if } t \in [0, \tau_\ell], \\
\int_{\tau_\ell}^t b^1_{m_\ell}(X_r(x))\,dr + \int_{\tau_\ell}^t \sigma^1_{m_\ell}(X_r(x))\,dw_r & \text{if } t \in [\tau_\ell, \tau_{\ell+1}], \\
M^\ell_{\tau_{\ell+1}}(x) & \text{if } t \geq \tau_{\ell+1}.
\end{cases} \tag{2.8}
\]
Furthermore, we set $L_t(x) := l_t(x) - l_{\tau_\ell}(x)$ if $t \in [\tau_\ell, \tau_{\ell+1}]$, $\ell \geq 0$, so that
\[ u^1_{m_\ell}(X_t(x)) = u^1_{m_\ell}(X_{\tau_\ell}(x)) + M^\ell_t(x) + L_t(x), \qquad t \in [\tau_\ell, \tau_{\ell+1}]. \tag{2.9} \]
By the Girsanov theorem there exists a probability measure $\tilde P_\ell(x)$, equivalent to $P$, under which $M^\ell(x)$ is a continuous martingale. Its quadratic variation process is given by
\[ [M^\ell(x)]_t = \int_{\tau_\ell}^t \|\sigma^1_{m_\ell}(X_r(x))\|^2\,dr, \qquad t \in [\tau_\ell, \tau_{\ell+1}], \]
which is strictly increasing in $t$ on $[\tau_\ell, \tau_{\ell+1}]$. We set $\rho^\ell_t := \inf\{s : [M^\ell(x)]_s > t\}$. We can apply the Dambis-Dubins-Schwarz theorem, in particular its extension in Theorem V.1.7 in [56], since in our case the limit $\lim_{t \to \infty} [M^\ell(x)]_t = [M^\ell(x)]_{\tau_{\ell+1}} < \infty$ exists, to conclude that the process
\[ B^\ell_t(x) := M^\ell_{\rho^\ell_t}(x) \ \text{ for } t < [M^\ell(x)]_{\tau_{\ell+1}}, \qquad B^\ell_t(x) := M^\ell_{\tau_{\ell+1}}(x) \ \text{ for } t \geq [M^\ell(x)]_{\tau_{\ell+1}}, \tag{2.10} \]
is a $\tilde P_\ell(x)$-Brownian motion w.r.t. the time-changed filtration stopped at time $[M^\ell(x)]_{\tau_{\ell+1}}$, and we have $M^\ell_t(x) = B^\ell_{[M^\ell(x)]_t}$ for all $t \in [\tau_\ell, \tau_{\ell+1}]$. In particular, on $[\tau_\ell, \tau_{\ell+1}]$ the path of $M^\ell(x)$ a.s. attains its minimum at a unique time.

2.2.5 Example: Processes in the Unit Ball

We end this section by considering the example of the unit ball to illustrate our results. Let the domain $G = B_1(0)$ be the closed unit ball in $\mathbb{R}^d$. Then, for $x \in \partial G$, the inner normal field is given by $n(x) = -x$ and the orthogonal projection onto the tangent space by $\pi_x(z) = z - \langle z, x \rangle x$, $z \in \mathbb{R}^d$. The Skorohod equation can be written as
\begin{align*}
X_t(x) &= x + \int_0^t b(X_r(x))\,dr + w_t - \int_0^t X_r(x)\,dl_r(x), \qquad t \geq 0, \\
X_t(x) &\in G, \qquad dl_t(x) \geq 0, \qquad \int_0^\infty \mathbf{1}_{\{\|X_t(x)\| < 1\}}\,dl_t(x) = 0, \qquad t \geq 0,
\end{align*}
and the system describing the derivatives becomes
\begin{align*}
\eta_t &= v + \int_0^t Db(X_r(x)) \cdot \eta_r\,dr, && \text{if } t < \inf C, \\
\eta_t &= \eta_{r(t)-} - \langle \eta_{r(t)-}, X_{r(t)}(x) \rangle X_{r(t)}(x) + \int_{r(t)}^t Db(X_r(x)) \cdot \eta_r\,dr, && \text{if } t \geq \inf C.
\end{align*}
In this example the nonnegative function $u(x) := \frac12 (1 - \|x\|^2)$, defined globally on $G$, can be chosen as the first component $u^1_m$ of the coordinate mappings $u_m$ for all $m$. Thus, for the analogue to (2.9) we get
\begin{align*}
u(X_t(x)) &= u(x) - \int_0^t \langle X_r(x), b(X_r(x)) \rangle\,dr - \int_0^t \langle X_r(x), dw_r \rangle - \frac{d}{2}\,t + l_t(x) \\
&=: u(x) + M_t(x) + l_t(x).
\end{align*}
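For illustration only, the reflected dynamics in the unit ball can be approximated by a projected Euler scheme, in which the projection step plays the role of the term $-\int X_r\,dl_r(x)$. This is a numerical sketch under our own naming, not a construction used in the thesis:

```python
import numpy as np

def reflected_euler(b, x0, T=1.0, n=2000, seed=1):
    """Projected Euler scheme for dX = b(X)dt + dw - X dl in the closed unit ball:
    after each Euler step, project back onto the ball and record the radial
    correction as a (discretized) local-time increment."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.array(x0, dtype=float)
    path, loc = [x.copy()], 0.0
    for _ in range(n):
        y = x + b(x) * dt + np.sqrt(dt) * rng.normal(size=x.size)
        r = np.linalg.norm(y)
        if r > 1.0:            # stepped out of the ball: project radially,
            loc += r - 1.0     # i.e. push back in the direction n(x) = -x
            y = y / r
        x = y
        path.append(x.copy())
    return np.array(path), loc

path, loc = reflected_euler(lambda x: -x, np.zeros(2))
assert np.max(np.linalg.norm(path, axis=1)) <= 1.0 + 1e-12
assert loc >= 0.0
```

The scheme is only first order in the boundary correction; it is meant to visualize the constraint mechanism, not to be an efficient solver.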

Finally, the quadratic variation process of the semimartingale $M(x)$ is given by
\[ [M(x)]_t = \int_0^t \|X_r(x)\|^2\,dr, \qquad t \geq 0. \]

2.3 Proof of the Main Result

2.3.1 Lipschitz Continuity w.r.t. the Initial Datum

The first step in order to prove the differentiability results is to show the Lipschitz continuity of $x \mapsto (X_t(x))_t$ w.r.t. the sup-norm topology on a finite time interval.

Proposition 2.8. Let $T$ be an arbitrary positive finite stopping time and let $(X_t(x))$ and $(X_t(y))$, $t \geq 0$, be two solutions of (2.1) for some $x, y \in G$. Then, there exists a positive constant $c$ only depending on $T$ such that a.s.
\[ \sup_{t \in [0,T]} \|X_t(x) - X_t(y)\| \leq \|x - y\|\, \exp\big( c\,(T + l_T(x) + l_T(y)) \big). \]

Proof. The case $x = y$ is clear and it suffices to consider the case $T < \inf\{t : X_t(x) = X_t(y)\}$. We shall proceed similarly to Lemma 3.8 in [16]. Since $\partial G$ is $C^2$-smooth, there exists a positive constant $c_1 < \infty$ such that for all $x \in \partial G$ and all $y \in G$,
\[ \langle x - y, n(x) \rangle \leq c_1 \|x - y\|^2. \tag{2.11} \]
Let $T_0 := 0$ and for $k \geq 1$,
\[ T_k := \inf\Big\{ t \geq T_{k-1} : \|X_t(x) - X_t(y)\| \notin \Big( \tfrac12 \|X_{T_{k-1}}(x) - X_{T_{k-1}}(y)\|,\ 2\,\|X_{T_{k-1}}(x) - X_{T_{k-1}}(y)\| \Big) \Big\} \wedge T. \]
Then, by Itô's formula we obtain for any $k \geq 1$ and $t \in (T_{k-1}, T_k]$,
\begin{align*}
&\|X_t(x) - X_t(y)\| - \|X_{T_{k-1}}(x) - X_{T_{k-1}}(y)\| \\
&= \int_{T_{k-1}}^t \frac{\langle X_r(x) - X_r(y), b(X_r(x)) - b(X_r(y)) \rangle}{\|X_r(x) - X_r(y)\|}\,dr \\
&\quad + \int_{T_{k-1}}^t \frac{\langle X_r(x) - X_r(y), n(X_r(x)) \rangle}{\|X_r(x) - X_r(y)\|}\,dl_r(x) + \int_{T_{k-1}}^t \frac{\langle X_r(y) - X_r(x), n(X_r(y)) \rangle}{\|X_r(x) - X_r(y)\|}\,dl_r(y) \\
&\leq c_2 \int_{T_{k-1}}^t \|X_r(x) - X_r(y)\|\,dr + c_1 \int_{T_{k-1}}^t \|X_r(x) - X_r(y)\|\,(dl_r(x) + dl_r(y)) \\
&\leq c_3\, \|X_{T_{k-1}}(x) - X_{T_{k-1}}(y)\| \int_{T_{k-1}}^{T_k} (dr + dl_r(x) + dl_r(y)),
\end{align*}
where we have used (2.11) and the Lipschitz continuity of $b$. Hence, for any $t \in (T_{k-1}, T_k]$,
\begin{align*}
\frac{\|X_t(x) - X_t(y)\|}{\|X_{T_{k-1}}(x) - X_{T_{k-1}}(y)\|}
&\leq 1 + c_3\big( T_k - T_{k-1} + l_{T_k}(x) - l_{T_{k-1}}(x) + l_{T_k}(y) - l_{T_{k-1}}(y) \big) \\
&\leq \exp\big( c_3 ( T_k - T_{k-1} + l_{T_k}(x) - l_{T_{k-1}}(x) + l_{T_k}(y) - l_{T_{k-1}}(y) ) \big),
\end{align*}
and
\begin{align*}
\frac{\|X_t(x) - X_t(y)\|}{\|x - y\|}
&= \frac{\|X_t(x) - X_t(y)\|}{\|X_{T_{k-1}}(x) - X_{T_{k-1}}(y)\|} \prod_{j=1}^{k-1} \frac{\|X_{T_j}(x) - X_{T_j}(y)\|}{\|X_{T_{j-1}}(x) - X_{T_{j-1}}(y)\|} \\
&\leq \prod_{j=1}^{k} \exp\big( c_3 ( T_j - T_{j-1} + l_{T_j}(x) - l_{T_{j-1}}(x) + l_{T_j}(y) - l_{T_{j-1}}(y) ) \big) \\
&\leq \exp\big( c_3 ( T_k + l_{T_k}(x) + l_{T_k}(y) ) \big) \leq \exp\big( c_3 ( T + l_T(x) + l_T(y) ) \big),
\end{align*}
which proves the proposition.
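In the one-dimensional half-line case, where the Skorohod map is explicit (cf. equation (0.1) in the Preface), the Lipschitz dependence on the initial datum can even be observed with constant $1$: both solutions are driven by the same path, and the explicit local times compensate each other. A small numerical check, with our own function names:

```python
import numpy as np

def reflected_path(x, w):
    """One-dimensional Skorohod problem on the half line [0, inf):
    X_t = x + w_t + L_t with L_t = max(0, sup_{s<=t} -(x + w_s))."""
    L = np.maximum.accumulate(np.maximum(-(x + w), 0.0))
    return x + w + L, L

rng = np.random.default_rng(2)
w = np.concatenate([[0.0], np.cumsum(np.sqrt(1e-3) * rng.normal(size=5000))])
for x, y in [(0.1, 0.3), (0.0, 1.0)]:
    X, _ = reflected_path(x, w)
    Y, _ = reflected_path(y, w)
    # with the same driving path, the reflected solutions stay within |x - y|
    assert np.max(np.abs(X - Y)) <= abs(x - y) + 1e-12
```

The constant degrades to the exponential bound of Proposition 2.8 only in the presence of a drift and a curved boundary.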

Remark 2.9. For an arbitrary positive finite stopping time $T$ and every $\ell \geq 0$ we define the stopping time
\[ \hat\tau_\ell(y) := \inf\big\{ t > \tau_\ell : \|X_t(x) - X_t(y)\| \geq \tfrac{\delta_0}{2} \big\} \wedge \tau_{\ell+1} \wedge T, \qquad y \in G, \]
with $\delta_0$ as in Section 2.2.4. Then, by the definition of $\tau_\ell$ we have that $X_t(y) \in U_{m_\ell}$ for all $t \in [\tau_\ell, \hat\tau_\ell(y)]$. Thus, the process $(M^\ell_t(y))_{0 \leq t \leq T}$, $y \in G$, can be defined similarly to $M^\ell(x)$ by
\[
M^\ell_t(y) := \begin{cases}
0 & \text{if } t \in [0, \tau_\ell], \\
\int_{\tau_\ell}^t b^1_{m_\ell}(X_r(y))\,dr + \int_{\tau_\ell}^t \sigma^1_{m_\ell}(X_r(y))\,dw_r & \text{if } t \in [\tau_\ell, \hat\tau_\ell(y)], \\
M^\ell_{\hat\tau_\ell(y)}(y) & \text{if } t \geq \hat\tau_\ell(y).
\end{cases}
\]
By Proposition 2.8 there exists for every finite stopping time $T > 0$ a random $\Delta_T > 0$ such that
\[ \sup_{t \in [0,T]} \|X_t(x) - X_t(y)\| < \frac{\delta_0}{2}, \qquad \forall\, y \in B_{\Delta_T}(x) \cap G, \]
i.e. for such $y$ we have $\hat\tau_\ell(y) = \tau_{\ell+1} \wedge T$.

In the next lemma we collect some immediate consequences of Proposition 2.8.

Lemma 2.10. Fix some arbitrary real $T > 0$ and let $\Delta_T$ and $\hat\tau_\ell(y)$ be as in Remark 2.9. Then, for every $\ell \geq 0$ we have:

i) For all $x, y \in G$ and $p > 1$,
\[ E\Big[ \sup_{t \in [0,T]} \big| M^\ell_t(x) - M^\ell_t(y) \big|^p\, \mathbf{1}_{\{\|x - y\| < \Delta_T\}} \Big] \leq c\, \|x - y\|^p \]
for some positive constant $c = c(p, T)$. There exists a modification of $x \mapsto (M^\ell_t(x))_{t \in [0,T]}$ which is continuous w.r.t. the sup-norm topology.

ii) For all $x \in G$, $s, t \in [0,T]$ and $p > 1$,
\[ E\Big[ \big| M^\ell_t(x) - M^\ell_s(x) \big|^p \Big] \leq c\, |t - s|^{p/2}, \]
for some positive constant $c = c(p, T)$ not depending on $x$. There exists a modification of $(M^\ell_t(x))_{t \in [0,T]}$ which is Hölder continuous of order $\alpha$ for every $\alpha \in (0, \frac12)$ such that
\[ \sup_{x \in G} E\Big[ \Big( \sup_{s \neq t} \frac{|M^\ell_t(x) - M^\ell_s(x)|}{|t - s|^\alpha} \Big)^p \Big] < \infty. \]

iii) We set for abbreviation $\tilde M^\ell_t(x, y) := M^\ell_{t \wedge \hat\tau_\ell(y)}(y) - M^\ell_{t \wedge \hat\tau_\ell(y)}(x)$, $t \in [0,T]$, $x, y \in G$. Let $x \in G$ and $(x_i)_{i \in I} \subset G$ be a family of points in $G$ such that $x_i \neq x$ for all $i \in I$. Then, for all $s, t \in [0,T]$ and $p > 1$ we have
\[ \sup_{i \in I} \frac{1}{\|x_i - x\|^p}\, E\Big[ \big| \tilde M^\ell_t(x, x_i) - \tilde M^\ell_s(x, x_i) \big|^p \Big] \leq c\, |t - s|^{p/2}, \]
for some positive constant $c = c(p, T)$. Moreover, for every $i \in I$ there exists a modification of $(\tilde M^\ell_t(x, x_i))_{t \in [0,T]}$ which is Hölder continuous of order $\alpha$ for every $\alpha \in (0, \frac12)$ such that
\[ E\Big[ \Big( \sup_{s \neq t} \frac{|\tilde M^\ell_t(x, x_i) - \tilde M^\ell_s(x, x_i)|}{|t - s|^\alpha} \Big)^p \Big] \leq c\, \|x_i - x\|^p \]
with some positive constant $c$ not depending on $x_i$.

Proof. i) It follows directly from Proposition 2.8, the uniform Lipschitz continuity of $b^1_m$ and $\sigma^1_m$ and the Burkholder inequality that
\[ E\Big[ \sup_{t \in [0, \hat\tau_\ell(y)]} \big| M^\ell_t(x) - M^\ell_t(y) \big|^p \Big] \leq c_1 \|x - y\|^p \]
for some positive constant $c_1$. Note that the random constant $\exp(c(T + l_T(x) + l_T(y)))$ appearing in Proposition 2.8 has finite expectation. Since by the definition of $\Delta_T$,
\[ \sup_{t \in [0,T]} \big| M^\ell_t(x) - M^\ell_t(y) \big|^p\, \mathbf{1}_{\{\|x - y\| < \Delta_T\}} \leq \sup_{t \in [0, \hat\tau_\ell(y)]} \big| M^\ell_t(x) - M^\ell_t(y) \big|^p, \]
this implies the estimate in i). To prove the existence of a continuous modification one only needs to modify slightly the proof of Kolmogorov's continuity theorem (cf. e.g. Theorem I.2.1 in [56]).

ii) The estimate is clear again by Burkholder's inequality. Note that the functions $b^1_m$ and $\sigma^1_m$ are uniformly bounded, so that the constant does not depend on $x$. Furthermore, again by the proof of Kolmogorov's continuity theorem it follows that in this case the $L^p$ norm of the Hölder norm of the modification does not depend on $x$ either.

iii) In the following the symbol $c$ denotes a constant whose value may change from one occurrence to the next. Let $0 \leq s \leq t \leq T$ and set $\hat s_i := s \wedge \hat\tau_\ell(x_i)$ and $\hat t_i := t \wedge \hat\tau_\ell(x_i)$. Then $|\hat t_i - \hat s_i| \leq |t - s|$ for all $i$. By the definition of $M^\ell$ and $\tilde M^\ell$ we have

\begin{align*}
\sup_{i \in I} \frac{1}{\|x_i - x\|^p}\, E\Big[ \big| \tilde M^\ell_t(x, x_i) - \tilde M^\ell_s(x, x_i) \big|^p \Big]
&\leq c\, \sup_{i \in I} \frac{1}{\|x_i - x\|^p}\, E\Big[ \Big| \int_{\hat s_i}^{\hat t_i} \big( b^1_{m_\ell}(X_r(x_i)) - b^1_{m_\ell}(X_r(x)) \big)\,dr \Big|^p \Big] \\
&\quad + c\, \sup_{i \in I} \frac{1}{\|x_i - x\|^p}\, E\Big[ \Big| \int_{\hat s_i}^{\hat t_i} \big( \sigma^1_{m_\ell}(X_r(x_i)) - \sigma^1_{m_\ell}(X_r(x)) \big)\,dw_r \Big|^p \Big].
\end{align*}
Using the uniform Lipschitz continuity of $b_m$ and Proposition 2.8, the first term can be estimated by
\[ c\, \sup_{i \in I} \frac{|t - s|^p}{\|x_i - x\|^p}\, E\Big[ \sup_{r \in [\hat s_i, \hat t_i]} \|X_r(x_i) - X_r(x)\|^p \Big] \leq c\, |t - s|^p. \]
For the second term we get the following estimate by Burkholder's inequality, the uniform Lipschitz continuity of $\sigma_m$ and again by Proposition 2.8:
\begin{align*}
\sup_{i \in I} \frac{1}{\|x_i - x\|^p}\, E\Big[ \sup_{r' \in [\hat s_i, \hat t_i]} \Big| \int_{\hat s_i}^{r'} \big( \sigma^1_{m_\ell}(X_r(x_i)) - \sigma^1_{m_\ell}(X_r(x)) \big)\,dw_r \Big|^p \Big]
&\leq c\, \sup_{i \in I} \frac{1}{\|x_i - x\|^p}\, E\Big[ \Big( \int_{\hat s_i}^{\hat t_i} \|X_r(x_i) - X_r(x)\|^2\,dr \Big)^{p/2} \Big] \\
&\leq c\, \sup_{i \in I} \frac{|t - s|^{p/2}}{\|x_i - x\|^p}\, E\Big[ \sup_{r \in [\hat s_i, \hat t_i]} \|X_r(x_i) - X_r(x)\|^p \Big] \leq c\, |t - s|^{p/2},
\end{align*}
and we obtain the desired estimate. The existence of a Hölder continuous modification and the uniform $L^p$-bound of its Hölder norm follow again from the Kolmogorov criterion.

2.3.2 Convergence of the Local Time Processes

We fix from now on an arbitrary $T > 0$. In the following let $(A_n)$ be the family of connected components of $[0,T] \setminus C$ and let the $q_n$ be as above. Furthermore, let $(\tau_\ell)_\ell$ be the sequence of stopping times defined as before by
\[ \tau_0 := 0, \qquad \tau_{\ell+1} := \inf\{t > \tau_\ell : \operatorname{dist}(X_t(x), \partial U_{m_\ell}) < \delta_0\} \wedge T, \quad \ell \geq 0. \]
We may suppose that the $q_n$ are chosen in such a way that for every $n$ we have $[r(q_n), q_n] \subset [\tau_\ell, \tau_{\ell+1}]$ if $r(q_n) \in [\tau_\ell, \tau_{\ell+1}]$ for some $\ell$.

In order to compute the local time $l(x)$, recall that on every interval $[\tau_\ell, \tau_{\ell+1}]$, $\ell \geq 0$, $l(x)$ is carried by the set of times $t$ when $u^1_{m_\ell}(X_t(x)) = 0$. Therefore, we can apply Skorohod's lemma (see e.g. Lemma VI.2.1 in [56]) to equation (2.9) to obtain
\[ L_t(x) = \Big[ -u^1_{m_\ell}(X_{\tau_\ell}(x)) - \inf_{\tau_\ell \leq s \leq t} M^\ell_s(x) \Big]^+, \qquad t \in [\tau_\ell, \tau_{\ell+1}]. \]
Fix any $q_n > \inf C$ and $\ell$ such that $q_n \in [\tau_\ell, \tau_{\ell+1}]$. Since $u^1_{m_\ell}(X_{r(q_n)}(x)) = 0$ and $t \mapsto L_t(x)$ is non-decreasing, we have for all $\tau_\ell \leq s \leq r(q_n)$:
\[ M^\ell_{r(q_n)}(x) = -u^1_{m_\ell}(X_{\tau_\ell}(x)) - L_{r(q_n)}(x) \leq -u^1_{m_\ell}(X_{\tau_\ell}(x)) - L_s(x) = -u^1_{m_\ell}(X_s(x)) + M^\ell_s(x) \leq M^\ell_s(x). \]
Moreover, $L(x)$ is constant on $[r(q_n), t]$ for all $t \in A_n \cap [\tau_\ell, \tau_{\ell+1}]$, so that
\[ L_t(x) = L_{r(q_n)}(x) = \Big[ -u^1_{m_\ell}(X_{\tau_\ell}(x)) - M^\ell_{r(q_n)}(x) \Big]^+, \qquad t \in A_n \cap [\tau_\ell, \tau_{\ell+1}]. \tag{2.12} \]
Note that $r(q_n)$ is the unique time in $[\tau_\ell, q_n]$ when $M^\ell(x)$ attains its minimum. Analogously we compute the local time of the process with perturbed starting point. For fixed $v \in \mathbb{R}^d$ we set $x_\varepsilon := x + \varepsilon v$, $\varepsilon \in \mathbb{R}$, where $|\varepsilon|$ is always supposed to be sufficiently small such that $x_\varepsilon$ lies in $G$. Furthermore, there exists a random $\Delta_n > 0$ such that $X_{\tau_\ell}(x_\varepsilon) \in U_{m_\ell}$ and $\hat\tau_\ell(x_\varepsilon) > q_n$ for all $\varepsilon \in (-\Delta_n, \Delta_n)$ (cf. Remark 2.9). As above we obtain for such $\varepsilon$:
\[ L_{q_n}(x_\varepsilon) = L_{r_\varepsilon(q_n)}(x_\varepsilon) = \Big[ -u^1_{m_\ell}(X_{\tau_\ell}(x_\varepsilon)) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \Big]^+, \tag{2.13} \]
where $r_\varepsilon(q_n)$, defined similarly to $r(q_n)$, is the unique time in $[\tau_\ell, q_n]$ when $M^\ell(x_\varepsilon)$ attains its minimum.
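Skorohod's lemma as used here can be illustrated on a discretized path: given a starting value $u^1_{m_\ell}(X_{\tau_\ell}(x)) \geq 0$ and a path standing in for $M^\ell(x)$, the displayed formula for $L$ produces a nondecreasing path that increases only at the zeros of $u^1_{m_\ell}(X_\cdot(x))$. A sketch with our own variable names:

```python
import numpy as np

rng = np.random.default_rng(3)
u0 = 0.2                                   # stands in for u^1(X_{tau_l}(x)) >= 0
M = np.concatenate([[0.0], np.cumsum(np.sqrt(1e-3) * rng.normal(size=4000))])
# Skorohod's lemma: L_t = [ -u0 - inf_{s<=t} M_s ]^+
L = np.maximum(-u0 - np.minimum.accumulate(M), 0.0)
u = u0 + M + L                             # discrete analogue of (2.9)
assert np.min(u) >= -1e-12                 # u stays nonnegative
assert np.all(np.diff(L) >= -1e-12)        # L is nondecreasing
inc = np.diff(L) > 0                       # steps where L increases...
assert np.max(u[1:][inc], initial=0.0) < 1e-8   # ...occur only where u vanishes
```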

Lemma 2.11. For all $q_n$ we have $r_\varepsilon(q_n) \to r(q_n)$ a.s. as $\varepsilon \to 0$.

Proof. We fix some $q_n$ and $\ell$ such that $q_n \in [\tau_\ell, \tau_{\ell+1}]$. For every sequence $(\varepsilon_k)_k$ converging to zero we can extract a subsequence of $(r_{\varepsilon_k}(q_n))$, still denoted by $(r_{\varepsilon_k}(q_n))$, converging to some $\hat r(q_n)$. By construction we have a.s.
\[ M^\ell_{r_{\varepsilon_k}(q_n)}(x_{\varepsilon_k}) \leq M^\ell_{r(q_n)}(x_{\varepsilon_k}) \]
for every $k$. Note that on the one hand the right-hand side converges to $M^\ell_{r(q_n)}(x)$ as $k \to \infty$ by Lemma 2.10 i). On the other hand the left-hand side converges to $M^\ell_{\hat r(q_n)}(x)$, since
\begin{align*}
\big| M^\ell_{r_{\varepsilon_k}(q_n)}(x_{\varepsilon_k}) - M^\ell_{\hat r(q_n)}(x) \big|
&\leq \big| M^\ell_{r_{\varepsilon_k}(q_n)}(x_{\varepsilon_k}) - M^\ell_{r_{\varepsilon_k}(q_n)}(x) \big| + \big| M^\ell_{r_{\varepsilon_k}(q_n)}(x) - M^\ell_{\hat r(q_n)}(x) \big| \\
&\leq \sup_{t \in [0,T]} \big| M^\ell_t(x_{\varepsilon_k}) - M^\ell_t(x) \big| + \big| M^\ell_{r_{\varepsilon_k}(q_n)}(x) - M^\ell_{\hat r(q_n)}(x) \big|,
\end{align*}
which tends to zero for $k \to \infty$ by Lemma 2.10 i) and ii). Thus, $M^\ell_{\hat r(q_n)}(x) \leq M^\ell_{r(q_n)}(x)$. Since $r(q_n)$ is the unique time in $[\tau_\ell, q_n]$ when $M^\ell(x)$ attains its minimum, this implies $\hat r(q_n) = r(q_n)$.
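The mechanism behind Lemma 2.11, namely that convergence of the minimum values forces convergence of the unique minimizer, can be seen on a discretized path: if the path is perturbed by at most $\varepsilon$, the path value at the perturbed minimizer exceeds the true minimum by at most $2\varepsilon$. A sketch with a fixed smooth perturbation standing in for $M^\ell(x_\varepsilon) - M^\ell(x)$ (our own construction, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
dt = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)
M = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.normal(size=n))])
i0 = np.argmin(M)                      # (discrete) minimizer of the unperturbed path
pert = np.sin(2 * np.pi * t)           # fixed smooth perturbation, |pert| <= 1
for eps in [0.1, 0.01, 0.001]:
    i_eps = np.argmin(M + eps * pert)  # minimizer of the perturbed path
    # M at the perturbed minimizer exceeds the true minimum by at most 2*eps;
    # by uniqueness of the minimum this forces i_eps -> i0 as eps -> 0
    assert M[i_eps] <= M[i0] + 2 * eps + 1e-12
```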

Lemma 2.12. Let $(W_t)_{t \geq 0}$ be a Brownian motion on $(\Omega, \mathcal F, P)$. For all $T > 0$, let $\vartheta : \Omega \to [0,T]$ be the random variable such that a.s. $W_\vartheta = \inf_{0 \leq t \leq T} W_t$. Then, a.s.,
\[ \liminf_{t \downarrow 0} \frac{W_{\vartheta + t} - W_\vartheta}{\sqrt{t}\, h(t)} \geq 1 \qquad \text{and} \qquad \liminf_{t \downarrow 0} \frac{W_{\vartheta - t} - W_\vartheta}{\sqrt{t}\, h(t)} \geq 1 \]
for every nonincreasing function $h$ vanishing at $0$ which satisfies the integral test of [44], p. 164.

Proof. It suffices to consider the case $T = 1$. We recall the following path decomposition of a Brownian motion, proven in [25]. Denoting by $(M, \hat M)$ two independent copies of the standard Brownian meander (see [56]), we set for all $r \in (0, 1)$,
\[
V_r(t) := \begin{cases}
-\sqrt{r}\, M(1) + \sqrt{r}\, M\big( \frac{r - t}{r} \big), & t \in [0, r], \\
-\sqrt{r}\, M(1) + \sqrt{1 - r}\, \hat M\big( \frac{t - r}{1 - r} \big), & t \in (r, 1].
\end{cases}
\]
Let now $(\tau, M, \hat M)$ be an independent triple such that $\tau$ has the arcsine law. Then $V_\tau \overset{d}{=} W$. This formula has the following meaning: $\tau$ is the unique time in $[0,1]$ when the path attains its minimum $-\sqrt{\tau}\, M(1)$. The path starts in zero at time $t = 0$, runs backward the path of $M$ on $[0, \tau]$, and then runs the path of $\hat M$. Moreover, it was proved in [43] that the law of the Brownian meander is absolutely continuous w.r.t. the law of the three-dimensional Bessel process $(R_t)_{t \geq 0}$ starting in zero, on the time interval $[0, 1]$. We recall that a.s.
\[ \liminf_{t \to 0} \frac{R_t}{\sqrt{t}\, h(t)} \geq 1 \]
for every function $h$ satisfying the conditions in the statement (see [44], p. 164). Since the same asymptotics hold for the Brownian meander at zero, the claim follows.
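The arcsine law of the minimum time $\tau$, which enters the path decomposition above, is easy to check by Monte Carlo simulation: the arcsine distribution function is $\frac{2}{\pi}\arcsin\sqrt{t}$, which equals $\frac12$ at $t = \frac12$. A simulation sketch (ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps = 20000, 500
# simulate Brownian paths on [0,1] and record the (discrete) time of the minimum
incs = np.sqrt(1.0 / n_steps) * rng.normal(size=(n_paths, n_steps))
paths = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incs, axis=1)], axis=1)
tau = np.argmin(paths, axis=1) / n_steps
# arcsine law: P(tau <= t) = (2/pi) * arcsin(sqrt(t)); at t = 1/2 this equals 1/2
emp = np.mean(tau <= 0.5)
assert abs(emp - 0.5) < 0.02
```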

Proposition 2.13. Let $\delta > 0$ be arbitrary. Then, for all $q_n$ there exists a random $\Delta_n > 0$ such that
\[ E\Big[ \Big( \frac{|r_\varepsilon(q_n) - r(q_n)|^\delta}{|\varepsilon|} \Big)^p\, \mathbf{1}_{\{0 < |\varepsilon| < \Delta_n\}} \Big] \leq c < \infty \tag{2.14} \]
for every $p > 1$ and a constant $c = c(\delta, p)$ not depending on $\varepsilon$. In particular, we have for every $\delta > 0$ and $p > 1$
\[ \frac{|r_\varepsilon(q_n) - r(q_n)|^\delta}{\varepsilon} \longrightarrow 0 \quad \text{in } L^p \text{ as } \varepsilon \to 0. \]

Proof. To prove (2.14) it is enough to consider the case $\delta < 1$. By construction we have for every $q_n$ and $\ell$ such that $q_n \in [\tau_\ell, \tau_{\ell+1}]$,
\[ M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \leq M^\ell_{r(q_n)}(x_\varepsilon). \]
Since for $\varepsilon$ small enough $q_n < \hat\tau_\ell(x_\varepsilon)$ (see Remark 2.9), we have $M^\ell_t(x_\varepsilon) = M^\ell_t(x) + \tilde M^\ell_t(x, x_\varepsilon)$ for every $t \in [\tau_\ell, q_n]$, with $\tilde M^\ell(x, x_\varepsilon)$ defined as in Lemma 2.10 iii); this is equivalent to
\[ M^\ell_{r_\varepsilon(q_n)}(x) - M^\ell_{r(q_n)}(x) \leq \tilde M^\ell_{r(q_n)}(x, x_\varepsilon) - \tilde M^\ell_{r_\varepsilon(q_n)}(x, x_\varepsilon), \]
which implies
\[ \frac{M^\ell_{r_\varepsilon(q_n)}(x) - M^\ell_{r(q_n)}(x)}{|r_\varepsilon(q_n) - r(q_n)|^{(1-\delta)/2}}\, \mathbf{1}_{\{r_\varepsilon(q_n) \neq r(q_n)\}} \leq \frac{\tilde M^\ell_{r(q_n)}(x, x_\varepsilon) - \tilde M^\ell_{r_\varepsilon(q_n)}(x, x_\varepsilon)}{|r_\varepsilon(q_n) - r(q_n)|^{(1-\delta)/2}}\, \mathbf{1}_{\{r_\varepsilon(q_n) \neq r(q_n)\}}. \tag{2.15} \]
Recall that $M^\ell_\cdot(x) = B^\ell_{[M^\ell(x)]_\cdot}$, where $B^\ell$ is a $\tilde P_\ell(x)$-Brownian motion (see (2.10)), and $B^\ell$ attains its minimum over $\big[ [M^\ell(x)]_{\tau_\ell}, [M^\ell(x)]_{q_n} \big]$ at time $[M^\ell(x)]_{r(q_n)}$. Hence, applying Lemma 2.12 with $h(t) = t^{\delta/2}$, it follows that
\begin{align*}
M^\ell_{r_\varepsilon(q_n)}(x) - M^\ell_{r(q_n)}(x) &= B^\ell_{[M^\ell(x)]_{r_\varepsilon(q_n)}} - B^\ell_{[M^\ell(x)]_{r(q_n)}} \geq \frac12 \Big| [M^\ell(x)]_{r_\varepsilon(q_n)} - [M^\ell(x)]_{r(q_n)} \Big|^{(1+\delta)/2} \\
&= \frac12 \Big| \int_{r(q_n)}^{r_\varepsilon(q_n)} \|\sigma^1_{m_\ell}(X_r(x))\|^2\,dr \Big|^{(1+\delta)/2}
\end{align*}
for all $\varepsilon \in (-\Delta_n, \Delta_n)$ for some positive $\Delta_n$. Since
\[ \|\sigma^1_{m_\ell}(X_{r(q_n)}(x))\|^2 = \|\nabla u^1_{m_\ell}(X_{r(q_n)}(x))\|^2 = \|n(X_{r(q_n)}(x))\|^2 = 1, \]
we have by Lemma 2.11, possibly after choosing a smaller $\Delta_n$, that $\|\sigma^1_{m_\ell}(X_r(x))\|^2$ is bounded away from zero uniformly in $r$ between $r(q_n)$ and $r_\varepsilon(q_n)$. Thus,
\[ M^\ell_{r_\varepsilon(q_n)}(x) - M^\ell_{r(q_n)}(x) \geq c_1 |r_\varepsilon(q_n) - r(q_n)|^{(1+\delta)/2} \]
and we derive from (2.15) that
\[ c_1 |r_\varepsilon(q_n) - r(q_n)|^\delta \leq \sup_{\substack{s, t \in [0,T] \\ s \neq t}} \frac{\big| \tilde M^\ell_t(x, x_\varepsilon) - \tilde M^\ell_s(x, x_\varepsilon) \big|}{|t - s|^{(1-\delta)/2}}. \]
Finally, we get for every $p > 1$, using Lemma 2.10 iii),
\[ E\big[ |r_\varepsilon(q_n) - r(q_n)|^{\delta p}\, \mathbf{1}_{\{0 < |\varepsilon| < \Delta_n\}} \big] \leq c_2\, E\Big[ \sup_{s \neq t} \Big( \frac{|\tilde M^\ell_t(x, x_\varepsilon) - \tilde M^\ell_s(x, x_\varepsilon)|}{|t - s|^{(1-\delta)/2}} \Big)^p \Big] \leq c_3\, |\varepsilon|^p, \]
and we obtain (2.14). Since $\delta$ is arbitrary, the $L^p$-convergence follows immediately from (2.14) by Hölder's inequality and Lemma 2.11.

Corollary 2.14. For every $q_n$ and $\ell$ such that $q_n \in [\tau_\ell, \tau_{\ell+1}]$ we have

i) $\frac1\varepsilon \big( M^\ell_{r(q_n)}(x_\varepsilon) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \big) \longrightarrow 0$ a.s. as $\varepsilon \to 0$,

ii) $\frac1\varepsilon \big( l_{r(q_n)}(x_\varepsilon) - l_{r_\varepsilon(q_n)}(x_\varepsilon) \big) \longrightarrow 0$ a.s. as $\varepsilon \to 0$.

Proof. ii) follows from i). Indeed, by Proposition 2.8 and Lemma 2.11 we can choose $\varepsilon$ so small that $l_{r(q_n)}(x_\varepsilon) = l_{r_\varepsilon(q_n)}(x_\varepsilon) = 0$ if $q_n < \inf C$, and $l_{r(q_n)}(x_\varepsilon), l_{r_\varepsilon(q_n)}(x_\varepsilon) > 0$ if $q_n > \inf C$. In the first case ii) is trivial, and in the latter case we have by (2.13)
\begin{align*}
\big| l_{r(q_n)}(x_\varepsilon) - l_{r_\varepsilon(q_n)}(x_\varepsilon) \big| &= \big| L_{r(q_n)}(x_\varepsilon) - L_{r_\varepsilon(q_n)}(x_\varepsilon) \big| = \big| M^\ell_{r_\varepsilon(r(q_n))}(x_\varepsilon) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \big| \\
&\leq \big| M^\ell_{r(q_n)}(x_\varepsilon) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \big|, \tag{2.16}
\end{align*}
where we have used the fact that $M^\ell(x_\varepsilon)$ attains its minimum over $[\tau_\ell, q_n]$ at time $r_\varepsilon(q_n)$ and its minimum over $[\tau_\ell, r(q_n)]$ at time $r_\varepsilon(r(q_n))$, respectively. Therefore it suffices to prove i). In a first step we show that for an arbitrary $\delta > 0$ and for every $p > 1$

\[ \frac{\big| M^\ell_{r(q_n)}(x_\varepsilon) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \big|^\delta}{\varepsilon} \longrightarrow 0 \quad \text{in } L^p. \tag{2.17} \]
It is enough to consider $p$ such that $\delta p > 1$. Then, for any $\alpha \in (0, \frac12)$,
\begin{align*}
&\frac{1}{\varepsilon^p}\, E\Big[ \big| M^\ell_{r(q_n)}(x_\varepsilon) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \big|^{\delta p} \Big] \\
&= E\Big[ \frac{|r_\varepsilon(q_n) - r(q_n)|^{\alpha \delta p}}{\varepsilon^p} \cdot \frac{\big| M^\ell_{r(q_n)}(x_\varepsilon) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \big|^{\delta p}}{|r_\varepsilon(q_n) - r(q_n)|^{\alpha \delta p}}\, \mathbf{1}_{\{r_\varepsilon(q_n) \neq r(q_n)\}} \Big] \\
&\leq E\Big[ \frac{|r_\varepsilon(q_n) - r(q_n)|^{2\alpha \delta p}}{\varepsilon^{2p}} \Big]^{1/2}\, E\Big[ \Big( \sup_{s \neq t} \frac{|M^\ell_t(x_\varepsilon) - M^\ell_s(x_\varepsilon)|}{|t - s|^\alpha} \Big)^{2\delta p} \Big]^{1/2}.
\end{align*}
The first term tends to zero by Proposition 2.13, while the second term is uniformly bounded by Lemma 2.10 ii), and we obtain (2.17). We now prove i). Assume that $\frac1\varepsilon \big( M^\ell_{r(q_n)}(x_\varepsilon) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \big) \not\to 0$ with positive probability. Since $r_\varepsilon(q_n) \to r(q_n)$ a.s. by Lemma 2.11 and $M^\ell$ is continuous in $t$ and $x$, it follows that a.s. $M^\ell_{r(q_n)}(x_\varepsilon) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \to 0$. Thus, $\frac1\varepsilon \big| M^\ell_{r(q_n)}(x_\varepsilon) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \big|^\delta \to \infty$ with positive probability for any $\delta < 1$, which contradicts (2.17).

2.3.3 Proof of the Differentiability

Convergence along Subsequences

We shall proceed similarly to Step 5 in the proof of Theorem 1 in [26]. Denote by $\eta_t(\varepsilon) := \frac1\varepsilon (X_t(x_\varepsilon) - X_t(x))$ the difference quotient, $x_\varepsilon = x + \varepsilon v$ for any fixed $v \in \mathbb{R}^d$. Let now $t \in [0,T] \setminus C$ and let $n$ be such that $t \in A_n$. Using Proposition 2.8, there exists $\Delta_n > 0$ such that a.s. $l_{q_n}(x) = l_{q_n}(x_\varepsilon) = 0$ if $q_n < \inf C$, and both of them are strictly positive if $q_n > \inf C$, for all $|\varepsilon| \in (0, \Delta_n)$. Then,
\begin{align*}
\eta_t(\varepsilon) &= \eta_{r(q_n)}(\varepsilon) + \frac1\varepsilon \int_{r(q_n)}^t \big( b(X_r(x_\varepsilon)) - b(X_r(x)) \big)\,dr + \frac1\varepsilon \int_0^{r_\varepsilon(q_n)} n(X_r(x_\varepsilon))\,dl_r(x_\varepsilon) - \frac1\varepsilon \int_0^{r(q_n)} n(X_r(x_\varepsilon))\,dl_r(x_\varepsilon) \\
&= \eta_{r(q_n)}(\varepsilon) + \sum_{k=1}^d \int_{r(q_n)}^t \int_0^1 \frac{\partial b}{\partial x^k}(X^{\alpha,\varepsilon}_r)\,d\alpha\; \eta^k_r(\varepsilon)\,dr + R_{q_n}(x_\varepsilon), \tag{2.18}
\end{align*}
where $X^{\alpha,\varepsilon}_r := \alpha X_r(x_\varepsilon) + (1 - \alpha) X_r(x)$, $\alpha \in [0,1]$, and
\[ R_{q_n}(x_\varepsilon) := \frac1\varepsilon \int_{r(q_n)}^{r_\varepsilon(q_n)} n(X_r(x_\varepsilon))\,dl_r(x_\varepsilon). \tag{2.19} \]
Note that if $q_n < \inf C$ we have $r(q_n) = 0$, $\eta_{r(q_n)}(\varepsilon) = v$ and $R_{q_n}(x_\varepsilon) = 0$. In any case,
\[ \|R_{q_n}(x_\varepsilon)\| \leq \frac{1}{|\varepsilon|} \Big| \int_{r(q_n)}^{r_\varepsilon(q_n)} \|n(X_s(x_\varepsilon))\|\,dl_s(x_\varepsilon) \Big| = \frac{|l_{r_\varepsilon(q_n)}(x_\varepsilon) - l_{r(q_n)}(x_\varepsilon)|}{|\varepsilon|} \longrightarrow 0 \quad \text{as } \varepsilon \to 0, \tag{2.20} \]
by Corollary 2.14. Recall that $\|\eta_t(\varepsilon)\| \leq \|v\| \exp(c_1(T + l_T(x) + l_T(x_\varepsilon)))$ for all $t \in [0,T]$ and $\varepsilon \neq 0$ by Proposition 2.8. Let $(\varepsilon_\nu)_\nu$ be any sequence converging to zero. By a diagonal procedure, we can extract a subsequence $(\nu_l)_l$ such that $\eta_{r(q_n)}(\varepsilon_{\nu_l})$ has a limit $\hat\eta_{r(q_n)} \in \mathbb{R}^d$ and $\eta_{\tau_\ell}(\varepsilon_{\nu_l})$ has a limit $\hat\eta_{\tau_\ell} \in \mathbb{R}^d$ as $l \to \infty$, for all $n \in \mathbb{N}$ and all $\ell \geq 0$.

Let now $\hat\eta : [0,T] \setminus C \to \mathbb{R}^d$ be the unique solution of
\[ \hat\eta_t := \hat\eta_{r(q_n)} + \int_{r(q_n)}^t Db(X_r(x)) \cdot \hat\eta_r\,dr, \qquad t \in A_n. \]
By (2.18) and Proposition 2.8, we get for $|\varepsilon| \in (0, \Delta_n)$ and $t \in A_n$,
\begin{align*}
\|\eta_t(\varepsilon) - \hat\eta_t\| &\leq \|\eta_{r(q_n)}(\varepsilon) - \hat\eta_{r(q_n)}\| + \|R_{q_n}(x_\varepsilon)\| + \sup_{r \in A_n} \big\| Db(X^{\alpha,\varepsilon}_r) - Db(X_r(x)) \big\|\, \exp\big( c_1 (T + l_T(x) + l_T(x_\varepsilon)) \big) \\
&\quad + c_2 \int_0^t \|\eta_r(\varepsilon) - \hat\eta_r\|\,dr.
\end{align*}
Since $\eta_{r(q_n)}(\varepsilon_{\nu_l}) \to \hat\eta_{r(q_n)}$, $\|R_{q_n}(x_{\varepsilon_{\nu_l}})\| \to 0$, $X^{\alpha,\varepsilon_{\nu_l}}_r \to X_r(x)$ uniformly in $r \in [0,t]$, and since the derivatives of $b$ are continuous, we obtain by Gronwall's lemma that $\eta_t(\varepsilon_{\nu_l})$ converges to $\hat\eta_t$ uniformly in $t \in A_n$ for every $n$. Thus, since $C$ has zero Lebesgue measure, $\eta_t(\varepsilon_{\nu_l})$ converges to $\hat\eta_t$ for all $t \in [0,T] \setminus C$ as $l \to \infty$, and by the dominated convergence theorem we get
\begin{align*}
\hat\eta_t &= v + \int_0^t Db(X_r(x)) \cdot \hat\eta_r\,dr, && t \in [0, \inf C), \\
\hat\eta_t &= \hat\eta_{r(t)} + \int_{r(t)}^t Db(X_r(x)) \cdot \hat\eta_r\,dr, && t \in [\inf C, T] \setminus C.
\end{align*}

Lemma 2.15. For every $q_n > \inf C$,

i) $\langle \eta_{r(q_n)}(\varepsilon), n(X_{r(q_n)}(x)) \rangle \to 0$ a.s. and in $L^p$, $p > 1$, as $\varepsilon \to 0$,

ii) $\langle \eta_{r_\varepsilon(q_n)}(\varepsilon), n(X_{r_\varepsilon(q_n)}(x_\varepsilon)) \rangle \to 0$ a.s. and in $L^p$, $p > 1$, as $\varepsilon \to 0$.

Proof. By dominated convergence it suffices to prove convergence almost surely. Let $\ell$ be such that $q_n \in [\tau_\ell, \tau_{\ell+1}]$. Then, clearly $X_{r(q_n)}(x) \in U_{m_\ell} \cap \partial G$. Recall that $n(X_{r(q_n)}(x)) = \nabla u^1_{m_\ell}(X_{r(q_n)}(x))$, and by Taylor's formula we get
\[ \langle \eta_{r(q_n)}(\varepsilon), n(X_{r(q_n)}(x)) \rangle = \frac1\varepsilon \big( u^1_{m_\ell}(X_{r(q_n)}(x_\varepsilon)) - u^1_{m_\ell}(X_{r(q_n)}(x)) \big) + O(\varepsilon). \]
Note that the term of second order in the Taylor expansion is in $O(\varepsilon)$ by Proposition 2.8. Recall that $u^1_{m_\ell}(X_{r(q_n)}(x)) = 0$; combining formulas (2.9) and (2.13), we get
\[ u^1_{m_\ell}(X_{r(q_n)}(x_\varepsilon)) = u^1_{m_\ell}(X_{\tau_\ell}(x_\varepsilon)) + M^\ell_{r(q_n)}(x_\varepsilon) + L_{r(q_n)}(x_\varepsilon) = M^\ell_{r(q_n)}(x_\varepsilon) - M^\ell_{r_\varepsilon(r(q_n))}(x_\varepsilon) \]
for all $\varepsilon \in (-\Delta_n, \Delta_n)$ for some positive $\Delta_n$. Arguing similarly as in (2.16), we obtain from Corollary 2.14 i) that
\[ \frac{\big| u^1_{m_\ell}(X_{r(q_n)}(x_\varepsilon)) - u^1_{m_\ell}(X_{r(q_n)}(x)) \big|}{\varepsilon} \leq \frac{2 \big| M^\ell_{r(q_n)}(x_\varepsilon) - M^\ell_{r_\varepsilon(q_n)}(x_\varepsilon) \big|}{\varepsilon} \longrightarrow 0 \quad \text{as } \varepsilon \to 0, \]
and i) follows. The proof of ii) is rather analogous. For an appropriate $\Delta_n > 0$ we have $r_\varepsilon(q_n) \in [\tau_\ell, \tau_{\ell+1}]$ and $l_{r_\varepsilon(q_n)}(x) > 0$ for all $|\varepsilon| \in (0, \Delta_n)$. Then, for such $\varepsilon$ we get again, by using Taylor's formula and the fact that $u^1_{m_\ell}(X_{r_\varepsilon(q_n)}(x_\varepsilon)) = 0$,
\begin{align*}
\langle \eta_{r_\varepsilon(q_n)}(\varepsilon), n(X_{r_\varepsilon(q_n)}(x_\varepsilon)) \rangle &= \langle \eta_{r_\varepsilon(q_n)}(\varepsilon), \nabla u^1_{m_\ell}(X_{r_\varepsilon(q_n)}(x_\varepsilon)) \rangle \\
&= -\frac1\varepsilon \big( u^1_{m_\ell}(X_{r_\varepsilon(q_n)}(x_\varepsilon)) - u^1_{m_\ell}(X_{r_\varepsilon(q_n)}(x)) \big) + O(\varepsilon) \\
&= \frac1\varepsilon \big( M^\ell_{r_\varepsilon(q_n)}(x) - M^\ell_{r(r_\varepsilon(q_n))}(x) \big) + O(\varepsilon).
\end{align*}
Since $M^\ell(x)$ attains its minimum over $[\tau_\ell, q_n]$ at time $r(q_n)$ and its minimum over $[\tau_\ell, r_\varepsilon(q_n)]$ at time $r(r_\varepsilon(q_n))$, respectively, we finally get
\begin{align*}
\big| \langle \eta_{r_\varepsilon(q_n)}(\varepsilon), n(X_{r_\varepsilon(q_n)}(x_\varepsilon)) \rangle \big| &\leq \frac{1}{|\varepsilon|} \Big( \big| M^\ell_{r_\varepsilon(q_n)}(x) - M^\ell_{r(q_n)}(x) \big| + \big| M^\ell_{r(r_\varepsilon(q_n))}(x) - M^\ell_{r(q_n)}(x) \big| \Big) + O(\varepsilon) \\
&\leq \frac{2}{|\varepsilon|} \big| M^\ell_{r_\varepsilon(q_n)}(x) - M^\ell_{r(q_n)}(x) \big| + O(\varepsilon),
\end{align*}
which tends to zero again by Corollary 2.14 i).

Since for every $m \geq 0$ the coordinate mapping $u_m$ is one-to-one, the set $\{\nabla u^i_m(x),\ i = 2, \dots, d\}$ is linearly independent for all $x \in U_m$, and by construction it is also a basis of the tangent space at $x$ if $x \in \partial G \cap U_m$. Let $\{\bar n^m_2(x), \dots, \bar n^m_d(x)\}$ be the Gram-Schmidt orthonormalization of $\{\nabla u^i_m(x),\ i = 2, \dots, d\}$ for every $x \in U_m$ and every $m$. Then $\bar n^m(x) := \{n(x), \bar n^m_2(x), \dots, \bar n^m_d(x)\}$ is an ONB of $\mathbb{R}^d$ for all $x \in U_m \cap \partial G$. We shall now extend $\hat\eta$ to a right-continuous process by defining $\hat\eta_t$ for $t \in C \cap [\tau_\ell, \tau_{\ell+1})$ in the coordinates w.r.t. the basis $\bar n^{m_\ell}(X_t(x))$ on $U_{m_\ell} \cap \partial G$. For that purpose it suffices to define
\[ \langle \hat\eta_t, \nabla u^1_{m_\ell}(X_t(x)) \rangle = \langle \hat\eta_t, n(X_t(x)) \rangle := 0, \]
and for $i = 2, \dots, d$,
\[ \langle \hat\eta_t, \nabla u^i_{m_\ell}(X_t(x)) \rangle := \nabla u^i_{m_\ell}(X_{\tau_\ell}(x)) \cdot \hat\eta_{\tau_\ell} + \int_{[\tau_\ell, t) \setminus C} \nabla b^i_{m_\ell}(X_r(x)) \cdot \hat\eta_r\,dr + \sum_{k=1}^d \int_{[\tau_\ell, t) \setminus C} \nabla \sigma^{ik}_{m_\ell}(X_r(x)) \cdot \hat\eta_r\,dw^k_r. \]

Remark 2.16. This definition leads in fact to a right-continuous extension of $\hat\eta$. On the one hand we have $\langle \hat\eta_{r(q_n)}, n(X_{r(q_n)}(x)) \rangle = 0$ for every $n$ by Lemma 2.15 i), and on the other hand we have for all $t \in [\tau_\ell, \tau_{\ell+1})$ and $i = 2, \dots, d$
\[ \langle \hat\eta_t, \nabla u^i_{m_\ell}(X_t(x)) \rangle = \nabla u^i_{m_\ell}(X_{\tau_\ell}(x)) \cdot \hat\eta_{\tau_\ell} + \int_{\tau_\ell}^t \nabla b^i_{m_\ell}(X_r(x)) \cdot \hat\eta_r\,dr + \sum_{k=1}^d \int_{\tau_\ell}^t \nabla \sigma^{ik}_{m_\ell}(X_r(x)) \cdot \hat\eta_r\,dw^k_r. \tag{2.21} \]
Indeed, for $t \in C$ this is just the definition, and for $t \in [\tau_\ell, \tau_{\ell+1}) \setminus C$ we have by Taylor's formula and (2.7) that
\begin{align*}
\langle \hat\eta_t, \nabla u^i_{m_\ell}(X_t(x)) \rangle &= \lim_{l \to \infty} \frac{1}{\varepsilon_{\nu_l}} \big( u^i_{m_\ell}(X_t(x_{\varepsilon_{\nu_l}})) - u^i_{m_\ell}(X_t(x)) \big) \\
&= \lim_{l \to \infty} \frac{1}{\varepsilon_{\nu_l}} \Big( u^i_{m_\ell}(X_{\tau_\ell}(x_{\varepsilon_{\nu_l}})) - u^i_{m_\ell}(X_{\tau_\ell}(x)) + \int_{\tau_\ell}^t \big( b^i_{m_\ell}(X_r(x_{\varepsilon_{\nu_l}})) - b^i_{m_\ell}(X_r(x)) \big)\,dr \\
&\qquad + \sum_{k=1}^d \int_{\tau_\ell}^t \big( \sigma^{ik}_{m_\ell}(X_r(x_{\varepsilon_{\nu_l}})) - \sigma^{ik}_{m_\ell}(X_r(x)) \big)\,dw^k_r \Big),
\end{align*}
and this converges to the right-hand side of (2.21) by a similar argument as in the proof of Lemma 2.17 below. In particular, this argument does not depend on the value of $\hat\eta_t$ for $t \in C$, since $C$ has zero Lebesgue measure.

Let for all $x \in U_m$, $m \geq 0$ and $\eta \in \mathbb{R}^d$
\[ \tilde\Pi^m_x(\eta) := \sum_{k=2}^d \langle \eta, \bar n^m_k(x) \rangle\, \bar n^m_k(x), \tag{2.22} \]
so that obviously $\tilde\Pi^m_x(\eta) = \pi_x(\eta)$ for all $x \in \partial G \cap U_m$ and all $\eta \in \mathbb{R}^d$. For later use we now prove uniform convergence of $\tilde\Pi^{m_\ell}_{X_t(x_\varepsilon)}(\eta_t(\varepsilon))$ to $\tilde\Pi^{m_\ell}_{X_t(x)}(\hat\eta_t)$ along the chosen subsequence. The proof is based on the fact that there are no local time terms appearing in equation (2.7) for $u^i_{m_\ell}$, $i = 2, \dots, d$. In particular, note that $\tilde\Pi^{m_\ell}_{X_t(x)}(\hat\eta_t)$ is not the same as $Q \cdot O_t \cdot \hat\eta_t$. Later we will identify that process with $Y^2_t$ appearing in Theorem 2.5, which does depend on the local time. Both processes only coincide for $t \in [\tau_\ell, \tau_{\ell+1}) \cap C$.

Lemma 2.17. For every $\ell \geq 0$ let $\Delta_{\tau_{\ell+1}} > 0$ be as in Remark 2.9, such that $\tilde\Pi^{m_\ell}_{X_s(x_\varepsilon)}(\eta_s(\varepsilon))$ is well defined for all $s \in [\tau_\ell, \tau_{\ell+1}]$ and all $0 < |\varepsilon| < \Delta_{\tau_{\ell+1}}$. Then,
\[ \sup_{s \in [\tau_\ell, \tau_{\ell+1}]} \Big\| \tilde\Pi^{m_\ell}_{X_s(x_{\varepsilon_{\nu_l}})}(\eta_s(\varepsilon_{\nu_l})) - \tilde\Pi^{m_\ell}_{X_s(x)}(\hat\eta_s) \Big\|\, \mathbf{1}_{\{0 < |\varepsilon| < \Delta_{\tau_{\ell+1}}\}} \to 0 \quad \text{in } L^p,\ p > 1, \text{ as } l \to \infty, \]
in particular for every $q_n > \inf C$ contained in $[\tau_\ell, \tau_{\ell+1}]$,
\[ \Big\| \tilde\Pi^{m_\ell}_{X_{r(q_n)}(x_{\varepsilon_{\nu_l}})}(\eta_{r(q_n)}(\varepsilon_{\nu_l})) - \tilde\Pi^{m_\ell}_{X_{r(q_n)}(x)}(\hat\eta_{r(q_n)}) \Big\|\, \mathbf{1}_{\{0 < |\varepsilon| < \Delta_{\tau_{\ell+1}}\}} \to 0 \quad \text{in } L^p,\ p > 1, \text{ as } l \to \infty. \]

Proof. Since every function $\bar n^m_k$ is continuous on $U_m$, it suffices by Proposition 2.8 to show that
\[ \sup_{s \in [\tau_\ell, \tau_{\ell+1}]} \Big\| \tilde\Pi^{m_\ell}_{X_s(x)}(\eta_s(\varepsilon_{\nu_l})) - \tilde\Pi^{m_\ell}_{X_s(x)}(\hat\eta_s) \Big\|\, \mathbf{1}_{\{0 < |\varepsilon| < \Delta_{\tau_{\ell+1}}\}} \to 0 \quad \text{in } L^p,\ p > 2, \text{ as } l \to \infty, \]
and for this it is enough to prove that for every $i \in \{2, \dots, d\}$,
\[ \sup_{s \in [\tau_\ell, \tau_{\ell+1}]} \big| \langle \eta_s(\varepsilon_{\nu_l}), \nabla u^i_{m_\ell}(X_s(x)) \rangle - \langle \hat\eta_s, \nabla u^i_{m_\ell}(X_s(x)) \rangle \big|\, \mathbf{1}_{\{0 < |\varepsilon| < \Delta_{\tau_{\ell+1}}\}} \to 0 \quad \text{in } L^p \text{ as } l \to \infty. \]


As before we use Taylor's formula and (2.7) to obtain
\begin{align*}
\langle \eta_s(\varepsilon), \nabla u^i_{m_\ell}(X_s(x)) \rangle &= \frac1\varepsilon \big( u^i_{m_\ell}(X_s(x_\varepsilon)) - u^i_{m_\ell}(X_s(x)) \big) + O(\varepsilon) \\
&= \frac1\varepsilon \Big( u^i_{m_\ell}(X_{\tau_\ell}(x_\varepsilon)) - u^i_{m_\ell}(X_{\tau_\ell}(x)) + \int_{\tau_\ell}^s \big( b^i_{m_\ell}(X_r(x_\varepsilon)) - b^i_{m_\ell}(X_r(x)) \big)\,dr \\
&\qquad + \sum_{k=1}^d \int_{\tau_\ell}^s \big( \sigma^{ik}_{m_\ell}(X_r(x_\varepsilon)) - \sigma^{ik}_{m_\ell}(X_r(x)) \big)\,dw^k_r \Big) + O(\varepsilon) \\
&= \nabla u^i_{m_\ell}(X_{\tau_\ell}(x)) \cdot \eta_{\tau_\ell}(\varepsilon) + \int_{\tau_\ell}^s \int_0^1 \nabla b^i_{m_\ell}(X^{\alpha,\varepsilon}_r) \cdot \eta_r(\varepsilon)\,d\alpha\,dr \\
&\qquad + \sum_{k=1}^d \int_{\tau_\ell}^s \int_0^1 \nabla \sigma^{ik}_{m_\ell}(X^{\alpha,\varepsilon}_r) \cdot \eta_r(\varepsilon)\,d\alpha\,dw^k_r + O(\varepsilon), \tag{2.23}
\end{align*}
where as before $X^{\alpha,\varepsilon}_r := \alpha X_r(x_\varepsilon) + (1 - \alpha) X_r(x)$, $\alpha \in [0,1]$. Recall the definition of $\hat\tau_\ell(x_\varepsilon)$ in Remark 2.9. Comparing (2.21) and (2.23) leads to

\begin{align*}
&E\Big[ \sup_{s \in [\tau_\ell, \tau_{\ell+1}]} \big| \langle \eta_s(\varepsilon), \nabla u^i_{m_\ell}(X_s(x)) \rangle - \langle \hat\eta_s, \nabla u^i_{m_\ell}(X_s(x)) \rangle \big|^p\, \mathbf{1}_{\{0 < |\varepsilon| < \Delta_{\tau_{\ell+1}}\}} \Big] \\
&\leq c_1\, E\Big[ \|\nabla u^i_{m_\ell}(X_{\tau_\ell}(x))\|^p\, \|\eta_{\tau_\ell}(\varepsilon) - \hat\eta_{\tau_\ell}\|^p \Big] \\
&\quad + c_1\, E\Big[ \Big( \int_{\tau_\ell}^{\tau_{\ell+1}} \Big| \int_0^1 \nabla b^i_{m_\ell}(X^{\alpha,\varepsilon}_r)\,d\alpha \cdot \eta_r(\varepsilon) - \nabla b^i_{m_\ell}(X_r(x)) \cdot \hat\eta_r \Big|\,dr \Big)^p\, \mathbf{1}_{\{0 < |\varepsilon| < \Delta_{\tau_{\ell+1}}\}} \Big] \\
&\quad + c_1 \sum_{k=1}^d E\Big[ \sup_{s \in [\tau_\ell, \hat\tau_\ell(x_\varepsilon)]} \Big| \int_{\tau_\ell}^s \Big( \int_0^1 \nabla \sigma^{ik}_{m_\ell}(X^{\alpha,\varepsilon}_r)\,d\alpha \cdot \eta_r(\varepsilon) - \nabla \sigma^{ik}_{m_\ell}(X_r(x)) \cdot \hat\eta_r \Big)\,dw^k_r \Big|^p \Big] + O(\varepsilon).
\end{align*}
Using Burkholder's inequality, the third term can be estimated by
\[ c_2 \sum_{k=1}^d E\Big[ \Big( \int_{\tau_\ell}^{\hat\tau_\ell(x_\varepsilon)} \Big| \int_0^1 \nabla \sigma^{ik}_{m_\ell}(X^{\alpha,\varepsilon}_r)\,d\alpha \cdot \eta_r(\varepsilon) - \nabla \sigma^{ik}_{m_\ell}(X_r(x)) \cdot \hat\eta_r \Big|^2\,dr \Big)^{p/2} \Big]. \]
Finally, we obtain by Proposition 2.8,
\begin{align*}
&E\Big[ \sup_{s \in [\tau_\ell, \tau_{\ell+1}]} \big| \langle \eta_s(\varepsilon), \nabla u^i_{m_\ell}(X_s(x)) \rangle - \langle \hat\eta_s, \nabla u^i_{m_\ell}(X_s(x)) \rangle \big|^p\, \mathbf{1}_{\{0 < |\varepsilon| < \Delta_{\tau_{\ell+1}}\}} \Big] \\
&\leq c_3\, E\Big[ \|\nabla u^i_{m_\ell}(X_{\tau_\ell}(x))\|^p\, \|\eta_{\tau_\ell}(\varepsilon) - \hat\eta_{\tau_\ell}\|^p \Big] \\
&\quad + c_3\, E\Big[ \Big( \int_{\tau_\ell}^{\tau_{\ell+1}} \Big| \int_0^1 \nabla b^i_{m_\ell}(X^{\alpha,\varepsilon}_r)\,d\alpha - \nabla b^i_{m_\ell}(X_r(x)) \Big|\,dr \Big)^p\, e^{c_4 (T + l_T(x) + l_T(x_\varepsilon))}\, \mathbf{1}_{\{0 < |\varepsilon| < \Delta_{\tau_{\ell+1}}\}} \Big] \\
&\quad + c_3 \sum_{k=1}^d E\Big[ \Big( \int_{\tau_\ell}^{\hat\tau_\ell(x_\varepsilon)} \Big| \int_0^1 \nabla \sigma^{ik}_{m_\ell}(X^{\alpha,\varepsilon}_r)\,d\alpha - \nabla \sigma^{ik}_{m_\ell}(X_r(x)) \Big|^2\,dr \Big)^{p/2}\, e^{c_4 (T + l_T(x) + l_T(x_\varepsilon))} \Big] \\
&\quad + c_3\, E\Big[ \int_{\tau_\ell}^{\tau_{\ell+1}} \|\eta_r(\varepsilon) - \hat\eta_r\|^p\,dr \Big] + O(\varepsilon),
\end{align*}
which converges to zero along $(\varepsilon_{\nu_l})$ by dominated convergence, since $\eta_{\tau_\ell}(\varepsilon_{\nu_l}) \to \hat\eta_{\tau_\ell}$, $X^{\alpha,\varepsilon_{\nu_l}}_r \to X_r(x)$ uniformly in $r \in [0,T]$, $\nabla b^i_{m_\ell}$ and $\nabla \sigma^{ik}_{m_\ell}$ are continuous, and $\eta_r(\varepsilon_{\nu_l})$ converges to $\hat\eta_r$ uniformly in $r \in A_n$ for every $n$.

A Characterizing Equation for the Derivatives

We have proven so far that $\eta_t(\varepsilon)$ converges along a subsequence to $\hat\eta_t$ a.s. for every $t$. In order to show differentiability we still need to show that $\hat\eta$ does not depend on the chosen subsequence $(\varepsilon_{\nu_l})_l$. To that aim we shall establish a system of SDE-like equations which admits a unique solution and which is solved by $Y_t := O_t \cdot \hat\eta_t$, $t \in [0,T]$, $O_t$ denoting the moving frame defined in Section 2.2.2. We shall proceed similarly to Section 4 in [1] (see also Section V.6 in [42]).

At first we shall derive an equation for $Y_t(\varepsilon) := O_t \cdot \eta_t(\varepsilon)$, $t \in [0,T]$. Let the rows of $O_t$ be denoted by $n^k_t = n^k(X_t(x))$, $k = 1, \dots, d$. Then, we obtain by the chain rule that for every $t$
\[ \frac1\varepsilon \big[ b(X_t(x_\varepsilon)) - b(X_t(x)) \big] = \sum_{k=1}^d \int_0^1 D_{n^k_t} b(X^{\alpha,\varepsilon}_t)\,d\alpha\; \langle \eta_t(\varepsilon), n^k_t \rangle = \int_0^1 Db(X^{\alpha,\varepsilon}_t)\,d\alpha \cdot O_t^{-1} \cdot Y_t(\varepsilon). \]
By Itô's integration by parts formula we have
\begin{align*}
dY_t(\varepsilon) &= O_t \cdot d\eta_t(\varepsilon) + dO_t \cdot \eta_t(\varepsilon) \\
&= O_t \cdot \tfrac1\varepsilon \big[ b(X_t(x_\varepsilon)) - b(X_t(x)) \big]\,dt + O_t \cdot \tfrac1\varepsilon \big[ n(X_t(x_\varepsilon))\,dl_t(x_\varepsilon) - n(X_t(x))\,dl_t(x) \big] + dO_t \cdot \eta_t \\
&= \Big( O_t \cdot \int_0^1 Db(X^{\alpha,\varepsilon}_t)\,d\alpha \cdot O_t^{-1} + \beta(X_t(x)) \cdot O_t^{-1} \Big) \cdot Y_t(\varepsilon)\,dt + \sum_{k=1}^d \alpha_k(X_t(x)) \cdot O_t^{-1} \cdot Y_t(\varepsilon)\,dw^k_t \\
&\quad + \gamma(X_t(x)) \cdot O_t^{-1} \cdot Y_t(\varepsilon)\,dl_t(x) + O_t \cdot \tfrac1\varepsilon \big[ n(X_t(x_\varepsilon))\,dl_t(x_\varepsilon) - n(X_t(x))\,dl_t(x) \big],
\end{align*}
with coefficient functions $\alpha$, $\beta$ and $\gamma$ as in (2.2). Let $P = \operatorname{diag}(e_1)$ and $Q = \operatorname{Id} - P$ and set
\[ Y^{1,\varepsilon}_t = P \cdot Y_t(\varepsilon) \quad \text{and} \quad Y^{2,\varepsilon}_t = Q \cdot Y_t(\varepsilon) \]
to decompose the space $\mathbb{R}^d$ into the direct sum $\operatorname{Im} P \oplus \operatorname{Ker} P$. We define the coefficients $c$ and $d_\varepsilon$ to be such that
\[
\sum_{k=1}^d \begin{pmatrix} c^1_k(t) & c^2_k(t) \\ c^3_k(t) & c^4_k(t) \end{pmatrix} dw^k_t
+ \begin{pmatrix} d^1_\varepsilon(t) & d^2_\varepsilon(t) \\ d^3_\varepsilon(t) & d^4_\varepsilon(t) \end{pmatrix} dt
= \sum_{k=1}^d \alpha_k(X_t(x)) \cdot O_t^{-1}\,dw^k_t
+ \Big( O_t \cdot \int_0^1 Db(X^{\alpha,\varepsilon}_t)\,d\alpha \cdot O_t^{-1} + \beta(X_t(x)) \cdot O_t^{-1} \Big)\,dt.
\]
As before let $t \in [0,T] \setminus C$ and let $n$ be such that $t \in A_n$. Then there exists $\Delta_n > 0$ such that a.s. $l_{q_n}(x) = l_{q_n}(x_\varepsilon) = 0$ if $q_n < \inf C$, and both of them are strictly positive if $q_n > \inf C$, for all $0 < |\varepsilon| < \Delta_n$. For such $\varepsilon$ we get
\[ Y^{1,\varepsilon}_t = Y^{1,\varepsilon}_0 + \sum_{k=1}^d \int_0^t \big( c^1_k(s) Y^{1,\varepsilon}_s + c^2_k(s) Y^{2,\varepsilon}_s \big)\,dw^k_s + \int_0^t \big( d^1_\varepsilon(s) Y^{1,\varepsilon}_s + d^2_\varepsilon(s) Y^{2,\varepsilon}_s \big)\,ds, \tag{2.24} \]
if $t < \inf C$, and
\[ Y^{1,\varepsilon}_t = Y^{1,\varepsilon}_{r(t)} + \sum_{k=1}^d \int_{r(t)}^t \big( c^1_k(s) Y^{1,\varepsilon}_s + c^2_k(s) Y^{2,\varepsilon}_s \big)\,dw^k_s + \int_{r(t)}^t \big( d^1_\varepsilon(s) Y^{1,\varepsilon}_s + d^2_\varepsilon(s) Y^{2,\varepsilon}_s \big)\,ds + R_t(\varepsilon), \tag{2.25} \]
if $t \geq \inf C$, where $R_t(\varepsilon) := P \cdot O_t \cdot R_{q_n}(x_\varepsilon)$ with $R_{q_n}(x_\varepsilon)$ as in (2.19).

In order to compute the corresponding equation for $Y^{2,\varepsilon}$, we use again Taylor's formula to obtain for every $k \in \{2, \dots, d\}$
\begin{align*}
n^k(X_t(x)) \cdot \tfrac1\varepsilon \big[ n(X_t(x_\varepsilon))\,dl_t(x_\varepsilon) - n(X_t(x))\,dl_t(x) \big]
&= \tfrac1\varepsilon \big( n^k(X_t(x)) - n^k(X_t(x_\varepsilon)) \big) \cdot n(X_t(x_\varepsilon))\,dl_t(x_\varepsilon) \\
&= -\,\eta_t(\varepsilon)^* \cdot Dn^k(X_t(x_\varepsilon))^* \cdot n(X_t(x_\varepsilon))\,dl_t(x_\varepsilon) + O(\varepsilon) \\
&= \eta_t(\varepsilon)^* \cdot Dn(X_t(x_\varepsilon))^* \cdot n^k(X_t(x_\varepsilon))^*\,dl_t(x_\varepsilon) + O(\varepsilon) \\
&= n^k(X_t(x_\varepsilon)) \cdot Dn(X_t(x_\varepsilon)) \cdot \eta_t(\varepsilon)\,dl_t(x_\varepsilon) + O(\varepsilon) \\
&= n^k(X_t(x_\varepsilon)) \cdot Dn(X_t(x_\varepsilon)) \cdot O_t^{-1} \cdot Y_t(\varepsilon)\,dl_t(x_\varepsilon) + O(\varepsilon).
\end{align*}
Hence,
\[ Q \cdot O_t \cdot \tfrac1\varepsilon \big[ n(X_t(x_\varepsilon))\,dl_t(x_\varepsilon) - n(X_t(x))\,dl_t(x) \big] = \Phi_\varepsilon(t) \cdot Y_t(\varepsilon)\,dl_t(x_\varepsilon) + O(\varepsilon) = \big[ \Phi^1_\varepsilon(t) \cdot Y^{1,\varepsilon}_t + \Phi^2_\varepsilon(t) \cdot Y^{2,\varepsilon}_t \big]\,dl_t(x_\varepsilon) + O(\varepsilon), \]
where
\[ \Phi_\varepsilon(t) := Q \cdot O(X_t(x_\varepsilon)) \cdot Dn(X_t(x_\varepsilon)) \cdot O_t^{-1} \quad \text{and} \quad \Phi^1_\varepsilon(t) := \Phi_\varepsilon(t) \cdot P, \quad \Phi^2_\varepsilon(t) := \Phi_\varepsilon(t) \cdot Q. \]
Finally, we obtain the following equation for $Y^{2,\varepsilon}$:
\begin{align*}
Y^{2,\varepsilon}_t &= Y^{2,\varepsilon}_0 + \sum_{k=1}^d \int_0^t \big( c^3_k(s) Y^{1,\varepsilon}_s + c^4_k(s) Y^{2,\varepsilon}_s \big)\,dw^k_s + \int_0^t \big( d^3_\varepsilon(s) Y^{1,\varepsilon}_s + d^4_\varepsilon(s) Y^{2,\varepsilon}_s \big)\,ds \\
&\quad + \int_0^t \big( \Phi^1_\varepsilon(s) Y^{1,\varepsilon}_s + \Phi^2_\varepsilon(s) Y^{2,\varepsilon}_s \big)\,dl_s(x_\varepsilon) + \int_0^t \big( \gamma^1(s) Y^{1,\varepsilon}_s + \gamma^2(s) Y^{2,\varepsilon}_s \big)\,dl_s(x) + O(\varepsilon), \tag{2.26}
\end{align*}
with $\gamma^1(t) := \gamma(X_t(x)) \cdot O_t^{-1} \cdot P$ and $\gamma^2(t) := \gamma(X_t(x)) \cdot O_t^{-1} \cdot Q$.

Setting $Y_t = O_t \cdot \hat\eta_t$ and $Y^1_t = P \cdot Y_t$, $Y^2_t = Q \cdot Y_t$, $t \in [0,T]$, the next step is to prove the following

Proposition 2.18. For every $t\in[0,T]$, $Y_t$ satisfies the equations in Theorem 2.5.

Obviously, from the second equation in Theorem 2.5 it follows that $Y_t^2$ is a continuous semimartingale in $t$, which in particular implies that the mapping $t\mapsto \pi_{X_{r(q_n)}(x)}(\hat\eta_t)$ is continuous at time $t = r(q_n)$ for every $n$. Thus, the proof of Theorem 2.1 is complete once we have shown Proposition 2.18 and that the system in Theorem 2.5 admits a unique solution. First, we prove two preparatory lemmas.

Lemma 2.19. For every t > 0,

i) $\displaystyle\int_0^t \Phi_\varepsilon^1(s)\,Y_s^{1,\varepsilon}\,dl_s(x_\varepsilon) \to 0$ in $L^2$ as $\varepsilon\to 0$,

ii) $\displaystyle\int_0^t \gamma^1(s)\,Y_s^{1,\varepsilon}\,dl_s(x) \to 0$ in $L^2$ as $\varepsilon\to 0$.

Proof. Since $\Phi_\varepsilon^1$ is uniformly bounded, we get

$$\begin{aligned}
\Big|\int_0^t \Phi_\varepsilon^1(s)\,Y_s^{1,\varepsilon}\,dl_s(x_\varepsilon)\Big| &\le c_1\int_0^t \big|\langle\eta_s(\varepsilon), n(X_s(x))\rangle\big|\,dl_s(x_\varepsilon)\\
&\le c_1\int_0^t \big|\langle\eta_s(\varepsilon), n(X_s(x_\varepsilon))\rangle\big|\,dl_s(x_\varepsilon) + c_1\int_0^t \big|\langle\eta_s(\varepsilon), n(X_s(x_\varepsilon)) - n(X_s(x))\rangle\big|\,dl_s(x_\varepsilon),
\end{aligned} \tag{2.27}$$
where the second term tends to zero by Proposition 2.8. Let now $\sigma_s^\varepsilon := \inf\{r : l_r(x_\varepsilon)\ge s\}$ be the left-continuous inverse of $l(x_\varepsilon)$, i.e. for every $s$, $\sigma_s^\varepsilon$ is the left endpoint of an excursion interval of $X(x_\varepsilon)$. In particular, for every fixed $s$, $\sigma_s^\varepsilon = r_\varepsilon(q_n)$ for some $q_n$ depending on $s$ if $\varepsilon$ is small. Thus, for every positive $M$ we obtain by Lemma 2.15 ii)

$$E\Big[\int_0^M \big\langle\eta_{\sigma_s^\varepsilon}(\varepsilon), n(X_{\sigma_s^\varepsilon}(x_\varepsilon))\big\rangle^2\,ds\Big] = E\Big[\int_0^M \big\langle\eta_{r_\varepsilon(q_n)}(\varepsilon), n(X_{r_\varepsilon(q_n)}(x_\varepsilon))\big\rangle^2\,ds\Big] \to 0, \qquad\text{as } \varepsilon\to 0. \tag{2.28}$$

We now show that the first term in (2.27) also tends to zero. On the one hand, by the change of variables formula we have for an arbitrary $M>0$

$$\int_0^t \big|\langle\eta_s(\varepsilon), n(X_s(x_\varepsilon))\rangle\big|\,dl_s(x_\varepsilon)\,\mathbf 1_{\{l_t(x_\varepsilon)\le M\}} = \int_0^{l_t(x_\varepsilon)} \big|\langle\eta_{\sigma_s^\varepsilon}(\varepsilon), n(X_{\sigma_s^\varepsilon}(x_\varepsilon))\rangle\big|\,ds\,\mathbf 1_{\{l_t(x_\varepsilon)\le M\}} \le \int_0^M \big|\langle\eta_{\sigma_s^\varepsilon}(\varepsilon), n(X_{\sigma_s^\varepsilon}(x_\varepsilon))\rangle\big|\,ds,$$
which converges to zero in $L^2$ by (2.28). On the other hand, using the Cauchy-Schwarz inequality and Proposition 2.8 we get

$$E\Big[\Big(\int_0^t \big|\langle\eta_s(\varepsilon), n(X_s(x_\varepsilon))\rangle\big|\,dl_s(x_\varepsilon)\,\mathbf 1_{\{l_t(x_\varepsilon)>M\}}\Big)^2\Big] \le E\big[\exp\big(c_2(t + l_t(x) + l_t(x_\varepsilon))\big)\,l_t(x_\varepsilon)^4\big]^{1/2}\, P[l_t(x_\varepsilon)>M]^{1/2}.$$

Hence,

$$\limsup_{\varepsilon\to 0}\, E\Big[\Big(\int_0^t \big|\langle\eta_s(\varepsilon), n(X_s(x_\varepsilon))\rangle\big|\,dl_s(x_\varepsilon)\Big)^2\Big] \le E\big[\exp(c_4(l_t(x)+t))\,l_t(x)^4\big]^{1/2}\, P[l_t(x)>M]^{1/2}.$$
We let $M$ tend to infinity and obtain that i) holds. ii) follows by an analogous but simpler argument.

Lemma 2.20. For every t > 0,

i) $\displaystyle\int_0^t \Phi^2_{\varepsilon_{\nu_l}}(s)\,Y_s^{2,\varepsilon_{\nu_l}}\,dl_s(x_{\varepsilon_{\nu_l}}) \to \int_0^t \Phi^2(s)\,Y_s^2\,dl_s(x)$ in $L^2$ as $l\to\infty$,

ii) $\displaystyle\int_0^t \gamma^2(s)\,Y_s^{2,\varepsilon_{\nu_l}}\,dl_s(x) \to \int_0^t \gamma^2(s)\,Y_s^2\,dl_s(x)$ in $L^2$ as $l\to\infty$.

Proof. Again we only prove i). Since

$$\int_0^t \Phi_\varepsilon^2(s)\,Y_s^{2,\varepsilon}\,dl_s(x_\varepsilon) - \int_0^t \Phi^2(s)\,Y_s^2\,dl_s(x) = \sum_{\ell:\,\tau_\ell<t}\Big(\int_{\tau_\ell}^{\tau_{\ell+1}\wedge t} \Phi_\varepsilon^2(s)\,Y_s^{2,\varepsilon}\,dl_s(x_\varepsilon) - \int_{\tau_\ell}^{\tau_{\ell+1}\wedge t} \Phi^2(s)\,Y_s^2\,dl_s(x)\Big),$$
it suffices to show that for every $\ell$

$$\int_{\tau_\ell}^{\tau_{\ell+1}} \Phi_\varepsilon^2(s)\,Y_s^{2,\varepsilon}\,dl_s(x_\varepsilon) - \int_{\tau_\ell}^{\tau_{\ell+1}} \Phi^2(s)\,Y_s^2\,dl_s(x) \to 0 \quad\text{in } L^2 \text{ along } \varepsilon_{\nu_l}.$$
Recall the definition of $\tilde\Pi^m_x$ in (2.22). By construction we have for $s\in[\tau_\ell,\tau_{\ell+1}]$,
$$\Phi^2(s)\,Y_s^2\,dl_s(x) = \Phi^2(s)\cdot Q\cdot O_s\cdot\hat\eta_s\,dl_s(x) = \Phi^2(s)\cdot Q\cdot O_s\cdot\pi_{X_s(x)}(\hat\eta_s)\,dl_s(x) = \Phi^2(s)\cdot Q\cdot O_s\cdot\tilde\Pi^{m_\ell}_{X_s(x)}(\hat\eta_s)\,dl_s(x),$$
and analogously, for sufficiently small $\varepsilon$,

$$\begin{aligned}
\Phi_\varepsilon^2(s)\,Y_s^{2,\varepsilon}\,dl_s(x_\varepsilon) &= \Big\{\Phi_\varepsilon^2(s)\cdot Q\cdot O(X_s(x_\varepsilon))\cdot\eta_s(\varepsilon) + \Phi_\varepsilon^2(s)\cdot Q\cdot\big[O(X_s(x)) - O(X_s(x_\varepsilon))\big]\cdot\eta_s(\varepsilon)\Big\}\,dl_s(x_\varepsilon)\\
&= \Big\{\Phi_\varepsilon^2(s)\cdot Q\cdot O(X_s(x_\varepsilon))\cdot\tilde\Pi^{m_\ell}_{X_s(x_\varepsilon)}(\eta_s(\varepsilon)) + \Phi_\varepsilon^2(s)\cdot Q\cdot\big[O(X_s(x)) - O(X_s(x_\varepsilon))\big]\cdot\eta_s(\varepsilon)\Big\}\,dl_s(x_\varepsilon).
\end{aligned}$$
Hence,

$$\begin{aligned}
\int_{\tau_\ell}^{\tau_{\ell+1}} &\Phi_\varepsilon^2(s)\,Y_s^{2,\varepsilon}\,dl_s(x_\varepsilon) - \int_{\tau_\ell}^{\tau_{\ell+1}} \Phi^2(s)\,Y_s^2\,dl_s(x)\\
&= \int_{\tau_\ell}^{\tau_{\ell+1}} \Big[\Phi_\varepsilon^2(s)\cdot Q\cdot O(X_s(x_\varepsilon))\cdot\tilde\Pi^{m_\ell}_{X_s(x_\varepsilon)}(\eta_s(\varepsilon)) - \Phi^2(s)\cdot Q\cdot O_s\cdot\tilde\Pi^{m_\ell}_{X_s(x)}(\hat\eta_s)\Big]\,dl_s(x_\varepsilon)\\
&\quad + \int_{\tau_\ell}^{\tau_{\ell+1}} \Phi_\varepsilon^2(s)\cdot Q\cdot\big[O(X_s(x)) - O(X_s(x_\varepsilon))\big]\cdot\eta_s(\varepsilon)\,dl_s(x_\varepsilon)\\
&\quad + \int_{\tau_\ell}^{\tau_{\ell+1}} \Phi^2(s)\cdot Q\cdot O_s\cdot\tilde\Pi^{m_\ell}_{X_s(x)}(\hat\eta_s)\,\big(dl_s(x_\varepsilon) - dl_s(x)\big).
\end{aligned}$$

The first term converges to zero in $L^2$ along $\varepsilon_{\nu_l}$ by Lemma 2.17 and Proposition 2.8. The second term also converges to zero in $L^2$, again by Proposition 2.8. Finally, the third term tends to zero by the weak convergence of $l(x_\varepsilon)$ to $l(x)$ on $[\tau_\ell,\tau_{\ell+1}]$. Note that $\tilde\Pi^{m_\ell}_{X_s(x)}(\hat\eta_s)$ is continuous in $s$ on $[\tau_\ell,\tau_{\ell+1}]$.

Proof of Proposition 2.18. For every $t\in[0,T]$ we have convergence of $Y_t^{1,\varepsilon}$ and $Y_t^{2,\varepsilon}$ to $Y_t^1$ and $Y_t^2$, respectively, a.s. and in $L^2$ along the subsequence $\varepsilon_{\nu_l}$. Thus, it is enough to prove that the right hand sides in (2.24), (2.25) and (2.26) converge along $\varepsilon_{\nu_l}$ in $L^2$ to the corresponding terms in the equations describing $Y^1$ and $Y^2$. Lemma 2.15 gives convergence of $Y_{r(t)}^{1,\varepsilon}$ to zero, $R_t(\varepsilon)$ tends to zero arguing as in (2.20) and in Corollary 2.14. The convergence of the terms involving the local times follows from Lemma 2.19 and Lemma 2.20. The convergence of the remaining integral terms is clear.

It remains to show uniqueness, which is carried out in the next lemma.

Proposition 2.21. The system in Theorem 2.5 admits a unique solution.

Proof. Let $(U_t)_{t\ge 0}$ be the unique solution of the matrix-valued equation
$$U_t = \mathrm{Id} - \int_0^t U_s\cdot\big(\Phi^2(s) + \gamma^2(s)\big)\,dl_s(x), \qquad t\ge 0.$$
Then, introducing the stopping times $T_N := \inf\{s\ge 0 : \max(\|U_s^{-1}\|, \|U_s\|)\ge N\}$, $N\in\mathbb N$, we have $T_N\uparrow\infty$ a.s. as $N$ tends to infinity. Using integration by parts we get
$$d(U_t\cdot Y_t^2) = \sum_{k=1}^d U_t\,\big(c_k^3(t)\,Y_t^1 + c_k^4(t)\,Y_t^2\big)\,dw_t^k + U_t\,\big(d^3(t)\,Y_t^1 + d^4(t)\,Y_t^2\big)\,dt + U_t\,\big(\Phi^2(t)+\gamma^2(t)\big)\,Y_t^2\,dl_t(x) - U_t\,\big(\Phi^2(t)+\gamma^2(t)\big)\,Y_t^2\,dl_t(x).$$
The last two terms cancel, so we can rewrite the system in Theorem 2.5 as

$$\begin{aligned}
Y_t^1 &= \mathbf 1_{\{t<\inf C\}}\,Y_0^1 + \sum_{k=1}^d \int_{r(t)}^t \big(c_k^1(s)\,Y_s^1 + c_k^2(s)\,Y_s^2\big)\,dw_s^k + \int_{r(t)}^t \big(d^1(s)\,Y_s^1 + d^2(s)\,Y_s^2\big)\,ds,\\
Y_t^2 &= U_t^{-1}\cdot\sum_{k=1}^d \int_0^t U_s\cdot\big(c_k^3(s)\,Y_s^1 + c_k^4(s)\,Y_s^2\big)\,dw_s^k + U_t^{-1}\cdot\int_0^t U_s\cdot\big(d^3(s)\,Y_s^1 + d^4(s)\,Y_s^2\big)\,ds.
\end{aligned}$$

We proceed as in Lemma 4.3 in [1]. Let $H$ be the totality of $\mathbb R^d$-valued adapted processes $(\varphi_t)$, $t\in[0,T]$, whose paths are a.s. càdlàg and which satisfy $\sup_{t\in[0,T]} E[\|\varphi_t\|^2] < \infty$. On $H$ we introduce the norm
$$\|\varphi\|_H = \sup_{t\in[0,T]} E\big[\|\varphi_t\|^2\big]^{1/2}.$$
For any $\varphi\in H$ we define the process $I(\varphi)$ by
$$I(\varphi)_t = \mathbf 1_{\{t<\inf C\}}\,\varphi_0 + \sum_{k=1}^d \int_{r(t)}^t \big(c_k^1(s)\,\varphi_s^1 + c_k^2(s)\,\varphi_s^2\big)\,dw_s^k + \int_{r(t)}^t \big(d^1(s)\,\varphi_s^1 + d^2(s)\,\varphi_s^2\big)\,ds + \cdots$$
Arguing as in [1], one verifies that $I$ is a contraction on $(H, \|\cdot\|_H)$ for $T$ small enough, which yields existence and uniqueness of the solution on $[0,T]$.

Since T > 0 is arbitrary, Theorem 2.1 and Theorem 2.5 follow.

2.3.4 The Neumann Condition

In this final section we prove the Neumann condition stated in Corollary 2.7. Let $x\in\partial G$. By a density argument it is sufficient to consider bounded functions $f$ which are continuously differentiable and have bounded derivatives. Then, for each $t>0$ we obtain by dominated convergence and the chain rule:

$$D_{n(x)} P_t f(x) = E\big[\nabla f(X_t(x))\cdot D_{n(x)} X_t(x)\big].$$

Thus, it suffices to show $D_{n(x)} X_t(x) = 0$ for all $t\ge 0$, which is equivalent to $Y_t = 0$ for all $t\ge 0$, where $Y_t = O_t\cdot\eta_t$ with $v = n(x)$. Fix some arbitrary $T>0$. Using the same notation as in Proposition 2.21, $Y_t$ satisfies

$$\begin{aligned}
Y_t^1 &= \sum_{k=1}^d \int_{r(t)}^t \big(c_k^1(s)\,Y_s^1 + c_k^2(s)\,Y_s^2\big)\,dw_s^k + \int_{r(t)}^t \big(d^1(s)\,Y_s^1 + d^2(s)\,Y_s^2\big)\,ds,\\
Y_t^2 &= U_t^{-1}\cdot\sum_{k=1}^d \int_0^t U_s\cdot\big(c_k^3(s)\,Y_s^1 + c_k^4(s)\,Y_s^2\big)\,dw_s^k + U_t^{-1}\cdot\int_0^t U_s\cdot\big(d^3(s)\,Y_s^1 + d^4(s)\,Y_s^2\big)\,ds.
\end{aligned}$$
Note that $\inf C = 0$ and $Y_0^2 = Q\cdot O(x)\cdot n(x) = 0$. Setting $Y_t^{1,N} = P\cdot Y_t^N$, $Y_t^{2,N} = Q\cdot Y_t^N$ and $M_t := \sum_{k=1}^d \int_0^t \big(c_k^1(s)\,Y_s^1 + c_k^2(s)\,Y_s^2\big)\,dw_s^k$, we obtain by Doob's inequality for every $N$ that
$$\begin{aligned}
E\Big[\sup_{t\in[0,T]}\|Y_t^N\|^2\Big] &\le E\Big[\sup_{t\in[0,T]}\|Y_t^{1,N}\|^2\Big] + E\Big[\sup_{t\in[0,T]}\|Y_t^{2,N}\|^2\Big]\\
&\le E\Big[\sup_{t\in[0,T]}\|M_t - M_{r(t)}\|^2\Big] + c_1\int_0^T E\Big[\sup_{r\in[0,s]}\|Y_r^N\|^2\Big]\,ds\\
&\le 4\,E\Big[\sup_{t\in[0,T]}\|M_t\|^2\Big] + c_1\int_0^T E\Big[\sup_{r\in[0,s]}\|Y_r^N\|^2\Big]\,ds\\
&\le c_2\int_0^T E\Big[\sup_{r\in[0,s]}\|Y_r^N\|^2\Big]\,ds,
\end{aligned}$$
which implies by Gronwall's lemma that $Y_t^N = 0$ for all $t\in[0,T]$ a.s. We let $N$ tend to infinity to obtain that $Y_t = 0$ for all $t\in[0,T]$ a.s. Since $T>0$ is arbitrary, the claim follows.

Part II

Particle Approximation of the Wasserstein Diffusion


Chapter 3

The Approximating Particle System

3.1 Introduction and Main Results

The construction and regularity of Brownian motion or more general diffusions in Euclidean domains with reflecting boundary condition is a classical subject in probability theory, lying in the intersection of regularity theory for parabolic PDE and stochastic analysis. Starting from the early works by e.g. Fukushima [37] and Tanaka [60], the field has seen perpetual research activity, cf. [12, 51] (and e.g. [11] for a more comprehensive list of references). Typically the reflecting process can be obtained in two ways: either by solving a system of Skorohod SDEs with local time at the boundary, or via Dirichlet form methods; under suitable smoothness assumptions on the domain and the coefficients both approaches are equivalent. In the second part of this thesis we study a very specific singular case, in which the drift coefficients of the operator may diverge and where the equivalence of the two approaches breaks down but the process exhibits good regularity nevertheless. Our case corresponds to a Dirichlet form
$$\mathcal E(f,f) = \int_\Omega |\nabla f|^2\, q(dx)$$
on $L^2(\Omega, dq)$ for a very specific choice of domain $\Omega\subset\mathbb R^N$ and measure $q(dx) = q(x)\,dx$ (see below). Similar but more regular variants were studied under the smoothness assumptions $q\in H^{1,1}(\Omega)$ respectively $\nabla(\log q)\in L^p(\Omega, dq)$, $p>N$, in [62] and [36], where a Skorohod decomposition for the induced process still holds.

In the present work we are concerned with a diffusion taking values in the closure of

$$\Sigma_N = \big\{x\in\mathbb R^N : 0 < x^1 < \dots < x^N < 1\big\},$$
equipped with the measure

$$q_N(dx) = \frac{1}{Z_\beta}\prod_{i=0}^N \big(x^{i+1}-x^i\big)^{\frac{\beta}{N+1}-1}\,dx^1\,dx^2\cdots dx^N, \tag{3.1}$$
where $Z_\beta = \big(\Gamma(\beta)/\Gamma(\beta/(N+1))^{N+1}\big)^{-1}$ is a normalization constant and $\beta>0$ is some parameter. That is, we study the process $(X^N_\cdot)$ generated by the $L^2(\Sigma_N, q_N)$-closure of the quadratic form
$$\mathcal E^N(f,f) = \int_{\Sigma_N} |\nabla f(x)|^2\, q_N(dx), \qquad f\in C^\infty(\Sigma_N),$$
still denoted by $\mathcal E^N$. Its generator extends the operator $(L^N, D(L^N))$,
$$L^N f(x) = \Big(\frac{\beta}{N+1}-1\Big)\sum_{i=1}^N\Big(\frac{1}{x^i - x^{i-1}} - \frac{1}{x^{i+1}-x^i}\Big)\frac{\partial}{\partial x^i} f(x) + \Delta f(x) \qquad\text{for } x\in\Sigma_N, \tag{3.2}$$
with domain

$$D(L^N) = C^2_{\mathrm{Neu}} = \big\{f\in C^2(\overline\Sigma_N)\;\big|\;\nabla f\cdot\nu = 0 \text{ on all } (N-1)\text{-dimensional faces of } \partial\Sigma_N\big\},$$
with $\nu$ denoting the outward normal field on $\partial\Sigma_N$. On the level of formal Itô calculus $(L^N, D(L^N))$ corresponds to an order preserving dynamics $(X^N_\cdot)\in\Sigma_N$ for the location of $N$ particles in the unit interval which solves the system of coupled Skorohod SDEs

$$dx_t^i = \Big(\frac{\beta}{N+1}-1\Big)\Big(\frac{1}{x_t^i - x_t^{i-1}} - \frac{1}{x_t^{i+1} - x_t^i}\Big)\,dt + \sqrt2\,dw_t^i + dl_t^{i-1} - dl_t^i, \qquad i = 1,\dots,N, \tag{3.3}$$
with $x^0 = 0$, $x^{N+1} = 1$ by convention, where $\{w^i\}$ are independent real Brownian motions and $\{l^i\}$ are the collision local times, i.e. satisfying

$$dl_t^i \ge 0, \qquad l_t^i = \int_0^t \mathbf 1_{\{x_s^i = x_s^{i+1}\}}\,dl_s^i. \tag{3.4}$$
$(X^N_\cdot)$ may thus be considered as a system of coupled two-sided real Bessel processes with uniform Bessel dimension $\delta = \beta/(N+1)$. Similar to the standard real Bessel process BES($\delta$) with Bessel dimension $\delta<1$, the existence of $X^N$, even with initial condition $X_0 = x\in\Sigma_N$, is not trivial, nor are its regularity properties.
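Note that, by (3.1), the $N+1$ spacings of a $q_N$-distributed configuration carry a symmetric Dirichlet density with parameter $\beta/(N+1)$ on the simplex, so exact samples from the symmetrizing measure are easy to generate. A minimal NumPy sketch (the helper name `sample_qN` is our choice, not from the text):

```python
import numpy as np

def sample_qN(N, beta, size=1, rng=None):
    """Draw samples from the measure q_N on the ordered simplex Sigma_N.

    Under q_N the N+1 spacings (x^1 - 0, x^2 - x^1, ..., 1 - x^N) follow a
    symmetric Dirichlet distribution with parameter delta = beta/(N+1); this
    matches the density prod_i (x^{i+1} - x^i)^{beta/(N+1)-1} and the stated
    normalization constant Z_beta."""
    rng = np.random.default_rng(rng)
    delta = beta / (N + 1)
    gaps = rng.dirichlet([delta] * (N + 1), size=size)  # rows sum to 1
    return np.cumsum(gaps, axis=1)[:, :-1]              # positions in (0,1)

# Example: 5 draws of a 4-particle configuration for beta = 10.
configs = sample_qN(N=4, beta=10.0, size=5, rng=0)
```

Such samples can serve, e.g., as equilibrium initial conditions for simulations of the dynamics (3.3).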

Beyond mathematical curiosity, our motivation for studying this process is its relation to the Wasserstein diffusion, an infinite dimensional diffusion process on the space of probability measures intimately related to the Wasserstein metric [66]. In the next chapter we shall prove that the normalized empirical measure of the system (3.3) converges to the Wasserstein diffusion in the high density regime $N\to\infty$. Hence the regularity properties of $(X^N_\cdot)$ may give an indication of the regularity of the Wasserstein diffusion, although in the present work we have not managed to obtain estimates which are uniform w.r.t. the parameter $N$.

Remark 3.1. For simulation the dynamics of $(X^N_\cdot)$ can be approximated by $X_t^{N,\epsilon} = \mathcal X^{N,\epsilon}_{\lfloor t/\epsilon^2\rfloor}$, $t\ge 0$, where $(\mathcal X^{N,\epsilon}_n)_{n\ge 0}$ is the Markov chain on $\Sigma_N$ with transition kernel
$$\mu^{N,\epsilon}(x,A) = \frac{q_N(B_\epsilon(x)\cap\Sigma_N\cap A)}{q_N(B_\epsilon(x)\cap\Sigma_N)}.$$
An alternative approach uses a regularized version of the formal SDE (3.3) and (3.4). For illustration we present some results by courtesy of Theresa Heeg, Bonn, for the case of $N=4$ particles with $\beta=10$, $\beta=1$ and $\beta=0.3$ respectively, at large times.

[Figure: long-time samples of the $N=4$ particle system for $\beta = 10$, $\beta = 1$ and $\beta = 0.3$.]
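The "regularized SDE" route mentioned in Remark 3.1 can be sketched as follows. The taming parameter `rho` and the sort-and-clip step (standing in for the collision local times and the reflection at 0 and 1) are our own choices for illustration, not necessarily the regularization used for the figures above:

```python
import numpy as np

def simulate_regularized(N=4, beta=10.0, T=1.0, dt=1e-4, rho=1e-3, rng=None):
    """Euler scheme for a regularized version of the formal SDE (3.3).

    The singular drift (beta/(N+1)-1) * (1/(x^i - x^{i-1}) - 1/(x^{i+1} - x^i))
    is tamed by adding rho to each gap; sorting and clipping after each step
    approximate the order-preserving reflection."""
    rng = np.random.default_rng(rng)
    delta = beta / (N + 1)
    x = np.sort(rng.uniform(0.0, 1.0, size=N))  # ordered initial configuration
    for _ in range(int(T / dt)):
        gaps = np.diff(np.concatenate(([0.0], x, [1.0])))  # N+1 gaps incl. walls
        drift = (delta - 1.0) * (1.0 / (gaps[:-1] + rho) - 1.0 / (gaps[1:] + rho))
        x = x + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(N)
        x = np.clip(np.sort(x), 0.0, 1.0)  # restore order, keep in [0,1]
    return x
```

For $\delta < 1$ the drift pulls neighbouring particles together, which is the qualitative effect visible in the $\beta = 0.3$ panel.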

We are now ready to state the main results of this chapter.

Theorem 3.2. For $\beta < 2(N+1)$, the Dirichlet form $\mathcal E^N$ generates a Feller process $(X^N_\cdot)$ on $\overline\Sigma_N$, i.e. the associated transition semigroup on $L^2(\Sigma_N, q_N)$ defines a strongly continuous contraction semigroup on the subspace $C(\overline\Sigma_N)$ equipped with the sup-norm topology.

In order to prove Theorem 3.2 we use some localization arguments to exploit the symmetries of the problem. A crucial ingredient for the justification of this strategy is the following Markov uniqueness result for the operator $(L^N, C^2_{\mathrm{Neu}})$.

Proposition 3.3. For $\beta < 2(N+1)$, there is at most one symmetric strongly continuous contraction semigroup $(T_t)_{t\ge 0}$ on $L^2(\Sigma_N, q_N)$ whose generator $(\mathcal L, D(\mathcal L))$ extends $(L^N, C^2_{\mathrm{Neu}})$.

Hence, together with Theorem 3.2 we can state the following existence and uniqueness result.

Corollary 3.4. The formal system of Skorohod SDEs defines via the associated martingale problem a unique diffusion process which can be started everywhere on the (closed) simplex ΣN . As for path regularity we obtain the following characterization.

Theorem 3.5. For any starting point $x\in\overline\Sigma_N$, $(X^x_\cdot)\in\Sigma_N\subset\mathbb R^N$ is a Euclidean semimartingale if and only if $\beta/(N+1)\ge 1$.

In particular we obtain that a Skorohod decomposition of the process $X^N_\cdot$ is impossible if $\beta$ is small enough. This is in sharp contrast to all aforementioned previous works. Moreover, again due to the uniqueness assertion of Proposition 3.3, the following negative result holds true.

Corollary 3.6. If $\beta/(N+1) < 1$, the system of equations (3.3), (3.4) is ill-posed, i.e. it admits no solution in the sense of Itô calculus.

Note that Theorems 3.2 and 3.5 generalize the corresponding classical results for the family of standard real Bessel processes, which are proved in a completely different manner, cf. [13, 56].

While the proof of Theorem 3.5 consists of a straightforward application of a regularity criterion by Fukushima [38] for Dirichlet processes, the proof of Theorem 3.2 is more involved and has four main steps. The first crucial observation is that the reference measure $q_N$ can locally be extended to a measure $\hat q$ on the full Euclidean space which lies in the Muckenhoupt class $\mathcal A_2$, which allows for a rich potential theory. In particular the Poincaré inequality and doubling conditions hold, which imply via heat kernel estimates the regularity of the induced process on $\mathbb R^N$. The second step is the probabilistic piece of the localization, which corresponds to stopping the $\mathbb R^N$-valued process at domain boundaries where the measure $\hat q$ is 'tame' and to showing that the stopped process is again Feller. To this end we employ a version of the Wiener test for degenerate elliptic diffusions by Fabes, Jerison and Kenig [34]. The third step is to use the reflection symmetry of the problem, which allows to treat the Neumann boundary condition indeed via a reflection of the extended $\mathbb R^N$-valued process. The fourth step is to establish the Markov uniqueness for the associated martingale problem, which is crucial in order to justify the identification of the processes after localization. In the proof of the Markov uniqueness we shall again depend on the nice potential theory available in Muckenhoupt classes.

In the present work we restrict ourselves to the case $\beta < 2(N+1)$ to ensure that the local extension of $q_N$ is contained in the Muckenhoupt class $\mathcal A_2$. In our context this case is the more interesting one in view of the approximation result given in the next chapter. For $\beta\ge 2(N+1)$ the extension of $q_N$ lies only in the larger Muckenhoupt class $\mathcal A_p$ for some $p>2$ sufficiently large, and our method fails. Nevertheless, if $\beta\ge N(N+1)$ we are in the setting of [36] and the arguments there imply that the Feller property holds for $(X^N_\cdot)$ in our case, too.

The material presented in this chapter is contained in [10].

3.2 Dirichlet Form and Integration by Parts Formula

We start with the rigorous construction of $(X^N_\cdot)$, which departs from the symmetrizing measure $q_N$ on $\Sigma_N$ defined above in (3.1). Note that one can identify $L^p(\Sigma_N, q_N)$ with $L^p(\overline\Sigma_N, q_N)$, $p\ge 1$. Throughout the paper, by abuse of notation, we will also denote by $q_N$ the density w.r.t. the Lebesgue measure, i.e. $q_N(A) = \int_A q_N(x)\,dx$ for all measurable $A\subseteq\Sigma_N$.

For general $\beta>0$, $N\in\mathbb N$, the measure $q_N$ satisfies the 'Hamza condition', because it has a strictly positive density with locally integrable inverse, cf. e.g. [2]. This implies that the form $\mathcal E^N(f,f)$, $f\in C^\infty(\Sigma_N)$, is closable on $L^2(\Sigma_N, q_N)$. The $L^2(\Sigma_N, q_N)$-closure defines a local regular Dirichlet form, still denoted by $\mathcal E^N$. General Dirichlet form theory asserts the existence of a Hunt diffusion $(X^N_\cdot)$ associated with $\mathcal E^N$ which can be started in $q_N$-almost all $x\in\Sigma_N$ and which is understood as a generalized solution of the system (3.3), (3.4). This identification is justified by the fact that any semimartingale solution solves via Itô's formula the martingale problem for the operator $(L^N, D(L^N))$ defined in (3.2).

Next we prove an integration by parts formula for $q_N$.

Proposition 3.7. Let $u\in C^1(\overline\Sigma_N)$ and $\xi = (\xi^1,\dots,\xi^N)$ be a vector field in $C^1(\overline\Sigma_N, \mathbb R^N)$ satisfying $\langle\xi,\nu\rangle = 0$ on $\partial\Sigma_N$, where $\nu$ denotes the outward normal field of $\partial\Sigma_N$. Then,
$$\int_{\Sigma_N} \langle\nabla u,\xi\rangle\, q_N(dx) = -\int_{\Sigma_N} u\,\Big[\mathrm{div}(\xi) + \Big(\tfrac{\beta}{N+1}-1\Big)\sum_{i=1}^N \xi^i\Big(\frac{1}{x^i-x^{i-1}} - \frac{1}{x^{i+1}-x^i}\Big)\Big]\, q_N(dx).$$

Proof. Let $u$ and $\xi$ be as in the statement. In a first step we shall show that

$$\int_{\Sigma_N} \mathrm{div}(u\,\xi\,q_N)\,dx = 0. \tag{3.5}$$

For k very large (compared with N) we define

N Ak := x Σ : xi + 1 xi+1 . ∈ N k ≤ i=0 \  We apply the divergence theorem to obtain

div(u ξ qN ) dx = u ξ,νk qN dx, k k h i ZA Z∂A ν denoting the outward normal field of ∂Ak. Since obviously Ak Σ as k , it suffices to k ր N →∞ prove that the right hand side converges to zero as k . Clearly, this term can be written as →∞ N u ξ,νk qN dx = u ξ,νk qN dx k h i k h i ∂A ∂Ai Z Xi=0 Z with ∂Ak := Ak x : xi + 1 = xi+1 . Note that for every i the outward normal on ∂Ak is given i ∩{ k } i by the normalization of the vector ei ei+1, (ei) denoting the canonical basis in RN and − i=1...N with e0 = eN+1 = 0. Hence, for i 1,...,N 1 ∈{ − } β β 1 i+1 i 1 N+1 1 j+1 j N+1 1 u ξ,νk qN dx = √ u(x) ξ (x) ξ (x) ( k ) − (x x ) − dx. − k h i 2 k − − Z∂Ai Z∂Ai j=i  Y6 For any x =(x1,...,xi,xi + 1 ,xi+2,...,xN ) ∂Ak we denotex ˆ =(x1,...,xi,xi,xi+2,...,xN ) k ∈ i ∈ ∂Σ xi = xi+1 . Since ξi(ˆx) ξi+1(ˆx) = 0 by the boundary condition on ξ, using the mean N ∩{ } − value theorem we get 1 ξi+1(x) ξi(x) = ξi+1(x) ξi+1(ˆx) + ξi(ˆx) ξi(x) = ∂ ξi+1(ϑ ) ∂ ξi(ϑ ) − − − k i+1 1 − i+1 2 for some ϑ ,ϑ [xi,xi + 1 ]. Thus,  1 2 ∈ k β β 1 1 N+1 i+1 i j+1 j N+1 1 u ξ,νk qN dx = √ ( k ) u(x) ∂i+1ξ (ϑ1) ∂i+1ξ (ϑ2) (x x ) − dx. − k h i 2 k − − Z∂Ai Z∂Ai j=i  Y6 For k the right hand side converges to zero, since the integral tends to →∞ β i+1 i j+1 j 1 u(x) ∂i+1ξ (x) ∂i+1ξ (x) (x x ) N+1 − dx < . i i+1 − − ∞ Z∂ΣN x: x =x j=i ∩{ }  Y6 For i = 0 and i = N similar arguments apply, and (3.5) follows. Finally, we use (3.5) to obtain

$$\begin{aligned}
\int_{\Sigma_N} \langle\nabla u,\xi\rangle\,q_N(dx) &= \int_{\Sigma_N} \mathrm{div}(u\,\xi\,q_N)\,dx - \int_{\Sigma_N} u\,\mathrm{div}(\xi)\,q_N(dx) - \int_{\Sigma_N} u\,\langle\xi,\nabla q_N\rangle\,dx\\
&= -\int_{\Sigma_N} u\,\mathrm{div}(\xi)\,q_N(dx) - \int_{\Sigma_N} u\,\langle\xi,\nabla\log q_N\rangle\,q_N(dx),
\end{aligned}$$
and the claim follows.
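As a quick sanity check of Proposition 3.7, the formula can be verified numerically in the simplest case $N=1$, where $\Sigma_1 = (0,1)$ and $q_1(x)\propto(x(1-x))^{\beta/2-1}$; the test functions $u$ and $\xi$ below are arbitrary choices of ours, with $\xi$ vanishing at the boundary as required:

```python
import numpy as np

# Integration by parts check for N = 1: Sigma_1 = (0,1), delta = beta/(N+1).
beta = 3.0
delta = beta / 2.0

u = lambda x: np.cos(2 * x)                  # u in C^1
xi = lambda x: x * (1 - x)                   # vector field, zero on the boundary
du = lambda x: -2 * np.sin(2 * x)            # u'
dxi = lambda x: 1 - 2 * x                    # div(xi) in one dimension
q = lambda x: (x * (1 - x)) ** (delta - 1)   # unnormalized density q_1

x = np.linspace(1e-8, 1 - 1e-8, 1_000_001)
lhs = np.trapz(du(x) * xi(x) * q(x), x)
# drift term of Proposition 3.7: (delta-1) * xi * (1/(x - x^0) - 1/(x^2 - x))
drift = (delta - 1) * xi(x) * (1 / x - 1 / (1 - x))
rhs = -np.trapz(u(x) * (dxi(x) + drift) * q(x), x)
```

Both sides agree up to quadrature error, which mirrors the divergence-theorem argument above in one dimension.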

Remark 3.8. Let $u\in C^1(\overline\Sigma_N)$ and $\xi$ be a vector field of the form $\xi = w\,\vec\varphi$ with $w\in C^1(\overline\Sigma_N)$ and $\vec\varphi(x) = (\varphi(x^1),\dots,\varphi(x^N))$, $\varphi\in C^\infty([0,1])$ and $\langle\vec\varphi,\nu\rangle = 0$ on $\partial\Sigma_N$, in particular $\varphi|_{\partial[0,1]} = 0$. Then, the integration by parts formula in Proposition 3.7 reads
$$\int_{\Sigma_N} \langle\nabla u,\xi\rangle\,q_N(dx) = -\int_{\Sigma_N} u\,\big[w\,V_{N,\varphi} + \langle\nabla w,\vec\varphi\rangle\big]\,q_N(dx),$$
where
$$V_{N,\varphi}(x^1,\dots,x^N) := \Big(\tfrac{\beta}{N+1}-1\Big)\sum_{i=0}^N \frac{\varphi(x^{i+1}) - \varphi(x^i)}{x^{i+1}-x^i} + \sum_{i=1}^N \varphi'(x^i).$$
Let $C^2_{\mathrm{Neu}} = \{f\in C^2(\overline\Sigma_N) : \langle\nabla f,\nu\rangle = 0 \text{ on } \partial\Sigma_N\}$ as above, with $\nu$ still denoting the outer normal field on $\partial\Sigma_N$. Then, for any $f\in C^1(\overline\Sigma_N)$ and $g\in C^2_{\mathrm{Neu}}$ we apply the integration by parts formula in Proposition 3.7 for $\xi = \nabla g$ to obtain
$$\mathcal E^N(f,g) = -\int_{\Sigma_N} f\,L^N g\,q_N(dx).$$
Moreover, arguing similarly as in the proof of Proposition 3.7 we obtain that for every $g\in C^2_{\mathrm{Neu}}$,
$$\big|\mathcal E^N(f,g)\big| \le C\,\|f\|_{L^2(\Sigma_N, q_N)}, \qquad \forall f\in D(\mathcal E^N).$$

In particular, $C^2_{\mathrm{Neu}}$ is contained in the domain of the generator $\mathcal L$ associated with $\mathcal E^N$ and $\mathcal L f = L^N f$ for all $f\in C^2_{\mathrm{Neu}}$. Hence, the process $X^N$ is the formal solution to the system (3.3), (3.4).

3.3 Feller Property

This section is devoted to the proof of Theorem 3.2. From now on let $\beta < 2(N+1)$. We define
$$\Omega_N := \Omega_N^\delta := \Sigma_N\cap\big\{x\in\mathbb R^N : x^N \le 1-\delta\big\},$$
for some positive small $\delta < [2(N+2)]^{-1}$, and the weight function

$$\hat q(x) := \hat q_{N,\delta}(x) := \begin{cases} q_N(x) & \text{if } x\in\Omega_N,\\[4pt] \dfrac{1}{Z_\beta}\,\delta^{\frac{\beta}{N+1}-1}\displaystyle\prod_{i=1}^N \big(x^i - x^{i-1}\big)^{\frac{\beta}{N+1}-1} & \text{if } x\in S_1\setminus\Omega_N, \end{cases} \tag{3.6}$$
where
$$S_1 := \big\{x\in\mathbb R^N : 0\le x^1\le x^2\le\dots\le x^N\big\}.$$
We want to extend the weight function $\hat q$ to the whole of $\mathbb R^N$. To do this we introduce the mapping

$$T : \mathbb R^N \to S_1, \qquad x \mapsto \big(|x^{(1)}|,\dots,|x^{(N)}|\big), \tag{3.7}$$
where $(1),\dots,(N)$ denotes the permutation of $1,\dots,N$ such that $|x^{(1)}|\le\dots\le|x^{(N)}|$.

The extension of $\hat q$ to $\mathbb R^N$ is now defined via $\hat q(x) = \hat q(Tx)$, $x\in\mathbb R^N$. Again we will also denote by $\hat q$ the induced measure on $\mathbb R^N$. Consider the $L^2(\mathbb R^N, \hat q)$-closure of
$$\hat{\mathcal E}^{N,a}(f,f) = \int_{\mathbb R^N} \langle a\nabla f, a\nabla f\rangle(x)\,\hat q(dx), \qquad f\in C_c^\infty(\mathbb R^N), \tag{3.8}$$
still denoted by $\hat{\mathcal E}^{N,a}$, for a measurable field $x\mapsto a(x)\in\mathbb R^{N\times N}$ on $\mathbb R^N$ satisfying
$$\frac1c\,E_N \le a(x)^t\cdot a(x) \le c\cdot E_N \tag{3.9}$$

in the sense of non-negative definite matrices. Here $E_N$ denotes the identity matrix and $a^t$ the transposition of $a$. Let $(Y_t)_{t\ge 0} = (Y_t^{N,a})_{t\ge 0}$ be the associated symmetric Hunt process on $\mathbb R^N$, starting from the invariant distribution $\hat q$. Finally, we denote by $(Q_t)_t$ the transition semigroup of $Y$.

3.3.1 Feller Properties of Y

In the sequel we will denote by $C_0(\mathbb R^N)$ the space of continuous functions on $\mathbb R^N$ vanishing at infinity.

Proposition 3.9. For $2(N+1) > \beta$ and a matrix $a$ satisfying (3.9), $Y^{N,a}$ is a Feller process, i.e.

i) for every $t>0$ and every $f\in C_0(\mathbb R^N)$ we have $Q_t f\in C_0(\mathbb R^N)$,

ii) for every $f\in C_0(\mathbb R^N)$, $\lim_{t\downarrow 0} Q_t f = f$ pointwise in $\mathbb R^N$.

Moreover, $Q_t f\in C(\mathbb R^N)$ for every $t>0$ and every $f\in L^2(\mathbb R^N, \hat q)$.

Remark 3.10. It is well known that i) and ii) even imply that $\lim_{t\downarrow 0}\|Q_t f - f\|_\infty = 0$ for each $f\in C_0(\mathbb R^N)$. Moreover, we can derive the following version of the strong Markov property (see Theorem 3 in Section 2.3 in [18]). Let $T$ be a stopping time with $T\le t_0$ a.s. for some $t_0>0$. Then, we have for each $f\in L^2(\mathbb R^N, \hat q)$
$$E\big[f(Y_{t_0})\,\big|\,\mathcal F_T\big] = E_{Y_T}\big[f(Y_{t_0-T})\big],$$
with $(\mathcal F_t)_{t\ge 0}$ denoting the natural filtration of $Y$.

Proof of Proposition 3.9. ii) follows directly by path-continuity and dominated convergence. i) as well as the additional statement follow from the analytic regularity theory of symmetric diffusions, see [59], in particular Theorem 3.5, Proposition 3.1 and Corollary 4.2, provided the following two conditions are fulfilled:

• The measure $\hat q$ is doubling, i.e. there exists a constant $C'$ such that for all Euclidean balls $B_R\subset B_{2R}$
$$\hat q(B_{2R}) \le C'\,\hat q(B_R).$$

• $\hat{\mathcal E}$ satisfies a uniform local Poincaré inequality, i.e. there is a constant $C'>0$ such that
$$\int_{B_R} \big|f - (f)_{B_R}\big|^2\,d\hat q \le C' R^2 \int_{B_R} |\nabla f|^2\,d\hat q,$$
for all Euclidean balls $B_R$ and $f\in D(\hat{\mathcal E})$, where $(f)_{B_R}$ denotes the average $\frac{1}{\hat q(B_R)}\int_{B_R} f\,d\hat q$.

Both conditions are verified once we have proven that the weight function $\hat q$ is contained in the Muckenhoupt class $\mathcal A_2$, which will be done in Lemma 3.11 below. Indeed, the doubling property follows immediately from the Muckenhoupt condition (see e.g. [63] or [61]) and for the proof of the Poincaré inequality see Theorem 1.5 in [35].

Lemma 3.11. For $N$ such that $\beta < 2(N+1)$, we have $\hat q\in\mathcal A_2$, i.e. there exists a positive constant $C = C(N,\delta)$ such that for every Euclidean ball $B_R$
$$\Big(\frac{1}{|B_R|}\int_{B_R} \hat q\,dx\Big)\Big(\frac{1}{|B_R|}\int_{B_R} \hat q^{-1}\,dx\Big) \le C,$$
where $|B_R|$ denotes the Lebesgue measure of the ball $B_R$.

It suffices to prove the Muckenhoupt condition for the weight function $\tilde q$ defined by

$$\tilde q(x) := \prod_{i=1}^N \big(x^i - x^{i-1}\big)^{\beta/(N+1)-1}, \qquad x\in S_1, \tag{3.10}$$
and $\tilde q(x) := \tilde q(Tx)$ if $x\in\mathbb R^N$, since there exist positive constants $C_1$ and $C_2$ depending on $\delta$ and $N$ such that
$$C_1\,\tilde q(x) \le \hat q(x) \le C_2\,\tilde q(x), \qquad \forall x\in\mathbb R^N. \tag{3.11}$$

Note that $(1-x^N)^{\frac{\beta}{N+1}-1}$ is uniformly bounded and bounded away from zero on $\Omega_N$.

In the following we denote by $P_R(m)$ the parallelepiped in $\mathbb R^N$ with basis point $m = (m_1,\dots,m_N)$ which is spanned by the vectors $v_i = \sum_{j\ge i} e_j$, $i = 1,\dots,N$, normalized to length $R$, where $(e_j)_{j=1,\dots,N}$ is the canonical basis in $\mathbb R^N$. Then, $P_R(m)$ can also be written as
$$P_R(m) = m + \Big\{x\in\mathbb R^N : x^1\in\big[0,\tfrac{R}{\sqrt N}\big],\; x^2\in\big[x^1, x^1+\tfrac{R}{\sqrt{N-1}}\big],\;\dots,\; x^N\in\big[x^{N-1}, x^{N-1}+R\big]\Big\}.$$
In order to prove the Muckenhoupt condition for $\tilde q$ we need the following

Lemma 3.12. Let $B_R$ be an arbitrary Euclidean ball in $\mathbb R^N$. Then, there exist a positive constant $C$ only depending on $N$ and a parallelepiped $P_{lR}(m)\subset S_1$ with $l>0$ independent of $B_R$ such that
$$\int_{B_R} \tilde q^{\pm1}\,dx \le C \int_{P_{lR}(m)} \tilde q^{\pm1}\,dx.$$

Proof. We denote by $(S_n)$, $n = 1,\dots,2^N\cdot N!$, the subsets of $\mathbb R^N$ taking the form
$$S_n = \big\{x\in\mathbb R^N : (s_1\pi_1(x),\dots,s_N\pi_N(x))\in S_1\big\},$$
where $s_i\in\{-1,1\}$, $i = 1,\dots,N$, and $\pi(x) = (\pi_1(x),\dots,\pi_N(x))$ is a permutation of the components of $x$. Note that $\mathbb R^N = \bigcup_n S_n$ and the intersections of the sets $S_n$ have zero Lebesgue measure.

Let now $B_R(M)$ be an arbitrary Euclidean ball with radius $R$ centered at $M\in\mathbb R^N$. We first consider the case $|M|\le 2R$. Then, obviously $B_R(M)\subset B_{4R}(0)$ and $B_{4R}(0)\cap S_1\subset P_{4R}(0)$. Hence,

$$\int_{B_R(M)} \tilde q^{\pm1}\,dx \le \int_{B_{4R}(0)} \tilde q^{\pm1}\,dx = \sum_n \int_{B_{4R}(0)\cap S_n} \tilde q^{\pm1}\,dx.$$
By the definition of $\tilde q$ we have

$$\int_{B_{4R}(0)\cap S_n} \tilde q^{\pm1}\,dx = \int_{B_{4R}(0)\cap S_1} \tilde q^{\pm1}\,dx$$
for all $n\in\{1,\dots,2^N\cdot N!\}$. Thus,

$$\int_{B_R(M)} \tilde q^{\pm1}\,dx \le 2^N\cdot N! \int_{B_{4R}(0)\cap S_1} \tilde q^{\pm1}\,dx \le C \int_{P_{4R}(0)} \tilde q^{\pm1}\,dx.$$
Suppose now that $|M| > 2R$. Let $n_0$ be such that $M\in S_{n_0}$. Then, by construction of $\tilde q$ we have

$$\int_{B_R(M)\cap S_n} \tilde q^{\pm1}\,dx \le \int_{B_R(M)\cap S_{n_0}} \tilde q^{\pm1}\,dx$$
for all $n = 1,\dots,2^N\cdot N!$. Hence,

$$\int_{B_R(M)} \tilde q^{\pm1}\,dx = \sum_n \int_{B_R(M)\cap S_n} \tilde q^{\pm1}\,dx \le 2^N\cdot N! \int_{B_R(M)\cap S_{n_0}} \tilde q^{\pm1}\,dx.$$
Set $K := T(B_R(M)\cap S_{n_0})\subset S_1$ with $T$ defined as above. Then, it is clear by definition of $\tilde q$ that

$$\int_{B_R(M)\cap S_{n_0}} \tilde q^{\pm1}\,dx = \int_K \tilde q^{\pm1}\,dx$$
and we get
$$\int_{B_R(M)} \tilde q^{\pm1}\,dx \le C \int_K \tilde q^{\pm1}\,dx.$$
Finally, we choose a parallelepiped $P_{2R}(m)$ such that $K\subseteq P_{2R}(m)\subset S_1$, which completes the proof.

Proof of Lemma 3.11. We prove the Muckenhoupt condition for $\tilde q$. Recall that we have assumed $\beta < 2(N+1)$. In the following the symbol $C$ denotes a positive constant depending on $N$ and $\delta$, possibly changing its value from one occurrence to another. Using Lemma 3.12 we have
$$\begin{aligned}
\frac{1}{|B_R|^2}\int_{B_R} \tilde q\,dx \int_{B_R} \tilde q^{-1}\,dx &\le C R^{-2N} \int_{P_{lR}(m)} \tilde q\,dx \int_{P_{lR}(m)} \tilde q^{-1}\,dx\\
&= C R^{-2N} \int_{P_{lR}(m)} \prod_{i=1}^N \big(x^i-x^{i-1}\big)^{\beta/(N+1)-1}\,dx \times \int_{P_{lR}(m)} \prod_{i=1}^N \big(x^i-x^{i-1}\big)^{-(\beta/(N+1)-1)}\,dx.
\end{aligned}$$
By the change of variables $y_i = x^i - x^{i-1}$, $i = 1,\dots,N$, we obtain
$$\frac{1}{|B_R|^2}\int_{B_R} \tilde q\,dx \int_{B_R} \tilde q^{-1}\,dx \le C R^{-2N} \prod_{i=1}^N \int_{\tilde m_i}^{\tilde n_i} y_i^{\frac{\beta}{N+1}-1}\,dy_i \int_{\tilde m_i}^{\tilde n_i} y_i^{-(\frac{\beta}{N+1}-1)}\,dy_i,$$
where we have set $\tilde m_i := m_i - m_{i-1}$ with $m_0 := 0$ and $\tilde n_i := \tilde m_i + \frac{lR}{\sqrt{N+1-i}}$ for abbreviation. Recall that in one dimension the weight function $x\mapsto |x|^\eta$ on $\mathbb R$ is contained in $\mathcal A_2$ if $\eta\in(-1,1)$ (see p. 229 and p. 236 in [61]). Hence, we get for every $i\in\{1,\dots,N\}$

$$\int_{\tilde m_i}^{\tilde n_i} y_i^{\frac{\beta}{N+1}-1}\,dy_i \int_{\tilde m_i}^{\tilde n_i} y_i^{-(\frac{\beta}{N+1}-1)}\,dy_i \le C\,\Big(\frac{lR}{\sqrt{N+1-i}}\Big)^2,$$
and the result follows.
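The role of the one-dimensional $\mathcal A_2$ bound used here can be made concrete: for $w(y) = |y|^\eta$ the averaged product over an interval $[a,b]\subset[0,\infty)$ has a closed form, and it stays bounded uniformly over intervals exactly when $\eta\in(-1,1)$, i.e. (with $\eta = \beta/(N+1)-1$) when $0 < \beta < 2(N+1)$. A small sketch (the helper name `a2_product` is ours):

```python
def a2_product(eta, a, b):
    """Exact A2 product (average of y^eta) * (average of y^-eta) on [a,b],
    0 <= a < b, using the closed-form antiderivative of y^p.
    Finite near the origin precisely for eta in (-1, 1)."""
    def avg(p):  # (1/(b-a)) * integral_a^b y^p dy, valid for p > -1
        assert p > -1, "integral diverges at the origin for p <= -1"
        return (b ** (p + 1) - a ** (p + 1)) / ((p + 1) * (b - a))
    return avg(eta) * avg(-eta)
```

For intervals touching the origin the product equals $1/(1-\eta^2)$, independently of the interval length, which is the scale invariance behind the uniform constant $C$ in the display above.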

3.3.2 Feller Properties of Y inside a Box

Let $E := E^\delta := \{x\in\mathbb R^N : \|x\|_\infty < 1-2\delta\}$ be a box in $\mathbb R^N$ centered at the origin and $\mathcal B(E)$ be the Borel $\sigma$-field on $E$. We denote by
$$\tau_E := \inf\{t>0 : Y_t\in E^c\}$$
the first hitting time of $E^c$. This subsection is devoted to the proof of the Feller properties for the stopped process $Y_t^E := Y_t^{N,a,E} := Y_{t\wedge\tau_E}^{N,a}$, whose transition semigroup is given by
$$Q_t^E f(x) = E^x\big[f(Y_{t\wedge\tau_E})\big] = E^x\big[f(Y_t)\,\mathbf 1_{\{t<\tau_E\}} + f(Y_{\tau_E})\,\mathbf 1_{\{t\ge\tau_E\}}\big], \qquad t>0,\; x\in\bar E,$$
for every bounded $f$ on $\bar E$. In order to prove the Feller property we will follow essentially the proof of Theorem 13.5 in [30] (cf. also [19]). It is shown there that the Feller properties are preserved if the domain is regular in the following sense.

Proposition 3.13. The domain $E$ is regular, i.e. for every $z\in\partial E$ we have $P^z[\tau_E = 0] = 1$.

Remark 3.14. $z\in\partial E$ is a regular point in the sense of the definition given in Proposition 3.13 if and only if for every continuous function $f$ on $\partial E$
$$\lim_{E\ni x\to z} E^x\big[f(Y_{\tau_E})\,\mathbf 1_{\{\tau_E<\infty\}}\big] = f(z).$$
For the Brownian case we refer to Theorem 2.12 in [45] and to Theorem 1.23 in [21]. The arguments are robust, but it is required that under $P^x$ the probability of the event that the exit time of a ball centered at $x$ does not exceed $t$ is arbitrarily small, uniformly in $x$, for a suitably chosen small $t>0$. In our situation this property is ensured by [58], p. 330.

The proof of this proposition will be based on the following Wiener test established by Fabes, Jerison and Kenig.

Theorem 3.15. Let $B_{R_0}(0)$ be a large ball centered at zero such that $E\subset B_{R_0/4}(0)$. Then, a point $z\in\partial E$ is regular if and only if

a) $\displaystyle\int_0^{R_0} \frac{s^2}{\hat q(B_s(z))}\,\frac{ds}{s} < \infty$, or

b) $\displaystyle\int_0^{R_0} \mathrm{cap}(K_\rho)\,\frac{\rho^2}{\hat q(B_\rho(z))}\,\frac{d\rho}{\rho} = \infty$,

where $K_\rho(z) := (B_{R_0}(0)\setminus E)\cap B_\rho(z)$ and cap denotes the capacity associated with the Dirichlet form $\mathcal E^{N,a}$.

Proof. See Theorem 5.1 in [34].

In the sequel we will use the notation
$$d(x) := \#\big\{i\in\{1,\dots,N\} : x^i = x^{i-1}\big\}, \qquad x\in\mathbb R^N,$$
again with the convention $x^0 := 0$. In order to prove the regularity of $E$ we start with a preparatory lemma.

Lemma 3.16. Let $y\in\mathbb R^N$ be arbitrary.

i) There exists a positive $r_0 = r_0(y)$ such that for all balls $B_r(y)$, $0<r\le r_0$, there exist a parallelepiped $P_{lr}(\bar y)$ contained in $S_1$ with $d(\bar y) = d(y)$, a positive constant $C$ and $l>0$, such that $\hat q(B_r(y)) \le C\,\hat q(P_{lr}(\bar y))$.

ii) For all balls $B_r(y)$ there exists a parallelepiped $P_{lr}(\bar y)$ contained in $S_1$ with $d(\bar y) = d(y)$, for some $l>0$, such that $\hat q(P_{lr}(\bar y)) \le \hat q(B_r(y))$.

iii) For all $y$ we have $C_1\,r^{h(y)} \le \hat q(B_r(y))$ for all $r$, and $\hat q(B_r(y)) \le C_2\,r^{h(y)}$ for all $r\le r_0(y)$, for some positive constants $C_1$ and $C_2$ depending on $y$, where
$$h(y) := N - d(y) + \tfrac{\beta}{N+1}\,d(y).$$

Proof. Obviously, due to (3.11) it suffices to prove the lemma with $\hat q$ replaced by $\tilde q$ defined in (3.10).

i) The case $y=0$ is clear. For $y\ne 0$ we choose $r_0$ such that $2r_0\le|y|$ and let $n_0$ be such that $y\in S_{n_0}$. Consider now an arbitrary ball $B_r(y)$ with $r\le r_0$ and let $K = T(S_{n_0}\cap B_r(y))$ be the subset of $S_1$ constructed in the second part of the proof of Lemma 3.12. Then, possibly after choosing a smaller $r_0$, we find a parallelepiped and a positive constant $l$ such that $K\subset P_{lr}(\bar y)\subset S_1$ with $d(\bar y) = d(y)$, and we obtain i).

ii) Let $K = T(S_{n_0}\cap B_r(y))$ be defined as in i). Then, clearly $\tilde q(B_r(y)) \ge \tilde q(K)$. Thus, we can choose $\bar y = Ty$ and $l$ independent of $r$ such that $P_{lr}(\bar y)\subseteq K$, and ii) follows.

iii) We proceed similarly as in the proof of Lemma 3.11. For some parallelepiped $P_{lr}(\bar y)$ with $d(\bar y) = d(y)$ we use a change of variables to obtain

$$\tilde q(P_{lr}(\bar y)) = \int_{P_{lr}(\bar y)} \prod_{i=1}^N \big(x^i-x^{i-1}\big)^{\frac{\beta}{N+1}-1}\,dx = \prod_{i=1}^N \int_{\tilde y_i}^{\tilde y_i + c_i r} z_i^{\frac{\beta}{N+1}-1}\,dz_i,$$
where $\tilde y_i = \bar y^i - \bar y^{i-1}$, $\bar y^0 := 0$ and $c_i := \frac{l}{\sqrt{N+1-i}}$. Note that $d(\bar y) = d(y)$ is the number of components of $\tilde y$ which are equal to zero. Using the mean value theorem we obtain that

$$C_1\,r^{h(y)} \le \hat q(P_{lr}(\bar y)) \le C_2\,r^{h(y)}$$
for some positive constants $C_1$ and $C_2$ depending on $y$, so that iii) follows from i) and ii).

Proof of Proposition 3.13. Let $z\in\partial E$ be fixed and $R_0$ be as in the statement of Theorem 3.15. Setting $h'(z) := 1 - h(z)$, let us first consider the case $h'(z) > -1$. Then, using Lemma 3.16 iii) we have that
$$\int_0^{R_0} \frac{s^2}{\hat q(B_s(z))}\,\frac{ds}{s} \le C\int_0^{r_0} s^{h'(z)}\,ds + \frac{1}{2\,\hat q(B_{r_0}(z))}\big(R_0^2 - r_0^2\big) < \infty,$$
with $r_0 = r_0(z)$ as above in Lemma 3.16 iii). Thus, the criterion a) in Theorem 3.15 applies and the regularity of $z$ follows. The case $h'(z)\le -1$ is more difficult. Combining Lemma 3.1 in [35] and Theorem 3.3 in [35] and using Lemma 3.16 iii) we get the following estimate for the capacity of small balls:

$$\mathrm{cap}(B_r(y)) \simeq \Big(\int_r^{R_0} \frac{s^2}{\hat q(B_s(y))}\,\frac{ds}{s}\Big)^{-1} \ge C\,\Big(\int_r^{R_0} s^{h'(y)}\,ds\Big)^{-1}. \tag{3.12}$$

Recall the definition of the set $K_\rho(z)$ in Theorem 3.15. Clearly, for every $\rho$ sufficiently small there exists a ball $B_{\rho/2}(\hat z)$ with $\hat z$ depending on $\rho$ and with $d(\hat z) = d(z)$ such that $B_{\rho/2}(\hat z)\subset K_\rho(z)$. Let now $r_0$ be such that Lemma 3.16 iii) and (3.12) hold for every ball $B_\rho(z)$, $\rho\le r_0$. Then, we obtain in the case $h'(z) < -1$
$$\begin{aligned}
\int_0^{R_0} \mathrm{cap}(K_\rho)\,\frac{\rho^2}{\hat q(B_\rho(z))}\,\frac{d\rho}{\rho} &\ge C\int_0^{r_0} \mathrm{cap}(B_{\rho/2}(\hat z))\,\rho^{h'(z)}\,d\rho \ge C\int_0^{r_0} \Big(\int_{\rho/2}^{R_0} s^{h'(\hat z)}\,ds\Big)^{-1}\rho^{h'(z)}\,d\rho\\
&= C\,(h'(z)+1)\int_0^{r_0/2} \frac{\rho^{h'(z)}}{R_0^{h'(z)+1} - \rho^{h'(z)+1}}\,d\rho = C\int_0^{r_0/2} \Big(-\log\big|R_0^{h'(z)+1} - \rho^{h'(z)+1}\big|\Big)'\,d\rho = \infty.
\end{aligned}$$
Finally, if $h'(z) = -1$ we get by an analogous procedure
$$\int_0^{R_0} \mathrm{cap}(K_\rho)\,\frac{\rho^2}{\hat q(B_\rho(z))}\,\frac{d\rho}{\rho} \ge C\int_0^{r_0/2} \frac{d\rho}{\rho\,(\log R_0 - \log\rho)} = C\int_0^{r_0/2} \big(-\log(\log R_0 - \log\rho)\big)'\,d\rho = \infty.$$
Hence, applying the criterion b) of Theorem 3.15 completes the proof.

Proposition 3.17. $Y^E$ is a Feller process, i.e.

i) for every $t > 0$ and every $f \in C(\bar E)$ we have $Q_t^E f \in C(\bar E)$,

ii) for every $f \in C(\bar E)$, $\lim_{t \downarrow 0} Q_t^E f = f$ pointwise in $\bar E$.

The statement is classical and can be found e.g. in Theorem 13.5 in [30]. For illustration we repeat the argument here. We shall need the following lemma (cf. [18], p. 73, Exercise 2).

Lemma 3.18. For any compact set $K \subset E$ we have

$$\lim_{t \downarrow 0} \, \sup_{x \in K} P^x[\tau_E \leq t] = 0.$$

Proof. We need to show that for any $\delta > 0$ there exists a $t_0 > 0$ such that

$$\inf_{x \in K} P^x[\tau_E \geq t_0] \geq 1 - \delta. \tag{3.13}$$

Consider a bounded function $f \in C_0(\mathbb R^N)$ such that $0 \leq f \leq 1$, $f = 1$ on $K$ and $f = 0$ on the complement of $E$. Let now $t_0$ be such that $\sup_{t \leq t_0} \| Q_t f - f \|_\infty < \delta/2$ (cf. Remark 3.10). Then,

$$E^x[f(Y_{t_0})] \leq P^x[\tau_E \geq t_0] + E^x\big[ f(Y_{t_0}) \, 1\!\!1_{\{\tau_E < t_0\}} \big].$$

On the other hand, using the strong Markov property (cf. again Remark 3.10) and the fact that $f(Y_{\tau_E}) = 0$ we have

$$E^x\big[ f(Y_{t_0}) \, 1\!\!1_{\{\tau_E < t_0\}} \big] = E^x\big[ 1\!\!1_{\{\tau_E < t_0\}} \, (Q_{t_0 - \tau_E} f)(Y_{\tau_E}) \big] \leq \sup_{t \leq t_0} \| Q_t f - f \|_\infty < \delta/2,$$

so that for every $x \in K$, using $f(x) = 1$,

$$P^x[\tau_E \geq t_0] \geq E^x[f(Y_{t_0})] - \delta/2 \geq f(x) - \| Q_{t_0} f - f \|_\infty - \delta/2 \geq 1 - \delta,$$

which proves (3.13).

Proof of Proposition 3.17. Let $t > 0$ and $f \in C(\bar E)$. Then, by the semigroup property of $(Q_t^E)$ we have for $0 < s < t$ that $Q_t^E f = Q_s^E \psi_s$ with $\psi_s := Q_{t-s}^E f$. Hence,

$$\big| Q_t^E f(x) - Q_s \psi_s(x) \big| = \big| E^x[\psi_s(Y_{s \wedge \tau_E}) - \psi_s(Y_s)] \big| \leq 2 \, \|\psi_s\|_\infty \, P^x[\tau_E \leq s] \leq 2 \, \|f\|_\infty \, P^x[\tau_E \leq s],$$

and since the right hand side converges to zero uniformly in $x$ on every compact subset of $E$ by Lemma 3.18, we conclude that $Q_t^E f \in C_b(E)$, i.e. $Q_t^E f$ is continuous in the interior of $E$.

In order to show i) it suffices to verify continuity at the boundary. Since $E$ is regular, we obviously have $Q_t^E f = f$ on $\partial E$. By Lemma 13.1 in [30] we have upper semicontinuity of the mapping $x \mapsto P^x[t < \tau_E]$. Hence, we obtain for $z \in \partial E$,

$$\limsup_{x \to z} P^x[t < \tau_E] \leq P^z[t < \tau_E] = 0,$$

where we have used the regularity of $z$ from Proposition 3.13. Thus, for every $x \in E$,

$$\begin{aligned}
\big| Q_t^E f(x) - f(z) \big|
&\leq \big| Q_t^E f(x) - E^x[f(Y_{\tau_E}) 1\!\!1_{\{\tau_E < \infty\}}] \big| + \big| E^x[f(Y_{\tau_E}) 1\!\!1_{\{\tau_E < \infty\}}] - f(z) \big| \\
&\leq E^x\Big[ \big| f(Y_t) 1\!\!1_{\{t < \tau_E\}} + f(Y_{\tau_E}) 1\!\!1_{\{t \geq \tau_E\}} - f(Y_{\tau_E}) 1\!\!1_{\{\tau_E < \infty\}} \big| \Big] + \big| E^x[f(Y_{\tau_E}) 1\!\!1_{\{\tau_E < \infty\}}] - f(z) \big| \\
&\leq \|f\|_\infty \, P^x[t < \tau_E] + E^x\Big[ \big| f(Y_{\tau_E}) 1\!\!1_{\{\tau_E < \infty\}} \big| \cdot \big| 1\!\!1_{\{t \geq \tau_E\}} - 1 \big| \Big] + \big| E^x[f(Y_{\tau_E}) 1\!\!1_{\{\tau_E < \infty\}}] - f(z) \big| \\
&\leq 2 \, \|f\|_\infty \, P^x[t < \tau_E] + \big| E^x[f(Y_{\tau_E}) 1\!\!1_{\{\tau_E < \infty\}}] - f(z) \big|,
\end{aligned}$$

where we have used the fact that the event $\{t \geq \tau_E\}$ is included in $\{\tau_E < \infty\}$. Since $z$ is a regular point, the second term tends to zero as $x \to z$, $x \in E$, cf. Remark 3.14. Hence,

$$\lim_{E \ni x \to z} Q_t^E f(x) = f(z)$$

and property i) is proven.

To prove ii), we extend $f$ to a function in $C_0(\mathbb R^N)$, so that we may deduce from Proposition 3.9 that $\lim_{t \downarrow 0} Q_t f(x) = f(x)$ for every $x \in E$. Furthermore, we have for every $x \in E$

$$\big| Q_t^E f(x) - Q_t f(x) \big| \leq P^x[t \geq \tau_E] \, \|f\|_\infty,$$

and $P^x[\tau_E = 0] = 0$, since $E$ is open and $Y$ has continuous paths. Hence,

$$\lim_{t \downarrow 0} Q_t^E f(x) = \lim_{t \downarrow 0} Q_t f(x) = f(x).$$

This gives pointwise convergence on $E$. Since $Q_t^E f = f$ on $\partial E$ by regularity, the convergence on $\partial E$ is trivial.

3.3.3 Feller Properties of $X^N$

In this section we finally prove the Feller property for $X^N$ stated in Theorem 3.2. To this end we construct a Feller process $\tilde X$ taking values in $\Sigma_N$ and in a second step identify this process with $X^N$.

In analogy to the definition of $\Omega_N$ above we set

$$\Omega_i := \Omega_i^\delta := \{ x \in \Sigma_N : x^{i+1} - x^i \geq \delta \}, \qquad i = 0, \dots, N, \tag{3.14}$$

and moreover

$$A^i := \partial \Omega_i^{2\delta} \setminus \partial \Sigma_N = \{ x \in \Sigma_N : x^{i+1} - x^i = 2\delta \}.$$

Notice that we can choose $\delta$ so small that $\Sigma_N = \bigcup_{i=0}^N \Omega_i^{2\delta}$. Furthermore, we define the mappings $H_i$, $i = 0, \dots, N$, by

$$H_i(x) := \big( x^1, \dots, x^i, \, 1 - (x^N - x^i), \, 1 - (x^{N-1} - x^i), \dots, 1 - (x^{i+1} - x^i) \big), \qquad x \in \Sigma_N.$$

Notice that for every $i$, $H_i$ maps $\Omega_N$ onto $\Omega_i$ and vice versa. In particular, $H_i \circ H_i = \mathrm{Id}$ and $H_N$ is the identity on $\Omega_N$. Let $T : \bar E \to \Omega_N^{2\delta} \subset \Omega_N$ be defined as in (3.7). Let $Y^i$ denote the $\mathbb R^N$-valued Feller process induced from the form (3.8) with $a = a_i$, where the matrix valued function $a_i$ is defined $\hat q$-almost everywhere by

$$a_i(x) := \begin{cases} DH_i^t & \text{for } x \in \mathring S_1, \\ DH_i^t \cdot (DT^{-1})^t|_{S_n} & \text{for } x \in \mathring S_n, \ n \in \{2, \dots, 2^N \cdot N!\}, \end{cases} \tag{3.15}$$

such that condition (3.9) is clearly satisfied (note that $DH_i = DH_i^{-1}$). In particular, $a_i$ is constant on every $S_n$. Moreover, setting $\rho_i := H_i \circ T$, clearly $a_i = (D\rho_i^{-1})^t$.

Remark 3.19. Analogously to Proposition 3.7 one can establish the following integration by parts formula associated to $\mathcal E^{N, a_i}$. Let $u \in C^1(\mathbb R^N)$ and $\xi$ a continuous vector field on $\mathbb R^N$ such that $\operatorname{supp} \xi \subseteq \{ x \in \mathbb R^N : \|x\|_\infty < 1 - \delta \}$, $\xi$ is continuously differentiable in the interior of each $S_n$ and $\xi$ satisfies the boundary condition $\langle a_i \cdot \xi, \nu_n \rangle = 0$ on $\partial S_n$ for every $n$, $\nu_n$ denoting the outward normal field of $\partial S_n$. Then,

$$\int_{\mathbb R^N} \langle a_i \nabla u, \, a_i \xi \rangle \, \hat q(dx) = - \int_{\mathbb R^N} u \, \Big( \operatorname{div}(a_i^t \cdot a_i \cdot \xi) + \langle a_i^t \cdot a_i \cdot \xi, \, \nabla \log \hat q \rangle \Big) \, \hat q(dx).$$

Thus, every smooth function $g$ such that $\xi = \nabla g$ satisfies the above conditions is contained in the domain of the generator $L^i$ associated to $\mathcal E^{N, a_i}$, and on $\mathbb R^N \setminus \bigcup_n \partial S_n$ we have

$$L^i g = \operatorname{div}(a_i^t \cdot a_i \cdot \nabla g) + \langle a_i^t \cdot a_i \cdot \nabla g, \, \nabla \log \hat q \rangle.$$

Next we define the $\Omega_i^{2\delta}$-valued process $\tilde X^i$ by

$$\tilde X_t^i = \rho_i(Y_t^{i,E}) = H_i \circ T(Y_t^{i,E}), \qquad t \geq 0.$$

The semigroup of $\tilde X^i$ will be denoted by $(\tilde P_t^i)_{t \geq 0}$, i.e. $\tilde P_t^i f(x) = E^x[f(\tilde X_t^i)]$.

Lemma 3.20. For every $i$, $\tilde X^i$ is Markovian.

Proof. Since $H_i$ is an injective mapping for every $i$, it suffices to show that the process $T(Y_\cdot^{i,E})$ is Markovian. Moreover, since $T(Y_t^{i,E}) = T(Y_{t \wedge \tau_E}^i) = (T(Y_\cdot^i))_{t \wedge \tau_E}$, it is enough to prove the Markov property for the process $T(Y_\cdot^i)$, which is implied e.g. by the condition that for any Borel set $A \subseteq \Sigma_N$

$$P^x[Y_t^i \in T^{-1}(A)] = P^y[Y_t^i \in T^{-1}(A)] \quad \text{whenever } T(x) = T(y). \tag{3.16}$$

Now the choice of $\hat q$ and the metric $a_i$ imply that for any Borel set $A \subseteq \Sigma_N$ condition (3.16) is satisfied. To see this, let $\{ \sigma_k \mid k = 1, \dots, N + N(N-1)/2 \}$ be the collection of line-reflections in $\mathbb R^N$ with respect to either one of the coordinate axes $\{\lambda e_i, \, \lambda \in \mathbb R\}$ or a diagonal $\{\lambda(e_j + e_k)\}$; then for $x, y \in \mathbb R^N$ with $T(x) = T(y)$ there exists a finite sequence $\sigma_{k_1}, \dots, \sigma_{k_l}$ such that $\tau(x) := \sigma_{k_1} \circ \sigma_{k_2} \circ \cdots \circ \sigma_{k_l}(x) = y$. Now each of the reflections $\sigma_k$ preserves the Dirichlet form (3.8) when $a_i$ is chosen as in (3.15), such that the processes $\tau(Y_\cdot^{i,x})$ and $Y_\cdot^{i,y}$ are equal in distribution. Moreover, $\tau(T^{-1}(A)) = T^{-1}(A)$, from which (3.16) is obtained.

Lemma 3.21. For each $f \in C^2_{\mathrm{Neu}}$ the process $t \mapsto f(\tilde X_t^i) - \int_0^t L^N f(\tilde X_s^i) \, ds$ is a martingale w.r.t. the filtration generated by $\tilde X^i$.

Proof.
Similarly to Proposition 3.7 one checks for $f \in C^2_{\mathrm{Neu}} \cap C_c(\Omega_i)$ that the function $f_i$ on $\mathbb R^N$, which is defined by $f_i = f \circ H_i \circ T = f \circ \rho_i$ on the set $\{ x \in \mathbb R^N : \|x\|_\infty < 1 - \delta \}$ and $f_i = 0$ on the complement of this set, belongs to the domain of the generator $L^i$ of the Dirichlet form (3.8) with $a = a_i$ as in (3.15) (cf. Remark 3.19). Hence the process $f_i(Y_\cdot^i) - \int_0^\cdot L^i f_i(Y_s^i) \, ds$ is a martingale w.r.t. the filtration generated by $Y^i$, and thus so is $f_i(Y_\cdot^{i,E}) - \int_0^\cdot L^i f_i(Y_s^{i,E}) \, ds$ due to the optional sampling theorem. Obviously, in the last statement the function $f$ can be modified outside of $\Omega_i^{2\delta}$, i.e. it holds also for $f \in C^2_{\mathrm{Neu}}$. Moreover, $f_i(Y_t^{i,E}) = f(\tilde X_t^i)$ and $L^i f_i = (L^N f) \circ \rho_i$ on $\bar E$. Indeed, since $a_i = (D\rho_i^{-1})^t$, $\nabla(f \circ \rho_i) = (D\rho_i)^t \cdot \nabla f \circ \rho_i$ and $\hat q(x) = q_N(T(x)) = q_N(\rho_i(x))$ for all $x \in \bar E$, we obtain

$$\begin{aligned}
L^i f_i &= \operatorname{div}\big( a_i^t a_i (D\rho_i)^t \cdot \nabla f \circ \rho_i \big) + \big\langle a_i^t a_i (D\rho_i)^t \cdot \nabla f \circ \rho_i, \, \nabla(\log q_N \circ \rho_i) \big\rangle \\
&= \operatorname{div}\big( D\rho_i^{-1} \cdot \nabla f \circ \rho_i \big) + \big\langle D\rho_i^{-1} \cdot \nabla f \circ \rho_i, \, (D\rho_i)^t \cdot \nabla \log q_N \circ \rho_i \big\rangle \\
&= \Delta f \circ \rho_i + \langle \nabla f \circ \rho_i, \, \nabla \log q_N \circ \rho_i \rangle = (L^N f) \circ \rho_i.
\end{aligned}$$

Thus, $f(\tilde X_\cdot^i) - \int_0^\cdot L^N f(\tilde X_s^i) \, ds$ is a martingale w.r.t. the filtration generated by $Y^i$ and adapted to the filtration generated by $\tilde X_\cdot^i = H_i \circ T(Y_\cdot^{i,E})$, which establishes the claim.

Proposition 3.22. For every $i$, $\tilde X^i$ is a Feller process; more precisely,

i) for every $t > 0$ and every $f \in C(\Omega_i^{2\delta})$ we have $\tilde P_t^i f \in C(\Omega_i^{2\delta})$,

ii) for every $f \in C(\Omega_i^{2\delta})$, $\lim_{t \downarrow 0} \tilde P_t^i f = f$ pointwise in $\Omega_i^{2\delta}$.

Proof. Since obviously for every continuous $f$ on $\Omega_i^{2\delta}$

$$\tilde P_t^i f(x) = E^x[f(\tilde X_t^i)] = E^x[f(H_i \circ T(Y_t^{i,E}))] = Q_t^E (f \circ H_i \circ T)(x), \qquad t > 0,$$

the result follows from Proposition 3.17 and the continuity of the mappings $H_i$ and $T$.
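The algebra of the reflection maps $H_i$ used in this subsection is easy to check numerically. The following sketch is an illustration only (it is not part of any proof; the dimension, seed and test points are arbitrary choices): it verifies that $H_i$ preserves the strict ordering of $\Sigma_N$, that $H_i \circ H_i = \mathrm{Id}$, and that $H_N$ is the identity.

```python
import numpy as np

def H(i, x):
    """Reflection map H_i on Sigma_N = {0 < x^1 < ... < x^N < 1}.

    H_i fixes the first i coordinates and sends the j-th coordinate,
    j > i, to 1 - (x^{N+i+1-j} - x^i), with the convention x^0 := 0;
    this reverses and shifts the upper block of coordinates.
    """
    N = len(x)
    xi = x[i - 1] if i >= 1 else 0.0          # x^i, with x^0 = 0
    y = np.empty(N)
    y[:i] = x[:i]
    for j in range(i + 1, N + 1):             # j = i+1, ..., N (1-based)
        y[j - 1] = 1.0 - (x[N + i - j] - xi)  # x^{N+i+1-j} is x[N+i-j] 0-based
    return y

rng = np.random.default_rng(0)
N = 6
x = np.sort(rng.uniform(0.0, 1.0, N))         # a point of Sigma_N

for i in range(N + 1):
    y = H(i, x)
    assert np.all(np.diff(y) > 0) and 0 < y[0] and y[-1] < 1  # H_i(x) in Sigma_N
    assert np.allclose(H(i, y), x)                            # involution
assert np.allclose(H(N, x), x)                                # H_N = Id
```

The involution property can also be seen by hand: applying the defining formula twice gives $1 - \big( (1 - (x^j - x^i)) - x^i \big) = x^j$ for every coordinate $j > i$.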

Next we define the process $\tilde X$ with state space $\Sigma_N$ as follows. Let $q_N$ be the initial distribution of $\tilde X$, i.e. $\tilde X_0 \sim q_N$. Choose $i_1 \in \{0, \dots, N\}$ such that $\tilde X_0 \in \Omega_{i_1}^{2\delta}$ and $\operatorname{dist}(\tilde X_0, A^{i_1}) = \max_i \operatorname{dist}(\tilde X_0, A^i)$. We set $\tilde X_t = \tilde X_t^{i_1}$ for $0 \leq t \leq T_1$, where $T_1$ denotes the first hitting time of $A^{i_1}$, i.e. on $[0, T_1]$ the process behaves according to $\tilde P^{i_1}$. Choose now $i_2 \in \{0, \dots, N\}$ such that $\tilde X_{T_1} \in \Omega_{i_2}^{2\delta}$ and $\operatorname{dist}(\tilde X_{T_1}, A^{i_2}) = \max_i \operatorname{dist}(\tilde X_{T_1}, A^i)$. At time $T_1$ the process starts afresh from $\tilde X_{T_1}$ according to $(\tilde P_t^{i_2})$ up to the first time $T_2$ after $T_1$ when it hits $A^{i_2}$. This procedure is repeated forever.

Proposition 3.23. $\tilde X$ is a Feller process.

Proof. Let $(\tilde P_t)$ denote the semigroup associated to $\tilde X$. For $f \in C(\Sigma_N)$, we need to show that $\tilde P_t f \in C(\Sigma_N)$ for every $t > 0$. Let us first show that $\tilde P_{T_n} f \in C(\Sigma_N)$ for every $n$ using an induction argument. For an arbitrary $x \in \Sigma_N$, choose $i_1$ as above depending on $x$ such that $\tilde P_{T_1} f(x) = \tilde P_{T_1}^{i_1} f(x)$. Since $\tilde P_{T_1}^{i_1} f \in C(\Omega_{i_1}^{2\delta})$, we conclude that $\tilde P_{T_1} f$ is continuous in $x$ for every $x$. For arbitrary $n$ and $x \in \Sigma_N$ we have by the strong Markov property

$$\tilde P_{T_{n+1}} f(x) = E^x[f(\tilde X_{T_{n+1}})] = E^x\Big[ E^{\tilde X_{T_n}}\big[ f(\tilde X_{T_{n+1} - T_n}^{i_n}) \big] \Big] = E^x\Big[ \tilde P_{T_{n+1} - T_n}^{i_n} f(\tilde X_{T_n}) \Big] = \tilde P_{T_n} \big( \tilde P_{T_{n+1} - T_n}^{i_n} f \big)(x),$$

and since $\tilde P_{T_{n+1} - T_n}^{i_n} f$ can be extended to a continuous function on $\Sigma_N$, we get $\tilde P_{T_{n+1}} f \in C(\Sigma_N)$ by the induction assumption.

Similarly, one can show that for every $n$ the mapping $x \mapsto E^x\big[ f(\tilde X_t) 1\!\!1_{\{t \in (T_n, T_{n+1}]\}} \big]$ is continuous. Finally, for every $x \in \Sigma_N$

$$\bigg| \tilde P_t f(x) - \sum_{k=0}^{n-1} E^x\big[ f(\tilde X_t) 1\!\!1_{\{t \in (T_k, T_{k+1}]\}} \big] \bigg| = \Big| E^x\big[ f(\tilde X_t) 1\!\!1_{\{t > T_n\}} \big] \Big| \leq \|f\|_\infty \, P^x[t > T_n],$$

and since $T_n \nearrow \infty$ $P^x$-a.s. locally uniformly in $x$ as $n$ tends to infinity, the claim follows.

The proof of Theorem 3.2 is complete once we have shown that the process $\tilde X$ and the original process $X^N$ have the same law.

Lemma 3.24. For $x \in \Sigma_N$ the process $(\tilde X_\cdot^x)$ obtained from conditioning $\tilde X$ to start in $x$ solves the martingale problem for the operator $(L^N, C^2_{\mathrm{Neu}})$ as in (3.2) with starting distribution $\delta_x$.

Proof. Let $\{T_k\}$ be the sequence of strictly increasing stopping times introduced in the construction of the process $\tilde X$. Then for $s < t$,

$$f(\tilde X_t) - f(\tilde X_s) - \int_s^t L^N f(\tilde X_\sigma) \, d\sigma = \sum_k \bigg( f(\tilde X_{(T_{k+1} \vee s) \wedge t}) - f(\tilde X_{(T_k \vee s) \wedge t}) - \int_{(T_k \vee s) \wedge t}^{(T_{k+1} \vee s) \wedge t} L^N f(\tilde X_\sigma) \, d\sigma \bigg).$$

Hence

$$E\bigg[ f(\tilde X_t) - \int_0^t L^N f(\tilde X_\sigma) \, d\sigma \,\Big|\, \mathcal F_s \bigg] = f(\tilde X_s) - \int_0^s L^N f(\tilde X_\sigma) \, d\sigma + \sum_k E\bigg[ f(\tilde X_{(T_{k+1} \vee s) \wedge t}) - f(\tilde X_{(T_k \vee s) \wedge t}) - \int_{(T_k \vee s) \wedge t}^{(T_{k+1} \vee s) \wedge t} L^N f(\tilde X_\sigma) \, d\sigma \,\Big|\, \mathcal F_s \bigg].$$

Using the strong Markov property of the Feller process $\tilde X$ and its pathwise decomposition into pieces of $\{\tilde X^i\}$-trajectories one obtains

$$E\bigg[ f(\tilde X_{(T_{k+1} \vee s) \wedge t}) - f(\tilde X_{(T_k \vee s) \wedge t}) - \int_{(T_k \vee s) \wedge t}^{(T_{k+1} \vee s) \wedge t} L^N f(\tilde X_\sigma) \, d\sigma \,\Big|\, \mathcal F_s \bigg] = E\bigg[ E^{\tilde X_{(T_k \vee s) \wedge t}}\bigg[ f(\tilde X_{\tau \wedge (t-s)}^{i_k}) - f(\tilde X_0^{i_k}) - \int_0^{\tau \wedge (t-s)} L^N f(\tilde X_\sigma^{i_k}) \, d\sigma \bigg] \,\Big|\, \mathcal F_s \bigg], \tag{3.17}$$

where, by construction of $\tilde X$, $\tau$ is the hitting time of the set $A^{i_k}$ for which $\operatorname{dist}(A^{i_k}, \tilde X_{T_k}) = \max_i \operatorname{dist}(A^i, \tilde X_{T_k})$. Lemma 3.21 implies that the inner expectation in (3.17) is zero.

The last ingredient for our proof of Theorem 3.2 is the identification of the process $\tilde X$ with $X^N$. Since both are Markovian and solve the martingale problem for the operator $(L^N, C^2_{\mathrm{Neu}})$, it suffices to show that the martingale problem admits at most one Markovian solution. Clearly, any such solution induces a symmetric sub-Markovian semigroup on $L^2(\Sigma_N, q_N)$ whose generator extends $(L^N, C^2_{\mathrm{Neu}})$. Hence it is enough to establish the following so-called Markov uniqueness property of $(L^N, C^2_{\mathrm{Neu}})$, cf. [31, Definition 1.2], stated in Proposition 3.3.

Proof of Proposition 3.3. Let $H^{1,2}(\Sigma_N, q_N)$ (resp. '$H_0^{1,2}(\Sigma_N, q_N)$' in the notation of [31]) denote the closure of $C^2_{\mathrm{Neu}}$ w.r.t. the norm $\|f\|_1 = \big( \|f\|^2_{L^2(\Sigma_N, q_N)} + \|\nabla f\|^2_{L^2(\Sigma_N, q_N)} \big)^{1/2}$, and let $W^{1,2}$ be the Hilbert space of $L^2(\Sigma_N, q_N)$-functions $f$ whose distributional derivative $Df$ is in $L^2(\Sigma_N, q_N)$, equipped with the norm $\|f\|_1 = \big( \|f\|^2_{L^2(\Sigma_N, q_N)} + \|Df\|^2_{L^2(\Sigma_N, q_N)} \big)^{1/2}$. Clearly, the quadratic form $Q(f, f) = \langle Df, Df \rangle_{L^2(\Sigma_N, q_N)}$, $\mathcal D(Q) = W^{1,2}(\Sigma_N, q_N)$, is a Dirichlet form on $L^2(\Sigma_N, q_N)$. Hence we may use the basic criterion for Markov uniqueness [31, Corollary 3.2], according to which $(L^N, C^2_{\mathrm{Neu}})$ is Markov unique if $H^{1,2} = W^{1,2}$. To prove the latter it obviously suffices to prove that $H^{1,2}$ is dense in $W^{1,2}$.

Again we proceed by localization as follows. For fixed $\delta > 0$ let $\Omega_i = \Omega_i^\delta$ denote the subsets defined in (3.14); then $\{\Omega_i^{3\delta}\}$ constitutes a relatively open covering of $\Sigma_N$ for $\delta$ small enough. Let $\{\eta_i\}_{i=0,\dots,N}$ and $\{\chi_i\}_{i=0,\dots,N}$ be collections of smooth cut-off functions on $\Sigma_N$ such that $\eta_i = 1$ on $\Omega_i^{3\delta}$ and $\operatorname{supp}(\eta_i) \subset \Omega_i^{2\delta}$, and $\chi_i = 1$ on $\Omega_i^{2\delta}$ and $\operatorname{supp}(\chi_i) \subset \Omega_i^\delta$, respectively.

Step 1. In the first step we reduce the problem to functions in $W^{1,2}$ whose support is contained in one of the sets $\Omega_i^{2\delta}$. For arbitrary $f \in W^{1,2}$ we have $f = f \cdot \eta_N + f \cdot (1 - \eta_N) =: f_N + r_N$. Next we write $r_N = r_N \cdot \eta_{N-1} + r_N \cdot (1 - \eta_{N-1}) =: f_{N-1} + r_{N-1}$. Iterating this procedure yields the decomposition $f = \sum_i f_i$ with $f_i \in W^{1,2}(\Sigma_N, q_N)$ and $\operatorname{supp}(f_i) \subset \Omega_i^{2\delta}$. Hence, it suffices to prove that $f_i \in H^{1,2}(\Sigma_N, q_N)$. In Step 2 we will show first that $f_i$ can be approximated w.r.t. $\|\cdot\|_1$ by functions which are smooth up to the boundary, and in Step 3 that such functions can be approximated in $\|\cdot\|_1$ by smooth Neumann functions. We give details for the case $i = N$ only; the other cases can be treated almost the same way by using the maps $H_i$.

Step 2. Let $f_N \in W^{1,2}(\Sigma_N, q_N)$ with $\operatorname{supp}(f_N) \subset \Omega_N^{2\delta}$. Then, obviously $f_N = f_N \cdot \chi_N$, and the restriction of $f_N$ to $\Omega_N^\delta$ belongs to the space $W^{1,2}(\Omega_N^\delta, q_N) = W^{1,2}(\Omega_N^\delta, \hat q)$, where $\hat q$ denotes the modification of $q_N$ according to (3.6). Due to Lemma 3.11 the extension $\hat q(x) = \hat q(T(x))$, $x \in \mathbb R^N$, lies in the Muckenhoupt class $\mathcal A_2$. Further note that $W^{1,2}(\Omega_N^\delta, \hat q) \subset W^{1,2}(\Omega_N^\delta, dx) = H^{1,2}(\Omega_N^\delta, dx)$ if $\beta \leq N+1$, and $W^{1,2}(\Omega_N^\delta, \hat q) \subset W^{1,1}(\Omega_N^\delta, dx) = H^{1,1}(\Omega_N^\delta, dx)$ by the H\"older inequality if $N + 1 < \beta < 2(N+1)$, such that $f_N$ has well defined boundary values in $L^1(\partial \Omega_N, dx)$. Hence we may conclude that the extension $\hat f_N(x) = f_N(T(x))$, $x \in \mathbb R^N$, defines a weakly differentiable function on $\mathbb R^N$ with $\| \hat f_N \|^2_{W^{1,2}(\mathbb R^N, \hat q)} = 2^N \cdot N! \, \| f_N \|^2_{W^{1,2}(\Omega_N, \hat q)}$. By [46, Theorem 2.5] the mollification with the standard mollifier yields an approximating sequence $\{u_l\}_l$ of smooth functions $u_l \in C_0^\infty(\mathbb R^N)$ of $\hat f_N$ in the weighted Sobolev space $H^{1,2}(\mathbb R^N, \hat q)$. Hence, the restriction of the sequence $u_l \cdot \chi_N$ to $\Omega_N^\delta$ approximates the restriction of $\hat f_N \cdot \chi_N$ to $\Omega_N^\delta$ in $H^{1,2}(\Omega_N^\delta, \hat q)$. Recall that $\hat f_N = f_N \cdot \chi_N$ on $\Omega_N^\delta$. Since $\chi_N$ is zero on $\Sigma_N \setminus \Omega_N^\delta$, we obtain a sequence of $C^\infty(\Sigma_N)$-functions $\{u_l \cdot \chi_N\}_l$ which converges to $f_N$ in $H^{1,2}(\Sigma_N, q_N)$. This finishes the second step.

Step 3. In the third step we may thus assume w.l.o.g. that $f_N$ is smooth up to the boundary of $\Sigma_N$. In particular, $f_N$ is globally Lipschitz. Since $f_N$ is integrable on $\Sigma_N$ we may modify $f_N$ close to the boundary to obtain a Lipschitz function $\tilde f_N$ which satisfies the Neumann condition and which is close to $f_N$ in $\|\cdot\|_1$. (Take, e.g., $\tilde f_N(x) = f(\pi(x))$, where $\pi(x)$ is the projection of $x$ into the set $\Sigma_N^\epsilon = \{ x \in \Sigma_N \mid \operatorname{dist}(x, \partial \Sigma_N) \geq \epsilon \}$ for small $\epsilon > 0$.) We may now proceed as in Step 2 to obtain an approximation of $\tilde f_N$ by smooth functions w.r.t. $\|\cdot\|_1$, where we note that neither the extension by reflection through the map $T$ nor the standard mollification in [46] of the extended $\tilde f_N$ destroys the Neumann boundary condition.

Corollary 3.25. For quasi-every $x \in \Sigma_N$ the processes $\tilde X_\cdot^x$ and $X_\cdot^x$ are equal in law. In particular, $X$ is a Feller process on $\Sigma_N$.

3.4 Semi-Martingale Properties

In this final section we prove the semi-martingale properties of $X^N$ stated in Theorem 3.5. To that aim we establish the semi-martingale properties for the symmetric process $X^N$ started in equilibrium, which imply the semi-martingale properties to hold for quasi-every starting point $x \in \Sigma_N$, and, by the Feller properties proven in the last section, for every starting point $x \in \Sigma_N$.

In order to establish the semi-martingale properties of the latter, we shall use the following criterion established by Fukushima in [38]. For every open set $G \subset \Sigma_N$ we set

$$\mathcal C_G := \{ u \in \mathcal D(\mathcal E^N) \cap C(\Sigma_N) : \operatorname{supp}(u) \subset G \}.$$

Theorem 3.26. For $u \in \mathcal D(\mathcal E^N)$ the additive functional $u(X_t^N) - u(X_0^N)$ is a semi-martingale if and only if one of the following (equivalent) conditions holds:

i) For any relatively compact open set $G \subset \Sigma_N$, there is a positive constant $C_G$ such that

$$|\mathcal E^N(u, v)| \leq C_G \, \|v\|_\infty \qquad \forall v \in \mathcal C_G. \tag{3.18}$$

ii) There exists a signed Radon measure ν on ΣN charging no set of zero capacity such that

$$\mathcal E^N(u, v) = - \int_{\Sigma_N} v(x) \, \nu(dx) \qquad \forall v \in C(\Sigma_N) \cap \mathcal D(\mathcal E^N). \tag{3.19}$$

Proof. See Theorem 6.3 in [38].

Theorem 3.27. Let $X^N$ be a symmetric diffusion on $\Sigma_N$ associated with the Dirichlet form $\mathcal E^N$; then $X^N$ is an $\mathbb R^N$-valued semi-martingale if and only if $\beta/(N+1) \geq 1$.

Proof. Since the semi-martingale property for $\mathbb R^N$-valued diffusions is defined componentwise, we shall apply Fukushima's criterion for $u(x) = x^i$, $i = 1, \dots, N$. Let us first consider the case where $\beta' := \beta/(N+1) > 1$. Then, for a relatively compact open set $G \subset \Sigma_N$ and $v \in \mathcal C_G$,

$$\mathcal E^N(u, v) = \int_{\Sigma_N} \frac{\partial v}{\partial x^i}(x) \, q_N(dx) = \frac{1}{Z_\beta} \int_0^1\! dx^1 \cdots \int_{x^{i-2}}^1\! dx^{i-1} \int_{x^{i-1}}^1\! dx^{i+1} \cdots \int_{x^{N-1}}^1\! dx^N \prod_{\substack{j=0 \\ j \neq i-1,\, i}}^{N} (x^{j+1} - x^j)^{\beta'-1} \int_{x^{i-1}}^{x^{i+1}} \frac{\partial v}{\partial x^i}(x) \, (x^i - x^{i-1})^{\beta'-1} (x^{i+1} - x^i)^{\beta'-1} \, dx^i.$$

Since $\beta' > 1$, we obtain by integration by parts

$$\int_{x^{i-1}}^{x^{i+1}} \frac{\partial v}{\partial x^i}(x) \, (x^i - x^{i-1})^{\beta'-1} (x^{i+1} - x^i)^{\beta'-1} \, dx^i = -(\beta' - 1) \int_{x^{i-1}}^{x^{i+1}} v(x) \Big[ (x^i - x^{i-1})^{\beta'-2} (x^{i+1} - x^i)^{\beta'-1} - (x^i - x^{i-1})^{\beta'-1} (x^{i+1} - x^i)^{\beta'-2} \Big] \, dx^i,$$

so that

$$\bigg| \int_{x^{i-1}}^{x^{i+1}} \frac{\partial v}{\partial x^i}(x) \, (x^i - x^{i-1})^{\beta'-1} (x^{i+1} - x^i)^{\beta'-1} \, dx^i \bigg| \leq (\beta'-1) \, \|v\|_\infty \bigg( \int_{x^{i-1}}^{x^{i+1}} (x^i - x^{i-1})^{\beta'-2} \, dx^i + \int_{x^{i-1}}^{x^{i+1}} (x^{i+1} - x^i)^{\beta'-2} \, dx^i \bigg) \leq 2(\beta'-1) \, \|v\|_\infty \int_0^1 r^{\beta'-2} \, dr \leq C \, \|v\|_\infty,$$

and we obtain that condition (3.18) holds. If $\beta' = 1$, i.e. the measure $q_N$ coincides with the normalized Lebesgue measure on $\Sigma_N$, condition (3.18) follows easily by a similar argument.

Let now $\beta' < 1$ and let us assume that $u(X_t^N) - u(X_0^N)$ is a semi-martingale. Then, there exists a signed Radon measure $\nu$ on $\Sigma_N$ satisfying (3.19). Let $\nu = \nu_1 - \nu_2$ be the Jordan decomposition of $\nu$, i.e. $\nu_1$ and $\nu_2$ are positive Radon measures. By the above calculations we have for each relatively compact open set $G \subset \Sigma_N$ and for all $v \in \mathcal C_G$

$$\mathcal E^N(u, v) = (\beta' - 1) \, \frac{1}{Z_\beta} \int_G v(x) \prod_{\substack{j=0 \\ j \neq i-1,\, i}}^{N} (x^{j+1} - x^j)^{\beta'-1} \Big[ (x^i - x^{i-1})^{\beta'-2} (x^{i+1} - x^i)^{\beta'-1} - (x^i - x^{i-1})^{\beta'-1} (x^{i+1} - x^i)^{\beta'-2} \Big] \, dx.$$

Hence, we obtain for the Jordan decomposition $\nu = \nu_1 - \nu_2$ that

$$\nu_1(G) = (1 - \beta') \, \frac{1}{Z_\beta} \int_G (x^{i+1} - x^i)^{\beta'-2} \prod_{\substack{j=0 \\ j \neq i}}^{N} (x^{j+1} - x^j)^{\beta'-1} \, dx,$$

$$\nu_2(G) = (1 - \beta') \, \frac{1}{Z_\beta} \int_G (x^i - x^{i-1})^{\beta'-2} \prod_{\substack{j=0 \\ j \neq i-1}}^{N} (x^{j+1} - x^j)^{\beta'-1} \, dx.$$

Set $\partial \Sigma_N^j := \{ x \in \partial \Sigma_N : x^j = x^{j+1} \}$, $j = 0, \dots, N$, and let for some $x_0 \in \partial \Sigma_N^i$ and $r > 0$, $A := (x_0 + [-r, r]^N) \cap \Sigma_N$ be such that $\operatorname{dist}(A, \partial \Sigma_N^j) > 0$ for all $j \neq i$. Furthermore, let $(A_n)_n$ be a sequence of compact subsets of $A$ such that $A_n \uparrow A$ and $\operatorname{dist}(A_n, \partial \Sigma_N^i) > 0$ for every $n$. By the inner regularity of the Radon measures $\nu_1$ and $\nu_2$ we have $\nu_1(A) = \lim_n \nu_1(A_n)$ and $\nu_2(A) = \lim_n \nu_2(A_n)$. Since $\beta' - 2 < -1$, we get by the choice of $A$ that $\nu_1(A) = \infty$, while $\nu_2(A) < \infty$, which contradicts the local finiteness of $\nu_1$ and hence of $\nu$.

Chapter 4

Weak Convergence to the Wasserstein Diffusion

4.1 Introduction

The large scale behaviour of stochastic interacting particle systems is often described by linear or nonlinear deterministic evolution equations in the hydrodynamic limit, a fact which can be understood as a dynamic version of the law of large numbers, cf. e.g. [47]. Analogously, the fluctuations around such a deterministic limit usually lead to linear Ornstein-Uhlenbeck type stochastic partial differential equations (SPDE) on the diffusive time scale.

Here we add one more example to the collection of (in this case conservative) interacting particle systems with a nonlinear stochastic evolution in the hydrodynamic limit. Again we consider the process $(X_\cdot^N) \in \Sigma_N$ of $N$ moving particles on the unit interval, which has been introduced and studied in the last chapter. Recall that $(X_t^N) = (x_t^1, \dots, x_t^N)$ is a formal solution of the system

$$dx_t^i = \Big( \frac{\beta}{N+1} - 1 \Big) \bigg( \frac{1}{x_t^i - x_t^{i-1}} - \frac{1}{x_t^{i+1} - x_t^i} \bigg) \, dt + \sqrt 2 \, dw_t^i + dl_t^{i-1} - dl_t^i, \qquad i = 1, \dots, N, \tag{4.1}$$

driven by independent real Brownian motions $\{w^i\}$ and local times $l^i$ satisfying

$$dl_t^i \geq 0, \qquad l_t^i = \int_0^t 1\!\!1_{\{x_s^i = x_s^{i+1}\}} \, dl_s^i. \tag{4.2}$$

At first sight equation (4.1) resembles familiar Dyson-type models of interacting Brownian motions with electrostatic interaction. Apart from the fact that (4.1) models a nearest-neighbour and not a mean-field interaction, the most important difference from the Dyson model is, however, that in the present case for $N \geq \beta - 1$ the drift is attractive and not repulsive. One technical consequence, as we have seen in the last chapter, is that the system (4.1) and (4.2) has to be understood properly, because it may no longer be defined in the class of Euclidean semi-martingales. The second and more dramatic consequence is a clustering of hence strongly correlated particles such that fluctuations are seen on large hydrodynamic scales.
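For intuition, the dynamics (4.1) and (4.2) can be explored with a crude numerical scheme. The sketch below is a hedged illustration only, not a faithful discretization of the reflected system: the singular drift is regularized by a small cut-off, and the local-time terms are imitated by re-sorting the configuration after each step and mirror-reflecting it into $[0,1]$; all numerical parameters ($N$, $\beta$, step size, number of steps) are arbitrary choices.

```python
import numpy as np

def simulate(N=20, beta=1.0, dt=1e-6, steps=2000, seed=1):
    """Crude Euler scheme for the particle system (4.1)-(4.2).

    The local times dl^i are mimicked by re-sorting after each step
    (a heuristic stand-in for reflection at coincidence points
    x^i = x^{i+1}); the walls x^0 = 0 and x^{N+1} = 1 enter through
    the drift and a mirror reflection into [0, 1].
    """
    rng = np.random.default_rng(seed)
    x = np.sort(rng.uniform(0.0, 1.0, N))
    c = beta / (N + 1) - 1.0   # drift coefficient; attractive for N >= beta - 1
    eps = 1e-3                 # regularization of the singular drift
    for _ in range(steps):
        xl = np.concatenate(([0.0], x[:-1]))  # left neighbours, x^0 = 0
        xr = np.concatenate((x[1:], [1.0]))   # right neighbours, x^{N+1} = 1
        drift = c * (1.0 / (x - xl + eps) - 1.0 / (xr - x + eps))
        x = x + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(N)
        x = np.sort(1.0 - np.abs(1.0 - np.abs(x)))  # mirror into [0,1], restore order
    return x

x = simulate()
```

Running this with $\beta$ small compared to $N+1$ visibly produces the clustering of particles described above, while large $\beta$ keeps the configuration spread out.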


More precisely, we show that for properly chosen initial condition the empirical probability distribution of the particle system in the high density regime,

$$\mu_t^N = \frac 1 N \sum_{i=1}^N \delta_{x_{N \cdot t}^i},$$

converges for $N \to \infty$ to the Wasserstein diffusion $(\mu_t)$ on the space $\mathcal P([0,1])$ of probability measures, which was introduced in [66] as a conservative model for a diffusing fluid when its heat flow is perturbed by a kinetically uniform random forcing. In particular, $(\mu_t)$ is a solution, in the sense of an associated martingale problem, to the SPDE

$$d\mu_t = \beta \Delta \mu_t \, dt + \Gamma(\mu_t) \, dt + \operatorname{div}\big( \sqrt{2 \mu_t} \, dB_t \big), \tag{4.3}$$

where $\Delta$ is the Neumann Laplace operator, $dB_t$ is space-time white noise over $[0,1]$ and, for $\mu \in \mathcal P([0,1])$, $\Gamma(\mu) \in \mathcal D'([0,1])$ is the Schwartz distribution acting on $f \in C^\infty([0,1])$ by

$$\langle \Gamma(\mu), f \rangle = \sum_{I \in \operatorname{gaps}(\mu)} \bigg[ \frac{f''(I_-) + f''(I_+)}{2} - \frac{f'(I_+) - f'(I_-)}{|I|} \bigg] - \frac{f''(0) + f''(1)}{2},$$

where $\operatorname{gaps}(\mu)$ denotes the set of components $I = \,]I_-, I_+[$ of maximal length with $\mu(I) = 0$, and $|I|$ denotes the length of such a component.

The SPDE (4.3) has a familiar structure. For instance, the Dawson-Watanabe ('super-Brownian motion') process solves $d\mu_t = \Delta \mu_t \, dt + \sqrt{2\mu_t} \, dB_t$, whereas the empirical measure of a countable family of independent Brownian motions satisfies the equation $d\mu_t = \Delta \mu_t \, dt + \operatorname{div}(\sqrt{2\mu_t} \, dB_t)$, both again in the weak sense of the associated martingale problems, cf. e.g. [24]. The additional nonlinearity introduced through the operator $\Gamma$ into (4.3) is crucial for the construction of $(\mu_t)$ by Dirichlet form methods, because it guarantees the existence of a reversible measure $\mathbb P^\beta$ on $\mathcal P([0,1])$, which plays a central role for the convergence result, too. For $\beta > 0$, $\mathbb P^\beta$ can be defined as the law of the random probability measure $\eta \in \mathcal P([0,1])$ defined by

$$\langle f, \eta \rangle = \int_0^1 f(D_t^\beta) \, dt \qquad \forall f \in C([0,1]),$$

where $t \mapsto D_t^\beta = \gamma_{t \cdot \beta} / \gamma_\beta$ is the real valued Dirichlet process over $[0,1]$ with parameter $\beta$ and $\gamma$ denotes the standard Gamma subordinator. It is argued in [66] that $\mathbb P^\beta$ admits the formal Gibbsean representation

$$\mathbb P^\beta(d\mu) = \frac 1 Z \, e^{-\beta \operatorname{Ent}(\mu)} \, \mathbb P^0(d\mu)$$

with the Boltzmann entropy $\operatorname{Ent}(\mu) = \int_{[0,1]} \log(d\mu/dx) \, d\mu$ as Hamiltonian and a particular uniform measure $\mathbb P^0$ on $\mathcal P([0,1])$, which illustrates the non-Gaussian character of $\mathbb P^\beta$. In particular, $\mathbb P^\beta$ is not log-concave. However, an appropriate version of the Girsanov formula holds true for $\mathbb P^\beta$, see also [67], which implies the $L^2(\mathcal P([0,1]), \mathbb P^\beta)$-closability of the quadratic form

$$\mathcal E(F, F) = \int_{\mathcal P([0,1])} \| \nabla^w F \|^2_\mu \, \mathbb P^\beta(d\mu), \qquad F \in \mathcal Z,$$

on the class

$$\mathcal Z = \bigg\{ F : \mathcal P([0,1]) \to \mathbb R \;\Big|\; F(\mu) = f\big( \langle \phi_1, \mu \rangle, \langle \phi_2, \mu \rangle, \dots, \langle \phi_k, \mu \rangle \big), \; f \in C_c^\infty(\mathbb R^k), \; \{\phi_i\}_{i=1}^k \subset C^\infty([0,1]), \; k \in \mathbb N \bigg\},$$

where

$$\| \nabla^w F \|^2_\mu = \big\| (D_\mu F)'(\cdot) \big\|^2_{L^2([0,1],\mu)} \qquad \text{and} \qquad (D_\mu F)(x) = \partial_t|_{t=0} \, F(\mu + t \delta_x).$$

The corresponding closure, still denoted by $\mathcal E$, is a local regular Dirichlet form on the compact space $(\mathcal P([0,1]), \tau_w)$ of probability measures equipped with the weak topology. This allows to construct a unique Hunt diffusion process $\big( (\mathbb P_\eta)_{\eta \in \mathcal P([0,1])}, (\mu_t)_{t \geq 0} \big)$ properly associated with $\mathcal E$, cf. [39]. Starting $(\mu_t)$ from equilibrium $\mathbb P^\beta$ yields what shall be called in the sequel exclusively the Wasserstein diffusion, because its intrinsic metric coincides with the $L^2$-Wasserstein distance $d_W$, defined by

$$d_W(\mu, \nu) := \inf_\gamma \bigg( \iint_{[0,1]^2} |x - y|^2 \, \gamma(dx, dy) \bigg)^{1/2},$$

where the infimum is taken over all probability measures $\gamma \in \mathcal P([0,1]^2)$ having marginals $\mu$ and $\nu$.

Our approach for the approximation result is again based on Dirichlet form methods. We use that the convergence of symmetric Markov semigroups is equivalent to an amplified notion of Gamma-convergence [52] of the associated sequence of quadratic forms $\mathcal E^N$. In our situation it suffices to verify that equations (4.1) and (4.2) define a sequence of reversible finite dimensional particle systems whose equilibrium distributions converge nicely enough to $\mathbb P^\beta$. In particular we show that also the logarithmic derivatives converge in an appropriate $L^2$-sense, which implies the Mosco-convergence of the Dirichlet forms. (The pointwise convergence of the same sequence $\mathcal E^N$ to $\mathcal E$ has been used in a recent work by Döring and Stannat to establish the logarithmic Sobolev inequality for $\mathcal E$, cf. [27].) Since the approximating state spaces are finite dimensional, we employ [50] for a generalized framework of Mosco convergence of forms defined on a scale of Hilbert spaces. In case of a fixed state space with varying reference measure the criterion of $L^2$-convergence of the associated logarithmic derivatives has been studied e.g. in [48]. However, in our case this result does not directly apply, because in particular the metric, and hence also the divergence operation itself, depends on the parameter $N$. However, only little effort is needed to see that things match up nicely, cf. Section 4.4.4.

A much more subtle point is the assumption of Markov uniqueness of the form $\mathcal E$ which we have to impose for the identification of the limit. By this we mean that $\mathcal E$ is a maximal element in the class of (not necessarily regular) Dirichlet forms on $L^2(\mathcal P([0,1]), \mathbb P^\beta)$, a property which is closely related to the Meyers-Serrin ('weak = strong') property of the corresponding Sobolev space, cf. Corollary 4.20 and [31, 49]. Variants of this assumption appear in several quite similar infinite dimensional contexts as well [41, 48], and the verification depends crucially on the integration by parts formula, which in the present case of $\mathbb P^\beta$ has a very peculiar structure. By general principles, Markov uniqueness of $\mathcal E$ is weaker than the essential self-adjointness of the generator of $(\mu_t)_{t \geq 0}$ on

$$\mathcal Z_{\mathrm{Neu}} = \big\{ F \in \mathcal Z \mid F(\mu) = f(\langle \phi_1, \mu \rangle, \dots, \langle \phi_k, \mu \rangle), \; \phi_i'(0) = \phi_i'(1) = 0, \; i = 1, \dots, k \big\}$$

and stronger than the well-posedness, i.e. uniqueness, in the class of Hunt processes on $\mathcal P([0,1])$ of the martingale problem defined by equation (4.3) on the set of test functions $\mathcal Z_{\mathrm{Neu}}$, cf. [3, Theorem 3.4].

For the sake of a clearer presentation in the proofs we will work with a parametrization of a probability measure by the generalized right continuous inverse of its distribution function, which however is mathematically inessential. A side result of this parametrization is a diffusive scaling limit result for a (1+1)-dimensional gradient interface model with non-convex interaction potential (cf. [40]), see Section 4.5. The material presented in this chapter is contained in [9].

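The representation of $\mathbb P^\beta$ through the Dirichlet process $D^\beta$ above is convenient for simulation: over a grid the standard Gamma subordinator has independent Gamma-distributed increments, so samples of $\langle f, \eta \rangle$ are easy to generate. The following sketch is an illustration only (grid size, sample count and test function are arbitrary choices), approximating the time integral by a Riemann sum over the grid:

```python
import numpy as np

def sample_dirichlet_process(beta, n, rng):
    """One grid sample of t -> D_t^beta = gamma_{t*beta} / gamma_beta.

    Over the grid t_k = k/n the standard Gamma subordinator has
    independent Gamma(beta/n, 1)-distributed increments, so normalizing
    the cumulative sums gives the values of D^beta at the grid points.
    """
    increments = rng.gamma(beta / n, 1.0, size=n)
    g = np.cumsum(increments)
    return g / g[-1]

rng = np.random.default_rng(2)
beta, n = 2.0, 500

# Monte Carlo samples of <f, eta> = int_0^1 f(D_t^beta) dt for f(s) = s^2;
# the time integral is approximated by a Riemann sum over the grid.
f = lambda s: s ** 2
samples = np.array([np.mean(f(sample_dirichlet_process(beta, n, rng)))
                    for _ in range(400)])
```

A histogram of `samples` illustrates the non-Gaussian character of $\mathbb P^\beta$ noted above, especially for small $\beta$, where most of the mass of $D^\beta$ sits on a few large jumps.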
4.2 Main Result

Let us briefly recall that the process $X_t^N = (x_t^1, \dots, x_t^N)$ on the closure of $\Sigma_N := \{ x \in \mathbb R^N : 0 < x^1 < \cdots < x^N < 1 \}$ formally solves (4.1) and (4.2) and has the invariant measure

$$q_N(dx^1, \dots, dx^N) = \frac{\Gamma(\beta)}{\big( \Gamma(\frac{\beta}{N+1}) \big)^{N+1}} \prod_{i=1}^{N+1} (x^i - x^{i-1})^{\frac{\beta}{N+1} - 1} \, dx^1 \dots dx^N,$$

with the conventions $x^0 := 0$ and $x^{N+1} := 1$. It can be rigorously constructed as the $q_N$-symmetric Hunt diffusion $\big( (P_x)_{x \in \Sigma_N}, (X_t^N)_{t \geq 0} \big)$ associated to the local regular Dirichlet form $\mathcal E^N$ obtained as the $L^2(\Sigma_N, q_N)$-closure of

$$\mathcal E^N(f, g) = \int_{\Sigma_N} \nabla f(x) \cdot \nabla g(x) \, q_N(dx), \qquad f, g \in C^\infty(\Sigma_N).$$

Now we can state the main result of this chapter.

Theorem 4.1. For $\beta > 0$, assume that the Wasserstein Dirichlet form $\mathcal E$ is Markov unique. Let $(X_t^N)$ denote the $q_N$-symmetric diffusion on $\Sigma_N$ induced from the Dirichlet form $\mathcal E^N$, starting from equilibrium $X_0^N \sim q_N$, and let $\mu_t^N = \frac 1 N \sum_{i=1}^N \delta_{x_{N \cdot t}^i} \in \mathcal P([0,1])$. Then the sequence of processes $(\mu_\cdot^N)$ converges weakly to $(\mu_\cdot)$ in $C_{\mathbb R_+}(\mathcal P([0,1]), \tau_w)$ for $N \to \infty$.

4.3 Tightness

As usual we show compactness of the laws of $(\mu_\cdot^N)$ and, in a second step, the uniqueness of the limit.

Proposition 4.2. The sequence $(\mu_\cdot^N)$ is tight in $C_{\mathbb R_+}((\mathcal P([0,1]), \tau_w))$.

Proof. According to Theorem 3.7.1 in [24] it is sufficient to show that the sequence $(\langle f, \mu_\cdot^N \rangle)_{N \in \mathbb N}$ is tight, where $f$ is taken from a dense subset $\mathcal F \subset C([0,1])$. Choose

$$\mathcal F := \{ f \in C^3([0,1]) \mid f'(0) = f'(1) = 0 \};$$

then $\langle f, \mu_t^N \rangle = F^N(X_{N \cdot t}^N)$ with

$$F^N(x) = \frac 1 N \sum_{i=1}^N f(x^i).$$

The condition $f'(0) = f'(1) = 0$ implies $F^N \in C^2_{\mathrm{Neu}}$ and

$$N \cdot L^N F^N(x) = \frac{\beta}{N+1} \sum_{i=1}^{N+1} \frac{f'(x^i) - f'(x^{i-1})}{x^i - x^{i-1}} - \sum_{i=1}^{N+1} \frac{f'(x^i) - f'(x^{i-1})}{x^i - x^{i-1}} + \sum_{i=1}^{N} f''(x^i),$$

which can be written as

$$N \cdot L^N F^N(x) = \frac{\beta}{N+1} \sum_{i=1}^{N+1} \frac{f'(x^i) - f'(x^{i-1})}{x^i - x^{i-1}} - \sum_{i=1}^{N} \bigg( \frac{f'(x^i) - f'(x^{i-1})}{x^i - x^{i-1}} - f''(x^i) \bigg) - \frac{f'(x^{N+1}) - f'(x^N)}{x^{N+1} - x^N}.$$

Finally, this can be estimated as follows:

$$| N \cdot L^N F^N(x) | \leq \| f'' \|_\infty (\beta + 1) + \| f''' \|_\infty =: C(\beta, \|f\|_{C^3([0,1])}). \tag{4.4}$$
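The bound (4.4) can be sanity-checked numerically. The sketch below evaluates the summation-by-parts form of $N \cdot L^N F^N$ used above, with the conventions $x^0 = 0$ and $x^{N+1} = 1$, for the test function $f(x) = \cos(\pi x)$, which satisfies the Neumann condition $f'(0) = f'(1) = 0$; the dimension, $\beta$ and sample count are arbitrary illustrative choices.

```python
import numpy as np

# f(x) = cos(pi x): f'(0) = f'(1) = 0, ||f''|| = pi^2, ||f'''|| = pi^3.
fp = lambda x: -np.pi * np.sin(np.pi * x)        # f'
fpp = lambda x: -np.pi ** 2 * np.cos(np.pi * x)  # f''

def NLF(x, beta):
    """N * L^N F^N(x) in the form
    (beta/(N+1) - 1) * sum_{i=1}^{N+1} (f'(x^i)-f'(x^{i-1}))/(x^i-x^{i-1})
      + sum_{i=1}^{N} f''(x^i),  with x^0 = 0, x^{N+1} = 1."""
    N = len(x)
    xe = np.concatenate(([0.0], x, [1.0]))       # x^0, ..., x^{N+1}
    dq = np.diff(fp(xe)) / np.diff(xe)           # difference quotients of f'
    return (beta / (N + 1) - 1.0) * dq.sum() + fpp(x).sum()

rng = np.random.default_rng(4)
beta, N = 2.0, 50
bound = np.pi ** 2 * (beta + 1.0) + np.pi ** 3   # ||f''||(beta+1) + ||f'''||
vals = [abs(NLF(np.sort(rng.uniform(0, 1, N)), beta)) for _ in range(200)]
```

In every trial `max(vals)` stays below `bound`, consistent with (4.4) and, in particular, uniform in the configuration.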

This implies a Lipschitz bound, uniform in $N$, for the BV part in the Doob-Meyer decomposition of $F^N(X_{N\cdot}^N)$. The process $X^N$ has continuous sample paths with square field operator $\Gamma^N(F, F) = L^N(F^2) - 2 F \cdot L^N F = |\nabla F|^2$. Hence the quadratic variation of the martingale part of $F^N(X_{N\cdot}^N)$ satisfies

$$[F^N(X_{N\cdot}^N)]_t - [F^N(X_{N\cdot}^N)]_s = \int_s^t N \, |\nabla F^N|^2(X_{N r}^N) \, dr = \int_s^t \frac 1 N \sum_{i=1}^N (f')^2(x_{N r}^i) \, dr \leq (t-s) \, \| f' \|_\infty^2. \tag{4.5}$$

Since

$$F^N(X_0^N) = \frac 1 N \sum_{i=1}^N f\big( D_{i/(N+1)}^\beta \big) \longrightarrow \int_0^1 f(D_s^\beta) \, ds \qquad \mathbb Q^\beta\text{-a.s.},$$

the law of $F^N(X_0^N)$ is convergent, and by stationarity we conclude that also the law of $F^N(X_{N t}^N)$ is convergent for every $t$. Using now Aldous' tightness criterion, the assertion follows once we have shown that

$$E\Big[ \big| F^N(X_{N(\tau_N + \delta_N)}^N) - F^N(X_{N \tau_N}^N) \big| \Big] \longrightarrow 0 \qquad \text{as } N \to \infty, \tag{4.6}$$

for any given sequence $\delta_N \downarrow 0$ and any given sequence of bounded stopping times $(\tau_N)$. Now, the Doob-Meyer decomposition reads as

$$F^N(X_{N(\tau_N + \delta_N)}^N) - F^N(X_{N \tau_N}^N) = M_{\tau_N + \delta_N}^N - M_{\tau_N}^N + \int_{\tau_N}^{\tau_N + \delta_N} N \cdot L^N F^N(X_{N s}^N) \, ds,$$

where $M^N$ is a martingale with quadratic variation $[M^N]_t = [F^N(X_{N\cdot}^N)]_t$. Using (4.4) and (4.5) we get

$$\begin{aligned}
E\Big[ \big| F^N(X_{N(\tau_N + \delta_N)}^N) - F^N(X_{N \tau_N}^N) \big| \Big]
&\leq E\big[ | M_{\tau_N + \delta_N}^N - M_{\tau_N}^N | \big] + C_1 \delta_N \\
&\leq E\big[ | M_{\tau_N + \delta_N}^N - M_{\tau_N}^N |^2 \big]^{1/2} + C_1 \delta_N \\
&= C_2 \, E\big[ [M^N]_{\tau_N + \delta_N} - [M^N]_{\tau_N} \big]^{1/2} + C_1 \delta_N \\
&\leq C_3 \big( \delta_N^{1/2} + \delta_N \big),
\end{aligned}$$

for some positive constants $C_i$, which implies (4.6).

The argument above shows the balance of first and second order parts of $N \cdot L^N$ as $N$ tends to infinity. Alternatively, one could use the symmetry of $(X_\cdot^N)$ and apply the Lyons-Zheng decomposition for the same result. We provide here some details. For some fixed time $T > 0$ we use the Lyons-Zheng decomposition (see e.g. Theorem 5.7.1 in [39]) to obtain

$$F^N(X_{N t}^N) - F^N(X_0^N) = \frac 1 2 \Big( M_t - (\hat M_T - \hat M_{T-t}) \Big),$$

where $M$ is a martingale w.r.t. the filtration $\mathcal F_t = \sigma(X_{N s}^N; \, 0 \leq s \leq t)$ and $\hat M$ is a martingale w.r.t. the filtration $\hat{\mathcal F}_t = \sigma(X_{N(T-s)}^N; \, 0 \leq s \leq t)$, $0 \leq t \leq T$. Moreover, the quadratic variation of $M$ is given by

$$[M]_t = \int_0^t \Gamma^N(F^N, F^N)(X_{N s}^N) \, N \, ds = \int_0^t N \, |\nabla F^N|^2(X_{N s}^N) \, ds,$$

and by symmetry we have $[\hat M]_t = [M]_t$, $0 \leq t \leq T$. Hence, we obtain using again (4.5)

$$\begin{aligned}
E\big[ | F^N(X_{N t}^N) - F^N(X_{N s}^N) | \big]
&\leq \frac 1 2 \, E\big[ | M_t - M_s | \big] + \frac 1 2 \, E\big[ | \hat M_{T-t} - \hat M_{T-s} | \big] \\
&\leq \frac 1 2 \, E\big[ | M_t - M_s |^2 \big]^{1/2} + \frac 1 2 \, E\big[ | \hat M_{T-t} - \hat M_{T-s} |^2 \big]^{1/2} \\
&\leq \frac 1 2 \, E\big[ | [M]_t - [M]_s | \big]^{1/2} + \frac 1 2 \, E\big[ | [\hat M]_{T-t} - [\hat M]_{T-s} | \big]^{1/2} \\
&\leq C \, |t-s|^{1/2},
\end{aligned}$$

for some positive constant $C$. Tightness now follows e.g. by Theorem 7.2 in Chapter 3 of [33].

4.4 Identification of the Limit

4.4.1 The $\mathcal{G}$-Parameterization

In order to identify the limit of the sequence $(\mu^N_\cdot)$ we parameterize the space $\mathcal{P}([0,1])$ in terms of right continuous quantile functions, cf. e.g. [65, 66]. The set

\[
\mathcal{G} \;=\; \bigl\{\, g : [0,1) \to [0,1] \;\bigm|\; g \text{ c\`adl\`ag nondecreasing} \,\bigr\},
\]

equipped with the $L^2([0,1],dx)$ distance $d_{L^2}$, is a compact subspace of $L^2([0,1],dx)$. It is homeomorphic to $(\mathcal{P}([0,1]),\tau_w)$ by means of the map
\[
\rho : \mathcal{G} \to \mathcal{P}([0,1]), \qquad g \mapsto g_*(dx),
\]
which takes a function $g \in \mathcal{G}$ to the image measure of $dx$ under $g$. The inverse map $\kappa = \rho^{-1} : \mathcal{P}([0,1]) \to \mathcal{G}$ is realized by taking the right continuous quantile function, i.e.
\[
g_\mu(t) := \inf\bigl\{\, s \in [0,1] : \mu[0,s] > t \,\bigr\}.
\]
For technical reasons we introduce the following modification of $(\mu^N_\cdot)$ which is better behaved in terms of the map $\kappa$.
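The correspondence between measures and quantile functions is concrete enough to experiment with numerically. The following sketch (the helper name `quantile_function` and the discrete test measure are ours, purely illustrative) computes the right continuous quantile function $g_\mu$ of a discrete measure and checks that pushing Lebesgue measure forward under $g_\mu$ recovers the masses of $\mu$:

```python
import numpy as np

def quantile_function(atoms, weights, ts):
    """Right continuous quantile function g_mu(t) = inf{s : mu[0,s] > t}
    for a discrete measure mu = sum_j weights[j] * delta_{atoms[j]}."""
    order = np.argsort(atoms)
    atoms, weights = np.asarray(atoms, float)[order], np.asarray(weights, float)[order]
    cdf = np.cumsum(weights)                       # mu[0, atoms[j]]
    # smallest atom whose cumulative mass strictly exceeds t (side="right"
    # gives exactly the right continuous version)
    idx = np.searchsorted(cdf, ts, side="right")
    return atoms[np.minimum(idx, len(atoms) - 1)]

# mu = 0.5*delta_{0.2} + 0.5*delta_{0.7}
ts = np.linspace(0.0, 1.0, 10000, endpoint=False)
g = quantile_function([0.2, 0.7], [0.5, 0.5], ts)

# pushforward of dx under g: the time g spends at each atom equals its mass
mass_at_02 = np.mean(g == 0.2)
mass_at_07 = np.mean(g == 0.7)
```

The `side="right"` convention in `searchsorted` is what makes $g_\mu$ right continuous at the jump point $t = 0.5$.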

Lemma 4.3. For $N \in \mathbb{N}$ define the Markov process
\[
\nu^N_t := \frac{N}{N+1}\,\mu^N_t + \frac{1}{N+1}\,\delta_0 \;\in\; \mathcal{P}([0,1]);
\]

then $(\nu^N_\cdot)$ is convergent on $C_{\mathbb{R}_+}\bigl((\mathcal{P}([0,1]),\tau_w)\bigr)$ if and only if $(\mu^N_\cdot)$ is. In this case both limits coincide.

Proof. Due to Theorem 3.7.1 in [24] it suffices to consider the sequences of real valued processes $\langle f,\mu^N_\cdot\rangle$ and $\langle f,\nu^N_\cdot\rangle$ for $N\to\infty$, where $f \in C([0,1])$ is arbitrary. From

\[
\sup_{t\ge 0}\bigl|\langle f,\mu^N_t\rangle - \langle f,\nu^N_t\rangle\bigr| = \sup_{t\ge 0}\frac{1}{N+1}\bigl|\langle f,\mu^N_t\rangle - \langle f,\delta_0\rangle\bigr| \le \frac{2}{N+1}\,\|f\|_\infty,
\]
it follows that $d_C\bigl(\langle f,\mu^N_\cdot\rangle, \langle f,\nu^N_\cdot\rangle\bigr) \to 0$ almost surely, where $d_C$ is any metric on $C([0,\infty),\mathbb{R})$ inducing the topology of locally uniform convergence. Hence for a bounded and uniformly $d_C$-continuous functional $F : C([0,\infty),\mathbb{R}) \to \mathbb{R}$,
\[
\bigl|\mathbb{E}\bigl(F(\langle f,\mu^N_\cdot\rangle)\bigr) - \mathbb{E}\bigl(F(\langle f,\nu^N_\cdot\rangle)\bigr)\bigr| \to 0 \qquad \text{for } N\to\infty.
\]
Since weak convergence on metric spaces is characterized by expectations of uniformly continuous, bounded functions (cf. e.g. [53, Theorem 6.1]), this proves the claim. □

Let $(g^N_\cdot) := (\kappa(\nu^N_\cdot))$ be the process $(\nu^N_\cdot)$ in the $\mathcal{G}$-parameterization. It can also be obtained by
\[
g^N_t = \iota\bigl(X^N_{N\cdot t}\bigr)
\]
with the imbedding $\iota = \iota^N$

\[
\iota : \Sigma_N \to \mathcal{G}, \qquad \iota(x) = \sum_{i=0}^{N} x^i\,1\!\mathrm{l}_{[t_i,t_{i+1})},
\]
with $t_i := i/(N+1)$, $i = 0,\dots,N+1$. Similarly, let $(g_\cdot) = (\kappa(\mu_\cdot))$ be the $\mathcal{G}$-image of the Wasserstein diffusion under the map $\kappa$ with invariant initial distribution $Q^\beta$. In [66, Theorem 7.5] it is shown that $(g_\cdot)$ is generated by the Dirichlet form, again denoted by $\mathcal{E}$, which is obtained as the $L^2(\mathcal{G},Q^\beta)$-closure of
\[
\mathcal{E}(u,v) = \int_{\mathcal{G}} \bigl\langle \nabla u|_g(\cdot),\, \nabla v|_g(\cdot)\bigr\rangle_{L^2([0,1])}\,Q^\beta(dg), \qquad u,v \in \mathcal{C}^1(\mathcal{G}),
\]
on the class

\[
\mathcal{C}^1(\mathcal{G}) = \bigl\{\, u : \mathcal{G}\to\mathbb{R} \bigm| u(g) = U\bigl(\langle f_1,g\rangle_{L^2},\dots,\langle f_m,g\rangle_{L^2}\bigr),\ U \in C^1_c(\mathbb{R}^m),\ \{f_i\}_{i=1}^m \subset L^2([0,1]),\ m \in \mathbb{N} \,\bigr\},
\]
where $\nabla u|_g$ is the $L^2([0,1],dx)$-gradient of $u$ at $g$, defined via
\[
\partial_s\big|_{s=0}\, u(g + s\,\xi) = \bigl\langle (\nabla u|_g)(\cdot),\, \xi(\cdot)\bigr\rangle_{L^2([0,1],dx)} \qquad \forall\, \xi \in L^2([0,1],dx).
\]
The convergence of $(\mu^N_\cdot)$ to $(\mu_\cdot)$ in $C_{\mathbb{R}_+}(\mathcal{P}([0,1]),\tau_w)$ is thus equivalent to the convergence of $(g^N_\cdot)$ to $(g_\cdot)$ in $C_{\mathbb{R}_+}(\mathcal{G}, d_{L^2})$. By Proposition 4.2 and Lemma 4.3, $(g^N_\cdot)_N$ is a tight sequence of processes on $\mathcal{G}$. The following statement identifies $(g_\cdot)$ as the unique weak limit.

Proposition 4.4. Let $\mathcal{E}$ be Markov-unique. Then for any $f \in C(\mathcal{G}^l)$ and $0 \le t_1 < \dots < t_l$,
\[
\lim_{N\to\infty} \mathbb{E}\bigl[f\bigl(g^N_{t_1},\dots,g^N_{t_l}\bigr)\bigr] = \mathbb{E}\bigl[f\bigl(g_{t_1},\dots,g_{t_l}\bigr)\bigr].
\]

4.4.2 Mosco Convergence
Definition 4.5 (Convergence of Hilbert spaces). A sequence of Hilbert spaces $H^N$ converges to a Hilbert space $H$ if there exists a family of linear maps $\{\Phi^N : H \to H^N\}_N$ such that
\[
\lim_{N\to\infty} \|\Phi^N u\|_{H^N} = \|u\|_H \qquad \text{for all } u \in H.
\]

A sequence $(u_N)_N$ with $u_N \in H^N$ converges strongly to a vector $u \in H$ if there exists a sequence $(\tilde u_N)_N \subset H$ tending to $u$ in $H$ such that
\[
\lim_{N\to\infty} \limsup_{M\to\infty} \bigl\|\Phi^M \tilde u_N - u_M\bigr\|_{H^M} = 0,
\]

and $(u_N)$ converges weakly to $u$ if

\[
\lim_{N\to\infty} \langle u_N, v_N\rangle_{H^N} = \langle u, v\rangle_H
\]
for any sequence $(v_N)$ with $v_N \in H^N$ tending strongly to $v \in H$. Moreover, a sequence $(B_N)$ of bounded operators on $H^N$ converges strongly (resp. weakly) to an operator $B$ on $H$ if $B_N u_N \to B u$ strongly (resp. weakly) for any sequence $(u_N)$ tending to $u$ strongly (resp. weakly).

Definition 4.6 (Mosco Convergence). A sequence $(E^N)_N$ of quadratic forms $E^N$ on $H^N$ converges to a quadratic form $E$ on $H$ in the Mosco sense if the following two conditions hold:

Mosco I: If a sequence $(u_N)$ with $u_N \in H^N$ weakly converges to a $u \in H$, then
\[
E(u,u) \le \liminf_{N\to\infty} E^N(u_N, u_N).
\]

Mosco II: For any $u \in H$ there exists a sequence $(u_N)$ with $u_N \in H^N$ which converges strongly to $u$ such that
\[
E(u,u) = \lim_{N\to\infty} E^N(u_N, u_N).
\]
Extending [52], it is shown in [50] that Mosco convergence of a sequence of Dirichlet forms is equivalent to the strong convergence of the associated resolvents and semigroups. We will apply this result with $H^N = L^2(\Sigma_N, q_N)$, $H = L^2(\mathcal{G}, Q^\beta)$ and $\Phi^N$ defined to be the conditional expectation operator

\[
\Phi^N : H \to H^N; \qquad (\Phi^N u)(x) := \mathbb{E}\bigl(u \bigm| g_{i/(N+1)} = x^i,\ i = 1,\dots,N\bigr).
\]
However, we shall prove that the sequence $N\cdot\mathcal{E}^N$ converges to $\mathcal{E}$ in the Mosco sense in a slightly modified fashion; namely, the condition (Mosco II) will be replaced by

Mosco II': There is a core $K \subset \mathcal{D}(E)$ such that for any $u \in K$ there exists a sequence $(u_N)_N$ with $u_N \in \mathcal{D}(E^N)$ which converges strongly to $u$ such that $E(u,u) = \lim_N E^N(u_N, u_N)$.

Theorem 4.7. Under the assumption that $H^N \to H$, the conditions (Mosco I) and (Mosco II') are equivalent to the strong convergence of the associated resolvents.

Proof. We proceed as in the proof of Theorem 2.4.1 in [52]. By Theorem 2.4 of [50] strong convergence of resolvents implies Mosco convergence in the original stronger sense. Hence we need only show that our weakened notion of Mosco convergence also implies strong convergence of resolvents. Let $\{R^N_\lambda,\ \lambda>0\}$ and $\{R_\lambda,\ \lambda>0\}$ be the resolvent operators associated with $E^N$ and $E$, respectively. For each $\lambda>0$ we have to prove that for every $z \in H$ and every sequence $(z_N)$ tending strongly to $z$, the sequence $(u_N)$ defined by $u_N := R^N_\lambda z_N \in H^N$ converges strongly to $u := R_\lambda z$ as $N\to\infty$. The vector $u$ is characterized as the unique minimizer of $E(v,v) + \lambda\langle v,v\rangle_H - 2\langle z,v\rangle_H$ over $H$, and a similar characterization holds for each $u_N$. Since for each $N$ the norm of $R^N_\lambda$ as an operator on $H^N$ is bounded by $\lambda^{-1}$, by Lemma 2.2 in [50] there exists a subsequence of $(u_N)$, still denoted by $(u_N)$, that converges weakly to some $\tilde u \in H$. By (Mosco II') we find for every $v \in K$ a sequence $(v_N)$ tending strongly to $v$ such that $\lim_N E^N(v_N,v_N) = E(v,v)$. Since for every $N$

\[
E^N(u_N, u_N) + \lambda\langle u_N, u_N\rangle_{H^N} - 2\langle z_N, u_N\rangle_{H^N} \;\le\; E^N(v_N, v_N) + \lambda\langle v_N, v_N\rangle_{H^N} - 2\langle z_N, v_N\rangle_{H^N},
\]
using the condition (Mosco I) we obtain in the limit $N\to\infty$:
\[
E(\tilde u, \tilde u) + \lambda\langle \tilde u, \tilde u\rangle_H - 2\langle z, \tilde u\rangle_H \;\le\; E(v,v) + \lambda\langle v,v\rangle_H - 2\langle z,v\rangle_H,
\]
which by the definition of the resolvent, together with the density of $K \subset \mathcal{D}(E)$, implies that $\tilde u = R_\lambda z = u$. This establishes the weak convergence of resolvents. It remains to show strong

convergence. Let $u_N = R^N_\lambda z_N$ converge weakly to $u = R_\lambda z$ and choose $v \in K$ with the respective strong approximations $v_N \in H^N$ such that $E^N(v_N,v_N) \to E(v,v)$; then the resolvent inequality for $R^N_\lambda$ yields

\[
E^N(u_N, u_N) + \lambda\,\bigl\|u_N - z_N/\lambda\bigr\|^2_{H^N} \;\le\; E^N(v_N, v_N) + \lambda\,\bigl\|v_N - z_N/\lambda\bigr\|^2_{H^N}.
\]
Taking the limit for $N\to\infty$, one obtains
\[
\limsup_{N\to\infty} \lambda\,\bigl\|u_N - z_N/\lambda\bigr\|^2_{H^N} \;\le\; E(v,v) - E(u,u) + \lambda\,\bigl\|v - z/\lambda\bigr\|^2_H.
\]

Since $K$ is a dense subset we may now let $v \to u \in \mathcal{D}(E)$, which yields
\[
\limsup_{N\to\infty} \bigl\|u_N - z_N/\lambda\bigr\|^2_{H^N} \;\le\; \bigl\|u - z/\lambda\bigr\|^2_H.
\]

Due to the weak lower semicontinuity of the norm this yields $\lim_N \|u_N - z_N/\lambda\|_{H^N} = \|u - z/\lambda\|_H$. Since strong convergence is equivalent to weak convergence together with the convergence of the associated norms, the claim follows (cf. Lemma 2.3 in [50]). □

Proposition 4.4 will now essentially be implied by the following statement, which will be proven in the next two subsections.

Theorem 4.8. Assume that $\mathcal{E}$ is Markov-unique on $L^2(\mathcal{G}, Q^\beta)$. Then $(N\cdot\mathcal{E}^N, H^N)$ converges to $(\mathcal{E}, H)$ along $\Phi^N$ in the Mosco sense.

Proposition 4.9. $H^N$ converges to $H$ along $\Phi^N$, for $N\to\infty$.

Proof. We have to show that $\|\Phi^N u\|_{H^N} \to \|u\|_H$ for each $u \in H$. Let $\mathcal{F}^N$ be the $\sigma$-algebra on $\mathcal{G}$ generated by the projection maps $\{g \mapsto g(i/(N+1)) \mid i = 1,\dots,N\}$. By abuse of notation we identify $\Phi^N u \in H^N$ with the conditional expectation $\mathbb{E}(u \mid \mathcal{F}^N)$ of $u$, considered as an element of $L^2(Q^\beta, \mathcal{F}^N) \subset H$. Since the measure $q_N$ coincides with the respective finite dimensional distributions of $Q^\beta$ on $\Sigma_N$, we have $\|\Phi^N u\|_{H^N} = \|\Phi^N u\|_H$. Hence the claim will follow once we show that $\Phi^N u \to u$ in $H$. For the latter we use the following abstract result, whose proof can be found, e.g., in [4, Lemma 1.3].

Lemma 4.10. Let $(\Omega, \mathcal{D}, \mu)$ be a measure space and $(\mathcal{F}_n)_{n\in\mathbb{N}}$ a sequence of $\sigma$-subalgebras of $\mathcal{D}$. Then $\mathbb{E}(f \mid \mathcal{F}_n) \to f$ for all $f \in L^p$, $p \in [1,\infty)$, if and only if for all $A \in \mathcal{D}$ there is a sequence $A_n \in \mathcal{F}_n$ such that $\mu(A_n \Delta A) \to 0$ for $n\to\infty$.

In order to apply this lemma to the given case $(\mathcal{G}, \mathcal{B}(\mathcal{G}), Q^\beta)$, where $\mathcal{B}(\mathcal{G})$ denotes the Borel $\sigma$-algebra on $\mathcal{G}$, let $\mathcal{F}_{Q^\beta} \subset \mathcal{B}(\mathcal{G})$ denote the collection of all Borel sets $F \subset \mathcal{G}$ which can be approximated by elements $F_N \in \mathcal{F}^N$ with respect to $Q^\beta$ in the sense above. Note that $\mathcal{F}_{Q^\beta}$ is again a $\sigma$-algebra, cf. the appendix in [4]. Let $\mathcal{M}$ denote the system of finitely based open cylinder sets in $\mathcal{G}$ of the form $M = \{g \in \mathcal{G} \mid g_{t_i} \in O_i,\ i=1,\dots,L\}$, where $t_i \in [0,1]$ and $O_i \subset [0,1]$ open. From the almost sure right continuity of $g$ and the fact that $g_\cdot$ is continuous at $t_1,\dots,t_L$ for $Q^\beta$-almost all $g$, it follows that $M_N := \{g \in \mathcal{G} \mid g(\lceil t_i\cdot(N+1)\rceil/(N+1)) \in O_i,\ i=1,\dots,L\} \in \mathcal{F}^N$ is an approximation of $M$ in the sense above. Since $\mathcal{M}$ generates $\mathcal{B}(\mathcal{G})$ we obtain $\mathcal{B}(\mathcal{G}) \subset \mathcal{F}_{Q^\beta}$, such that the assertion holds due to Lemma 4.10. □

Remark 4.11. It is much simpler to prove Proposition 4.9 for a dyadic subsequence $N' = 2^m - 1$, $m \in \mathbb{N}$, when the sequence $\|\Phi^{N'} u\|_{H^{N'}}$ is nondecreasing and bounded, because $\Phi^{N'}$ is a projection operator in $H$ with increasing range $\mathrm{im}(\Phi^{N'})$ as $N'$ grows. Hence $\|\Phi^{N'} u\|_{H^{N'}}$ is Cauchy and thus
\[
\bigl\|\Phi^{N'} u - \Phi^{M'} u\bigr\|^2_H = \Bigl|\,\bigl\|\Phi^{N'} u\bigr\|^2_H - \bigl\|\Phi^{M'} u\bigr\|^2_H\,\Bigr| \to 0 \qquad \text{for } M', N' \to \infty,
\]
i.e. the sequence $\Phi^{N'} u$ converges to some $v \in H$. Since obviously $\Phi^{N'} u \to u$ weakly in $H$, it follows that $u = v$, such that the claim is obtained from $\bigl|\,\|\Phi^{N'} u\|_H - \|u\|_H\,\bigr| \le \|\Phi^{N'} u - u\|_H$.
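The mechanism behind Remark 4.11, projections onto an increasing family of $\sigma$-algebras have nondecreasing $L^2$ norms converging to $\|u\|$, can be illustrated with dyadic cell averages on $[0,1]$. This is a toy analogue with Lebesgue measure, not the $Q^\beta$ setting of the thesis:

```python
import numpy as np

# f on [0,1] with (discretized) Lebesgue measure, midpoint grid
M = 2**12
xg = (np.arange(M) + 0.5) / M
f = xg**2

def project(f, m):
    """Conditional expectation of f given the dyadic sigma-algebra
    generated by the 2**m cells [k/2**m, (k+1)/2**m): cellwise averaging."""
    cells = f.reshape(2**m, -1)
    return np.repeat(cells.mean(axis=1), M // 2**m)

# L^2 norms of the nested projections, m = 0, ..., 8
norms = [np.sqrt(np.mean(project(f, m)**2)) for m in range(9)]
```

Since the ranges of the projections are nested, `norms` is nondecreasing and approaches $\|f\|_{L^2} = 1/\sqrt{5} \approx 0.447$ for $f(x) = x^2$.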

4.4.3 Condition Mosco II’

To simplify notation, for $f \in L^2([0,1],dx)$ denote by $l_f$ the functional $g \mapsto \langle f,g\rangle_{L^2([0,1])}$ on $\mathcal{G}$. We introduce the set $K$ of polynomials defined by
\[
K = \Bigl\{\, u \in C(\mathcal{G}) \;\Bigm|\; u(g) = \prod_{i=1}^n l_{f_i}^{k_i}(g),\ k_i \in \mathbb{N},\ f_i \in C([0,1]) \,\Bigr\}.
\]
Lemma 4.12. The linear span of $K$ is a core of $\mathcal{E}$.

Proof. Recall that by [66, Theorem 7.5] the set
\[
\mathcal{C}^1(\mathcal{G}) = \bigl\{\, u : \mathcal{G}\to\mathbb{R} \bigm| u(g) = U\bigl(\langle f_1,g\rangle_{L^2},\dots,\langle f_m,g\rangle_{L^2}\bigr),\ U \in C^1_c(\mathbb{R}^m),\ \{f_i\}_{i=1}^m \subset L^2([0,1]),\ m \in \mathbb{N} \,\bigr\}
\]
is a core for the Dirichlet form $\mathcal{E}$ in the $\mathcal{G}$-parametrization. The boundedness of $\mathcal{G} \subset L^2([0,1],dx)$ implies that $U$ is evaluated on a compact subset of $\mathbb{R}^m$ only, where $U$ can be approximated by polynomials in the $C^1$-norm. From this, the chain rule for the $L^2$-gradient operator $\nabla$ and Lebesgue's dominated convergence theorem in $L^2(\mathcal{G},Q^\beta)$, it follows that the linear span of polynomials of the form $u(g) = \prod_{i=1}^n l_{f_i}^{k_i}(g)$ with $k_i \in \mathbb{N}$, $f_i \in C([0,1])$ is also a core of $\mathcal{E}$. □

Lemma 4.13. For a polynomial $u \in K$ with $u(g) = \prod_{i=1}^n l_{f_i}^{k_i}(g)$ let $u_N := \prod_{i=1}^n \bigl(\Phi^N(l_{f_i})\bigr)^{k_i} \in H^N$; then $u_N \to u$ strongly.

Proof. Let $\tilde u_N := \prod_{i=1}^n \bigl(\Phi^N(l_{f_i})\bigr)^{k_i} \in H$ be the respective product of conditional expectations, where as above $\Phi^N$ also denotes the projection operator on $H = L^2(\mathcal{G},Q^\beta)$. Note that by Jensen's inequality, for any measurable functional $u : \mathcal{G}\to\mathbb{R}$, $|\Phi^N(u)(g)| \le \Phi^N(|u|)(g)$ for $Q^\beta$-almost all $g \in \mathcal{G}$, such that in particular $\|\Phi^N(l_{f_i})\|_{L^\infty(\mathcal{G},Q^\beta)} \le \|l_{f_i}\|_{L^\infty(\mathcal{G},Q^\beta)} \le \|f_i\|_{C([0,1])}$. Hence each of the factors $\Phi^N(l_{f_i}) \in H$ is uniformly bounded and converges strongly to $l_{f_i}$ in $L^2(\mathcal{G},Q^\beta)$, such that the convergence also holds in any $L^p(\mathcal{G},Q^\beta)$ with $p > 0$. This implies $\tilde u_N \to u$ in $H$. Furthermore,
\[
\lim_{N\to\infty}\lim_{M\to\infty} \bigl\|\Phi^M\tilde u_N - u_M\bigr\|_{H^M}
= \lim_{N\to\infty}\lim_{M\to\infty} \Bigl\|\Phi^M\Bigl(\prod_{i=1}^n \bigl(\Phi^N(l_{f_i})\bigr)^{k_i}\Bigr) - \prod_{i=1}^n \bigl(\Phi^M(l_{f_i})\bigr)^{k_i}\Bigr\|_H
= \lim_{N\to\infty}\Bigl\|\prod_{i=1}^n \bigl(\Phi^N(l_{f_i})\bigr)^{k_i} - \prod_{i=1}^n l_{f_i}^{k_i}\Bigr\|_H = 0. \tag{4.7}
\]
□


For the proof of Mosco II' we will also need that the conditional expectation w.r.t. $Q^\beta$ of the random variable $g$, given finitely many intermediate points $\{g(t_i) = x^i\}$, yields the linear interpolation.

Lemma 4.14. For $X \in \Sigma_N$ define $g_X \in \mathcal{G}$ by
\[
g_X(t) = x^i + \bigl((N+1)\,t - i\bigr)\bigl(x^{i+1} - x^i\bigr) \qquad \text{if } t \in \Bigl[\frac{i}{N+1}, \frac{i+1}{N+1}\Bigr),\ i = 0,\dots,N;
\]
then $\mathbb{E}(g \mid \mathcal{F}^N)(X) = g_X$.

Proof. This quite classical claim follows essentially from the bridge representation of the Dirichlet process, i.e. $Q^\beta$ is the law of $(\gamma(\beta t))_{t\in[0,1]} \in \mathcal{G}$ conditioned on $\gamma(\beta) = 1$, where $\gamma$ is the standard Gamma subordinator, cf. e.g. [66]. Together with the elementary property that $\mathbb{E}_{Q^\beta}(g(t)) = t$ for $t \in [0,1]$, the claim follows from the homogeneity of $\gamma$ together with simple scaling and iterated use of the Markov property. See Appendix A for more details. □
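The interpolation $g_X$ is elementary to implement. The sketch below (helper names are ours) builds $g_X$ for a configuration $X \in \Sigma_N$, using the boundary conventions $x^0 = 0$ and $x^{N+1} = 1$ assumed throughout, and checks that $g_X$ takes the prescribed values $x^i$ at the grid points $t_i = i/(N+1)$:

```python
import numpy as np

def g_X(x, t, N):
    """Piecewise linear interpolation g_X of Lemma 4.14.
    x = (x^1, ..., x^N) in Sigma_N; boundary values x^0 = 0, x^{N+1} = 1."""
    xs = np.concatenate(([0.0], np.asarray(x, float), [1.0]))
    t = np.asarray(t, float)
    i = np.minimum(np.floor((N + 1) * t).astype(int), N)   # cell index
    return xs[i] + ((N + 1) * t - i) * (xs[i + 1] - xs[i])

N = 4
x = [0.1, 0.3, 0.6, 0.8]
ts = np.arange(1, N + 1) / (N + 1)    # interior grid points t_i
vals = g_X(x, ts, N)                  # should reproduce x
```

Between grid points $g_X$ interpolates linearly, e.g. $g_X(0.1) = 0.05$ is the midpoint of the segment from $x^0 = 0$ to $x^1 = 0.1$.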

Proposition 4.15. For all $u \in K$ there is a sequence $u_N \in \mathcal{D}(\mathcal{E}^N)$ converging strongly to $u$ in $H$ such that $N\cdot\mathcal{E}^N(u_N,u_N) \to \mathcal{E}(u,u)$. In particular, condition Mosco II' is satisfied.

Proof. For $u \in K$ let $u_N := \prod_{i=1}^n \bigl(\Phi^N(l_{f_i})\bigr)^{k_i} \in H^N$ as above; then the strong convergence of $u_N$ to $u$ is assured by Lemma 4.13. From Lemma 4.14 we obtain that $\Phi^N(l_f)(X) = \langle f, g_X\rangle$. In particular,
\[
\nabla_i\,\Phi^N(l_f)(X) = \frac{1}{N+1}\,\bigl(\eta^N * f\bigr)(t_i), \tag{4.8}
\]
where $t_i := i/(N+1)$, $i = 1,\dots,N+1$, and $\eta^N$ denotes the convolution kernel $t \mapsto \eta^N(t) = (N+1)\cdot\bigl(1 - \min(1,(N+1)\cdot|t|)\bigr)$. This can easily be seen as follows: setting $t_j := j/(N+1)$, $j \in \{0,\dots,N+1\}$, we have on the one hand
\[
\begin{aligned}
\Phi^N(l_f)(X) &= \int_0^1 f(t)\,g_X(t)\,dt = \sum_{j=0}^N \int_{t_j}^{t_{j+1}} f(t)\,\bigl(x^j + ((N+1)t - j)(x^{j+1} - x^j)\bigr)\,dt \\
&= \sum_{j=0}^N \int_{t_j}^{t_{j+1}} f(t)\,\bigl(1 - ((N+1)t - j)\bigr)\,x^j\,dt + \sum_{j=0}^N \int_{t_j}^{t_{j+1}} f(t)\,\bigl((N+1)t - j\bigr)\,x^{j+1}\,dt,
\end{aligned}
\]
so that
\[
\nabla_i\,\Phi^N(l_f)(X) = \int_{t_i}^{t_{i+1}} f(t)\,\bigl(1 - ((N+1)t - i)\bigr)\,dt + \int_{t_{i-1}}^{t_i} f(t)\,\bigl((N+1)t - (i-1)\bigr)\,dt
\]
for all $i \in \{1,\dots,N\}$. On the other hand,
\[
\frac{1}{N+1}\,\bigl(\eta^N * f\bigr)(t_i) = \frac{1}{N+1}\int_{\mathbb{R}} f(t)\,\eta^N(t_i - t)\,dt = \int_{\mathbb{R}} f(t)\,\bigl(1 - \min(1, |i - (N+1)t|)\bigr)\,dt.
\]
Since
\[
1 - \min\bigl(1, |i - (N+1)t|\bigr) =
\begin{cases}
0 & \text{if } t \le t_{i-1} \text{ or } t \ge t_{i+1},\\
1 - (i - (N+1)t) & \text{if } t_{i-1} \le t \le t_i,\\
1 + (i - (N+1)t) & \text{if } t_i \le t \le t_{i+1},
\end{cases}
\]
we obtain (4.8).

For the convergence of $N\cdot\mathcal{E}^N(u_N,u_N)$ to $\mathcal{E}(u,u)$, let $f \in C([0,1])$ be arbitrary; in particular $f$ is uniformly continuous. Since $\int_{\mathbb{R}}\eta^N(t)\,dt = 1$ and $\eta^N = 0$ on $[-\frac{1}{N+1},\frac{1}{N+1}]^c$, we have
\[
\min_{s\in[t-\frac{1}{N+1},\,t+\frac{1}{N+1}]} f(s) \;\le\; \int_{\mathbb{R}} f(t-s)\,\eta^N(s)\,ds \;\le\; \max_{s\in[t-\frac{1}{N+1},\,t+\frac{1}{N+1}]} f(s),
\]
and from the uniform continuity of $f$ we conclude that $f * \eta^N \to f$ in $C([0,1])$ as $N\to\infty$. Hence, using (4.8) we also get
\[
\begin{aligned}
N\cdot\sum_{i=1}^N \bigl|\nabla_i\,\Phi^N(l_f)(X)\bigr|^2 &= \frac{N}{(N+1)^2}\sum_{i=1}^N \bigl(\eta^N * f\bigr)^2(t_i) \\
&= \frac{N}{(N+1)^2}\sum_{i=1}^N \Bigl(\bigl(\eta^N * f\bigr)^2(t_i) - f^2(t_i)\Bigr) + \frac{N}{(N+1)^2}\sum_{i=1}^N f^2(t_i) \;\longrightarrow\; \langle f,f\rangle_{L^2([0,1])}
\end{aligned}
\]
as $N\to\infty$. Since the gradient does not depend on the value of the vector $X$ and $\nabla_{L^2}\, l_f = f$, this implies the claim in the case $u = l_f$. Similarly, for arbitrary $f_1, f_2 \in C([0,1])$,
\[
N\cdot\bigl\langle \nabla\Phi^N(l_{f_1}),\, \nabla\Phi^N(l_{f_2})\bigr\rangle_{\mathbb{R}^N} \;\longrightarrow\; \langle f_1, f_2\rangle_{L^2([0,1])}. \tag{4.9}
\]
Consider now $u \in K$ with $u(g) = \prod_{i=1}^n l_{f_i}^{k_i}(g)$. The chain rule for the $L^2$-gradient operator $\nabla = \nabla_{L^2}$ yields
\[
\nabla u(g) = \sum_{j=1}^n \Bigl(\prod_{i\neq j} l_{f_i}^{k_i}(g)\Bigr)\, k_j\, l_{f_j}^{k_j-1}(g)\, f_j.
\]
Thus,
\[
\langle \nabla u, \nabla u\rangle_{L^2(0,1)} = \sum_{j,s=1}^n \Bigl(\prod_{i\neq j} l_{f_i}^{k_i}\Bigr)\Bigl(\prod_{r\neq s} l_{f_r}^{k_r}\Bigr)\, k_j k_s\, l_{f_j}^{k_j-1}\, l_{f_s}^{k_s-1}\,\langle f_j, f_s\rangle_{L^2(0,1)},
\]
and analogously for $u_N = \prod_{i=1}^n \bigl(\Phi^N(l_{f_i})\bigr)^{k_i}$ with $\nabla = \nabla_{\mathbb{R}^N}$,
\[
\nabla u_N = \sum_{j=1}^n \Bigl(\prod_{i\neq j} \bigl(\Phi^N(l_{f_i})\bigr)^{k_i}\Bigr)\, k_j\,\bigl(\Phi^N(l_{f_j})\bigr)^{k_j-1}\,\nabla\Phi^N(l_{f_j})
\]
and
\[
\langle \nabla u_N, \nabla u_N\rangle_{\mathbb{R}^N} = \sum_{j,s=1}^n \Bigl(\prod_{i\neq j} \bigl(\Phi^N(l_{f_i})\bigr)^{k_i}\Bigr)\Bigl(\prod_{r\neq s} \bigl(\Phi^N(l_{f_r})\bigr)^{k_r}\Bigr)\, k_j k_s\,\bigl(\Phi^N(l_{f_j})\bigr)^{k_j-1}\bigl(\Phi^N(l_{f_s})\bigr)^{k_s-1}\,\bigl\langle \nabla\Phi^N(l_{f_j}),\, \nabla\Phi^N(l_{f_s})\bigr\rangle_{\mathbb{R}^N}.
\]
Since
\[
N\cdot\mathcal{E}^N(u_N,u_N) = \int_{\Sigma_N} N\cdot\langle \nabla u_N, \nabla u_N\rangle_{\mathbb{R}^N}\,dq_N = \int_{\mathcal{G}} N\cdot\langle \nabla u_N, \nabla u_N\rangle\bigl(g(t_1),\dots,g(t_N)\bigr)\,Q^\beta(dg)
\]
and, for $Q^\beta$-a.e. $g$,
\[
\Phi^N(l_f)\bigl(g(t_1),\dots,g(t_N)\bigr) \longrightarrow l_f(g) \qquad \text{as } N\to\infty
\]
if $f \in C([0,1])$, the first assertion of the proposition holds by (4.9) and dominated convergence. The second assertion follows from linearity and polarisation together with Lemma 4.12. □
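Identity (4.8) can be sanity-checked numerically: $\Phi^N(l_f)(X) = \int_0^1 f(t)\,g_X(t)\,dt$ is linear in each coordinate $x^i$, so a central finite difference recovers $\nabla_i$ exactly (up to quadrature error), and it should match the kernel value $(\eta^N * f)(t_i)/(N+1)$. The sketch below re-implements the interpolation of Lemma 4.14 with the conventions $x^0 = 0$, $x^{N+1} = 1$; the test function $f$ and the configuration are chosen ad hoc:

```python
import numpy as np

N = 9
t_grid = np.arange(N + 2) / (N + 1)           # grid points t_i = i/(N+1)

def g_X(x, t):
    # piecewise linear interpolation of (0, x^1, ..., x^N, 1) over the grid
    xs = np.concatenate(([0.0], x, [1.0]))
    return np.interp(t, t_grid, xs)

f = lambda t: np.cos(2 * np.pi * t) + t**2    # arbitrary smooth test function

M = 200000
tt = (np.arange(M) + 0.5) / M                 # midpoint quadrature nodes on [0,1]

def phi_N(x):
    # Phi^N(l_f)(X) = int_0^1 f(t) g_X(t) dt
    return np.mean(f(tt) * g_X(x, tt))

x = np.linspace(0.05, 0.95, N)                # a configuration in Sigma_N
i, h = 4, 1e-4
e = np.zeros(N); e[i - 1] = 1.0               # perturb the coordinate x^i
grad_fd = (phi_N(x + h * e) - phi_N(x - h * e)) / (2 * h)

# right-hand side of (4.8): (eta^N * f)(t_i) / (N+1) by the same quadrature
eta = (N + 1) * np.clip(1.0 - (N + 1) * np.abs(t_grid[i] - tt), 0.0, None)
grad_kernel = np.mean(f(tt) * eta) / (N + 1)
```

Both quantities equal $\int f\cdot\mathrm{hat}_i\,dt$, the integral of $f$ against the hat function at $t_i$; for moderate $N$ the rescaled value $(N+1)\,\mathrm{grad\_kernel} = (\eta^N * f)(t_i)$ is already close to $f(t_i)$, illustrating $\eta^N * f \to f$.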

For later use we make an observation which follows easily from the proof of the last proposition.

Lemma 4.16. For $u$ and $u_N$ as in the proof of Proposition 4.15 and for $Q^\beta$-a.e. $g$ we have
\[
\bigl\| (N+1)\,\iota^N\bigl(\nabla u_N(g(t_1),\dots,g(t_N))\bigr) - \nabla u|_g \bigr\|_{L^2(0,1)} \longrightarrow 0 \qquad \text{as } N\to\infty,
\]
with $\iota^N : \mathbb{R}^N \to D([0,1),\mathbb{R})$ defined as above and $t_l := l/(N+1)$, $l = 0,\dots,N+1$.

Proof. By the definitions we have for every $x \in \Sigma_N$
\[
\begin{aligned}
(N+1)\,\iota^N\bigl(\nabla u_N(x)\bigr)
&= \sum_{j=1}^n \Bigl(\prod_{i\neq j} \bigl(\Phi^N(l_{f_i})(x)\bigr)^{k_i}\Bigr)\, k_j\,\bigl(\Phi^N(l_{f_j})(x)\bigr)^{k_j-1} \sum_{l=1}^N (N+1)\,\bigl(\nabla_l\,\Phi^N(l_{f_j})(x)\bigr)\,1\!\mathrm{l}_{[t_l,t_{l+1}]} \\
&= \sum_{j=1}^n \Bigl(\prod_{i\neq j} \bigl(\Phi^N(l_{f_i})(x)\bigr)^{k_i}\Bigr)\, k_j\,\bigl(\Phi^N(l_{f_j})(x)\bigr)^{k_j-1} \sum_{l=1}^N \bigl(\eta^N * f_j\bigr)(t_l)\,1\!\mathrm{l}_{[t_l,t_{l+1}]},
\end{aligned}
\]
where we have used again (4.8). Furthermore, for every $j$,
\[
\int_0^1 \Bigl( \sum_{l=1}^N \bigl(\eta^N * f_j\bigr)(t_l)\,1\!\mathrm{l}_{[t_l,t_{l+1}]}(t) - f_j(t) \Bigr)^2 dt \longrightarrow 0,
\]
since
\[
\sum_{l=1}^N \int_{t_l}^{t_{l+1}} \bigl( (\eta^N * f_j)(t_l) - f_j(t) \bigr)^2 dt \;\le\; 2\sum_{l=1}^N \int_{t_l}^{t_{l+1}} \bigl( (\eta^N * f_j)(t_l) - f_j(t_l) \bigr)^2 dt + 2\sum_{l=1}^N \int_{t_l}^{t_{l+1}} \bigl( f_j(t_l) - f_j(t) \bigr)^2 dt,
\]
where the first term tends to zero as $N\to\infty$ since $\eta^N * f_j \to f_j$ in the sup-norm, and the second term tends to zero by the uniform continuity of $f_j$. From this we can directly deduce the claim, because $\Phi^N(l_f)(g(t_1),\dots,g(t_N)) \to l_f(g)$ as $N\to\infty$ for $Q^\beta$-a.e. $g$. □

4.4.4 Condition Mosco I

For the verification of Mosco I we exploit that the respective integration by parts formulas of $\mathcal{E}^N$ and $\mathcal{E}$ converge. In the case of a fixed state space a similar approach is discussed abstractly in [48]. However, here also the state spaces of the processes change, which requires some extra care for the varying metric structures in the Dirichlet forms.

Let $T^N := \{f : \Sigma_N \to \mathbb{R}^N\}$ be equipped with the norm
\[
\|f\|^2_{T^N} := \frac{1}{N}\int_{\Sigma_N} \|f(x)\|^2_{\mathbb{R}^N}\,q_N(dx);
\]
then the corresponding integration by parts formula for $q_N$ on $\Sigma_N$, established in Proposition 3.7, reads
\[
\langle \nabla u, \xi\rangle_{T^N} = -\frac{1}{N}\,\bigl\langle u, \operatorname{div}_{q_N}\xi\bigr\rangle_{H^N}. \tag{4.10}
\]
To state the corresponding formula for $\mathcal{E}$ we introduce the Hilbert space of vector fields on $\mathcal{G}$ by
\[
T = L^2\bigl(\mathcal{G}\times[0,1],\ Q^\beta\otimes dx\bigr),
\]
and the subset $\Theta \subset T$,
\[
\Theta = \operatorname{span}\bigl\{\, \zeta \in T \bigm| \zeta(g,t) = w(g)\cdot\varphi(g(t)),\ w \in K,\ \varphi \in C^\infty([0,1]) : \varphi(0) = \varphi(1) = 0 \,\bigr\}.
\]
Lemma 4.17. $\Theta$ is dense in $T$.

Proof. Let us first remove the condition $\varphi(0) = \varphi(1) = 0$, i.e. let us show that the $T$-closure of $\Theta$ coincides with that of $\bar\Theta = \operatorname{span}\{\zeta \in T \mid \zeta(g,t) = w(g)\cdot\varphi(g(t)),\ w \in K,\ \varphi \in C^\infty([0,1])\}$. Since $\sup_{g\in\mathcal{G}} |w(g)| < \infty$ for any $w \in K$, it suffices to show that any $\zeta \in \bar\Theta$ of the form $\zeta(g,t) = \varphi(g(t))$ can be approximated in $T$ by functions $\zeta_k(g,t) = \varphi_k(g(t))$ with $\varphi_k \in C^\infty([0,1])$ and $\varphi_k(0) = \varphi_k(1) = 0$. Choose a sequence of functions $\varphi_k \in C_0^\infty([0,1])$ such that $\sup_k \|\varphi_k\|_{C([0,1])} < \infty$ and $\varphi_k(s) \to \varphi(s)$ for all $s \in\, ]0,1[$. Now for $Q^\beta$-almost all $g$ it holds that $\{s \in [0,1] \mid g(s) = 0\} = \{0\}$ and $\{s \in [0,1] \mid g(s) = 1\} = \{1\}$, such that $\varphi_k(g(s)) \to \varphi(g(s))$ for $Q^\beta\otimes dx$-almost all $(g,s)$. By the uniform boundedness of the sequence of functions $\zeta_k : \mathcal{G}\times[0,1] \to \mathbb{R}$ together with dominated convergence w.r.t. the measure $Q^\beta\otimes dx$, the convergence is established. In order to complete the proof of the lemma, note that $Q^\beta$-almost every $g \in \mathcal{G}$ is a strictly increasing function on $[0,1]$. This implies that the set $\bar\Theta$ separates the points of a full measure subset of $\mathcal{G}\times[0,1]$. Hence the assertion follows from the following abstract lemma. □

Lemma 4.18. Let $\mu$ be a probability measure on a Polish space $(X,d)$ and let $\mathcal{A}$ be a subalgebra of $C(X)$ containing the constants. Assume that $\mathcal{A}$ is $\mu$-almost everywhere separating on $X$, i.e. there exists a measurable subset $\tilde X$ with $\mu(\tilde X) = 1$ such that for all $x,y \in \tilde X$ there is an $a \in \mathcal{A}$ with $a(x) \neq a(y)$. Then $\mathcal{A}$ is dense in any $L^p(X,\mu)$ for $p \in [1,\infty)$.

Proof. We may assume w.l.o.g. that $\mathcal{A}$ is stable w.r.t. the operations of taking the pointwise inf and sup. Let $u \in L^p(X)$; then we may also assume w.l.o.g. that $u$ is continuous and bounded on $X$. By the regularity of $\mu$ we can approximate $\tilde X$ from inside by compact subsets $K_m$ such that $\mu(K_m) \ge 1 - \frac{1}{m}$. On each $K_m$ the Stone-Weierstrass theorem tells us that there is some $a_m \in \mathcal{A}$ such that $\|u|_{K_m} - a_m|_{K_m}\|_{C(K_m)} \le \frac{1}{m}$, and by truncation $\|a_m\|_{C(X)} \le \|u\|_{C(X)}$. In particular, for $\epsilon > 0$, $\mu(|a_m - u| > \epsilon) \le \mu(X\setminus K_m) \le \frac{1}{m}$ if $m \ge 1/\epsilon$, i.e. $a_m$ converges to $u$ on $X$ in $\mu$-probability. Hence some subsequence $a_{m'}$ converges to $u$ pointwise $\mu$-a.s. on $X$, and the claim follows from the uniform boundedness of the $a_m$ by dominated convergence. □

The $L^2$-derivative operator defines a map
\[
\nabla : \mathcal{C}^1(\mathcal{G}) \to T,
\]
which by [66, Proposition 7.3], cf. [67], satisfies the following integration by parts formula:

\[
\langle \nabla u, \zeta\rangle_T = -\bigl\langle u, \operatorname{div}_{Q^\beta}\zeta\bigr\rangle_H, \qquad u \in \mathcal{C}^1(\mathcal{G}),\ \zeta \in \Theta, \tag{4.11}
\]
where, for $\zeta(g,t) = w(g)\cdot\varphi(g(t))$,
\[
\operatorname{div}_{Q^\beta}\zeta(g) = w(g)\cdot V^\beta_\varphi(g) + \bigl\langle \nabla w(g)(\cdot),\, \varphi(g(\cdot))\bigr\rangle_{L^2(dx)}
\]
with
\[
V^\beta_\varphi(g) := V^0_\varphi(g) + \beta\int_0^1 \varphi'(g(x))\,dx - \frac{\varphi'(0) + \varphi'(1)}{2}
\]
and
\[
V^0_\varphi(g) := \sum_{a\in J_g} \Bigl( \frac{\varphi'(g(a+)) + \varphi'(g(a-))}{2} - \frac{\delta(\varphi\circ g)}{\delta g}(a) \Bigr).
\]
Here $J_g \subset [0,1]$ denotes the set of jump locations of $g$ and
\[
\frac{\delta(\varphi\circ g)}{\delta g}(a) := \frac{\varphi(g(a+)) - \varphi(g(a-))}{g(a+) - g(a-)}.
\]
By formula (4.11) one can extend $\nabla$ to a closed operator on $\mathcal{D}(\mathcal{E})$ such that $\mathcal{E}(u,u) = \|\nabla u\|^2_T$. The Markov uniqueness of $\mathcal{E}$ now implies the converse, which is a characterization of $\mathcal{D}(\mathcal{E})$ via (4.11). For this we need the following technical lemma.

Lemma 4.19. The functional

\[
\tilde{\mathcal{E}}(u,u) = \sup_{\zeta\in\Theta} \frac{\bigl(-\langle u, \operatorname{div}_{Q^\beta}\zeta\rangle_H\bigr)^2}{\|\zeta\|^2_T} \quad \text{on} \quad \mathcal{D}(\tilde{\mathcal{E}}) = \bigl\{\, u \in L^2(\mathcal{G},Q^\beta) \bigm| \tilde{\mathcal{E}}(u) < \infty \,\bigr\} \tag{4.12}
\]
is a Dirichlet form on $L^2(\mathcal{G},Q^\beta)$ extending $\mathcal{E}$, i.e. $\mathcal{D}(\mathcal{E}) \subset \mathcal{D}(\tilde{\mathcal{E}})$ and $\tilde{\mathcal{E}}(u) = \mathcal{E}(u)$ for $u \in \mathcal{D}(\mathcal{E})$.

Proof. First note that it makes no difference to (4.12) if $\Theta$ is replaced by the larger set $\tilde\Theta = \{\xi \mid \xi(g,s) = w(g)\varphi(g(s)),\ w \in \mathcal{C}^1(\mathcal{G}),\ \varphi \in C^\infty([0,1]),\ \varphi(0) = \varphi(1) = 0\}$. Obviously, $\tilde{\mathcal{E}}$ is lower semicontinuous in $L^2(\mathcal{G},Q^\beta)$ and therefore closed. Moreover, $\tilde{\mathcal{E}}$ is an extension of $\mathcal{E}$, due to (4.11). It remains to show that $\tilde{\mathcal{E}}$ is Markovian. We use the stronger quasi-invariance of $Q^\beta$ under certain transformations of $\mathcal{G}$, cf. [66, Theorem 4.3]. Let $h$ be a $C^2$-diffeomorphism of $[0,1]$ and let $\tau_h : \mathcal{G}\to\mathcal{G}$, $\tau_h(g) = h\circ g$; then
\[
\frac{d\,(\tau_{h^{-1}})_* Q^\beta}{dQ^\beta}(g) = X^\beta_h(g)\,Y^0_h(g), \tag{4.13}
\]
where
\[
X^\beta_h : \mathcal{G}\to\mathbb{R}; \qquad X^\beta_h(g) = \exp\Bigl(\beta\int_0^1 \log h'(g(s))\,ds\Bigr)
\]
and
\[
Y^0_h(g) := \frac{1}{\sqrt{h'(g(0))\cdot h'(g(1-))}}\;\prod_{a\in J_g} \frac{\sqrt{h'(g(a+))\,h'(g(a-))}}{\frac{\delta(h\circ g)}{\delta g}(a)}.
\]
Given $\zeta = w(\cdot)\varphi(\cdot) \in \Theta$ we may apply formula (4.13) in the case $h = h_t^\varphi$, where $\mathbb{R}\times[0,1] \to [0,1]$, $(t,x)\mapsto h_t(x)$, is the flow of smooth diffeomorphisms of $[0,1]$ induced from the ODE $\dot h_t(x) = \varphi(h_t(x))$ with initial condition $h_0(x) = x$. In particular, $h_0$ is the identity and $h_t^{-1} = h_{-t}$ for all $t \in \mathbb{R}$ by the flow property. Then, arguing as in Section 5 in [66], we have
\[
\begin{aligned}
\int_{\mathcal{G}} \frac{u(h_t(g)) - u(g)}{t}\,w(g)\,Q^\beta(dg)
&= \frac{1}{t}\int_{\mathcal{G}} u(g)\Bigl[ w\bigl(h_t^{-1}(g)\bigr)\,X^\beta_{h_t^{-1}}\,Y^0_{h_t^{-1}} - w(g)\Bigr]\,Q^\beta(dg) \\
&= \frac{1}{t}\int_{\mathcal{G}} u(g)\,\bigl[w(h_{-t}(g)) - w(g)\bigr]\,Q^\beta(dg) \\
&\quad + \frac{1}{t}\int_{\mathcal{G}} u(g)\,w(g)\,\Bigl[X^\beta_{h_{-t}}\,Y^0_{h_{-t}} - 1\Bigr]\,Q^\beta(dg) \\
&\quad + \frac{1}{t}\int_{\mathcal{G}} u(g)\,\bigl[w(h_{-t}(g)) - w(g)\bigr]\,\Bigl[X^\beta_{h_{-t}}\,Y^0_{h_{-t}} - 1\Bigr]\,Q^\beta(dg).
\end{aligned}
\]

We recall that by Lemma 5.7 in [66]
\[
\lim_{t\to 0} \frac{1}{t}\Bigl[ X^\beta_{h_t}(g)\,Y^0_{h_t}(g) - 1 \Bigr] = \frac{\partial}{\partial t}\Big|_{t=0}\Bigl[ X^\beta_{h_t}\,Y^0_{h_t}\Bigr] = V^\beta_\varphi(g).
\]
Together with the fact that the approximations of the logarithmic derivative of $X^\beta_{h_t}\,Y^0_{h_t}$ w.r.t. the variable $t$ stay uniformly bounded, more precisely $\bigl|\log[X^\beta_{h_t}\,Y^0_{h_t}]\bigr| \le C\,|t|$ for $|t| \le 1$ (cf. [66, Section 5]), this yields by the dominated convergence theorem

\[
\lim_{t\to 0} \int_{\mathcal{G}} \frac{u(h_t(g)) - u(g)}{t}\,w(g)\,Q^\beta(dg) = -\int_{\mathcal{G}} u\,\operatorname{div}_{Q^\beta}\zeta(g)\,Q^\beta(dg)
\]
for any $u \in \mathcal{D}(\tilde{\mathcal{E}})$. More precisely, $u \in L^2(\mathcal{G},Q^\beta)$ belongs to $\mathcal{D}(\tilde{\mathcal{E}})$ if and only if
\[
D_\varphi u(g) := \operatorname*{w-lim}_{t\to 0} \frac{u(h_t(g)) - u(g)}{t} \quad \text{exists in } L^2(\mathcal{G},Q^\beta)
\]
for all $\varphi \in C^\infty([0,1])$ with $\varphi(0) = \varphi(1) = 0$, and by Riesz's representation theorem
\[
\exists\, Du \in T \ \text{s.th.}\ \langle Du, \zeta\rangle_T = -\bigl\langle u, \operatorname{div}_{Q^\beta}\zeta\bigr\rangle_{L^2(\mathcal{G},Q^\beta)} = \langle D_\varphi u, w\rangle_{L^2(\mathcal{G},Q^\beta)} \qquad \forall\, \zeta = w(\cdot)\varphi(\cdot) \in \Theta.
\]
Moreover, $\tilde{\mathcal{E}}(u,u) = \|Du\|^2_T$. Now for $u \in \mathcal{D}(\tilde{\mathcal{E}})$ and $\kappa \in C^1(\mathbb{R})$, by Taylor's formula $\kappa(u(h_t(g))) - \kappa(u(g)) = \kappa'(\theta)\cdot(u(h_t(g)) - u(g))$ for some $\theta \to u(g)$ as $t\to 0$, such that in case $\|\kappa'\|_\infty \le 1$

\[
D_\varphi(\kappa\circ u)(g) = \operatorname*{w-lim}_{t\to 0} \frac{\kappa\bigl(u(h_t(g))\bigr) - \kappa\bigl(u(g)\bigr)}{t} = \kappa'\bigl(u(g)\bigr)\cdot D_\varphi u(g) \quad \text{in } L^2(\mathcal{G},Q^\beta).
\]
Choose some sequence $u_k \in \mathcal{C}^1(\mathcal{G})$ such that $u_k(g) \to u(g)$ for $Q^\beta$-almost all $g \in \mathcal{G}$. Then $\tilde\zeta_k = \kappa'\circ u_k(\cdot)\cdot w(\cdot)\varphi(\cdot) \in \tilde\Theta$. Since $D_\varphi u$ and $w$ both belong to $L^2(\mathcal{G},Q^\beta)$, by dominated convergence

\[
\begin{aligned}
\langle \kappa'\circ u\cdot D_\varphi u,\, w\rangle_{L^2(\mathcal{G},Q^\beta)}
&= \lim_k\, \langle \kappa'\circ u_k\cdot D_\varphi u,\, w\rangle_{L^2(\mathcal{G},Q^\beta)} \\
&= \lim_k\, \langle Du, \tilde\zeta_k\rangle_T \le \tilde{\mathcal{E}}^{1/2}(u,u)\,\lim_k \|\tilde\zeta_k\|_T \\
&\le \tilde{\mathcal{E}}^{1/2}(u,u)\,\|\zeta\|_T.
\end{aligned}
\]
Hence $\kappa\circ u \in \mathcal{D}(\tilde{\mathcal{E}})$ and $\tilde{\mathcal{E}}(\kappa\circ u) \le \tilde{\mathcal{E}}(u)$. Applied to a uniformly bounded family $\kappa_\epsilon : \mathbb{R}\to\mathbb{R}$ which converges pointwise to $\kappa(s) = \min(\max(s,0),1)$, the lower semicontinuity of $\tilde{\mathcal{E}}$ yields the claim. □

Corollary 4.20 (Meyers-Serrin property). Assume Markov-uniqueness holds for $\mathcal{E}$; then
\[
\bigl(\mathcal{E}(u,u)\bigr)^{1/2} = \sup_{\zeta\in\Theta} \frac{-\langle u, \operatorname{div}_{Q^\beta}\zeta\rangle_H}{\|\zeta\|_T}. \tag{4.14}
\]
Proof. The assumption means that $\mathcal{E}$ has no proper extension in the class of Dirichlet forms on $L^2(\mathcal{G},Q^\beta)$. By the previous lemma we obtain $\tilde{\mathcal{E}} = \mathcal{E}$, which is the claim. □

The convergence of (4.10) to (4.11) is established by the following lemma, whose proof is given below.

Lemma 4.21. For $\zeta \in \Theta$ there exists a sequence of vector fields $\zeta_N : \Sigma_N \to \mathbb{R}^N$ such that $\operatorname{div}_{q_N}\zeta_N \in H^N$ converges strongly to $\operatorname{div}_{Q^\beta}\zeta$ in $H$ and such that $\|\zeta_N\|_{T^N} \to \|\zeta\|_T$ for $N\to\infty$.

Proposition 4.22 (Mosco I). Let $\mathcal{E}$ be Markov-unique and let $u_N \in \mathcal{D}(\mathcal{E}^N)$ converge weakly to $u \in H$; then
\[
\mathcal{E}(u,u) \le \liminf_{N\to\infty} N\cdot\mathcal{E}^N(u_N, u_N).
\]
Proof. Let $u \in H$ and $u_N \in H^N$ converge weakly to $u$. Let $\zeta \in \Theta$ and $\zeta_N$ be as in Lemma 4.21; then

\[
\begin{aligned}
-\bigl\langle u, \operatorname{div}_{Q^\beta}\zeta\bigr\rangle_H
&= -\lim_N\, \bigl\langle u_N, \operatorname{div}_{q_N}\zeta_N\bigr\rangle_{H^N} = \lim_N\, N\cdot\langle \nabla u_N, \zeta_N\rangle_{T^N} \\
&\le \liminf_N\, N\cdot\|\nabla u_N\|_{T^N}\cdot\|\zeta_N\|_{T^N} = \liminf_N\, \bigl(N\cdot\mathcal{E}^N(u_N,u_N)\bigr)^{1/2}\cdot\|\zeta\|_T,
\end{aligned}
\]
such that, using (4.14),

\[
\bigl(\mathcal{E}(u,u)\bigr)^{1/2} = \sup_{\zeta\in\Theta} \frac{-\langle u, \operatorname{div}_{Q^\beta}\zeta\rangle_H}{\|\zeta\|_T} \le \liminf_{N\to\infty}\, \bigl(N\cdot\mathcal{E}^N(u_N, u_N)\bigr)^{1/2}. \tag{4.15}
\]
□

Proof of Lemma 4.21. By linearity it suffices to consider the case $\zeta(g,t) = w(g)\cdot\varphi(g(t))$ with $w(g) = \prod_{i=1}^n l_{f_i}^{k_i}(g)$. Choose
\[
\bigl(\zeta_N(x^1,\dots,x^N)\bigr)^i := w_N(x^1,\dots,x^N)\cdot\varphi(x^i)
\]

with $w_N := \prod_{i=1}^n \bigl(\Phi^N(l_{f_i})\bigr)^{k_i}$. Then, by Remark 3.8,

\[
\operatorname{div}_{q_N}\zeta_N = w_N\cdot V^\beta_{N,\varphi} + \langle \nabla w_N, \vec\varphi\rangle_{\mathbb{R}^N},
\]
with $\vec\varphi(x^1,\dots,x^N) := (\varphi(x^1),\dots,\varphi(x^N))$ and
\[
V^\beta_{N,\varphi}(x^1,\dots,x^N) := \Bigl(\frac{\beta}{N+1} - 1\Bigr)\sum_{i=0}^N \frac{\varphi(x^{i+1}) - \varphi(x^i)}{x^{i+1} - x^i} + \sum_{i=1}^N \varphi'(x^i).
\]

We recall that for all bounded measurable $u : [0,1]^N \to \mathbb{R}$,
\[
\int_{\Sigma_N} u(x^1,\dots,x^N)\,q_N(dx) = \int_{\mathcal{G}} u\bigl(g(t_1),\dots,g(t_N)\bigr)\,Q^\beta(dg),
\]
with $t_i = i/(N+1)$, $i = 0,\dots,N+1$. Using this we immediately get

\[
\begin{aligned}
\|\zeta_N\|^2_{T^N} &= \frac{1}{N}\int_{\Sigma_N} w_N^2(x)\sum_{i=1}^N \varphi(x^i)^2\,q_N(dx) = \frac{1}{N}\int_{\mathcal{G}} w_N^2\bigl(g(t_1),\dots,g(t_N)\bigr)\sum_{i=1}^N \varphi\bigl(g(t_i)\bigr)^2\,Q^\beta(dg) \\
&\longrightarrow \int_{\mathcal{G}} w^2(g)\int_0^1 \varphi\bigl(g(s)\bigr)^2\,ds\,Q^\beta(dg) = \|\zeta\|^2_T.
\end{aligned}
\]

To prove strong convergence of $\operatorname{div}_{q_N}\zeta_N$ to $\operatorname{div}_{Q^\beta}\zeta$, by definition we have to show that there exists a sequence $(d_N\zeta)_N \subset H$ tending to $\operatorname{div}_{Q^\beta}\zeta$ in $H$ such that
\[
\lim_{N\to\infty} \limsup_{M\to\infty} \bigl\|\Phi^M(d_N\zeta) - \operatorname{div}_{q_M}\zeta_M\bigr\|_{H^M} = 0.
\]

The choice

\[
d_N\zeta(g) := \operatorname{div}_{q_N}\zeta_N\bigl(g(t_1),\dots,g(t_N)\bigr)
\]
makes this convergence trivial, once we have proven that in fact $(d_N\zeta)_N$ converges to $\operatorname{div}_{Q^\beta}\zeta$ in $H$. This is carried out in the following two lemmas.

Lemma 4.23. For $Q^\beta$-a.e. $g$ we have

\[
V^\beta_{N,\varphi}\bigl(g(t_1),\dots,g(t_N)\bigr) \longrightarrow V^\beta_\varphi(g) \qquad \text{as } N\to\infty,
\]
and we also have convergence in $L^p(\mathcal{G},Q^\beta)$, $p > 1$.

Proof. We rewrite $V^\beta_{N,\varphi}(g(t_1),\dots,g(t_N))$ as
\[
\begin{aligned}
V^\beta_{N,\varphi}\bigl(g(t_1),\dots,g(t_N)\bigr)
=\;& \beta\sum_{i=0}^N \frac{\varphi(g(t_{i+1})) - \varphi(g(t_i))}{g(t_{i+1}) - g(t_i)}\,(t_{i+1} - t_i) \\
&- \frac{\varphi(g(t_1)) - \varphi(g(t_0))}{g(t_1) - g(t_0)} + \sum_{i=1}^{N-1}\Bigl( \varphi'(g(t_i)) - \frac{\varphi(g(t_{i+1})) - \varphi(g(t_i))}{g(t_{i+1}) - g(t_i)} \Bigr) \\
&+ \varphi'(g(t_N)) - \frac{\varphi(g(t_{N+1})) - \varphi(g(t_N))}{g(t_{N+1}) - g(t_N)}.
\end{aligned} \tag{4.16}
\]
Note that all terms are uniformly bounded in $g$, with a bound depending on the supremum norm of $\varphi'$ and $\varphi''$, respectively. Since the same holds for $V^\beta_\varphi(g)$ (cf. Section 5 in [66]), it is sufficient to show convergence $Q^\beta$-a.s. By the support properties of $Q^\beta$, $g$ is continuous at $t_{N+1} = 1$, so that the last line in (4.16) tends to zero. Using Taylor's formula we obtain that the first term in (4.16) is equal to

\[
\beta\sum_{i=0}^N \varphi'\bigl(g(t_i)\bigr)\,(t_{i+1} - t_i) + \frac{\beta}{2}\sum_{i=0}^N \varphi''(\gamma_i)\,\bigl(g(t_{i+1}) - g(t_i)\bigr)\,(t_{i+1} - t_i),
\]
for some $\gamma_i \in [g(t_i), g(t_{i+1})]$. Obviously, the first term tends to $\beta\int_0^1 \varphi'(g(s))\,ds$ and the second one to zero as $N\to\infty$. Thus, it remains to show that the second line in (4.16) converges to
\[
\sum_{a\in J_g}\Bigl( \frac{\varphi'(g(a+)) + \varphi'(g(a-))}{2} - \frac{\delta(\varphi\circ g)}{\delta g}(a) \Bigr) - \frac{\varphi'(0) + \varphi'(1)}{2}. \tag{4.17}
\]
Note that by the right-continuity of $g$ the first term in the second line of (4.16) tends to $-\varphi'(0)$. Let now $a_2,\dots,a_{l-1}$ denote the $l-2$ largest jumps of $g$ on $]0,1[$. For $N$ very large (compared with $l$) we may assume that $a_2,\dots,a_{l-1} \in\, ]\frac{2}{N+1}, 1 - \frac{2}{N+1}[$. Put $a_1 := \frac{1}{N+1}$, $a_l := 1 - \frac{1}{N+1}$. For $j = 1,\dots,l$ let $k_j$ denote the index $i \in \{1,\dots,N\}$ for which $a_j \in [t_i, t_{i+1}[$. In particular, $k_1 = 1$ and $k_l = N$. Then

\[
\sum_{i\in\{k_2,\dots,k_{l-1}\}} \Bigl( \varphi'(g(t_i)) - \frac{\varphi(g(t_{i+1})) - \varphi(g(t_i))}{g(t_{i+1}) - g(t_i)} \Bigr)
\;\xrightarrow[N\to\infty]{}\; \sum_{j=2}^{l-1}\Bigl( \varphi'(g(a_j-)) - \frac{\delta(\varphi\circ g)}{\delta g}(a_j) \Bigr)
\;\xrightarrow[l\to\infty]{}\; \sum_{a\in J_g}\Bigl( \varphi'(g(a-)) - \frac{\delta(\varphi\circ g)}{\delta g}(a) \Bigr). \tag{4.18}
\]

Provided l and N are chosen so large that

\[
\bigl| g(t_{i+1}) - g(t_i) \bigr| \le \frac{C}{l}
\]
for all $i \in \{0,\dots,N\}\setminus\{k_1,\dots,k_l\}$, where $C = \sup_s |\varphi'''(s)|/6$, again by Taylor's formula we get for every $j \in \{1,\dots,l-1\}$

\[
\begin{aligned}
\sum_{i=k_j+1}^{k_{j+1}-1} \Bigl( \varphi'(g(t_i)) - \frac{\varphi(g(t_{i+1})) - \varphi(g(t_i))}{g(t_{i+1}) - g(t_i)} \Bigr)
&= \sum_{i=k_j+1}^{k_{j+1}-1} \Bigl( -\frac12\,\varphi''(g(t_i))\,\bigl(g(t_{i+1}) - g(t_i)\bigr) + \frac16\,\varphi'''(\gamma_i)\,\bigl(g(t_{i+1}) - g(t_i)\bigr)^2 \Bigr) \\
&\xrightarrow[N\to\infty]{} -\frac12\int_{a_j+}^{a_{j+1}-} \varphi''(g(s))\,dg(s) + O(l^{-1}) = -\frac12\int_{g(a_j+)}^{g(a_{j+1}-)} \varphi''(s)\,ds + O(l^{-1}).
\end{aligned}
\]
Summation over $j$ leads to

\[
\begin{aligned}
\sum_{j=1}^{l-1}\sum_{i=k_j+1}^{k_{j+1}-1} \Bigl( \varphi'(g(t_i)) - \frac{\varphi(g(t_{i+1})) - \varphi(g(t_i))}{g(t_{i+1}) - g(t_i)} \Bigr)
&\xrightarrow[N\to\infty]{} -\frac12\sum_{j=1}^{l-1}\int_{g(a_j+)}^{g(a_{j+1}-)} \varphi''(s)\,ds + O(l^{-1}) \\
&= -\frac12\int_0^1 \varphi''(s)\,ds + \frac12\sum_{j=2}^{l-1}\int_{g(a_j-)}^{g(a_j+)} \varphi''(s)\,ds + O(l^{-1}) \\
&\xrightarrow[l\to\infty]{} -\frac12\bigl(\varphi'(1) - \varphi'(0)\bigr) + \frac12\sum_{a\in J_g}\bigl( \varphi'(g(a+)) - \varphi'(g(a-)) \bigr).
\end{aligned}
\]
Combining this with (4.18) yields that the second line of (4.16) converges in fact to (4.17), which completes the proof. □
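For a continuous $g$ the jump set $J_g$ is empty, so $V^0_\varphi = 0$ and Lemma 4.23 predicts $V^\beta_{N,\varphi}(g(t_1),\dots,g(t_N)) \to \beta\int_0^1 \varphi'(g(x))\,dx - (\varphi'(0)+\varphi'(1))/2$. This is easy to check numerically; the sketch below (the choices $g(s) = s^2$ and $\varphi(s) = s(1-s)$ are ours, purely illustrative) evaluates the discrete divergence coefficient against this limit, which here equals $\beta/3$:

```python
import numpy as np

beta, N = 1.0, 1000
t = np.arange(N + 2) / (N + 1)        # grid t_i = i/(N+1), i = 0, ..., N+1
g = t**2                              # smooth, strictly increasing g (no jumps)
phi = lambda s: s * (1.0 - s)         # phi(0) = phi(1) = 0
dphi = lambda s: 1.0 - 2.0 * s

x = g                                 # x^i = g(t_i), with x^0 = 0, x^{N+1} = 1
# discrete coefficient V^beta_{N,phi}(x^1, ..., x^N)
V_N = (beta / (N + 1) - 1.0) * np.sum(np.diff(phi(x)) / np.diff(x)) \
      + np.sum(dphi(x[1:-1]))

# limit from Lemma 4.23: beta * int_0^1 phi'(g(s)) ds - (phi'(0)+phi'(1))/2
limit = beta * (1.0 - 2.0 / 3.0)      # = beta/3 for these choices
```

The two $O(N)$ sums cancel almost exactly, leaving a value within about $10^{-5}$ of the limit already at $N = 1000$.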

Since $w_N(g(t_1),\dots,g(t_N))$ converges to $w$ in $L^p(\mathcal{G},Q^\beta)$, $p > 0$ (cf. the proof of Lemma 4.13 above), the last lemma ensures that the first term of $d_N\zeta$ converges to the first term of $\operatorname{div}_{Q^\beta}\zeta$ in $H$, while the following lemma deals with the second term.

Lemma 4.24. For $Q^\beta$-a.e. $g$ we have

\[
\bigl\langle \nabla w_N\bigl(g(t_1),\dots,g(t_N)\bigr),\, \vec\varphi\bigl(g(t_1),\dots,g(t_N)\bigr)\bigr\rangle_{\mathbb{R}^N} \longrightarrow \bigl\langle \nabla w|_g,\, \varphi(g(\cdot))\bigr\rangle_{L^2(0,1)} \qquad \text{as } N\to\infty,
\]
and we also have convergence in $H$.

Proof. As in the proof of the last lemma it is enough to prove convergence $Q^\beta$-a.s. Note that

\[
\langle \nabla w_N(\vec g),\, \vec\varphi(\vec g)\rangle_{\mathbb{R}^N} = (N+1)\,\bigl\langle \iota^N\bigl(\nabla w_N(\vec g)\bigr),\, \iota^N\bigl(\vec\varphi(\vec g)\bigr)\bigr\rangle_{L^2(0,1)},
\]

writing $\vec g := (g(t_1),\dots,g(t_N))$ and using the extension of $\iota^N$ to $\mathbb{R}^N$. By the triangle and Cauchy-Schwarz inequalities we obtain

\[
\begin{aligned}
&\bigl| (N+1)\,\langle \iota^N(\nabla w_N(\vec g)),\, \iota^N(\vec\varphi(\vec g))\rangle_{L^2(0,1)} - \langle \nabla w|_g,\, \varphi(g(\cdot))\rangle_{L^2(0,1)} \bigr| \\
&\quad\le \bigl| \bigl\langle (N+1)\,\iota^N(\nabla w_N(\vec g)) - \nabla w|_g,\ \iota^N(\vec\varphi(\vec g))\bigr\rangle_{L^2(0,1)} \bigr| + \bigl| \bigl\langle \nabla w|_g,\ \iota^N(\vec\varphi(\vec g)) - \varphi(g(\cdot))\bigr\rangle_{L^2(0,1)} \bigr| \\
&\quad\le \bigl\| (N+1)\,\iota^N(\nabla w_N(\vec g)) - \nabla w|_g \bigr\|_{L^2(0,1)}\,\bigl\| \iota^N(\vec\varphi(\vec g)) \bigr\|_{L^2(0,1)} + \bigl\| \nabla w|_g \bigr\|_{L^2(0,1)}\,\bigl\| \iota^N(\vec\varphi(\vec g)) - \varphi(g(\cdot)) \bigr\|_{L^2(0,1)},
\end{aligned}
\]
which tends to zero by Lemma 4.16 and by the definition of $\iota^N$. □

4.4.5 Proof of Proposition 4.4

Lemma 4.25. For $u \in C(\mathcal{G})$ let $u_N \in H^N$ be defined by $u_N(x) := u(\iota x)$; then $u_N \to u$ strongly. Moreover, for any sequence $f_N \in H^N$ with $f_N \to f \in H$ strongly, $u_N\cdot f_N \to u\cdot f$ strongly.

Proof. Let $\tilde u_N \in H$ be defined by $\tilde u_N(g) := u(g^N)$, where $g^N := \sum_{i=1}^N g(t_i)\,1\!\mathrm{l}_{[t_i,t_{i+1})}$, $t_i := i/(N+1)$; then $\tilde u_N \to u$ in $H$ strongly. Moreover,
\[
\lim_{N\to\infty}\lim_{M\to\infty} \bigl\|\Phi^M\tilde u_N - u_M\bigr\|_{H^M} = \lim_{N\to\infty}\lim_{M\to\infty} \bigl\|\Phi^M\tilde u_N - \tilde u_M\bigr\|_H = \lim_{N\to\infty} \|\tilde u_N - u\|_H = 0,
\]

where as above we have identified $\Phi^M$ with the corresponding projection operator in $L^2(\mathcal{G},Q^\beta)$. For the proof of the second statement, let $H \ni \tilde f_N \to f$ in $H$ be such that

\[
\lim_{N\to\infty}\limsup_{M\to\infty} \bigl\|\Phi^M \tilde f_N - f_M\bigr\|_{H^M} = 0.
\]

From the uniform boundedness of $\tilde u_N$ it follows that also $\tilde u_N\cdot\tilde f_N \to u\cdot f$ in $H$. In order to show $H^M \ni u_M\cdot f_M \to u\cdot f$, write

\[
\bigl\|\Phi^M(\tilde u_N\cdot\tilde f_N) - u_M\cdot f_M\bigr\|_{H^M}
\le \bigl\|\Phi^M(\tilde u_N\cdot\tilde f_N) - u_M\cdot\Phi^M(\tilde f_M)\bigr\|_{H^M} + \bigl\|u_M\cdot f_M - u_M\cdot\Phi^M(\tilde f_M)\bigr\|_{H^M}.
\]

Identifying the map $\Phi^M$ with the associated conditional expectation operator, considered as an orthogonal projection in $H$, the claim follows from

\begin{align*}
\big\| \Phi^M(\tilde u_N \cdot \tilde f_N) - u_M \cdot \Phi^M(\tilde f_M) \big\|_{H_M} &= \big\| \Phi^M(\tilde u_N \cdot \tilde f_N) - \tilde u_M \cdot \Phi^M(\tilde f_M) \big\|_H \\
&= \big\| \Phi^M(\tilde u_N \cdot \tilde f_N) - \Phi^M(\tilde u_M \cdot \tilde f_M) \big\|_H \\
&\le \big\| \tilde u_N \cdot \tilde f_N - \tilde u_M \cdot \tilde f_M \big\|_H
\end{align*}

and

\begin{align*}
\big\| u_M \cdot f_M - u_M \cdot \Phi^M(\tilde f_M) \big\|_{H_M} &\le \| u \|_\infty \big\| f_M - \Phi^M(\tilde f_N) \big\|_{H_M} + \| u \|_\infty \big\| \Phi^M(\tilde f_N) - \Phi^M(\tilde f_M) \big\|_{H_M} \\
&= \| u \|_\infty \big\| f_M - \Phi^M(\tilde f_N) \big\|_{H_M} + \| u \|_\infty \big\| \Phi^M(\tilde f_N) - \Phi^M(\tilde f_M) \big\|_H \\
&\le \| u \|_\infty \big\| f_M - \Phi^M(\tilde f_N) \big\|_{H_M} + \| u \|_\infty \big\| \tilde f_N - \tilde f_M \big\|_H,
\end{align*}

such that in fact $\lim_N \limsup_M \big\| \Phi^M(\tilde u_N \cdot \tilde f_N) - u_M \cdot f_M \big\|_{H_M} = 0$. $\square$
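The discretization $g^N$ used in this proof is easy to illustrate numerically. The sketch below (illustrative names, and a smooth test path $g$ chosen arbitrarily) shows the $L^2(0,1)$-error of the step approximation decreasing in $N$, which is what makes $\tilde u_N(g) = u(g^N) \to u(g)$ plausible for continuous $u$.

```python
import numpy as np

# Illustration (not part of the proof): the step approximation
# g^N = sum_i g(t_i) 1l_{[t_i, t_{i+1})}, t_i = i/(N+1), converges to g in L^2(0,1).
def l2_error(g, N, M=100_000):
    theta = (np.arange(M) + 0.5) / M            # midpoint grid on (0,1)
    t = np.floor(theta * (N + 1)) / (N + 1)     # left endpoint t_i of theta's cell
    return np.sqrt(np.mean((g(t) - g(theta)) ** 2))

g = lambda t: t ** 2                            # a smooth test path with g(0) = 0
errs = [l2_error(g, N) for N in (4, 16, 64)]
assert errs[0] > errs[1] > errs[2]              # error decreases as N grows
```

For this $g$ the error behaves like a constant times $1/N$, consistent with the piecewise-constant approximation rate.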

Proof of Proposition 4.4. It suffices to prove the claim for functions $f \in C(\mathcal{G}^l)$ of the form $f(g_1,\dots,g_l) = f_1(g_1) \cdot f_2(g_2) \cdots f_l(g_l)$ with $f_i \in C(\mathcal{G})$. Let $P^N_t : H_N \to H_N$ be the semigroup on $H_N$ induced by $(X_{N\cdot t})_t$ via $E^{q_N}_g[f(X_{N\cdot t})] = \langle P^N_t f, g\rangle_{H_N}$. From Theorem 4.8 and the abstract results in [50] it follows that $P^N_t$ converges to $P_t$ strongly, i.e.\ for any sequence $u_N \in H_N$ converging to some $u \in H$ strongly, the sequence $P^N_t u_N$ also strongly converges to $P_t u$. Let $f^N_i := f_i \circ \iota^N$; then inductive application of Lemma 4.25 yields
\[
P^N_{t_l-t_{l-1}}\big(f^N_l \cdot P^N_{t_{l-1}-t_{l-2}}\big(f^N_{l-1} \cdot P^N_{t_{l-2}-t_{l-3}} \cdots f^N_2 \cdot P^N_{t_1} f^N_1\big)\cdots\big) \xrightarrow{\,N\to\infty\,} P_{t_l-t_{l-1}}\big(f_l \cdot P_{t_{l-1}-t_{l-2}}\big(f_{l-1} \cdot P_{t_{l-2}-t_{l-3}} \cdots f_2 \cdot P_{t_1} f_1\big)\cdots\big) \quad \text{strongly},
\]
which in particular implies the convergence of inner products. Hence, using the Markov property of $g^N$ and $g$ we may conclude that

\begin{align*}
\lim_N E\big[ f_1(g^N_{t_1}) \cdots f_l(g^N_{t_l}) \big] &= \lim_N E\big[ f^N_1(X_{N\cdot t_1}) \cdots f^N_l(X_{N\cdot t_l}) \big] \\
&= \lim_N \big\langle 1,\ P^N_{t_l-t_{l-1}}\big(f^N_l \cdot P^N_{t_{l-1}-t_{l-2}}\big(f^N_{l-1} \cdot P^N_{t_{l-2}-t_{l-3}} \cdots f^N_2 \cdot P^N_{t_1} f^N_1\big)\cdots\big) \big\rangle_{H_N} \\
&= \big\langle 1,\ P_{t_l-t_{l-1}}\big(f_l \cdot P_{t_{l-1}-t_{l-2}}\big(f_{l-1} \cdot P_{t_{l-2}-t_{l-3}} \cdots f_2 \cdot P_{t_1} f_1\big)\cdots\big) \big\rangle_H \\
&= E\big[ f_1(g_{t_1}) \cdots f_l(g_{t_l}) \big]. \qquad (4.19)
\end{align*}
$\square$
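The mechanism behind (4.19), rewriting a product expectation as a nested application of the semigroup tested against the stationary law, can already be seen for a finite-state Markov chain. The following sketch is only an analogy; the transition matrix and the functions are arbitrary choices, not objects from the thesis.

```python
import numpy as np

# Toy illustration of the identity used in (4.19): for a stationary Markov
# chain with transition matrix P and stationary law pi,
#   E[f1(X_s) f2(X_t)] = <1, P^s (f1 * P^{t-s} f2)>_{L^2(pi)}.
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.3, 0.0, 0.7]])
w, V = np.linalg.eig(P.T)                       # stationary law: left Perron vector
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
f1 = np.array([1.0, 2.0, 3.0])
f2 = np.array([0.5, -1.0, 2.0])
s, t = 2, 5
Ps = np.linalg.matrix_power(P, s)
Pts = np.linalg.matrix_power(P, t - s)
# direct computation from the joint two-time law under stationarity
direct = sum(pi[i] * Ps[i, j] * f1[j] * Pts[j, k] * f2[k]
             for i in range(3) for j in range(3) for k in range(3))
nested = pi @ (Ps @ (f1 * (Pts @ f2)))          # <1, P^s(f1 * P^{t-s} f2)>_{L^2(pi)}
assert abs(direct - nested) < 1e-10
```

Iterating the same rewriting over $l$ time points gives exactly the nested expression appearing in the display above.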

4.5 A non-convex (1+1)-dimensional $\nabla\phi$-interface model

We conclude with a remark on a link to stochastic interface models, cf. [40]. Consider an interface on the one-dimensional lattice $\Gamma_N := \{1,\dots,N\}$, whose location at time $t$ is represented by the height variables $\phi_t = \{\phi_t(x),\ x \in \Gamma_N\} \in \sqrt{N}\cdot\Sigma_N$ with dynamics determined by the generator

$\tilde L^N$ defined below and with the boundary conditions $\phi_t(0) = 0$ and $\phi_t(N+1) = \sqrt{N}$ at $\partial\Gamma_N := \{0, N+1\}$:
\[
\tilde L^N f(\phi) := \Big( \frac{\beta}{N+1} - 1 \Big) \sum_{x \in \Gamma_N} \Big( \frac{1}{\phi(x)-\phi(x-1)} - \frac{1}{\phi(x+1)-\phi(x)} \Big) \frac{\partial}{\partial\phi(x)} f(\phi) + \Delta f(\phi)
\]
for $\phi \in \sqrt{N}\cdot\Sigma_N$ and with $\phi(0) := 0$ and $\phi(N+1) := \sqrt{N}$. $\tilde L^N$ corresponds to $L^N$ as an operator on $C^2(\sqrt{N}\cdot\Sigma_N)$ with Neumann boundary conditions. Note that this system involves a non-convex interaction potential function $V$ on $(0,\infty)$ given by $V(r) = \big(1 - \frac{\beta}{N+1}\big)\log(r)$ and the Hamiltonian

\[
H_N(\phi) := \sum_{x=0}^{N} V\big(\phi(x+1) - \phi(x)\big), \qquad \phi(0) := 0,\ \phi(N+1) := \sqrt{N}.
\]
Then, the natural stationary distribution of the interface is the Gibbs measure $\mu_N$ conditioned on $\sqrt{N}\cdot\Sigma_N$:
\[
\mu_N(d\phi) := \frac{1}{Z_N} \exp\big( - H_N(\phi) \big)\, \mathbf 1_{\{(\phi(1),\dots,\phi(N)) \in \sqrt{N}\cdot\Sigma_N\}} \prod_{x\in\Gamma_N} d\phi(x),
\]
where $Z_N$ is a normalization constant. Note that $\mu_N$ is the corresponding measure of $q_N$ on the state space $\sqrt{N}\cdot\Sigma_N$. Suppose now that $(\phi_t)_{t\ge 0}$ is the stationary process generated by $\tilde L^N$. Then the space-time scaled process

\[
\tilde\Phi^N_t(x) := \frac{1}{\sqrt{N}}\, \phi_{N^2 t}(x), \qquad x = 0,\dots,N+1,
\]
taking values in $\Sigma_N$, is associated with the Dirichlet form $N\cdot\mathcal{E}^N$. Introducing the $\mathcal{G}$-valued fluctuation field

\[
\Phi^N_t(\vartheta) := \iota^N(\tilde\Phi^N_t)(\vartheta) = \sum_{x\in\Gamma_N} \tilde\Phi^N_t(x)\, \mathbf 1_{[x/(N+1),\,(x+1)/(N+1))}(\vartheta), \qquad \vartheta \in [0,1),
\]
by our main result we have weak convergence of the law of the equilibrium fluctuation field $\Phi^N$ to the law of the nonlinear diffusion process $\kappa(\mu_\cdot)$ on $\mathcal{G}$, which is the $\mathcal{G}$-parametrization of the Wasserstein diffusion.

Appendix A

Conditional Expectation of the Dirichlet Process

This section is devoted to the proof of Lemma 4.14. First we recall some basic facts about the link between Gamma and Dirichlet processes. For $\alpha > 0$ we denote by $G(\alpha)$ the $dx$-absolutely continuous probability measure on $\mathbb{R}_+$ with density $\frac{1}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-x}$.
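As a quick numerical aside (an illustration with arbitrarily chosen parameters, not part of the text): the measures $G(\alpha)$ are precisely the increment laws of the standard Gamma process defined next, so a path built from independent $G$-distributed increments should have mean $E[\gamma(t)] = t$.

```python
import numpy as np

# Monte Carlo sanity check: increments of the standard Gamma process over
# [s, t] have law G(t - s), hence E[gamma(t)] = t.
rng = np.random.default_rng(3)
t1, t2, n = 0.7, 1.9, 400_000
inc1 = rng.gamma(t1, size=n)          # gamma(t1) - gamma(0) ~ G(t1)
inc2 = rng.gamma(t2 - t1, size=n)     # gamma(t2) - gamma(t1) ~ G(t2 - t1)
gamma_t2 = inc1 + inc2
assert abs(inc1.mean() - t1) < 1e-2
assert abs(gamma_t2.mean() - t2) < 1e-2
```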

Definition A.1. A real valued Markov process $(\gamma(t))_{t\ge 0}$ starting in zero is called standard Gamma process if its increments are independent and distributed according to
\[
\gamma(t) - \gamma(s) \sim G(t-s)
\]
for $0 \le s < t$. Fix $T > 0$ and $\beta > 0$. Then, the process $\big( \frac{\gamma(\beta t)}{\gamma(\beta T)} \big)_{t\in[0,T]}$ is called Dirichlet process on $[0,T]$ with parameter $\beta$. Its law will be denoted by $Q^{\beta,T}$ and is obviously concentrated on the set $\mathcal{G}_T := \{ g : [0,T) \to [0,1] \mid g$ c\`adl\`ag nondecreasing$\}$. The finite dimensional distributions are given by
\[
Q^{\beta,T}\big( g(t_1) \in dx^1, \dots, g(t_N) \in dx^N \big) = \frac{\Gamma(\beta T)}{\prod_{i=0}^N \Gamma(\beta(t_{i+1}-t_i))} \prod_{i=0}^{N} (x^{i+1} - x^i)^{\beta(t_{i+1}-t_i)-1}\, dx^1 \cdots dx^N,
\]
for every $N \in \mathbb{N}$ and every partition $0 = t_0 < t_1 < t_2 < \dots < t_N < t_{N+1} = T$, where $x^0 := 0$ and $x^{N+1} := 1$. In particular, for every bounded measurable function $u$,

\[
\int_{\mathcal{G}_T} u\big(g(t_1),\dots,g(t_N)\big)\, dQ^{\beta,T} = \frac{\Gamma(\beta T)}{\prod_{i=0}^N \Gamma(\beta(t_{i+1}-t_i))} \int_{\Sigma_N} u(x^1,\dots,x^N) \prod_{i=0}^{N} (x^{i+1}-x^i)^{\beta(t_{i+1}-t_i)-1}\, dx^1 \cdots dx^N,
\]
where $\Sigma_N$ is defined as before. Moreover, it is well known that the Gamma process $(\gamma(\beta t))_{t\in[0,T]}$ and its total charge $\gamma(\beta T)$ are independent (cf.\ e.g.\ [32]). In particular, alternatively the law $Q^{\beta,T}$ of

the Dirichlet process can be obtained by conditioning the law of the Gamma process $(\gamma(\beta t))_{t\in[0,T]}$ on the event $\gamma(\beta T) = 1$. In other words, the Dirichlet process on $[0,T]$ can be interpreted as some kind of Gamma bridge between zero and $T$. Finally, we recall that the Dirichlet process is a pure jump process. More precisely, the typical paths are strictly increasing but increase only by jumps and the jump locations are dense in $[0,T]$ (see Section 3 in [66] for more details).

Lemma A.2. For arbitrary $T > 0$

\[
E_{Q^{\beta,T}}[g] = \frac{\operatorname{Id}_{[0,T]}}{T}.
\]

Proof. We need to show that for every partition $0 = t_0 < t_1 < t_2 < \dots < t_{N+1} = T$ and every $i$ we have $E_{Q^{\beta,T}}[g(t_i)] = t_i/T$. We compute

\begin{align*}
E_{Q^{\beta,T}}[g(t_i)] &= \frac{\Gamma(\beta T)}{\prod_{j=0}^N \Gamma(\beta(t_{j+1}-t_j))} \int_{\Sigma_N} x^i \prod_{j=0}^{N} (x^{j+1}-x^j)^{\beta(t_{j+1}-t_j)-1}\, dx^1 \cdots dx^N \\
&= \frac{\Gamma(\beta T)}{\prod_{j=0}^N \Gamma(\beta(t_{j+1}-t_j))} \int_0^1 x^i\, A(x^i)\, B(x^i)\, dx^i
\end{align*}
with
\begin{align*}
A(x^i) &:= \int_0^{x^i} \int_0^{x^{i-1}} \cdots \int_0^{x^2} \prod_{j=0}^{i-1} (x^{j+1}-x^j)^{\beta(t_{j+1}-t_j)-1}\, dx^1 \cdots dx^{i-1}, \\
B(x^i) &:= \int_{x^i}^1 \int_{x^{i+1}}^1 \cdots \int_{x^{N-1}}^1 \prod_{j=i}^{N} (x^{j+1}-x^j)^{\beta(t_{j+1}-t_j)-1}\, dx^N \cdots dx^{i+1}.
\end{align*}
We will use the well known fact
\[
\int_a^b (z-a)^u (b-z)^v\, dz = \frac{\Gamma(1+u)\Gamma(1+v)}{\Gamma(2+u+v)}\, (b-a)^{1+u+v},
\]
for arbitrary $0 \le a < b$ and $u, v > -1$ to compute $A(x^i)$:
\begin{align*}
A(x^i) &= \int_0^{x^i} \cdots \int_0^{x^3} \prod_{j=2}^{i-1} (x^{j+1}-x^j)^{\beta(t_{j+1}-t_j)-1} \int_0^{x^2} (x^1)^{\beta t_1 - 1} (x^2 - x^1)^{\beta(t_2-t_1)-1}\, dx^1\, dx^2 \cdots dx^{i-1} \\
&= \Gamma(\beta t_1)\Gamma(\beta(t_2-t_1)) \int_0^{x^i} \cdots \int_0^{x^4} \prod_{j=3}^{i-1} (x^{j+1}-x^j)^{\beta(t_{j+1}-t_j)-1} \\
&\qquad \times \frac{1}{\Gamma(\beta t_2)} \int_0^{x^3} (x^2)^{\beta t_2 - 1} (x^3 - x^2)^{\beta(t_3-t_2)-1}\, dx^2\, dx^3 \cdots dx^{i-1} \\
&= \Gamma(\beta t_1)\Gamma(\beta(t_2-t_1))\, \frac{\Gamma(\beta(t_3-t_2))}{\Gamma(\beta t_3)} \int_0^{x^i} \cdots \int_0^{x^4} \prod_{j=3}^{i-1} (x^{j+1}-x^j)^{\beta(t_{j+1}-t_j)-1}\, (x^3)^{\beta t_3 - 1}\, dx^3 \cdots dx^{i-1}.
\end{align*}

Iterating this procedure yields

\[
A(x^i) = \frac{\prod_{j=0}^{i-1} \Gamma(\beta(t_{j+1}-t_j))}{\Gamma(\beta t_i)}\, (x^i)^{\beta t_i - 1}.
\]
Analogously, we obtain

\[
B(x^i) = \frac{\prod_{j=i}^{N} \Gamma(\beta(t_{j+1}-t_j))}{\Gamma(\beta(T - t_i))}\, (1 - x^i)^{\beta(T-t_i)-1}.
\]
Thus,

\begin{align*}
E_{Q^{\beta,T}}[g(t_i)] &= \frac{\Gamma(\beta T)}{\Gamma(\beta t_i)\Gamma(\beta(T-t_i))} \int_0^1 (x^i)^{\beta t_i} (1-x^i)^{\beta(T-t_i)-1}\, dx^i \\
&= \frac{\Gamma(\beta T)}{\Gamma(\beta t_i)\Gamma(\beta(T-t_i))} \cdot \frac{\Gamma(\beta t_i + 1)\Gamma(\beta(T-t_i))}{\Gamma(\beta T + 1)},
\end{align*}
and using $x\,\Gamma(x) = \Gamma(x+1)$, $x > 0$, we finally obtain

\[
E_{Q^{\beta,T}}[g(t_i)] = \frac{t_i}{T}. \qquad \square
\]
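Lemma A.2 is easy to probe by simulation. The sketch below is illustrative ($\beta$, $T$, $t$ are arbitrary choices) and uses the representation $g(t) = \gamma(\beta t)/\gamma(\beta T)$ from Definition A.1, under which $g(t)$ has a Beta distribution.

```python
import numpy as np

# Monte Carlo check of Lemma A.2: for the Dirichlet process with parameter beta
# on [0, T], E[g(t)] = t / T.  Here g(t) = gamma(beta t) / gamma(beta T) is
# simulated from two independent Gamma increments.
rng = np.random.default_rng(1)
beta, T, t, n = 2.0, 3.0, 1.0, 200_000
inc = rng.gamma(shape=[beta * t, beta * (T - t)], size=(n, 2))
g_t = inc[:, 0] / inc.sum(axis=1)     # ~ Beta(beta*t, beta*(T-t))
assert abs(g_t.mean() - t / T) < 5e-3
```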

Lemma A.3. Let $(\gamma(t))_{t\ge 0}$ denote the Gamma process and let $s > 0$ be arbitrary but fixed. Then, for every $a > 0$ and every measurable $F : \mathcal{G}_s \to \mathbb{R}$,

\[
E\big[ F(\gamma|_{[0,s)}) \,\big|\, \gamma(s) = a \big] = E\big[ F(a \cdot \gamma|_{[0,s)}) \,\big|\, \gamma(s) = 1 \big].
\]
Proof. We set $\bar\gamma := \frac{\gamma}{a}$. In particular note that $\frac{\bar\gamma}{\bar\gamma(s)} = \frac{\gamma}{\gamma(s)}$ is the Dirichlet process over $[0,s]$ (with parameter $\beta = 1$). Hence,

\begin{align*}
E\big[ F(\gamma|_{[0,s)}) \,\big|\, \gamma(s) = a \big] &= E\big[ F(a\,\bar\gamma|_{[0,s)}) \,\big|\, \bar\gamma(s) = 1 \big] = E\Big[ F\Big( a\, \frac{\bar\gamma|_{[0,s)}}{\bar\gamma(s)} \Big) \,\Big|\, \bar\gamma(s) = 1 \Big] \\
&= E\Big[ F\Big( a\, \frac{\bar\gamma|_{[0,s)}}{\bar\gamma(s)} \Big) \Big] = E\Big[ F\Big( a\, \frac{\gamma|_{[0,s)}}{\gamma(s)} \Big) \Big] = E\big[ F(a \cdot \gamma|_{[0,s)}) \,\big|\, \gamma(s) = 1 \big]. \qquad \square
\end{align*}
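For a concrete functional the scaling identity of Lemma A.3 can be checked numerically. Taking $F(h) = h(s/2)$ (an illustrative choice), the independence of $\gamma/\gamma(s)$ and $\gamma(s)$ used in the proof shows that both sides reduce to $a\,E[B]$ with $B \sim \mathrm{Beta}(s/2, s/2)$, i.e.\ to $a/2$:

```python
import numpy as np

# Numerical illustration of Lemma A.3 with F(h) = h(s/2): both
# E[gamma(s/2) | gamma(s) = a] and E[a * gamma(s/2) | gamma(s) = 1] equal
# a * E[B], where B = gamma(s/2)/gamma(s) ~ Beta(s/2, s/2), hence a/2.
rng = np.random.default_rng(2)
s, a, n = 4.0, 3.0, 200_000
u, v = rng.gamma(s / 2, size=(2, n))   # gamma(s/2) and gamma(s) - gamma(s/2)
both_sides = a * u / (u + v)           # a * (normalized process at s/2)
assert abs(both_sides.mean() - a / 2) < 2e-2
```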

Proof of Lemma 4.14. Setting $t_i = i/(N+1)$, $i = 0,\dots,N+1$, we have

\begin{align*}
E_{Q^\beta}[g\,|\,\mathcal{F}_N](X) &= E_{Q^\beta}\Big[ \sum_{i=0}^N g\, \mathbf 1_{[t_i,t_{i+1})} \,\Big|\, g(t_1) = x^1, \dots, g(t_N) = x^N \Big] \\
&= \sum_{i=0}^N \mathbf 1_{[t_i,t_{i+1})}\, E_{Q^\beta}\big[ g|_{[t_i,t_{i+1})} \,\big|\, g(t_i) = x^i,\ g(t_{i+1}) = x^{i+1} \big].
\end{align*}


Using the stationarity of the Gamma process and Lemma A.3 we obtain for every i

\begin{align*}
E_{Q^\beta}\big[ g|_{[t_i,t_{i+1})} \,\big|\, g(t_i) = x^i,\ g(t_{i+1}) = x^{i+1} \big] &= x^i + E\big[ \gamma(\beta\,\cdot)|_{[t_i,t_{i+1})} \,\big|\, \gamma(\beta t_i) = 0,\ \gamma(\beta t_{i+1}) = x^{i+1} - x^i \big] \\
&= x^i + E\big[ \gamma(\beta(\cdot - t_i))|_{[t_i,t_{i+1})} \,\big|\, \gamma(0) = 0,\ \gamma(\beta(t_{i+1}-t_i)) = x^{i+1} - x^i \big] \\
&= x^i + (x^{i+1} - x^i)\, E\big[ \gamma(\beta(\cdot - t_i))|_{[t_i,t_{i+1})} \,\big|\, \gamma(0) = 0,\ \gamma(\beta(t_{i+1}-t_i)) = 1 \big] \\
&= x^i + (x^{i+1} - x^i)\, E_{Q^{\beta,\,t_{i+1}-t_i}}\big[ g(\cdot - t_i)|_{[t_i,t_{i+1})} \big].
\end{align*}
Finally, by Lemma A.2,

\[
E_{Q^\beta}[g\,|\,\mathcal{F}_N](X) = \sum_{i=0}^N \mathbf 1_{[t_i,t_{i+1})} \Big( x^i + (x^{i+1} - x^i)\, \frac{\cdot - t_i}{t_{i+1} - t_i} \Big) = g^X. \qquad \square
\]

Bibliography

[1] H. Airault, Perturbations singulières et solutions stochastiques de problèmes de D. Neumann-Spencer, J. Math. Pures Appl. (9) 55, no. 3 (1976), pp. 233–267.
[2] S. Albeverio and M. Röckner, Classical Dirichlet forms on topological vector spaces—closability and a Cameron-Martin formula, J. Funct. Anal. 88, no. 2 (1990), pp. 395–436.
[3] S. Albeverio and M. Röckner, Dirichlet form methods for uniqueness of martingale problems and applications, in Stochastic analysis (Ithaca, NY, 1993), Amer. Math. Soc., Providence, RI, 1995, pp. 513–528.
[4] A. Alonso and F. Brambila-Paz, Lp-continuity of conditional expectations, J. Math. Anal. Appl. 221, no. 1 (1998), pp. 161–176.
[5] R. F. Anderson and S. Orey, Small random perturbation of dynamical systems with reflecting boundary, Nagoya Math. J. 60 (1976), pp. 189–216.
[6] S. Andres, Pathwise differentiability for stochastic differential equations with reflection (2006). Diploma thesis, Technische Universität Berlin.
[7] S. Andres, Pathwise differentiability for SDEs in a smooth domain with reflection (2008). Preprint, submitted.
[8] S. Andres, Pathwise differentiability for SDEs in a convex polyhedron with oblique reflection, Ann. Inst. H. Poincaré Probab. Statist. 45, no. 1 (2009), pp. 104–116.
[9] S. Andres and M.-K. von Renesse, Particle approximation of the Wasserstein diffusion (2008). Preprint, available on arXiv:0712.2387v1 [math.PR].
[10] S. Andres and M.-K. von Renesse, Regularity properties for a system of interacting two-sided Bessel processes (2008). Preprint, in preparation.
[11] R. F. Bass, K. Burdzy, and Z.-Q. Chen, Uniqueness for reflecting Brownian motion in lip domains, Ann. Inst. H. Poincaré Probab. Statist. 41, no. 2 (2005), pp. 197–235.
[12] R. F. Bass and P. Hsu, Some potential theory for reflecting Brownian motion in Hölder and Lipschitz domains, Ann. Probab. 19, no. 2 (1991), pp. 486–508.

[13] J. Bertoin, Excursions of a BES0(d) and its drift term (0 < d < 1), Probab. Theory Related Fields 84, no. 2 (1990), pp. 231–250.
[14] K. Burdzy, Differentiability of stochastic flow of reflected Brownian motions (2008). Preprint.
[15] K. Burdzy and Z.-Q. Chen, Coalescence of synchronous couplings, Probab. Theory Related Fields 123, no. 4 (2002), pp. 553–578.
[16] K. Burdzy, Z.-Q. Chen, and P. Jones, Synchronous couplings of reflected Brownian motions in smooth domains, Illinois J. Math. 50, no. 1-4 (2006), pp. 189–268 (electronic).


[17] K. Burdzy and J. M. Lee, Multiplicative functional for reflected Brownian motion via deterministic ODE (2008). Preprint.
[18] K. L. Chung, Lectures from Markov processes to Brownian motion, Grundlehren der Mathematischen Wissenschaften 249, Springer-Verlag, New York, 1982.
[19] K. L. Chung, Doubly-Feller process with multiplicative functional, in Seminar on stochastic processes, 1985 (Gainesville, Fla., 1985), Progr. Probab. Statist. 12, Birkhäuser Boston, Boston, MA, 1986, pp. 63–78.
[20] K. L. Chung and R. J. Williams, Introduction to stochastic integration, Probability and its Applications, Birkhäuser Boston Inc., Boston, MA, second ed., 1990.
[21] K. L. Chung and Z. X. Zhao, From Brownian motion to Schrödinger's equation, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 312, Springer-Verlag, Berlin, 1995.
[22] M. Cranston and Y. Le Jan, On the noncoalescence of a two point Brownian motion reflecting on a circle, Ann. Inst. H. Poincaré Probab. Statist. 25, no. 2 (1989), pp. 99–107.
[23] M. Cranston and Y. Le Jan, Noncoalescence for the Skorohod equation in a convex domain of R^2, Probab. Theory Related Fields 87, no. 2 (1990), pp. 241–252.
[24] D. A. Dawson, Measure-valued Markov processes, in École d'Été de Probabilités de Saint-Flour XXI—1991, Lecture Notes in Math. 1541, Springer, Berlin, 1993, pp. 1–260.
[25] I. V. Denisov, A random walk and a Wiener process near a maximum, Theor. Prob. Appl. 28 (1984), pp. 821–824.
[26] J.-D. Deuschel and L. Zambotti, Bismut-Elworthy's formula and random walk representation for SDEs with reflection, Stochastic Process. Appl. 115, no. 6 (2005), pp. 907–925.
[27] M. Döring and W. Stannat, The Logarithmic Sobolev Inequality for the Wasserstein Diffusion (2008). Preprint, to appear in PTRF.
[28] P. Dupuis and H. Ishii, On Lipschitz continuity of the solution mapping to the Skorokhod problem, with applications, Stochastics Stochastics Rep. 35, no. 1 (1991), pp. 31–62.
[29] P. Dupuis and H. Ishii, SDEs with oblique reflection on nonsmooth domains, Ann. Probab. 21, no. 1 (1993), pp. 554–580.
[30] E. B. Dynkin, Markov processes. Vols. I, II, Die Grundlehren der Mathematischen Wissenschaften 122, Academic Press Inc., Publishers, New York, 1965.
[31] A. Eberle, Uniqueness and non-uniqueness of semigroups generated by singular diffusion operators, Springer-Verlag, Berlin, 1999.
[32] M. Émery and M. Yor, A parallel between Brownian bridges and gamma bridges, Publ. Res. Inst. Math. Sci. 40, no. 3 (2004), pp. 669–688.
[33] S. N. Ethier and T. G. Kurtz, Markov processes, Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics, John Wiley & Sons Inc., New York, 1986. Characterization and convergence.
[34] E. Fabes, D. Jerison, and C. Kenig, The Wiener test for degenerate elliptic equations, Ann. Inst. Fourier (Grenoble) 32, no. 3 (1982), pp. vi, 151–182.
[35] E. B. Fabes, C. E. Kenig, and R. P. Serapioni, The local regularity of solutions of degenerate elliptic equations, Comm. Partial Differential Equations 7, no. 1 (1982), pp. 77–116.

[36] T. Fattler and M. Grothaus, Strong Feller properties for distorted Brownian motion with reflecting boundary condition and an application to continuous N-particle systems with singular interactions, J. Funct. Anal. 246, no. 2 (2007), pp. 217–241.
[37] M. Fukushima, A construction of reflecting barrier Brownian motions for bounded domains, Osaka J. Math. 4 (1967), pp. 183–215.
[38] M. Fukushima, On semi-martingale characterizations of functionals of symmetric Markov processes, Electron. J. Probab. 4 (1999), no. 18, 32 pp. (electronic).
[39] M. Fukushima, Y. Ōshima, and M. Takeda, Dirichlet forms and symmetric Markov processes, Walter de Gruyter & Co., Berlin, 1994.
[40] T. Funaki, Stochastic interface models, in Lectures on probability theory and statistics, Lecture Notes in Math. 1869, Springer, Berlin, 2005, pp. 103–274.
[41] M. Grothaus, Y. G. Kondratiev, and M. Röckner, N/V-limit for stochastic dynamics in continuous particle systems, Probab. Theory Related Fields 137, no. 1-2 (2007), pp. 121–160.
[42] N. Ikeda and S. Watanabe, Stochastic differential equations and diffusion processes, North-Holland Mathematical Library 24, North-Holland Publishing Co., Amsterdam, 1981.
[43] J.-P. Imhof, Density factorizations for Brownian motion, meander and the three-dimensional Bessel process, and applications, J. Appl. Probab. 21, no. 3 (1984), pp. 500–510.
[44] K. Itô and H. P. McKean, Jr., Diffusion processes and their sample paths, Springer-Verlag, Berlin, 1974. Second printing, corrected, Die Grundlehren der mathematischen Wissenschaften, Band 125.
[45] I. Karatzas and S. E. Shreve, Brownian motion and stochastic calculus, Graduate Texts in Mathematics 113, Springer-Verlag, New York, second ed., 1991.
[46] T. Kilpeläinen, Weighted Sobolev spaces and capacity, Ann. Acad. Sci. Fenn. Ser. A I Math. 19, no. 1 (1994), pp. 95–113.
[47] C. Kipnis and C. Landim, Scaling limits of interacting particle systems, Springer-Verlag, Berlin, 1999.
[48] A. V. Kolesnikov, Mosco convergence of Dirichlet forms in infinite dimensions with changing reference measures, J. Funct. Anal. 230, no. 2 (2006), pp. 382–418.
[49] A. M. Kulik, Markov uniqueness and Rademacher theorem for smooth measures on infinite-dimensional space under successful filtration condition, Ukraïn. Mat. Zh. 57, no. 2 (2005), pp. 170–186.
[50] K. Kuwae and T. Shioya, Convergence of spectral structures: a functional analytic theory and its applications to spectral geometry, Comm. Anal. Geom. 11, no. 4 (2003), pp. 599–673.
[51] P.-L. Lions and A.-S. Sznitman, Stochastic differential equations with reflecting boundary conditions, Comm. Pure Appl. Math. 37, no. 4 (1984), pp. 511–537.
[52] U. Mosco, Composite media and asymptotic Dirichlet forms, J. Funct. Anal. 123, no. 2 (1994), pp. 368–421.
[53] K. R. Parthasarathy, Probability measures on metric spaces, Probability and Mathematical Statistics, No. 3, Academic Press Inc., New York, 1967.
[54] A. Pilipenko, Stochastic flows with reflection (2008). Preprint, available on arXiv:0810.4644.
[55] P. Protter, Stochastic integration and differential equations, Applications of Mathematics (New York) 21, Springer-Verlag, Berlin, 1990. A new approach.

[56] D. Revuz and M. Yor, Continuous martingales and Brownian motion, Grundlehren der Mathematischen Wissenschaften 293, Springer-Verlag, Berlin, third ed., 1999.
[57] F. Russo, P. Vallois, and J. Wolf, A generalized class of Lyons-Zheng processes, Bernoulli 7, no. 2 (2001), pp. 363–379.
[58] D. W. Stroock, Diffusion semigroups corresponding to uniformly elliptic divergence form operators, in Séminaire de Probabilités, XXII, Lecture Notes in Math. 1321, Springer, Berlin, 1988, pp. 316–347.
[59] K. T. Sturm, Analysis on local Dirichlet spaces. III. The parabolic Harnack inequality, J. Math. Pures Appl. (9) 75, no. 3 (1996), pp. 273–297.
[60] H. Tanaka, Stochastic differential equations with reflecting boundary condition in convex regions, Hiroshima Math. J. 9, no. 1 (1979), pp. 163–177.
[61] A. Torchinsky, Real-variable methods in harmonic analysis, Pure and Applied Mathematics 123, Academic Press Inc., Orlando, FL, 1986.
[62] G. Trutnau, Skorokhod decomposition of reflected diffusions on bounded Lipschitz domains with singular non-reflection part, Probab. Theory Related Fields 127, no. 4 (2003), pp. 455–495.
[63] B. O. Turesson, Nonlinear potential theory and weighted Sobolev spaces, Lecture Notes in Mathematics 1736, Springer-Verlag, Berlin, 2000.
[64] S. R. S. Varadhan and R. J. Williams, Brownian motion in a wedge with oblique reflection, Comm. Pure Appl. Math. 38, no. 4 (1985), pp. 405–443.
[65] C. Villani, Topics in optimal transportation, Graduate Studies in Mathematics 58, American Mathematical Society, Providence, RI, 2003.
[66] M.-K. von Renesse and K.-T. Sturm, Entropic Measure and Wasserstein Diffusion (2007). Preprint, arXiv:0704.0704, to appear in Ann. Probab.
[67] M.-K. von Renesse, M. Yor, and L. Zambotti, Quasi-invariance properties of a class of subordinators (2007). Preprint, arXiv:0706.3010, to appear in Stochastic Process. Appl.