
arXiv:1901.02556v1 [math.PR] 8 Jan 2019

Weak quantitative propagation of chaos via differential calculus on the space of measures

Jean-François Chassagneux (LPSM, Université Paris Diderot), Lukasz Szpruch (School of Mathematics, University of Edinburgh), and Alvin Tse

Abstract

The aim of this work is to provide an exact weak error expansion between a (nonlinear) functional of the law of a process described by a McKean-Vlasov stochastic differential equation and the corresponding functional of an empirical measure. We distinguish two cases: a) the empirical measure is built from $N$ samples of the law; b) the empirical measure is built from the marginal laws of the corresponding particle system. The first case is studied using the regularity of the functional on the space of probability measures. The second case relies on an Itô-type formula for the flow of probability measures and is intimately connected to PDEs on the space of measures, called master equations in the literature of mean-field games. We state the general regularity conditions required for each case and analyse the regularity in the case of functionals of laws of McKean-Vlasov SDEs. Ultimately, this work reveals quantitative estimates of propagation of chaos for interacting particle systems. Furthermore, we are able to provide weak propagation of chaos estimates for ensembles of interacting particles and show that these may have some remarkable properties.

1 Introduction

Consider the metric space $\mathcal{P}_2(\mathbb{R}^d)$ of square integrable laws on $\mathbb{R}^d$ with the topology induced by the 2-Wasserstein distance $W_2$. Let $\Phi : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ be a function and $\mu \in \mathcal{P}_2(\mathbb{R}^d)$. The aim of this paper is to show that, under suitable regularity conditions, we have
\[
| \Phi(\mu) - \mathbb{E}\Phi(\mu_N) | = \sum_{j=1}^{k-1} \frac{C_j}{N^j} + O\Big(\frac{1}{N^k}\Big),
\]
for some positive constants $C_1, \ldots, C_{k-1}$ that do not depend on $N$, where $\mu_N \in \mathcal{P}_2(\mathbb{R}^d)$ is an empirical measure. We distinguish two cases: a) $\mu_N$ is the empirical measure of a sample of $N$ random variables distributed as $\mu$; b) $\mu$ is a marginal law of a McKean-Vlasov stochastic differential equation (McKV-SDE), in which case $\mu_N$ is the empirical measure of the marginal laws of the corresponding particle system.

In the first case, where $\mu_N$ is the empirical measure of $N$ samples from $\mu$, the only interesting case is when the functional $\Phi$ is non-linear. To provide some context for our results, one may, for example, assume that $\Phi$ is Lipschitz continuous with respect to the Wasserstein distance, i.e. there exists a constant $C > 0$ such that

\[
| \Phi(\mu) - \Phi(\nu) | \le C\, W_2(\mu, \nu), \qquad \forall\, \mu, \nu \in \mathcal{P}_2(\mathbb{R}^d),
\]
so that one could bound $|\Phi(\mu) - \mathbb{E}\Phi(\mu_N)|$ by $\mathbb{E} W_2(\mu, \mu_N)$. Consequently, following [15] or [14], the rate of convergence in the number of samples $N$ deteriorates as the dimension $d$ increases. On the other hand, recently, the authors of [13, Lem. 5.10] made a remarkable observation: if the functional $\Phi$ is twice differentiable with respect to the linear functional derivative (see Section 2.1.1), then one can obtain a dimension-independent bound for the strong error $\mathbb{E}|\Phi(\mu) - \Phi(\mu_N)|^p$, $p \le 4$, which is of order $O(N^{-1/2})$ (as expected from the CLT). Here, we study the weak error and show (see Theorem 2.17) that if $\Phi$ is $(2k+1)$-times differentiable with respect to the linear functional derivative, then indeed we have

\[
| \Phi(\mu) - \mathbb{E}\Phi(\mu_N) | = \sum_{j=1}^{k-1} \frac{C_j}{N^j} + O\Big(\frac{1}{N^k}\Big),
\]
for some positive and explicit constants $C_1, \ldots, C_{k-1}$ that do not depend on $N$. The result is of independent interest, but is also needed to obtain a complete expansion for the error in particle approximations of McKV-SDEs, which we discuss next.

The second situation we treat in this work concerns estimates of propagation-of-chaos type¹. Consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with a $d$-dimensional Brownian motion $W$. We are interested in the McKean-Vlasov process $\{X_t^{s,\xi}\}_{t \in [s,T]}$ with interacting kernels $b$ and $\sigma$, starting from a random variable $\xi$, defined by the SDE²

\[
X_t^{0,\xi} = \xi + \int_0^t b\big(X_r^{0,\xi}, \mathcal{L}(X_r^{0,\xi})\big)\, dr + \int_0^t \sigma\big(X_r^{0,\xi}, \mathcal{L}(X_r^{0,\xi})\big)\, dW_r, \qquad t \in [0, T], \tag{1.1}
\]
where $\mathcal{L}(X_r^{s,\xi})$ denotes the law of $X_r^{s,\xi}$ and the functions $b = (b_i)_{1 \le i \le d} : \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}^d$ and $\sigma = (\sigma_{i,j})_{1 \le i,j \le d} : \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}^d \otimes \mathbb{R}^d$ satisfy suitable conditions so that there exists a unique weak solution (see e.g. [34] or, for a more up-to-date panorama of the research on existence and uniqueness, [32, 20, 1, 12]). The McKean-Vlasov SDE (1.1) can be derived as a limit of interacting diffusions. Indeed, one can approximate the law $\mathcal{L}(X_\cdot^{0,\xi})$ by the empirical measure $\mathbb{X}_\cdot^N := \frac{1}{N} \sum_{j=1}^N \delta_{\mathcal{X}_\cdot^{j,N}}$ generated by $N$ particles $\{\mathcal{X}^{i,N}\}_{1 \le i \le N}$ defined by
\[
\mathcal{X}_t^{i,N} = \xi_i + \int_0^t b\big(\mathcal{X}_r^{i,N}, \mathbb{X}_r^N\big)\, dr + \int_0^t \sigma\big(\mathcal{X}_r^{i,N}, \mathbb{X}_r^N\big)\, dW_r^i, \qquad 1 \le i \le N, \; t \in [0, T], \tag{1.2}
\]

¹We would like to remark that our results also cover the situation where the law $\mu$ is induced by a system of stochastic differential equations with random initial conditions that are not of McKean-Vlasov type, in which case the samples are i.i.d.

²We assume without loss of generality that the dimensions of $X$ and $W$ are the same, because we will not make any non-degeneracy assumption on the diffusion coefficient $\sigma$ in our work. In particular, one dimension of $X$ could be time itself.

where $W^i$, $1 \le i \le N$, are independent $d$-dimensional Brownian motions and $\xi_i$, $1 \le i \le N$, are i.i.d. random variables with the same distribution as $\xi$. It is well known, [34, Prop. 2.1], that the property of propagation of chaos is equivalent to weak convergence of the measure-valued random variables $\mathbb{X}_t^N$ to $\mathcal{L}(X_t)$. A common strategy is to establish tightness of $\pi^N = \mathcal{L}(\mathbb{X}_t^N) \in \mathcal{P}(\mathcal{P}(\mathbb{R}^d))$ and to identify the limit by showing that $\pi^N$ converges weakly to $\delta_{\mathcal{L}(X_t)}$. This approach does not reveal the quantitative bounds we seek in this paper, but is a very active area of research. We refer the reader to [18, 34, 29] for the classical results in this direction and to [22, 3, 16, 30, 26] for a (non-exhaustive) account of recent results. On the other hand, the results on quantitative propagation of chaos are few and far between. In the case when the coefficients of (1.1) depend on the measure component linearly, i.e. are of the form
\[
b(x, \mu) = \int_{\mathbb{R}^d} B(x, y)\, \mu(dy), \qquad \sigma(x, \mu) = \int_{\mathbb{R}^d} \Sigma(x, y)\, \mu(dy),
\]
with $B, \Sigma$ being Lipschitz continuous in both variables, it follows from a simple calculation [34] that $W_2(\mathcal{L}(\mathcal{X}_t^{i,N}), \mathcal{L}(X_t^{0,\xi})) = O(N^{-1/2})$. We refer to Sznitman's result as strong propagation of chaos. Note that in this work we treat the case of McKean-Vlasov SDEs with coefficients with general measure dependence. In that case, as explicitly demonstrated in [7, Ch. 1], the rate of strong propagation of chaos deteriorates with the dimension $d$. This is due to the fact that one needs to estimate the difference between the empirical law of i.i.d. samples from $\mu$ and $\mu$ itself, using results such as [15] or [14]. In the special case when the diffusion coefficient is constant and the drift has linear measure dependence (and lies in some negative Sobolev space), the rate of convergence in the total variation norm has been shown to be $O(1/\sqrt{N})$ in [21].
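For intuition, the particle system (1.2) is straightforward to simulate in the linear-in-measure setting just described. The following Euler-Maruyama sketch is our own illustration (the toy attraction kernel $B(x,y) = -(x-y)$, the constant diffusion coefficient, and all parameter values are assumptions for the example, not taken from the paper); it makes the coupling through the empirical measure explicit:

```python
import numpy as np

def simulate_particles(N, T, n_steps, B, sigma, xi_sampler, rng):
    """Euler-Maruyama scheme for the interacting particle system (1.2),
    in the special case of a linear-in-measure drift
    b(x, mu) = int B(x, y) mu(dy) and a constant diffusion coefficient."""
    dt = T / n_steps
    x = xi_sampler(N, rng)                 # i.i.d. initial conditions, shape (N,)
    for _ in range(n_steps):
        # drift against the empirical measure: (1/N) * sum_j B(x_i, x_j)
        drift = B(x[:, None], x[None, :]).mean(axis=1)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    return x

# toy kernel B(x, y) = -(x - y): every particle is attracted to the empirical mean
rng = np.random.default_rng(0)
xT = simulate_particles(N=1000, T=1.0, n_steps=100,
                        B=lambda x, y: -(x - y), sigma=0.5,
                        xi_sampler=lambda n, r: r.standard_normal(n), rng=rng)
print(xT.mean(), xT.std())
```

With the attractive kernel the cloud of particles contracts around its empirical mean, which is the qualitative behaviour the mean-field limit predicts.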
Of course, in the strong setting, $O(1/\sqrt{N})$ is widely considered to be optimal, as it corresponds to the size of the stochastic fluctuations predicted by the CLT.

In this work, we are interested in weak quantitative estimates of propagation of chaos. This new direction of research has been put forward very recently by two independent works, [24, Ch. 9] and [31, Th. 2.1]. The authors presented novel weak estimates of propagation of chaos for linear functionals of the measure, i.e. $\Phi(\mu) := \int_{\mathbb{R}^d} F(x)\, \mu(dx)$ with $F : \mathbb{R}^d \to \mathbb{R}$ being smooth. This gives the rate of convergence $O(1/N)$, plus the error due to the approximation of the functional of the initial law (see [31, Lem. 4.6] for a discussion of a dimension-dependent case). While the aim of [31] is to establish quantitative propagation of chaos for the Boltzmann equation, in the spirit of Kac's programme [23, 28], Theorem 6.1 in [31] specialises their result to the McKV-SDEs studied here, but only for elliptic diffusion coefficients that do not depend on the measure and symmetric Lipschitz drifts with linear measure dependence. The key idea behind both results is to work with the semigroup that acts on the space of functions of measures, sometimes called the lifted semigroup, which can be viewed as a dual to the space of probability measures $\mathcal{P}(\mathbb{R}^d)$, as presented in [30]. A similar research programme, but in the context of mean-field games with a common noise, has been successfully undertaken in [6]. In that work, the authors study the master equation, which is a PDE driven by the Markov generator of the lifted semigroup. They show that the existence of a classical solution to that PDE is the key to obtaining quantitative bounds between an $n$-player Nash system and its mean-field limit. Indeed, a perturbation analysis of the PDE on the space of measures leads to the weak error being of the order $O(1/N)$. In this work we build on these observations, and identify minimal assumptions for the expansion in the number of particles $N$ to hold.
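The weak expansion for the i.i.d. sampling case admits a fully explicit sanity check for the simplest non-linear functional. For $\Phi(\mu) = (\int x\, \mu(dx))^2$ and i.i.d. samples, $\mathbb{E}[\Phi(\mu_N)] = (\mathbb{E}_\mu X)^2 + \mathrm{Var}_\mu(X)/N$ exactly, so $C_1 = \mathrm{Var}_\mu(X)$ and all higher-order terms vanish. The following Monte Carlo sketch (our own illustration, with an assumed Gaussian law $\mu$) confirms this behaviour:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_mean, mu_var = 2.0, 1.5          # mu = N(2, 1.5), so Phi(mu) = 4 and C_1 = 1.5

def mc_phi_of_empirical(N, n_mc):
    """Monte Carlo estimate of E[Phi(mu_N)] for Phi(mu) = (int x mu(dx))^2."""
    samples = rng.normal(mu_mean, np.sqrt(mu_var), size=(n_mc, N))
    return float((samples.mean(axis=1) ** 2).mean())

for N in (10, 20, 40):
    err = abs(mc_phi_of_empirical(N, 200_000) - mu_mean ** 2)
    print(N, err, mu_var / N)       # err tracks C_1 / N with C_1 = Var(X)
```

Doubling $N$ roughly halves the weak error, consistent with a pure $C_1/N$ term.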
Next, we verify these assumptions for McKV-SDEs with a general drift

and general (and possibly non-elliptic) diffusion coefficients. We also consider non-linear functionals of the measure. The main theorem in this paper, Theorem 2.17, states that, given sufficient regularity, we have
\[
\mathbb{E}\big[ \Phi(\mathbb{X}_T^N) \big] - \Phi\big(\mathcal{L}(X_T^{0,\xi})\big) = \sum_{j=1}^{k-1} \frac{C_j}{N^j} + O\Big(\frac{1}{N^k}\Big),
\]
where $C_1, \ldots, C_{k-1}$ are constants that do not depend on $N$.

As mentioned above, the method of expansion relies heavily on the calculus on $(\mathcal{P}_2(\mathbb{R}^d), W_2)$, and we follow the approach presented by P.-L. Lions in his course at the Collège de France [27] (redacted by Cardaliaguet [5]). To obtain such an expansion, one needs to rely on some smoothness property of the solution of (1.1), as is always the case when one wants to get an error expansion for some approximating procedure (see [36] for a similar expansion of the weak error of SDE approximation by time-discretisation). The important object in our study, similarly to [6], is the PDE written on the space $[0, T] \times \mathcal{P}_2(\mathbb{R}^d)$, which corresponds to the lifted semigroup and comes from the Itô formula for functionals of measures established in [4] and [9]. The smoothness properties of the functions $\mathcal{V}^{(m)}$ (see Definition 2.6) required for the expansion (2.17) to hold are formulated in Theorem 2.9. A natural question is then to identify some sufficient conditions on the SDE coefficients that guarantee the smoothness of the functions $\mathcal{V}^{(m)}$. We give one possible answer to this question in Theorem 2.17. In that theorem, we show that if the coefficients of the SDE are smooth, then the functions $\mathcal{V}^{(m)}$ are also smooth enough, provided that $\Phi$ is itself smooth. This result, which is expected, comes from an extension of Theorem 7.2 in [4] (see Theorem 2.15). While in the current paper we assume a high order of smoothness of the coefficients of the McKV-SDE and of $\Phi$, we anticipate this general approach to be valid in a less regular setting.
Indeed, when working in a strictly elliptic setting with some structural conditions, the lifted semigroup may be smooth even when the drift and diffusion coefficients are irregular. This has been demonstrated in [11, 12]. Similarly, $\Phi$ does not need to be smooth for the lifted semigroup to be differentiable in the measure direction; this has been shown using techniques of Malliavin calculus in [10]. Finally, when the underlying equation has some special structure, a more classical approach can be deployed to study the weak propagation of chaos property [2]. The analysis of irregular cases goes beyond the scope of this paper.

To sum up, there are three main contributions in this paper. Firstly, the main result (Theorem 2.17) allows us to use Romberg extrapolation to obtain an estimator of $X$ with weak error of the order $O(\frac{1}{N^k})$, for each $k \in \mathbb{N}$ (see Section 1.1 for details). Thus, effectively, a higher-order particle system (in terms of the weak error) can be constructed up to a desired order of approximation. Secondly, the analysis in this paper makes use of the notions of measure derivatives and linear functional derivatives, generalising them to an arbitrary order of differentiation. This is in line with the approach in [10]. Some properties (e.g. Lemma 2.5) relate the regularity of the two notions of derivatives in measure and might be of independent interest. In particular, the generalisation of Theorem 7.2 in [4] from second order derivatives in measure to higher order derivatives proves to be useful in the analysis of McKean-Vlasov SDEs in general. Finally, as a by-product of the weak error expansion, a version of the law of large numbers in terms of functionals of measures is developed in Theorem 2.12.

1.1 Romberg extrapolation and ensembles of particles

In this section we construct an ensemble particle system in the spirit of Richardson's extrapolation method [33], which has been studied in the context of time-discretisation of SDEs in [36] and in the context of discretisation of SPDEs in [19].

Let $F : \mathbb{R}^d \to \mathbb{R}$ be a Borel-measurable function and define $\Phi(\mu) := \int_{\mathbb{R}^d} F(x)\, \mu(dx)$. Observe that
\[
\mathbb{E}[\Phi(\mathbb{X}_T^N)] = \mathbb{E}\Big[ \frac{1}{N} \sum_{i=1}^N F(\mathcal{X}_T^{i,N}) \Big] = \mathbb{E}[F(\mathcal{X}_T^{1,N})].
\]
Hence, the weak error reads $| \mathbb{E}[F(X_T^{0,\xi})] - \mathbb{E}[F(\mathcal{X}_T^{1,N})] |$. By the result of Theorem 2.17, we can apply the technique of Romberg extrapolation to construct an estimator which approximates $\mathbb{E}[F(X_T^{0,\xi})]$ with weak error of the order $O(1/N^k)$. More precisely, for $k = 2$, since $C_1$ is independent of $N$,
\[
\mathbb{E} F(\mathcal{X}_T^{i,N}) - \mathbb{E}[F(X_T^{0,\xi})] = \frac{C_1}{N} + O\Big(\frac{1}{N^2}\Big)
\]
and
\[
\mathbb{E} F(\mathcal{X}_T^{i,2N}) - \mathbb{E}[F(X_T^{0,\xi})] = \frac{C_1}{2N} + O\Big(\frac{1}{N^2}\Big).
\]
Hence,
\[
2\, \mathbb{E} F(\mathcal{X}_T^{i,2N}) - \mathbb{E} F(\mathcal{X}_T^{i,N}) - \mathbb{E}[F(X_T^{0,\xi})] = O\Big(\frac{1}{N^2}\Big).
\]
For general $k$, we can use a similar method to show that

\[
\sum_{m=1}^{k} \alpha_m\, \mathbb{E} F(\mathcal{X}_T^{i,mN}) - \mathbb{E}[F(X_T^{0,\xi})] = O\Big(\frac{1}{N^k}\Big),
\]
where
\[
\alpha_m = (-1)^{k-m} \frac{m^k}{m!\,(k-m)!}, \qquad 1 \le m \le k.
\]
To motivate the study of the weak error expansion, we will analyse an estimator that uses $M$ ensembles of particles. Fix $M \ge 1$. The ensembles are indexed by $j$. For $j \in \{1, \ldots, M\}$, consider
\[
\mathcal{X}_t^{(i,j),N} = \xi_{(i,j)} + \int_0^t b\big(\mathcal{X}_r^{(i,j),N}, \mathbb{X}_r^{(j,N)}\big)\, dr + \int_0^t \sigma\big(\mathcal{X}_r^{(i,j),N}, \mathbb{X}_r^{(j,N)}\big)\, dW_r^{(i,j)}, \qquad 1 \le i \le N, \tag{1.3}
\]
where $\{W^{(i,j)} : 1 \le i \le N\}_{1 \le j \le M}$ are $M$ independent ensembles, each consisting of $N$ $d$-dimensional Brownian motions, and $\{\xi_{(i,j)} : 1 \le i \le N\}_{1 \le j \le M}$ are $M$ independent ensembles, each consisting of $N$ i.i.d. random variables with the same distribution as $\xi$. We consider the following estimator

\[
\frac{1}{M} \sum_{j=1}^{M} \sum_{m=1}^{k} \alpha_m \frac{1}{mN} \sum_{i=1}^{mN} F\big(\mathcal{X}_T^{(i,j),mN}\big).
\]
Next, we analyse the mean-square error³ of this estimator:

\[
\begin{aligned}
\mathbb{E}\Big[ \Big( \mathbb{E}[F(X_T)] - \frac{1}{M} \sum_{j=1}^{M} \sum_{m=1}^{k} \alpha_m \frac{1}{mN} \sum_{i=1}^{mN} F\big(\mathcal{X}_T^{(i,j),mN}\big) \Big)^2 \Big]
&\le 2 \Big( \mathbb{E}[F(X_T)] - \sum_{m=1}^{k} \alpha_m\, \mathbb{E} F(\mathcal{X}_T^{1,mN}) \Big)^2 \\
&\quad + 2\, \mathbb{E}\Big[ \Big( \mathbb{E}\Big[ \sum_{m=1}^{k} \alpha_m \frac{1}{mN} \sum_{i=1}^{mN} F(\mathcal{X}_T^{i,mN}) \Big] - \frac{1}{M} \sum_{j=1}^{M} \sum_{m=1}^{k} \alpha_m \frac{1}{mN} \sum_{i=1}^{mN} F\big(\mathcal{X}_T^{(i,j),mN}\big) \Big)^2 \Big].
\end{aligned}
\]
The first term on the right-hand side is studied in Theorem 2.17 and, provided that the coefficients of (1.1) are sufficiently smooth, it converges with order $O(N^{-2k})$. Control of the second term follows from the qualitative strong propagation of chaos. Indeed, we write

\[
\begin{aligned}
\mathrm{Var}\Big[ \frac{1}{M} \sum_{j=1}^{M} \sum_{m=1}^{k} \alpha_m \frac{1}{mN} \sum_{i=1}^{mN} F\big(\mathcal{X}_T^{(i,j),mN}\big) \Big]
&\le 2\, \mathrm{Var}\Big[ \frac{1}{M} \sum_{j=1}^{M} \sum_{m=1}^{k} \alpha_m \frac{1}{mN} \sum_{i=1}^{mN} F\big(X_T^{(i,j)}\big) \Big] \\
&\quad + 2\, \mathrm{Var}\Big[ \frac{1}{M} \sum_{j=1}^{M} \sum_{m=1}^{k} \alpha_m \frac{1}{mN} \sum_{i=1}^{mN} \Big( F\big(\mathcal{X}_T^{(i,j),mN}\big) - F\big(X_T^{(i,j)}\big) \Big) \Big],
\end{aligned}
\]
where $X^{(i,j)}$ denotes the solution of (1.1) driven by $W^{(i,j)}$ with initial datum $\xi_{(i,j)}$. Hence, independence implies that

\[
\mathrm{Var}\Big[ \frac{1}{M} \sum_{j=1}^{M} \sum_{m=1}^{k} \alpha_m \frac{1}{mN} \sum_{i=1}^{mN} F\big(X_T^{(i,j)}\big) \Big]
\le \frac{1}{M} \sum_{m=1}^{k} \alpha_m^2 \frac{1}{mN}\, \mathrm{Var}\big[ F\big(X_T^{(1,1)}\big) \big].
\]
On the other hand,

\[
\begin{aligned}
\mathrm{Var}\Big[ \frac{1}{M} \sum_{j=1}^{M} \sum_{m=1}^{k} \alpha_m \frac{1}{mN} \sum_{i=1}^{mN} \Big( F\big(\mathcal{X}_T^{(i,j),mN}\big) - F\big(X_T^{(i,j)}\big) \Big) \Big]
&\le \frac{k^2}{M} \sum_{m=1}^{k} \alpha_m^2\, \mathbb{E}\Big[ \Big( \frac{1}{mN} \sum_{i=1}^{mN} \Big( F\big(\mathcal{X}_T^{(i,j),mN}\big) - F\big(X_T^{(i,j)}\big) \Big) \Big)^2 \Big] \\
&\le \frac{k^2}{M} \sum_{m=1}^{k} \alpha_m^2 \frac{1}{mN} \sum_{i=1}^{mN} \mathbb{E}\Big[ \big( F\big(\mathcal{X}_T^{(i,j),mN}\big) - F\big(X_T^{(i,j)}\big) \big)^2 \Big],
\end{aligned}
\]
where Jensen's inequality is used. Using the fact that $F$ is Lipschitz continuous and the dimension-free bound for strong propagation of chaos established in [35], there exists a constant $C > 0$ with no

³We look at the mean-square error for simplicity, but a similar computation could be done to verify the Lindeberg condition and produce a CLT with an appropriate scaling.

dependence on $N$ such that
\[
\mathbb{E}\big[ | F(\mathcal{X}_T^{(i,j),mN}) - F(X_T^{(i,j)}) |^2 \big] \le \frac{C}{mN}.
\]
Consequently, we have

\[
\mathbb{E}\Big[ \Big( \mathbb{E}[F(X_T)] - \frac{1}{M} \sum_{j=1}^{M} \sum_{m=1}^{k} \alpha_m \frac{1}{mN} \sum_{i=1}^{mN} F\big(\mathcal{X}_T^{(i,j),mN}\big) \Big)^2 \Big]
\le C \Big( N^{-2k} + \frac{1}{M} \sum_{m=1}^{k} \alpha_m^2 \frac{1}{mN} \Big).
\]
Since there are $M$ ensembles corresponding to the estimator and each ensemble has $k$ sub-particle systems with $mN$ particles each, $m \in \{1, \ldots, k\}$, the total number of interactions is $\mathcal{C} = M \sum_{m=1}^{k} (mN)^2$. When we take $N = \epsilon^{-1/k}$ and $M = \epsilon^{-2+1/k}$, the mean-square error is of the order $O(\epsilon^2)$ (since $\sum_{m=1}^{k} \alpha_m^2 m^{-1}$ is a constant). The corresponding number of interactions $\mathcal{C}$ is of the order $O(\epsilon^{-2-1/k})$. The message here is that as the smoothness increases, fewer interactions among particles are needed when approximating the law of the McKean-Vlasov SDE (1.1). We would like to stress again that the dimension of the system does not deteriorate the rate of convergence, in contrast to the results presented in the literature [8, 15, 30]. It is instructive to compare the above computation with the usual mean-square analysis of a single particle system:

\[
\mathbb{E}\Big[ \Big( \mathbb{E}[F(X_T)] - \frac{1}{N} \sum_{i=1}^{N} F\big(\mathcal{X}_T^{i,N}\big) \Big)^2 \Big]
= \Big( \mathbb{E}[F(X_T)] - \mathbb{E}[F(\mathcal{X}_T^{1,N})] \Big)^2
+ \mathbb{E}\Big[ \Big( \mathbb{E}[F(\mathcal{X}_T^{1,N})] - \frac{1}{N} \sum_{i=1}^{N} F\big(\mathcal{X}_T^{i,N}\big) \Big)^2 \Big].
\]
As above, invoking strong propagation of chaos, one can show that the second term is of order $O(N^{-1})$. That means that there would be no gain in going beyond what we can obtain from the strong propagation of chaos analysis to control the first term. Taking $N = \epsilon^{-2}$ results in a mean-square error of the order $O(\epsilon^2)$ and a number of interactions $\mathcal{C} = N^2 = \epsilon^{-4}$. This clearly demonstrates that working with ensembles of particles leads to an improvement in the quantitative properties of propagation of chaos, which is interesting on its own but can also be exploited when simulating particle systems on a computer.
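The cancellation that powers the extrapolated estimator can be checked directly: the weights $\alpha_m = (-1)^{k-m} m^k / (m!\,(k-m)!)$ satisfy $\sum_{m=1}^k \alpha_m = 1$ and $\sum_{m=1}^k \alpha_m m^{-j} = 0$ for $j = 1, \ldots, k-1$, which is exactly what removes each $C_j/(mN)^j$ term from the expansion (for $k = 2$ this reduces to the combination $2\,\mathbb{E}F(\mathcal{X}_T^{i,2N}) - \mathbb{E}F(\mathcal{X}_T^{i,N})$ used above). A quick numerical verification (our own sketch):

```python
from math import factorial

def romberg_weights(k):
    """alpha_m = (-1)^(k-m) * m^k / (m! * (k-m)!),  1 <= m <= k."""
    return [(-1) ** (k - m) * m ** k / (factorial(m) * factorial(k - m))
            for m in range(1, k + 1)]

for k in (2, 3, 4):
    alpha = romberg_weights(k)
    print(k, sum(alpha))                        # the weights sum to 1
    for j in range(1, k):
        # this vanishing moment removes the C_j / (mN)^j term from the expansion
        print('  sum alpha_m / m^%d =' % j,
              sum(a / m ** j for m, a in enumerate(alpha, start=1)))
```

For $k = 2$ the weights are $(-1, 2)$, recovering the familiar two-level Richardson combination.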

Notations.

• The 2-Wasserstein metric is defined by
\[
W_2(\mu, \nu) := \inf_{\pi \in \Pi(\mu, \nu)} \Big( \int_{\mathbb{R}^d \times \mathbb{R}^d} |x - y|^2\, \pi(dx, dy) \Big)^{\frac{1}{2}},
\]
where $\Pi(\mu, \nu)$ denotes the set of couplings between $\mu$ and $\nu$, i.e. all measures $\pi$ on $\mathcal{B}(\mathbb{R}^d \times \mathbb{R}^d)$ such that $\pi(B, \mathbb{R}^d) = \mu(B)$ and $\pi(\mathbb{R}^d, B) = \nu(B)$ for every $B \in \mathcal{B}(\mathbb{R}^d)$.

• Uniqueness in law of (1.1) implies that for any random variables $\xi, \xi'$ such that $\mathcal{L}(\xi) = \mathcal{L}(\xi') = \mu$, we have $\mathcal{L}(X_t^{s,\xi}) = \mathcal{L}(X_t^{s,\xi'})$. Therefore, we adopt the notation $X_t^{s,\mu} := X_t^{s,\xi}$ if only the law of the process is concerned.

• When the total number $N$ of particles is clear from the context, we will often simply write $\mathcal{X}^i$ for $\mathcal{X}^{i,N}$.

• For any $x, y \in \mathbb{R}^d$, we denote their inner product by $xy$. Since different measure derivatives lie in different tensor product spaces, we use $|\cdot|$ to denote the Euclidean norm on any tensor product space of the form $\mathbb{R}^{d_1} \otimes \ldots \otimes \mathbb{R}^{d_\ell}$.

• The law of any random variable $Z$ is denoted by $\mathcal{L}(Z)$. For any function $f : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$, its lift $\tilde{f} : L^2(\Omega, \mathcal{F}, \mathbb{P}; \mathbb{R}^d) \to \mathbb{R}$ is defined by $\tilde{f}(\xi) = f(\mathcal{L}(\xi))$.

• Also, $(\hat{\Omega}, \hat{\mathcal{F}}, \hat{\mathbb{P}})$ stands for a copy of $(\Omega, \mathcal{F}, \mathbb{P})$, which is useful to represent the Lions derivative of a function of a probability measure. Any random variable $\eta$ defined on $(\Omega, \mathcal{F}, \mathbb{P})$ is represented by $\hat{\eta}$ as a pointwise copy on $(\hat{\Omega}, \hat{\mathcal{F}}, \hat{\mathbb{P}})$. In the section on regularity, we shall introduce a sequence of copies of $(\Omega, \mathcal{F}, \mathbb{P})$, denoted by $\{(\Omega^{(n)}, \mathcal{F}^{(n)}, \mathbb{P}^{(n)})\}_n$. As before, any random variable $\eta$ defined on $(\Omega, \mathcal{F}, \mathbb{P})$ is represented by $\eta^{(n)}$ as a pointwise copy on $(\Omega^{(n)}, \mathcal{F}^{(n)}, \mathbb{P}^{(n)})$.

• For $T > 0$, we define the following subsets of $[0, T]^m$, $m \ge 1$:
\[
\Delta_T^m := \big\{ (t_1, \ldots, t_m) \in [0, T]^m : 0 < t_1 < \ldots < t_m < T \big\}.
\]

We often denote $(t_1, \ldots, t_m) = \boldsymbol{t}$ and $(t_1, \ldots, t_{m-1}) = \tau$. We shall also sometimes use the convention $\Delta_T^0 := \emptyset$ for simplicity of notation.

• With the above definition, we denote

\[
\int_{\Delta_T^m} f(\boldsymbol{t})\, d\boldsymbol{t} := \int_0^T \int_0^{t_m} \cdots \int_0^{t_2} f(t_1, \ldots, t_m)\, dt_1 \ldots dt_m.
\]

• For any function $f : \Delta_T^m \to \mathbb{R}$, we always denote by $\partial_t f(t_1, \ldots, t_m)$ the partial derivative of $f$ in the variable $t_m$ at $(t_1, \ldots, t_m)$, whenever it exists.

• $L^2$ denotes the set of square integrable random variables, and $\mathcal{H}^2$ the set of square-integrable progressively measurable processes $\theta$ such that $\big( \int_0^T |\theta_s|^2\, ds \big)^{\frac{1}{2}} \in L^2$.

2 Method of weak error expansion

2.1 Calculus on the space of measures

Our method of proof is based on the expansion of an auxiliary map satisfying a PDE on the Wasserstein space. One of the most important tools of the paper is thus the theory of differentiation in measure. We make intensive use of the so-called "L-derivatives" and "linear functional derivatives", which we recall now, following essentially [6]. We also introduce a higher-order version of these derivatives, as this is needed in the proofs of our expansion.

2.1.1 Linear functional derivatives

A continuous function $\frac{\delta U}{\delta m} : \mathcal{P}_2(\mathbb{R}^d) \times \mathbb{R}^d \to \mathbb{R}$ is said to be the linear functional derivative of $U : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ if:

• for any bounded set $\mathcal{K} \subset \mathcal{P}_2(\mathbb{R}^d)$, $y \mapsto \frac{\delta U}{\delta m}(m, y)$ has at most quadratic growth in $y$, uniformly in $m \in \mathcal{K}$;

• for any $m, m' \in \mathcal{P}_2(\mathbb{R}^d)$,
\[
U(m') - U(m) = \int_0^1 \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\big( (1-s)m + s m', y \big)\, (m' - m)(dy)\, ds. \tag{2.1}
\]

For the purpose of our work, we need to introduce derivatives of any order $p \ge 1$.

Definition 2.1. For any $p \ge 1$, the $p$-th order linear functional derivative of the function $U$ is a continuous function $\frac{\delta^p U}{\delta m^p} : \mathcal{P}_2(\mathbb{R}^d) \times (\mathbb{R}^d)^{p-1} \times \mathbb{R}^d \to \mathbb{R}$ satisfying:

• for any bounded set $\mathcal{K} \subset \mathcal{P}_2(\mathbb{R}^d)$, $(\boldsymbol{y}, y') \mapsto \frac{\delta^p U}{\delta m^p}(m, \boldsymbol{y}, y')$ has at most quadratic growth in $(\boldsymbol{y}, y')$, uniformly in $m \in \mathcal{K}$;

• for any $m, m' \in \mathcal{P}_2(\mathbb{R}^d)$,
\[
\frac{\delta^{p-1} U}{\delta m^{p-1}}(m', \boldsymbol{y}) - \frac{\delta^{p-1} U}{\delta m^{p-1}}(m, \boldsymbol{y}) = \int_0^1 \int_{\mathbb{R}^d} \frac{\delta^p U}{\delta m^p}\big( (1-s)m + s m', \boldsymbol{y}, y' \big)\, (m' - m)(dy')\, ds,
\]
provided that the $(p-1)$-th order derivative is well defined.

The above derivatives are defined up to an additive constant via (2.1). They are normalised by

\[
\frac{\delta^p U}{\delta m^p}(m, \cdot, 0) = 0. \tag{2.2}
\]
We make the following easy observation, which will be useful in later parts.

Lemma 2.2. If U admits linear functional derivatives up to order q, then the following expansion holds:

\[
\begin{aligned}
U(m') - U(m) &= \sum_{p=1}^{q-1} \frac{1}{p!} \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}(m, \boldsymbol{y})\, \{m' - m\}^{\otimes p}(d\boldsymbol{y}) \\
&\quad + \frac{1}{(q-1)!} \int_0^1 \int_{\mathbb{R}^{qd}} (1-t)^{q-1}\, \frac{\delta^q U}{\delta m^q}\big( (1-t)m + t m', \boldsymbol{y} \big)\, \{m' - m\}^{\otimes q}(d\boldsymbol{y})\, dt.
\end{aligned}
\]

Proof. We define
\[
[0, 1] \ni t \mapsto f(t) = U\big( (1-t)m + t m' \big) = U\big( m + t(m' - m) \big) \in \mathbb{R} \tag{2.3}
\]
and apply the Taylor-Lagrange formula to $f$ up to order $q$, namely

\[
f(1) - f(0) = \sum_{p=1}^{q-1} \frac{1}{p!} f^{(p)}(0) + \frac{1}{(q-1)!} \int_0^1 (1-t)^{q-1} f^{(q)}(t)\, dt.
\]
It remains to show, by induction, that
\[
f^{(p)}(t) = \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}\big( m + t(m' - m), \boldsymbol{y} \big)\, \{m' - m\}^{\otimes p}(d\boldsymbol{y}), \qquad \forall\, p \in \{0, \ldots, q\}. \tag{2.4}
\]
Since (2.4) holds trivially for $p = 0$, we suppose that (2.4) holds for some $p \in \{0, \ldots, q-1\}$. Then
\[
\begin{aligned}
&\frac{f^{(p)}(t+h) - f^{(p)}(t)}{h} \\
&= \frac{1}{h} \Big( \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}\big( m + (t+h)(m' - m), \boldsymbol{y} \big)\, \{m' - m\}^{\otimes p}(d\boldsymbol{y}) - \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}\big( m + t(m' - m), \boldsymbol{y} \big)\, \{m' - m\}^{\otimes p}(d\boldsymbol{y}) \Big) \\
&= \int_{\mathbb{R}^{pd}} \int_0^1 \int_{\mathbb{R}^d} \frac{\delta^{p+1} U}{\delta m^{p+1}}\big( m + (t+sh)(m' - m), \boldsymbol{y}, y' \big)\, (m' - m)(dy')\, ds\, \{m' - m\}^{\otimes p}(d\boldsymbol{y}).
\end{aligned}
\]
Taking $h \to 0$ gives (2.4) for $p + 1$. This completes the proof.
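The expansion of Lemma 2.2 can be verified numerically on discrete measures. The sketch below (our own illustration) uses the toy functional $U(m) = (\int x\, m(dx))^2$, whose first two linear functional derivatives, $2y \int x\, m(dx)$ and $2 y_1 y_2$, can be computed by hand from (2.1); with $q = 2$, the remainder term is exact:

```python
import numpy as np

rng = np.random.default_rng(2)
atoms = rng.standard_normal(5)          # common atom locations in R
w, wp = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5))  # weights of m, m'
dm = wp - w                             # signed measure m' - m on the atoms

def m1(weights):
    return float(np.dot(weights, atoms))   # first moment: int x dm

# U(m) = (int x m(dx))^2; its derivatives (closed forms we derived for this toy
# example) are dU/dm(m, y) = 2 y int x m(dx) and d2U/dm2(m, y1, y2) = 2 y1 y2.
lhs = m1(wp) ** 2 - m1(w) ** 2

first = sum(2.0 * y * m1(w) * c for y, c in zip(atoms, dm))   # order-1 term
# remainder: int_0^1 (1-t) [int 2 y1 y2 (m'-m)(dy1)(m'-m)(dy2)] dt = (int y dm)^2
remainder = float(np.dot(atoms, dm)) ** 2

print(lhs, first + remainder)           # the two sides coincide
```

Here the second derivative does not depend on the measure, so the $t$-integral of the remainder collapses to the constant $1/2$, and the expansion holds with equality.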

2.1.2 L-derivatives

The above notion of linear functional derivative is not enough for our work. We shall need to consider further derivatives in the non-measure argument of the derivative function. If the function $y \mapsto \frac{\delta U}{\delta m}(m, y)$ is of class $\mathcal{C}^1$, we consider the intrinsic derivative of $U$, which we denote
\[
\partial_\mu U(m, y) := \partial_y \frac{\delta U}{\delta m}(m, y).
\]
The notation is borrowed from the literature on mean-field games and corresponds to the notion of "L-derivative" introduced by P.-L. Lions in his lectures at the Collège de France [27]. Traditionally, it is introduced by considering a lift of the function $U$ onto an $L^2$ space and using the Fréchet differentiability of this lift on that space. The equivalence between the two notions is proved in [8, Tome I, Chapter 5], where the link with the notion of derivatives used in optimal transport theory is also made. In this context, higher order derivatives are introduced by iterating the operator $\partial_\mu$ and the derivation in the non-measure arguments. Namely, at order 2, one considers

\[
\mathcal{P}_2(\mathbb{R}^d) \times \mathbb{R}^d \ni (m, y) \mapsto \partial_y \partial_\mu U(m, y) \quad \text{and} \quad \mathcal{P}_2(\mathbb{R}^d) \times \mathbb{R}^d \times \mathbb{R}^d \ni (m, y, y') \mapsto \partial^2_\mu U(m, y, y').
\]
This leads in particular to the notion of a fully $\mathcal{C}^2$ function, which will be of great interest for us (see [9]).

Definition 2.3 (Fully $\mathcal{C}^2$). A function $U : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ is fully $\mathcal{C}^2$ if the following mappings
\[
\begin{aligned}
\mathcal{P}_2(\mathbb{R}^d) \times \mathbb{R}^d &\ni (m, y) \mapsto \partial_\mu U(m, y) \\
\mathcal{P}_2(\mathbb{R}^d) \times \mathbb{R}^d &\ni (m, y) \mapsto \partial_y \partial_\mu U(m, y) \\
\mathcal{P}_2(\mathbb{R}^d) \times \mathbb{R}^d \times \mathbb{R}^d &\ni (m, y, y') \mapsto \partial^2_\mu U(m, y, y')
\end{aligned}
\]
are well-defined and continuous for the product topologies.

Let us observe for later use that if the function $U$ is fully $\mathcal{C}^2$ and moreover satisfies, for any compact subset $\mathcal{K} \subset \mathcal{P}_2(\mathbb{R}^d)$,
\[
\sup_{m \in \mathcal{K}} \int_{\mathbb{R}^d} \big( |\partial_\mu U(m, y)|^2 + |\partial_y \partial_\mu U(m, y)|^2 \big)\, dm(y) < +\infty,
\]
then it follows from Theorem 3.3 in [9] that $U$ can be expanded along the flow of marginals of an Itô process. Namely, let $\mu_t = \mathcal{L}(X_t)$ where
\[
dX_t = b_t\, dt + \sigma_t\, dW_t, \qquad X_0 \in L^2,
\]
with $b \in \mathcal{H}^2$ and $a_t := \sigma_t \sigma_t' \in \mathcal{H}^2$; then
\[
U(\mu_t) = U(\mu_0) + \int_0^t \mathbb{E}\Big[ \partial_\mu U(\mu_s, X_s)\, b_s + \frac{1}{2} \mathrm{Tr}\big\{ \partial_y \partial_\mu U(\mu_s, X_s)\, a_s \big\} \Big]\, ds. \tag{2.5}
\]
In order to prove our expansion, we need to iterate the application of the previous chain rule, and to proceed we need to use higher order derivatives of the measure functional. Inspired by the work [10], for any $k \in \mathbb{N}$, we formally define the higher order derivatives in measure through the following iteration (provided that they actually exist): for any $k \ge 2$, $(i_1, \ldots, i_k) \in \{1, \ldots, d\}^k$ and $x_1, \ldots, x_k \in \mathbb{R}^d$, the function $\partial^k_\mu f : \mathcal{P}_2(\mathbb{R}^d) \times (\mathbb{R}^d)^k \to (\mathbb{R}^d)^{\otimes k}$ is defined by

\[
\big( \partial^k_\mu f(\mu, x_1, \ldots, x_k) \big)_{(i_1, \ldots, i_k)} := \Big( \partial_\mu \big[ \big( \partial^{k-1}_\mu f(\cdot, x_1, \ldots, x_{k-1}) \big)_{(i_1, \ldots, i_{k-1})} \big] (\mu, x_k) \Big)_{i_k}, \tag{2.6}
\]
and its corresponding mixed derivatives in space $\partial^{\ell_k}_{v_k} \ldots \partial^{\ell_1}_{v_1} \partial^k_\mu f : \mathcal{P}_2(\mathbb{R}^d) \times (\mathbb{R}^d)^k \to (\mathbb{R}^d)^{\otimes (k + \ell_1 + \ldots + \ell_k)}$ are defined by

\[
\big( \partial^{\ell_k}_{v_k} \ldots \partial^{\ell_1}_{v_1} \partial^k_\mu f(\mu, x_1, \ldots, x_k) \big)_{(i_1, \ldots, i_k)} := \frac{\partial^{\ell_k}}{\partial x_k^{\ell_k}} \ldots \frac{\partial^{\ell_1}}{\partial x_1^{\ell_1}} \big( \partial^k_\mu f(\mu, x_1, \ldots, x_k) \big)_{(i_1, \ldots, i_k)}, \qquad \ell_1, \ldots, \ell_k \in \mathbb{N} \cup \{0\}. \tag{2.7}
\]
Since this notation for higher order derivatives in measure is quite cumbersome, we introduce the following multi-index notation for brevity. This notation was first proposed in [10].

Definition 2.4 (Multi-index notation). Let $n, \ell$ be non-negative integers. Also, let $\beta = (\beta_1, \ldots, \beta_n)$ be an $n$-dimensional vector of non-negative integers. Then we call any ordered tuple of the form $(n, \ell, \beta)$ or $(n, \beta)$ a multi-index. For a function $f : \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$, the derivative $D^{(n, \ell, \beta)} f(x, \mu, v_1, \ldots, v_n)$ is defined as
\[
D^{(n, \ell, \beta)} f(x, \mu, v_1, \ldots, v_n) := \partial^{\beta_n}_{v_n} \ldots \partial^{\beta_1}_{v_1} \partial^\ell_x \partial^n_\mu f(x, \mu, v_1, \ldots, v_n),
\]
if this derivative is well-defined. For any function $\Phi : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$, we define
\[
D^{(n, \beta)} \Phi(\mu, v_1, \ldots, v_n) := \partial^{\beta_n}_{v_n} \ldots \partial^{\beta_1}_{v_1} \partial^n_\mu \Phi(\mu, v_1, \ldots, v_n),
\]
if this derivative is well-defined. Finally, we also define the order⁴ $|(n, \ell, \beta)|$ (resp. $|(n, \beta)|$) by
\[
|(n, \ell, \beta)| := n + \beta_1 + \ldots + \beta_n + \ell, \qquad |(n, \beta)| := n + \beta_1 + \ldots + \beta_n. \tag{2.8}
\]
As in the first order case, we can establish the following relationship with linear functional derivatives (see e.g. [6] for the correspondence up to order 2):
\[
\partial^n_\mu U(\cdot) = \partial_{y_n} \frac{\delta}{\delta m} \ldots \partial_{y_1} \frac{\delta}{\delta m} U(\cdot) = \partial_{y_n} \ldots \partial_{y_1} \frac{\delta^n U}{\delta m^n}(\cdot), \tag{2.9}
\]
provided one of the two derivatives is well-defined. Next, we deduce the following lemma, which will be useful later on.

Lemma 2.5. Let $p \ge 1$ and assume that $\partial^p_\mu U \in L^\infty$. Then
\[
\Big| \frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, y_p) \Big| \le C_p \big( |y_1|^p + \cdots + |y_p|^p \big),
\]
for some constant $C_p > 0$.

Proof. We sketch the proof, by induction, in dimension one for ease of notation. Let $p \ge 1$. First, we compute that
\[
\frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, y_p) = \frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, y_{p-1}, 0) + \int_0^1 \partial_{t_p} \Big[ \frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, t_p y_p) \Big]\, dt_p.
\]
Let $\partial_{x_p}$ denote the derivative with respect to the $p$-th component of the spatial variables. From the normalisation convention (2.2), we simply obtain that
\[
\frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, y_p) = y_p \int_0^1 \partial_{x_p} \frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, t_p y_p)\, dt_p.
\]
Let $k < p - 1$ and suppose that
\[
\frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, y_p) = y_{p-k} \cdots y_p \int_{[0,1]^{k+1}} \partial_{x_{p-k}} \cdots \partial_{x_p} \frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, y_{p-k-1}, t_{p-k} y_{p-k}, \ldots, t_p y_p)\, dt_{p-k} \ldots dt_p. \tag{2.10}
\]
Then, observing that
\[
\partial_{x_{p-k}} \cdots \partial_{x_p} \frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, y_{p-k-2}, 0, t_{p-k} y_{p-k}, \ldots, t_p y_p) = 0,
\]
we recover
\[
\frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, y_p) = y_{p-k-1} \cdots y_p \int_{[0,1]^{k+2}} \partial_{x_{p-k-1}} \cdots \partial_{x_p} \frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, y_{p-k-2}, t_{p-k-1} y_{p-k-1}, \ldots, t_p y_p)\, dt_{p-k-1} \ldots dt_p.
\]
Setting $k = p - 1$ in (2.10), we then obtain
\[
\frac{\delta^p U}{\delta m^p}(m, y_1, \ldots, y_p) = y_1 \cdots y_p \int_{[0,1]^p} \partial^p_\mu U(m, t_1 y_1, \ldots, t_p y_p)\, dt_1 \ldots dt_p.
\]
The proof is concluded by invoking the boundedness assumption on $\partial^p_\mu U$ along with Young's inequality.

⁴We do not consider 'zeroth' order derivatives in our definition, i.e. at least one of $n, \beta_1, \ldots, \beta_n$ and $\ell$ must be non-zero for every multi-index $(n, \ell, (\beta_1, \ldots, \beta_n))$.
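As a concrete worked example of the objects above (our own illustration, in $d = 1$), take $\Phi(\mu) = (\int x\, \mu(dx))^2$. Its derivatives, computed directly from (2.1) with the normalisation (2.2), are consistent with the identity (2.9) and with the bound of Lemma 2.5, since $\partial^2_\mu \Phi$ is bounded and $|\frac{\delta^2 \Phi}{\delta m^2}(\mu, y_1, y_2)| = 2|y_1 y_2| \le |y_1|^2 + |y_2|^2$:

```latex
\[
\frac{\delta \Phi}{\delta m}(\mu, y) = 2\, y \int x\, \mu(dx), \qquad
\partial_\mu \Phi(\mu, v_1) = \partial_y \frac{\delta \Phi}{\delta m}(\mu, y)\Big|_{y = v_1} = 2 \int x\, \mu(dx),
\]
\[
\frac{\delta^2 \Phi}{\delta m^2}(\mu, y_1, y_2) = 2\, y_1 y_2, \qquad
\partial^2_\mu \Phi(\mu, v_1, v_2) = 2,
\]
so that, in the multi-index notation of Definition 2.4,
\[
D^{(1,(0))}\Phi = 2 \int x\, \mu(dx), \qquad
D^{(1,(1))}\Phi = \partial_{v_1} \partial_\mu \Phi = 0, \qquad
D^{(2,(0,0))}\Phi = \partial^2_\mu \Phi = 2 .
\]
```

Note that $\partial_\mu \Phi$ does not depend on its non-measure argument, so all its spatial derivatives vanish while the second measure derivative is the constant $2$.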

2.2 Weak error expansion along dynamics

To state our expansion for the dynamic case, we will need some notion of smoothness given in the following definition.

Definition 2.6. Let $m$ be a positive integer. A function $U : \Delta_T^m \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ is of class $\mathcal{D}(\Delta_T^m)$ if the following conditions hold:

i) $U$ is jointly continuous on $\Delta_T^m \times \mathcal{P}_2(\mathbb{R}^d)$.

ii) For all $\boldsymbol{t} \in \Delta_T^m$, $U(\boldsymbol{t}, \cdot)$ is fully $\mathcal{C}^2$.

iii) Let $L$ be a positive constant. For all $\boldsymbol{t} \in \Delta_T^m$ and $\xi \in L^2(\mathbb{R}^d)$,
\[
\mathbb{E}\big[ |\partial_\mu U(\boldsymbol{t}, \mathcal{L}(\xi))(\xi)|^2 + |\partial_v \partial_\mu U(\boldsymbol{t}, \mathcal{L}(\xi))(\xi)|^2 + |\partial^2_\mu U(\boldsymbol{t}, \mathcal{L}(\xi))(\xi, \xi)|^2 \big] \le L.
\]

iv)
• $m = 1$: $s \mapsto U(s, \mu)$ is continuously differentiable on $(0, T)$.
• $m > 1$: for all $(\tau_1, \ldots, \tau_{m-1}) \in \Delta_T^{m-1}$ and all $\mu \in \mathcal{P}_2(\mathbb{R}^d)$, the function
\[
(0, \tau_{m-1}) \ni s \mapsto U\big( (\tau_1, \ldots, \tau_{m-1}, s), \mu \big) \in \mathbb{R}
\]
is continuously differentiable on $(0, \tau_{m-1})$.

v) The functions
\[
\begin{aligned}
\Delta_T^m \times L^2(\mathbb{R}^d) &\ni (\boldsymbol{t}, \xi) \mapsto \partial_t U(\boldsymbol{t}, \mathcal{L}(\xi)) \in \mathbb{R} \\
\Delta_T^m \times L^2(\mathbb{R}^d) &\ni (\boldsymbol{t}, \xi) \mapsto \partial_\mu U(\boldsymbol{t}, \mathcal{L}(\xi))(\xi) \in L^2(\mathbb{R}^d) \\
\Delta_T^m \times L^2(\mathbb{R}^d) &\ni (\boldsymbol{t}, \xi) \mapsto \partial_v \partial_\mu U(\boldsymbol{t}, \mathcal{L}(\xi))(\xi) \in L^2(\mathbb{R}^{d \times d}) \\
\Delta_T^m \times L^2(\mathbb{R}^d) &\ni (\boldsymbol{t}, \xi) \mapsto \partial^2_\mu U(\boldsymbol{t}, \mathcal{L}(\xi))(\xi, \xi) \in L^2(\mathbb{R}^{d \times d})
\end{aligned}
\]
are continuous.

We define recursively the functions $\Phi^{(m)}, \mathcal{V}^{(m)}$, $1 \le m \le k$, that are used to prove the expansion.

Definition 2.7. i) For $m = 1$, we set $\Phi^{(0)} = \Phi$ and define $\mathcal{V}^{(1)} : [0, T] \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ by
\[
\mathcal{V}^{(1)}(t, \mu) := \mathcal{V}(t, \mu) = \Phi^{(0)}\big( \mathcal{L}(X_T^{t,\mu}) \big) = \Phi\big( \mathcal{L}(X_T^{t,\mu}) \big).
\]
Assuming that $\mathcal{V}^{(1)}$ belongs to the class $\mathcal{D}(\Delta_T^1)$, we set $\Phi^{(1)} : (0, T) \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ as

\[
\Phi^{(1)}(t, \mu) := \int_{\mathbb{R}^d} \mathrm{Tr}\Big[ \partial^2_\mu \mathcal{V}^{(1)}(t, \mu)(x, x)\, a(x, \mu) \Big]\, \mu(dx). \tag{2.11}
\]
ii) For $1 < m \le k$, we set analogously

\[
\Phi^{(m)}(t, \mu) := \int_{\mathbb{R}^d} \mathrm{Tr}\Big[ \partial^2_\mu \mathcal{V}^{(m)}(t, \mu)(x, x)\, a(x, \mu) \Big]\, \mu(dx).
\]
A key point in our work is to show that the previous definition is licit under some assumptions on the coefficient functions $b$, $\sigma$ and $\Phi$ (Theorem 2.16 and Theorem 2.17). Before we proceed, we state the following assumptions:

(Lip) $b$ and $\sigma$ are Lipschitz continuous with respect to the Euclidean norm and the $W_2$ norm.

(UB) There exists $L > 0$ such that $|\sigma(x, \mu)| \le L$ for every $x \in \mathbb{R}^d$ and $\mu \in \mathcal{P}_2(\mathbb{R}^d)$.

It will become apparent from the proofs that, when working only with (Lip), higher order integrability conditions would need to be stated in Definition 2.6. We refrain from this extension and assume (UB) to improve the readability of the paper, but encourage the curious reader to carry out this simple extension. We begin with the following technical lemma.

Lemma 2.8. Assume (Lip) and (UB). Let $m$ be a positive integer and $f : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ be a continuous function. Consider $U : \Delta_T^m \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ given by $U((\tau, t), \mu) := f\big( \mathcal{L}(X_{\tau_{m-1}}^{t, \mu}) \big)$ and set $U_\tau(t, \mu) = U((\tau, t), \mu)$, where $\tau \in \Delta_T^{m-1}$. If $U$ is of class $\mathcal{D}(\Delta_T^m)$, then the following statements hold:

(i) $U_\tau$ satisfies on $(0, \tau_{m-1}) \times \mathcal{P}_2(\mathbb{R}^d)$ the following PDE:
\[
\partial_s U_\tau(s, \mu) + \int_{\mathbb{R}^d} \Big( \partial_\mu U_\tau(s, \mu)(y)\, b(y, \mu) + \frac{1}{2} \mathrm{Tr}\big[ \partial_v \partial_\mu U_\tau(s, \mu)(y)\, a(y, \mu) \big] \Big)\, \mu(dy) = 0, \tag{2.12}
\]
with terminal condition $U_\tau(\tau_{m-1}, \cdot) = f(\cdot)$, where $a = (a_{i,k})_{1 \le i,k \le d} : \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}^d \otimes \mathbb{R}^d$ denotes the diffusion operator

\[
a_{i,k}(x, \mu) := \sum_{j=1}^{d} \sigma_{i,j}(x, \mu)\, \sigma_{k,j}(x, \mu), \qquad \forall\, x \in \mathbb{R}^d, \; \forall\, \mu \in \mathcal{P}_2(\mathbb{R}^d).
\]

(ii) $U_\tau$ can be expanded along the flow of random measures associated to the particle system (1.2) as follows: for all $0 \le t \le \tau_{m-1}$,
\[
U_\tau(t, \mathbb{X}_t^N) = U_\tau(0, \mathbb{X}_0^N) + \frac{1}{2N} \int_0^t \int_{\mathbb{R}^d} \mathrm{Tr}\Big[ a(v, \mathbb{X}_s^N)\, \partial^2_\mu U_\tau(s, \mathbb{X}_s^N)(v, v) \Big]\, \mathbb{X}_s^N(dv)\, ds + M_t^N, \tag{2.13}
\]
where $M^N$ is a square integrable martingale with $M_0^N = 0$.

Proof. (i) By the flow property, we observe that the function $[0,\tau_{m-1}) \ni s \mapsto U_\tau(s, \mathcal{L}(X^{0,\xi}_s)) \in \mathbb{R}$ is constant. Indeed, $U_\tau(s, \mathcal{L}(X^{0,\xi}_s)) = f\big(\tau, \mathcal{L}(X^{s,X^{0,\xi}_s}_{\tau_{m-1}})\big) = f\big(\tau, \mathcal{L}(X^{0,\xi}_{\tau_{m-1}})\big)$. Applying the chain rule in both the time and the measure arguments between $t$ and $t+h$, we get
\[
0 = \int_t^{t+h} \mathbb{E}\Big[ \partial_t U_\tau(s, \mathcal{L}(X^{0,\xi}_s)) + \partial_\mu U_\tau(s, \mathcal{L}(X^{0,\xi}_s))(X^{0,\xi}_s)\, b\big(X^{0,\xi}_s, \mathcal{L}(X^{0,\xi}_s)\big) + \frac12\, \mathrm{Tr}\big[ a(X^{0,\xi}_s, \mathcal{L}(X^{0,\xi}_s))\, \partial_\upsilon \partial_\mu U_\tau(s, \mathcal{L}(X^{0,\xi}_s))(X^{0,\xi}_s) \big] \Big]\, \mathrm{d}s.
\]
Dividing by $h$ and letting $h \to 0$ allows us to recover the first claim.

(ii) To recover the expansion, we use the known strategy of considering finite-dimensional projections of $U$. Namely, for a fixed number of particles $N$, we define
\[
u(t, x_1, \dots, x_N) := U_\tau\Big( t, \frac1N \sum_{i=1}^N \delta_{x_i} \Big).
\]
From Definition 2.6(ii), (iv) and (v), we have that $u$ is $C^{1,2}([0,\tau) \times (\mathbb{R}^d)^N)$ (see Proposition 3.1 in [9]). Recalling the link between the derivatives of $u$ and $U$ (again see Proposition 3.1 in [9]), we can apply the classical Itô formula to $t \mapsto U_\tau(t, \mathbb{X}^N_t) = u(t, \mathcal{X}^1_t, \dots, \mathcal{X}^N_t)$ to get
\[
U_\tau(t, \mathbb{X}^N_t) = U_\tau(0, \mathbb{X}^N_0) + \underbrace{\frac1N \sum_{i=1}^N \int_0^t \partial_\mu U_\tau(s, \mathbb{X}^N_s)(\mathcal{X}^i_s)\, \sigma(\mathcal{X}^i_s, \mathbb{X}^N_s)\, \mathrm{d}W^i_s}_{=:\, M^N_t} \tag{2.14}
\]
\[
+ \int_0^t \Big[ \partial_t U_\tau(s, \mathbb{X}^N_s) + \int_{\mathbb{R}^d} \Big( \partial_\mu U_\tau(s, \mathbb{X}^N_s)(\upsilon)\, b(\upsilon, \mathbb{X}^N_s) + \frac12\, \mathrm{Tr}\big[ a(\upsilon, \mathbb{X}^N_s)\, \partial_\upsilon \partial_\mu U_\tau(s, \mathbb{X}^N_s)(\upsilon) \big] \Big)\, \mathbb{X}^N_s(\mathrm{d}\upsilon) \Big]\, \mathrm{d}s \tag{2.15}
\]
\[
+ \frac{1}{2N} \int_0^t \int_{\mathbb{R}^d} \mathrm{Tr}\big[ a(\upsilon, \mathbb{X}^N_s)\, \partial^2_\mu U_\tau(s, \mathbb{X}^N_s)(\upsilon,\upsilon) \big]\, \mathbb{X}^N_s(\mathrm{d}\upsilon)\, \mathrm{d}s. \tag{2.16}
\]
We first note that the term in (2.15) is precisely the left-hand side of (2.12) evaluated at $(s, \mathbb{X}^N_s)$ and is thus equal to zero. We now study the local martingale term $M^N$ in (2.14). We simply compute
\[
\mathbb{E}\big[ |M^N_t|^2 \big] = \frac{1}{N^2} \sum_{i=1}^N \mathbb{E}\Big[ \int_0^t \big| \partial_\mu U_\tau(s, \mathbb{X}^N_s)(\mathcal{X}^i_s)\, \sigma(\mathcal{X}^i_s, \mathbb{X}^N_s) \big|^2\, \mathrm{d}s \Big]
\le \frac{C}{N}\, \mathbb{E}\Big[ \int_0^t \int_{\mathbb{R}^d} \big| \partial_\mu U_\tau(s, \mathbb{X}^N_s)(\upsilon) \big|^2\, \mathbb{X}^N_s(\mathrm{d}\upsilon)\, \mathrm{d}s \Big],
\]
where we have used (UB). Using Definition 2.6(iii), we have
\[
\int_{\mathbb{R}^d} \big| \partial_\mu U_\tau(s, \mathbb{X}^N_s)(\upsilon) \big|^2\, \mathbb{X}^N_s(\mathrm{d}\upsilon) \le C,
\]
which shows that $M^N$ is a square integrable martingale.
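The particle system $\mathbb{X}^N$ underlying this expansion is straightforward to simulate. The sketch below is our own illustration, not a construction from the paper: the drift $b(x,\mu) = -(x - \int y\,\mu(\mathrm{d}y))$ (mean-reverting interaction) and a constant diffusion coefficient are assumed here so that (Lip) and (UB) hold, and the scheme is a minimal Euler-Maruyama discretisation together with the evaluation of a smooth functional of the empirical measure.

```python
import numpy as np

def simulate_particles(N, n_steps, T=1.0, x0=0.0, seed=0):
    """Euler-Maruyama scheme for an interacting particle system
    dX^i_t = b(X^i_t, mu^N_t) dt + sigma dW^i_t, where mu^N_t is the
    empirical measure of the N particles.  Illustrative choices only:
    b(x, mu) = -(x - mean(mu)) and sigma = 0.5 constant (so (UB) holds)."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    X = np.full(N, x0, dtype=float)
    for _ in range(n_steps):
        m = X.mean()              # interaction through the empirical measure
        drift = -(X - m)          # b(x, mu^N)
        X = X + drift * h + 0.5 * np.sqrt(h) * rng.standard_normal(N)
    return X

X = simulate_particles(N=1000, n_steps=100)
# A smooth nonlinear functional of the empirical measure,
# Phi(mu^N) = (int x mu^N(dx))^2, evaluated at the terminal time:
Phi_N = X.mean() ** 2
```

Here `Phi_N` plays the role of $\Phi(\mathbb{X}^N_T)$ for the (assumed) functional $\Phi(\mu) = \big(\int x\,\mu(\mathrm{d}x)\big)^2$.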

Theorem 2.9 (Weak error expansion: dynamic case). Assume (Lip) and (UB). Suppose that Definition 2.7 is well-posed for $m \in \{1,\dots,k\}$. Then the weak error in the particle approximation can be expressed as
\[
\mathbb{E}\big[ \Phi(\mathbb{X}^N_T) \big] - \Phi\big(\mathcal{L}(X^{0,\xi}_T)\big) = \sum_{j=0}^{k-1} \frac{1}{N^j}\big( C_j + \mathcal{I}^N_{j+1} \big) + O\Big( \frac{1}{N^k} \Big), \tag{2.17}
\]
where $C_0 := 0$ and

\[
C_m := \int_{\Delta^m_T} \Phi^{(m)}\big( t, \mathcal{L}(X^{0,\xi}_{t_m}) \big)\, \mathrm{d}t, \qquad m \in \{1,\dots,k-1\},
\]
and
\[
\mathcal{I}^N_1 := \mathbb{E}\big[ \mathcal{V}(0, \mathbb{X}^N_0) - \mathcal{V}(0, \mathcal{L}(\xi)) \big], \qquad
\mathcal{I}^N_m := \int_{\Delta^{m-1}_T} \mathbb{E}\big[ \mathcal{V}^{(m)}((\tau,0), \mathbb{X}^N_0) - \mathcal{V}^{(m)}((\tau,0), \mathcal{L}(\xi)) \big]\, \mathrm{d}\tau, \quad m \in \{2,\dots,k\}.
\]

Proof. Part 1: We first check that the constants $(C_m, \mathcal{I}^N_{m+1})_{0\le m\le k-1}$ are well defined. For $1 \le m \le k-1$, we first show that the function
\[
\Delta^m_T \times \mathcal{P}_2(\mathbb{R}^d) \ni (t,\mu) \mapsto \Phi^{(m)}(t,\mu) \in \mathbb{R}
\]
is continuous. Indeed, let $(t_n,\mu_n)_n$ be a sequence converging to $(t,\mu)$ in the product topology. Then there exists a sequence $(\xi_n)$ of random variables with $\mathcal{L}(\xi_n) = \mu_n$ converging in $L^2$ to $\xi$ with law $\mu$. By the continuity of $\sigma$, Definition 2.7 and Definition 2.6(v),
\[
\Gamma_n := \mathrm{Tr}\Big[ \partial^2_\mu \mathcal{V}^{(m)}(t_n, \mathcal{L}(\xi_n))(\xi_n,\xi_n)\, a(\xi_n, \mathcal{L}(\xi_n)) \Big]
\longrightarrow
\mathrm{Tr}\Big[ \partial^2_\mu \mathcal{V}^{(m)}(t, \mathcal{L}(\xi))(\xi,\xi)\, a(\xi, \mathcal{L}(\xi)) \Big] =: \Gamma
\]
in probability. Next, since $\sigma$ is bounded,
\[
\mathbb{E}\Big[ \Big| \mathrm{Tr}\big[ \partial^2_\mu \mathcal{V}^{(m)}(t_n, \mathcal{L}(\xi_n))(\xi_n,\xi_n)\, a(\xi_n, \mathcal{L}(\xi_n)) \big] \Big|^2 \Big]
\le C\, \mathbb{E}\Big[ \big| \partial^2_\mu \mathcal{V}^{(m)}(t_n, \mathcal{L}(\xi_n))(\xi_n,\xi_n) \big|^2 \Big] \le C,
\]
where the last inequality follows from the fact that $\mathcal{V}^{(m)}$ is of class $\mathcal{D}(\Delta^m_T)$, by Definition 2.6(iii). By the de La Vallée Poussin theorem, the previous computation shows that $(\Gamma_n)$ is uniformly integrable, and thus $\Phi^{(m)}(t_n,\mu_n) = \mathbb{E}[\Gamma_n] \to \mathbb{E}[\Gamma] = \Phi^{(m)}(t,\mu)$. Observing that $\Delta^m_T \ni t \mapsto (t, \mathcal{L}(X^{0,\xi}_{t_m})) \in \Delta^m_T \times \mathcal{P}_2(\mathbb{R}^d)$ is continuous, we conclude that $\Delta^m_T \ni t \mapsto \Phi^{(m)}(t, \mathcal{L}(X^{0,\xi}_{t_m})) \in \mathbb{R}$ is also continuous (hence measurable), and therefore $C_m$ is well defined. Hence, by the definition of $\mathcal{V}^{(m)}$, for each $\mu \in \mathcal{P}_2(\mathbb{R}^d)$, the function
\[
\Delta^{m-1}_T \ni \tau \mapsto \mathcal{V}^{(m)}((\tau,0),\mu) \in \mathbb{R}
\]
is continuous. Also, by the previous argument along with Definition 2.6(iii), we see that $\Phi^{(m)}$ is uniformly bounded; therefore, the function $(\tau,\mu) \mapsto \mathcal{V}^{(m)}((\tau,0),\mu) \in \mathbb{R}$ is also uniformly bounded. By the dominated convergence theorem, the function
\[
\Delta^{m-1}_T \ni \tau \mapsto \mathbb{E}\big[ \mathcal{V}^{(m)}((\tau,0), \mathbb{X}^N_0) \big] \in \mathbb{R}
\]
is continuous. This shows that $\mathcal{I}^N_m$ is well defined.

Part 2: We now proceed with the proof of the expansion, which is done by induction on $m$. Base step: We decompose the weak error as

\[
\mathbb{E}\big[ \Phi(\mathbb{X}^N_T) \big] - \Phi\big(\mathcal{L}(X^{0,\xi}_T)\big)
= \mathbb{E}\big[ \mathcal{V}(T, \mathbb{X}^N_T) - \mathcal{V}(0, \mathbb{X}^N_0) \big] + \mathbb{E}\big[ \mathcal{V}(0, \mathbb{X}^N_0) - \mathcal{V}(0, \mathcal{L}(\xi)) \big]. \tag{2.18}
\]
Applying Lemma 2.8(ii) to the first term on the right-hand side and taking expectations on both sides, we obtain
\[
\mathbb{E}\big[ \Phi(\mathbb{X}^N_T) \big] - \Phi\big(\mathcal{L}(X^{0,\xi}_T)\big)
= \mathbb{E}\big[ \mathcal{V}(0, \mathbb{X}^N_0) - \mathcal{V}(0, \mathcal{L}(\xi)) \big]
+ \frac{1}{2N}\, \mathbb{E}\Big[ \int_0^T \int_{\mathbb{R}^d} \mathrm{Tr}\big[ a(\upsilon, \mathbb{X}^N_s)\, \partial^2_\mu \mathcal{V}(s, \mathbb{X}^N_s)(\upsilon,\upsilon) \big]\, \mathbb{X}^N_s(\mathrm{d}\upsilon)\, \mathrm{d}s \Big].
\]
Recalling the definition of $\Phi^{(1)}$ in (2.11), we get
\[
\mathbb{E}\big[ \Phi(\mathbb{X}^N_T) \big] - \Phi\big(\mathcal{L}(X^{0,\xi}_T)\big)
= \mathbb{E}\big[ \mathcal{V}(0, \mathbb{X}^N_0) - \mathcal{V}(0, \mathcal{L}(\xi)) \big] + \frac1N \int_0^T \mathbb{E}\big[ \Phi^{(1)}(t_1, \mathbb{X}^N_{t_1}) \big]\, \mathrm{d}t_1.
\]
From Part 1, we know that $\Phi^{(1)}$ is uniformly bounded, and thus $\big| \int_0^T \mathbb{E}[\Phi^{(1)}(t_1, \mathbb{X}^N_{t_1})]\, \mathrm{d}t_1 \big| < C$, where $C > 0$ does not depend on $N$. This proves the base step.

Induction step: Assume that, for some $1 \le m \le k-1$,
\[
\mathbb{E}\big[ \Phi(\mathbb{X}^N_T) \big] - \Phi\big(\mathcal{L}(X^{0,\xi}_T)\big)
= \sum_{j=0}^{m-1} \frac{1}{N^j}\big( C_j + \mathcal{I}^N_{j+1} \big) + \frac{1}{N^m} \int_{\Delta^m_T} \mathbb{E}\big[ \Phi^{(m)}(t, \mathbb{X}^N_{t_m}) \big]\, \mathrm{d}t.
\]
Then, we observe that
\[
\mathbb{E}\big[ \Phi^{(m)}(t, \mathbb{X}^N_{t_m}) - \Phi^{(m)}(t, \mathcal{L}(X^{0,\xi}_{t_m})) \big]
= \mathbb{E}\big[ \mathcal{V}^{(m+1)}((t,t_m), \mathbb{X}^N_{t_m}) - \mathcal{V}^{(m+1)}((t,0), \mathbb{X}^N_0) + \mathcal{V}^{(m+1)}((t,0), \mathbb{X}^N_0) - \mathcal{V}^{(m+1)}((t,0), \mathcal{L}(\xi)) \big],
\]
which leads to
\[
\mathbb{E}\big[ \Phi(\mathbb{X}^N_T) \big] - \Phi\big(\mathcal{L}(X^{0,\xi}_T)\big)
= \sum_{j=0}^{m} \frac{1}{N^j}\, \mathcal{I}^N_{j+1} + \sum_{j=1}^{m} \frac{1}{N^j}\, C_j
+ \frac{1}{N^m} \int_{\Delta^m_T} \mathbb{E}\big[ \mathcal{V}^{(m+1)}((t,t_m), \mathbb{X}^N_{t_m}) - \mathcal{V}^{(m+1)}((t,0), \mathbb{X}^N_0) \big]\, \mathrm{d}t. \tag{2.19}
\]
Applying Lemma 2.8(ii) to $\mathcal{V}^{(m+1)}((t,\cdot),\cdot)$, we obtain
\[
\mathbb{E}\big[ \mathcal{V}^{(m+1)}((t,t_m), \mathbb{X}^N_{t_m}) - \mathcal{V}^{(m+1)}((t,0), \mathbb{X}^N_0) \big]
= \frac{1}{2N}\, \mathbb{E}\Big[ \int_0^{t_m} \int_{\mathbb{R}^d} \mathrm{Tr}\big[ \partial^2_\mu \mathcal{V}^{(m+1)}((t,t_{m+1}), \mathbb{X}^N_{t_{m+1}})(\upsilon,\upsilon)\, a(\upsilon, \mathbb{X}^N_{t_{m+1}}) \big]\, \mathbb{X}^N_{t_{m+1}}(\mathrm{d}\upsilon)\, \mathrm{d}t_{m+1} \Big].
\]
Inserting this back into (2.19), we get
\[
\mathbb{E}\big[ \Phi(\mathbb{X}^N_T) \big] - \Phi\big(\mathcal{L}(X^{0,\xi}_T)\big)
= \sum_{j=0}^{m} \frac{1}{N^j}\, \mathcal{I}^N_{j+1} + \sum_{j=1}^{m} \frac{1}{N^j}\, C_j + \frac{1}{N^{m+1}} \int_{\Delta^{m+1}_T} \mathbb{E}\big[ \Phi^{(m+1)}(t, \mathbb{X}^N_{t_{m+1}}) \big]\, \mathrm{d}t.
\]
The proof is concluded by observing that $\big| \int_{\Delta^{m+1}_T} \mathbb{E}[\Phi^{(m+1)}(t, \mathbb{X}^N_{t_{m+1}})]\, \mathrm{d}t \big| < C$, due to the uniform boundedness of $\Phi^{(m+1)}$ established in Part 1.
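The leading term $C_1/N$ in the expansion can be observed numerically on a toy model. The sketch below is our own illustration (the drift $b(x,\mu) = -(x - \int y\,\mu(\mathrm{d}y))$, constant diffusion $s$, and functional $\Phi(\mu) = (\int x\,\mu(\mathrm{d}x))^2$ are assumptions made here for tractability, not taken from the paper). For this model the particle mean under the Euler scheme is Gaussian with mean $x_0$ and variance $s^2 T/N$, while the mean of the limit equation stays at $x_0$, so the weak error equals $s^2 T/N$ exactly, i.e. $C_1 = s^2 T$ and no higher-order terms appear.

```python
import numpy as np

def weak_error_bias(N, runs=20000, n_steps=20, T=1.0, x0=1.0, s=1.0, seed=0):
    """Monte Carlo estimate of E[Phi(X^N_T)] - Phi(L(X_T)) for the toy model
    dX^i = -(X^i - mean of mu^N) dt + s dW^i,  Phi(mu) = (int x mu(dx))^2.
    One checks by hand that the particle mean is N(x0, s^2*T/N), so the
    exact weak error for this model is s^2*T/N."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    X = np.full((runs, N), x0)
    for _ in range(n_steps):
        m = X.mean(axis=1, keepdims=True)   # interaction via empirical measure
        X = X - (X - m) * h + s * np.sqrt(h) * rng.standard_normal((runs, N))
    phi_particles = (X.mean(axis=1) ** 2).mean()  # E[Phi(mu^N_T)]
    phi_limit = x0 ** 2                           # Phi(L(X_T)) for the limit
    return phi_particles - phi_limit

bias = weak_error_bias(N=50)  # theoretical value: s^2 * T / N = 0.02
```

Doubling $N$ halves the bias, in line with the first-order term of the expansion.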

2.3 Weak error expansion for the initial condition

Assuming enough smoothness of the functions $\mathcal{V}^{(m)}$, we can take care of the terms $\mathcal{I}^N_m$ appearing in the previous theorem, which are errors made at time 0. The following weak error analysis relies on the notion of linear functional derivatives. We first study the weak error generated between the evaluation of a function at a measure and at its empirical counterpart. We prove two results: one dealing with the low order expansion, and a second one, available at any order. The main assumption we work with relates to the couple $(U, m)$, where $U$ is a function with domain $\mathcal{P}_2(\mathbb{R}^d)$.

(p-LFD) The $p$th order linear functional derivative of $U$ exists, is continuous, and for any family $(\xi_i)_{1\le i\le p}$ of random variables identically distributed with law $m$, the following holds:
\[
\mathbb{E}\Big[ \sup_{\nu \in \mathcal{P}_2(\mathbb{R}^d)} \Big| \frac{\delta^p U}{\delta m^p}(\nu, \xi_1, \dots, \xi_p) \Big| \Big] \le L_{(U,m)},
\]
for some positive constant $L_{(U,m)}$.

We first make the following observation regarding assumption (p-LFD), which will be of later use.

Remark 2.10. (i) Lemma 2.5 states that
\[
\Big| \frac{\delta^p U}{\delta m^p}(m)(y_1, \dots, y_p) \Big| \le C\big( |y_1|^p + \dots + |y_p|^p \big), \tag{2.20}
\]
for every $m \in \mathcal{P}_2(\mathbb{R}^d)$, for every $y_1,\dots,y_p \in \mathbb{R}^d$, and for some $C > 0$. This means that for any $\mu \in \mathcal{P}_p(\mathbb{R}^d)$, the couple $(U,\mu)$ satisfies (p-LFD). This polynomial growth condition is motivated by our example of application, stated in Section 2.4, which relies on the smoothness of the coefficients.

(ii) The following simple example of a measure functional shows that the above condition is reasonable to consider: for any bounded smooth function $b : \mathbb{R} \to \mathbb{R}$, we set $\Psi(m) := b\big( \int x\, \mathrm{d}m(x) \big)$. The linear functional derivative of order $p$ can then be computed by induction, using the normalisation convention (2.2), to obtain
\[
\frac{\delta^p \Psi}{\delta m^p}(m, y_1, \dots, y_p) = y_1 \cdots y_p\, b^{(p)}\Big( \int x\, \mathrm{d}m(x) \Big),
\]
which easily relates to (2.20).
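The induction behind the formula in Remark 2.10(ii) can be made explicit at the first two orders. The following short computation is our own sanity check; it uses nothing beyond the definition of the linear functional derivative as the derivative of $t \mapsto \Psi(m + t(m'-m))$ at $t = 0$:

```latex
% First order: only the argument of b depends on m.
\frac{d}{dt}\,\Psi\bigl(m + t(m'-m)\bigr)\Big|_{t=0}
  = b'\!\Bigl(\int x\,dm(x)\Bigr)\int y_1\,(m'-m)(dy_1)
\quad\Longrightarrow\quad
\frac{\delta \Psi}{\delta m}(m,y_1) = y_1\, b'\!\Bigl(\int x\,dm(x)\Bigr).
% Second order: y_1 does not depend on m, so only b' is differentiated again:
\frac{\delta^2 \Psi}{\delta m^2}(m,y_1,y_2) = y_1\,y_2\, b''\!\Bigl(\int x\,dm(x)\Bigr).
% Iterating gives the general formula of Remark 2.10(ii), with
% |\delta^p\Psi/\delta m^p| \le \|b^{(p)}\|_\infty\,|y_1|\cdots|y_p|,
% dominated by C(|y_1|^p + \dots + |y_p|^p) via Young's inequality.
```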

Theorem 2.11. Let $(\xi_i)_{1\le i\le N}$ be i.i.d. random variables with law $\mu \in \mathcal{P}_2(\mathbb{R}^d)$. The following statements hold:

(i) Let (p-LFD) hold with $p \in \{1,2\}$ for $(U,\mu)$. Then
\[
\mathbb{E}\Big[ U\Big( \frac1N \sum_{i=1}^N \delta_{\xi_i} \Big) \Big] - U(\mu) = O\Big( \frac1N \Big).
\]
(ii) Let (p-LFD) hold with $p \in \{1,2,3,4\}$ for $(U,\mu)$. Suppose that $\mu \in \mathcal{P}_4(\mathbb{R}^d)$. Then
\[
\mathbb{E}\Big[ U\Big( \frac1N \sum_{i=1}^N \delta_{\xi_i} \Big) \Big] - U(\mu)
= \frac{1}{2N}\, \mathbb{E}\Big[ \int_{\mathbb{R}^d} \frac{\delta^2 U}{\delta m^2}(\mu)(\tilde\xi_1, y)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y) \Big] + O\Big( \frac{1}{N^2} \Big),
\]
where $\tilde\xi_1 \sim \mu$ and is independent of $(\xi_i)_{1\le i\le N}$.

Proof. Let $\mu_N = \frac1N \sum_{i=1}^N \delta_{\xi_i}$ and $m^N_t = \mu + t(\mu_N - \mu)$, $t \in [0,1]$. We also consider i.i.d. random variables $(\tilde\xi_i)_i$ with law $\mu$ that are also independent of $(\xi_i)_i$.

(i) By the definition of the linear functional derivative, we have
\[
\mathbb{E}[U(\mu_N)] - U(\mu) = \int_0^1 \mathbb{E}\Big[ \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}(m^N_t)(v)\, (\mu_N - \mu)(\mathrm{d}v) \Big]\, \mathrm{d}t
= \int_0^1 \Big( \frac1N \sum_{i=1}^N \mathbb{E}\Big[ \frac{\delta U}{\delta m}(m^N_t)(\xi_i) \Big] - \mathbb{E}\Big[ \frac{\delta U}{\delta m}(m^N_t)(\tilde\xi_1) \Big] \Big)\, \mathrm{d}t
= \int_0^1 \mathbb{E}\Big[ \frac{\delta U}{\delta m}(m^N_t)(\xi_1) - \frac{\delta U}{\delta m}(m^N_t)(\tilde\xi_1) \Big]\, \mathrm{d}t.
\]
We introduce the measures
\[
\tilde m^N_t := m^N_t + \frac tN (\delta_{\tilde\xi_1} - \delta_{\xi_1}) \quad \text{and} \quad m^N_{t,t_1} := (\tilde m^N_t - m^N_t)\, t_1 + m^N_t, \qquad t, t_1 \in [0,1],
\]
and notice that
\[
\mathbb{E}\Big[ \frac{\delta U}{\delta m}(\tilde m^N_t)(\tilde\xi_1) \Big] = \mathbb{E}\Big[ \frac{\delta U}{\delta m}(m^N_t)(\xi_1) \Big].
\]
Therefore,
\[
\mathbb{E}[U(\mu_N)] - U(\mu) = \int_0^1 \mathbb{E}\Big[ \frac{\delta U}{\delta m}(\tilde m^N_t)(\tilde\xi_1) - \frac{\delta U}{\delta m}(m^N_t)(\tilde\xi_1) \Big]\, \mathrm{d}t
= \int_0^1 \int_0^1 \mathbb{E}\Big[ \int_{\mathbb{R}^d} \frac{\delta^2 U}{\delta m^2}(m^N_{t,t_1})(\tilde\xi_1, y_1)\, (\tilde m^N_t - m^N_t)(\mathrm{d}y_1) \Big]\, \mathrm{d}t_1\, \mathrm{d}t
\]
\[
= \int_0^1 \int_0^1 \frac tN\, \mathbb{E}\Big[ \int_{\mathbb{R}^d} \frac{\delta^2 U}{\delta m^2}(m^N_{t,t_1})(\tilde\xi_1, y_1)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1) \Big]\, \mathrm{d}t_1\, \mathrm{d}t. \tag{2.21}
\]
To conclude part (i), we observe that
\[
\Big| \mathbb{E}\Big[ \int_{\mathbb{R}^d} \frac{\delta^2 U}{\delta m^2}(m^N_{t,t_1})(\tilde\xi_1, y_1)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1) \Big] \Big|
\le \mathbb{E}\Big[ \sup_{\nu \in \mathcal{P}_2(\mathbb{R}^d)} \Big( \Big| \frac{\delta^2 U}{\delta m^2}(\nu)(\tilde\xi_1, \tilde\xi_1) \Big| + \Big| \frac{\delta^2 U}{\delta m^2}(\nu)(\tilde\xi_1, \xi_1) \Big| \Big) \Big] \le 2 L_{(U,\mu)},
\]
by assumption (p-LFD) with $p = 2$.

(ii) We continue the expansion of (2.21). To avoid a further interpolation in measure between $m^N_{t,t_1}$ and $\mu$, we proceed via integration by parts. Let
\[
g(t) := \int_0^1 \int_{\mathbb{R}^d} \frac{\delta^2 U}{\delta m^2}(m^N_{t,t_1})(\tilde\xi_1, y_1)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)\, \mathrm{d}t_1, \qquad t \in [0,1],
\]
and note that $m^N_{t,t_1} = \frac{t t_1}{N}(\delta_{\tilde\xi_1} - \delta_{\xi_1}) + \mu + t(\mu_N - \mu)$. Then, by a similar method as in the derivation of (2.4),
\[
g'(t) = \int_0^1 \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1) \Big( \frac{t_1}{N}(\delta_{\tilde\xi_1} - \delta_{\xi_1}) + (\mu_N - \mu) \Big)(\mathrm{d}y_2)\, \mathrm{d}t_1.
\]
Therefore, by integration by parts,
\[
\int_0^1 \int_0^1 \frac tN\, \mathbb{E}\Big[ \int_{\mathbb{R}^d} \frac{\delta^2 U}{\delta m^2}(m^N_{t,t_1})(\tilde\xi_1, y_1)(\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1) \Big]\, \mathrm{d}t_1\, \mathrm{d}t
= \frac1N\, \mathbb{E}\Big[ \int_0^1 t\, g(t)\, \mathrm{d}t \Big] = \frac1N\, \mathbb{E}\Big[ \int_0^1 (1-t)\, g(1-t)\, \mathrm{d}t \Big]
\]
\[
= \frac1N\, \mathbb{E}\Big[ \frac12 g(0) + \int_0^1 \Big( t - \frac{t^2}{2} \Big)\, g'(1-t)\, \mathrm{d}t \Big]
= \frac1N\, \mathbb{E}\Big[ \frac12 g(0) + \int_0^1 \frac{1-t^2}{2}\, g'(t)\, \mathrm{d}t \Big]
\]
\[
= \frac{1}{2N}\, \mathbb{E}\Big[ \int_{\mathbb{R}^d} \frac{\delta^2 U}{\delta m^2}(\mu)(\tilde\xi_1, y_1)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1) \Big]
\]
\[
+ \frac{1}{2N^2} \int_0^1 \int_0^1 \mathbb{E}\Big[ (1-t^2)\, t_1 \int_{\mathbb{R}^d}\int_{\mathbb{R}^d} \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_2) \Big]\, \mathrm{d}t_1\, \mathrm{d}t
\]
\[
+ \frac{1}{2N} \int_0^1 \int_0^1 \mathbb{E}\Big[ (1-t^2) \int_{\mathbb{R}^d}\int_{\mathbb{R}^d} \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\mu_N - \mu)(\mathrm{d}y_2) \Big]\, \mathrm{d}t_1\, \mathrm{d}t, \tag{2.22}
\]
where we used that $m^N_{0,t_1} = \mu$ to compute $g(0)$.

For the final term in (2.22), by exchangeability, we rewrite
\[
\mathbb{E}\Big[ \int\!\!\int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\mu_N - \mu)(\mathrm{d}y_2) \Big]
\]
\[
= \frac1N\, \mathbb{E}\Big[ \int\!\!\int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\delta_{\xi_1} - \delta_{\tilde\xi_2})(\mathrm{d}y_2) \Big]
+ \frac1N \sum_{i=2}^N \mathbb{E}\Big[ \int\!\!\int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\delta_{\xi_i} - \delta_{\tilde\xi_2})(\mathrm{d}y_2) \Big]
\]
\[
= \frac1N\, \mathbb{E}\Big[ \int\!\!\int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\delta_{\xi_1} - \delta_{\tilde\xi_2})(\mathrm{d}y_2) \Big]
+ \frac{N-1}{N}\, \mathbb{E}\Big[ \int\!\!\int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\delta_{\xi_2} - \delta_{\tilde\xi_2})(\mathrm{d}y_2) \Big]. \tag{2.23}
\]
As before, we introduce the measures
\[
\tilde m^N_{t,t_1} := m^N_{t,t_1} + \frac tN (\delta_{\tilde\xi_2} - \delta_{\xi_2}) \quad \text{and} \quad m^N_{t,t_1,t_2} := (\tilde m^N_{t,t_1} - m^N_{t,t_1})\, t_2 + m^N_{t,t_1}, \qquad t, t_1, t_2 \in [0,1].
\]
Then
\[
\mathbb{E}\Big[ \int\!\!\int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\delta_{\xi_2} - \delta_{\tilde\xi_2})(\mathrm{d}y_2) \Big]
\]
\[
= \mathbb{E}\Big[ \int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, \xi_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1) \Big]
- \mathbb{E}\Big[ \int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, \tilde\xi_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1) \Big]
\]
\[
= \mathbb{E}\Big[ \int \frac{\delta^3 U}{\delta m^3}(\tilde m^N_{t,t_1})(\tilde\xi_1, y_1, \tilde\xi_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1) \Big]
- \mathbb{E}\Big[ \int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, \tilde\xi_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1) \Big]
\]
\[
= \frac tN\, \mathbb{E}\Big[ \int_0^1 \int\!\!\int \frac{\delta^4 U}{\delta m^4}(m^N_{t,t_1,t_2})(\tilde\xi_1, y_1, \tilde\xi_2, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\delta_{\tilde\xi_2} - \delta_{\xi_2})(\mathrm{d}y_2)\, \mathrm{d}t_2 \Big]. \tag{2.24}
\]
Combining (2.21), (2.22), (2.23) and (2.24) gives
\[
\mathbb{E}[U(\mu_N)] - U(\mu)
= \frac{1}{2N}\, \mathbb{E}\Big[ \int \frac{\delta^2 U}{\delta m^2}(\mu)(\tilde\xi_1, y_1)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1) \Big]
\]
\[
+ \frac{1}{2N^2} \int_0^1\!\!\int_0^1 \mathbb{E}\Big[ (1-t^2)\, t_1 \int\!\!\int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_2) \Big]\, \mathrm{d}t_1\, \mathrm{d}t
\]
\[
+ \frac{1}{2N^2} \int_0^1\!\!\int_0^1 \mathbb{E}\Big[ (1-t^2) \int\!\!\int \frac{\delta^3 U}{\delta m^3}(m^N_{t,t_1})(\tilde\xi_1, y_1, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\delta_{\xi_1} - \delta_{\tilde\xi_2})(\mathrm{d}y_2) \Big]\, \mathrm{d}t_1\, \mathrm{d}t
\]
\[
+ \frac{N-1}{2N^3} \int_0^1\!\!\int_0^1\!\!\int_0^1 \mathbb{E}\Big[ t(1-t^2) \int\!\!\int \frac{\delta^4 U}{\delta m^4}(m^N_{t,t_1,t_2})(\tilde\xi_1, y_1, \tilde\xi_2, y_2)\, (\delta_{\tilde\xi_1} - \delta_{\xi_1})(\mathrm{d}y_1)(\delta_{\tilde\xi_2} - \delta_{\xi_2})(\mathrm{d}y_2) \Big]\, \mathrm{d}t_2\, \mathrm{d}t_1\, \mathrm{d}t.
\]
Using the fact that $(U,\mu)$ satisfies assumption (p-LFD) with $p \in \{3,4\}$, all terms after the first are $O(1/N^2)$, and the statement of part (ii) is established.
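The leading $1/N$ bias identified in Theorem 2.11(ii) can be checked on the simplest nonlinear example. The sketch below is our own illustration (the functional $U(\mu) = (\int x\,\mu(\mathrm{d}x))^2$ and the law $\mu = \mathcal{N}(1,1)$ are assumptions made here): for this $U$, $\frac{\delta^2 U}{\delta m^2}(\mu)(y_1,y_2) = 2 y_1 y_2$, so the bias term of the theorem equals $\frac1N(\mathbb{E}[\tilde\xi_1^2] - \mathbb{E}[\tilde\xi_1]\mathbb{E}[\xi_1]) = \mathrm{Var}(\xi_1)/N$, which is in fact exact here.

```python
import numpy as np

def empirical_bias(N, runs=200000, seed=0):
    """Monte Carlo estimate of E[U(mu_N)] - U(mu) for U(mu) = (int x mu(dx))^2
    and mu = N(1, 1).  Since U(mu_N) = (sample mean)^2, the bias is exactly
    Var(xi_1)/N = 1/N, matching the (1/2N) E[...] term of Theorem 2.11(ii)."""
    rng = np.random.default_rng(seed)
    xi = 1.0 + rng.standard_normal((runs, N))   # i.i.d. samples from mu
    U_muN = (xi.mean(axis=1) ** 2).mean()       # E[U(mu_N)]
    U_mu = 1.0 ** 2                             # U(mu) = (E[xi])^2
    return U_muN - U_mu

bias = empirical_bias(N=20)  # exact bias: Var(xi_1)/N = 1/20 = 0.05
```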

In principle, we could continue the above expansion to higher orders. Instead, in the next theorem we present a simplified argument that yields a complete weak error expansion. The simplification comes at the cost of requiring one extra order of regularity in the assumption; nevertheless, we believe the argument is of independent interest.

Theorem 2.12 (Weak error expansion: static case). Let $q$ be a positive integer and $\mu \in \mathcal{P}_{2q-1}(\mathbb{R}^d)$. Suppose that assumption (p-LFD) holds for $U : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$, for each $p \in \{1, \dots, 2q-1\}$. Then, for i.i.d. random variables $\{\xi_i\}_{i\in\mathbb{N}}$ with law $\mu$,
\[
\mathbb{E}\Big[ U\Big( \frac1N \sum_{i=1}^N \delta_{\xi_i} \Big) \Big] - U(\mu) = \sum_{p=2}^{q-1} \frac{C_p}{N^{p-1}} + O\Big( \frac{1}{N^{q-1}} \Big),
\]
where
\[
C_p = \mathbb{E}\Big[ \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}(\mu)(\mathbf{y}) \bigotimes_{k=1}^{p} (\delta_{\xi_1} - \delta_{\hat\xi^k})(\mathrm{d}y_k) \Big],
\]
for some i.i.d. random variables $(\hat\xi^k)_{1\le k\le q}$ with law $\mu$ that are also independent of $(\xi_i)_{i\in\mathbb{N}}$.

Proof. Let $\mu_N = \frac1N \sum_{i=1}^N \delta_{\xi_i}$. By Lemma 2.2, we have
\[
\mathbb{E}[U(\mu_N)] - U(\mu) = \sum_{p=1}^{q-1} \frac{1}{p!}\, \mathbb{E}\Big[ \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}(\mu)(\mathbf{y})\, (\mu_N - \mu)^{\otimes p}(\mathrm{d}\mathbf{y}) \Big] + \frac{1}{(q-1)!} \int_0^1 (1-t)^{q-1}\, R(q,N,t)\, \mathrm{d}t, \tag{2.25}
\]
with
\[
R(q,N,t) := \mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(m^N_t)(\mathbf{y})\, (\mu_N - \mu)^{\otimes q}(\mathrm{d}\mathbf{y}) \Big],
\]
where $m^N_t := (1-t)\mu + t\mu_N$. Observe that, by assumption (p-LFD), all the terms in the expansion are well defined. We study them now. For $p = 1$, we have
\[
\mathbb{E}\Big[ \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}(\mu)(y)\, (\mu_N - \mu)(\mathrm{d}y) \Big] = 0.
\]
Now let $p \in \{2,\dots,q-1\}$ and observe that
\[
\mathbb{E}\Big[ \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}(\mu)(\mathbf{y})\, (\mu_N - \mu)^{\otimes p}(\mathrm{d}\mathbf{y}) \Big]
= \frac{1}{N^p} \sum_{1\le i_1,\dots,i_p\le N} \mathbb{E}\Big[ \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}(\mu)(\mathbf{y}) \bigotimes_{k=1}^p (\delta_{\xi_{i_k}} - \delta_{\hat\xi^k})(\mathrm{d}y_k) \Big].
\]
Suppose that at least one of the $i_k$ is different from all the other $i_j$, $j \ne k$. Without loss of generality, we assume that this is the case for $k = p$. We then observe that
\[
\mathbb{E}\Big[ \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}(\mu)(\mathbf{y}) \bigotimes_{k=1}^p (\delta_{\xi_{i_k}} - \delta_{\hat\xi^k})(\mathrm{d}y_k) \Big]
= \mathbb{E}\Big[ \int_{\mathbb{R}^{(p-1)d}} \frac{\delta^p U}{\delta m^p}(\mu)(y_1,\dots,y_{p-1},\xi_{i_p}) \bigotimes_{k=1}^{p-1} (\delta_{\xi_{i_k}} - \delta_{\hat\xi^k})(\mathrm{d}y_k) \Big]
\]
\[
- \mathbb{E}\Big[ \int_{\mathbb{R}^{(p-1)d}} \frac{\delta^p U}{\delta m^p}(\mu)(y_1,\dots,y_{p-1},\hat\xi^p) \bigotimes_{k=1}^{p-1} (\delta_{\xi_{i_k}} - \delta_{\hat\xi^k})(\mathrm{d}y_k) \Big] = 0,
\]
by conditioning on $\xi_{i_1},\dots,\xi_{i_{p-1}}, \hat\xi^1,\dots,\hat\xi^{p-1}$. Therefore, only the terms with $i_1 = \dots = i_p$ contribute, and
\[
\mathbb{E}\Big[ \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}(\mu)(\mathbf{y})\, (\mu_N - \mu)^{\otimes p}(\mathrm{d}\mathbf{y}) \Big]
= \frac{1}{N^{p-1}}\, \mathbb{E}\Big[ \int_{\mathbb{R}^{pd}} \frac{\delta^p U}{\delta m^p}(\mu)(\mathbf{y}) \bigotimes_{k=1}^p (\delta_{\xi_1} - \delta_{\hat\xi^k})(\mathrm{d}y_k) \Big].
\]
It remains to study the remainder term $R$ above. We rewrite

\[
R(q,N,t) = \frac{1}{N^q} \sum_{1\le i_1,\dots,i_q\le N} \mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(m^N_t)(\mathbf{y}) \bigotimes_{p=1}^q (\delta_{\xi_{i_p}} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \Big].
\]
Let $\mathcal{L}$ be a subset of $\omega = \{1,\dots,q\}$. We denote $\mathcal{L}^c := \{1,\dots,q\} \setminus \mathcal{L}$ and introduce
\[
\mathcal{I}_{\mathcal{L}} := \Big\{ \mathbf{i} = (i_1,\dots,i_q) \in \{1,\dots,N\}^q \ \Big| \ \forall \ell,\ell' \in \mathcal{L},\ i_\ell = i_{\ell'};\ \forall k,k' \in \mathcal{L}^c \text{ s.t. } k \ne k',\ i_k \ne i_{k'};\ \text{and } \forall (\ell,k) \in \mathcal{L} \times \mathcal{L}^c,\ i_\ell \ne i_k \Big\}.
\]
Then
\[
R(q,N,t) = \frac{1}{N^q} \sum_{j=1}^q \sum_{\mathcal{L},\,|\mathcal{L}|=j}\ \sum_{\mathbf{i} \in \mathcal{I}_{\mathcal{L}}} \mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(m^N_t)(\mathbf{y}) \bigotimes_{p=1}^q (\delta_{\xi_{i_p}} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \Big].
\]
For $j = q$, we simply observe that
\[
\frac{1}{N^q} \sum_{\mathbf{i} \in \mathcal{I}_{\omega}} \mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(m^N_t)(\mathbf{y}) \bigotimes_{p=1}^q (\delta_{\xi_{i_p}} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \Big] = O\Big( \frac{1}{N^{q-1}} \Big). \tag{2.26}
\]
For $1 \le j < q$, by exchangeability, it suffices to consider $\mathcal{L} = \{1,\dots,j\}$, for which
\[
\sum_{\mathbf{i} \in \mathcal{I}_{\mathcal{L}}} \mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(m^N_t)(\mathbf{y}) \bigotimes_{p=1}^q (\delta_{\xi_{i_p}} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \Big]
= N(N-1)\cdots(N-(q-j))\, \mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(m^N_t)(\mathbf{y}) \bigotimes_{p=1}^j (\delta_{\xi_1} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \bigotimes_{p=j+1}^q (\delta_{\xi_p} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \Big]. \tag{2.27}
\]

For later use, we denote
\[
\Delta(\mathrm{d}\mathbf{y}) := \bigotimes_{p=1}^j (\delta_{\xi_1} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \bigotimes_{p=j+1}^q (\delta_{\xi_p} - \delta_{\hat\xi^p})(\mathrm{d}y_p).
\]
We will now work iteratively from $j+1$ to $q$. Firstly, we introduce

\[
\tilde m^N_t := m^N_t + \frac tN (\delta_{\tilde\xi_{j+1}} - \delta_{\xi_{j+1}}) \quad \text{and} \quad m^N_{t,s_{j+1}} := \tilde m^N_t + s_{j+1}\big( m^N_t - \tilde m^N_t \big), \qquad t, s_{j+1} \in [0,1],
\]
where we define independent random variables $\{\tilde\xi_u\}_{j+1\le u\le q}$ that are also independent of $(\xi_i)_{1\le i\le q}$ and $(\hat\xi^i)_{1\le i\le q}$, but with the same law. We then compute
\[
\mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(m^N_t)(\mathbf{y})\, \Delta(\mathrm{d}\mathbf{y}) \Big] = \mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(\tilde m^N_t)(\mathbf{y})\, \Delta(\mathrm{d}\mathbf{y}) \Big] \tag{2.28}
\]
\[
+ \frac tN \int_0^1 \mathbb{E}\Big[ \int_{\mathbb{R}^{(q+1)d}} \frac{\delta^{q+1} U}{\delta m^{q+1}}(m^N_{t,s_{j+1}})(\mathbf{y}, y_{q+1})\, \Delta(\mathrm{d}\mathbf{y})\, (\delta_{\xi_{j+1}} - \delta_{\tilde\xi_{j+1}})(\mathrm{d}y_{q+1}) \Big]\, \mathrm{d}s_{j+1}. \tag{2.29}
\]
As before,
\[
\mathbb{E}\Big[ \int_{\mathbb{R}^{(q-1)d}} \frac{\delta^q U}{\delta m^q}(\tilde m^N_t)(y_1,\dots,y_j,\xi_{j+1},y_{j+2},\dots,y_q) \bigotimes_{p=1}^j (\delta_{\xi_1} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \bigotimes_{p=j+2}^q (\delta_{\xi_p} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \Big]
\]
\[
= \mathbb{E}\Big[ \int_{\mathbb{R}^{(q-1)d}} \frac{\delta^q U}{\delta m^q}(\tilde m^N_t)(y_1,\dots,y_j,\hat\xi^{j+1},y_{j+2},\dots,y_q) \bigotimes_{p=1}^j (\delta_{\xi_1} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \bigotimes_{p=j+2}^q (\delta_{\xi_p} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \Big],
\]
so the term on the right-hand side of (2.28) is equal to zero. Next, for $u \in \{j+1,\dots,q-1\}$, we define inductively
\[
\tilde m^N_{t,s_{j+1},\dots,s_u} := m^N_{t,s_{j+1},\dots,s_u} + \frac tN (\delta_{\tilde\xi_{u+1}} - \delta_{\xi_{u+1}}), \qquad
m^N_{t,s_{j+1},\dots,s_{u+1}} := \tilde m^N_{t,s_{j+1},\dots,s_u} + s_{u+1}\big( m^N_{t,s_{j+1},\dots,s_u} - \tilde m^N_{t,s_{j+1},\dots,s_u} \big).
\]
This procedure is then iterated from $j+2$ to $q$ on the remainder term in (2.29). We thus have
\[
\mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(m^N_t)(\mathbf{y})\, \Delta(\mathrm{d}\mathbf{y}) \Big]
= \Big( \frac tN \Big)^{q-j} \int_0^1 \cdots \int_0^1 \mathbb{E}\Big[ \int_{\mathbb{R}^{(2q-j)d}} \frac{\delta^{2q-j} U}{\delta m^{2q-j}}(m^N_{t,s_{j+1},\dots,s_q})(\mathbf{y}, y_{q+1},\dots,y_{2q-j})\, \Delta(\mathrm{d}\mathbf{y}) \bigotimes_{k=j+1}^q (\delta_{\xi_k} - \delta_{\tilde\xi_k})(\mathrm{d}y_{q-j+k}) \Big]\, \mathrm{d}s_{j+1} \cdots \mathrm{d}s_q. \tag{2.30}
\]
Next, by (2.20), we estimate the expectation in (2.30):

\[
\Big| \mathbb{E}\Big[ \int_{\mathbb{R}^{(2q-j)d}} \frac{\delta^{2q-j} U}{\delta m^{2q-j}}(m^N_{t,s_{j+1},\dots,s_q})(\mathbf{y}, y_{q+1},\dots,y_{2q-j})\, \Delta(\mathrm{d}\mathbf{y}) \bigotimes_{k=j+1}^q (\delta_{\xi_k} - \delta_{\tilde\xi_k})(\mathrm{d}y_{q-j+k}) \Big] \Big|
\]
\[
\le \sum_{y_1 \in \{\xi_1, \hat\xi^1\}} \cdots \sum_{y_j \in \{\xi_1, \hat\xi^j\}}\ \sum_{y_{j+1} \in \{\xi_{j+1}, \hat\xi^{j+1}\}} \cdots \sum_{y_q \in \{\xi_q, \hat\xi^q\}}\ \sum_{y_{q+1} \in \{\xi_{j+1}, \tilde\xi_{j+1}\}} \cdots \sum_{y_{2q-j} \in \{\xi_q, \tilde\xi_q\}} \mathbb{E}\Big[ \Big| \frac{\delta^{2q-j} U}{\delta m^{2q-j}}(m^N_{t,s_{j+1},\dots,s_q})(y_1,\dots,y_{2q-j}) \Big| \Big]
\le 2^{2q-j}\, L_{(U,\mu)}, \tag{2.31}
\]
where we used assumption (p-LFD). Combining (2.30) and (2.31) gives
\[
\mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(m^N_t)(\mathbf{y})\, \Delta(\mathrm{d}\mathbf{y}) \Big] = O\Big( \frac{1}{N^{q-j}} \Big).
\]
Finally, combining with (2.27) yields
\[
\frac{1}{N^q} \sum_{j=1}^q \sum_{\mathcal{L},\,|\mathcal{L}|=j}\ \sum_{\mathbf{i} \in \mathcal{I}_{\mathcal{L}}} \mathbb{E}\Big[ \int_{\mathbb{R}^{qd}} \frac{\delta^q U}{\delta m^q}(m^N_t)(\mathbf{y}) \bigotimes_{p=1}^q (\delta_{\xi_{i_p}} - \delta_{\hat\xi^p})(\mathrm{d}y_p) \Big] = O\Big( \frac{1}{N^{q-1}} \Big).
\]

2.4 Expansion in terms of regularity of the drift and diffusion functions

In this subsection, we explore a sufficient condition for the expansion of arbitrary order purely in terms of the regularity of the drift and diffusion functions. It turns out that proving the regularity conditions for higher order expansions for the class $\mathcal{D}$ is highly non-trivial, and therefore a stronger notion of regularity in differentiating measures, the class $\mathcal{M}_k$, is proposed.

Definition 2.13. A function $f : \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ belongs to the class $\mathcal{M}_k(\mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d))$ if the derivatives $D^{(n,\ell,\beta)} f(x,\mu,v_1,\dots,v_n)$ exist for every multi-index $(n,\ell,\beta)$ such that $|(n,\ell,\beta)| \le k$ and satisfy

(a)
\[
\big| D^{(n,\ell,\beta)} f(x,\mu,v_1,\dots,v_n) \big| \le C, \tag{2.32}
\]
(b)
\[
\big| D^{(n,\ell,\beta)} f(x,\mu,v_1,\dots,v_n) - D^{(n,\ell,\beta)} f(x',\mu',v_1',\dots,v_n') \big| \le C\Big( |x - x'| + \sum_{i=1}^n |v_i - v_i'| + \mathcal{W}_2(\mu,\mu') \Big), \tag{2.33}
\]
for any $x, x', v_1, v_1', \dots, v_n, v_n' \in \mathbb{R}^d$ and $\mu, \mu' \in \mathcal{P}_2(\mathbb{R}^d)$, for some constant $C > 0$.

By convention, a function $f$ defined only on $\mathcal{P}_2(\mathbb{R}^d)$ is extended naturally to $\mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d)$ by $(x,\mu) \mapsto f(\mu)$, for all $x \in \mathbb{R}^d$. For the time-dependent case (possibly with a multi-index in time), we extend the previous definition as follows.

Definition 2.14. A function $\mathcal{V} : \Delta^m_T \times \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ is said to be in $\mathcal{M}_k(\Delta^m_T \times \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d))$ (see footnote 5) if:

1. For $m = 1$: $s \mapsto \mathcal{V}(s,x,\mu)$ is continuously differentiable on $(0,T)$. For $m > 1$: for all $(\tau_1,\dots,\tau_{m-1}) \in \Delta^{m-1}_T$ and all $\mu \in \mathcal{P}_2(\mathbb{R}^d)$, the function
\[
(0,\tau_{m-1}) \ni s \mapsto \mathcal{V}\big( (\tau_1,\dots,\tau_{m-1},s), x, \mu \big) \in \mathbb{R}
\]
is continuously differentiable on $(0,\tau_{m-1})$.

2. $\mathcal{V}(t,\cdot) \in \mathcal{M}_k(\mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d))$, for each $t \in \Delta^m_T$, where the constant $C$ in (2.32) and (2.33) is uniform in $t$.

3. All derivatives in measure (including the zeroth order derivative) of $\mathcal{V}(\cdot,\cdot)$ up to the $k$th order are jointly continuous in time, measure and space.

When it is clear from the context, we will simply write $\mathcal{M}_k$ for the two definitions above. Note that the condition $\mathcal{M}_1(\mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d))$ automatically implies (Lip). The following is a generalisation of Theorem 7.2 in [4] from $\mathcal{M}_2$ to $\mathcal{M}_k$, for any $k \ge 2$.

Theorem 2.15. Suppose that $b$ and $\sigma$ are in $\mathcal{M}_k(\mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d))$, where $k \ge 2$. We consider the function $\mathcal{V} : [0,t] \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ defined by
\[
\mathcal{V}(s,\mu) = \Phi\big( \mathcal{L}(X^{s,\mu}_t) \big), \tag{2.34}
\]
for some function $\Phi : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ that is also in $\mathcal{M}_k(\mathcal{P}_2(\mathbb{R}^d))$. Then $\mathcal{V} \in \mathcal{M}_k((0,t) \times \mathcal{P}_2(\mathbb{R}^d))$ and $\mathcal{V}$ satisfies the PDE
\[
\begin{cases}
\displaystyle \partial_s \mathcal{V}(s,\mu) + \int_{\mathbb{R}^d} \Big[ \partial_\mu \mathcal{V}(s,\mu)(x)\, b(x,\mu) + \tfrac12\, \mathrm{Tr}\big[ \partial_v \partial_\mu \mathcal{V}(s,\mu)(x)\, a(x,\mu) \big] \Big]\, \mu(\mathrm{d}x) = 0, & s \in (0,t), \\[4pt]
\mathcal{V}(t,\mu) = \Phi(\mu).
\end{cases}
\]
The proof of Theorem 2.15 is postponed to the next section. We now state the key result of this part, which certifies that the expansion along the dynamics is licit.

Theorem 2.16. Assume (UB). Suppose that $b$ and $\sigma$ belong to the class $\mathcal{M}_{2k}(\mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d))$. Moreover, suppose that $\Phi : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ also belongs to the class $\mathcal{M}_{2k}(\mathcal{P}_2(\mathbb{R}^d))$. Then Definition 2.7 is well-posed for $m \in \{1,\dots,k\}$.

Footnote 5: This definition is modified accordingly to define $\mathcal{M}_k((0,t) \times \mathcal{P}_2(\mathbb{R}^d))$, where $t \in (0,T)$.

Proof. We proceed by induction on $m \in \{1,\dots,k\}$ and prove that, for each $m$, $\mathcal{V}^{(m)} \in \mathcal{M}_{2k-2m+2}(\Delta^m_T \times \mathcal{P}_2(\mathbb{R}^d)) \subseteq \mathcal{M}_2(\Delta^m_T \times \mathcal{P}_2(\mathbb{R}^d))$, and therefore $\mathcal{V}^{(m)} \in \mathcal{D}(\Delta^m_T)$, which establishes the claim. For simplicity of notation, we present the proof in dimension one. We commence by noting that $\Phi \in \mathcal{M}_{2k}$ and $b, \sigma \in \mathcal{M}_{2k}$; therefore, it follows from Theorem 2.15 that $\mathcal{V}^{(1)} \in \mathcal{M}_{2k}$.

Suppose that, for some $m \in \{1,\dots,k-1\}$, $\mathcal{V}^{(m)} \in \mathcal{M}_{2k-2m+2}$. We recall the definition of $\Phi^{(m)} : \Delta^m_T \times \mathcal{P}_2(\mathbb{R}) \to \mathbb{R}$,
\[
\Phi^{(m)}(t,\mu) = \int_{\mathbb{R}} \partial^2_\mu \mathcal{V}^{(m)}(t,\mu)(x,x)\, \big(\sigma(x,\mu)\big)^2\, \mu(\mathrm{d}x).
\]
Fix $t \in \Delta^m_T$. We shall first establish the smoothness of $\Phi^{(m)}(t,\cdot)$. Let $p : \mathbb{R} \times \mathcal{P}_2(\mathbb{R}) \to \mathbb{R}$ be the continuous function defined by
\[
p(x,\mu) := \partial^2_\mu \mathcal{V}^{(m)}(t,\mu)(x,x)\, \big(\sigma(x,\mu)\big)^2.
\]
Since $\mathcal{V}^{(m)} \in \mathcal{M}_{2k-2m+2}$, for each $x \in \mathbb{R}$, $p(x,\cdot)$ is also differentiable in measure, with derivative
\[
\partial_\mu p(x,\mu)(y) = \partial^3_\mu \mathcal{V}^{(m)}(t,\mu)(x,x,y)\, \big(\sigma(x,\mu)\big)^2 + 2\, \partial^2_\mu \mathcal{V}^{(m)}(t,\mu)(x,x)\, \sigma(x,\mu)\, \partial_\mu \sigma(x,\mu)(y). \tag{2.35}
\]
We observe that $\partial_\mu p(x,\mu)(y)$ and $\partial_x p(x,\mu)$ are both continuous and uniformly bounded in space and measure. Therefore, by Example 3 in Section 5.2.2 of [8], $\Phi^{(m)}(t,\cdot)$ is differentiable in measure, with derivative
\[
\partial_\mu \Phi^{(m)}(t,\mu)(y) = \partial_x p(y,\mu) + \int_{\mathbb{R}} \partial_\mu p(x,\mu)(y)\, \mu(\mathrm{d}x),
\]
where $\partial_x p(y,\mu)$ is given by
\[
\partial_x p(y,\mu) = \big( \partial_{v_1} \partial^2_\mu \mathcal{V}^{(m)}(t,\mu)(y,y) + \partial_{v_2} \partial^2_\mu \mathcal{V}^{(m)}(t,\mu)(y,y) \big)\, \big(\sigma(y,\mu)\big)^2 + 2\, \partial^2_\mu \mathcal{V}^{(m)}(t,\mu)(y,y)\, \sigma(y,\mu)\, \partial_y \sigma(y,\mu). \tag{2.36}
\]
Formulae (2.35) and (2.36) tell us that $\partial_\mu \Phi^{(m)}(t,\mu)(y)$ is uniformly bounded in measure and space. Furthermore, each of $\partial_x p(y,\mu)$ and $\partial_\mu p(x,\mu)(y)$ is a finite sum of products of uniformly bounded Lipschitz functions in measure and space, and is hence Lipschitz continuous as well. Finally, by the duality formula for the Kantorovich-Rubinstein distance (see Remark 6.5 in [37]), there exist constants $C_1, C_2, C_3 > 0$ such that, for every $\mu_1, \mu_2 \in \mathcal{P}_2(\mathbb{R})$ and $y_1, y_2 \in \mathbb{R}$,
\[
\big| \partial_\mu \Phi^{(m)}(t,\mu_1)(y_1) - \partial_\mu \Phi^{(m)}(t,\mu_2)(y_2) \big|
\le C_1 \Big( |y_1 - y_2| + \mathcal{W}_2(\mu_1,\mu_2) + \Big| \int_{\mathbb{R}} \partial_\mu p(x,\mu_1)(y_1)\, \mu_1(\mathrm{d}x) - \int_{\mathbb{R}} \partial_\mu p(x,\mu_2)(y_2)\, \mu_1(\mathrm{d}x) \Big|
+ \Big| \int_{\mathbb{R}} \partial_\mu p(x,\mu_2)(y_2)\, \mu_1(\mathrm{d}x) - \int_{\mathbb{R}} \partial_\mu p(x,\mu_2)(y_2)\, \mu_2(\mathrm{d}x) \Big| \Big)
\]
\[
\le C_2 \Big( |y_1 - y_2| + \mathcal{W}_2(\mu_1,\mu_2) + \| \partial_\mu p \|_{\mathrm{Lip}}\, \mathcal{W}_1(\mu_1,\mu_2) \Big)
\le C_3 \big( |y_1 - y_2| + \mathcal{W}_2(\mu_1,\mu_2) \big),
\]
where $\mathcal{W}_1$ denotes the 1-Wasserstein metric.

Subsequently, we can repeat the same procedure to prove the existence and regularity properties of higher order derivatives of $\Phi^{(m)}(t,\cdot)$. In particular, we can show that $\partial^2_\mu \Phi^{(m)}(t,\mu,v_1,v_2)$ and $\partial_{v_1}\partial_\mu \Phi^{(m)}(t,\mu,v_1)$ exist, by expressing them in terms of derivatives of $\mathcal{V}^{(m)}$ up to the fourth order and derivatives of $\sigma$ up to the second order, which also allows us to show that they are uniformly bounded and Lipschitz continuous. In general, for any multi-index $(n,\beta)$ such that $|(n,\beta)| \le 2k-2m$, we can show that $D^{(n,\beta)} \Phi^{(m)}(t,\mu,v_1,\dots,v_n)$ exists, by expressing it in terms of derivatives of $\mathcal{V}^{(m)}$ up to the $(2k-2m+2)$th order and derivatives of $\sigma$ up to the $(2k-2m)$th order, which again allows us to show that it is uniformly bounded and Lipschitz continuous. Thus, $\Phi^{(m)}(t,\cdot) \in \mathcal{M}_{2k-2m}$.

Next, we note that, since $\mathcal{V}^{(m)} \in \mathcal{M}_{2k-2m+2}(\Delta^m_T \times \mathcal{P}_2(\mathbb{R}))$, $\mathcal{V}^{(m)}$ is continuously differentiable in the last component of $t \in \Delta^m_T$, and so is $\Phi^{(m)}$. Moreover, as mentioned above, each derivative $D^{(n,\beta)} \Phi^{(m)}(t,\mu,v_1,\dots,v_n)$ up to the $(2k-2m)$th order can be expressed in terms of derivatives of $\mathcal{V}^{(m)}$ up to the $(2k-2m+2)$th order and derivatives of $\sigma$ up to the $(2k-2m)$th order, which implies that each derivative $D^{(n,\beta)} \Phi^{(m)}(t,\mu,v_1,\dots,v_n)$ is jointly continuous in time, measure and space, since $\mathcal{V}^{(m)} \in \mathcal{M}_{2k-2m+2}(\Delta^m_T \times \mathcal{P}_2(\mathbb{R}))$. Therefore, by Definition 2.14, $\Phi^{(m)} \in \mathcal{M}_{2k-2m}(\Delta^m_T \times \mathcal{P}_2(\mathbb{R}))$.

We now recall the definition of $\mathcal{V}^{(m+1)} : \Delta^{m+1}_T \times \mathcal{P}_2(\mathbb{R}) \to \mathbb{R}$, given by
\[
\mathcal{V}^{(m+1)}((\tau,t),\mu) = \Phi^{(m)}\big( \tau, \mathcal{L}(X^{t,\mu}_{\tau_m}) \big), \qquad \tau \in \Delta^m_T.
\]
For fixed $\tau \in \Delta^m_T$, it follows from Theorem 2.15 that $\mathcal{V}^{(m+1)}((\tau,\cdot),\cdot)$ is continuously differentiable in time and that $\mathcal{V}^{(m+1)}((\tau,t),\cdot) \in \mathcal{M}_{2k-2m}$, for each $t \in (0,\tau_m)$. Finally, all derivatives in measure of $\mathcal{V}^{(m+1)}$ up to the $(2k-2m)$th order are jointly continuous in time, measure and space, since $\Phi^{(m)} \in \mathcal{M}_{2k-2m}$. This implies that $\mathcal{V}^{(m+1)} \in \mathcal{M}_{2k-2m} = \mathcal{M}_{2k-2(m+1)+2}$, which concludes the proof by the principle of induction.

The following theorem is the main result of this paper and is a direct consequence of Theorem 2.9, Theorem 2.12, Theorem 2.16 and Remark 2.10(i).

Theorem 2.17 (Main result on regularity: full expansion). Assume (UB). Suppose that $b$ and $\sigma$ belong to the class $\mathcal{M}_{2k+1}(\mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d))$. Moreover, suppose that $\Phi : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ also belongs to the class $\mathcal{M}_{2k+1}(\mathcal{P}_2(\mathbb{R}^d))$. Finally, suppose that the initial condition satisfies $\mathbb{E}\big[ |\xi_1|^{2k+1} \big] < +\infty$. Then
\[
\mathbb{E}\big[ \Phi(\mathbb{X}^N_T) \big] - \Phi\big(\mathcal{L}(X^{0,\xi}_T)\big) = \sum_{j=1}^{k-1} \frac{C_j}{N^j} + O\Big( \frac{1}{N^k} \Big),
\]
where $C_1, \dots, C_{k-1}$ are constants that do not depend on $N$.

Proof. We commence by noting that $\Phi$, $b$ and $\sigma$ all belong to $\mathcal{M}_{2k+1}$; therefore, it follows from Theorem 2.15 that $\mathcal{V}^{(1)} \in \mathcal{M}_{2k+1}$. As in the proof of Theorem 2.16, we proceed by induction on $m \in \{1,\dots,k\}$ to establish that, for each $m \in \{1,\dots,k\}$, $\mathcal{V}^{(m)} \in \mathcal{M}_{2k-2m+3}$. By Theorem 2.16, Definition 2.7 is well-posed for $m \in \{1,\dots,k\}$. Therefore, by Theorem 2.9, we have
\[
\mathbb{E}\big[ \Phi(\mathbb{X}^N_T) \big] - \Phi\big(\mathcal{L}(X^{0,\xi}_T)\big) = \sum_{j=0}^{k-1} \frac{1}{N^j}\big( C_j + \mathcal{I}^N_{j+1} \big) + O\Big( \frac{1}{N^k} \Big), \tag{2.37}
\]
for some constants $C_0 = 0, C_1, \dots, C_{k-1}$, where
\[
\mathcal{I}^N_1 := \mathbb{E}\big[ \mathcal{V}(0, \mathbb{X}^N_0) - \mathcal{V}(0, \mathcal{L}(\xi)) \big], \qquad
\mathcal{I}^N_{j+1} := \int_{\Delta^j_T} \mathbb{E}\big[ \mathcal{V}^{(j+1)}((\tau,0), \mathbb{X}^N_0) - \mathcal{V}^{(j+1)}((\tau,0), \mathcal{L}(\xi)) \big]\, \mathrm{d}\tau, \quad j \in \{1,\dots,k-1\}.
\]
Recall that $\mathcal{V}^{(1)} \in \mathcal{M}_{2(k+1)-1}$. By Remark 2.10(i) and Theorem 2.12,
\[
\mathcal{I}^N_1 = \mathbb{E}\big[ \mathcal{V}(0, \mathbb{X}^N_0) - \mathcal{V}(0, \mathcal{L}(\xi)) \big] = \sum_{\ell=1}^{k-1} \frac{C^{(1)}_\ell}{N^\ell} + O\Big( \frac{1}{N^k} \Big), \tag{2.38}
\]
for some constants $C^{(1)}_1, \dots, C^{(1)}_{k-1}$. Similarly, for every $j \in \{1,\dots,k-1\}$, since $\mathcal{V}^{(j+1)} \in \mathcal{M}_{2(k-j+1)-1}$, it also follows by Remark 2.10(i) and Theorem 2.12 (see footnote 6) that
\[
\mathcal{I}^N_{j+1} = \sum_{\ell=1}^{k-j-1} \frac{C^{(j)}_\ell}{N^\ell} + O\Big( \frac{1}{N^{k-j}} \Big), \tag{2.39}
\]
for some constants $C^{(j)}_1, \dots, C^{(j)}_{k-j-1}$. The result follows by combining (2.37), (2.38) and (2.39).
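A practical consequence of the full expansion is that Richardson extrapolation across particle numbers cancels the leading $C_1/N$ term: $2\,\mathbb{E}[\Phi(\mathbb{X}^{2N}_T)] - \mathbb{E}[\Phi(\mathbb{X}^{N}_T)]$ has error $O(1/N^2)$. The sketch below checks this on a toy model of our own choosing (mean-reverting interaction drift, constant diffusion, $\Phi(\mu) = (\int x\,\mu(\mathrm{d}x))^2$; these are illustrative assumptions, not from the paper), for which the weak error is exactly $s^2 T/N$.

```python
import numpy as np

def estimate_phi(N, runs=40000, n_steps=20, T=1.0, x0=1.0, s=1.0, seed=0):
    """Monte Carlo estimate of E[Phi(mu^N_T)] for the toy particle system
    dX^i = -(X^i - mean of mu^N) dt + s dW^i, Phi(mu) = (int x mu(dx))^2.
    For this model E[Phi(mu^N_T)] = x0^2 + s^2*T/N exactly."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    X = np.full((runs, N), x0)
    for _ in range(n_steps):
        m = X.mean(axis=1, keepdims=True)
        X = X - (X - m) * h + s * np.sqrt(h) * rng.standard_normal((runs, N))
    return (X.mean(axis=1) ** 2).mean()

phi_25 = estimate_phi(25, seed=1)   # limit value 1.0 plus bias ~0.04
phi_50 = estimate_phi(50, seed=2)   # limit value 1.0 plus bias ~0.02
extrapolated = 2.0 * phi_50 - phi_25  # cancels the C_1/N term
```

Up to Monte Carlo noise, `extrapolated` matches the limit value $x_0^2 = 1$ far more closely than either single-$N$ estimate.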

3 Proof of Theorem 2.15

Let $\Phi : \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ be a Borel-measurable function. In this section, we study the smoothness of the function $\mathcal{V} : [0,t] \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}$ defined by
\[
\mathcal{V}(s,\mu) = \Phi\big( \mathcal{L}(X^{s,\mu}_t) \big).
\]
There are various methods of establishing the smoothness of functions of this form in the literature. One approach considers the PDE (2.12) and proves regularity properties of the solution to this PDE ([6]). The method of Malliavin calculus is adopted in [10]. That paper proves smoothness of $\mathcal{V}$ for $\Phi$ of the form
\[
\Phi(\mu) = \int_{\mathbb{R}^d} \zeta(y)\, \mu(\mathrm{d}y),
\]
where $\zeta : \mathbb{R}^d \to \mathbb{R}$ is infinitely differentiable with bounded partial derivatives. Article [11] considers the method of parametrix, representing $\mathcal{V}$ in terms of the transition density $p(s,\mu; t',y'; t,y)$ of $X^{s,x,\mu}_t$ (defined below in (3.2)). That method is applied to the case in which $b$ and $\sigma$ are of the form
\[
b(x,\mu) = \int_{\mathbb{R}^d} B(x,y)\, \mu(\mathrm{d}y), \qquad \sigma(x,\mu) = \int_{\mathbb{R}^d} \Sigma(x,y)\, \mu(\mathrm{d}y),
\]
for some functions $B : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ and $\Sigma : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d \otimes \mathbb{R}^d$. Nonetheless, it is not clear whether this method can be applied to $b$ and $\sigma$ of more general form. We follow here a different route.

Footnote 6: Note that for $U \in \mathcal{M}_{2(k-j+1)-1}(\Delta^{j+1}_T \times \mathcal{P}_2(\mathbb{R}^d))$, the constant $C$ in Lemma 2.5 is uniform in $t \in \Delta^{j+1}_T$. Therefore, the constant $C$ in the same inequality (2.20) in Theorem 2.12 is also uniform in $t \in \Delta^{j+1}_T$. The fact that the constants $C^{(j)}_1, \dots, C^{(j)}_{k-j-1}$ are well defined follows from a similar argument as in the first part of the proof of Theorem 2.9.

Framework of analysis. We adopt the 'variational' approach employed in [4]. The core idea is to prove the smoothness of $\mathcal{V}$ by viewing the lift of $\mathcal{V}$ as a composition of the map $\xi \mapsto X^{s,\xi}_t$ and the lift of $\Phi$. As [4] already proves smoothness of the derivatives in measure up to the second order, we generalise that result to an arbitrary order. The analysis of variational derivatives of solutions to classical SDEs is rather well understood in the literature ([17], [25]). As differentiation in the direction of a measure leads to rather complicated expressions, we restrict ourselves in this section to the following special case, which captures the key difficulty of this approach; the general case can be handled in an analogous way. We consider the forward system $\{X^{s,\xi}_t\}_{t\in[s,T]}$, $\{X^{s,x,\mu}_t\}_{t\in[s,T]}$, $\xi \sim \mu$, which takes the form
\[
X^{s,\xi}_t = \xi + \int_s^t \sigma\big( \mathcal{L}(X^{s,\xi}_r) \big)\, \mathrm{d}W_r, \qquad t \in [s,T], \tag{3.1}
\]
\[
X^{s,x,\mu}_t = x + \int_s^t \sigma\big( \mathcal{L}(X^{s,\mu}_r) \big)\, \mathrm{d}W_r, \qquad t \in [s,T],\ x \in \mathbb{R}, \tag{3.2}
\]
for some Borel-measurable function $\sigma : \mathcal{P}_2(\mathbb{R}) \to \mathbb{R}$ and a one-dimensional Brownian motion $W$. The process $\{X^{s,x,\mu}_t\}_{t\in[s,T]}$ is also called the decoupled process, as it no longer depends on its own law.

For any sub-$\sigma$-algebra $\mathcal{G}$, let $L^2(\mathcal{G})$ denote the set of all random variables in $L^2(\Omega, \mathcal{G}, \mathbb{P}; \mathbb{R}^d)$. Let $\{\mathcal{F}_t\}_{t\in[0,T]}$ (resp. $\{\mathcal{F}^{(n)}_t\}_{t\in[0,T]}$) denote the filtration generated by the Brownian motion $\{W_t\}_{t\in[0,T]}$ (resp. $\{W^{(n)}_t\}_{t\in[0,T]}$). Let $\xi$ be a random variable in $L^2(\mathcal{F}_s)$. For simplicity of notation, in the following calculations we shall denote the law $\mathcal{L}(\xi)$ by $[\xi]$.

First order derivative of $[\xi] \mapsto X^{s,x,[\xi]}_t$. We start our analysis with the smoothness of the map $[\xi] \mapsto X^{s,x,[\xi]}_t$. Suppose that the lift of $[\xi] \mapsto X^{s,x,[\xi]}_t$ with values in $L^2$,
\[
L^2(\mathcal{F}_s) \to L^2(\mathcal{F}_t); \qquad \xi \mapsto X^{s,x,[\xi]}_t,
\]

is Fréchet differentiable, with Fréchet derivative given by
\[
L^2(\mathcal{F}_s) \to L\big( L^2(\mathcal{F}_s), L^2(\mathcal{F}_t) \big); \qquad \xi \mapsto \Big( \eta \mapsto \hat{\mathbb{E}}\big[ U^{s,x,[\xi]}_t(\hat\xi)\, \hat\eta \big] \Big),
\]
for some real-valued process $\{U^{s,x,[\xi]}_t(y)\}_{t\in[s,T]}$ that is adapted to $\{\mathcal{F}_t\}_{t\in[s,T]}$. Then we define the derivative of $X^{s,x,[\xi]}_t$ with respect to the measure component by
\[
\partial_\mu X^{s,x,[\xi]}_t(y) := U^{s,x,[\xi]}_t(y), \qquad t \in [s,T],\ x, y \in \mathbb{R}. \tag{3.3}
\]
The next theorem computes $\partial_\mu X^{s,x,[\xi]}_t(y)$ explicitly.

Theorem 3.1. Suppose that $\sigma \in \mathcal{M}_1(\mathcal{P}_2(\mathbb{R}))$. Then $\partial_\mu X^{s,x,[\xi]}_t(y)$ exists and is the unique solution of the SDE
\[
\partial_\mu X^{s,x,[\xi]}_t(y) = \int_s^t \mathbb{E}^{(1)}\Big[ (\partial_\mu \sigma)\big( [X^{s,\xi}_r], X^{(1),s,y,[\xi]}_r \big) + (\partial_\mu \sigma)\big( [X^{s,\xi}_r], X^{(1),s,\xi^{(1)}}_r \big)\, \partial_\mu X^{(1),s,x,[\xi]}_r(y) \Big]\, \mathrm{d}W_r.
\]

Proof. The proof is given in [4], but is included for completeness. We first define the $L^2$-directional derivative $D_\xi\big( X^{s,x,[\xi]}_t \big)(\eta)$ of $X^{s,x,[\xi]}_t$ in the direction $\eta \in L^2(\mathcal{F}_s)$ by
\[
D_\xi\big( X^{s,x,[\xi]}_t \big)(\eta) := \lim_{h\to0} \frac1h \Big( X^{s,x,[\xi+h\eta]}_t - X^{s,x,[\xi]}_t \Big), \tag{3.4}
\]
where the limit is interpreted in the $L^2$ sense, i.e.
\[
\lim_{h\to0} \mathbb{E}\Big[ \Big| \frac1h \Big( X^{s,x,[\xi+h\eta]}_t - X^{s,x,[\xi]}_t \Big) - D_\xi\big( X^{s,x,[\xi]}_t \big)(\eta) \Big|^2 \Big] = 0.
\]
Similarly, the $L^2$-directional derivative of $X^{s,\xi}_t$ in the direction $\eta \in L^2(\mathcal{F}_s)$ is given by
\[
\lim_{h\to0} \frac1h \Big( X^{s,\xi+h\eta}_t - X^{s,\xi}_t \Big) = \partial_h X^{s,\xi+h\eta}_t \Big|_{h=0}, \tag{3.5}
\]
where both the limit and the derivative are interpreted in the $L^2$ sense. We proceed by formal differentiation and obtain
\[
\partial_h X^{s,\xi+h\eta}_t = \partial_h \Big( X^{s,x,[\xi+h\eta]}_t \Big)\Big|_{x=\xi+h\eta}
= \Big( \partial_x X^{s,x,[\xi+h\eta]}_t \Big)\Big|_{x=\xi+h\eta}\, \eta + \lim_{\nu\to0} \frac1\nu \Big( X^{s,x,[\xi+(h+\nu)\eta]}_t - X^{s,x,[\xi+h\eta]}_t \Big)\Big|_{x=\xi+h\eta}.
\]
Hence,
\[
D_\xi\big( X^{s,\xi}_t \big)(\eta) = \partial_h X^{s,\xi+h\eta}_t \Big|_{h=0} = \eta + D_\xi\big( X^{s,x,[\xi]}_t \big)(\eta)\Big|_{x=\xi}. \tag{3.6}
\]

Recall that the lift of $\sigma$, i.e. $\tilde\sigma : L^2(\mathcal{F}) \to \mathbb{R}$, is defined by $\tilde\sigma(\theta) := \sigma([\theta])$. By (3.4), (3.5) and (3.6), formal differentiation of (3.2) with respect to $\xi$ in the direction $\eta$ gives
\[
D_\xi\big( X^{s,x,[\xi]}_t \big)(\eta) = \int_s^t (D_\theta \tilde\sigma)(X^{s,\xi}_r)\Big( \eta + D_\xi\big( X^{s,x,[\xi]}_r \big)(\eta)\Big|_{x=\xi} \Big)\, \mathrm{d}W_r. \tag{3.7}
\]
By the definition of the derivative in measure of $\sigma$, we can further rewrite (3.7) as
\[
D_\xi\big( X^{s,x,[\xi]}_t \big)(\eta) = \int_s^t \mathbb{E}^{(1)}\Big[ (\partial_\mu \sigma)\big( [X^{s,\xi}_r], X^{(1),s,\xi^{(1)}}_r \big)\Big( \eta^{(1)} + D_\xi\big( X^{(1),s,x,[\xi]}_r \big)(\eta^{(1)})\Big|_{x=\xi^{(1)}} \Big) \Big]\, \mathrm{d}W_r. \tag{3.8}
\]
It is then verified rigorously in Lemma 4.2 of [4] that $D_\xi\big( X^{s,x,[\xi]}_t \big)(\eta)$ is indeed the directional derivative of $X^{s,x,[\xi]}_t$ in the direction $\eta \in L^2(\mathcal{F}_s)$, using the fact that $\sigma$ is in $\mathcal{M}_1$.

The next step involves the consideration of a process $\{U^{s,x,[\xi]}_t\}_{t\in[s,T]}$ satisfying the SDE
\[
U^{s,x,[\xi]}_t(y) = \int_s^t \mathbb{E}^{(1)}\Big[ (\partial_\mu \sigma)\big( [X^{s,\xi}_r], X^{(1),s,y,[\xi]}_r \big) + (\partial_\mu \sigma)\big( [X^{s,\xi}_r], X^{(1),s,\xi^{(1)}}_r \big)\, U^{(1),s,x,[\xi]}_r(y)\Big|_{x=\xi^{(1)}} \Big]\, \mathrm{d}W_r. \tag{3.9}
\]
We write

Eˆ s,x,[ξ] ˆ Ut (ξ)ˆη  t i Eˆ E(1) s,ξ (1) s,y,[ξ] = (∂µσ) [Xr ], X r ηˆ dWr Zs    y=ξˆ  t    Eˆ E(1) s,ξ (1) s,ξ(1) (1) s,x,[ξ] ˆ + (∂µσ) [Xr ], X r U r (ξ) ηˆ dWr Zs    x=ξ(1)      and notice that

\begin{align}
\widehat{\mathbb{E}}\Big[ \mathbb{E}^{(1)}\big[ (\partial_\mu \sigma)\big([X_r^{s,\xi}], (X_r^{(1)})^{s,y,[\xi]}\big) \big]\Big|_{y=\hat{\xi}}\, \hat{\eta} \Big]
&= \mathbb{E}^{(1)}\Big[ \mathbb{E}^{(1)}\big[ (\partial_\mu \sigma)\big([X_r^{s,\xi}], (X_r^{(1)})^{s,y,[\xi]}\big) \big]\Big|_{y=\xi^{(1)}}\, \eta^{(1)} \Big] \notag \\
&= \mathbb{E}^{(1)}\Big[ \mathbb{E}^{(1)}\big[ (\partial_\mu \sigma)\big([X_r^{s,\xi}], (X_r^{(1)})^{s,y,[\xi]}\big) \,\big|\, \mathcal{F}_s^{(1)} \big]\Big|_{y=\xi^{(1)}}\, \eta^{(1)} \Big] \notag \\
&= \mathbb{E}^{(1)}\Big[ (\partial_\mu \sigma)\big([X_r^{s,\xi}], (X_r^{(1)})^{s,\xi^{(1)}}\big)\, \eta^{(1)} \Big], \tag{3.10}
\end{align}
where the second equality uses the fact that $(X^{(1)})^{s,y,[\xi]}$ is $\sigma\big( W_r^{(1)} - W_s^{(1)} : r \in [s,t] \big)$-adapted and is therefore independent of $\mathcal{F}_s^{(1)}$, whereas $\xi^{(1)}$ and $\eta^{(1)}$ are both $\mathcal{F}_s^{(1)}$-measurable. The final equality uses

the fact that $(X_r^{(1)})^{s,y,[\xi]}\big|_{y=\xi^{(1)}} = (X_r^{(1)})^{s,\xi^{(1)}}$. We also notice by Fubini's theorem that

\begin{align}
&\widehat{\mathbb{E}}\Big[ \mathbb{E}^{(1)}\big[ (\partial_\mu \sigma)\big([X_r^{s,\xi}], (X_r^{(1)})^{s,\xi^{(1)}}\big)\, (U_r^{(1)})^{s,x,[\xi]}(\hat{\xi})\Big|_{x=\xi^{(1)}} \big]\, \hat{\eta} \Big] \notag \\
&\qquad = \mathbb{E}^{(1)}\Big[ (\partial_\mu \sigma)\big([X_r^{s,\xi}], (X_r^{(1)})^{s,\xi^{(1)}}\big)\, \widehat{\mathbb{E}}\big[ (U_r^{(1)})^{s,x,[\xi]}(\hat{\xi})\,\hat{\eta} \big]\Big|_{x=\xi^{(1)}} \Big]. \tag{3.11}
\end{align}
Therefore, by (3.10) and (3.11), we observe that $D_\xi X^{s,x,[\xi]}(\eta)$ and $\widehat{\mathbb{E}}\big[ U^{s,x,[\xi]}(\hat{\xi})\,\hat{\eta} \big]$ satisfy the same SDE and hence
\[
D_\xi X_t^{s,x,[\xi]}(\eta) = \widehat{\mathbb{E}}\Big[ U_t^{s,x,[\xi]}(\hat{\xi})\,\hat{\eta} \Big], \qquad t \in [s,T], \; \eta \in L^2(\mathcal{F}_s). \tag{3.12}
\]
We then observe that $U_t^{s,x,[\xi]}(y)$ satisfies the same SDE for any $x \in \mathbb{R}$. Therefore, there is no dependence on $x$ and hence (3.9) can be rewritten as
\[
U_t^{s,x,[\xi]}(y) = \int_s^t \mathbb{E}^{(1)}\Big[ (\partial_\mu \sigma)\big([X_r^{s,\xi}], (X_r^{(1)})^{s,y,[\xi]}\big) + (\partial_\mu \sigma)\big([X_r^{s,\xi}], (X_r^{(1)})^{s,\xi^{(1)}}\big)\, (U_r^{(1)})^{s,x,[\xi]}(y) \Big]\, dW_r. \tag{3.13}
\]
Moreover, by the fact that $\sigma$ is in $\mathcal{M}_1$, we establish that

(i)
\[
\mathbb{E}\Big[ \sup_{t \in [s,T]} \big| U_t^{s,x,[\xi]}(y) \big|^2 \Big] \le C, \tag{3.14}
\]

(ii)
\[
\mathbb{E}\Big[ \sup_{t \in [s,T]} \big| U_t^{s,x,[\xi]}(y) - U_t^{s,x,[\xi']}(y') \big|^2 \Big] \le C\Big( |y - y'|^2 + W_2([\xi],[\xi'])^2 \Big), \tag{3.15}
\]
for any $s \in [0,T]$, $x,y,y' \in \mathbb{R}$ and $\xi,\xi' \in L^2(\mathcal{F}_s)$, for some constant $C > 0$. Indeed, (3.14) follows from the boundedness of $\partial_\mu \sigma$ and Gronwall's inequality. (3.15) follows from the Lipschitz property of $\partial_\mu \sigma$ and Gronwall's inequality, along with the bounds
\begin{align*}
\mathbb{E}\Big[ \sup_{t \in [s,T]} \big| X_t^{s,\xi} - X_t^{s,\xi'} \big|^2 \Big] &\le C\, \mathbb{E}\big[ |\xi - \xi'|^2 \big], \\
\sup_{t \in [s,T]} W_2\big([X_t^{s,\xi}], [X_t^{s,\xi'}]\big)^2 &\le C\, W_2([\xi],[\xi'])^2, \\
\mathbb{E}\Big[ \sup_{t \in [s,T]} \big| X_t^{s,x,[\xi]} - X_t^{s,x',[\xi']} \big|^2 \Big] &\le C\Big( |x - x'|^2 + W_2([\xi],[\xi'])^2 \Big),
\end{align*}
for some constant $C > 0$. Finally, the bounds (3.14), (3.15) and the connection (3.12) allow us to establish that the Gâteaux derivative

\[
L^2(\mathcal{F}_s) \to L\big( L^2(\mathcal{F}_s), L^2(\mathcal{F}_t) \big); \qquad \xi \mapsto \Big( \eta \mapsto D_\xi X_t^{s,x,[\xi]}(\eta) \Big) \tag{3.16}
\]
is continuous (where the space $L\big( L^2(\mathcal{F}_s), L^2(\mathcal{F}_t) \big)$ is equipped with the corresponding operator norm), which proves that (3.16) is indeed the Fréchet derivative of $X_t^{s,x,[\xi]}$ with respect to $\xi$. By (3.12), it follows from the definition of $\partial_\mu X_t^{s,x,[\xi]}(y)$ that
\[
\partial_\mu X_t^{s,x,[\xi]}(y) = U_t^{s,x,[\xi]}(y), \qquad t \in [s,T]. \tag{3.17}
\]
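The finite-difference characterisation (3.4) of the directional derivative can be illustrated numerically on a toy version of the dynamics above. The sketch below is not part of the proof: the coefficient choice $\sigma(\mu) = \sqrt{1 + (\int x\,\mu(dx))^2}$, all function names, and the particle discretisation are illustrative assumptions. It approximates $[\xi]$ by an $N$-particle empirical law, simulates $X_t^{0,x,[\xi]}$ and $X_t^{0,x,[\xi+h\eta]}$ with a shared Brownian path via Euler–Maruyama, and checks that the difference quotient in (3.4) stabilises as $h$ shrinks.

```python
import numpy as np

def sigma(mu_samples):
    # Illustrative measure-dependent coefficient: sigma(mu) = sqrt(1 + mean(mu)^2).
    return np.sqrt(1.0 + np.mean(mu_samples) ** 2)

def simulate(x, xi, dW):
    """Euler-Maruyama for X_t = x + int_0^t sigma([X_r^{0,xi}]) dW_r.

    xi: N samples standing in for the L^2 random variable xi;
    dW: (n_steps, N) shared Brownian increments, one path per particle.
    """
    X_xi = xi.copy()           # particle system approximating X^{0,xi}
    X = np.full_like(xi, x)    # X^{0,x,[xi]}, driven by the same noise
    for inc in dW:
        s = sigma(X_xi)        # coefficient depends only on the law of X^{0,xi}
        X = X + s * inc
        X_xi = X_xi + s * inc
    return X

rng = np.random.default_rng(0)
N, n_steps, T = 20_000, 50, 1.0
dt = T / n_steps
xi = rng.normal(1.0, 0.5, N)           # samples of xi in L^2(F_0)
eta = np.ones(N)                       # direction eta = 1 (deterministic)
dW = rng.normal(0.0, np.sqrt(dt), (n_steps, N))

base = simulate(0.0, xi, dW)
# Difference quotients from (3.4) for two step sizes h, same noise throughout.
quotients = [(simulate(0.0, xi + h * eta, dW) - base) / h for h in (1e-2, 1e-3)]
gap = np.mean(np.abs(quotients[0] - quotients[1]))
print(gap)  # the quotient stabilises, consistent with the limit in (3.4) existing
```

Fixing the Brownian increments across the two values of $h$ mimics the pathwise comparison used in the formal differentiation; the residual gap reflects second-order terms in $h$ plus discretisation error.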

Higher order derivatives of $[\xi] \mapsto X_t^{s,x,[\xi]}$. We recall that $\partial_\mu X_t^{s,x,[\xi]}(y)$ does not depend on $x$ and hence we define
\[
\partial_\mu X_t^{s,[\xi]}(y) := \partial_\mu X_t^{s,x,[\xi]}(y). \tag{3.18}
\]
Subsequently, we define inductively, as in (2.6) and (2.7), the $n$th order derivative in measure of $X_t^{s,x,[\xi]}$ by

\[
\partial_\mu^n X_t^{s,[\xi]}(v_1, \dots, v_n) := \partial_\mu^{n-1}\Big( \partial_\mu X_t^{s,[\xi]}(v_1) \Big)(v_2, \dots, v_n), \qquad t \in [s,T], \; v_1, \dots, v_n \in \mathbb{R}^d,
\]
and its corresponding mixed derivatives by

\[
\partial_{v_n}^{\beta_n} \dots \partial_{v_1}^{\beta_1}\, \partial_\mu^n X_t^{s,[\xi]}(v_1, \dots, v_n), \qquad \beta_1, \dots, \beta_n \in \mathbb{N} \cup \{0\},
\]
provided that these derivatives actually exist, where each derivative in $v_i$ is interpreted in the $L^2$ sense. (See Lemma 4.1 in [4] for its precise meaning.) Next, we generalise the multi-index notation and the class $\mathcal{M}_k$ to include derivatives of $X_t^{s,x,[\xi]}$.

Definition 3.2 (Multi-index notation for derivatives of $X_t^{s,x,[\xi]}$). Let $(n,\beta)$ be a multi-index. Then $D^{(n,\beta)} X_t^{s,[\xi]}(v_1, \dots, v_n)$ is defined by

\[
D^{(n,\beta)} X_t^{s,[\xi]}(v_1, \dots, v_n) := \partial_{v_n}^{\beta_n} \dots \partial_{v_1}^{\beta_1}\, \partial_\mu^n X_t^{s,[\xi]}(v_1, \dots, v_n),
\]
if this derivative is well-defined.

Definition 3.3 (Class $\mathcal{M}_k$ of $k$th order differentiable functions of $X^{s,x,[\xi]}$). The process $X^{s,x,[\xi]} = \{X_t^{s,x,[\xi]}\}_{t \in [s,T]}$ belongs to class $\mathcal{M}_k(X^{s,x,[\xi]})$ if $D^{(n,\beta)} X_t^{s,[\xi]}(v_1, \dots, v_n)$ exists for every multi-index $(n,\beta)$ such that $|(n,\beta)| \le k$ and

(a)
\[
\mathbb{E}\Big[ \sup_{t \in [s,T]} \big| D^{(n,\beta)} X_t^{s,[\xi]}(v_1, \dots, v_n) \big|^2 \Big] \le C, \tag{3.19}
\]

(b)

\[
\mathbb{E}\Big[ \sup_{t \in [s,T]} \big| D^{(n,\beta)} X_t^{s,[\xi]}(v_1, \dots, v_n) - D^{(n,\beta)} X_t^{s,[\xi']}(v_1', \dots, v_n') \big|^2 \Big] \le C\Big( \sum_{i=1}^n |v_i - v_i'|^2 + W_2([\xi],[\xi'])^2 \Big), \tag{3.20}
\]
for any $s \in [0,T]$, $v_1, v_1', \dots, v_n, v_n' \in \mathbb{R}^d$ and $\xi,\xi' \in L^2(\mathcal{F}_s)$, for some constant $C > 0$.

The following theorem extends Theorem 3.1 to higher order derivatives. It uses the notations

\[
\Lambda_{i,k} := \Big\{ \theta: \{1,\dots,i\} \to \{1,\dots,k\} \;\Big|\; \theta \text{ is a strictly increasing function} \Big\}, \qquad i \in \{1,\dots,k\},
\]
and
\[
R_k := \Big\{ y = \big( y_{(j,\ell)} \big)_{1 \le j,\ell \le k} \;\Big|\; y_{(j,\ell)} \in \mathbb{R} \Big\}, \qquad
T_k := \Big\{ z = \big( z_{(j,i,\theta)} \big)_{1 \le j,i \le k,\, \theta \in \Lambda_{i,k}} \;\Big|\; z_{(j,i,\theta)} \in \mathbb{R} \Big\}.
\]
For any function $F_k: \mathcal{P}_2(\mathbb{R}) \times \mathbb{R}^k \times R_k \times T_k \to \mathbb{R}$, $\partial_{x_j} F_k$ denotes the corresponding partial derivative with respect to the second component of $F_k$, $\partial_{y_{(j,\ell)}} F_k$ denotes the corresponding partial derivative with respect to the third component of $F_k$, and $\partial_{z_{(j,i,\theta)}} F_k$ denotes the corresponding partial derivative with respect to the fourth component of $F_k$.
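Since a strictly increasing map $\theta: \{1,\dots,i\} \to \{1,\dots,k\}$ is determined by its image, $\Lambda_{i,k}$ can be identified with the $i$-element subsets of $\{1,\dots,k\}$, so $|\Lambda_{i,k}| = \binom{k}{i}$. A short sketch (the function names `Lambda` and `T_dim` are my own, not from the text) enumerates it and counts the coordinates of $T_k$:

```python
from itertools import combinations
from math import comb

def Lambda(i, k):
    """All strictly increasing maps theta: {1,...,i} -> {1,...,k},
    encoded as the tuples (theta(1), ..., theta(i))."""
    return list(combinations(range(1, k + 1), i))

def T_dim(k):
    """Number of coordinates z_(j,i,theta) of T_k: j and i range over
    {1,...,k} and theta over Lambda_{i,k}."""
    return sum(k * len(Lambda(i, k)) for i in range(1, k + 1))

print(Lambda(2, 3))   # [(1, 2), (1, 3), (2, 3)]
print(T_dim(3))       # 3 * (C(3,1) + C(3,2) + C(3,3)) = 21
```

In particular $\Lambda_{k,k}$ contains only the identity, which is why the top-order $z$-coordinate in the induction below is indexed by $I_{k^*}$.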

Theorem 3.4. Suppose that $\sigma \in \mathcal{M}_K(\mathcal{P}_2(\mathbb{R}))$. Then, for any $k \in \{1,\dots,K\}$ and $t \in [s,T]$, the $k$th order derivative in measure $\partial_\mu^k X_t^{s,[\xi]}(v_1, \dots, v_k)$ exists and satisfies (3.19) and (3.20). In particular, it is the unique solution of an SDE given by

\[
\partial_\mu^k X_t^{s,[\xi]}(v_1, \dots, v_k) = \int_s^t \mathbb{E}^{(1)} \mathbb{E}^{(2)} \dots \mathbb{E}^{(k)}\bigg[ F_k\Big( [X_r^{s,\xi}],\; \big( (X_r^{(j)})^{s,\xi^{(j)}} \big)_{1 \le j \le k},\; \big( (X_r^{(j)})^{s,v_\ell,[\xi]} \big)_{1 \le j,\ell \le k},\; \big( \partial_\mu^i (X_r^{(j)})^{s,[\xi]}(v_{\theta(1)}, \dots, v_{\theta(i)}) \big)_{1 \le j,i \le k,\, \theta \in \Lambda_{i,k}} \Big) \bigg]\, dW_r, \tag{3.21}
\]
where $F_k: \mathcal{P}_2(\mathbb{R}) \times \mathbb{R}^k \times R_k \times T_k \to \mathbb{R}$ is defined by the recurrence relation

\begin{align}
& F_{k+1}\Big( \mu,\; (x_j)_{1 \le j \le k+1},\; (y_{(j,\ell)})_{1 \le j,\ell \le k+1},\; (z_{(j,i,\theta)})_{1 \le j,i \le k+1,\, \theta \in \Lambda_{i,k+1}} \Big) \notag \\
&\quad = \partial_\mu F_k\Big( \mu,\; (x_j)_{1 \le j \le k},\; (y_{(j,\ell)})_{1 \le j,\ell \le k},\; (z_{(j,i,\theta)})_{1 \le j,i \le k,\, \theta \in \Lambda_{i,k}},\; y_{(k+1,k+1)} \Big) \notag \\
&\qquad + \partial_\mu F_k\Big( \mu,\; (x_j)_{1 \le j \le k},\; (y_{(j,\ell)})_{1 \le j,\ell \le k},\; (z_{(j,i,\theta)})_{1 \le j,i \le k,\, \theta \in \Lambda_{i,k}},\; x_{k+1} \Big)\, z_{(k+1,1,P_{k+1})} \notag \\
&\qquad + \sum_{j=1}^k \partial_{x_j} F_k\Big( \mu,\; \big( x_1, \dots, x_{j-1}, y_{(j,k+1)}, x_{j+1}, \dots, x_k \big),\; (y_{(j,\ell)})_{1 \le j,\ell \le k},\; (z_{(j,i,\theta)})_{1 \le j,i \le k,\, \theta \in \Lambda_{i,k}} \Big) \notag \\
&\qquad + \sum_{j=1}^k \partial_{x_j} F_k\Big( \mu,\; (x_j)_{1 \le j \le k},\; (y_{(j,\ell)})_{1 \le j,\ell \le k},\; (z_{(j,i,\theta)})_{1 \le j,i \le k,\, \theta \in \Lambda_{i,k}} \Big)\, z_{(j,1,P_{k+1})} \notag \\
&\qquad + \sum_{j,\ell=1}^k \partial_{y_{(j,\ell)}} F_k\Big( \mu,\; (x_j)_{1 \le j \le k},\; (y_{(j,\ell)})_{1 \le j,\ell \le k},\; (z_{(j,i,\theta)})_{1 \le j,i \le k,\, \theta \in \Lambda_{i,k}} \Big)\, z_{(j,1,P_{k+1})} \notag \\
&\qquad + \sum_{j,i=1}^k \sum_{\theta \in \Lambda_{i,k}} \partial_{z_{(j,i,\theta)}} F_k\Big( \mu,\; (x_j)_{1 \le j \le k},\; (y_{(j,\ell)})_{1 \le j,\ell \le k},\; (z_{(j,i,\theta)})_{1 \le j,i \le k,\, \theta \in \Lambda_{i,k}} \Big)\, z_{(j,i+1,\theta_{k+1})}, \qquad k \in \{1,\dots,K-1\}, \tag{3.22}
\end{align}

where $P_{k+1} \in \Lambda_{1,k+1}$ is defined by $P_{k+1}(1) = k+1$ and, for each $\theta \in \Lambda_{i,k}$, the function $\theta_{k+1} \in \Lambda_{i+1,k+1}$ is defined such that $\theta_{k+1}\big|_{\{1,\dots,i\}} = \theta$ and $\theta_{k+1}(i+1) = k+1$. Moreover, $F_1$ is given by

\[
F_1(\mu,x,y,z) = \partial_\mu \sigma(\mu,y) + \partial_\mu \sigma(\mu,x)\, z. \tag{3.23}
\]

Proof. We remark that the functions $F_k$, $k \in \{1,\dots,K\}$, are well-defined, since $\sigma \in \mathcal{M}_K$. We proceed by strong induction on $k \in \{1,\dots,K\}$. The base step $k=1$ is done in Theorem 3.1; in particular, (3.13) verifies (3.23). The main arguments in the induction step are the same as in the base step. Suppose that the statement holds for all $k \in \{1,\dots,k^*\}$, where $k^* \in \{1,\dots,K-1\}$. Then, in particular, $\partial_\mu^{k^*} X_t^{s,[\xi]}(v_1, \dots, v_{k^*})$ satisfies the SDE

\[
\partial_\mu^{k^*} X_t^{s,[\xi]}(v_1, \dots, v_{k^*}) = \int_s^t \mathbb{E}^{(1)} \mathbb{E}^{(2)} \dots \mathbb{E}^{(k^*)}\bigg[ F_{k^*}\Big( [X_r^{s,\xi}],\; \big( (X_r^{(j)})^{s,\xi^{(j)}} \big)_{1 \le j \le k^*},\; \big( (X_r^{(j)})^{s,v_\ell,[\xi]} \big)_{1 \le j,\ell \le k^*},\; \big( \partial_\mu^i (X_r^{(j)})^{s,[\xi]}(v_{\theta(1)}, \dots, v_{\theta(i)}) \big)_{1 \le j,i \le k^*,\, \theta \in \Lambda_{i,k^*}} \Big) \bigg]\, dW_r. \tag{3.24}
\]

Let $\widetilde{F}_{k^*}$ be the lift of $F_{k^*}$. In the following expression, $\partial_x \widetilde{F}_{k^*}$ denotes the partial derivative with respect to the lifted component of $\widetilde{F}_{k^*}$. As in (3.6) and (3.7), we formally differentiate (3.24) with respect to $\xi$ in the direction $\eta$ to obtain the directional derivative

To lighten notation, write
\[
\Theta_r := \Big( [X_r^{s,\xi}],\; \big( (X_r^{(j)})^{s,\xi^{(j)}} \big)_{1 \le j \le k^*},\; \big( (X_r^{(j)})^{s,v_\ell,[\xi]} \big)_{1 \le j,\ell \le k^*},\; \big( \partial_\mu^i (X_r^{(j)})^{s,[\xi]}(v_{\theta(1)}, \dots, v_{\theta(i)}) \big)_{1 \le j,i \le k^*,\, \theta \in \Lambda_{i,k^*}} \Big)
\]
for the argument of $F_{k^*}$ in (3.24); in the term involving $\partial_x \widetilde{F}_{k^*}$ below, the measure component $[X_r^{s,\xi}]$ of $\Theta_r$ is understood through its lift argument $X_r^{s,\xi}$. Then
\begin{align}
& D_\xi\Big( \partial_\mu^{k^*} X_t^{s,[\xi]}(v_1, \dots, v_{k^*}) \Big)(\eta) \notag \\
&\quad = \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_x \widetilde{F}_{k^*}\big( \Theta_r \big)\Big( \eta + D_\xi\big( X_r^{s,x,[\xi]} \big)(\eta)\Big|_{x=\xi} \Big) \Big]\, dW_r \notag \\
&\qquad + \sum_{j=1}^{k^*} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{x_j} F_{k^*}\big( \Theta_r \big)\Big( \eta^{(j)} + D_\xi\big( (X_r^{(j)})^{s,x,[\xi]} \big)(\eta^{(j)})\Big|_{x=\xi^{(j)}} \Big) \Big]\, dW_r \notag \\
&\qquad + \sum_{j,\ell=1}^{k^*} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{y_{(j,\ell)}} F_{k^*}\big( \Theta_r \big)\, D_\xi\big( (X_r^{(j)})^{s,v_\ell,[\xi]} \big)(\eta^{(j)}) \Big]\, dW_r \notag \\
&\qquad + \sum_{j,i=1}^{k^*} \sum_{\theta \in \Lambda_{i,k^*}} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{z_{(j,i,\theta)}} F_{k^*}\big( \Theta_r \big)\, D_\xi\Big( \partial_\mu^i (X_r^{(j)})^{s,[\xi]}(v_{\theta(1)}, \dots, v_{\theta(i)}) \Big)(\eta^{(j)}) \Big]\, dW_r. \tag{3.25}
\end{align}
We then recall that the following directional derivatives can be represented as
\[
D_\xi\big( X_r^{s,x,[\xi]} \big)(\eta)\Big|_{x=\xi} = \widehat{\mathbb{E}}\big[ \partial_\mu X_r^{s,[\xi]}(\hat{\xi})\,\hat{\eta} \big], \qquad
D_\xi\big( (X_r^{(j)})^{s,v_\ell,[\xi]} \big)(\eta^{(j)}) = \widehat{\mathbb{E}}\big[ \partial_\mu (X_r^{(j)})^{s,[\xi]}(\hat{\xi})\,\hat{\eta} \big],
\]
\[
D_\xi\big( (X_r^{(j)})^{s,x,[\xi]} \big)(\eta^{(j)})\Big|_{x=\xi^{(j)}} = \widehat{\mathbb{E}}\big[ \partial_\mu (X_r^{(j)})^{s,[\xi]}(\hat{\xi})\,\hat{\eta} \big]
\]
and
\[
D_\xi\Big( \partial_\mu^i (X_r^{(j)})^{s,[\xi]}(v_{\theta(1)}, \dots, v_{\theta(i)}) \Big)(\eta^{(j)}) = \widehat{\mathbb{E}}\Big[ \partial_\mu^{i+1} (X_r^{(j)})^{s,[\xi]}(v_{\theta(1)}, \dots, v_{\theta(i)}, \hat{\xi})\,\hat{\eta} \Big], \qquad i \in \{1,\dots,k^*-1\}.
\]
We can therefore rewrite (3.25) as
\begin{align*}
& D_\xi\Big( \partial_\mu^{k^*} X_t^{s,[\xi]}(v_1, \dots, v_{k^*}) \Big)(\eta) \\
&\quad = \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)} \mathbb{E}^{(k^*+1)}\Big[ \partial_\mu F_{k^*}\big( \Theta_r,\; (X_r^{(k^*+1)})^{s,\xi^{(k^*+1)}} \big)\Big( \eta^{(k^*+1)} + \widehat{\mathbb{E}}\big[ \partial_\mu (X_r^{(k^*+1)})^{s,[\xi]}(\hat{\xi})\,\hat{\eta} \big] \Big) \Big]\, dW_r \\
&\qquad + \sum_{j=1}^{k^*} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{x_j} F_{k^*}\big( \Theta_r \big)\Big( \eta^{(j)} + \widehat{\mathbb{E}}\big[ \partial_\mu (X_r^{(j)})^{s,[\xi]}(\hat{\xi})\,\hat{\eta} \big] \Big) \Big]\, dW_r \\
&\qquad + \sum_{j,\ell=1}^{k^*} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{y_{(j,\ell)}} F_{k^*}\big( \Theta_r \big)\, \widehat{\mathbb{E}}\big[ \partial_\mu (X_r^{(j)})^{s,[\xi]}(\hat{\xi})\,\hat{\eta} \big] \Big]\, dW_r \\
&\qquad + \sum_{j=1}^{k^*} \sum_{i=1}^{k^*-1} \sum_{\theta \in \Lambda_{i,k^*}} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{z_{(j,i,\theta)}} F_{k^*}\big( \Theta_r \big)\, \widehat{\mathbb{E}}\big[ \partial_\mu^{i+1} (X_r^{(j)})^{s,[\xi]}(v_{\theta(1)}, \dots, v_{\theta(i)}, \hat{\xi})\,\hat{\eta} \big] \Big]\, dW_r \\
&\qquad + \sum_{j=1}^{k^*} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{z_{(j,k^*,I_{k^*})}} F_{k^*}\big( \Theta_r \big)\, D_\xi\Big( \partial_\mu^{k^*} (X_r^{(j)})^{s,[\xi]}(v_1, \dots, v_{k^*}) \Big)(\eta^{(j)}) \Big]\, dW_r,
\end{align*}
where, in the last sum, $I_{k^*}$ denotes the identity function from $\{1,\dots,k^*\}$ to itself. We now define a process $\big\{ U_{k^*+1,t}^{s,[\xi]}(v_1, \dots, v_{k^*+1}) \big\}_{t \in [s,T]}$ that satisfies the SDE
\begin{align}
& U_{k^*+1,t}^{s,[\xi]}(v_1, \dots, v_{k^*+1}) \notag \\
&\quad = \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)} \mathbb{E}^{(k^*+1)}\Big[ \partial_\mu F_{k^*}\big( \Theta_r,\; (X_r^{(k^*+1)})^{s,v_{k^*+1},[\xi]} \big) \Big]\, dW_r \notag \\
&\qquad + \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)} \mathbb{E}^{(k^*+1)}\Big[ \partial_\mu F_{k^*}\big( \Theta_r,\; (X_r^{(k^*+1)})^{s,\xi^{(k^*+1)}} \big)\, \partial_\mu (X_r^{(k^*+1)})^{s,[\xi]}(v_{k^*+1}) \Big]\, dW_r \notag \\
&\qquad + \sum_{j=1}^{k^*} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{x_j} F_{k^*}\big( \Theta_r^{(j \to v_{k^*+1})} \big) \Big]\, dW_r \notag \\
&\qquad + \sum_{j=1}^{k^*} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{x_j} F_{k^*}\big( \Theta_r \big)\, \partial_\mu (X_r^{(j)})^{s,[\xi]}(v_{k^*+1}) \Big]\, dW_r \notag \\
&\qquad + \sum_{j,\ell=1}^{k^*} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{y_{(j,\ell)}} F_{k^*}\big( \Theta_r \big)\, \partial_\mu (X_r^{(j)})^{s,[\xi]}(v_{k^*+1}) \Big]\, dW_r \notag \\
&\qquad + \sum_{j=1}^{k^*} \sum_{i=1}^{k^*-1} \sum_{\theta \in \Lambda_{i,k^*}} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{z_{(j,i,\theta)}} F_{k^*}\big( \Theta_r \big)\, \partial_\mu^{i+1} (X_r^{(j)})^{s,[\xi]}(v_{\theta(1)}, \dots, v_{\theta(i)}, v_{k^*+1}) \Big]\, dW_r \notag \\
&\qquad + \sum_{j=1}^{k^*} \int_s^t \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\Big[ \partial_{z_{(j,k^*,I_{k^*})}} F_{k^*}\big( \Theta_r \big)\, U_{k^*+1,r}^{(j),s,[\xi]}(v_1, \dots, v_{k^*+1}) \Big]\, dW_r, \tag{3.26}
\end{align}
where $\Theta_r^{(j \to v_{k^*+1})}$ denotes $\Theta_r$ with the component $(X_r^{(j)})^{s,\xi^{(j)}}$ replaced by $(X_r^{(j)})^{s,v_{k^*+1},[\xi]}$. Then we write
\begin{align*}
& \widehat{\mathbb{E}}\Big[ U_{k^*+1,t}^{s,[\xi]}(v_1, \dots, v_{k^*}, \hat{\xi})\,\hat{\eta} \Big] \\
&\quad = \int_s^t \widehat{\mathbb{E}}\Big[ \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*+1)}\big[ \partial_\mu F_{k^*}\big( \Theta_r,\; (X_r^{(k^*+1)})^{s,v_{k^*+1},[\xi]} \big) \big]\Big|_{v_{k^*+1}=\hat{\xi}}\, \hat{\eta} \Big]\, dW_r \\
&\qquad + \int_s^t \widehat{\mathbb{E}}\Big[ \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*+1)}\big[ \partial_\mu F_{k^*}\big( \Theta_r,\; (X_r^{(k^*+1)})^{s,\xi^{(k^*+1)}} \big)\, \partial_\mu (X_r^{(k^*+1)})^{s,[\xi]}(v_{k^*+1}) \big]\Big|_{v_{k^*+1}=\hat{\xi}}\, \hat{\eta} \Big]\, dW_r \\
&\qquad + \sum_{j=1}^{k^*} \int_s^t \widehat{\mathbb{E}}\Big[ \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\big[ \partial_{x_j} F_{k^*}\big( \Theta_r^{(j \to v_{k^*+1})} \big) \big]\Big|_{v_{k^*+1}=\hat{\xi}}\, \hat{\eta} \Big]\, dW_r \\
&\qquad + \sum_{j=1}^{k^*} \int_s^t \widehat{\mathbb{E}}\Big[ \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\big[ \partial_{x_j} F_{k^*}\big( \Theta_r \big)\, \partial_\mu (X_r^{(j)})^{s,[\xi]}(v_{k^*+1}) \big]\Big|_{v_{k^*+1}=\hat{\xi}}\, \hat{\eta} \Big]\, dW_r \\
&\qquad + \sum_{j,\ell=1}^{k^*} \int_s^t \widehat{\mathbb{E}}\Big[ \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\big[ \partial_{y_{(j,\ell)}} F_{k^*}\big( \Theta_r \big)\, \partial_\mu (X_r^{(j)})^{s,[\xi]}(v_{k^*+1}) \big]\Big|_{v_{k^*+1}=\hat{\xi}}\, \hat{\eta} \Big]\, dW_r \\
&\qquad + \sum_{j=1}^{k^*} \sum_{i=1}^{k^*-1} \sum_{\theta \in \Lambda_{i,k^*}} \int_s^t \widehat{\mathbb{E}}\Big[ \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\big[ \partial_{z_{(j,i,\theta)}} F_{k^*}\big( \Theta_r \big)\, \partial_\mu^{i+1} (X_r^{(j)})^{s,[\xi]}(v_{\theta(1)}, \dots, v_{\theta(i)}, v_{k^*+1}) \big]\Big|_{v_{k^*+1}=\hat{\xi}}\, \hat{\eta} \Big]\, dW_r \\
&\qquad + \sum_{j=1}^{k^*} \int_s^t \widehat{\mathbb{E}}\Big[ \mathbb{E}^{(1)} \dots \mathbb{E}^{(k^*)}\big[ \partial_{z_{(j,k^*,I_{k^*})}} F_{k^*}\big( \Theta_r \big)\, U_{k^*+1,r}^{(j),s,[\xi]}(v_1, \dots, v_{k^*+1}) \big]\Big|_{v_{k^*+1}=\hat{\xi}}\, \hat{\eta} \Big]\, dW_r.
\end{align*}

As in the proof of Theorem 3.1, we deduce that $D_\xi\big( \partial_\mu^{k^*} X^{s,[\xi]}(v_1, \dots, v_{k^*}) \big)(\eta)$ satisfies the same SDE as $\widehat{\mathbb{E}}\big[ U_{k^*+1}^{s,[\xi]}(v_1, \dots, v_{k^*}, \hat{\xi})\,\hat{\eta} \big]$. (Note that equality of the first and third terms follows from the same argument as (3.10) and equality of the other terms follows from the same argument as (3.11).)

Consequently,
\[
D_\xi\Big( \partial_\mu^{k^*} X_t^{s,[\xi]}(v_1, \dots, v_{k^*}) \Big)(\eta) = \widehat{\mathbb{E}}\Big[ U_{k^*+1,t}^{s,[\xi]}(v_1, \dots, v_{k^*}, \hat{\xi})\,\hat{\eta} \Big]. \tag{3.27}
\]
By the induction hypothesis, we can again establish (as in the proof of Theorem 3.1) that

(i)
\[
\mathbb{E}\Big[ \sup_{t \in [s,T]} \big| U_{k^*+1,t}^{s,[\xi]}(v_1, \dots, v_{k^*+1}) \big|^2 \Big] \le C,
\]

(ii)

\[
\mathbb{E}\Big[ \sup_{t \in [s,T]} \big| U_{k^*+1,t}^{s,[\xi]}(v_1, \dots, v_{k^*+1}) - U_{k^*+1,t}^{s,[\xi']}(v_1', \dots, v_{k^*+1}') \big|^2 \Big] \le C\Big( \sum_{i=1}^{k^*+1} |v_i - v_i'|^2 + W_2([\xi],[\xi'])^2 \Big),
\]

for any $s \in [0,T]$, $v_1, v_1', \dots, v_{k^*+1}, v_{k^*+1}' \in \mathbb{R}$ and $\xi,\xi' \in L^2(\mathcal{F}_s)$, for some constant $C > 0$. Subsequently, it follows from the same reasoning as in the proof of Theorem 3.1 and (3.27) that

\[
\partial_\mu^{k^*+1} X_t^{s,[\xi]}(v_1, \dots, v_{k^*+1}) = U_{k^*+1,t}^{s,[\xi]}(v_1, \dots, v_{k^*+1}).
\]

Finally, by the recurrence relation (3.22) and the expression of $U_{k^*+1,t}^{s,[\xi]}$ in (3.26), it is clear that $\partial_\mu^{k^*+1} X_t^{s,[\xi]}(v_1, \dots, v_{k^*+1})$ satisfies the SDE

\[
\partial_\mu^{k^*+1} X_t^{s,[\xi]}(v_1, \dots, v_{k^*+1}) = \int_s^t \mathbb{E}^{(1)} \mathbb{E}^{(2)} \dots \mathbb{E}^{(k^*+1)}\bigg[ F_{k^*+1}\Big( [X_r^{s,\xi}],\; \big( (X_r^{(j)})^{s,\xi^{(j)}} \big)_{1 \le j \le k^*+1},\; \big( (X_r^{(j)})^{s,v_\ell,[\xi]} \big)_{1 \le j,\ell \le k^*+1},\; \big( \partial_\mu^i (X_r^{(j)})^{s,[\xi]}(v_{\theta(1)}, \dots, v_{\theta(i)}) \big)_{1 \le j,i \le k^*+1,\, \theta \in \Lambda_{i,k^*+1}} \Big) \bigg]\, dW_r.
\]

Corollary 3.5. Suppose that $\sigma$ is in $\mathcal{M}_k(\mathcal{P}_2(\mathbb{R}))$. Then $X^{s,x,[\xi]} \in \mathcal{M}_k(X^{s,x,[\xi]})$.

Proof. For any multi-index $(n,\beta)$ such that $|(n,\beta)| \le k$, we have an SDE representation of $\partial_\mu^n X_t^{s,[\xi]}(v_1, \dots, v_n)$, by (3.21) in Theorem 3.4. By (3.22) and (3.23), we know that the function $F_n$ in (3.21) is differentiable in the spatial components at most $k-n$ times. This is exactly what we need, since $|\beta| = \beta_1 + \dots + \beta_n \le k-n$. Hence, we formally differentiate $\beta_i$ times with respect to each variable $v_i$, $1 \le i \le n$, and then use a standard Gronwall argument to establish the bounds (3.19) and (3.20). (See Theorem 5.5.3 in [17] or Proposition 4.10 in [25] for details.)

We are now in a position to prove Theorem 2.15, via the smoothness of $\sigma$ and $X^{s,x,[\xi]}$.

Proof of Theorem 2.15. By combining (3.6), (3.12), (3.17) and (3.18), we deduce that

\[
\chi: L^2(\mathcal{F}_s) \to L^2(\mathcal{F}_t); \qquad \xi \mapsto X_t^{s,\xi}
\]
is Fréchet differentiable with Fréchet derivative given by

\[
D\chi(\xi)(\eta) = \eta + \widehat{\mathbb{E}}\Big[ \partial_\mu X_t^{s,[\xi]}(\hat{\xi})\,\hat{\eta} \Big].
\]
Next, for any fixed $s \in [0,t]$, we define the lifts $\widetilde{\Phi}: L^2(\mathcal{F}_t) \to \mathbb{R}$ and $\widetilde{\mathcal{V}}(s,\cdot): L^2(\mathcal{F}_s) \to \mathbb{R}$ for the functions $\Phi$ and $\mathcal{V}(s,\cdot)$ respectively, given by
\[
\widetilde{\Phi}(\theta_1) = \Phi([\theta_1]), \qquad \widetilde{\mathcal{V}}(s,\theta_2) = \mathcal{V}(s,[\theta_2]), \qquad \text{for } \theta_1 \in L^2(\mathcal{F}_t), \; \theta_2 \in L^2(\mathcal{F}_s).
\]
Then, we notice from equation (2.34) that

\[
\widetilde{\mathcal{V}}(s,\cdot) = \widetilde{\Phi} \circ \chi.
\]
By the chain rule of Fréchet differentiation, we obtain that

\[
D\widetilde{\mathcal{V}}(s,\xi) = D\widetilde{\Phi}(\chi(\xi)) \circ D\chi(\xi),
\]
which implies that

\begin{align}
D\widetilde{\mathcal{V}}(s,\xi)(\eta) &= D\widetilde{\Phi}(\chi(\xi))\big( D\chi(\xi)(\eta) \big) \notag \\
&= \mathbb{E}\Big[ \partial_\mu \Phi\big( [X_t^{s,\xi}], X_t^{s,\xi} \big)\, D\chi(\xi)(\eta) \Big] \notag \\
&= \mathbb{E}\Big[ \partial_\mu \Phi\big( [X_t^{s,\xi}], X_t^{s,\xi} \big)\Big( \eta + \widehat{\mathbb{E}}\big[ \partial_\mu X_t^{s,[\xi]}(\hat{\xi})\,\hat{\eta} \big] \Big) \Big], \tag{3.28}
\end{align}
for any $\xi, \eta \in L^2(\mathcal{F}_s)$. Note that the first term can be rewritten as
\[
\mathbb{E}\Big[ \partial_\mu \Phi\big( [X_t^{s,\xi}], X_t^{s,\xi} \big)\, \eta \Big] = \mathbb{E}\Big[ \mathbb{E}\big[ \partial_\mu \Phi\big( [X_t^{s,\xi}], X_t^{s,x,[\xi]} \big) \big]\Big|_{x=\xi}\, \eta \Big] \tag{3.29}
\]
and the second term can be rewritten by Fubini's theorem as

\[
\mathbb{E}\Big[ \partial_\mu \Phi\big( [X_t^{s,\xi}], X_t^{s,\xi} \big)\, \widehat{\mathbb{E}}\big[ \partial_\mu X_t^{s,[\xi]}(\hat{\xi})\,\hat{\eta} \big] \Big] = \widehat{\mathbb{E}}\Big[ \mathbb{E}\big[ \partial_\mu \Phi\big( [X_t^{s,\xi}], X_t^{s,\xi} \big)\, \partial_\mu X_t^{s,[\xi]}(\hat{\xi}) \big]\, \hat{\eta} \Big]. \tag{3.30}
\]

Consequently, by combining (3.29) and (3.30), equation (3.28) becomes

\[
D\widetilde{\mathcal{V}}(s,\xi)(\eta) = \mathbb{E}\Big[ \mathbb{E}\big[ \partial_\mu \Phi\big( [X_t^{s,\xi}], X_t^{s,x,[\xi]} \big) \big]\Big|_{x=\xi}\, \eta \Big] + \widehat{\mathbb{E}}\Big[ \mathbb{E}\big[ \partial_\mu \Phi\big( [X_t^{s,\xi}], X_t^{s,\xi} \big)\, \partial_\mu X_t^{s,[\xi]}(\hat{\xi}) \big]\, \hat{\eta} \Big],
\]
which implies that

\[
\partial_\mu \mathcal{V}(s,[\xi])(y) = \mathbb{E}\Big[ \partial_\mu \Phi\big( [X_t^{s,\xi}], X_t^{s,y,[\xi]} \big) + \partial_\mu \Phi\big( [X_t^{s,\xi}], X_t^{s,\xi} \big)\, \partial_\mu X_t^{s,[\xi]}(y) \Big], \qquad y \in \mathbb{R}.
\]
By our assumption, we know that $\partial_\mu \Phi$ satisfies (2.32) and (2.33), and the process $\partial_\mu X_t^{s,[\xi]}(v_1)$ satisfies (3.19) and (3.20). It follows that $\partial_\mu \mathcal{V}$ also satisfies (2.32) and (2.33), with the constant bound $C$ uniform in time. By iterating this procedure, we can show that, for any multi-index $(n,\beta)$ such that $|(n,\beta)| \le k$, $D^{(n,\beta)} \mathcal{V}(s,\mu)(v_1, \dots, v_n)$ can be computed explicitly as above and can be represented in terms of derivatives of the form $D^{(n',\beta')} X_t^{s,[\xi]}(v_1', \dots, v_{n'}')$ and $D^{(n'',\beta'')} \Phi(\mu)(v_1'', \dots, v_{n''}'')$, for some $n', n'' \in \mathbb{N} \cup \{0\}$, $\beta' \in (\mathbb{N} \cup \{0\})^{n'}$ and $\beta'' \in (\mathbb{N} \cup \{0\})^{n''}$, such that $|(n',\beta')| \le k$ and $|(n'',\beta'')| \le k$. The facts that $X^{s,x,[\xi]} \in \mathcal{M}_k(X^{s,x,[\xi]})$ and $\Phi \in \mathcal{M}_k$ also allow us to deduce that $D^{(n,\beta)} \mathcal{V}(s,\mu)(v_1, \dots, v_n)$ satisfies estimates (2.32) and (2.33), with the constant bound $C$ uniform in time. Finally, we know from Theorem 7.2 in [4] (which corresponds to Theorem 2.15 with $k=2$) that $\mathcal{V}(\cdot,\mu) \in C^1((0,t))$, for every $\mu \in \mathcal{P}_2(\mathbb{R})$. Therefore, we conclude that $\mathcal{V} \in \mathcal{M}_k$.
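The lift-and-chain-rule computation above can be illustrated on the simplest non-linear functional. For $\Phi(\mu) = \big(\int x\,\mu(dx)\big)^2$, the lift is $\widetilde{\Phi}(\theta) = (\mathbb{E}\theta)^2$ and the derivative in measure is $\partial_\mu \Phi(\mu, y) = 2\int x\,\mu(dx)$, constant in $y$. The toy check below (not part of the proof; sample sizes and names are my own) compares a finite-difference derivative of the lift against the representation $D\widetilde{\Phi}(\theta)(\eta) = \mathbb{E}\big[\partial_\mu \Phi([\theta],\theta)\,\eta\big]$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
theta = rng.normal(0.5, 1.0, N)   # samples of theta in L^2
eta = rng.normal(0.0, 1.0, N)     # perturbation direction

def lift_Phi(t):
    # Lift of Phi(mu) = (int x mu(dx))^2 to L^2: Phi_tilde(theta) = (E theta)^2.
    return np.mean(t) ** 2

h = 1e-5
fd = (lift_Phi(theta + h * eta) - lift_Phi(theta)) / h   # finite difference
# Representation D Phi_tilde(theta)(eta) = E[ d_mu Phi([theta], theta) * eta ]
# with d_mu Phi(mu, y) = 2 * mean(mu), independent of y.
rep = np.mean(2.0 * np.mean(theta) * eta)
print(fd, rep)   # the two values agree up to O(h)
```

Since the same empirical samples are used for both expressions, the only discrepancy is the $O(h)$ remainder of the finite difference, so the match is exact to many digits.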

References

[1] Viorel Barbu and Michael Röckner. From nonlinear Fokker-Planck equations to solutions of distribution dependent SDE. arXiv preprint arXiv:1808.10706, 2018.

[2] Oumaima Bencheikh and Benjamin Jourdain. Bias behaviour and antithetic sampling in mean-field particle approximations of SDEs nonlinear in the sense of McKean. arXiv preprint arXiv:1809.06838, 2018.

[3] Mireille Bossy, Jean-François Jabir, and Denis Talay. On conditional McKean Lagrangian stochastic models. Probability Theory and Related Fields, 151(1-2):319–351, 2011.

[4] Rainer Buckdahn, Juan Li, Shige Peng, and Catherine Rainer. Mean-field stochastic differential equations and associated PDEs. The Annals of Probability, 45(2):824–878, 2017.

[5] Pierre Cardaliaguet. Notes on mean field games. Technical report, 2010.

[6] Pierre Cardaliaguet, François Delarue, Jean-Michel Lasry, and Pierre-Louis Lions. The master equation and the convergence problem in mean field games. Number 201 in Annals of Mathematics Studies. Princeton University Press, 2018.

[7] René Carmona. Lectures on BSDEs, stochastic control, and stochastic differential games with financial applications, volume 1. SIAM, 2016.

[8] René Carmona and François Delarue. Probabilistic theory of mean field games with applications I: Mean Field FBSDEs, Control, and Games. Springer, 2017.

[9] Jean-François Chassagneux, Dan Crisan, and François Delarue. A probabilistic approach to classical solutions of the master equation for large population equilibria. arXiv preprint arXiv:1411.3009, 2014.

[10] Dan Crisan and Eamon McMurray. Smoothing properties of McKean–Vlasov SDEs. Probability Theory and Related Fields, pages 1–52, 2017.

[11] Paul-Eric Chaudru de Raynal. Strong well-posedness of McKean–Vlasov stochastic differential equation with Hölder drift. arXiv preprint arXiv:1512.08096, 2015.

[12] Paul-Eric Chaudru de Raynal and Noufel Frikha. Well-posedness for some non-linear diffusion processes and related PDE on the Wasserstein space. arXiv preprint arXiv:1811.06904, 2018.

[13] François Delarue, Daniel Lacker, and Kavita Ramanan. From the master equation to mean field game limit theory: A central limit theorem. arXiv preprint arXiv:1804.08542, 2018.

[14] Steffen Dereich, Michael Scheutzow, and Reik Schottstedt. Constructive quantization: Approximation by empirical measures. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, volume 49, pages 1183–1203. Institut Henri Poincaré, 2013.

[15] Nicolas Fournier and Arnaud Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3-4):707–738, 2015.

[16] Nicolas Fournier and Maxime Hauray. Propagation of chaos for the Landau equation with moderately soft potentials. The Annals of Probability, 44(6):3581–3660, 2016.

[17] Avner Friedman. Stochastic differential equations and applications. Courier Corporation, 2012.

[18] Jürgen Gärtner. On the McKean–Vlasov limit for interacting diffusions. Mathematische Nachrichten, 137(1):197–248, 1988.

[19] István Gyöngy and Nicolai Krylov. An accelerated splitting-up method for parabolic equations. SIAM Journal on Mathematical Analysis, 37(4):1070–1097, 2005.

[20] William Hammersley, David Šiška, and Lukasz Szpruch. McKean–Vlasov SDEs under measure dependent Lyapunov conditions. arXiv preprint arXiv:1802.03974, 2018.

[21] Pierre-Emmanuel Jabin and Zhenfu Wang. Quantitative estimates of propagation of chaos for stochastic systems with W −1,∞ kernels. Inventiones mathematicae, 214(1):523–591, 2018.

[22] Benjamin Jourdain, Sylvie Méléard, and Wojbor Woyczynski. Nonlinear SDEs driven by Lévy processes and related PDEs. arXiv preprint arXiv:0707.2723, 2007.

[23] Mark Kac. Foundations of kinetic theory. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, volume 3, pages 171–197. University of California Press Berkeley and Los Angeles, California, 1956.

[24] Vassili N Kolokoltsov. Nonlinear Markov processes and kinetic equations, volume 182. Cambridge University Press, 2010.

[25] N. V. Krylov. On Kolmogorov's equations for finite dimensional diffusions. In Stochastic PDE's and Kolmogorov Equations in Infinite Dimensions, pages 1–63. Springer, 1999.

[26] Daniel Lacker. On a strong form of propagation of chaos for McKean–Vlasov equations. arXiv preprint arXiv:1805.04476, 2018.

[27] Pierre-Louis Lions. Cours au Collège de France: Théorie des jeux à champs moyens, 2014.

[28] H. P. McKean, Jr. An exponential formula for solving Boltzmann's equation for a Maxwellian gas. Journal of Combinatorial Theory, 2(3):358–382, 1967.

[29] Sylvie Méléard. Asymptotic behaviour of some interacting particle systems; McKean–Vlasov and Boltzmann models. In Probabilistic models for nonlinear partial differential equations, pages 42–95. Springer, 1996.

[30] Stéphane Mischler and Clément Mouhot. Kac's program in kinetic theory. Inventiones mathematicae, 193(1):1–147, 2013.

[31] Stéphane Mischler, Clément Mouhot, and Bernt Wennberg. A new approach to quantitative propagation of chaos for drift, diffusion and jump processes. Probability Theory and Related Fields, 161(1-2):1–59, 2015.

[32] Yuliya S. Mishura and Alexander Yu. Veretennikov. Existence and uniqueness theorems for solutions of McKean–Vlasov stochastic equations. arXiv preprint arXiv:1603.02212, 2016.

[33] Lewis Fry Richardson. IX. The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stresses in a masonry dam. Phil. Trans. R. Soc. Lond. A, 210(459-470):307–357, 1911.

[34] Alain-Sol Sznitman. Topics in propagation of chaos. Ecole d’Eté de Probabilités de Saint-Flour XIX, 1989, pages 165–251, 1991.

[35] Lukasz Szpruch and Alvin Tse. Antithetic multi-level Monte-Carlo particle approximation of McKean–Vlasov SDEs. Article in preparation.

[36] Denis Talay and Luciano Tubaro. Expansion of the global error for numerical schemes solving stochastic differential equations. Stochastic Analysis and Applications, 8(4):483–509, 1990.

[37] Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
