DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS SERIES B, Volume 24, Number 10, October 2019, pp. 5481–5502. doi:10.3934/dcdsb.2019067

SIMULATION OF A SIMPLE PARTICLE SYSTEM INTERACTING THROUGH HITTING TIMES

Vadim Kaushansky* and Christoph Reisinger
University of Oxford, Andrew Wiles Building, Woodstock Road, Oxford OX2 6GG, UK

(Communicated by Peter Kloeden)

Abstract. We develop an Euler-type particle method for the simulation of a McKean–Vlasov equation arising from a mean-field model with positive feedback from hitting a boundary. Under assumptions on the parameters which ensure differentiable solutions, we establish convergence of order 1/2 in the time step. Moreover, we give a modification of the scheme using Brownian bridges and local mesh refinement, which improves the order to 1. We confirm our theoretical results with numerical tests and empirically investigate cases with blow-up.

1. Introduction. There has been a recent surge in interest in mean-field problems, both from a theoretical and an applications perspective. We focus on models where the interaction derives from feedback on the system when a certain threshold is hit. Application areas include electrical surges in networks of neurons and systemic risk in financial markets. As analytic solutions are generally not known, numerical methods are indispensable, but still lacking. We therefore propose and analyse numerical schemes for the simulation of a specific McKean–Vlasov equation which exhibits key features of these models, namely

Y_t = Y_0 + W_t − αL_t,  t ∈ [0, T],  (1)

L_t = P(τ ≤ t),  t ∈ [0, T],  (2)

τ = inf{t ∈ [0, T] : Y_t ≤ 0},  (3)

where α, T ∈ R_+, W is a standard Brownian motion on a probability space (Ω, ℱ, (ℱ_t)_{t≥0}, P), on which is also given an R_+-valued random variable Y_0 independent of W. The nonlinearity arises from the dependence of L_t in (1) on the law of Y. More specifically, if t ↦ L_t has a derivative p_τ, which is then the density of the hitting time τ of zero,

dY_t = dW_t − α p_τ(t) dt,  t ∈ (0, T),

2010 Mathematics Subject Classification. Primary: 65C35; Secondary: 60K35.
Key words and phrases. McKean–Vlasov equations, particle method, timestepping scheme, Brownian bridge, mean-field problems.
The first author gratefully acknowledges support from the Economic and Social Research Council and Bank of America Merrill Lynch.
* Corresponding author: Vadim Kaushansky.

so that the drift depends on the law of the path of Y. Theoretical properties of (1)–(3) have been studied in [12], who prove the existence of a differentiable solution (L_t)_{0<t<t*} up to an "explosion time" t*. Conversely, they show that L cannot be continuous for all t for α above a threshold determined by the law of Y_0. Such systemic events where discontinuities occur are also referred to as "blow-ups" in the literature. The question of a constructive solution, however, remained open.

Examples of a numerical solution computed with Algorithm 1, introduced in Section 2, are shown in Figure 1. The left plot shows the formation of a discontinuity in the loss function t ↦ L_t for increasing α, with Y_0 ∼ Gamma(1.5, 0.5). The density of Y_T for T before and after the shock is displayed in the right panel.

Figure 1. (a) L_t for different α near the jump; (b) distribution of Y_T for Y_T > 0 before and after the jump, fitted by kernel density estimation with a normal kernel for N = 10^7.

A similar model has been studied in [14], where the authors consider log(1 − L_t) instead of −L_t in (1). Our numerical scheme can be applied in principle to this problem, but we concentrate the analysis on (1)–(3) in this paper.

One motivation for studying these equations comes from finance, in particular, systemic risk. A large interconnected banking network can be approximated by a particle system with interactions by which the default of one firm, modeled as the hitting of a lower default threshold of its value, causes a downward move in the firm value of others. More details can be found in [12] and [14]. This model can also be viewed as the large pool limit of a structural default model for a pool of firms where interconnectivity is caused by mutual liabilities, such as in [13].

An earlier version of this problem is found in neuroscience, where a large network of electrically coupled neurons can be described by McKean–Vlasov type equations ([5, 6, 8, 7]). If a neuron's potential reaches some fixed threshold, it jumps to a higher potential level and sends a signal to other neurons. This feedback leads to the following equations:

X_t = X_0 + ∫_0^t b(X_s) ds + αE[M_t] + W_t − M_t,  (4)

M_t = Σ_{k≥1} 1_{[0,t]}(τ_k),  τ_k = inf{t > τ_{k−1} : X_{t−} ≥ 1},  (5)

where X_0 < 1 a.s. The similarity to (1)–(3) is seen by noticing that L_t = E[1_{[0,t]}(τ)] and that M_t in (5) is constant between hitting times. However, while in (4) an upper boundary is hit and after that the value resets to zero, in our model we are interested in hitting the zero boundary (from above), and after hitting the particle's value remains zero.

While McKean–Vlasov equations are an active area of current research, to our knowledge, there is only a fairly small number of papers where their simulation is studied rigorously, and none of these encompass the models above.
Early works include [4], which proves convergence (with order 1/2 in the timestep and inverse number of particles) of a particle approximation to the distribution function for the measure ν_t of X_t in the classical McKean–Vlasov equation

X_t = X_0 + ∫_0^t ∫_R β(X_s, u) ν_s(du) ds + ∫_0^t ∫_R α(X_s, u) ν_s(du) dW_s  (6)

with sufficient regularity. The proven rate in the timestep is improved to 1 in [ ]. More recently, multilevel simulation algorithms have been proposed and analysed: [15] considers the special case of an SDE whose coefficients at time t depend on X_t and the expectation of a function of X_t; [16] study a method based on fixed point iteration for the general case (6). An alternative variance reduction technique by importance sampling is given in [9].

The system (1)–(3) above does not fall into the setting of (6) due to the extra path-dependence of the coefficients through the hitting time distribution. In this paper, we therefore propose and analyse a particle scheme for (1)–(3) with an explicit timestepping scheme for the nonlinear term. We simulate N exchangeable particles at discrete time points with distance h, whereby at each time point t we use an estimator of L_{t−h} from the previous particle locations to approximate (3). We prove the convergence of the numerical scheme up to some time T as the time step h goes to zero and the number of particles goes to infinity. We also prove that the scheme can be extended up to the explosion time under certain conditions on the model parameters. The order in h for this standard estimator is 1/2.

Next, we use Brownian bridges to better approximate the hitting probability, similar to barrier option pricing ([11]). In this case, the convergence order improves to (1 + β)/2, where β ∈ (0, 1] is the Hölder exponent of the density of the initial value Y_0 (e.g., 0.5 in the example from Figure 8 below). The order can be improved to 1 for all β ∈ (0, 1] by non-uniform time-stepping.
A main contribution of the paper is the first provably convergent scheme for equations of the type (1)–(3). The analysis uses a direct recursion of the error and regularity results proven in [12]. This has the advantage that sharp convergence orders – i.e., consistent with the numerical tests – can be given, but also means that it seems difficult to apply the analysis directly to variations of the problem where such results are not available. Nonetheless, the method itself is natural and applicable in principle to other settings such as those outlined above.

The rest of the paper is organized as follows. In Section 2 we list the running assumptions and state the main results of the paper; in Sections 3.1 and 3.2 we prove the uniform convergence of the discretized process; in Section 3.3, we show the convergence of Monte Carlo particle estimators with an increasing number of samples; in Section 3.4 we prove the convergence order for the scheme with Brownian bridges; in Section 4 we give numerical tests of the schemes; finally, in Section 5, we conclude.

2. Assumptions and main results. We begin by listing the assumptions. The first one, Hölder continuity at 0 of the initial density, is key for the regularity of the solution. The Hölder exponent will also limit the rate of convergence of the discrete time schemes.

Assumption 1. We assume that Y_0 has a density f_{Y_0} supported on R_+ such that

f_{Y_0}(x) ≤ Bx^β,  x ≥ 0,  (7)

for some β ∈ (0, 1]. Under Assumption 1, we can refer to Theorem 1.8 in [12] for the existence of a unique, differentiable solution t ↦ L_t for (1)–(3) up to time

t* := sup{t > 0 : ||L||_{H^1(0,t)} < ∞} ∈ [0, ∞],

and a corresponding B̂ such that for every t < t*,

L'_t ≤ B̂ t^{−(1−β)/2}  a.e.  (8)

This estimate admits a singularity of the rate of losses at time 0; however, what is actually observed in numerical studies (see Figure 1, left) is that the loss rate is bounded initially but then has a sharp peak for small β (and then especially for large α). Integrating (8), we have for future reference a bound on L_t,

L_t ≤ B̃ t^{(1+β)/2},  (9)

where B̃ = 2B̂/(1 + β).

The following assumption will be used to control the propagation of the discretisation error, by bounding the density (especially at 0) of the running minimum of Y and its approximations.

Assumption 2. We assume that T < min(T*, t*), where T* is defined by

αB [√(2T*/π) + αB̃(T*)^{(1+β)/2}]^β = 1,  (10)

with B and B̃ the smallest constants such that (7) and (9) hold for given β.

In the following, we assume that Assumptions 1 and 2 hold. Consider a uniform time mesh 0 = t_0 < t_1 < … < t_n = T, where t_i − t_{i−1} = h, and a discretized process, for 1 ≤ i ≤ n,

Ỹ_{t_i} = Y_0 + W_{t_i} − αL̃_{t_i},  (11)

L̃_{t_i} = P(τ̃ < t_i),  (12)

τ̃ = min{t_j, 0 ≤ j ≤ n : Ỹ_{t_j} ≤ 0}.  (13)

We extend L̃ to [0, T] by setting L̃_s = L̃_{t_{i−1}} for t_{i−1} < s < t_i. The first theorem, proven in Section 3.1, shows that L̃_t converges uniformly to L_t.

Theorem 1. Consider L̃_{t_i} from (11)–(13) and L_t from (1)–(3). Then, for any δ > 0, there exists C > 0 independent of h such that

max_{i≤n} |L̃_{t_i} − L_{t_i}| ≤ Ch^{1/2−δ}.  (14)

We now propose a particle simulation scheme for (11)–(13) in Algorithm 1.

Algorithm 1 Discrete time Monte Carlo scheme for simulation of the loss process
Require: N — number of Monte Carlo paths
Require: n — number of time steps: 0 < t_1 < t_2 < … < t_n
1: Draw N samples of Y_0 (from the initial distribution) and W (a Brownian path)
2: Define L̂_0 = 0
3: for i = 1 : n do
4:   Estimate L̃_{t_i} by L̂^N_{t_i} = (1/N) Σ_{k=1}^N 1{min_{j<i} Ŷ^{(k)}_{t_j} ≤ 0}
5:   for k = 1 : N do
6:     Update Ŷ^{(k)}_{t_i} = Y^{(k)}_0 + W^{(k)}_{t_i} − αL̂^N_{t_i}
7:   end for
8: end for
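Algorithm 1 vectorizes naturally over particles. The following Python sketch (the helper name `simulate_loss` and the `y0_sampler` argument are ours, not the paper's) stores, for each particle, only the Brownian value at the current mesh point and a flag recording whether the discrete path has reached 0:

```python
import numpy as np

def simulate_loss(alpha, T, n, N, y0_sampler, rng):
    """Minimal sketch of Algorithm 1: explicit timestepping particle scheme.

    Returns the estimated loss path (L-hat at t_0, ..., t_n) on a uniform mesh.
    """
    h = T / n
    y0 = y0_sampler(N, rng)                  # N samples of Y_0 > 0
    W = np.zeros(N)                          # Brownian paths at the current time
    alive = np.ones(N, dtype=bool)           # min_{j<i} Y_{t_j} > 0 so far
    L = np.zeros(n + 1)
    for i in range(1, n + 1):
        # step 4: loss estimate from the particle positions at t_0, ..., t_{i-1}
        L[i] = 1.0 - alive.mean()
        # step 6: advance the Brownian paths and apply the feedback -alpha * L
        W += np.sqrt(h) * rng.standard_normal(N)
        Y = y0 + W - alpha * L[i]
        alive &= (Y > 0)                     # a particle at or below 0 stays absorbed
    return L

rng = np.random.default_rng(0)
# Y_0 with 1/Y_0 ~ Exp(1), as in the first numerical example of Section 4
L = simulate_loss(0.8, 2.0, 100, 10_000, lambda N, r: 1.0 / r.exponential(1.0, N), rng)
```

Since a particle's flag is never reset once it crosses 0, the estimator inside the loop is exactly the discrete running-minimum indicator of step 4.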

In Section 3.3, we prove convergence in probability of Algorithm 1 as N → ∞.

Theorem 2. For all i ≤ n,

L̂_{t_i} → L̃_{t_i} in probability as N → ∞.  (15)

Next, we improve our scheme by using a Brownian bridge strategy to estimate the hitting probabilities. In order to do this, we consider the process

Y̆_t = Y_0 + W_t − αL̆_t,  t ∈ [t_i, t_{i+1}),  (16)

L̆_t = P(τ̆ < t_i),  t ∈ [t_i, t_{i+1}),  (17)

τ̆ = inf{0 ≤ s ≤ T : Y̆_s ≤ 0}.  (18)

Then, for each Brownian path (W^{(k)}_t)_{t≥0}, we compute Ȳ^{(k)}_t = Y^{(k)}_0 + W^{(k)}_t − αL̄^N_t in (t_i, t_{i+1}), where L̄^N_t is an N-sample estimator of L̆_{t_i} given below. Hence, using Brownian bridges, we compute

p^{(k)}_{t_i} = P( inf_{s<t_i} Ȳ^{(k)}_s > 0 | Ȳ^{(k)}_0, …, Ȳ^{(k)}_{t_i} )
  = ∏_{j=1}^{i} P( inf_{s∈[t_{j−1},t_j)} Ȳ^{(k)}_s > 0 | Ȳ^{(k)}_{t_{j−1}}, Ȳ^{(k)}_{t_j} )
  = ∏_{j=1}^{i} ( 1 − exp( −2(Ȳ^{(k)}_{t_{j−1}} ∨ 0)(Ȳ^{(k)}_{t_j} ∨ 0)/h ) ).

Thus, a natural choice for L̄^N_{t_i} is

L̄^N_{t_i} = (1/N) Σ_{k=1}^N (1 − p^{(k)}_{t_i}).  (19)

As a result, the new algorithm with the Brownian bridge modification is Algorithm 2. The convergence rate for (17) is given as follows and proven in Section 3.4.

Algorithm 2 Discrete time Monte Carlo scheme with Brownian bridge
Require: N — number of Monte Carlo paths
Require: n — number of time steps: 0 < t_1 < t_2 < … < t_n
1: Draw N samples of Y_0 (from the initial distribution) and W (a Brownian path)
2: for i = 1 : n do
3:   Estimate L̄_{t_i} using (19)
4:   for k = 1 : N do
5:     Update Ȳ^{(k)}_{t_i} = Y^{(k)}_0 + W^{(k)}_{t_i} − αL̄^N_{t_i}
6:   end for
7: end for
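A sketch of Algorithm 2 in the same style (helper names are again ours): each particle now carries the running product of one-step Brownian-bridge survival probabilities, and the estimator (19) is one minus the sample mean of that product. We apply the bridge factor for (t_{i−1}, t_i] after the update of Ȳ_{t_i}, i.e. the loss estimate at t_i uses bridges up to t_{i−1}; this explicit-in-time reading of step 3 is our assumption, not spelled out in the pseudocode:

```python
import numpy as np

def simulate_loss_bridge(alpha, T, n, N, y0_sampler, rng):
    """Minimal sketch of Algorithm 2: particle scheme with Brownian bridges."""
    h = T / n
    y0 = y0_sampler(N, rng)
    W = np.zeros(N)
    Y_prev = y0.copy()                 # particle values at the previous mesh point
    surv = np.ones(N)                  # running product of bridge survival probabilities
    L = np.zeros(n + 1)
    for i in range(1, n + 1):
        # estimator (19): loss = 1 - mean survival probability
        L[i] = 1.0 - surv.mean()
        W += np.sqrt(h) * rng.standard_normal(N)
        Y = y0 + W - alpha * L[i]
        # bridge over (t_{i-1}, t_i]: survival factor 1 - exp(-2 (a v 0)(b v 0) / h)
        surv *= 1.0 - np.exp(-2.0 * np.maximum(Y_prev, 0.0) * np.maximum(Y, 0.0) / h)
        Y_prev = Y
    return L

rng = np.random.default_rng(1)
L_bb = simulate_loss_bridge(0.8, 2.0, 100, 10_000,
                            lambda N, r: 1.0 / r.exponential(1.0, N), rng)
```

A particle whose value drops to 0 or below contributes a bridge factor of 0, so its survival probability is permanently 0, consistent with absorption at the boundary.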

Theorem 3. Consider L̆_t from (16)–(18), L_t from (1)–(3), and β ∈ (0, 1] from Assumption 1. Then, there exists C > 0 independent of h such that

max_{i≤n} |L̆_{t_i} − L_{t_i}| ≤ Ch^{(1+β)/2}.  (20)

We will later give a result with variable time steps which achieves rate 1 for all β.

3. Convergence results.

3.1. Convergence of the timestepping scheme. In this section we prove Theorem 1. Then, in Section 3.2, under a modification of Assumption 2, we formulate and prove an improvement of this theorem which extends the applicable time interval.

The proof is based on induction on the error bound over the timesteps, which requires an error estimate of the hitting probability after discretisation (Lemmas 1 and 2), a sort of consistency, plus a control of the resulting misspecification of the barrier through a bound on the density of the running minimum (Lemma 3), a kind of stability.

As we have crude estimates on the densities, using only Assumption 1 but no sharper bound on, and regularity of, f_{Y_0}, or any regularity of the distribution of inf_{s≤t}(W_s − αL_s) or its approximations, the time T* until which the numerical scheme is shown to converge will by no means be sharp. Indeed, in our numerical tests (see Section 4) we did not encounter any difficulties for any t, even t > t*.

We first formulate the auxiliary results which we will use to prove Theorems 1, 3, and 4. The proofs are given in Appendix 6. First, we modify Proposition 1 from [2], where an analogous result is shown for standard Brownian motion, without the presence of L and L̃ (i.e., α = 0).

Lemma 1. Define Y*_t = Y_0 + W_{h⌊t/h⌋} − αL_{h⌊t/h⌋} and Ỹ*_t = Y_0 + W_{h⌊t/h⌋} − αL̃_{h⌊t/h⌋} for t < t*. Then, as h → 0,

(1/√h) sup_{s≤t} (Y_s − Y*_s) →_d √(2 log(t/h)),  (21)

and

(1/√h) sup_{s≤t} (Ỹ_s − Ỹ*_s) →_d √(2 log(t/h)).  (22)

The next lemma deduces the convergence rate of the hitting probabilities on the mesh.

Lemma 2. Consider the processes Y and Ỹ. Then, for any δ > 0 there exist γ, γ̃ > 0, independent of h and i, such that

0 ≤ P(min_{j<i} Y_{t_j} > 0) − P(inf_{s<t_i} Y_s > 0) ≤ γh^{1/2−δ}  (23)

and

0 ≤ P(min_{j<i} Ỹ_{t_j} > 0) − P(inf_{s<t_i} Ỹ_s > 0) ≤ γ̃h^{1/2−δ},  (24)

where t_i < t*.

Finally, we bound the probability that the running minimum is close to the boundary.

Lemma 3. Consider Z_i = inf_{s≤t_i} Y_s, Z̄_i = min_{j<i} Y_{t_j}, and Z̃_i = min_{j<i} Ỹ_{t_j} for some i ≤ n and t_i < t*. Then Z_i, Z̄_i, and Z̃_i each have a density, denoted φ_i, φ̄_i, and φ̃_i, respectively, with

max(φ_i(z), φ̄_i(z), φ̃_i(z)) ≤ B [(z ∨ 0) + √(2t_i/π) + αB̃t_i^{(1+β)/2}]^β,  (25)

where β, B are from (7) and B̃ is from (9).

We are now in a position to prove Theorem 1.

Proof of Theorem 1. We shall proceed by induction. For t_0 = 0, we have L_0 = L̃_0; for t_1, we have min_{j<1} Ỹ_{t_j} = Y_0 and P(Y_0 > 0) = 1, hence

|L̃_{t_1} − L_{t_1}| ≤ |1 − P(inf_{s<t_1} Y_s > 0)|,

which can be estimated using Lemma 2, (23).

For i > 1, assume now that we have shown L̃_{t_j} = L_{t_j} − C̃_j h^{1/2−δ} for j < i, where C̃_j ≥ 0 as L̃_{t_j} ≤ L_{t_j}. We split the error into two contributions,

|L̃_{t_i} − L_{t_i}| = |P(min_{j<i} Ỹ_{t_j} > 0) − P(inf_{s<t_i} Y_s > 0)|
  ≤ |P(min_{j<i} Ỹ_{t_j} > 0) − P(min_{j<i} Y_{t_j} > 0)| + |P(min_{j<i} Y_{t_j} > 0) − P(inf_{s<t_i} Y_s > 0)|.

We can use Lemma 2, (23), for the second term. For the first term,

P(min_{j<i} Ỹ_{t_j} > 0) − P(min_{j<i} Y_{t_j} > 0)
  = P(min_{j<i} (Y_{t_j} + αC̃_j h^{1/2−δ}) > 0) − P(min_{j<i} Y_{t_j} > 0)
  ≤ P(min_{j<i} Y_{t_j} > −α max_{j<i} C̃_j h^{1/2−δ}) − P(min_{j<i} Y_{t_j} > 0)
  = F̄_i(0) − F̄_i(−α max_{j<i} C̃_j h^{1/2−δ})
  ≤ α max_{j<i} C̃_j h^{1/2−δ} sup_{θ∈[0,1]} φ̄_i(−θα max_{j<i} C̃_j h^{1/2−δ}),

where F̄_i(x) and φ̄_i(x) are the CDF and pdf of min_{j<i} Y_{t_j}.

Then, using Lemma 3, we have

φ̄_i(−θα max_{j<i} C̃_j h^{1/2−δ}) ≤ B [√(2t_i/π) + αB̃t_i^{(1+β)/2}]^β,

as −θα max_{j<i} C̃_j h^{1/2−δ} < 0. As a result, we have the following inequality for C̃_i,

C̃_i ≤ α max_{j<i} C̃_j B [√(2t_i/π) + αB̃t_i^{(1+β)/2}]^β + γ,

hence C̃_i is bounded independently of i and h by Assumption 2. By induction we get (14).

3.2. Extension of the result in time. The following result extends the applicability of Theorem 1 up to the explosion time t* under certain conditions on the parameters, as specified precisely in (27) below. We shall adapt Theorem 1 in [3], which states: For a Lipschitz function f with Lipschitz constant K, and g such that sup_{s≤t} |f(s) − g(s)| ≤ ε,

|P(∃s ∈ [0, t] : W_s < f(s)) − P(∃s ∈ [0, t] : W_s < g(s))| ≤ (2.5K + 2/√t) ε.

By Remark 2 in [3], this result can be improved for a non-decreasing function g. Indeed, retracing the steps in their proof and using monotonicity, one easily finds the slightly better bound

|P(∃s ∈ [0, t] : W_s < f(s)) − P(∃s ∈ [0, t] : W_s < g(s))| ≤ 2(K + 1/√t) ε.

In our case, we cannot directly apply the result with f(s) = −Y_0 + αL_s and g(s) = −Y_0 + αL̃_s, as f is not guaranteed to be Lipschitz at s = 0. But, along the lines of the proof of Theorem 1 in [3], the above result can be modified as follows:

Lemma 4. For a non-decreasing function g which is Lipschitz with constant K on [T*, T], and a function f such that f(t) = g(t) for t ≤ T* and sup_{s≤T} |f(s) − g(s)| ≤ ε,

|P(∃s ∈ [0, T] : W_s < f(s)) − P(∃s ∈ [0, T] : W_s < g(s))| ≤ 2(K + 1/√T) ε.  (26)

Theorem 4. Assume T* from (10) satisfies

2α (B̂(T*)^{−(1−β)/2} + 1/√(T*)) < 1.  (27)

Then Theorem 1 holds for any T < t* (and not only up to T* as per Assumption 2).

Proof. We first split the error again:

|L̃_{t_i} − L_{t_i}| = |P(min_{j<i} Ỹ_{t_j} > 0) − P(inf_{s≤t_i} Y_s > 0)|
  ≤ |P(inf_{s≤t_i} Ỹ_s > 0) − P(inf_{s≤t_i} Y_s > 0)| + |P(min_{j<i} Ỹ_{t_j} > 0) − P(inf_{s≤t_i} Ỹ_s > 0)|.  (28)

The second term can be estimated from Lemma 2, (24). Again, we shall then proceed by induction. We have already shown that |L̃_{t_i} − L_{t_i}| ≤ C_i h^{1/2−δ} for t_i ≤ T* according to Theorem 1.

Consider T* < t_i ≤ T. Assume we have shown that |L̃_{t_j} − L_{t_j}| ≤ C_j h^{1/2−δ} for j < i. We want to derive C_i such that |L̃_{t_i} − L_{t_i}| ≤ C_i h^{1/2−δ}, where all C_i are bounded independently of h. First, consider an intermediate point s ∈ (t_{i−1}, t_i); then

L_s − L̃_s ≤ L_{t_{i−1}} + K(s − t_{i−1}) − L̃_{t_{i−1}} ≤ L_{t_{i−1}} − L̃_{t_{i−1}} + Kh,

with K the Lipschitz constant of L, and thus

sup_{s≤t_i} |L_s − L̃_s| ≤ max( max_{j<i} |L_{t_j} − L̃_{t_j}| + Kh, |L_{t_i} − L̃_{t_i}| ).  (29)

Now we show that |L_{t_i} − L̃_{t_i}| ≤ C_{t_i} h^{1/2−δ}. Consider

l_t = L_t for t ≤ T*,  l_t = L̃_t for t > T*,

and Y^l_t = Y_0 + W_t − αl_t. Then,

|P(inf_{s≤t_i} Ỹ_s > 0) − P(inf_{s≤t_i} Y_s > 0)|
  ≤ |P(inf_{s≤t_i} Ỹ_s > 0) − P(inf_{s≤t_i} Y^l_s > 0)| + |P(inf_{s≤t_i} Y^l_s > 0) − P(inf_{s≤t_i} Y_s > 0)|.  (30)

To estimate the first term, we can write

|P(inf_{s≤t_i} Ỹ_s > 0) − P(inf_{s≤t_i} Y^l_s > 0)| = E[1{∃t ∈ [0, t_i] : Y^l_t ≤ 0, ∀s ∈ [0, t_i] : Ỹ_s > 0}]
  = E[1{∃t ∈ [0, T*] : Y^l_t ≤ 0, ∀s ∈ [0, t_i] : Ỹ_s > 0}]
  ≤ E[1{∃t ∈ [0, T*] : Y^l_t ≤ 0, ∀s ∈ [0, T*] : Ỹ_s > 0}]
  = |P(inf_{s≤T*} Ỹ_s > 0) − P(inf_{s≤T*} Y^l_s > 0)|,

where we have used in the second line that, since l_t = L̃_t for t > T*, hitting after T* does not affect the difference. For t ≤ T*, l_t = L_t. Then, using Theorem 1,

|P(inf_{s≤t_i} Ỹ_s > 0) − P(inf_{s≤t_i} Y^l_s > 0)| ≤ |P(inf_{s≤T*} Ỹ_s > 0) − P(inf_{s≤T*} Y_s > 0)| ≤ C̄^{T*} h^{1/2−δ}.

For the second term in (30), L_t is Lipschitz on [T*, T] with K = B̂(T*)^{−(1−β)/2} and we can apply (26). Thus,

|P(inf_{s≤t_i} Ỹ_s > 0) − P(inf_{s≤t_i} Y_s > 0)| ≤ 2α(K + 1/√t_i) sup_{s≤t_i} |L_s − L̃_s| + C̄^{T*} h^{1/2−δ}
  ≤ 2α(K + 1/√t_i) max( max_{j<i} |L_{t_j} − L̃_{t_j}| + Kh, |L_{t_i} − L̃_{t_i}| ) + C̄^{T*} h^{1/2−δ},  (31)

using (29). Moreover, by (24) and (31), (28) can be written as

|L_{t_i} − L̃_{t_i}| ≤ 2α(K + 1/√t_i) max( max_{j<i} |L_{t_j} − L̃_{t_j}| + Kh, |L_{t_i} − L̃_{t_i}| ) + γ̄h^{1/2−δ},

where γ̄ = γ̃ + C̄^{T*}. Taking into account (27), we have

|L_{t_i} − L̃_{t_i}| ≤ max( (2α(K + 1/√t_i) max_{j<i} C_j + γ̄ + ε̃) h^{1/2−δ},  γ̄/(1 − 2α(K + 1/√t_i)) · h^{1/2−δ} ),

where ε̃ = 2α(K + 1/√(T*)) Kh^{1/2+δ}. As a result, we have the following inequality for C_i,

C_i ≤ max( 2α(K + 1/√t_i) max_{j<i} C_j + γ̄ + ε̃,  γ̄/(1 − 2α(K + 1/√t_i)) ).

Since the Lipschitz constant K ≤ B̂(T*)^{−(1−β)/2}, using (27), 2α(K + 1/√t_i) < 1. Hence, C_i is bounded independently of h, and by induction we get the result.

3.3. Monte Carlo simulation of the discretized process. In this section, we prove the convergence in probability of

L̂^N_{t_i} = (1/N) Σ_{k=1}^N 1{min_{j<i} Ŷ^{(k)}_{t_j} ≤ 0}

in Algorithm 1 to L̃_{t_i} as N → ∞. We note that we cannot directly apply the law of large numbers, as the summands are dependent through L̂^N_{t_j}, j < i. However, we see below that the dependence diminishes (i.e., the covariance goes to zero) as N → ∞, which easily gives convergence, albeit without a central limit theorem-type error estimate or a rate for the variance. First, we formulate an auxiliary lemma.

Lemma 5. Consider i ≤ n. Assume that for all j < i,

L̂^N_{t_j} → L̃_{t_j} in probability.  (32)

Then,

E[L̂^N_{t_i}] → L̃_{t_i} as N → ∞,  (33)

V[L̂^N_{t_i}] → 0 as N → ∞.  (34)

The proof is given in Appendix 7. Now we can deduce the convergence instantly.

Proof of Theorem 2. The proof is immediate by induction. The statement is true for i = 0. Now take i ≥ 1. By Lemma 5, there exists N* such that for all N > N*,

|E[L̂^N_{t_i}] − L̃_{t_i}| ≤ ε/2.

Thus, by Chebyshev's inequality, we have

P(|L̂^N_{t_i} − L̃_{t_i}| > ε) ≤ P(|L̂^N_{t_i} − E[L̂^N_{t_i}]| > ε/2) ≤ 4V[L̂^N_{t_i}]/ε².

Using Lemma 5 again, we have that V[L̂^N_{t_i}] → 0 as N → ∞. Hence,

L̂^N_{t_i} → L̃_{t_i} in probability as N → ∞,

for each i, and by induction we have proved the theorem.

3.4. Brownian bridge convergence improvement. In this section, we prove Theorem 3, which ascertains the uniform convergence (in t) of L̆ to L at the improved rate.

Proof of Theorem 3. We shall proceed by induction. For i = 0, we have L̆_0 = L_0 = 0. Assume we have shown that |L̆_{t_j} − L_{t_j}| ≤ C_j h^{(1+β)/2} for all j < i with some C_j > 0, and we want to estimate |L̆_{t_i} − L_{t_i}|. First, for j > 0 we have

sup_{t_j≤s<t_{j+1}} |L̆_s − L_s| ≤ |L̆_{t_j} − L_{t_j}| + B̂h^{(1+β)/2} ≤ (C_j + B̂)h^{(1+β)/2},  (35)

since L'_ζ ≤ B̂ζ^{−(1−β)/2}, while for j = 0, sup_{0≤s<t_1} |L̆_s − L_s| ≤ L_{t_1} ≤ B̃h^{(1+β)/2}. Now consider

|L̆_{t_i} − L_{t_i}| = |P(inf_{s≤t_i} Y̆_s > 0) − P(inf_{s≤t_i} Y_s > 0)|
  = |P(inf_{s≤t_i} (Y_s + α(L_s − L̆_s)) > 0) − P(inf_{s≤t_i} Y_s > 0)|
  ≤ |P(inf_{s≤t_i} Y_s > −α sup_{s<t_i} |L_s − L̆_s|) − P(inf_{s≤t_i} Y_s > 0)|
  ≤ P(inf_{s≤t_i} Y_s > −α max_{j<i} (C_j + B̂)h^{(1+β)/2}) − P(inf_{s≤t_i} Y_s > 0)
  ≤ α max_{j<i} (C_j + B̂) sup_{θ∈[0,1]} φ_i(−θα max_{j<i} (C_j + B̂)h^{(1+β)/2}) h^{(1+β)/2},

where φ_i(x) is the density of inf_{s<t_i} Y_s. Then, using Lemma 3, we have

C_i ≤ αB max_{j<i} (C_j + B̂) [√(2t_i/π) + αB̃t_i^{(1+β)/2}]^β ≤ γ Σ_{k=0}^{i} (αB)^k ∏_{j=1}^{k} [√(2t_j/π) + αB̃t_j^{(1+β)/2}]^β,

where γ = αBB̂ [√(2T/π) + αB̃T^{(1+β)/2}]^β. Thus, C_i is bounded independently of h and i by (10). By induction we get (20).

The proof of Theorem 3 indicates that the order is limited by the behaviour of L for small t. The next result shows that a locally refined time mesh achieves convergence order 1 for all β.

Corollary 1. Consider a non-uniform time mesh t_i = (ih)^{2/(1+β)} for 0 ≤ i ≤ n with h = T^{(1+β)/2}/n. Then, there exists C_1 > 0, independent of h, such that

max_{i≤n} |L̆_{t_i} − L_{t_i}| ≤ C_1 h.
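The mesh of Corollary 1 is straightforward to construct; a minimal sketch (the helper name `refined_mesh` is ours), with the normalization h = T^{(1+β)/2}/n so that t_0 = 0 and t_n = T:

```python
import numpy as np

def refined_mesh(T, n, beta):
    """Non-uniform mesh t_i = (i*h)^{2/(1+beta)} of Corollary 1, finer near t = 0."""
    h = T ** ((1.0 + beta) / 2.0) / n
    i = np.arange(n + 1)
    return (i * h) ** (2.0 / (1.0 + beta))

t = refined_mesh(T=2.0, n=8, beta=0.5)
```

For β = 1 the mesh is uniform; for β < 1 the step sizes grow with i, concentrating points near t = 0 where L' may be singular.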

Proof. We shall proceed by induction as in the proof of Theorem 3 above. We now have the time step

t_{j+1} − t_j = ((j+1)h)^{2/(1+β)} − (jh)^{2/(1+β)} = (2/(1+β)) h^{2/(1+β)} (j+ξ)^{(1−β)/(1+β)},

for some ξ ∈ (0, 1), and, for j > 0,

L'_ζ ≤ B̂t_j^{−(1−β)/2} = B̂(jh)^{−(1−β)/(1+β)}.

Hence, for j > 0,

sup_{t_j≤s<t_{j+1}} |L̆_s − L_s| ≤ |L̆_{t_j} − L_{t_j}| + L'_ζ(t_{j+1} − t_j) ≤ |L̆_{t_j} − L_{t_j}| + B̄h,  (36)

for some B̄ > 0 independent of h and j. We treat j = 0 separately and obtain directly

sup_{0≤s<t_1} |L̆_s − L_s| ≤ L_{t_1} ≤ B̃h.

Repeating the remaining steps of the proof of Theorem 3, we get the result.

4. Numerical experiments. In this section, we demonstrate that the proven convergence orders in the timestep of 1/2, (1+β)/2, and 1 for the different methods are indeed sharp in the case of regular solutions; that the empirical variance of N-sample estimators is of order 1/N; and that the method also converges experimentally in the presence of blow-up. To show this, we study three test cases with varying regularity of the initial data and of the loss function.

4.1. Lipschitz initial data and no blow-up. In our first experiment, we choose Y_0 such that 1/Y_0 ∼ Exp(λ), which guarantees that the density decays exponentially near zero. We take λ = 1 in our experiments, and pick the parameters in (1) to be α = 0.8, T = 2. The solution is found to be continuous.

We perform numerical simulations using Algorithms 1 and 2 with N = 2 × 10^5 particles and different time meshes varying from 50 to 3200 points. To estimate the error, we consider the difference |L̃^{2n}_T − L̃^n_T| between the solutions with n and 2n timesteps, respectively, computed with the same paths. The results, presented in Figure 2, agree with the theory: for Algorithm 1, we get the convergence rate 1/2, and for Algorithm 2, the rate is 1, because the initial distribution is regular enough around 0, i.e. β = 1 in (7). We also investigate the convergence rate of L̂_{t_i} to L̃_{t_i} and L̄_{t_i} to L̆_{t_i} empirically. In order to compute the benchmark solution, we used N = 5 × 10^7 particles. From the results we conclude that both Algorithms 1 and 2 have the convergence rate 1/2 in N.

To illustrate the dependence on the parameter α, we include in Figure 3 plots of L_t and L'_t for different values of α. We evaluate L'_t numerically using a central finite difference approximation. In order to reduce the Monte Carlo noise, we increase N to 5 × 10^7 and reduce n to 200.

4.2. Hölder 1/2 initial data and no blow-up. In another example, we again consider α = 0.8 and T = 2, but choose Y_0 ∼ Gamma(1 + β, 1/2), i.e.
with density

f_{Y_0}(x) = x^β e^{−x/2} / (Γ(1 + β) 2^{1+β}),

such that f_{Y_0}(x) ≤ Cx^β for x > 0 and some C > 0. We choose β = 1/2. The solution is again found to be continuous.

Figure 2. Error of the loss process at t = T for 1/Y_0 ∼ Exp(1): (a) for increasing number n of timesteps; (b) for increasing number N of samples, both for Algorithms 1 and 2.
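The empirical orders here and in the later experiments can be obtained by least-squares regression of log error against log n; a minimal sketch on synthetic order-1/2 data (illustrative values only, not the paper's measurements; the helper name is ours):

```python
import numpy as np

def empirical_order(ns, errors):
    """Estimated convergence order: minus the slope of log(error) vs log(n)."""
    slope, _ = np.polyfit(np.log(ns), np.log(errors), 1)
    return -slope

ns = np.array([50, 100, 200, 400, 800, 1600, 3200])
errors = 0.3 * ns ** -0.5            # synthetic errors decaying at rate 1/2
```

The confidence intervals reported in Section 4.3 come from the same regression, using the standard error of the fitted slope.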

Figure 3. (a) L_t and (b) L'_t for different values of α.

We perform numerical simulations using Algorithm 1, and Algorithm 2 on uniform and non-uniform meshes varying from 50 to 3200 points and with N = 2 × 10^5 particles. The results are presented in Figure 4. As predicted by the theory, for Algorithm 1 we get the convergence rate 1/2, for Algorithm 2 on uniform meshes the rate (1+β)/2 = 3/4, and for Algorithm 2 on non-uniform meshes the rate 1. As in the previous example, we also investigate the convergence rate in N empirically. These results also confirm the rate 1/2 in N for both Algorithms 1 and 2. In Figure 5 we present the dependence of L_t and L'_t on the parameter α. As in the previous example, we use N = 5 × 10^7 and n = 200.

Figure 4. Error of the loss process at t = T for Y_0 ∼ Gamma(3/2, 1/2): (a) for increasing number n of timesteps; (b) for increasing number N of samples, both for Algorithms 1 and 2.

Figure 5. (a) L_t and (b) L'_t for different values of α.

4.3. Hölder 1/2 initial data and blow-up. In our third example, we illustrate possible jumps arising in the solution for sufficiently large values of α. We consider α = 1.5, T = 0.008 and choose Y_0 as in the previous example, Y_0 ∼ Gamma(1 + β, 1/2). Note that the blow-up happens already for very small t due to the interplay of the mass close to 0 for Y_0 and the relatively large α.

With the lack of convergence theory for discontinuous (L_t)_{t≥0}, we apply Algorithms 1 and 2, and empirically estimate the error. In Figure 6 (a) we show L̃_t computed using Algorithm 1 for different n; in Figure 6 (b) the numerical error as a function of t for specific n; and in Figure 7 (a) and (b) we estimate the convergence rate for different t.

Figure 6 (a) shows that a fairly fine resolution is needed to capture the discontinuity and its timing, but that all meshes predict the size of the jump well. This is further illustrated in Figure 6 (b), which shows the build-up of the error before the jump, the lack of uniform convergence due to the displacement of the jump on different meshes, and the relatively constant error after the jump. In Figure 7 (a) we estimate the convergence order at T = 0.002, i.e. before the jump. By regression, we get 0.53 (0.47, 0.59) for Algorithm 1 and 0.80 (0.72, 0.88) for Algorithm 2, where the 95% confidence interval is in brackets, which agrees with the theory for continuous L_t. In Figure 7 (b), for the error at T = 0.008, i.e. after the jump, we get 0.93 (0.81, 1.05) for Algorithm 1 and 1.03 (0.92, 1.14) for Algorithm 2. The faster convergence may result from the near-constancy of the losses after the jump. To get an overall picture of the accuracy, we suggest the following metrics to measure the "closeness" of the computed solutions to L_t:

Figure 6. (a) Loss process computed using Algorithm 1 for different n; (b) error as a function of t.

Figure 7. Convergence rate: at (a) T = 0.001, (b) T = 0.008.

1. d_1(L_t, L̃_t) = ∫_0^T |L_t − L̃_t| dt. This is computed in practice by numerical integration.

2. d_2(L_t, L̃_t) = |t* − t̃*|, where t* and t̃* are the jump times for L_t and L̃_t, respectively. They are approximated by the points with the steepest gradient, j* = argmax_j (L̃_{t_j} − L̃_{t_{j−1}}), t̃* = j*h.

3. d_3(L_t, L̃_t) = sup_{t∈[0,T]} |L^{−1}_t − L̃^{−1}_t|, where L^{−1}_t and L̃^{−1}_t are the corresponding inverse functions. The inverse functions are found by the chebfun toolbox ([10]), which automatically splits functions into intervals of continuity.
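The first two metrics are straightforward to evaluate on a discrete loss path. A minimal sketch (function names and the sample paths are ours, for illustration): d_1 by the trapezoidal rule, and the jump time of metric 2 located at the steepest increment:

```python
import numpy as np

def d1(L_a, L_b, h):
    """Metric 1: L^1 distance of two loss paths on a uniform mesh (trapezoidal rule)."""
    diff = np.abs(L_a - L_b)
    return h * (0.5 * diff[0] + diff[1:-1].sum() + 0.5 * diff[-1])

def jump_time(L, h):
    """Metric 2 ingredient: t* = j* h with j* the index of the steepest increment."""
    return (np.argmax(np.diff(L)) + 1) * h

# two illustrative discrete loss paths with a jump around the fourth mesh point
L_a = np.array([0.0, 0.1, 0.15, 0.7, 0.72, 0.73])
L_b = np.array([0.0, 0.1, 0.2, 0.68, 0.71, 0.73])
h = 0.001
```

Metric 3 additionally requires the (piecewise) inverse of each path, which in the experiments is delegated to chebfun.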

In the absence of the exact $L_t$, we again use the difference between quantities computed with $2n$ and $n$ points, for the same Monte Carlo paths, as a proxy. We present the results in Figure 8 (a) for Algorithm 1 and in Figure 8 (b) for Algorithm 2. For metric 1 we have convergence rates 0.68 (0.63, 0.73) and 0.76 (0.71, 0.81), for metric 2 we have 0.70 (0.65, 0.80) and 0.76 (0.72, 0.80), and for metric 3 we have 0.60 (0.52, 0.68) and 0.71 (0.63, 0.79) for Algorithms 1 and 2, respectively, where the 95% confidence intervals are given in brackets. We observe that the convergence rate for Algorithm 1 is somewhat better than 0.5, while for Algorithm 2 it is around 0.75 for all three metrics.


Figure 8. Convergence rate for $d_1(L, \tilde L)$, $d_2(L, \tilde L)$, $d_3(L, \tilde L)$ for (a) Algorithm 1, (b) Algorithm 2, with reference slopes $C n^{-0.5}$ and $C n^{-0.75}$.
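To make the experimental setup concrete, an explicit Euler particle scheme with absorption and loss feedback, in the spirit of Algorithm 1, can be sketched as follows. This is our illustrative reconstruction with arbitrary parameters, not the authors' code; in this simple variant, loss cascades within a step are only picked up at the next step.

```python
import numpy as np

def loss_path(N, n_steps, T, alpha, y0, rng):
    """One realisation of the discretized loss L_T for an explicit Euler
    particle scheme: particles are absorbed at 0, and each absorbed fraction
    dL pushes the survivors down by alpha * dL (positive feedback)."""
    h = T / n_steps
    y = np.full(N, float(y0))
    alive = np.ones(N, dtype=bool)
    L = 0.0
    for _ in range(n_steps):
        y[alive] += np.sqrt(h) * rng.standard_normal(alive.sum())
        newly_hit = alive & (y <= 0.0)
        if newly_hit.any():
            dL = newly_hit.sum() / N
            alive &= ~newly_hit
            y[alive] -= alpha * dL   # feedback from the hits on the survivors
            L += dL
    return L

rng = np.random.default_rng(0)
print(loss_path(500, 100, 1.0, 0.5, 1.0, rng))  # one sample of the loss at T = 1
```
Averaging `loss_path` over many seeds and refining `n_steps` reproduces the qualitative convergence behaviour discussed above.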

5. Conclusions and outlook. We have developed particle methods with explicit timestepping for the simulation of (1)–(3). Convergence with a rate up to 1 in the timestep is shown under a condition on the model parameters and time horizon, when the loss function is differentiable. Experimentally, the method also converges in the blow-up regime, and the variance of estimators is inversely proportional to the number of samples. This opens up several theoretical and practical questions. The efficiency of the method could be significantly improved by a simple application of multilevel simulation ([16, 15]). If the variance can be shown to behave on the finer levels as suggested by the numerical tests, combined with the proven result on the time stepping bias, the computational complexity for root-mean-square error $\epsilon$ would be brought down to $\epsilon^{-2}$, from $\epsilon^{-3}$ as observed presently ($\epsilon^{-2}$ from the number of samples and $\epsilon^{-1}$ from the number of timesteps). For the particular system (1)–(3), it is conceivable to apply a particle method without time stepping of the form

\[
Y_t^{(i)} = Y_0^{(i)} + W_t^{(i)} - \frac{\alpha}{N} \sum_{j=1}^{N} \mathbf 1_{\{\tau_{(j)} \le t\}},
\]

where $Y_0^{(i)}$ and $W^{(i)}$ are $N$ i.i.d. copies of $Y_0$ and $W$, and $\tau_{(j)}$, $1 \le j \le N$, are the order statistics of the hitting times of zero. Then, for $\tau_{(i)} < t < \tau_{(i+1)}$, conditional on the information up to $\tau_{(i)}$, the law of $Y_t^{(j)} - Y_{\tau_{(i)}}^{(j)}$ for those $N - i$ particles $j$ which have not yet hit zero is identical to that of independent standard Brownian motions with constant drift $-\alpha i / N$. So if we simulate those independent hitting times of $-Y_{\tau_{(i)}}^{(j)}$ for each, and then take the smallest one to be $\tau_{(i+1)}$, this inductive construction satisfies the correct law. The complexity is then $O(N^2)$, and combined with the observed variance of $O(1/N)$, this gives a complexity of $\epsilon^{-4}$ for error $\epsilon$. Another disadvantage is that this method with exact sampling is restricted to the constant parameter case, while the time stepping scheme can more easily be extended to variable coefficients.
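The hitting-time sampling underlying such a construction is standard: the first passage time of a Brownian motion with constant downward drift to a lower level follows an inverse Gaussian law, and with zero drift a Lévy law. A small sketch (the helper name and parameters are ours; NumPy's `wald` sampler is the inverse Gaussian):

```python
import numpy as np

def sample_hit_time(b, mu, rng, n=1):
    """Sample the first time X_t = b - mu*t + W_t reaches 0 (b > 0, mu >= 0).
    For mu > 0 this is inverse Gaussian with mean b/mu and shape b^2
    (numpy's 'wald' distribution); for mu = 0 it is the Levy law b^2 / Z^2
    with Z standard normal. Both are a.s. finite."""
    if mu > 0:
        return rng.wald(b / mu, b * b, size=n)
    z = rng.standard_normal(n)
    return (b / z) ** 2

rng = np.random.default_rng(1)
tau = sample_hit_time(1.0, 0.5, rng, n=200_000)
print(tau.mean())  # should be close to b/mu = 2.0
```
In the inductive construction above, one would draw such hitting times for the $N - i$ survivors with drift $\mu = \alpha i / N$ and take their minimum as $\tau_{(i+1)}$.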

Theoretically, one would like guaranteed convergence also in the blow-up regime. This requires the choice of an appropriate metric – the Skorokhod distance may be suitable, considering the analysis in [8] and the tests in Section 4.3. It also requires verification that the limit in the case of blow-up is a so-called "physical solution" as defined in [12, 14, 8], which on the one hand results from a specific sequential realisation of the losses in the case of blow-up ([14, 8]), and on the other hand is the right-continuous solution with the smallest jump size ([12]). Lastly, it would be interesting to investigate the extension to the models in [14] and [8] in more detail.

Acknowledgments. We thank Alex Lipton, Andreas Søjmark, and Sean Ledger for useful comments and interesting discussions.

Appendices

6. Proof of lemmas in Section 3.1.

6.1. Proof of Lemma 1.

Proof of Lemma 1. Writing
\begin{align*}
\sup_{s \le h\lfloor t/h\rfloor} (Y_s - Y^*_s)
&= \max_{1\le k\le \lfloor t/h\rfloor} \sup_{0\le s\le h} \big( W_{h(k-1)+s} - W_{h(k-1)} - \alpha(L_{h(k-1)+s} - L_{h(k-1)}) \big) \\
&= \max_{1\le k\le \lfloor t/h\rfloor} \sup_{0\le s\le h} \big( W_{h(k-1)+s} - W_{h(k-1)} - \alpha s L'_{h(k-1)+\theta s} \big)
\end{align*}
for some $\theta \in [0,1]$, the last expression can be estimated from both sides,
\begin{align*}
\max_{1\le k\le \lfloor t/h\rfloor} \Big\{ \sup_{0\le s\le h} \big( W_{h(k-1)+s} - W_{h(k-1)} \big) - \alpha \sup_{0\le s\le h} h L'_{h(k-1)+\theta s} \Big\}
&\le \max_{1\le k\le \lfloor t/h\rfloor} \sup_{0\le s\le h} \big( W_{h(k-1)+s} - W_{h(k-1)} - \alpha s L'_{h(k-1)+\theta s} \big) \\
&\le \max_{1\le k\le \lfloor t/h\rfloor} \Big\{ \sup_{0\le s\le h} \big( W_{h(k-1)+s} - W_{h(k-1)} \big) - \alpha \inf_{0\le s\le h} s L'_{h(k-1)+\theta s} \Big\}.
\end{align*}
Since $0 \le L'_s \le \hat B s^{-\frac{1-\beta}{2}}$,
\begin{align*}
\sup_{s \le h\lfloor t/h\rfloor} (Y_s - Y^*_s)
&\ge \max_{1\le k\le \lfloor t/h\rfloor} \Big\{ \sup_{0\le s\le h} \big( W_{h(k-1)+s} - W_{h(k-1)} \big) - \alpha \hat B h \, (h(k-1))^{-\frac{1-\beta}{2}} \Big\} \\
&=_d \sqrt{h}\, \max_{1\le k\le \lfloor t/h\rfloor} \Big\{ \sup_{0\le s\le 1} W^{(k)}_s - \alpha \hat B \sqrt{h}\, (h(k-1))^{-\frac{1-\beta}{2}} \Big\},
\end{align*}
and
\[
\sup_{s \le h\lfloor t/h\rfloor} (Y_s - Y^*_s) \le \max_{1\le k\le \lfloor t/h\rfloor} \sup_{0\le s\le h} \big( W_{h(k-1)+s} - W_{h(k-1)} \big) =_d \sqrt{h}\, \max_{1\le k\le \lfloor t/h\rfloor} \sup_{0\le s\le 1} W^{(k)}_s,
\]

where $W^{(1)}, W^{(2)}, \dots$ are i.i.d. copies of $W$. Taking into account that $\sqrt{h}\,(h(k-1))^{-\frac{1-\beta}{2}} \to 0$ as $h \to 0$, and applying the same arguments as in Proposition 1 in [2], we get (21). Similarly,
\[
\sup_{s \le h\lfloor t/h\rfloor} \big( \tilde Y_s - \tilde Y^*_s \big) = \max_{1\le k\le \lfloor t/h\rfloor} \sup_{0\le s\le h} \big( W_{h(k-1)+s} - W_{h(k-1)} \big) =_d \sqrt{h}\, \max_{1\le k\le \lfloor t/h\rfloor} \sup_{0\le s\le 1} W^{(k)}_s,
\]
where $W^{(1)}, W^{(2)}, \dots$ are i.i.d. copies of $W$. Again, applying the same arguments as in Proposition 1 in [2], we get (22).

6.2. Proof of Lemma 2.

Proof of Lemma 2. We first prove (23). It is obvious that
\[
\mathbb P\Big( \min_{j<i} Y_{t_j} > 0 \Big) \ge \mathbb P\Big( \inf_{s<t_i} Y_s > 0 \Big).
\]
Also,
\[
\mathbb P\Big( \min_{j<i} Y_{t_j} > 0 \Big) = \mathbb P\Big( \inf_{s<t_i} Y_s > 0 \Big) + \mathbb P\Big( \Big\{ \inf_{s<t_i} Y_s \le 0 \Big\} \cap \Big\{ \min_{j<i} Y_{t_j} > 0 \Big\} \Big).
\]
Thus,
\begin{align*}
\mathbb P\Big( \min_{j<i} Y_{t_j} > 0 \Big) - \mathbb P\Big( \inf_{s<t_i} Y_s > 0 \Big) &= \mathbb P\Big( \Big\{ \inf_{s<t_i} Y_s \le 0 \Big\} \cap \Big\{ \min_{j<i} Y_{t_j} > 0 \Big\} \Big) \\
&\le \mathbb P\Big( \min_{j<i} Y_{t_j} - \inf_{s<t_i} Y_s \ge \varepsilon \Big) + \mathbb P\Big( \min_{j<i} Y_{t_j} \in (0,\varepsilon),\ \min_{j<i} Y_{t_j} - \inf_{s<t_i} Y_s < \varepsilon \Big),
\end{align*}
for any $\varepsilon > 0$. Consider the first term. Using Markov's inequality,
\[
\mathbb P\Big( \min_{j<i} Y_{t_j} - \inf_{s<t_i} Y_s \ge \varepsilon \Big) \le \frac{\mathbb E\big[ \big( \min_{j<i} Y_{t_j} - \inf_{s<t_i} Y_s \big)^p \big]}{\varepsilon^p},
\]
for any $p \ge 1$. Using Lemma 1,
\[
\frac{1}{\sqrt h}\Big( \min_{j<i} Y_{t_j} - \inf_{s<t_i} Y_s \Big) \to_d \sqrt{2\log\tfrac{t_i}{h}},
\]
thus
\[
\frac{\mathbb E\big[ \big( \min_{j<i} Y_{t_j} - \inf_{s<t_i} Y_s \big)^p \big]}{\varepsilon^p\, h^{p/2}} \longrightarrow \frac{\big( 2\log\tfrac{t_i}{h} \big)^{p/2}}{\varepsilon^p}.
\]
For the second term,
\begin{align*}
\mathbb P\Big( \min_{j<i} Y_{t_j} \in (0,\varepsilon),\ \min_{j<i} Y_{t_j} - \inf_{s<t_i} Y_s < \varepsilon \Big) &\le \mathbb P\Big( \min_{j<i} Y_{t_j} \in (0,\varepsilon) \Big) \\
&= \mathbb P\Big( \min_{j<i} Y_{t_j} > 0 \Big) - \mathbb P\Big( \min_{j<i} Y_{t_j} > \varepsilon \Big) = \bar F_i(\varepsilon) - \bar F_i(0) \le \varepsilon \sup_{\theta\in[0,1]} \bar\varphi_i(\theta\varepsilon),
\end{align*}
where $\bar F_i$ and $\bar\varphi_i$ are the CDF and PDF of $\min_{j<i} Y_{t_j}$. We also note that $\bar\varphi_i(\theta\varepsilon)$ is bounded according to Lemma 3.

Combining both terms, we have
\[
\mathbb P\Big( \min_{j<i} Y_{t_j} > 0 \Big) - \mathbb P\Big( \inf_{s<t_i} Y_s > 0 \Big) \le A \frac{h^{p/2}}{\varepsilon^p} + B\varepsilon.
\]
Minimising the right-hand side over $\varepsilon$, we find $\varepsilon = h^a$ with $a = \frac{p}{2(p+1)}$. As a result, choosing $p \to \infty$, we get that for any $\delta > 0$ there exists $\gamma > 0$ such that
\[
\mathbb P\Big( \min_{j<i} Y_{t_j} > 0 \Big) - \mathbb P\Big( \inf_{s<t_i} Y_s > 0 \Big) \le \gamma h^{\frac12 - \delta}.
\]
For (24), we use identical steps and the corresponding estimates in Lemma 1 and Lemma 3 for $\tilde Y$ instead of $Y$.
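For completeness, the minimisation step can be spelled out: setting the derivative of the bound to zero,
\[
\frac{d}{d\varepsilon}\left( A\frac{h^{p/2}}{\varepsilon^{p}} + B\varepsilon \right) = -pA\frac{h^{p/2}}{\varepsilon^{p+1}} + B = 0
\quad\Longrightarrow\quad
\varepsilon = \left(\frac{pA}{B}\right)^{\frac{1}{p+1}} h^{\frac{p}{2(p+1)}},
\]
so both terms are of order $h^{\frac{p}{2(p+1)}}$, and since $\frac{p}{2(p+1)} \uparrow \frac12$ as $p \to \infty$, any exponent $\frac12 - \delta$ is attained for $p$ large enough.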

6.3. Proof of Lemma 3.

Proof of Lemma 3. We can rewrite $Z_i = Y_0 + V_i$, where $V_i = \inf_{s\le t_i}(W_s - \alpha L_s)$. As $Y_0$ and $V_i$ are independent, using convolution, we have
\[
\varphi_i(z) = \int_{-\infty}^{+\infty} f_{Y_0}(z-v)\, \mathbb P(V_i \in dv).
\]
Using the fact that $V_i \le 0$, $Y_0 \ge 0$, and (7), we have
\begin{align}
\varphi_i(z) = \int_{-\infty}^{z\wedge 0} f_{Y_0}(z-v)\, \mathbb P(V_i \in dv) &\le B \int_{-\infty}^{z\wedge 0} (z-v)^\beta\, \mathbb P(V_i \in dv) \nonumber\\
&= B F_{V_i}(z\wedge 0) \int_{-\infty}^{z\wedge 0} (z-v)^\beta\, \frac{\mathbb P(V_i \in dv)}{F_{V_i}(z\wedge 0)}, \tag{37}
\end{align}
where $F_{V_i}$ is the CDF of $V_i$. It is obvious that $F_{V_i}(z) > 0$ for all $z \le 0$: indeed, $F_{V_i}(z) = \mathbb P(V_i \le z) \ge \mathbb P(\inf_{s\le t_i} W_s \le z) > 0$.

Since $\beta \in (0,1]$, for all $z$ the function $(z-\cdot)^\beta : (-\infty, z\wedge 0) \to (0,\infty)$ is concave and, using Jensen's inequality for the probability measure $\frac{\mathbb P(V_i\in dv)}{F_{V_i}(z\wedge 0)}$ on $(-\infty, z\wedge 0)$,
\begin{equation}
\varphi_i(z) \le B F_{V_i}(z\wedge 0)^{1-\beta} \Big( \int_{-\infty}^{z\wedge 0} (z-v)\, \mathbb P(V_i \in dv) \Big)^{\beta} = B F_{V_i}(z\wedge 0)^{1-\beta} \big( \mathbb E\big[ (z-V_i)\, \mathbf 1_{\{V_i\le z\}} \big] \big)^\beta, \tag{38}
\end{equation}
where $\{V_i \le z\} = \{V_i \le z\wedge 0\}$ as $V_i \le 0$. Since $t \to L_t$ is non-decreasing, inserting $V_i$,
\begin{align}
\varphi_i(z) &\le B F_{V_i}(z\wedge 0)^{1-\beta} \Big( \mathbb E\Big[ \Big(z - \inf_{s\le t_i} W_s + \alpha L_{t_i}\Big)\, \mathbf 1_{\{\inf_{s\le t_i} W_s - \alpha L_{t_i} \le z\}} \Big] \Big)^{\beta} \nonumber\\
&\le B F_{V_i}(z\wedge 0)^{1-\beta} \Big( \mathbb E\Big[ \Big(z - \inf_{s\le t_i} W_s + \alpha \tilde B t_i^{\frac{1+\beta}{2}}\Big)\, \mathbf 1_{\{\inf_{s\le t_i} W_s \le z + \alpha \tilde B t_i^{\frac{1+\beta}{2}}\}} \Big] \Big)^{\beta}. \tag{39}
\end{align}
By simple integration of the density of the running minimum of Brownian motion,
\begin{equation}
\mathbb E\Big[ \Big(\xi - \inf_{s\le t_i} W_s\Big)\, \mathbf 1_{\{\inf_{s\le t_i} W_s \le \xi\}} \Big] = \Big( 2\Phi\Big(\tfrac{\xi}{\sqrt{t_i}}\Big) \wedge 1 \Big)\, \xi + \sqrt{\tfrac{2t_i}{\pi}}\, e^{-\frac{\xi^2}{2t_i}}, \tag{40}
\end{equation}
and, taking $\xi = z + \alpha \tilde B t_i^{\frac{1+\beta}{2}}$, we get from (39), for $z > 0$,
\[
\varphi_i(z) \le B \Big[ z + \alpha \tilde B t_i^{\frac{1+\beta}{2}} + \sqrt{\tfrac{2t_i}{\pi}} \Big]^\beta,
\]
and, similarly, for $z \le 0$,
\[
\varphi_i(z) \le B \Big[ \alpha \tilde B t_i^{\frac{1+\beta}{2}} + \sqrt{\tfrac{2t_i}{\pi}} \Big]^\beta.
\]
Combining the last two equations, we finally get (25) for $\varphi_i$. Using similar arguments for $\bar Z_i$ and $\tilde Z_i$, using $\inf_{s\le t_i} W_s \le \min_{j<i} W_{t_j}$ and $L_{t_i} \ge \tilde L_{t_i}$, respectively, we get the analogous estimates for $\bar\varphi_i$ and $\tilde\varphi_i$, and hence (25).
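Identity (40) can be sanity-checked by simulating the running minimum on a fine grid. The sketch below (our check, not part of the paper) compares the closed form with a crude Monte Carlo estimate for a negative $\xi$, where the cap $2\Phi(\cdot)\wedge 1$ is not active; note the grid minimum over-estimates the continuous infimum by $O(\sqrt{t/n})$, precisely the discretisation effect quantified in [2].

```python
import numpy as np
from math import erf, exp, pi, sqrt

def rhs_40(xi, t):
    # right-hand side of (40); the cap 2*Phi(.) ∧ 1 only binds for xi > 0
    Phi = 0.5 * (1.0 + erf(xi / sqrt(2.0 * t)))
    return min(2.0 * Phi, 1.0) * xi + sqrt(2.0 * t / pi) * exp(-xi * xi / (2.0 * t))

def mc_lhs_40(xi, t, n_paths=20_000, n_steps=400, seed=0):
    # crude grid estimate of E[(xi - inf_{s<=t} W_s) 1_{inf W_s <= xi}];
    # the discrete minimum is biased upward by O(sqrt(t / n_steps))
    rng = np.random.default_rng(seed)
    increments = rng.standard_normal((n_paths, n_steps)) * sqrt(t / n_steps)
    run_min = np.minimum(increments.cumsum(axis=1).min(axis=1), 0.0)
    return float(np.mean((xi - run_min) * (run_min <= xi)))

xi, t = -0.5, 1.0
print(rhs_40(xi, t), mc_lhs_40(xi, t))  # agree up to MC and discretisation error
```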

7. Proof of Lemma 5, Section 3.3.

Proof of Lemma 5. We can write
\[
\mathbb E\big[ \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}} \big] = \mathbb E\big[ \mathbf 1_{\{\min_{j<i} ( Y^{(k)}_0 + W^{(k)}_{t_j} - \alpha \tilde L_{t_j} + \alpha (\tilde L_{t_j} - \hat L^N_{t_j}) ) > 0\}} \big],
\]
and estimate
\begin{equation}
\mathbb E\big[ \mathbf 1_{\{\min_{j<i} \tilde Y^{(k)}_{t_j} > -\alpha \min_{j<i} (\tilde L_{t_j} - \hat L^N_{t_j})\}} \big] \le \mathbb E\big[ \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}} \big] \le \mathbb E\big[ \mathbf 1_{\{\min_{j<i} \tilde Y^{(k)}_{t_j} > -\alpha \max_{j<i} (\tilde L_{t_j} - \hat L^N_{t_j})\}} \big]. \tag{41}
\end{equation}
We evaluate the left- and right-hand sides of (41) separately. We start with the right-hand side and define for brevity $\tilde A^{(k)}_i = \{\min_{j<i} \tilde Y^{(k)}_{t_j} > -\alpha \max_{j<i} (\tilde L_{t_j} - \hat L^N_{t_j})\}$. Consider some $\varepsilon > 0$. Then,
\begin{align*}
\mathbb E\big[ \mathbf 1_{\tilde A^{(k)}_i} \big] &= \mathbb E\big[ \mathbf 1_{\tilde A^{(k)}_i \cap (\{\max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) \le \varepsilon\} \cup \{\max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) > \varepsilon\})} \big] \\
&= \mathbb E\big[ \mathbf 1_{\tilde A^{(k)}_i \cap \{\max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) \le \varepsilon\}} \big] + \mathbb E\big[ \mathbf 1_{\tilde A^{(k)}_i \cap \{\max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) > \varepsilon\}} \big] \\
&\le \mathbb P\Big( \tilde A^{(k)}_i \cap \Big\{ \max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) \le \varepsilon \Big\} \Big) + \mathbb P\Big( \max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) > \varepsilon \Big) \\
&\le \mathbb P\Big( \min_{j<i} \tilde Y^{(k)}_{t_j} > -\alpha\varepsilon \Big) + \mathbb P\Big( \max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) > \varepsilon \Big) \\
&= \mathbb P\Big( \min_{j<i} \tilde Y^{(k)}_{t_j} > 0 \Big) + \Big( \mathbb P\Big( \min_{j<i} \tilde Y^{(k)}_{t_j} > -\alpha\varepsilon \Big) - \mathbb P\Big( \min_{j<i} \tilde Y^{(k)}_{t_j} > 0 \Big) \Big) + \mathbb P\Big( \max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) > \varepsilon \Big) \\
&\le 1 - \tilde L_{t_i} + \alpha\varepsilon \sup_{\theta\in[0,1]} \tilde\varphi_i(-\theta\alpha\varepsilon) + \mathbb P\Big( \max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) > \varepsilon \Big),
\end{align*}
where $\tilde\varphi_i$ is the pdf of $\min_{j<i} \tilde Y^{(k)}_{t_j}$, which is bounded according to Lemma 3. Using (32) and properties of convergence in probability, we have
\[
\max_{j<i} \big(\tilde L_{t_j} - \hat L^N_{t_j}\big) \to_{\mathbb P} 0.
\]
Thus the last term goes to 0 as $N \to \infty$. Letting $\varepsilon \to 0$, we get
\begin{equation}
\lim_{N\to\infty} \mathbb E\big[ \mathbf 1_{\tilde A^{(k)}_i} \big] \le 1 - \tilde L_{t_i}. \tag{42}
\end{equation}

Similarly, denoting $\bar A^{(k)}_i = \{\min_{j<i} \tilde Y^{(k)}_{t_j} > -\alpha \min_{j<i} (\tilde L_{t_j} - \hat L^N_{t_j})\}$, we have
\begin{align*}
\mathbb E\big[ \mathbf 1_{\bar A^{(k)}_i} \big] &= \mathbb E\big[ \mathbf 1_{\bar A^{(k)}_i \cap (\{\min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) \ge -\varepsilon\} \cup \{\min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) < -\varepsilon\})} \big] \\
&= \mathbb E\big[ \mathbf 1_{\bar A^{(k)}_i \cap \{\min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) \ge -\varepsilon\}} \big] + \mathbb E\big[ \mathbf 1_{\bar A^{(k)}_i \cap \{\min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) < -\varepsilon\}} \big] \\
&\ge \mathbb P\Big( \bar A^{(k)}_i \cap \Big\{ \min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) \ge -\varepsilon \Big\} \Big) \\
&\ge \mathbb P\Big( \min_{j<i} \tilde Y^{(k)}_{t_j} > \alpha\varepsilon \Big) - \mathbb P\Big( \min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) < -\varepsilon \Big) \\
&\ge 1 - \tilde L_{t_i} - \alpha\varepsilon \sup_{\theta\in[0,1]} \tilde\varphi_i(\theta\alpha\varepsilon) - \mathbb P\Big( \min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j}) < -\varepsilon \Big),
\end{align*}
where $\tilde\varphi_i$ is again the pdf of $\min_{j<i} \tilde Y^{(k)}_{t_j}$, bounded according to Lemma 3, and the last term vanishes as $N \to \infty$ by (32). Letting $\varepsilon \to 0$,
\begin{equation}
\lim_{N\to\infty} \mathbb E\big[ \mathbf 1_{\bar A^{(k)}_i} \big] \ge 1 - \tilde L_{t_i}. \tag{43}
\end{equation}
Combining (42), (43), and (41), we immediately get (33). Consider now the variance

\begin{equation}
\mathbb V\big[ \hat L^N_{t_i} \big] = \frac{1}{N^2} \Big( \sum_{k=1}^N \mathbb V\big[ \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}} \big] + \sum_{k\ne l} \operatorname{cov}\Big( \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}},\ \mathbf 1_{\{\min_{j<i} \hat Y^{(l)}_{t_j} > 0\}} \Big) \Big). \tag{44}
\end{equation}
The covariance can be estimated by
\begin{equation}
\operatorname{cov}\Big( \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}},\ \mathbf 1_{\{\min_{j<i} \hat Y^{(l)}_{t_j} > 0\}} \Big) = \mathbb E\big[ \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}} \mathbf 1_{\{\min_{j<i} \hat Y^{(l)}_{t_j} > 0\}} \big] - \mathbb E\big[ \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}} \big]\, \mathbb E\big[ \mathbf 1_{\{\min_{j<i} \hat Y^{(l)}_{t_j} > 0\}} \big]. \tag{45}
\end{equation}
As above, and because of exchangeability, for the second term of the last expression,
\begin{equation}
\mathbb E\big[ \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}} \big]\, \mathbb E\big[ \mathbf 1_{\{\min_{j<i} \hat Y^{(l)}_{t_j} > 0\}} \big] \xrightarrow{N\to\infty} \big( 1 - \tilde L_{t_i} \big)^2. \tag{46}
\end{equation}
Now we consider the first term of (45). Similarly to above, it can be estimated by
\begin{align}
\mathbb E\big[ \mathbf 1_{\{\min_{j<i} \tilde Y^{(k)}_{t_j} > -\alpha \min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j})\}} \mathbf 1_{\{\min_{j<i} \tilde Y^{(l)}_{t_j} > -\alpha \min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j})\}} \big] &\le \mathbb E\big[ \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}} \mathbf 1_{\{\min_{j<i} \hat Y^{(l)}_{t_j} > 0\}} \big] \nonumber\\
&\le \mathbb E\big[ \mathbf 1_{\{\min_{j<i} \tilde Y^{(k)}_{t_j} > -\alpha \max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j})\}} \mathbf 1_{\{\min_{j<i} \tilde Y^{(l)}_{t_j} > -\alpha \max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j})\}} \big]. \tag{47}
\end{align}
Similarly to (42) and (43), one can show that
\begin{align*}
\lim_{N\to\infty} \mathbb E\big[ \mathbf 1_{\{\min_{j<i} \tilde Y^{(k)}_{t_j} > -\alpha \min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j})\}} \mathbf 1_{\{\min_{j<i} \tilde Y^{(l)}_{t_j} > -\alpha \min_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j})\}} \big] &\ge \big( 1 - \tilde L_{t_i} \big)^2, \\
\lim_{N\to\infty} \mathbb E\big[ \mathbf 1_{\{\min_{j<i} \tilde Y^{(k)}_{t_j} > -\alpha \max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j})\}} \mathbf 1_{\{\min_{j<i} \tilde Y^{(l)}_{t_j} > -\alpha \max_{j<i}(\tilde L_{t_j} - \hat L^N_{t_j})\}} \big] &\le \big( 1 - \tilde L_{t_i} \big)^2.
\end{align*}
Hence,
\begin{equation}
\mathbb E\big[ \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}} \mathbf 1_{\{\min_{j<i} \hat Y^{(l)}_{t_j} > 0\}} \big] \xrightarrow{N\to\infty} \big( 1 - \tilde L_{t_i} \big)^2. \tag{48}
\end{equation}
Thus, combining (46) and (48), we get
\[
\operatorname{cov}\Big( \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}},\ \mathbf 1_{\{\min_{j<i} \hat Y^{(l)}_{t_j} > 0\}} \Big) \xrightarrow{N\to\infty} 0.
\]
Considering the first term of (44), it is easy to see that each variance is bounded by 1. As a result, we have
\[
\mathbb V\big[ \hat L^N_{t_i} \big] \le \frac{1}{N} + \max_{k\ne l} \Big| \operatorname{cov}\Big( \mathbf 1_{\{\min_{j<i} \hat Y^{(k)}_{t_j} > 0\}},\ \mathbf 1_{\{\min_{j<i} \hat Y^{(l)}_{t_j} > 0\}} \Big) \Big| \xrightarrow{N\to\infty} 0.
\]

REFERENCES

[1] F. Antonelli and A. Kohatsu-Higa, Rate of convergence of a particle method to the solution of the McKean–Vlasov equation, The Annals of Applied Probability, 12 (2002), 423–476.
[2] S. Asmussen, P. Glynn and J. Pitman, Discretization error in simulation of one-dimensional reflecting Brownian motion, The Annals of Applied Probability, 5 (1995), 875–896.
[3] K. Borovkov and A. Novikov, Explicit bounds for approximation rates of boundary crossing probabilities for the Wiener process, Journal of Applied Probability, 42 (2005), 82–92.
[4] M. Bossy and D. Talay, A stochastic particle method for the McKean–Vlasov and the Burgers equation, Mathematics of Computation, 66 (1997), 157–192.
[5] M. J. Cáceres, J. A. Carrillo and B. Perthame, Analysis of nonlinear noisy integrate & fire neuron models: Blow-up and steady states, The Journal of Mathematical Neuroscience, 1 (2011), Art. 7, 33 pp.
[6] J. A. Carrillo, M. d. M. González, M. P. Gualdani and M. E. Schonbek, Classical solutions for a nonlinear Fokker–Planck equation arising in computational neuroscience, Communications in Partial Differential Equations, 38 (2013), 385–409.
[7] F. Delarue, J. Inglis, S. Rubenthaler and E. Tanré, Global solvability of a networked integrate-and-fire model of McKean–Vlasov type, The Annals of Applied Probability, 25 (2015), 2096–2133.
[8] F. Delarue, J. Inglis, S. Rubenthaler and E. Tanré, Particle systems with a singular mean-field self-excitation. Application to neuronal networks, Stochastic Processes and their Applications, 125 (2015), 2451–2492.
[9] G. Dos Reis, G. Smith and P. Tankov, Importance sampling for McKean–Vlasov SDEs, 2018, arXiv:1803.09320.
[10] T. A. Driscoll, N. Hale and L. N. Trefethen, Chebfun Guide, Pafnuty Publications, 2014.
[11] P. Glasserman, Monte Carlo Methods in Financial Engineering, Stochastic Modelling and Applied Probability, vol. 53, Springer-Verlag, New York, 2004.
[12] B. Hambly, S. Ledger and A. Søjmark, A McKean–Vlasov equation with positive feedback and blow-ups, arXiv:1801.07703.
[13] A. Lipton, Modern monetary circuit theory, stability of interconnected banking network, and balance sheet optimization for individual banks, International Journal of Theoretical and Applied Finance, 19 (2016), 1650034, 57 pp.
[14] S. Nadtochiy and M. Shkolnikov, Particle systems with singular interaction through hitting times: Application in systemic risk modeling, The Annals of Applied Probability, 29 (2019), 89–129.
[15] L. Ricketson, A multilevel Monte Carlo method for a class of McKean–Vlasov processes, arXiv:1508.02299.
[16] L. Szpruch, S. Tan and A. Tse, Iterative particle approximation for McKean–Vlasov SDEs with application to multilevel Monte Carlo estimation, arXiv:1706.00907.

Received for publication June 2018.

E-mail address: [email protected]
E-mail address: [email protected]