Growth-fragmentation process embedded in a planar Brownian excursion

by Elie Aïdékon1 and William Da Silva2

Summary. The aim of this paper is to present a self-similar growth-fragmentation process linked to a Brownian excursion in the upper half-plane H, obtained by cutting the excursion at horizontal levels. We prove that the associated growth-fragmentation is related to one of the growth- fragmentation processes introduced by Bertoin, Budd, Curien and Kortchemski in [5]. Keywords. Growth-fragmentation process, self-similar Markov pro- cess, planar , excursion theory. 2010 Mathematics Subject Classification. 60D05.

1 Introduction

We consider a Brownian excursion in the upper half-plane H from 0 to a positive real number z0. For a > 0, if the excursion hits the set {z ∈ C : =(z) = a} of points with imaginary part a, it will make a countable number of excursions above it, that we a,+ a,+ denote by (ei , i ≥ 1). For any such excursion, we let ∆ei be the difference between the endpoint of the excursion and its starting point, which we will refer to as the size or length of the excursion. Since both points have the same imaginary part, the collection a,+ (∆ei , i ≥ 1) is a collection of real numbers and we suppose that they are ranked in decreasing order of their magnitude. Our main theorem describes the law of the a,+ process (∆ei , i ≥ 1)a≥0 indexed by a in terms of a self-similar growth-fragmentation. We refer to [4] and [5] for background on growth-fragmentations. Let us describe the growth-fragmentation process involved in our case.

Let Z = (Za)0≤a<ζ be the positive self-similar Markov process of index 1 whose Lamperti representation is

−1 arXiv:2005.06372v2 [math.PR] 21 Aug 2020 Za = z0 exp(ξ(τ(z0 a))),

where ξ is the Lévy process with Laplace exponent

Z −y 4 2 qy y e dy Ψ(q) = − q + (e − 1 − q(e − 1)) y 2 , q < 3, (1) π π y>− ln(2) (e − 1)

τ is the time change  Z s  τ(a) = inf s ≥ 0, eξ(u)du > a , 0

1LPSM, Sorbonne Université Paris VI, and Institut Universitaire de France, [email protected] 2LPSM, Sorbonne Université Paris VI, [email protected]

1 and ζ = inf{a ≥ 0,Za = 0}. The cell system driven by Z can be roughly constructed as follows. The size of the so-called Eve cell is z0 at time 0 and evolves according to Z. Then, conditionally on Z, we start at times a when a jump ∆Za = Za − Za− occurs independent processes starting from −∆Za, distributed as Z when ∆Za < 0 and as −Z when ∆Za > 0. These processes represent the sizes of the daughters of the Eve particle. Then repeat the process for all the daughter cells: at each jump time of the cell process, start an independent copy of the process Z if the jump is negative, −Z if the jump is positive, with initial value the negative of the corresponding jump. This defines the sizes of the cells of the next generation and we proceed likewise. We then define, for a ≥ 0, X(a) as the collection of sizes of cells alive at time a, ranked in decreasing order of their magnitude. Growth-fragmentation processes were introduced in [4]. Beware that the growth- fragmentation process we just defined is not included in the framework of [4] or [5] be- cause we allow cells to be created at times corresponding to positive jumps, giving birth to cells with negative size. Therefore, the process X is not a true growth-fragmentation process. The formal construction of the process X is done in Section4. The following theorem is the main result of the paper.

a,+ Theorem 1.1. The process (∆ei , i ≥ 1)a≥0 is distributed as X.

Remarks.

• The fact that there is no local explosion (in the sense that there is no compact of R\{0} with infinitely many elements of X) can be seen as a consequence of the theorem.

• From the skew-product representation of planar Brownian motion, this theorem has an analog in the radial setting. It can be stated as follows. Take a Brownian excursion in the unit disc from boundary to boundary, with continuous deter- mination of its argument (i.e., its winding number around the origin) z0 > 0. Then, for each a ≥ 0, record for each excursion made in the disc of radius e−a the corresponding winding number. The collection of these winding numbers, ranked in decreasing order of their magnitude and indexed by a is distributed as X.

• One could finally look at the growth-fragmentation associated to the Brownian bubble measure in H. It would give an infinite measure on the space of (signed) growth-fragmentation processes starting from 0. In the non-critical case (i.e. when the natural martingale associated to the intrinsic area converges in L1), a measure on growth-fragmentation processes starting from 0 has been constructed by Bertoin, Curien and Kortchemski [6], see Section 4.3 there.

Related works. A pure fragmentation process was identified by Bertoin [3] in the case of the linear Brownian excursion where the size of an excursion was there its duration. Le Gall and Riera [15] identified a growth-fragmentation process in the Brownian motion indexed by the Brownian tree. We will follow the strategy of this paper, making use of excursion theory to prove our theorem.

When killing in X all cells with negative size (and their progeny), one recovers a genuine self-similar (positive) growth-fragmentation driven by Z, call it X. The process X appears in the work of Bertoin et al. [5], compare Proposition 5.2 in [5] with Proposition 4.2 below. In Section 3.3 of [5], the authors exhibit remarkable martingales associated to growth-fragmentation processes and describe the corresponding changes

2 of measure. In the case of X, the martingale consists in summing the sizes raised to the power 5/2 of all cells alive at time a. Under the change of measure, the process X has a spinal decomposition: the size of the tagged particle is a conditioned on staying positive, while other cells behave normally. In the case of X, where we also include cells with negative size, a similar martingale appears, substituting 2 for 5/2, while the tagged particle will now follow a Cauchy process (with no conditioning). It is the content of Section 3.3. This martingale is related to the one appearing in [1], where a change of measure was also specified. In that paper, the authors exhibit a martingale in the radial case, see Section 7.1 there. The martingale in our setting can be viewed as a limit case, where one conformally maps the unit disc to the upper half-plane, then sends the image of the origin towards infinity.

Connection with random planar maps. In [5], the authors relate a distinguished family of growth-fragmentation processes to the exploration of a Boltzmann planar map, see Proposition 6.6 there. The mass of a particle in the growth-fragmentation represents the perimeter of a region in the planar map which is currently explored, a negative jump the splitting of the region into two smaller regions to be explored, and a positive jump the discovery of a face with large degree. In this setting, only a negative jump is a birth event. The area of the map is identified as the limit of a natural martingale associated to the underlying branching , see Corollary 6.7 there. On the other hand, a Boltzmann random map can also be seen as the gasket of a O(n) loop model, see Section 8 of [14]. From this point of view, a positive jump of the growth-fragmentation stands for the discovery of a loop which still has to be explored, so that positive jumps will be birth events too. The signed growth-fragmentation X of our paper would represent the exploration of a planar map decorated with the O(n) model with n = 2, where the sign depends on the parity of the number of loops which surrounds the explored region. One could wonder whether we would have an intrinsic area as in [5]. Actually, the natural martingale associated to the branching random walk converges to 0: it is the so-called critical martingale in the branching random walk literature. The martingale to consider is then the derivative martingale, see Section 5, whose limit is proved to be twice the duration of the Brownian excursion (i.e. the inverse of an exponential random variable, see (7)). This gives a conjectured limit of the area of a O(2) decorated planar map properly renormalized, see [9], Theorem 9, for the analogous results in the O(n) model for n 6= 2.

The paper is organized as follows. In Section2, we recall some excursion theory for the planar Brownian motion. Among others, we will define the locally largest fragment, which will be our Eve particle. In Section3, we show the branching property, identify the law of the Eve particle with that of Z and exhibit the martingale in our context. Theorem 1.1 will be proved in Section4, where we also show the relation with [5]. Finally, we identify the limit of the derivative martingale in Section5.

Acknowledgements: We are grateful to Jean Bertoin and Bastien Mallein for stimulating discussions, and to Juan Carlos Pardo for a number of helpful discussions regarding self-similar processes. After a first version of this article appeared online, Nicolas Curien pointed to us the connection with random planar maps, and the link between the duration of the excursion and the area of the map. We warmly thank him for his explanations. We also learnt that Timothy Budd in an unpublished note had already predicted the link between growth-fragmentations of [5] and planar excursions.

3 2 Excursions of Brownian motion in H 2.1 The excursion process of Brownian motion in H In this section, we recall some basic facts from excursion theory. Let (X,Y ) be a planar Brownian motion defined on the complete probability space (Ω, F , P), and (Ft)t≥0 be the usual augmented filtration. In addition, we call X the space of real-valued continuous functions w defined on an interval [0,R(w)] ⊂ [0, ∞), endowed with the usual σ-fields generated by the coordinate mappings w 7→ w(t ∧ R(w)). Let also X0 be the subset of functions in X vanishing at their endpoint R(w). We set U := {u = (x, y) ∈ X × X0, u(0) = 0 and R(x) = R(y)} ± and Uδ := U ∪ {δ}, where δ is a cemetery function and write U for the set of such functions in U with nonnegative and nonpositive imaginary part respectively. These sets are endowed with the product σ−field denoted Uδ and the filtration (Ft)t≥0 adapted to the coordinate process on U. For u ∈ U, we take the obvious notation Y R(u) := R(x) = R(y). Finally, let (Ls)s≥0 = (Ls )s≥0 denote the at 0 of Y Y and τs = τs its inverse defined by τs := inf{r > 0,Lr > s}. Recall that the set of zeros of Y is almost surely equal to the set of τs, τs− ; we refer to [16] for more details on local times.

Definition 2.1. The excursion process is the process e = (es, s > 0) with values in (Uδ, Uδ) defined on (Ω, F , P) by

(i) if τs − τs− > 0, then   e : r 7→ X − X ,Y , r ≤ τ − τ − , s r+τs− τs− r+τs− s s

(ii) if τs − τs− = 0, then es = δ. Figure1 is a (naive) drawing of such an excursion.

Figure 1: Drawing of an excursion in the upper half-plane H.

The next proposition follows from the one-dimensional case.

4 Proposition 2.2. The excursion process (es)s>0 is a (Fτs )s>0−Poisson . We write n for the intensity measure of this . It is a measure + − on U, and we shall denote by n+ and n− its restrictions to U and U . We have the following expression for n.

R(y) Proposition 2.3. n(dx, dy) = n(dy)P(X ∈ dx), where n denotes the one-dimensional T Itô’s measure on X0 and X := (Xt, t ∈ [0,T ]).

2.2 The under n

For any u ∈ U and any a > 0, let Ta := inf{0 ≤ t ≤ R(u), y(t) = a} be the hitting time of a by y. Then we have the following kind of Markov property under n+. Lemma 2.4. (Markov property under n) Under n , on the event {T < ∞}, the process (u(T + t) − u(T )) is + a a a 0≤t≤R(u)−Ta independent of FTa and has the law of a Brownian motion killed at the time ρ when it reaches {=(z) = −a}.

Proof. This results from the fact that under the one-dimensional Itô’s measure n+, the coordinate process t 7→ y(t) has the transition of a Brownian motion killed when it reaches 0 (cf. Theorem 4.1, Chap. XII in [16]). Let f, g, h1, h2 be nonnegative measurable functions defined on X . For simplicity, T write for w ∈ X or w ∈ C([0, ∞)), w(θr) = w(r + ·) − w(r) and for T > 0, w := (w(t), t ∈ [0,T ]). We want to compute Z Ta Ta 1 f(x(θTa ))g(y(θTa ))h1(x )h2(y ) {Ta<∞}n+(dx, dy) U Z Ta Ta 1 R(y) = f(x(θTa ))g(y(θTa ))h1(x )h2(y ) {Ta<∞}n+(dy)P(X ∈ dx) U Z h    i Ta 1 R(y)−Ta(y) Ta(y) = g(y(θTa ))h2(y ) {Ta<∞}E f Xe h1 X n+(dy) X0

where Xe = X(θTa(y)), and for y ∈ X0, Ta = Ta(y) is the hitting time of a by y. Using the simple Markov property at time Ta(y) in the above expectation gives Z Ta Ta 1 f(x(θTa ))g(y(θTa ))h1(x )h2(y ) {Ta<∞}n+(dx, dy) U Z h  i h  i Ta 1 R(y)−Ta(y) Ta(y) = g(y(θTa ))h2(y ) {Ta<∞}E f X E h1 X n+(dy). X0

Then we can use the Markov property under n+ stated in Theorem 4.1, Chap. XII in [16]: Z Ta Ta 1 f(x(θTa ))g(y(θTa ))h1(x )h2(y ) {Ta<∞}n+(dx, dy) U Z h  i h    i Ta(y) Ta 1 T−a T−a = E h1 X h2(y ) {Ta<∞}n+(dy)E g Y f X X0 Z h    i Ta Ta 1 T−a T−a = h1(x )h2(y ) {Ta<∞}n+(dx, dy)E g Y f X . U This concludes the proof of Lemma 2.4.

5 2.3 Excursions above horizontal levels We next set some notation for studying the excursions above a given level. Let a ≥ 0 and u = (x, y) ∈ U +. In the following list of definitions, one should think of u as a Brownian excursion in the sense of Definition 2.1. Define I(a) = {s ∈ [0,R(u)], y(s) > a} . Then by continuity I(a) is a countable (possibly empty) union of disjoint open intervals I1,I2,... For any such interval I = (i−, i+), take uI (s) = u(i− + s) − u(i−), 0 ≤ s ≤ i+ − i−, for the restriction of u to I, and ∆uI = x(i+) − x(i−) for the size or length of uI . Note that uI ∈ U. If now z = u(t), 0 ≤ t ≤ R(u), is on the path of u and 0 ≤ a < =(z), we define

(t) (t) ea = ea (u) = uI , where I is the unique open interval in the above partition of I(a) such that t ∈ I (note that this depends on t and not only on z, which could be a double point). By (t) (t) convention, we also set for a = =(z), ea = z and ∆ea = 0. This is represented in an excessively naive way in Figure2 below.

Figure 2: Excursions above the level t.

(t) (t) For z = u(t), let F : a ∈ [0, =(z)] 7→ ∆ea . Define t,← u := (u(t − s) − u(t))0≤s≤t , (2) t,→ u := (u(t + s) − u(t))0≤s≤R(u)−t . (3) If we set for a ∈ [0, y(t)],

t,← Ta := inf {s ≥ 0, y(t − s) = a} , (4) t,→ Ta := inf {s ≥ 0, y(t + s) = a} , (5)

(t) t,→ t,→ t,← t,← we can write F (a) = u (Ta ) − u (Ta ). Lemma 2.5. For any u ∈ U +, for all 0 ≤ t ≤ R(u), the function F (t) is càdlàg. Proof. Fix t ∈ [0,R(u)]. We want to show that F (t) is càdlàg on [0, y(t)]. By usual properties of inverse of continuous functions (see Lemma 4.8 and the remark following t,← t,→ it in Chapter 0 of Revuz-Yor [16]), a 7→ Ta and a 7→ Ta are càdlàg (in a). Hence F (t) is càdlàg since u is continuous.

6 2.4 Bismut’s description of Itô’s measure in H

In the case of one-dimensional Itô’s measure n+, Bismut’s description roughly states that if we pick an excursion u at random according to n+, and some time 0 ≤ t ≤ R(u) according to the Lebesgue measure, then the "law" of u(t) is the Lebesgue measure and conditionally on u(t) = α, the left and right parts of u (seen from u(t)) are independent Brownian motions killed at −α (see Theorem 4.7, Chap. XII in [16]). We deduce an analogous result in the case of Itô’s measure in H and we apply it to show that for n+−almost every excursion, there is no loop remaining above any horizontal level.

Proposition 2.6. (Bismut’s description of Itô’s measure in H) + Let n+ be the measure defined on R+ × U by

n+(dt, du) = 1{0≤t≤R(u)}dt n+(du).

Then under n+ the "law" of (t, (x, y)) 7→ y(t) is the Lebesgue measure dα and condition- t,← t,→ ally on y(t) = α, u = (u(t − s) − u(t))0≤s≤t and u = (u(t + s) − u(t))0≤s≤R(u)−t are independent Brownian motions killed when reaching {=(z) = −α}.

See Figure3. Proposition 2.6 is a direct consequence of the one-dimensional analo- gous result, for which we refer to [16] (see Theorem 4.7, Chapter XII). The next proposition ensures that for almost every excursion under n+, there is no loop growing above any horizontal level. Let

+ (t) L := {u ∈ U , ∃0 ≤ t ≤ R(u), ∃0 ≤ a < y(t), ∆ea (u) = 0}, be the set of excursions u having a loop remaining above some level a. Then we have :

Proposition 2.7. n+ (L ) = 0.

Proof. We first prove the result under n+, namely

 + (t)  n+ {(t, u) ∈ R+ × U , ∃0 ≤ a < y(t), ∆ea (u) = 0} = 0.

Recall the notation (2)-(5). From Bismut’s description of n+ we get

 + (t)  n+ {(t, u) ∈ R+ × U , ∃0 ≤ a < y(t), ∆ea (u) = 0}  + t,→ t,→ t,← t,←  = n+ {(t, u) ∈ R+ × U , ∃0 ≤ a < y(t), u (Ta ) = u (Ta )} Z ∞  0  = dα P ∃0 < a ≤ α, XTa = XT 0 , 0 a

0 0 where X and X are independent linear Brownian motions, and Ta and Ta are hitting times of a of other independent Brownian motions (corresponding to the imaginary 0 parts). Now, XT and X 0 are independent symmetric Cauchy processes, and therefore a Ta 0 XT − X 0 is again a Cauchy process (see Section 4, Chap. III of [16]). Since points a Ta are polar for the symmetric Cauchy process (see [2], Chap. II, Section 5), we obtain  0  ∃0 < a ≤ α, XT = X 0 = 0 and under n+ the result is proved. P a Ta

7 Figure 3: Bismut’s description of n+

To extend the result to n+, we notice that if u ∈ L , then the set of t’s satisfying the definition of L has positive Lebesgue measure: namely, it contains all the times until the loop comes back to itself. This translates into

( Z R(u) ) + L ⊂ u ∈ U , 1 (t) dt > 0 . 0 {∃0≤a

But

Z R(u) ! n+ 1 (t) dt 0 {∃0≤a

Hence, by the first step of the proof,

Z R(u) ! n+ 1 (t) dt = 0, 0 {∃0≤a

Z R(u) which gives 1 (t) dt = 0 for n+−almost every excursion, and the 0 {∃0≤a

2.5 The locally largest excursion In [5], the authors give a canonical way to construct the growth-fragmentation, through the so-called locally largest fragment. We want to mimic this construction in our case.

8 In order to define the locally largest excursion, we set for u ∈ U + and 0 ≤ t ≤ R(u),

n 0 (t) 0 (t) 0− (t) 0 o S(t) := sup a ∈ [0, y(t)], ∀ 0 ≤ a ≤ a, F (a ) ≥ F (a ) − F (a ) .

Observe that the supremum is taken over a non-empty set by Lemma 2.5 as soon as y(t) > 0 and u(R(u)) 6= 0. Let

S := sup S(t). 0≤t≤R(u)

In the case of Brownian excursions, the following proposition holds.

• Proposition 2.8. For almost every u under n+, there exists a unique 0 ≤ t ≤ R(u) such that S(t•) = S. Moreover, S = =(z•) where z• = u(t•).  (t•)  (t•) We call ea the locally largest excursion and Ξ(a) = ∆ea 0≤a≤=(z•) 0≤a≤=(z•) the locally largest fragment.

Thus Ξ is the length of the excursion which is locally the largest, meaning that at any level a where the locally largest excursion splits, Ξ is larger (in absolute value) than the length of the other excursion. See Figure4 for a picture of z•. Following [5], we will see it as the Eve particle of our growth-fragmentation process.

Figure 4: The locally largest excursion.

Proof. Existence. We deal with the excursions u satisfying the following properties, which happen n+-almost everywhere : u has no loop above any horizontal level (see Proposition 2.7) and y has distinct local minima. Take a convergent sequence (tn, n ≥ 1) • such that S(tn) converges to S, and denote by t the limit of tn. We have necessarily, • by definition of S(t), that y(tn) ≥ S(tn). By continuity of y, we get that y(t ) ≥ S. • • Take a < S. For n large enough, since a < y(t ), we observe that tn and t are in • (t ) (tn) (t ) 0 (t•) 0 the same excursion above a, i.e. ea = ea . For such n, F n (a ) = F (a ) for 0 0 (t•) 0 all a ≤ a. Moreover, for n large enough, S(tn) > a hence for all a ≤ a, F (a ) = • • F (tn)(a0) ≥ F (tn)(a0−)−F (tn)(a0) = F (t )(a0−)−F (t )(a0) . It implies that S(t•) ≥ a, hence S(t•) ≥ S by taking a arbitrarily close to S. We found t• such that S(t•) = S. We show that y(t•) = S. Notice that, for all 0 ≤ t ≤ R(u), by right-continuity of F (t), the set

n 0 (t) 0 (t) 0− (t) 0 o A(t) := 0 ≤ a ≤ y(t), ∀ 0 ≤ a ≤ a, F (a ) ≥ F (a ) − F (a )

9 (t) is open in [0, y(t)]. Indeed, for a < y(t), ea cannot be an excursion with size 0 by assumption, and so by right-continuity, we can take δ > 0 such that on [a, a + δ], F (t)  3 (t) 3 (t)  (t) takes values in 4 F (a), 2 F (a) (in the case F (a) > 0, without loss of generality). 0 (t) 0 3 (t) 3 2 (t) 0− 1 (t) 0− For such a δ, and for any a ∈ [a, a+δ], F (a ) > 4 F (a) > 4 3 F (a ) = 2 F (a ), and F (t)(a0−) ≥ 0. These two inequalities imply that |F (t)(a0)| ≥ |F (t)(a0−) − F (t)(a0)|. Now suppose that S < y(t•) and let us find a contradiction. We have A(t•) = [0,S), • (t•) (t•) − (t•) (t ) hence |F (S)| < |F (S ) − F (S)|. Write ea = uI with I = (ia,−, ia,+), so that (t•) (t•) F (a) = x(ia,+) − x(ia,−). Since F jumps at S, either i·,− or i·,+ jumps at S. Both cases cannot happen at the same time because local minima of y are all distinct. Suppose for example that iS−,− < iS,−. Take t ∈ (iS−,−, iS,−) (see Figure5). We have F (t)(a) = F (t•)(a) for all a < S and

(t) F (S) = x(iS,−) − x(iS−,−) = x(iS−,+) − x(iS−,−) − (x(iS,+) − x(iS,−)) • • = F (t )(S−) − F (t )(S) • = F (t)(S−) − F (t )(S).

Figure 5: Construction of the locally largest excursion.

We deduce that |F (t)(S)| = |F (t•)(S−) − F (t•)(S)| > |F (t•)(S)| = |F (t)(S−) − F (t)(S)|. Then A(t) is open in [0, y(t)], contains S, and we have y(t) > S. Hence sup A(t) > S which gives the desired contradiction. Uniqueness. Suppose that S(t) = S(t0) = S with t < t0 and let us find again a 0 0 contradiction. We showed that necessarily, y(t) = y(t ) = S. Let tm ∈ [t, t ] such 0 0 that y(tm) = min {y(r), r ∈ [t, t ]}. Set am := y(tm). Observe that t and t cannot be starting times or ending times of an excursion of y (otherwise we could have extended the locally largest fragment inside this excursion for some positive height). Hence am < S. At level am, there must be a splitting into two excursions (one straddling time 0 t, the other t ) with equal size. It happens on a negligible set under n+. To see it, we can restrict to t < t0 rationals and use the Markov property at time t0.

10 2.6 Disintegration of Itô’s measure over the size of the excursions

We are interested in conditioning Itô’s measure of excursions in H on their initial size, i.e. in fixing the value of x(R(u)) = z. This will allow us to define probability measures γz which disintegrate n+ over the value of the endpoint z. Properties will a→b simply transfer from n+ to γz via the disintegration formula. Define Pr as the law of the one-dimensional of length r between a and b, and Πr as the law of a three-dimensional Bessel (BES3) bridge of length r from 0 to 0.

Proposition 2.9. We have the following disintegration formula Z dz n+ = 2 γz, (6) R 2πz where for z 6= 0, Z −1/2v e 0→z γz = dv 2 Pvz2 ⊗ Πvz2 . (7) R+ 2v Proof. Let f and g be two nonnegative measurable functions defined on X and X0 respectively. Thanks to Itô’s description of n+ (see [16], Chap. XII, Theorem 4.2), we have Z Z  R(y)  f(x)g(y) n+(dx, dy) = f(x)g(y) n+(dy)P X ∈ dx U U Z Z dr r = √ f(x)Πr[g] (X ∈ dx) . 3 P R+ 2 2πr X

Now, decomposing on the value of the Gaussian r.v. Xr yields

Z Z Z −z2/2r dr e 0→z f(x)g(y) n+(dx, dy) = √ dz √ Πr[g] E [f] . 3 r U R+ 2 2πr R 2πr We finally perform the change of variables v(r) = r/z2 to get

Z Z Z −1/2v dz e 0→z f(x)g(y) n+(dx, dy) = 2 dv 2 Evz2 [f]Πvz2 [g]. U R 2πz R+ 2v

Lemma 2.10. Let z be a nonzero real number. The image measure of γz by the function which sends (x, y) to ! x(tz2) y(tz2) R(u) , , 0 ≤ t ≤ , z |z| z2 is γ1.

3 Proof. It comes from the definition of γz and the scaling property of BES bridge and Brownian bridge.

2.7 The metric space of excursions in H

Very often, results under γz can be obtained by proving the analog under the Itô’s measure n+, and then disintegrating over z = x(R(u)). This usually provides results under γz for Lebesgue-almost every z > 0, and so we would like to study the continuity + of z 7→ γz. This requires to define a topology on the space of excursions U . All these results will be stated for z > 0 because the scaling depends on the sign of the endpoint (Lemma 2.10), but they all extend to the general case.

11 We therefore introduce the usual distance

d(u, v) = |R(u) − R(v)| + sup |u(t ∧ R(u)) − v(t ∧ R(v))|, t≥0 where we identified δ with the excursion with lifetime 0. The distance d makes U + into a Polish space. The following lemmas may come in useful.

Lemma 2.11. The map ∆ : u ∈ U + 7→ ∆u = x(R(u)) is continuous.

Proof. This is straightforward since |x(R(u))−x0(R(u0))| = |u(R(u))−u0(R(u0))| ≤ d(u, u0) for u = (x, y) and u0 = (x0, y0).

+ ∗ (z) 2 2 2 Lemma 2.12. Let u ∈ U . Then z ∈ R+ 7→ u := zu(·/z ) = zu(t/z ), 0 ≤ t ≤ R(u)z is a continuous function.

Proof. Let z0 > 0. Then for all z > 0  t   t  (z) (z0) 2 2 d(u , u ) = R(u)|z − z0| + sup zu 2 ∧ R(u) − z0u 2 ∧ R(u) . t≥0 z z0 The second term is     t t sup zu 2 ∧ R(u) − z0u 2 ∧ R(u) t≥0 z z0       t t t ≤ z sup u 2 ∧ R(u) − u 2 ∧ R(u) + sup (z − z0)u 2 ∧ R(u) t≥0 z z0 t≥0 z     t t ≤ z sup u 2 ∧ R(u) − u 2 ∧ R(u) + |z − z0| sup |u(t)|. t≥0 z z0 t≥0 We conclude by using the uniform continuity of u.

If we equip the set P(U +) of probability measures on U + with the topology of weak convergence, we have the following result.

∗ Proposition 2.13. The map z ∈ R+ 7→ γz is continuous. Proof. Let G be a continuous bounded function on U +. Then by scaling (Lemma 2.10), for all z > 0, h (z) i γz(G) = γ1 G(u ) . Applying Lemma 2.12 together with the dominated convergence theorem yields the desired result.

Also, we will use the continuity of the excursions cut at horizontal levels. Recall from Section 2.3 that I(a) is the set of times when the excursion u ∈ U + lies above a, and for each connected component I of I(a), uI denotes the associated excursion above a. The path uI is an excursion above a, I is the time interval of uI , and the size or length of uI is the difference between its endpoint and its starting point. On {Ta < ∞}, we rank the excursions above a according to the absolute value of a,+ a,+ a,+ a,+ their size. Write z1 = z1 (u), z2 = z2 (u),... for the sizes, ranked in descending a,+ a,+ a,+ a,+ order of their absolute value, and e1 = e1 (u), e2 = e2 (u),... for the correspond- ing excursions. This is possible since for any fixed ε > 0 there are only finitely many excursions with length larger than ε in absolute value.

a,+ Proposition 2.14. Let a > 0 and z > 0. For any i ≥ 1, the function ei is continuous + on U on the event {Ta < ∞} outside a γz-negligible set.

12 Proof. We consider the set E of trajectories u = (x, y) such that Ta < ∞ and sat- isfying the following conditions, which occur with γz-probability one when conditioned on touching a: the level a is not a local minimum for y, there exist infinitely many excursions above a, all excursions touch a only at their starting point and endpoint, a,+ the sizes (zi , i ≥ 1) of the excursions are all distinct. Let i ≥ 1 and u = (x, y) ∈ E . (a,+) We want to show that ei is continuous at u. a,+ (t) a,+ Let t be a time in the excursion ei , i.e. such that y(t) > a and ea = ei . We restrict our attention to u0 = (x0, y0) ∈ E close enough to u so that y0(t) > a and we 0(t) 0 will write ea for the excursion of u corresponding to t. Let ε > 0.

• First, we want to find δ > 0 such that, whenever d(u, u0) < δ, the durations (t) 0(t) 0(t) (t) of the excursions ea and ea are close, namely |R(ea ) − R(ea )| < ε. Write 0 0 (i−(a), i+(a)), and (i−(a), i+(a)), for the excursion time intervals corresponding (t) 0(t) (t) to ea and ea respectively. For simplicity, we take the notation R = R(ea ) 0 0(t) and R = R(ea ). Since a is not a local minimum for y, there exist times t1 ∈ ε ε (i−(a) − 2 , i−(a)) and t2 ∈ (i+(a), i+(a) + 2 ) when y is strictly below a. Take 0 0 0 δ1 ∈ (0, a) such that y(t1) and y(t2) are in (0, a − δ1). Let u = (x , y ) ∈ E 0 δ1 0 δ1 such that d(u, u ) < 2 . We deduce that y (t1) < y(t1) + 2 < a and similarly 0 0 ε 0 ε y (t2) < a. This implies that i−(a) ≥ t1 > i−(a) − 2 and i+(a) ≤ t2 < i+(a) + 2 . ε ε Likewise, pick two times t3 ∈ (i−(a), i−(a) + 2 ) and t4 ∈ (i+(a) − 2 , i+(a)) such (t) that t3 < t < t4. Since the excursion ea touches level a only at its extremities, the distance between the compact u([t3, t4]) and the closed set {=(z) = a} is positive, and so, on the interval [t3, t4], y remains above, say, a + δ2 where δ2 > 0. 0 δ2 0(t) 0 ε Then when d(u, u ) < 2 , the excursion ea will satisfy i−(a) < t3 < i−(a) + 2 0 ε 0 δ1 δ2 and i+(a) > t4 > i+(a) − 2 . Therefore, when d(u, u ) < δ = min( 2 , 2 ), we get 0 ε 0 ε 0 that |i−(a) − i−(a)| < 2 and |i+(a) − i+(a)| < 2 , so in particular |R − R| < ε. Observe that we not only proved that the durations are close, but also that the 0 0 times i−, i− (and i+, i+) are close, and this will be useful in the remainder of the proof.

• Secondly, we show that we can take δ0 > 0 small enough so that

(t) 0(t) 0 sup |ea (s ∧ R) − ea (s ∧ R )| < ε, s≥0 whenever d(u, u0) < δ0. Take η = η(ε) > 0 some modulus of uniform continuity of u with respect to ε. The previous paragraph gives the existence of δ > 0 such that when u0 ∈ E and 0 0 0 d(u, u ) < δ, |i−(a) − i−(a)| < η/3 and |i+(a) − i+(a)| < η/3. Without loss of generality, we can assume that δ < ε. Define δ0 = min(δ, η), and let u0 ∈ E such that d(u, u0) < δ0. For all s ≥ 0, we have

(t) 0(t) 0 |ea (s ∧ R) − ea (s ∧ R )| 0 0 0 0 0 = u(i−(a) + (s ∧ R)) − u(i−(a)) − u (i−(a) + (s ∧ R )) + u (i−(a)) 0 0 0 0 ≤ u(i−(a)) − u (i−(a)) + u(i−(a) + (s ∧ R)) − u(i−(a) + (s ∧ R )) . (8) Now,

0 0 0 0 0 0 u(i−(a)) − u (i−(a)) ≤ u(i−(a)) − u(i−(a)) + u(i−(a)) − u (i−(a)) , and so by uniform continuity of u and because d(u, u0) < δ0 < ε, we obtain

0 0 u(i−(a)) − u (i−(a)) ≤ 2ε. (9)

13 Similarly, the second term of (8) is

0 0 0 u(i−(a) + (s ∧ R)) − u (i−(a) + (s ∧ R )) 0 0 ≤ u(i−(a) + (s ∧ R)) − u(i−(a) + (s ∧ R )) 0 0 0 0 0 + u(i−(a) + (s ∧ R )) − u (i−(a) + (s ∧ R )) ,

0 0 and since |i−(a) + (s ∧ R) − i−(a) − (s ∧ R )| < η, we can conclude in the same way that 0 0 0 u(i−(a) + (s ∧ R)) − u (i−(a) + (s ∧ R )) ≤ 2ε. (10) Inequalities (8), (9) and (10) give

(t) 0(t) 0 |ea (s ∧ R) − ea (s ∧ R )| ≤ 4ε,

which is the desired result.

(t) So far, we proved that ea is continuous at u. To conclude, we need an argument to say that this is the i-th excursion above a for u0 sufficiently close to u.

00 0a,+ 0(t) • Finally, we show that we can take δ > 0 small enough so that ei = ea whenever d(u, u0) < δ00. This is derived in two steps.

0 0 - Step 1: Let η > 0, and introduce, for u ∈ E , the number Nη(u ) of time 0 intervals (i−, i+) of excursions of u above a such that i+ − i− > η. Note 0 R(u0) that Nη(u ) ≤ η < ∞. We take η such that u has no excursion time interval above a satisfying i+ − i− = η. The first step consists in proving 0 0 that for u ∈ E sufficiently close to u, Nη(u ) = Nη(u). From the first point 0 (applied Nη(u) times), we know that for δ > 0 small enough, Nη(u ) ≥ 0 0 Nη(u) whenever d(u, u ) < δ. To prove that Nη(u ) ≤ Nη(u) holds as well when δ is sufficiently small, we use an argument by contradiction and we consider a sequence (un)n≥1 of elements in E such that d(u, un) → 0 and Nη(un) ≥ Nη(u) + 1. Consider Nη(u) + 1 distinct excursion time intervals (n) (n) (n) (n) (ij,−, ij,+), 1 ≤ j ≤ Nη(u) + 1, of un above a such that ij,+ − ij,− > η. We (n) (tj ) (n) can write the corresponding excursions ea (un) for some tj ’s. Moreover, (n) (n) (n) (n) (n) we may take tj such that |ij,+ − tj | > η/2 and |ij,− − tj | > η/2. Since |R(u) − R(un)| → 0, we can assume (up to some extraction) that when n (n) (n) (n) goes to infinity, ij,+ → ij,+, ij,− → ij,− and tj → tj ∈ [0,R(u)], for some ij,+, ij,−, tj ∈ [0,R(u)]. From un → u, we deduce that for all j, y(ij,−) = a (n) (n) (n) (n) and y(ij,+) = a. For n large enough, because ij,+ −ij,− > η and |ij,± −tj | > (n) (tj ) (tj ) (tj ) η/2, we have ea (un) = ea (un). Now consider ea (u). From the two (tj ) (tj ) previous points, ea (un) → ea (u). For any time s ∈ (i−, i+), we have y(s) > a (otherwise a would be a local minimum of y). Hence (ij,−, ij,+) is an excursion time interval for u and ij,+ −ij,− > η. Therefore we constructed Nη(u) + 1 distinct excursion time intervals above a for u, which gives the desired contradiction. a,+ a,+ zi - Step 2: Suppose for example that zi > 0. Take δ < 6 and η = η(δ) > 0 some modulus of uniform continuity for u with respect to δ. We can assume in order to apply Step 1 that η is such that u has no excursion above a satisfying |i+ − i−| = η. We look at the N := Nη(u) excursions e1, . . . , eN of

14 u above a (ranked by decreasing order of the absolute value of their sizes) such that |i+ − i−| > η, and denote their sizes by z1, . . . , zN . Observe that a,+ a,+ the first i excursions among these are the excursions e1 , . . . , ei . Indeed, if |i+ − i−| ≤ η, then by uniform continuity,

a,+ |u(i+) − u(i−)| ≤ δ < zi .

0 1 Let ε = 2 (min1≤k≤N−1 |zk+1 − zk| ∧ zi) (this is positive since all the sizes are assumed to be distinct in E ). Take times t1, . . . , tN in the excursion time intervals of e1, . . . , eN . Thanks to Step 1 and the first point of the proof (applied N times), there exists δ0 > 0 such that for d(u, u0) < δ0, if we 0(tk) 0(tk) 0(tk) denote by (i− , i+ ) the excursion time interval of ea , 1 ≤ k ≤ N, then 0 (i) Nη(u ) = N, 0(tk) (ii) the excursions ea , 1 ≤ k ≤ N, are distinct, 0(tk) 0(tk) (iii) ∀ 1 ≤ k ≤ N, |i+ − i− | > η, a,+ 0(tk) 0 (iv) ∀ 1 ≤ k ≤ N, |zk − ∆ea | ≤ ε . 0 0(tk) An easy calculation shows that by our choice of ε and (iv), the ∆ea , 1 ≤ k ≤ N, are ranked in decreasing order, and that

za,+ ∀ 1 ≤ k ≤ i, ∆e0(tk) > i . (11) a 2

0(tk) In addition, by (i), (ii) and (iii), the ea , 1 ≤ k ≤ N, are the excursions of 0 u above a satisfying |i+ − i−| > η. Now set δ00 = min(δ, δ0) and assume that d(u, u0) < δ00. Then for all 1 ≤ k ≤ 0(tk) a,+ 0 0 i, ea = ek (u ). Indeed, if (i−, i+) is an excursion time interval of u such that |i+ − i−| ≤ η, then

0 0 0 0 |u (i+) − u (i−)| ≤ |u (i+) − u(i+)| + |u(i+) − u(i−)| + |u(i−) − u (i−)| ≤ 3δ,

a,+ 0 0 zi and so in particular |u (i+) − u (i−)| < 2 . This proves that the first i 0 excursions of u are among the N previous excursions satisfying |i+ −i−| > η. 0(tk) a,+ 0 Since these are ranked in decreasing order, necessarily ea = ek (u ) for all 1 ≤ k ≤ i, which concludes the proof.

a,+ Putting these three points together, we proved that ei is continuous on E which has full probability under γz, hence Proposition 2.14.

3 Markovian properties

In this section, we are interested in Markovian properties of excursions cut at horizontal levels. Time will therefore be indexed by the height a of the cutting.

3.1 The branching property for excursions in H

Consider an excursion under the measure γz. Then cutting it at some height a > 0 yields a family of excursions above a as defined in Section 2.3. Our aim is to show that conditionally on what happens below a, these are independent and distributed according to the measures γz, where z is the size of the corresponding excursion. We shall first consider the case when the original excursion is taken under the Itô’s measure n+ in H, and then transfer the property to γz by the previous disintegration result (6).

15 0 Let Ga be the σ–field containing all the information of the trajectory below level a 0 and Ga be the completion of Ga with the n+–negligible sets. In other words, the σ–field 0 Ga is generated by the trajectory u once you cut out the excursions above a, and close up the time gaps. A formal definition of this process is the process u indexed by the R t 1 generalized inverse of t 7→ 0 {u(s)≤a}ds.

Figure 6: The excursion process above a.

a,+ a,+ Recall from Section 2.7 that z1 , z2 ,... are the sizes of the excursions above a, a,+ a,+ ranked in decreasing order of their absolute value, and e1 , e2 ,... are the correspond- ing excursions.

Proposition 3.1. (Branching property for excursions in H under n+) + For any A ∈ Ga, and for all nonnegative measurable functions G1,...,Gk : U → R+, k ≥ 1,

k ! k ! Y a,+ Y 1 1 1 1 a,+ n+ {Ta<∞} A Gi(ei ) = n+ {Ta<∞} A γ [Gi] . (12) zi i=1 i=1

Proof. Lemma 2.4 ensures that on the event {Ta < ∞}, the trajectory u after time Ta has the law of a killed Brownian motion. Excursion theory tells us that given the excursions below a, the excursions above a form a Poisson point process on U + with intensity L n+(du), where L is the total local time at level a, see Figure6. Finally, a,+ conditionally on the sizes (zi )i≥1 of the excursions above a, these excursions are independent with law γ a,+ . We deduce the proposition since the σ-field Ga is generated zi a,+ by FTa , the excursions below a, and the sizes (zi )i≥1.

We can now transfer this property to the probability measures γz.

Proposition 3.2. (Branching property for excursions in H under γz) Let z ∈ R \{0}. For any A ∈ Ga, and for all nonnegative measurable functions + G1,...,Gk : U → R+, k ≥ 1,

k ! k ! Y a,+ Y 1 1 1 1 a,+ γz {Ta<∞} A Gi(ei ) = γz {Ta<∞} A γ [Gi] . zi i=1 i=1

Proof. It suffices to prove the proposition for bounded continuous functions G1,...,Gk : + U → R+, k ≥ 1. Take a nonnegative measurable function f : R → R+ and a bounded

16 + continuous function h : U → R+ which is Ga−measurable. Observe that x(R(u)) is Ga−measurable as a function of u. From Proposition 3.1, we know that

k ! k ! Y a,+ Y 1 1 a,+ n+ {Ta<∞}h(u)f(x(R(u))) Gi(ei ) = n+ {Ta<∞}h(u)f(x(R(u))) γ [Gi] . zi i=1 i=1

Thanks to the disintegration formula (6), we can split n+ over the size:

Z k ! Z k ! dz Y a,+ dz Y 1 1 a,+ f(z) γz {Ta<∞}h Gi(ei ) = f(z) γz {Ta<∞}h γz [Gi] . 2πz2 2πz2 i R i=1 R i=1

Since this holds for any f, it entails for Lebesgue-almost every z ∈ R,

k ! k ! Y a,+ Y 1 1 a,+ γz {Ta<∞}h Gi(ei ) = γz {Ta<∞}h γ [Gi] . (13) zi i=1 i=1 To prove that this holds for all z, we need a continuity argument. We first treat the case z = 1. Using the scaling property 2.10 of the measures γz, for z > 0 the left-hand side of (13) is k ! γ 1 h(u(z)) Y G (ea,+(u(z))) 1 {Ta/z<∞} i i i=1 where we recall from Lemma 2.12 that u(z) = zu(·/z2). The right-hand side term, on the other hand, is

k ! k ! Y (z) Y 1 a,+ 1 a,+ γz {Ta<∞}h γ [Gi] = γ1 {T <∞}h(u ) γ (z) [Gi] , zi a/z zi (u ) i=1 i=1 and so (13) translates into

k ! k ! 1 (z) Y a,+ (z) 1 (z) Y γ1 {T <∞}h(u ) Gi(ei (u )) = γ1 {T <∞}h(u ) γ a,+ (z) [Gi] , a/z a/z zi (u ) i=1 i=1 (14) for Lebesgue-almost every z > 0. In particular this is true for a dense set of z. Taking z & 1 along some decreasing sequence, we first get that u(z) → u by Lemma 2.12 and Ta/z → Ta by left-continuity of the stopping times. In addition, for all 1 ≤ i ≤ a,+ (z) a,+ a,+ (z) a,+ (z) k, zi (u ) → zi (u) γ1-almost surely because z → zi (u ) = ∆ei (u ) is a continuous function (outside a negligible set) by Lemmas 2.11, 2.12 and Proposition

2.14. Finally, by continuity of z 7→ γz (Lemma 2.13), for all 1 ≤ i ≤ k, γ a,+ (z) [Gi] → zi (u ) γ a,+ [Gi]. Applying the dominated convergence theorem to both sides of equation (14) zi triggers k ! k ! Y a,+ Y 1 1 a,+ γ1 {Ta<∞}h Gi(ei ) = γ1 {Ta<∞}h γ [Gi] . zi i=1 i=1 and concludes the proof of Proposition 3.2 for z = 1. The general case follows by scaling.

3.2 The locally largest evolution Recall that Proposition 2.8 gives a canonical choice of excursion at level a > 0, which is (t•) the locally largest excursion ea . One may wonder whether the locally largest fragment (t•) Ξ(a) = ∆ea still exhibits some kind of Markovian behavior. The following theorem answers this question.

17 Theorem 3.3. Let z > 0. Under γz, (Ξ(a))0≤a<=(z•) is distributed as the positive self-similar Markov process (Za)0≤a<ζ with index 1 starting from z whose Lamperti representation is −1 Za = z exp(ξ(τ(z a))), qξ(1) where ξ is the Lévy process with Laplace exponent Ψ(q) := ln γz[e ] given by Z −y 4 2 qy y e dy Ψ(q) = − q + (e − 1 − q(e − 1)) y 2 , q < 3, (15) π π y>− ln(2) (e − 1) τ is the time change  Z s  τ(a) = inf s ≥ 0, eξ(u)du > a , 0 and ζ = inf{a ≥ 0,Za = 0}. Recall the notation (2)-(5). We set     t, t,← t,← t,→ t,→ t,← t,← t,← t,← ua := (u (s + Ta ))s≥0, (u (s + Ta ))s≥0 − u (Ta ), u (Ta ) ,

t, with the convention that ua is a cemetery function if y(t) < a. We shall use the following lemma. Note that the lemma does not disintegrate the • t , law of ua on the measures γz, and one has to be careful not to confuse the z appearing in the integral with the value of x(R(u)) (the reader should keep track of x(R(u)) in the proof).

0 0 Lemma 3.4. Let (X,Y ) and (X ,Y ) be under P two independent planar Brownian 0 motions starting from the origin, and for b ≤ 0, Tb and Tb their respective hitting times 0 of {=(z) = b}, with Teb and Teb denoting the hitting times of {=(z) < b}. For α ≥ a ≥ 0, and z ∈ R we set   0 0 0 E := z + (X 0 − X ) ≥ (X − X ) − (X 0 − X ) , ∀ b ∈ [−α, −α + a] . α,a,z T Tb 0 T T Tb b Teb eb b Then, for any nonnegative measurable function H,

• Z dz t , 1 n+[H(ua ) {a<=(z•)}] = 2 h(−a, z), z 2πz where h is :

h  0 0  i h(−a, z) := H (X ,Y ) , (z + X ,Y ) 0 , E . E s s s∈[0,T−a] s s s∈[0,T−a] a,a,z Remark. Observe that the process (Ξ(a0), a0 ≤ a) is measurable with respect • t , to ua . We denote by D the space of càdlàg real-valued paths with finite lifetime, endowed with the local Skorokhod topology. It results from the lemma that for any nonnegative measurable function on D, Z 1 dz n+[H(Ξ(b), b ∈ [0, a]) {a<=(z•)}] = 2 h(−a, z), z 2πz where h is now :

h  0  i h(−a, z) := H z + X 0 − XT−a+b , b ∈ [0, a] , Ea,a,z . E T−a+b

(t•) Proof. Integrating over the duration of the excursion ea , we see that for n+−almost every u ∈ U +,

R(u) • Z 1 t , 1 t, 1 H(ua ) {a<=(z•)} = H(ua ) (t•) (t) dt. {y(t)>a, ea =ea } (t) 0 R(ea )

18 With Bismut’s description of n+ (Proposition 2.6, cf. Figure3), we get

  0 0   • Z H (X, Y ), (X , Y ) t , 1 n+[H(ua ) {a<=(z•)}] = dα E  0 , Eα,a,0 , α>a T−α+a + T−α+a where

(X, Y ) = ((X,Y )(s + T−α+a))s∈[0,T−α−T−α+a] − (X,Y )(T−α+a), 0 0 0 0 0 (X , Y ) = ((X ,Y )(s + T )) 0 0 − (X,Y )(T ), −α+a s∈[0,T−α−T−α+a] −α+a with the notation (X,Y )(s) = (Xs,Ys). By the strong Markov property at times T−α+a 0 and T−α+a, the former integral can be expressed as

Z " #  0  1 dα E h −a, XT 0 − XT−α+a 0 , α>a −α+a T−α+a + T−α+a for h defined as

h  0 0  i h(−a, z) := H (X ,Y ) , (z + X ,Y ) 0 , E . E s s s∈[0,T−a] s s s∈[0,T−a] a,a,z See Figure3. By a change of variables, the former integral is

Z " # 0 1 dα E h(−a, XT 0 − XT−α ) 0 . α≥0 −α T−α + T−α Therefore, we proved that " # • Z 1 t , 1 0 n+[H(ua ) {a<=(z•)}] = dα E h(−a, XT 0 − XT−α ) 0 . α≥0 −α T−α + T−α

On the other hand, using again Bismut’s decomposition of n+, we see that (actually for any h), ! Z R(u) 1 n+[h(−a, x(R(u)))] = n+ h(−a, x(R(u)) dt 0 R(u) Z " #  0  1 = dα E h −a, XT 0 − XT−α 0 . α≥0 −α T−α + T−α Comparing the last two equations, we proved that

• Z dz t , 1 n+[H(ua ) {a<=(z•)}] = n+[h(−a, x(R(u)))] = 2 h(−a, z), z 2πz by Proposition 2.9. We now come to the proof of Theorem 3.3. We closely follow the strategy of Le Gall and Riera in [15]. Proof. Let H be a nonnegative bounded continuous function on D. From the previous Lemma 3.4, or rather from the Remark following its statement, we know that Z 1 dz n+[H(Ξ(b), b ∈ [0, a]) {a<=(z•)}] = 2 h(−a, z), z 2πz where h is:

h  0  i h(−a, z) = H z + X 0 − XT−a+b , b ∈ [0, a] , Ea,a,z . E T−a+b

19 Notice that, in the notation of Lemma 3.4, b 7→ X0 − X is a (càdlàg) symmetric 0 T Te−b e−b Cauchy process of Laplace exponent ψ(λ) = −2|λ| (for example, use that it is a Lévy process and Proposition 3.11 of [16], Chap. III). Denote by ηb the double of the Cauchy process which under Pz, starts from z, and ∆ηb the jump at time b. Write ηˆb = η(a−b)− for the time-reversal of η, ∆ˆηb being the jump of ηˆ at time b. Then by definition of h,

h 1 i h(−a, z) = Ez H(ˆηb, b ∈ [0, a]) {∀ b∈[0,a], |ηˆb|≥|∆ˆηb|} .

Now we want to reverse time in the function h. Conditioning on ηa,

h(−a, z) 1 Z 2adx = E [H(ˆη , b ∈ [0, a]) 1 |η = x]. 2 2 z b {∀ b∈[0,a], |ηˆb|≥|∆ˆηb|} a π R (2a) + (x − z) By Corollary 3, Chap. II of [2]:

1 Ez[H(ˆηb, b ∈ [0, a]) {∀ b∈[0,a], |ηˆb|≥|∆ˆηb|}|ηa = x] 1 = Ex[H(ηb, b ∈ [0, a]) {∀ b∈[0,a], |ηb|≥|∆ηb|}|ηa = z].

Indeed, the Cauchy process η is symmetric, hence is itself its dual. We obtain Z dz 2 h(−a, z) = R 2πz Z dz 1 Z 2adx E [H(η , b ∈ [0, a]) 1 |η = z]. 2 2 2 x b {∀ b∈[0,a], |ηb|≥|∆ηb|} a R 2πz π R (2a) + (x − z) We can rewrite it as " # Z dx 1 Z 2adz x2 E H(η , b ∈ [0, a]) 1 η = z , 2 2 2 x 2 b {∀ b∈[0,a], |ηb|≥|∆ηb|} a R 2πx π R (2a) + (x − z) z which is " # Z dx x2 E H(η , b ∈ [0, a]) 1 . 2 x 2 b {∀ b∈[0,a], |ηb|≥|∆ηb|} R 2πx (ηa)

Now this gives the law of Ξ under the disintegration measures γx. Indeed, take instead of H some nonnegative measurable function f of the initial size Ξ(0), multiplied by H. Then using the above expression, we find that

n+[f(Ξ(0))H(Ξ(b), b ∈ [0, a])1{a<=(z•)}] " # Z dx x2 = f(x)E H(η , b ∈ [0, a]) 1 . 2 x 2 b {∀ b∈[0,a], |ηb|≥|∆ηb|} R 2πx (ηa)

Hence for Lebesgue-almost every x ∈ R, " # x2 γ [H(Ξ(b), b ∈ [0, a])1 • ] = E H(η , b ∈ [0, a]) 1 , x {a<=(z )} x 2 b {∀ b∈[0,a], |ηb|≥|∆ηb|} (ηa) (16) and by continuity this must hold for all x ∈ R. Indeed, by scaling, the left-hand side of equation (16) is

−1 γx[H(Ξ(b), b ∈ [0, a])1{a<=(z•)}] = γ1[H(xΞ(x b), b ∈ [0, a])1{a

20 The right-hand term can be put in the same form by using the scale invariance of the Cauchy process. Since (16) holds for almost every x, it must hold on a dense set of x > 0, and we may take x % 1 along a sequence. By dominated convergence, we get  1  γ [H(Ξ(b), b ∈ [0, a])1 • ] = E H(η , b ∈ [0, a]) 1 , 1 {a<=(z )} 1 2 b {∀ b∈[0,a], |ηb|≥|∆ηb|} (ηa) and this proves that equation (16) holds for x = 1. The general case x ∈ R follows by scaling. Notice that, almost surely, on the event {∀ b ∈ [0, a], |ηb| ≥ |∆ηb|}, if η0 > 0, then ηb is positive for all b ∈ [0, a]. We know from [8] that a symmetric Cauchy process starting from x > 0 killed when entering the negative half-line can be written using its Lamperti representation as xeξ0(τ 0(a)) where a s Z ds  Z 0  τ 0(a) := = inf s ≥ 0, xeξ (u)du ≥ a , 0 ηs 0 and (ξ0(a), a ≥ 0) is under P a Lévy process killed at an exponential time of parameter 2 π , starting from 0 with Laplace exponent Z 0 2 qy y y y −2 2 Ψ (q) = (e − 1 − q(e − 1)1|ey−1|<1)e (e − 1) dy − , −1 < q < 1. (17) π R π 0 0 0 0 0 Let ∆ξb denote the jump of ξ at time b, i.e. ∆ξb := ξb − ξb− . The following lemma is the analog of Lemma 17 in [15]. Lemma 3.5. For every a ≥ 0, set

−2ξ0 Ma = e a 1 0 . {∀ b∈[0,a], ∆ξb >− ln(2)}

Then (Ma)a≥0 is a martingale with respect to the canonical filtration of the process 0 −2ξ0 ξ . Under the tilted probability measure e a 1 0 · P , the process {∀ b∈[0,a], ∆ξb >− ln(2)} 0 (ξ (b))b∈[0,a] is a Lévy process with Laplace exponent Ψ introduced in (15) in Theo- rem 3.3. Proof. We compute

(q−2)ξ0 E[e a 1 0 ]. {∀ b∈[0,a], ∆ξb >− ln(2)} 0 Indeed, that (Ma)a≥0 is a martingale will come from the fact that ξ is a Lévy process and that the expectation above is 1 when q = 0. To compute this expectation, we decompose ξ0 into its small and large jumps parts:

0 0 00 ξa = ξa + ξa , 00 P 0 0 00 where ξ = ∆ξ 1 0 . Notice that and ξ and ξ are independent. Then a 0≤b≤a b ∆ξb ≤− ln(2) by independence, the above expectation is

0 0 (q−2)ξa (q−2)ξa E[e 1 0 ] = E[1{ξ00=0} e ] {∀ b∈[0,a], ∆ξb >− ln(2)} a 0 00 (q−2)ξa = P (ξa = 0)E[e ]. (18) Thus, we need to compute the Laplace exponents of ξ0 and ξ00 (under P ), that we denote respectively by Ψ0 and Ψ00. Because ξ00 is the pure- given by the jumps of ξ0 smaller than − ln(2), its Laplace exponent is given by the Lévy measure of ξ0 restricted to (−∞, − ln(2)], namely Z y 00 2 qy e Ψ (q) = (e − 1) y 2 dy. (19) π y≤− ln(2) (e − 1)

21 It results from the independence of ξ0 and ξ00 that the Laplace exponent of ξ0 is Ψ0 = Ψ0 − Ψ00, hence by equations (17) and (19), for all −1 < q < 1, Z 0 2 qy y y y −2 Ψ (q) = (e − 1 − q(e − 1)1{|ey−1|<1})e (e − 1) dy π y>− ln(2) Z 2 y y y −2 2 − q (e − 1)1|ey−1|<1 e (e − 1) dy − . (20) π y≤− ln(2) π The middle term in this expression (20) is Z Z y 2 y y y −2 2 e q (e − 1)e (e − 1) dy = − q y dy π y≤− ln(2) π y≤− ln(2) 1 − e 2 Z 1/2 dx = − q π 0 1 − x 2 = − q ln(2). π Hence Z 0 2 qy y y y −2 Ψ (q) = (e − 1 − q(e − 1)1|ey−1|<1)e (e − 1) dy π y>− ln(2) 2 2 + q ln(2) − . (21) π π This extends analytically to all q < 1. Let us come back to (18). We have for q < 3

(q−2)ξ0 00 (q−2)ξ0 E[e a 1 0 ] = P (ξ = 0)E[e a ] ∀ b∈[0,a], ∆ξb >− ln(2) a 00 0 = eaΨ (∞)eaΨ (q−2) Z y ! 2 e aΨ0(q−2) = exp − a y 2 dy e π y≤− ln(2) (e − 1) a(Ψ0(q−2)− 2 ) = e π , by a change of variables. This essentially concludes the calculation of the new Laplace exponent Ψe of ξ0 under −2ξ0 the tilted measure e a 1 0 · P , which is simply {∀ b∈[0,a], ∆ξb ≥− ln(2)} 2 Ψ(e q) = Ψ0(q − 2) − , q < 3. (22) π Still we can put it in a Lévy-Khintchin form. Replacing q by q − 2 in the integral in (21), we get Z −2y qy y y y −2 (e e − 1 − (q − 2)(e − 1)1|ey−1|<1)e (e − 1) dy y>− ln(2) Z qy 2y 3y 2y −y y −2 = (e − e − (q − 2)(e − e )1|ey−1|<1)e (e − 1) dy y>− ln(2) Z qy y −y y −2 = (e − 1 − q(e − 1)1|ey−1|<1)e (e − 1) dy y>− ln(2) Z −y h 2y  y 3y 2y  1 i e + 1 − e + q(e − 1) − (q − 2)(e − e ) |ey−1|<1 y 2 dy. y>− ln(2) (e − 1) After simplifications, we find that the last integral is equal to Z −y h 2y  y 3y 2y  1 i e 1 − e + q(e − 1) − (q − 2)(e − e ) |ey−1|<1 y 2 dy y>− ln(2) (e − 1)  3 = 2 + 2 ln(2) − q 2 ln(2) + . (23) 2

22 From equations (22), (20) and (23), we deduce

  Z −y 2 3 2  qy y 1  e dy Ψ(e q) = − ln(2) + q + e − 1 − q(e − 1) |ey−1|<1 y 2 . (24) π 2 π y>− ln(2) (e − 1)

Finally, we can remove the indicator using simple calculations. One finds that

Z −y y e 1 1 (1 − e ) y 2 {|ey−1|≥1}dy = − ln(2), y>− ln(2) (e − 1) 2 and therefore Z −y 4 2 qy y e dy Ψ(e q) = − q + (e − 1 − q(e − 1)) y 2 , q < 3. π π y>− ln(2) (e − 1)

Hence we recovered the expression for Ψ in the statement of Theorem 3.3 and this gives both the martingale property and the law of ξ0 under the change of measure. We finish the proof of Theorem 3.3 with the arguments of [15] that we reproduce here to be self-contained. Let x > 0. Equation (16) reads

h  0 0 i γx[H(Ξ(b), b ∈ [0, a])1{a<=(z•)}] = E Mτ 0(a)H x exp(ξ (τ (b))), b ∈ [0, a] .

The optional stopping theorem implies that for any c > 0,

h  0 0  i E Mτ 0(a)H x exp(ξ (τ (b))), b ∈ [0, a] 1{c>τ 0(a)} h  0 0  i = E McH x exp(ξ (τ (b))), b ∈ [0, a] 1{c>τ 0(a)} .

By the lemma, the right-hand side is, with the notation ξ of the theorem, h i E H (x exp(ξ(τ(b))), b ∈ [0, a]) 1{c>τ(a)} .

Making c 7→ ∞ and using dominated convergence completes the proof. In addition, in order to study the genealogy of the growth-fragmentation process linked to Brownian excursions in the next section, we need to clarify the behavior of the offspring of Ξ. By offspring we mean all the excursions that were created at times (t•) a when the excursion ea divided into two excursions (i.e. at jump times of Ξ). We rank these excursions in descending order of the absolute value of their sizes. This way we get a sequence (zi, ai)i≥1 of jump sizes and times for Ξ, associated to excursions ei, i ≥ 1, of size zi above ai.

Theorem 3.6. Let z ∈ R\{0}. Under γz, conditionally on the jump sizes and jump times (zi, ai)i≥1 of Ξ, the excursions ei, i ≥ 1, are independent and each ei has law γzi . Proof. By Lemma 3.4, we know that for all nonnegative measurable function H,

• Z dz t , 1 n+[H(ua ) {a<=(z•)}] = 2 h(−a, z), z 2πz where h is :

h  0 0  i h(−a, z) = H (X ,Y ) , (z + X ,Y ) 0 , E . E s s s∈[0,T−a] s s s∈[0,T−a] a,a,z

 •  t , Imagine that H is some functional of the offspring of Ξ below level a, say H ua = (a) (a) (a) f1(e1 ) ··· fn(en ), where the ei denote the offspring of Ξ created before a, ranked

23 (a) in descending order of the absolute value of their sizes zi , and the fi’s are taken continuous and bounded. For such a function H, h is given by

h(−a, z) = E [f1(ε1) ··· fn(εn), Ea,a,z] , where ε1, . . . , εn are the n largest excursions (before hitting {=(z) = −a}) of (X,Y ) and (X0,Y 0) above the past infimum of their imaginary parts. Consider the collection

{(b, eb ), b ∈ [−a, 0]} where eb is an excursion of the Brownian motions (X,Y ) or (X0,Y 0) above the past infimum of their imaginary parts when the infimum is equal to b (set eb = δ if no such excursion exists). A consequence of Lévy’s Theorem,

(Theorem 2.3, Chap. VI of [16]) is that the collection {(b, eb ), b ≤ 0} is a Poisson 1 point process of intensity 2 R− db n+(du). Write z(e) for the size of an excursion e, i.e. the difference between its endpoint and its starting point. Conditionally on the sizes {(b, z(eb )), b ≤ 0}, the excursions eb are distributed as independent excursions with law γ . Observe that Ea,a,z is measurable with respect to {(b, z(e )), b ≤ 0}. z(eb ) b Therefore, conditioning on the sizes of the excursions yields h i h(−a, z) = E γz(ε1)(f1) ··· γz(εn)(fn), Ea,a,z .

And so using Lemma 3.4 backwards, we get   h (a) (a) 1 i 1 n+ f1(e1 ) ··· fn(en ) {a<=(z•)} = n+ γ (a) (f1) ··· γ (a) (fn) {a<=(z•)} . z1 zn

Multiplying by a function of the endpoint x(R(u)) and disintegrating over it gives   h (a) (a) 1 i 1 γz f1(e1 ) ··· fn(en ) {a<=(z•)} = γz γ (a) (f1) ··· γ (a) (fn) {a<=(z•)} , z1 zn for Lebesgue-almost every z ∈ R. Let us prove that this holds for example when z = 1. By scaling (Lemma 2.10), for z > 0 this writes

h (a) (z) (a) (z) 1 i γ1 f1(e1 (u )) ··· fn(en (u )) {a

We then condition on the birth times of these excursions. We can apply Proposition 2.14 at different levels and Lemma 2.12 to prove that γ1-almost surely, for all 1 ≤ i ≤ n, (a) (z) (a) (a) (z)  (a) (z)  ei (u ) −→ ei (u) in U. Besides, zi (u ) = ∆ ei (u ) , so by Lemma 2.11, z%1 (a) (z) (a) zi (u ) −→ zi (u), and by continuity of z 7→ γz (Proposition 2.13), γ (a) (z) −→ z%1 zi (u ) z%1 γ (a) almost surely under γ1. An application of the dominated convergence theorem zi (u) finally gives   h (a) (a) 1 i 1 γ1 f1(e1 ) ··· fn(en ) {a<=(z•)} = γ1 γ (a) (f1) ··· γ (a) (fn) {a<=(z•)} . z1 zn

The statement follows.

24 Figure 7: Excursions of B = (X,Y ) and B0 = (X0,Y 0) above their past infimum. The past infimum process is depicted in blue, and by Lévy’s theorem the excursions above it form a Poisson point process represented in red.

3.3 A change of measures We begin by calling attention to a natural martingale associated to the growth-fragmentation process.

Proposition 3.7. Let z ∈ R\{0}. Under γz, the process 1 X a,+ 2 Ma = {Ta<∞} |∆ei | , a ≥ 0, i≥1 is a (Ga)a≥0−martingale.

Proof. The branching property 3.1 shows that it is enough to prove that γz[Ma] = z2 for all a ≥ 0. For a Brownian excursion process (es)s>0 in the sense of Definition 2.1, we use the + + shorthand 0 < s ≤ T to denote times 0 < s ≤ T such that es ∈ U . Let g : R → R+ be a nonnegative measurable function. By the Markov property at time Ta, see Lemma 2.4,    n (M g(x(R(u)))) = n 1  X |∆e |2 g(X(T )) . (25) + a +  {Ta<∞} E  s −a  + s ≤LT−a By the master formula,   " # Z T−a Z +∞ dz  X 2  2  0  E |∆es| g(X(T−a)) = E dLs z E g(z + XT−a ) 0   2 z =z+Xs + 0 −∞ 2πz s ≤LT−a " # Z T−a Z +∞ 0 dz  0  = E dLs E g(z + XT−a ) 0 −∞ 2π  1 Z +∞  = E LT−a g , 2π −∞

25 since the Lebesgue measure is an invariant measure for the Brownian motion. Fi- nally, the law of the Brownian local time LT−a at the hitting time of −a is known to be exponential with mean 2a (see for example Section 4, Chap. VI of [16]). Hence h 1 R +∞ i 1 R +∞ E LT−a 2π −∞ g = 2a × 2π −∞ g. Coming back to (25), we get  1 Z +∞  n+ (Mag(x(R(u)))) = 2a × g n+ (Ta < ∞) . 2π −∞ 1 But n+ (Ta < ∞) = n+ (sup(y) ≥ a) = 2a (see Proposition 3.6, Chapter XII, of [16]), so finally 1 Z +∞ n+ (Mag(x(R(u)))) = g. 2π −∞

Disintegrating n+ over z as in Proposition 2.9 yields Z +∞ dz 1 Z +∞ 2 g(z) γz[Ma] = g. −∞ 2πz 2π −∞ This holds for all nonnegative measurable function g, and thus for Lebesgue-almost every z ∈ R, 2 γz[Ma] = z . Recall the notation u(z) = zu(·/z2) for z > 0 from Lemma 2.12. By scaling, this means for Lebesgue-almost every z > 0,   X a,+ (z) 2 2 γ 1 2 |∆e (u )| = z , 1 {z Ta/z<∞} i i≥1 which yields   γ 1 X |∆ea,+(u(z))|2 = z2. (26) 1 {Ta/z<∞} i i≥1 Again, this must hold on a dense set of endpoints z, and thus taking z according to some sequence, Lemma 2.12 and Proposition 2.14 together with Fatou’s lemma imply that γ1[Ma] ≤ 1. This holds for all a, and so by scaling we deduce that for all z 6= 0, 2 a,+ (z) a/z,+ γz[Ma] ≤ z . On the other hand, notice that ∆ei (u ) = z∆ei (u). By the branching property under γ1 (Proposition 3.2), for a z < 1 such that equation (26) holds,

    a  X a/z,+ 2 X X z −a,+ 2 1 1 a,+ 1 1 = γ1 {T <∞} |∆ei | = γ1 {Ta<∞} γ T a <∞ |∆ej | a/z ∆ei z −a i≥1 i≥1 j≥1   1 X a,+ 2 ≤ γ1 {Ta<∞} |∆ei | . i≥1

2 Finally combining the two inequalities, we have γ1[Ma] = 1, and γz[Ma] = z by scaling.

Associated to this martingale is the change of measures

\[
\frac{d\mu_z}{d\gamma_z}\bigg|_{\mathcal G_a}=\frac{M_a}{z^2},\qquad a\ge0.
\]

We now aim to describe the law µ_z explicitly. Following Chap. 5.3 of [13], we call H-excursion a process in the upper half-plane whose real part is a Brownian motion and whose imaginary part is an independent three-dimensional Bessel process starting at 0. We also introduce, for a > 0, S_a = inf{s > 0, y(R(u) − s) = a}. We have the following characterization.

Theorem 3.8. Let z ∈ R\{0}. For any a > 0, under µ_z, (u(s))_{0≤s≤T_a} and (u(R(u) − s) − z)_{0≤s≤S_a} are two independent H-excursions stopped at the hitting time of {ℑ(z) = a}.

Through the change of measures µ_z, u therefore splits into two independent H-excursions starting at 0 and z respectively.

Proof. The theorem follows from a similar application of the master formula. Let f, g : U → R_+ be two bounded continuous functions. Then
\[
n_+\Big(f(u(s),0\le s\le T_a)\,g(u(R(u)-s),0\le s\le S_a)\,M_a\Big) \tag{27}
\]
\[
=n_+\Big(f(u(s),0\le s\le T_a)\,\mathbf 1_{\{T_a<\infty\}}\;n_+\big[g(u(R(u)-s),0\le s\le S_a)\,M_a\,\big|\,\mathcal F_{T_a}\big]\Big). \tag{28}
\]
By the master formula,

\[
n_+\big[g(u(R(u)-s),0\le s\le S_a)\,M_a\,\big|\,\mathcal F_{T_a}\big]
=\mathbb E\bigg[\int_0^{T_{-a}}dL_r\int_{-\infty}^{+\infty}\frac{dx}{2\pi}\,\mathbb E'\Big[g\big(x'+x+X_{T_{-a}-s},\,a+Y_{T_{-a}-s},\,0\le s\le S\big)\Big]_{x'=X_r}\bigg],
\]
where S := inf{s > 0, Y_{T_{-a}-s} = 0}. The change of variables x + X_r ↦ x provides

\[
\begin{aligned}
n_+\big[g(u(R(u)-s),0\le s\le S_a)\,M_a\,\big|\,\mathcal F_{T_a}\big]
&=\mathbb E\bigg[L_{T_{-a}}\int_{-\infty}^{+\infty}\frac{dx}{2\pi}\,\mathbb E\Big[g\big(x+X_{T_{-a}-s},\,a+Y_{T_{-a}-s},\,0\le s\le S\big)\Big]\bigg]\\
&=2a\times\int_{-\infty}^{+\infty}\frac{dx}{2\pi}\,\mathbb E\Big[g\big(x+X_{T_{-a}-s},\,a+Y_{T_{-a}-s},\,0\le s\le S\big)\Big].
\end{aligned}
\]

The path (X_s, 0 ≤ s ≤ T_{-a}) is, conditionally on Y, distributed as a linear Brownian motion stopped at time T_{-a} (recall that T_{-a} is a measurable function of Y). Since the Lebesgue measure is a reversible measure for Brownian motion, by time-reversal, the "law" of (x + X_{T_{-a}-s}, 0 ≤ s ≤ S) for x chosen according to the Lebesgue measure is the "law" of a linear Brownian motion with initial measure the Lebesgue measure, stopped at time S (S is measurable with respect to Y). Therefore, the integral ∫_{-∞}^{+∞} (dx/2π) E[g(...)] above is also
\[
\int_{-\infty}^{+\infty}\frac{dz}{2\pi}\,\mathbb E\Big[g\big(z+X_s,\,a+Y_{T_{-a}-s},\,0\le s\le S\big)\Big].
\]

Now we use that (a + Y_{T_{-a}-s}, 0 ≤ s ≤ S) has the law of a three-dimensional Bessel process V starting from 0 and run until its hitting time of a (call this time T_a^V), see Corollary 4.6, Chap. VII of [16]. Hence the former integral is also
\[
\int_{-\infty}^{+\infty}\frac{dz}{2\pi}\,\mathbb E\Big[g\big(z+X_s,\,V_s,\,0\le s\le T_a^V\big)\Big].
\]
Plugging this into equation (28) yields
\[
n_+\Big(f(u(s),0\le s\le T_a)\,g(u(R(u)-s),0\le s\le S_a)\,M_a\Big)
\]

\[
\begin{aligned}
&=2a\times\int_{-\infty}^{+\infty}\frac{dz}{2\pi}\,\mathbb E\Big[g\big(z+X_s,V_s,0\le s\le T_a^V\big)\Big]\;n_+\Big(f(u(s),0\le s\le T_a)\,\mathbf 1_{\{T_a<\infty\}}\Big)\\
&=\int_{-\infty}^{+\infty}\frac{dz}{2\pi}\,\mathbb E\Big[g\big(z+X_s,V_s,0\le s\le T_a^V\big)\Big]\times n_+\Big(f(u(s),0\le s\le T_a)\,\Big|\,T_a<\infty\Big).
\end{aligned}
\]

Moreover, using for example Williams' description of Itô's measure n_+ (Theorem 4.5, Chap. XII, in [16]), the law of (u(s), 0 ≤ s ≤ T_a) conditionally on {T_a < ∞} is that of (X_s, V_s, 0 ≤ s ≤ T_a^V). We eventually get
\[
n_+\Big(f(u(s),0\le s\le T_a)\,g(u(R(u)-s),0\le s\le S_a)\,M_a\Big)
\]

\[
=\int_{-\infty}^{+\infty}\frac{dz}{2\pi}\,\mathbb E\Big[g\big(z+X_s,V_s,0\le s\le T_a^V\big)\Big]\times\mathbb E\Big[f\big(X_s,V_s,0\le s\le T_a^V\big)\Big].
\]

Finally, we disintegrate n_+ over x(R(u)) to get
\[
\int_{-\infty}^{+\infty}\frac{dz}{2\pi}\,\gamma_z\Big[f(u(s),0\le s\le T_a)\,g(u(R(u)-s),0\le s\le S_a)\,\frac{M_a}{z^2}\Big]
=\int_{-\infty}^{+\infty}\frac{dz}{2\pi}\,\mathbb E\Big[g\big(z+X_s,V_s,0\le s\le T_a^V\big)\Big]\times\mathbb E\Big[f\big(X_s,V_s,0\le s\le T_a^V\big)\Big].
\]

Now multiply g by any measurable function h : R → R_+ of x(R(u)) to see that for Lebesgue-almost every z ∈ R,
\[
\gamma_z\Big[f(u(s),0\le s\le T_a)\,g(u(R(u)-s),0\le s\le S_a)\,\frac{M_a}{z^2}\Big] \tag{29}
\]
\[
=\mathbb E\Big[g\big(z+X_s,V_s,0\le s\le T_a^V\big)\Big]\times\mathbb E\Big[f\big(X_s,V_s,0\le s\le T_a^V\big)\Big]. \tag{30}
\]
The right-hand side of this equation is a continuous function of z. Moreover, by scaling (Lemma 2.10), for z > 0 the left-hand side can be written
\[
\gamma_z\Big[f(u(s),0\le s\le T_a)\,g(u(R(u)-s),0\le s\le S_a)\,\frac{M_a}{z^2}\Big] \tag{31}
\]
\[
=\gamma_1\Big[f\big(zu(s/z^2),0\le s\le T_{a/z}z^2\big)\,g\big(zu(R(u)-\tfrac{s}{z^2}),0\le s\le S_{a/z}z^2\big)\,M_{a/z}\Big]. \tag{32}
\]
Since equality (29)-(30) holds almost everywhere, it must hold for a dense set of z > 0. Take z ↘ 1 along such a sequence. By Lemma 2.12 and the observation that T_{a/z} → T_a and S_{a/z} → S_a, we get the convergences (zu(s/z²), 0 ≤ s ≤ T_{a/z}z²) → (u(s), 0 ≤ s ≤ T_a) and (zu(R(u) − s/z²), 0 ≤ s ≤ S_{a/z}z²) → (u(R(u) − s), 0 ≤ s ≤ S_a) in U. In addition, we know that M_{a/z} → M_a almost surely and γ_1[M_{a/z}] → γ_1[M_a] (both these expressions are equal to 1 by Proposition 3.7). By Scheffé's lemma, M_{a/z} → M_a in L¹ as z ↘ 1. When z ↘ 1, this turns (32) into
\[
\gamma_z\Big[f(u(s),0\le s\le T_a)\,g(u(R(u)-s),0\le s\le S_a)\,\frac{M_a}{z^2}\Big]
\]

\[
\underset{z\searrow1}{\longrightarrow}\;\gamma_1\Big[f(u(s),0\le s\le T_a)\,g(u(R(u)-s),0\le s\le S_a)\,M_a\Big].
\]
Therefore (29)-(30) holds for z = 1, and then for any z by scaling. We have thus proved that for all z ∈ R\{0},
\[
\gamma_z\Big[f(u(s),0\le s\le T_a)\,g(u(R(u)-s),0\le s\le S_a)\,\frac{M_a}{z^2}\Big]
=\mathbb E\Big[g\big(z+X_s,V_s,0\le s\le T_a^V\big)\Big]\times\mathbb E\Big[f\big(X_s,V_s,0\le s\le T_a^V\big)\Big],
\]
which is simply

\[
\mu_z\big[f(u(s),0\le s\le T_a)\,g(u(R(u)-s),0\le s\le S_a)\big]
=\mathbb E\Big[g\big(z+X_s,V_s,0\le s\le T_a^V\big)\Big]\times\mathbb E\Big[f\big(X_s,V_s,0\le s\le T_a^V\big)\Big]. \tag{33}
\]

This proves that under µ_z, the processes (u(s))_{0≤s≤T_a} and (u(R(u) − s) − z)_{0≤s≤S_a} are independent H-excursions stopped at the hitting time of {ℑ(z) = a}.
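For illustration only (this is not part of the paper's argument), an H-excursion as defined above is easy to simulate directly from its definition: a Brownian real part paired with an independent three-dimensional Bessel imaginary part, stopped when the latter first reaches a. The following minimal Python sketch uses an Euler scheme; the function name, the step size and the safety cap are our own choices.

```python
import numpy as np

def simulate_H_excursion(a, dt=1e-4, rng=None, max_steps=10**7):
    """Crude Euler sketch of an H-excursion started at 0 and stopped at the
    hitting time of {Im = a}: the real part is a Brownian motion, the
    imaginary part an independent three-dimensional Bessel process,
    realized here as the norm of a 3-d Brownian motion started at 0."""
    rng = np.random.default_rng() if rng is None else rng
    sqdt = np.sqrt(dt)
    x = 0.0               # real part (linear Brownian motion)
    w = np.zeros(3)       # 3-d Brownian motion; its norm is a BES(3)
    path = [(0.0, 0.0)]
    for _ in range(max_steps):
        x += sqdt * rng.standard_normal()
        w += sqdt * rng.standard_normal(3)
        y = float(np.linalg.norm(w))
        path.append((x, y))
        if y >= a:        # discrete approximation of the hitting time of level a
            break
    return np.array(path)

# Example: one trajectory stopped (approximately) at the hitting time of level a = 1.
traj = simulate_H_excursion(a=1.0)
print(len(traj), traj[-1])
```

Under µ_z, Theorem 3.8 says that two such independent trajectories, the second one translated by z, describe u from both of its ends up to the hitting time of level a.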

Remark. This gives new insight into why the Cauchy process should be hidden, in some sense, in the law of Ξ: under the tilted measure, u splits into two independent H-excursions, and so the size at some level a of the spine going to infinity is just the difference of two Brownian motions started from infinity, taken at their hitting time of {ℑ(z) = a}.

4 The growth-fragmentation process of excursions in H

In this section, we summarize the previous results in the language of the self-similar growth-fragmentations introduced by Bertoin in [4]. The main reference here is [5], but for the sake of completeness we shall recall in the first paragraph the bulk of the construction of such processes. At the heart of this section lies the calculation of the cumulant function. We recover the cumulant function of [5], formula (19), in the specific case when θ = 1. Recall the definition of Z in Theorem 3.3. The process Z starting at z < 0 is defined to be the negative of the process Z starting at −z.

4.1 Construction of X

We explain how one can define the cell system driven by Z. We use the Ulam tree U = ∪_{i=0}^∞ N^i, where N = {1, 2, ...}, to encode the genealogy of the cells (we write N^0 = {∅}, and ∅ is called the Eve cell). A node u ∈ U is a list (u_1, ..., u_i) of positive integers, where |u| = i is the generation of u. The children of u are the lists in N^{i+1} of the form (u_1, ..., u_i, k), with k ∈ N. A cell system is a family X = (X_u, u ∈ U) indexed by U, where X_u = (X_u(a))_{a≥0} is meant to describe the evolution of the size or mass of the cell u with its age a.

To define the cell system driven by Z, we first define X_∅ as Z, started from some initial mass z ≠ 0, and set b_∅ = 0. Observe the realization of X_∅ and its jumps. Since Z hits 0 in finite time, we may rank the sequence of jump sizes and times

(x_1, β_1), (x_2, β_2), ... of −X_∅ by decreasing order of the |x_i|'s. Conditionally on these jump sizes and times, we define the first generation of our cell system X_i, i ∈ N, to be independent, with X_i distributed as Z starting from x_i. We also set b_i = b_∅ + β_i for the birth time of the particle i ∈ N. By recursion, one defines the law of the n-th generation given generations 1, ..., n − 1 in the same way. Hence the cell labelled by u = (u_1, ..., u_n) ∈ N^n is born from u′ = (u_1, ..., u_{n−1}) ∈ N^{n−1} at time b_u = b_{u′} + β_{u_n}, where β_{u_n} is the time of the u_n-th largest jump of X_{u′}, and conditionally on X_{u′}(β_{u_n}) − X_{u′}(β_{u_n}−) = −y, X_u has the law of Z with initial value y and is independent of the other daughter cells at generation n. We write ζ_u for the lifetime of the particle u. We may then define, for a ≥ 0,

\[
X(a):=\big(X_u(a-b_u)\,;\;u\in U\text{ and }b_u\le a<b_u+\zeta_u\big), \tag{34}
\]
as the family of the sizes of all the cells alive at time a. We arrange the elements in X(a) in descending order of their absolute values.
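As an illustration only (and not the construction used in the paper), the recursion above translates directly into algorithmic form. The sketch below assumes a hypothetical black-box sampler `sample_Z_jumps(z0)` returning the finitely many retained jumps of a path distributed as Z started from z0, as a list of (time, jump size) pairs; since Z has infinitely many small jumps, any concrete sampler has to truncate, so the output is only a truncated version of the cell system.

```python
def build_cell_system(z0, sample_Z_jumps, n_max=3):
    """Genealogy (birth time, initial mass) of the cell system down to
    generation n_max, with cells labelled by Ulam-tree labels (tuples)."""
    cells = {(): (0.0, z0)}        # () encodes the Eve cell, born at time 0 with mass z0
    current_gen = [()]
    for _ in range(n_max):
        next_gen = []
        for u in current_gen:
            b_u, x_u = cells[u]
            jumps = sample_Z_jumps(x_u)                          # [(beta, dx), ...], hypothetical sampler
            jumps.sort(key=lambda tj: abs(tj[1]), reverse=True)  # rank by decreasing absolute jump size
            for k, (beta, dx) in enumerate(jumps, start=1):
                child = u + (k,)                                 # Ulam label (u_1, ..., u_i, k)
                cells[child] = (b_u + beta, -dx)                 # child starts from minus the jump
                next_gen.append(child)
        current_gen = next_gen
    return cells
```

The dictionary keys are the Ulam-tree labels, so the quantities b_u and X_u(0) used below can be read off directly for every retained cell u.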

4.2 The growth-fragmentation process of excursions in H

We restate Theorem 1.1. Beware that the signed growth-fragmentation X in this section starts from z.

Theorem 4.1. Let z ∈ R\{0}. Under γz,

\[
(X(a),\,a\ge0)\;\overset{\text{law}}{=}\;\Big(\big(\Delta e_i^{a,+},\,i\ge1\big),\,a\ge0\Big).
\]

Proof. Let u ∈ U^+ be such that the locally largest excursion described in Subsection 3.2 is well-defined, i.e. u has no loop above any level, has distinct local minima, and no splitting into two equal sizes (this set of excursions has full probability under γ_z). This gives our Eve cell process. The independence of the daughter excursions given their size at birth has already been proved in Theorem 3.6, and we have taken Z according to the law of the largest fragment in Theorem 3.3, so it remains to prove that every excursion can be found in the genealogy of X as constructed in the previous section. For a ≥ 0, we denote by X^exc(a) the set of all excursions associated to the sizes in X(a). Let 0 ≤ t ≤ R(u) be such that ℑ(u(t)) > a. We want to show that e_a^{(t)} ∈ X^exc(a). Set
\[
A=\big\{a'\in[0,a],\;e_{a'}^{(t)}\in X^{\mathrm{exc}}(a')\big\}.
\]
Then A is an interval containing 0.

• A is open in [0, a]. Let a′ ∈ A with a′ < a. Write e_b^{(τ•)}, b ≥ a′, for the locally largest excursion inside e_{a′}^{(t)}. Then for small enough ε > 0, e_{a′+ε}^{(t)} = e_{a′+ε}^{(τ•)}. Indeed, the first height b ≥ a′ when e_b^{(t)} ≠ e_b^{(τ•)} is equal to the minimum of y(s) for s between t and τ•, and so it is strictly above a′. This implies that a′ + ε ∈ A, since e_{a′+ε}^{(τ•)} ∈ X^exc(a′ + ε) as the locally largest excursions are in the genealogy.

• A is closed in [0, a]. Let (a_n) be a sequence of elements of A increasing to a_∞. For all ε > 0, there exists δ > 0 such that
\[
\forall a'\in(a_\infty-\delta,a_\infty),\qquad\big|\Delta e_{a'}^{(t)}-\Delta e_{a_\infty-}^{(t)}\big|<\varepsilon.
\]
Then for all a_1, a_2 ∈ (a_∞ − δ, a_∞),
\[
\big|\Delta e_{a_1}^{(t)}-\Delta e_{a_2}^{(t)}\big|
\le\big|\Delta e_{a_1}^{(t)}-\Delta e_{a_\infty-}^{(t)}\big|+\big|\Delta e_{a_2}^{(t)}-\Delta e_{a_\infty-}^{(t)}\big|<2\varepsilon.
\]
Take ε = |∆e_{a_∞−}^{(t)}|/4 and N large enough so that a_N ∈ (a_∞ − δ, a_∞). Then the excursion e_{a_N}^{(t)} is such that for all a′ ∈ [a_N, a_∞), e_{a′}^{(t)} is taken along the locally largest excursion inside e_{a_N}^{(t)}. Indeed, it follows from these inequalities that for all a_1, a_2 ∈ (a_∞ − δ, a_∞), |∆e_{a_1}^{(t)} − ∆e_{a_2}^{(t)}| ≤ (1/2)|∆e_{a_∞−}^{(t)}| < |∆e_{a_1}^{(t)}|; then take a_1 = a′ and a_2 ↗ a′. This entails that a_∞ ∈ A.

By connectedness, A must be [0, a]. This concludes the proof.

4.3 The cumulant function

The process X is not a growth-fragmentation in the sense of [5] because it carries negative masses. We show in this section that if one discards all cells with negative masses together with their progeny, one obtains one of the growth-fragmentation processes studied in [5].

Formally, let X be defined by (34), where we only consider the u's such that X_v(b_v) > 0 for all ancestors v of u (including itself) in the Ulam tree. The process X is a growth-fragmentation in the sense of [5]. It is characterized by its self-similarity index α = −1 and its cumulant function defined, for q ≥ 0, by

\[
\kappa(q):=\Psi(q)+\int_{-\infty}^0(1-e^y)^q\,\Lambda(dy),
\]

where Λ denotes the Lévy measure of the Lévy process ξ. The following proposition is Proposition 5.2 of [5] in the case θ = 1, β̂ = 1 and γ = γ̂ = 1/2, with an additional factor 2 (corresponding to a time change).

Proposition 4.2. Let ω_+ = ω = 5/2, and Φ^+(q) = κ(q + ω_+) for q ≥ 0. Then Φ^+ is the Laplace exponent of a symmetric Cauchy process conditioned to stay positive, namely
\[
\Phi^+(q)=-2\,\frac{\Gamma(\tfrac12-q)\Gamma(\tfrac32+q)}{\Gamma(-q)\Gamma(1+q)},\qquad-\tfrac32<q<\tfrac12. \tag{35}
\]
Furthermore, the associated growth-fragmentation X has no killing and its cumulant function is
\[
\kappa(q)=-2\,\frac{\cos(\pi q)}{\pi}\,\Gamma(q-1)\Gamma(3-q),\qquad 1<q<3. \tag{36}
\]
Remark. In [5], the roots of κ pave the way to remarkable martingales. It should not come as a surprise that in our case these roots happen to be ω_− = 3/2 and ω_+ = 5/2. Indeed, the h-transform for the symmetric Cauchy process conditioned to stay positive (resp. conditioned to hit 0 continuously) is given by x ↦ x^{1/2} (resp. x ↦ x^{−1/2}). This turns the martingale in Proposition 3.7 into the sum over all masses in X to the power ω_+ = 2 + 1/2, respectively ω_− = 2 − 1/2, which are exactly the quantities considered in [5].
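Before turning to the proof, let us record (as an added consistency check, not taken from the paper) how (35) and (36) fit together through Euler's reflection formula Γ(z)Γ(1 − z) = π/sin(πz): taking z = 1 + q gives
\[
\Gamma(-q)\Gamma(1+q)=-\frac{\pi}{\sin(\pi q)},\qquad\text{while}\qquad\cos\big(\pi(q+\tfrac52)\big)=-\sin(\pi q),
\]
so that, for −3/2 < q < 1/2,
\[
\kappa\big(q+\tfrac52\big)
=\frac{2}{\pi}\,\sin(\pi q)\,\Gamma\big(q+\tfrac32\big)\Gamma\big(\tfrac12-q\big)
=-2\,\frac{\Gamma(\tfrac12-q)\Gamma(\tfrac32+q)}{\Gamma(-q)\Gamma(1+q)}
=\Phi^+(q).
\]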

Proof. The strategy is as follows. In view of Theorem 5.1 in [5], we first compute κ(q + ω) − κ(ω) and put it in Lévy–Khintchine form, so as to retrieve the Laplace exponent of the Lévy process involved in the Lamperti representation of a Cauchy process conditioned to stay positive, which is known from [8]. We then show that κ(ω) = 0, and therefore deduce the expression of κ. Recall first that by definition

\[
\kappa(q)=\Psi(q)+\int_{-\infty}^0(1-e^y)^q\,\Lambda(dy),
\]
with Ψ given by (15). In fact, we rather use formula (24), which is closer to [8]:

\[
\Psi(q)=-\frac2\pi\Big(\ln(2)+\frac32\Big)q+\frac2\pi\int_{y>-\ln(2)}\Big(e^{qy}-1-q(e^y-1)\mathbf 1_{\{|e^y-1|<1\}}\Big)\frac{e^{-y}\,dy}{(e^y-1)^2}.
\]

Let −3/2 < q < 1/2. Then
\[
\begin{aligned}
\frac\pi2\big(\kappa(q+\omega)-\kappa(\omega)\big)
&=-\Big(\ln(2)+\frac32\Big)q+\int_{y>-\ln(2)}\Big(e^{(q+\omega)y}-e^{\omega y}-q(e^y-1)\mathbf 1_{\{|e^y-1|<1\}}\Big)\frac{e^{-y}\,dy}{(e^y-1)^2}\\
&\qquad+\int_{-\ln(2)}^{0}\Big((1-e^y)^{q+\omega}-(1-e^y)^{\omega}\Big)\frac{e^{-y}\,dy}{(e^y-1)^2}.
\end{aligned}
\]

Performing the change of variables e^x = 1 − e^y in the second integral entails
\[
\begin{aligned}
\frac\pi2\big(\kappa(q+\omega)-\kappa(\omega)\big)
&=-\Big(\ln(2)+\frac32\Big)q+\int_{y>-\ln(2)}\Big(e^{(q+\omega)y}-e^{\omega y}-q(e^y-1)\mathbf 1_{\{|e^y-1|<1\}}\Big)\frac{e^{-y}\,dy}{(e^y-1)^2}\\
&\qquad+\int_{-\infty}^{-\ln(2)}\Big(e^{(q+\omega)x}-e^{\omega x}\Big)\frac{e^{-x}\,dx}{(e^x-1)^2}\\
&=-\Big(\ln(2)+\frac32\Big)q+\int_{-\infty}^{+\infty}\Big(e^{(q+\omega)y}-e^{\omega y}-qe^{\omega y}(e^y-1)\mathbf 1_{\{|e^y-1|<1\}}\Big)\frac{e^{-y}\,dy}{(e^y-1)^2}\\
&\qquad+q\int_{y>-\ln(2)}\big(e^{\omega y}(e^y-1)-(e^y-1)\big)\mathbf 1_{\{|e^y-1|<1\}}\frac{e^{-y}\,dy}{(e^y-1)^2}
+q\int_{-\infty}^{-\ln(2)}e^{\omega y}(e^y-1)\underbrace{\mathbf 1_{\{|e^y-1|<1\}}}_{=1}\frac{e^{-y}\,dy}{(e^y-1)^2}\\
&=-\Big(\ln(2)+\frac32\Big)q+\int_{-\infty}^{+\infty}\Big(e^{qy}-1-q(e^y-1)\mathbf 1_{\{|e^y-1|<1\}}\Big)\frac{e^{(\omega-1)y}\,dy}{(e^y-1)^2}\\
&\qquad+q\int_{-\ln(2)}^{\ln(2)}\big(e^{\omega y}-1\big)\frac{e^{-y}\,dy}{e^y-1}
+q\int_{-\infty}^{-\ln(2)}e^{\omega y}\,\frac{e^{-y}\,dy}{e^y-1}.
\end{aligned}
\]

Because ω = 5/2, this has the form of Φ↑ of Corollary 2 in [8] for the symmetric Cauchy process (α = 1 and ρ = 1/2), apart from a possible extra drift. We now show that the drifts do in fact coincide. Let I and J denote the last two integrals in the above expression. Using the change of variables x = e^y, we get

\[
I=\int_{1/2}^{2}\frac{x^{5/2}-1}{x^2(x-1)}\,dx,\qquad
J=\int_{0}^{1/2}\frac{\sqrt x}{x-1}\,dx.
\]
Now
\[
I=\int_{1/2}^{2}\frac{x^{5/2}-x^2}{x^2(x-1)}\,dx+\int_{1/2}^{2}\frac{x^2-1}{x^2(x-1)}\,dx
=\int_{1/2}^{2}\frac{\sqrt x-1}{x-1}\,dx+\underbrace{\int_{1/2}^{2}\frac{x+1}{x^2}\,dx}_{:=I_1},
\]
and
\[
J=\int_{0}^{1/2}\frac{\sqrt x-1}{x-1}\,dx+\underbrace{\int_{0}^{1/2}\frac{dx}{x-1}}_{:=J_1}.
\]
One can check that I_1 + J_1 = ln(2) + 3/2 (indeed I_1 = [ln x − 1/x]_{1/2}^{2} = 2 ln(2) + 3/2 and J_1 = ln(1/2) = −ln(2)). Therefore the linear term in the above expression of κ(q + ω) − κ(ω) is precisely
\[
a_+=\frac2\pi\int_0^2\frac{\sqrt x-1}{x-1}\,dx
=\frac2\pi\int_0^1\frac{\sqrt{1+u}-1}{u}\,du-\frac2\pi\int_0^1\frac{\sqrt{1-u}-1}{u}\,du,
\]

which is exactly a_+ = 2a^↑ as defined in Corollary 2 of [8] for the symmetric Cauchy process. Note that there is a sign error in formula (17) of the latter paper. Hence Corollary 2 of [8] yields that κ(q + ω) − κ(ω) is twice the Laplace exponent of a Cauchy process conditioned to stay positive, and now by [12], we deduce

\[
\kappa(q+\omega)-\kappa(\omega)=-2\,\frac{\Gamma(\tfrac12-q)\Gamma(\tfrac32+q)}{\Gamma(-q)\Gamma(1+q)},\qquad-\tfrac32<q<\tfrac12.
\]

Taking q = −1/2 in this formula, one sees that κ(2) − κ(5/2) = −2/π. Yet one can easily compute κ(2) from the definition of κ. Simple calculations left to the reader actually lead to κ(2) = −2/π, and thus κ(5/2) = 0. Finally, we have recovered the expression of Φ^+, and using Euler's reflection formula,

\[
\kappa(q)=-2\,\frac{\cos(\pi q)}{\pi}\,\Gamma(q-1)\Gamma(3-q),\qquad 1<q<3.
\]
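As a numerical sanity check (ours, not part of the proof), the closed forms (35) and (36) can be compared directly: κ as given by (36) equals −2/π at q = 2, vanishes at ω_− = 3/2 and ω_+ = 5/2, and agrees with Φ^+(q) = κ(q + 5/2) away from the poles of the Gamma factors.

```python
import math

def kappa(q):
    # cumulant function (36): kappa(q) = -(2/pi) cos(pi q) Gamma(q - 1) Gamma(3 - q), for 1 < q < 3
    return -2.0 * math.cos(math.pi * q) / math.pi * math.gamma(q - 1.0) * math.gamma(3.0 - q)

def phi_plus(q):
    # Laplace exponent (35): Phi+(q) = -2 Gamma(1/2 - q) Gamma(3/2 + q) / (Gamma(-q) Gamma(1 + q))
    return -2.0 * math.gamma(0.5 - q) * math.gamma(1.5 + q) / (math.gamma(-q) * math.gamma(1.0 + q))

print(kappa(2.0), -2.0 / math.pi)      # both are -2/pi
print(kappa(1.5), kappa(2.5))          # roots omega_- = 3/2 and omega_+ = 5/2, up to rounding
for q in (-1.2, -0.7, 0.3, 0.45):      # avoid integer q, where Gamma(-q) or Gamma(1+q) has a pole
    assert math.isclose(phi_plus(q), kappa(q + 2.5), rel_tol=1e-9)
```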

5 Convergence of the derivative martingale

Recall the construction of the cell system in Section 4 and that for u ∈ U, |u| denotes its generation. The collection (ln(|X_u(0)|), u ∈ U) defines a branching random walk, see [17] for a general reference on branching random walks. We will work under the associated filtration
\[
\mathcal G_n:=\sigma\big(X_u,\;|u|\le n\big),\qquad n\ge0.
\]
By construction, with the notation of Theorem 3.3, one has for all suitable measurable functions f such that f(0) = 0, under γ_z,

\[
\sum_{|u|=1}f\big(X_u(0)\big)=\sum_{a\ge0}f\big(ze^{\xi(a)}-ze^{\xi(a-)}\big).
\]

From there, one can check by computations (making use of the expression of the cumulant function found in (36)) that
\[
\gamma_z\Big[\sum_{|u|=1}|X_u(0)|^2\Big]=z^2,\qquad
\gamma_z\Big[\sum_{|u|=1}|X_u(0)|^2\ln(|X_u(0)|)\Big]=0,
\]

which implies that the martingale M(n) := Σ_{|u|=n+1} |X_u(0)|² is the critical martingale for the branching random walk. In this case, M(n) converges to 0. In order to have a non-trivial limit, one needs to consider the so-called derivative martingale defined by

\[
D(n):=-\sum_{|u|=n+1}\ln\big(|X_u(0)|\big)\,|X_u(0)|^2,\qquad n\ge0.
\]
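For illustration (again ours, not from the paper), given the initial masses (X_u(0), |u| = n + 1) of a generation, for instance as produced by the genealogy sketch of Section 4.1, the additive and derivative martingales are evaluated as follows.

```python
import math

def additive_and_derivative(masses):
    """Return (M(n), D(n)) from the initial masses (X_u(0), |u| = n + 1):
    M(n) = sum |X_u(0)|^2  and  D(n) = - sum ln(|X_u(0)|) |X_u(0)|^2."""
    M = sum(x * x for x in masses)
    D = -sum(math.log(abs(x)) * x * x for x in masses)
    return M, D

# Toy example with made-up masses for one generation:
print(additive_and_derivative([0.5, -0.3, 0.1, -0.05]))
```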

The aim of this section is to show that D(n) converges and that its limit is twice the duration of the excursion. First notice that the duration of the excursion is measurable with respect to the cell system, or equivalently with respect to the growth-fragmentation X. Indeed, the number of excursions above level a with height greater than ε > 0 is measurable with respect to X, hence the local time of the excursion at level a is also measurable, and so is the total duration of the excursion by the occupation times formula. Then, by Lévy's martingale convergence theorem, one would only have to show that the derivative martingale is the conditional expectation of the duration with respect to the filtration G_n. But the duration is not integrable, hence this strategy cannot work. Instead, one needs to use a truncation procedure introduced in [7]. Let C > 0 and denote by U^{(C)} the set of labels obtained from U by killing all the cells (with their progeny) when their size is larger than C in absolute value.

Lemma 5.1. Let z ≠ 0 and

\[
T_C:=\int_0^{R(u)}\mathbf 1_{\{\forall\,0\le b\le y(t),\;|\Delta e_b^{(t)}|<C\}}\,dt
\]

be the amount of time spent by excursions with size between −C and C. Then

\[
\gamma_z(T_C)=\pi z^2R_C(z/2)\,\mathbf 1_{\{|z|<C\}},
\]

where
\[
R_C(z):=-\frac{1}{2\pi}\Big(\ln\big(\sqrt{(1+\tilde z)/(1-\tilde z)}-1\big)-\ln\big(\sqrt{(1+\tilde z)/(1-\tilde z)}+1\big)\Big),\qquad\tilde z=\frac{2z}{C},
\]
is the Green function at 0 of the Cauchy process in (−C/2, C/2), see [10].

Proof. Lemma 5.1 follows from an application of Bismut's description of n_+. Indeed, if f : R → R_+ is a nonnegative measurable function, then by Proposition 2.6,
\[
n_+\bigg(\int_0^{R(u)}\mathbf 1_{\{\forall\,0\le b\le y(t),\;|\Delta e_b^{(t)}|<C\}}\,f\big(x(R(u))\big)\,dt\bigg)
=\int_{-C/2}^{C/2}f(2z)\,R_C(z)\,dz.
\]

Disintegrating n_+ over x(R(u)) as in Proposition 2.9 then gives
\[
\gamma_z(T_C)=\pi z^2R_C(z/2)\,\mathbf 1_{\{|z|<C\}}.
\]

Let
\[
D^{(C)}(n):=\pi\sum_{u\in U^{(C)}:\,|u|=n+1}R_C\big(X_u(0)/2\big)\,|X_u(0)|^2,\qquad n\ge0.
\]
Corollary 5.2. The following identity holds for all n ≥ 0 and z ≠ 0:
\[
\gamma_z\big[T_C\,\big|\,\mathcal G_n\big]=D^{(C)}(n).
\]

Consequently, (D^{(C)}(n), n ≥ 0) is a uniformly integrable (G_n)_{n≥0}-martingale.

Proof. For simplicity, we prove this for n = 0: the identity then follows by induction from the branching property. Splitting the integral over the children of Ξ and using Theorem 3.6, we have for |z| < C,
\[
\gamma_z\bigg(\int_0^{R(u)}\mathbf 1_{\{\forall\,0\le b\le y(t),\;|\Delta e_b^{(t)}|\le C\}}\,dt\;\bigg|\;\mathcal G_1\bigg)
=\sum_{i\ge1}\gamma_{\Delta e_i}\bigg(\int_0^{R(u)}\mathbf 1_{\{\forall\,0\le b\le y(t),\;|\Delta e_b^{(t)}|\le C\}}\,dt\bigg),
\]
where the e_i, i ≥ 1, denote the excursions created by the jumps of Ξ. Now, using Lemma 5.1, we immediately get

\[
\gamma_z\bigg(\int_0^{R(u)}\mathbf 1_{\{\forall\,0\le b\le y(t),\;|\Delta e_b^{(t)}|\le C\}}\,dt\;\bigg|\;\mathcal G_1\bigg)
=\sum_{i\ge1}\pi\,R_C\big(\Delta e_i/2\big)\,|\Delta e_i|^2\,\mathbf 1_{\{|\Delta e_i|<C\}}.
\]

Theorem 5.3. Under γ_z, the derivative martingale (D(n), n ≥ 0) converges almost surely towards twice the duration R(u) of the Brownian excursion.

Proof. The proof is standard in the branching random walk literature, see [7] or [17]. It suffices to prove it on the event where all excursions have size smaller than C, for any C > 0. Let then C > 0 and suppose the corresponding event holds. Using that R_C(z) = −(1/(2π)) ln(z) + O_z(1) when z → 0 and that the martingale M(n) converges to 0, we get that
\[
D^{(C)}(n)\underset{n\to\infty}{\sim}\frac12\,D(n).
\]
Since on our event T_C = R(u), Lévy's martingale convergence theorem together with Corollary 5.2 imply the result.

References

[1] Aïdékon, E., Hu, Y. and Shi, Z. (2020). Points of infinite multiplicity of planar Brownian motion: measures and local times. Ann. Probab., 48, No. 4, 1785–1825.

[2] Bertoin, J. (1996). Lévy processes. Cambridge Tracts in Mathematics, 121, Cambridge University Press, Cambridge.

[3] Bertoin, J. (2002). Self-similar fragmentations. Ann. Inst. H. Poincaré Probab. Statist., 38, No. 3, 319–340.

[4] Bertoin, J. (2017). Markovian growth-fragmentation processes. Bernoulli, 23, 1082–1101.

[5] Bertoin, J., Budd, T., Curien, N. and Kortchemski, I. (2018). Martingales in self-similar growth-fragmentations and their connections with random planar maps. Probab. Th. Rel. Fields, 172, 663–724.

[6] Bertoin, J., Curien, N. and Kortchemski, I. (2019). On conditioning a self-similar growth-fragmentation by its intrinsic area. arXiv:1908.07830

[7] Biggins, J.D. and Kyprianou, A.E. (2004). Measure Change in Multitype Branching. Adv. in Appl. Probab., 36, 544–581.

[8] Caballero, M.E. and Chaumont, L. (2006). Conditioned stable Lévy process and the Lamperti representation. J. Appl. Probab., 43, No. 4, 967–983.

[9] Chen, L., Curien, N. and Maillard, P. (2017). The perimeter cascade in critical Boltzmann quadrangulations decorated by an O(n) loop model. Annales de l’I.H.P. D.

[10] Daviaud, O. (2005). Thick points for the Cauchy process. Ann. I.H.P. B, 41, No. 5, pp. 953–970.

[11] Kuznetsov, A. (2010). Wiener-Hopf factorization and distribution of extrema for a family of Lévy processes. Ann. Appl. Probab., 20, No. 5, 1801–1830.

[12] Kuznetsov, A. and Pardo, J.C. (2013). Fluctuations of stable processes and exponential functionals of hypergeometric Lévy processes. Acta Appl. Math., 123, 113–139.

[13] Lawler, G.F. (2005). Conformally invariant processes in the plane. Mathematical Surveys and Monographs, 114. American Mathematical Society, Providence, RI.

[14] Le Gall, J.F. and Miermont, G. (2011). Scaling limits of random planar maps with large faces. Ann. Probab., 39, No. 1, 1–69.

[15] Le Gall, J.-F. and Riera, A. (2018). Growth-fragmentation processes in Brownian motion indexed by the Brownian tree. To appear in Ann. Probab.

[16] Revuz, D. and Yor, M. (1999). Continuous martingales and Brownian motion. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Springer-Verlag, Berlin, Third Edition.

[17] Shi, Z. (2015). Branching random walks. Vol. 2151 of Lecture Notes in Mathematics, Springer, Cham. Lecture notes from the 42nd Probability Summer School held in Saint-Flour, 2012, École d'Été de Probabilités de Saint-Flour.
