Martingale Theory Problem Set 4, with Solutions: Stopping


4.1 Let $(\Omega,\mathcal F,(\mathcal F_n)_{n\ge 0},P)$ be a filtered probability space and $S$ and $T$ two stopping times. Prove (without consulting the lecture notes) that $S\wedge T:=\min\{S,T\}$, $S\vee T:=\max\{S,T\}$, and $S+T$ are also stopping times.

SOLUTION:
\[
\{\omega\in\Omega: S(\omega)\wedge T(\omega)\le n\}=\{\omega: S(\omega)\le n\}\cap\{\omega: T(\omega)\le n\},
\]
\[
\{\omega\in\Omega: S(\omega)\vee T(\omega)\le n\}=\{\omega: S(\omega)\le n\}\cup\{\omega: T(\omega)\le n\},
\]
\[
\{\omega\in\Omega: S(\omega)+T(\omega)\le n\}=\bigcup_{l=0}^{n}\big(\{\omega: S(\omega)=l\}\cap\{\omega: T(\omega)\le n-l\}\big).
\]
Since the events on the right-hand side of these equations are all $\mathcal F_n$-measurable, the statement of the problem follows.

4.2 Martingales for simple symmetric random walk on $\mathbb Z$. Let $n\mapsto X_n$ be a simple symmetric random walk on the one-dimensional integer lattice $\mathbb Z$ and $(\mathcal F_n)_{n\ge 0}$ its natural filtration.
(a) Prove that $X_n$ and $Y_n:=X_n^2-n$ are both $(\mathcal F_n)$-martingales.
(b) Find a deterministic sequence $a_n\in\mathbb R$ such that $Z_n:=X_n^3+a_nX_n$ is an $(\mathcal F_n)$-martingale.
(c) Find deterministic sequences $b_n,c_n\in\mathbb R$ such that $V_n:=X_n^4+b_nX_n^2+c_n$ is an $(\mathcal F_n)$-martingale.

SOLUTION: Let $\xi_j$, $j=1,2,\dots$, be i.i.d. random variables with the common distribution $P(\xi_j=\pm1)=1/2$, let $\mathcal F_n:=\sigma(\xi_j: 1\le j\le n)$, and write
\[
X_0=0,\qquad X_n:=\sum_{j=1}^{n}\xi_j.
\]

(a)
\[
E(X_{n+1}\mid\mathcal F_n)=E(X_n+\xi_{n+1}\mid\mathcal F_n)=X_n+E(\xi_{n+1}\mid\mathcal F_n)=X_n,
\]
\[
E(Y_{n+1}\mid\mathcal F_n)=E\big(X_{n+1}^2-(n+1)\mid\mathcal F_n\big)
=E\big(X_n^2-n+2X_n\xi_{n+1}+\xi_{n+1}^2-1\mid\mathcal F_n\big)
=X_n^2-n+2X_nE(\xi_{n+1}\mid\mathcal F_n)+E(\xi_{n+1}^2\mid\mathcal F_n)-1=Y_n.
\]

(b) Write first
\[
Z_{n+1}=(X_n+\xi_{n+1})^3+a_{n+1}(X_n+\xi_{n+1})
=Z_n+3X_n^2\xi_{n+1}+3X_n\xi_{n+1}^2+\xi_{n+1}^3+(a_{n+1}-a_n)X_n+a_{n+1}\xi_{n+1}.
\]
Hence
\[
E(Z_{n+1}\mid\mathcal F_n)=\dots=Z_n+(a_{n+1}-a_n+3)X_n.
\]
It follows that $Z_n$ is a martingale if and only if $a_{n+1}-a_n+3\equiv 0$. Thus, in order that $n\mapsto Z_n$ be a martingale we must choose $a_n=a_0-3n$. That is,
\[
Z_n=X_n^3-3nX_n
\]
is a martingale.

(c) Proceed similarly as in (b). Write
\[
V_{n+1}=(X_n+\xi_{n+1})^4+b_{n+1}(X_n+\xi_{n+1})^2+c_{n+1}
=V_n+4X_n^3\xi_{n+1}+6X_n^2\xi_{n+1}^2+4X_n\xi_{n+1}^3+\xi_{n+1}^4
+(b_{n+1}-b_n)X_n^2+2b_{n+1}X_n\xi_{n+1}+b_{n+1}\xi_{n+1}^2+(c_{n+1}-c_n).
\]
Hence
\[
E(V_{n+1}\mid\mathcal F_n)=\dots=V_n+(b_{n+1}-b_n+6)X_n^2+(c_{n+1}-c_n+b_{n+1}+1).
\]
It follows that $V_n$ is a martingale if and only if
\[
b_{n+1}-b_n+6\equiv 0,\qquad c_{n+1}-c_n+b_{n+1}+1\equiv 0.
\]
Thus, in order that $n\mapsto V_n$ be a martingale we must choose
\[
b_n=b_0-6n,\qquad c_n=c_0-\sum_{k=1}^{n}b_k-n=c_0-b_0n+3n(n+1)-n=c_0-b_0n+3n^2+2n.
\]
That is,
\[
V_n=X_n^4-6nX_n^2+3n^2+2n
\]
is a martingale.
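As a quick sanity check (not part of the original solutions), the closed forms in (b) and (c) can be tested by Monte Carlo. The Python sketch below uses an arbitrary horizon, sample size, and seed; since $Z_0=V_0=0$, a martingale must have $E(Z_n)=E(V_n)=0$ for every $n$, so it verifies only this necessary zero-mean condition, not the full martingale property.

```python
import numpy as np

rng = np.random.default_rng(0)

def martingale_mean_check(n_steps=20, n_paths=100_000):
    """Estimate max_n |E[Z_n]| and max_n |E[V_n]| for the simple symmetric random walk.

    Both should be close to 0, since Z_0 = V_0 = 0 and both processes are
    claimed to be martingales (problem 4.2 (b), (c)).
    """
    steps = rng.choice([-1, 1], size=(n_paths, n_steps))
    X = np.cumsum(steps, axis=1)                    # X_1, ..., X_n along each row
    n = np.arange(1, n_steps + 1)
    Z = X**3 - 3 * n * X                            # Z_n = X_n^3 - 3 n X_n
    V = X**4 - 6 * n * X**2 + 3 * n**2 + 2 * n      # V_n = X_n^4 - 6 n X_n^2 + 3 n^2 + 2 n
    return np.abs(Z.mean(axis=0)).max(), np.abs(V.mean(axis=0)).max()

if __name__ == "__main__":
    z_dev, v_dev = martingale_mean_check()
    print("max_n |E[Z_n]| estimate:", z_dev)        # small compared with typical |Z_n| values
    print("max_n |E[V_n]| estimate:", v_dev)        # small compared with typical |V_n| values
```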
4.3 Gambler's Ruin, 1. A gambler wins or loses one pound in each round of betting, with equal chances and independently of the past events. She starts betting with the firm determination that she will stop gambling when either she has won $a$ pounds or she has lost $b$ pounds.
(a) What is the probability that she will be winning when she stops playing?
(b) What is the expected number of her betting rounds before she stops playing?

SOLUTION: Model the experiment with simple symmetric random walk. Let $\xi_j$, $j=1,2,\dots$, be i.i.d. random variables with common distribution
\[
P(\xi_i=+1)=\tfrac12=P(\xi_i=-1),
\]
and let $\mathcal F_n=\sigma(\xi_j: 0\le j\le n)$, $n\ge 0$, be their natural filtration. Denote
\[
S_0=0,\qquad S_n:=\sum_{j=1}^{n}\xi_j,\quad n\ge 1.
\]
Define the stopping times
\[
T_L:=\inf\{n>0: S_n=-b\},\qquad T_R:=\inf\{n>0: S_n=+a\},\qquad T:=\min\{T_L,T_R\}.
\]
Note that
\[
\{\text{the gambler wins }a\text{ pounds}\}=\{T=T_R\},\qquad
\{\text{the gambler loses }b\text{ pounds}\}=\{T=T_L\}.
\]

(a) By the Optional Stopping Theorem, $E(S_T)=E(S_0)=0$. Hence
\[
-b\,P(T=T_L)+a\,P(T=T_R)=0.
\]
On the other hand,
\[
P(T=T_L)+P(T=T_R)=1.
\]
Solving the last two equations we get
\[
P(T=T_L)=\frac{a}{a+b},\qquad P(T=T_R)=\frac{b}{a+b}.
\]

(b) First prove that $M_n:=S_n^2-n$ is yet another martingale:
\[
E(M_{n+1}\mid\mathcal F_n)=E(S_{n+1}^2\mid\mathcal F_n)-(n+1)
=E(S_n^2+2S_n\xi_{n+1}+1\mid\mathcal F_n)-(n+1)=\dots=M_n.
\]
Now apply the Optional Stopping Theorem:
\[
0=E(M_T)=E(S_T^2-T)=P(T=T_L)\,b^2+P(T=T_R)\,a^2-E(T).
\]
Hence, using the result from (a), $E(T)=ab$.

4.4 Gambler's Ruin, 2. Let the lazy random walk $X_n$ be a Markov chain on $\mathbb Z$ with the following transition probabilities:
\[
P(X_{n+1}=i\pm1\mid X_n=i)=\frac38,\qquad P(X_{n+1}=i\mid X_n=i)=\frac14.
\]
Denote $T_k:=\inf\{n\ge 0: X_n=k\}$. Let $a,b\ge 1$ be fixed integers. Compute $P(T_a<T_{-b}\mid X_0=0)$ and $E(T_a\wedge T_{-b}\mid X_0=0)$.

SOLUTION: Very similar to problem 4.3.

4.5 Gambler's Ruin, 3. Answer the same questions as in problem 4.3 when the probability of winning or losing one pound in each round is $p$, respectively $1-p$, with $p\in(0,1)$. Hint: use the martingales constructed in problem 3.1.

SOLUTION: Model the experiment with simple biased random walk. Let $\xi_j$, $j=1,2,\dots$, be i.i.d. random variables with common distribution
\[
P(\xi_i=+1)=p,\qquad P(\xi_i=-1)=q:=1-p,
\]
and let $\mathcal F_n=\sigma(\xi_j: 0\le j\le n)$, $n\ge 0$, be their natural filtration. Denote
\[
S_0=0,\qquad S_n:=\sum_{j=1}^{n}\xi_j,\quad n\ge 1.
\]
Define the stopping times
\[
T_L:=\inf\{n>0: S_n=-b\},\qquad T_R:=\inf\{n>0: S_n=+a\},\qquad T:=\min\{T_L,T_R\}.
\]
Note that
\[
\{\text{the gambler wins }a\text{ pounds}\}=\{T=T_R\},\qquad
\{\text{the gambler loses }b\text{ pounds}\}=\{T=T_L\}.
\]

(a) Use the Optional Stopping Theorem for the martingale $(q/p)^{S_n}$:
\[
1=E\big((q/p)^{S_T}\big)=(p/q)^b\,P(T=T_L)+(q/p)^a\,P(T=T_R).
\]
On the other hand,
\[
P(T=T_L)+P(T=T_R)=1.
\]
Solving the last two equations we get
\[
P(T=T_L)=\frac{1-(q/p)^a}{(p/q)^b-(q/p)^a},\qquad
P(T=T_R)=\frac{1-(p/q)^b}{(q/p)^a-(p/q)^b}.
\]

(b) Now apply the Optional Stopping Theorem to the martingale $S_n-(p-q)n$. Hence
\[
E(T)=(p-q)^{-1}E(S_T)
=(p-q)^{-1}\left(a\,\frac{1-(p/q)^b}{(q/p)^a-(p/q)^b}-b\,\frac{1-(q/p)^a}{(p/q)^b-(q/p)^a}\right)
=(p-q)^{-1}\,\frac{a\big(1-(p/q)^b\big)+b\big(1-(q/p)^a\big)}{(q/p)^a-(p/q)^b}.
\]
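The closed-form answers of problems 4.3 and 4.5 can also be checked by direct simulation. The Python sketch below is not part of the original solutions; the sample size, seed, and the choice $a=3$, $b=5$ are arbitrary. In the symmetric case it should reproduce $P(T=T_R)\approx b/(a+b)$ and $E(T)\approx ab$, and the same function with $p\ne 1/2$ can be compared against the formulas of problem 4.5.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamblers_ruin(a, b, p=0.5, n_paths=50_000):
    """Simulate the stopped walk of problems 4.3 / 4.5.

    Start at 0, stop at T = first hitting time of +a or -b; return the
    empirical P(T = T_R) (i.e. +a is hit first) and the empirical E[T].
    """
    wins = 0
    total_time = 0
    for _ in range(n_paths):
        s, t = 0, 0
        while -b < s < a:
            s += 1 if rng.random() < p else -1
            t += 1
        wins += (s == a)
        total_time += t
    return wins / n_paths, total_time / n_paths

if __name__ == "__main__":
    a, b = 3, 5
    p_hit, e_t = gamblers_ruin(a, b)                 # symmetric case p = 1/2
    print("P(T = T_R):", p_hit, "theory:", b / (a + b))
    print("E[T]      :", e_t, "theory:", a * b)
```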
4.6 Let $(\Omega,\mathcal F,(\mathcal F_n)_{n\ge 0},P)$ be a filtered probability space and $S$ and $T$ two stopping times such that $P(S\le T)=1$.
(a) Define the process $n\mapsto C_n:=\mathbb 1_{\{S<n\le T\}}$. Prove that $(C_n)_{n\ge 1}$ is a predictable process. That is: for all $n\ge 1$, $C_n$ is $\mathcal F_{n-1}$-measurable.
(b) Let the process $(X_n)_{n\ge 0}$ be an $(\mathcal F_n)$-supermartingale and define the process $n\mapsto Y_n$ as follows:
\[
Y_0:=0,\qquad Y_n:=\sum_{k=1}^{n}C_k(X_k-X_{k-1}).
\]
Prove that $(Y_n)_{n\ge 0}$ is also an $(\mathcal F_n)$-supermartingale.
(c) Prove that if $S$ and $T$ are two stopping times such that $P(S\le T)=1$ and $(X_n)_{n\ge 0}$ is a supermartingale, then for all $n\ge 0$, $E(X_{n\wedge T})\le E(X_{n\wedge S})$.

SOLUTION:
(a)
\[
\{\omega: C_n(\omega)=1\}=\{\omega: S(\omega)<n,\ T(\omega)\ge n\}
=\{\omega: S(\omega)\le n-1\}\cap\{\omega: T(\omega)\le n-1\}^c.
\]
Since both events on the right-hand side are $\mathcal F_{n-1}$-measurable, the process $n\mapsto C_n$ is predictable, indeed.
(b) $Y_n$ is clearly $\mathcal F_n$-measurable. We check the supermartingale condition:
\[
E(Y_{n+1}\mid\mathcal F_n)=Y_n+E\big(C_{n+1}(X_{n+1}-X_n)\mid\mathcal F_n\big)
=Y_n+C_{n+1}E(X_{n+1}-X_n\mid\mathcal F_n)\le Y_n.
\]
In the second step we use the result from (a). In the last step we use $C_{n+1}\ge 0$.
(c) Note that $X_{n\wedge T}-X_{n\wedge S}=Y_n$, and use (b).

4.7 A two-dimensional random walk. (HW) Let $X_n$ be the following two-dimensional random walk: $n\mapsto X_n$ is a Markov chain on the two-dimensional integer lattice $\mathbb Z^2$ with the following transition probabilities:
\[
P\big(X_{n+1}=(i\pm1,j)\mid X_n=(i,j)\big)=\frac18,\qquad
P\big(X_{n+1}=(i,j\pm1)\mid X_n=(i,j)\big)=\frac18,\qquad
P\big(X_{n+1}=(i\pm1,j\pm1)\mid X_n=(i,j)\big)=\frac18.
\]
(a) Prove that
\[
M_n:=|X_n|^2-\frac32 n
\]
is a martingale with respect to the natural filtration of the process. (We denote by $|x|$ the Euclidean norm of $x\in\mathbb Z^2$.)
(b) For $R\in\mathbb R_+$ define the stopping time
\[
T_R:=\inf\{n\ge 0: |X_n|^2\ge R^2\}.
\]
Give sharp lower and upper bounds for $E(T_R\mid X_0=(0,0))$.

SOLUTION: Let $\xi_j$, $j=1,2,\dots$, be i.i.d. random two-dimensional vectors with the common distribution
\[
P(\xi_j=(\pm1,0))=P(\xi_j=(0,\pm1))=P(\xi_j=(\pm1,\pm1))=\frac18,
\]
let $\mathcal F_n:=\sigma(\xi_j: 1\le j\le n)$, and write
\[
X_0=0,\qquad X_n:=\sum_{j=1}^{n}\xi_j.
\]
Note that
\[
E(\xi_j)=0,\qquad E\big(|\xi_j|^2\big)=\frac32.
\]
(a)
\[
E\Big(|X_{n+1}|^2-\frac32(n+1)\,\Big|\,\mathcal F_n\Big)
=E\Big(|X_n|^2+2X_n\cdot\xi_{n+1}+|\xi_{n+1}|^2-\frac32(n+1)\,\Big|\,\mathcal F_n\Big)
=|X_n|^2-\frac32 n-\frac32+E\big(|\xi_{n+1}|^2\,\big|\,\mathcal F_n\big)
=|X_n|^2-\frac32 n.
\]
(b) Note first that
\[
R^2\le |X_{T_R}|^2\le (R+\sqrt2)^2.
\]
Apply the Optional Stopping Theorem:
\[
E(T_R)=\frac23\,E\big(|X_{T_R}|^2\big).
\]
From these two relations we get
\[
\frac23 R^2\le E(T_R)\le \frac23 (R+\sqrt2)^2.
\]
(A numerical check of these bounds is sketched at the end of this section.)

4.8 We toss repeatedly a fair coin.
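Returning to problem 4.7(b), as announced above: a rough Monte Carlo check of the two-sided bound on $E(T_R)$. The Python sketch below is not part of the original solutions; the radius, path count, and seed are arbitrary choices, and the printed estimate should fall between $\frac23 R^2$ and $\frac23 (R+\sqrt2)^2$.

```python
import numpy as np

rng = np.random.default_rng(2)

# The eight equally likely steps of the walk in problem 4.7 (probability 1/8 each).
STEPS = np.array([(1, 0), (-1, 0), (0, 1), (0, -1),
                  (1, 1), (1, -1), (-1, 1), (-1, -1)])

def mean_exit_time(R, n_paths=5_000):
    """Estimate E[T_R | X_0 = (0,0)] with T_R = inf{n >= 0 : |X_n|^2 >= R^2}."""
    R2 = R * R
    total = 0
    for _ in range(n_paths):
        x = np.zeros(2, dtype=np.int64)
        n = 0
        while x @ x < R2:                 # stop once |X_n|^2 >= R^2
            x += STEPS[rng.integers(8)]
            n += 1
        total += n
    return total / n_paths

if __name__ == "__main__":
    R = 10.0
    est = mean_exit_time(R)
    print("lower bound 2R^2/3          :", 2 * R**2 / 3)
    print("Monte Carlo estimate of E[T]:", est)
    print("upper bound 2(R+sqrt(2))^2/3:", 2 * (R + np.sqrt(2))**2 / 3)
```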