Stochastic Processes in Continuous Time


Stochastic Processes in Continuous Time

Joseph C. Watkins

December 14, 2007

Contents

1 Basic Concepts
  1.1 Notions of equivalence of stochastic processes
  1.2 Sample path properties
  1.3 Properties of filtrations
  1.4 Stopping times
  1.5 Examples of stopping times
2 Lévy Processes
3 Martingales
  3.1 Regularity of Sample Paths
  3.2 Sample Path Regularity and Lévy Processes
  3.3 Maximal Inequalities
  3.4 Localization
  3.5 Law of the Iterated Logarithm
4 Markov Processes
  4.1 Definitions and Transition Functions
  4.2 Operator Semigroups
    4.2.1 The Generator
    4.2.2 The Resolvent
  4.3 The Hille-Yosida Theorem
    4.3.1 Dissipative Operators
    4.3.2 Yosida Approximation
    4.3.3 Positive Operators and Feller Semigroups
    4.3.4 The Maximum Principle
  4.4 Strong Markov Property
  4.5 Connections to Martingales
  4.6 Jump Processes
    4.6.1 The Structure Theorem for Pure Jump Markov Processes
    4.6.2 Construction in the Case of Bounded Jump Rates
    4.6.3 Birth and Death Process
    4.6.4 Examples of Interacting Particle Systems
  4.7 Sample Path Regularity
    4.7.1 Compactifying the State Space
  4.8 Transformations of Markov Processes
    4.8.1 Random Time Changes
    4.8.2 Killing
    4.8.3 Change of Measure
  4.9 Stationary Distributions
  4.10 One Dimensional Diffusions
    4.10.1 The Scale Function
    4.10.2 The Speed Measure
    4.10.3 The Characteristic Operator
    4.10.4 Boundary Behavior
5 Stochastic Integrals
  5.1 Quadratic Variation
  5.2 Definition of the Stochastic Integral
  5.3 The Itô Formula
  5.4 Stochastic Differential Equations
  5.5 Itô Diffusions

1 Basic Concepts

We now consider stochastic processes with index set $\Lambda = [0,\infty)$. Thus, the process $X : [0,\infty) \times \Omega \to S$ can be considered as a random function of time via its sample paths or realizations $t \mapsto X_t(\omega)$, for each $\omega \in \Omega$. Here $S$ is a metric space with metric $d$.

1.1 Notions of equivalence of stochastic processes

As before, for $m \ge 1$, $0 \le t_1 < \cdots < t_m$, and $B \in \mathcal{B}(S^m)$, the Borel $\sigma$-algebra, call
$$\mu_{t_1,\dots,t_m}(B) = P\{(X_{t_1},\dots,X_{t_m}) \in B\}$$
the finite dimensional distributions of $X$. We also have a variety of notions of equivalence for two stochastic processes, $X$ and $Y$.

Definition 1.1.
1. $Y$ is a version of $X$ if $X$ and $Y$ have the same finite dimensional distributions.
2. $Y$ is a modification of $X$ if for all $t \ge 0$, $(X_t, Y_t)$ is an $S \times S$-valued random variable and $P\{Y_t = X_t\} = 1$.
3. $Y$ is indistinguishable from $X$ if $P\{\omega;\ Y_t(\omega) \ne X_t(\omega) \text{ for some } t \ge 0\} = 0$.

Recall that the Daniell-Kolmogorov extension theorem guarantees that the finite dimensional distributions uniquely determine a probability measure on $S^{[0,\infty)}$.

Exercise 1.2. If $A \in \sigma\{X_t;\ t \ge 0\}$, then there exists a countable set $C \subset [0,\infty)$ so that $A \in \sigma\{X_t;\ t \in C\}$.

Consequently, many sets of interest, e.g., $\{x;\ \sup_{0 \le s \le T} f(x_s) > 0\}$ or $\{x : \int_0^T f(x_s)\,ds < \infty\}$, are not measurable subsets of $S^{[0,\infty)}$. We will therefore need methods for producing versions of a stochastic process under which such sets become measurable.

Exercise 1.3. If $X$ and $Y$ are indistinguishable, then $X$ is a modification of $Y$. If $X$ is a modification of $Y$, then $X$ and $Y$ have the same finite dimensional distributions.
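A standard example separates the notions of modification and indistinguishability in Definition 1.1. Take $\Omega = [0,1]$ with $P$ Lebesgue measure, $S = \mathbb{R}$, and set
$$X_t(\omega) = 0, \qquad Y_t(\omega) = \mathbf{1}_{\{t = \omega\}}, \qquad t \ge 0,\ \omega \in [0,1].$$
For each fixed $t$, $P\{Y_t = X_t\} = P\{\omega;\ \omega \ne t\} = 1$, so $Y$ is a modification of $X$, and in particular $X$ and $Y$ have the same finite dimensional distributions. However, $\{\omega;\ Y_t(\omega) = X_t(\omega) \text{ for all } t \ge 0\} = \emptyset$, so $X$ and $Y$ are not indistinguishable. Note that every sample path of $X$ is continuous while every sample path of $Y$ has a jump, so path regularity is a property of the particular version chosen.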
1.2 Sample path properties

Definition 1.4. We call $X$
1. measurable if $X$ is $\mathcal{B}[0,\infty) \times \mathcal{F}$-measurable,
2. (almost surely) continuous (respectively, left continuous, right continuous) if for (almost) all $\omega \in \Omega$ the sample path $t \mapsto X_t(\omega)$ is continuous (respectively, left continuous, right continuous).

Focusing on the regularity of sample paths, we have:

Lemma 1.5. Let $x : [0,\infty) \to S$ and suppose
$$x_{t+} = \lim_{s \to t+} x_s \quad \text{exists for all } t \ge 0, \qquad\text{and}\qquad x_{t-} = \lim_{s \to t-} x_s \quad \text{exists for all } t > 0.$$
Then there exists a countable set $C$ such that for all $t \in [0,\infty)\setminus C$, $x_{t-} = x_t = x_{t+}$.

Proof. Set
$$C_n = \Big\{ t \in [0,\infty);\ \max\{d(x_{t-},x_t),\ d(x_{t-},x_{t+}),\ d(x_t,x_{t+})\} > \frac{1}{n} \Big\}.$$
If $C_n \cap [0,m]$ were infinite, then by the Bolzano-Weierstrass theorem it would have a limit point $t \in [0,m]$; in that case, either $x_{t-}$ or $x_{t+}$ would fail to exist. Now, write
$$C = \bigcup_{m=1}^{\infty} \bigcup_{n=1}^{\infty} (C_n \cap [0,m]),$$
the countable union of finite sets, and, hence, countable.

Lemma 1.6. Let $D$ be a dense subset of $[0,\infty)$ and let $x : D \to S$. If for each $t \ge 0$,
$$x_t^+ = \lim_{s \to t+,\, s \in D} x_s$$
exists, then $x^+$ is right continuous. If for each $t > 0$,
$$x_t^- = \lim_{s \to t-,\, s \in D} x_s$$
exists, then $x^-$ is left continuous.

Proof. Fix $t_0 > 0$. Given $\epsilon > 0$, there exists $\delta > 0$ so that $d(x_{t_0}^+, x_s) \le \epsilon$ whenever $s \in D \cap (t_0, t_0 + \delta)$. Consequently, for all $s \in (t_0, t_0 + \delta)$,
$$d(x_{t_0}^+, x_s^+) = \lim_{u \to s+,\, u \in D} d(x_{t_0}^+, x_u) \le \epsilon,$$
and $x^+$ is right continuous. Left continuity is proved similarly.

Exercise 1.7. If both limits in the previous lemma exist for all $t$, then $x_t^- = x_{t-}^+$ for all $t > 0$.

Definition 1.8. Let $C_S[0,\infty)$ denote the space of continuous $S$-valued functions on $[0,\infty)$. Let $D_S[0,\infty)$ denote the space of right continuous $S$-valued functions having left limits on $[0,\infty)$.

Even though we will not need the metric and topological issues associated with the spaces $C_S[0,\infty)$ and $D_S[0,\infty)$, we provide a brief overview of them. If we endow the space $C_S[0,\infty)$ with the supremum metric $\rho_\infty(x,y) = \sup_{0 \le s < \infty} d(x_s, y_s)$, then the metric will not in general be separable. In analogy with the use of seminorms in a Fréchet space to define a metric, we set, for each $t > 0$,
$$\rho_t(x,y) = \sup_s d(x_{s \wedge t}, y_{s \wedge t}).$$
Then $\rho_t$ satisfies the symmetry and triangle inequality properties of a metric. However, if $x$ and $y$ agree on $[0,T]$ but $x_s \ne y_s$ for some $s > T$, then $x \ne y$ while $\rho_t(x,y) = 0$ for every $t \le T$. On the other hand, if $\rho_t(x,y) = 0$ for all $t$, then $x = y$. Considering the bounded metric $\bar d(x_0,y_0) = \min\{d(x_0,y_0), 1\}$ and setting $\bar\rho_t(x,y) = \sup_{0 \le s \le t} \bar d(x_s, y_s)$, we can create a metric on $C_S[0,\infty)$ which is separable by giving increasingly small importance to large values of $t$. For example, we can use
$$\bar\rho(x,y) = \int_0^\infty e^{-t}\, \bar\rho_t(x,y)\, dt.$$
Then $(C_S[0,\infty), \bar\rho)$ is separable whenever $(S,d)$ is separable and complete whenever $(S,d)$ is complete.

For the space $D_S[0,\infty)$, unless the jumps of $x$ and $y$ match up exactly, the distance from $x$ to $y$ might be large even when the two paths are close in an intuitive sense. To match up the jumps, we introduce a continuous, strictly increasing function $\gamma : [0,\infty) \to [0,\infty)$ that is one to one and onto and define
$$\rho_t^\gamma(x,y) = \sup_s d(x_{s \wedge t}, y_{\gamma(s) \wedge t})$$
and
$$\rho(x,y) = \inf_\gamma \max\Big\{ \operatorname{ess\,sup}_t |\log \gamma'(t)|,\ \int_0^\infty e^{-t}\, \rho_t^\gamma(x,y)\, dt \Big\}.$$
As before, $(D_S[0,\infty), \rho)$ is separable whenever $(S,d)$ is separable and complete whenever $(S,d)$ is complete.
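A standard example illustrates why the time change $\gamma$ is needed. Take $S = \mathbb{R}$ and the step functions $x = \mathbf{1}_{[1,\infty)}$ and $y = \mathbf{1}_{[1+\epsilon,\infty)}$ in $D_{\mathbb{R}}[0,\infty)$ for small $\epsilon > 0$. Since $x$ and $y$ disagree on $[1, 1+\epsilon)$, the metric $\bar\rho$ above, applied to these paths, satisfies $\bar\rho(x,y) \ge \int_1^\infty e^{-t}\,dt = e^{-1}$ no matter how small $\epsilon$ is. On the other hand, taking $\gamma(s) = (1+\epsilon)s$ for $s \le 1$ and $\gamma(s) = s + \epsilon$ for $s \ge 1$ gives $y_{\gamma(s)} = x_s$ for every $s$, $\operatorname{ess\,sup}_t |\log \gamma'(t)| = \log(1+\epsilon)$, and $\rho_t^\gamma(x,y) = 0$ except for $t \in [1, 1+\epsilon)$, so that $\rho(x,y) \le \max\{\log(1+\epsilon), \epsilon\} \to 0$ as $\epsilon \to 0$. Thus two paths are close in $(D_S[0,\infty), \rho)$ precisely when their jumps can be aligned by a small distortion of time.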
Exercise 1.9. $C_{\mathbb{R}}[0,\infty)$ with the $\rho_\infty$ metric is not separable. Hint: Find an uncountable collection of real-valued continuous functions so that the distance between any two of them is 1.

For a stochastic process $X$ with sample paths in $D_S[0,\infty)$, we have the following moment condition that guarantees that $X$ has a $C_S[0,\infty)$ version.

Proposition 1.10. Let $(S,d)$ be a separable metric space and set $d_1(x,y) = \min\{d(x,y), 1\}$. Let $X$ be a process with sample paths in $D_S[0,\infty)$. Suppose that for each $T > 0$ there exist $\alpha > 1$, $\beta > 0$, and $C > 0$ such that
$$E[d_1(X_t, X_s)^\beta] \le C(t-s)^\alpha, \qquad 0 \le s \le t \le T. \tag{1.1}$$
Then almost all sample paths of $X$ belong to $C_S[0,\infty)$.

Proof. Let $T$ be a positive integer.

Claim.
$$\sum_{0 < t \le T} d_1(X_t, X_{t-})^\beta \le \liminf_{N \to \infty} \sum_{k=1}^{2^N T} d_1(X_{k2^{-N}}, X_{(k-1)2^{-N}})^\beta.$$

First note that, by Lemma 1.5, the left side is the sum of a countable number of terms. In addition, note that for each $n \ge 1$, $\{t \in [0,T];\ d_1(X_t, X_{t-}) > 1/n\}$ is a finite set. Thus, for $N$ sufficiently large, these points are isolated by the $2^{-N}$ partition of $[0,T]$, and in the limit these jumps are captured. Consequently,
$$\sum_{0 < t \le T} d_1(X_t, X_{t-})^\beta\, I_{\{d_1(X_t, X_{t-}) > 1/n\}} \le \liminf_{N \to \infty} \sum_{k=1}^{2^N T} d_1(X_{k2^{-N}}, X_{(k-1)2^{-N}})^\beta. \tag{1.2}$$
Note that the left side of expression (1.2) increases as $n$ increases. Thus, let $n \to \infty$ to establish the claim.

By Fatou's lemma and the moment inequality (1.1),
$$E\Big[\sum_{0 < t \le T} d_1(X_t, X_{t-})^\beta\Big] \le \liminf_{n \to \infty} \sum_{k=1}^{2^n T} E[d_1(X_{k2^{-n}}, X_{(k-1)2^{-n}})^\beta] \le \liminf_{n \to \infty} \sum_{k=1}^{2^n T} C 2^{-n\alpha} = \liminf_{n \to \infty} C T 2^{n(1-\alpha)} = 0.$$
Hence, almost surely, $X$ has no jumps in $[0,T]$, and since $T$ was an arbitrary positive integer, almost all sample paths of $X$ belong to $C_S[0,\infty)$.
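As an illustration of how the moment condition (1.1) is checked, let $X$ be a version of standard Brownian motion with sample paths in $D_{\mathbb{R}}[0,\infty)$. Since $X_t - X_s$ is normal with mean $0$ and variance $t-s$,
$$E[d_1(X_t,X_s)^4] \le E[(X_t - X_s)^4] = 3(t-s)^2,$$
so (1.1) holds with $\beta = 4$, $\alpha = 2$, and $C = 3$, and Proposition 1.10 yields the almost sure continuity of the Brownian sample paths. By contrast, for a Poisson process $N$ with rate $\lambda$ we have $d_1(N_t,N_s) = I_{\{N_t - N_s \ge 1\}}$, so for every $\beta > 0$,
$$E[d_1(N_t,N_s)^\beta] = P\{N_t - N_s \ge 1\} = 1 - e^{-\lambda(t-s)},$$
which behaves like $\lambda(t-s)$ as $t - s \downarrow 0$ and so admits no bound of the form $C(t-s)^\alpha$ with $\alpha > 1$; this is consistent with the fact that its sample paths genuinely jump.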
Recommended publications
  • Superprocesses and McKean-Vlasov Equations with Creation of Mass
    Superprocesses and McKean-Vlasov equations with creation of mass. L. Overbeck, Department of Statistics, University of California, Berkeley, 367 Evans Hall, Berkeley, CA 94720, U.S.A. Abstract: Weak solutions of McKean-Vlasov equations with creation of mass are given in terms of superprocesses. The solutions can be approximated by a sequence of non-interacting superprocesses or by the mean-field of multitype superprocesses with mean-field interaction. The latter approximation is associated with a propagation of chaos statement for weakly interacting multitype superprocesses. Running title: Superprocesses and McKean-Vlasov equations. 1 Introduction. Superprocesses are useful in solving nonlinear partial differential equations of the type $\Delta f = f^{1+\beta}$, $\beta \in (0,1]$, cf. [Dy]. We now change the point of view and show how they provide stochastic solutions of nonlinear partial differential equations of McKean-Vlasov type, i.e. we want to find weak solutions of
    $$\frac{\partial}{\partial t}\mu_t = \sum_{i,j=1}^{d} \frac{\partial^2}{\partial x_i\,\partial x_j}\bigl(a_{ij}(x,\mu_t)\,\mu_t\bigr) + d(x,\mu_t)\,\mu_t + \sum_{i=1}^{d} \frac{\partial}{\partial x_i}\bigl(b_i(x,\mu_t)\,\mu_t\bigr). \tag{1.1}$$
    A weak solution $\mu = (\mu_s)_{s \le T} \in C([0,T]; M(\mathbb{R}^d))$ satisfies
    $$\langle \mu_t, f\rangle = \langle \mu_0, f\rangle + \int_0^t \Bigl\langle \mu_s,\ \sum_{i,j} a_{ij}(\cdot,\mu_s)\,\frac{\partial^2 f}{\partial x_i\,\partial x_j} + d(\cdot,\mu_s)\, f + \sum_i b_i(\cdot,\mu_s)\,\frac{\partial f}{\partial x_i} \Bigr\rangle\, ds.$$
    Equation (1.1) generalizes McKean-Vlasov equations of two different types. (Supported by a Fellowship of the Deutsche Forschungsgemeinschaft. On leave from the Universität Bonn, Institut für Angewandte Mathematik, Wegelerstr. 6, 53115 Bonn, Germany.)
  • On Hitting Times for Jump-Diffusion Processes with Past Dependent Local Characteristics
    Stochastic Processes and their Applications 47 (1993) 131-142, North-Holland. On hitting times for jump-diffusion processes with past dependent local characteristics. Manfred Schäl, Institut für Angewandte Mathematik, Universität Bonn, Germany. Received 30 March 1992, Revised 14 September 1992. It is well known how to apply a martingale argument to obtain the Laplace transform of the hitting time of zero (say) for certain processes starting at a positive level and being skip-free downwards. These processes have stationary and independent increments. In the present paper the method is extended to a more general class of processes the increments of which may depend both on time and past history. As a result a generalized Laplace transform is obtained which can be used to derive sharp bounds for the mean and the variance of the hitting time. The bounds also solve the control problem of how to minimize or maximize the expected time to reach zero. AMS 1980 Subject Classification: Primary 60G40; Secondary 60K30. hitting times * Laplace transform * expectation and variance * martingales * diffusions * compound Poisson process * random walks * M/G/1 queue * perturbation * control. 1. Introduction. Consider a process $X_t = S_t - ct + \sigma W_t$, where $S$ is a compound Poisson process with Poisson parameter $\lambda$ which is disturbed by an independent standard Wiener process $W$. This is the point of view of Dufresne and Gerber (1991). One can also look on $X$ as a diffusion disturbed by jumps. This is the point of view of Ethier and Kurtz (1986, Section 4.10). Here $\sigma$ and $\lambda$ may be zero so that the jump term or the diffusion term may vanish.
  • The Measurability of Hitting Times
    Elect. Comm. in Probab. 15 (2010), 99–105 ELECTRONIC COMMUNICATIONS in PROBABILITY THE MEASURABILITY OF HITTING TIMES RICHARD F. BASS Department of Mathematics, University of Connecticut, Storrs, CT 06269-3009 email: [email protected] Submitted January 18, 2010, accepted in final form March 8, 2010 AMS 2000 Subject classification: Primary: 60G07; Secondary: 60G05, 60G40 Keywords: Stopping time, hitting time, progressively measurable, optional, predictable, debut theorem, section theorem Abstract Under very general conditions the hitting time of a set by a stochastic process is a stopping time. We give a new simple proof of this fact. The section theorems for optional and predictable sets are easy corollaries of the proof. 1 Introduction A fundamental theorem in the foundations of stochastic processes is the one that says that, under very general conditions, the first time a stochastic process enters a set is a stopping time. The proof uses capacities, analytic sets, and Choquet’s capacitability theorem, and is considered hard. To the best of our knowledge, no more than a handful of books have an exposition that starts with the definition of capacity and proceeds to the hitting time theorem. (One that does is [1].) The purpose of this paper is to give a short and elementary proof of this theorem. The proof is simple enough that it could easily be included in a first year graduate course in probability. In Section 2 we give a proof of the debut theorem, from which the measurability theorem follows. As easy corollaries we obtain the section theorems for optional and predictable sets. This argument is given in Section 3.
  • Lecture 19 Semimartingales
    Lecture 19: Semimartingales, 1 of 10. Course: Theory of Probability II. Term: Spring 2015. Instructor: Gordan Zitkovic. Lecture 19: Semimartingales. Continuous local martingales. While tailor-made for the $L^2$-theory of stochastic integration, martingales in $\mathcal{M}_0^{2,c}$ do not constitute a large enough class to be ultimately useful in stochastic analysis. It turns out that even the class of all martingales is too small. When we restrict ourselves to processes with continuous paths, a naturally stable family turns out to be the class of so-called local martingales. Definition 19.1 (Continuous local martingales). A continuous adapted stochastic process $\{M_t\}_{t\in[0,\infty)}$ is called a continuous local martingale if there exists a sequence $\{\tau_n\}_{n\in\mathbb{N}}$ of stopping times such that 1. $\tau_1 \le \tau_2 \le \dots$ and $\tau_n \to \infty$, a.s., and 2. $\{M_t^{\tau_n}\}_{t\in[0,\infty)}$ is a uniformly integrable martingale for each $n \in \mathbb{N}$. In that case, the sequence $\{\tau_n\}_{n\in\mathbb{N}}$ is called the localizing sequence for (or is said to reduce) $\{M_t\}_{t\in[0,\infty)}$. The set of all continuous local martingales $M$ with $M_0 = 0$ is denoted by $\mathcal{M}_0^{loc,c}$. Remark 19.2. 1. There is a nice theory of local martingales which are not necessarily continuous (RCLL), but, in these notes, we will focus solely on the continuous case. In particular, a “martingale” or a “local martingale” will always be assumed to be continuous. 2. While we will only consider local martingales with $M_0 = 0$ in these notes, this assumption is not standard, so we don’t put it into the definition of a local martingale. 3.
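    A classical example illustrating Definition 19.1 (added here for orientation; it is not part of the excerpt above): let $B$ be a three-dimensional Brownian motion started at a point $x \ne 0$ and set $M_t = 1/|B_t|$. With the stopping times $\tau_n = \inf\{t \ge 0 : |B_t| \le 1/n\}$ one has $\tau_n \to \infty$ a.s., since three-dimensional Brownian motion never hits the origin and is transient, and each stopped process $M^{\tau_n}$ is bounded by $n$ and is a martingale because $y \mapsto 1/|y|$ is harmonic away from the origin; bounded martingales are uniformly integrable, so $M - M_0 \in \mathcal{M}_0^{loc,c}$. Nevertheless $M$ is not a true martingale, since $E[M_t] \to 0$ as $t \to \infty$ and so the expectation is not constant in $t$. This is the standard example showing that the class of local martingales is strictly larger than the class of martingales.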
  • Deep Optimal Stopping
    Journal of Machine Learning Research 20 (2019) 1-25 Submitted 4/18; Revised 1/19; Published 4/19 Deep Optimal Stopping Sebastian Becker [email protected] Zenai AG, 8045 Zurich, Switzerland Patrick Cheridito [email protected] RiskLab, Department of Mathematics ETH Zurich, 8092 Zurich, Switzerland Arnulf Jentzen [email protected] SAM, Department of Mathematics ETH Zurich, 8092 Zurich, Switzerland Editor: Jon McAuliffe Abstract In this paper we develop a deep learning method for optimal stopping problems which directly learns the optimal stopping rule from Monte Carlo samples. As such, it is broadly applicable in situations where the underlying randomness can efficiently be simulated. We test the approach on three problems: the pricing of a Bermudan max-call option, the pricing of a callable multi barrier reverse convertible and the problem of optimally stopping a fractional Brownian motion. In all three cases it produces very accurate results in high-dimensional situations with short computing times. Keywords: optimal stopping, deep learning, Bermudan option, callable multi barrier reverse convertible, fractional Brownian motion 1. Introduction We consider optimal stopping problems of the form $\sup_\tau E\, g(\tau, X_\tau)$, where $X = (X_n)_{n=0}^N$ is an $\mathbb{R}^d$-valued discrete-time Markov process and the supremum is over all stopping times $\tau$ based on observations of $X$. Formally, this just covers situations where the stopping decision can only be made at finitely many times. But practically all relevant continuous-time stopping problems can be approximated with time-discretized versions. The Markov assumption means no loss of generality. We make it because it simplifies the presentation and many important problems already are in Markovian form.
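    For orientation (a standard fact about the discrete-time setting described above, not a claim about the paper's algorithm): the value $\sup_\tau E\, g(\tau, X_\tau)$ can in principle be computed by backward induction on the Snell envelope, $V_N = g(N, X_N)$ and $V_n = \max\{g(n, X_n),\ E[V_{n+1} \mid X_n]\}$ for $n < N$, and the stopping time $\tau^* = \inf\{n : V_n = g(n, X_n)\}$ is optimal. In high dimensions the conditional expectations $E[V_{n+1} \mid X_n]$ are the expensive part, which is why Monte Carlo and learning-based approximations are used.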
  • Optimal Stopping Time and Its Applications to Economic Models
    OPTIMAL STOPPING TIME AND ITS APPLICATIONS TO ECONOMIC MODELS SIVAKORN SANGUANMOO Abstract. This paper gives an introduction to an optimal stopping problem by using the idea of an Itô diffusion. We then prove the existence and uniqueness theorems for optimal stopping, which will help us to explicitly solve optimal stopping problems. We then apply our optimal stopping theorems to the economic model of stock prices introduced by Samuelson [5] and the economic model of capital goods. Both economic models show that those related agents are risk-loving. Contents 1. Introduction 1 2. Preliminaries: Itô diffusion and boundary value problems 2 2.1. Itô diffusions and their properties 2 2.2. Harmonic function and the stochastic Dirichlet problem 4 3. Existence and uniqueness theorems for optimal stopping 5 4. Applications to an economic model: Stock prices 11 5. Applications to an economic model: Capital goods 14 Acknowledgments 18 References 19 1. Introduction Since there are several outside factors in the real world, many variables in classical economic models are considered as stochastic processes. For example, the prices of stocks and oil do not just increase due to inflation over time but also fluctuate due to unpredictable situations. Optimal stopping problems are questions that ask when to stop stochastic processes in order to maximize utility. For example, when should one sell an asset in order to maximize his profit? When should one stop searching for the keyword to obtain the best result? Therefore, many recent economic models often include optimal stopping problems. For example, by using optimal stopping, Choi and Smith [2] explored the effectiveness of the search engine, and Albrecht, Anderson, and Vroman [1] discovered how the search cost affects the search for job candidates.
  • Stochastic Processes and Their Applications to Change Point Detection Problems
    City University of New York (CUNY) CUNY Academic Works All Dissertations, Theses, and Capstone Projects 6-2016 Stochastic Processes And Their Applications To Change Point Detection Problems Heng Yang Graduate Center, City University of New York How does access to this work benefit you? Let us know! More information about this work at: https://academicworks.cuny.edu/gc_etds/1316 Discover additional works at: https://academicworks.cuny.edu This work is made publicly available by the City University of New York (CUNY). Contact: [email protected] Stochastic Processes And Their Applications To Change Point Detection Problems by Heng Yang A dissertation submitted to the Graduate Faculty in Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy, The City University of New York 2016 © 2016 Heng Yang All Rights Reserved This manuscript has been read and accepted by the Graduate Faculty in Mathematics in satisfaction of the dissertation requirement for the degree of Doctor of Philosophy. Professor Olympia Hadjiliadis Date Chair of Examining Committee Professor Ara Basmajian Date Executive Officer Professor Olympia Hadjiliadis Professor Elena Kosygina Professor Tobias Schäfer Professor Mike Ludkovski Supervisory Committee The City University of New York Stochastic Processes And Their Applications To Change Point Detection Problems by Heng Yang Adviser: Professor Olympia Hadjiliadis Abstract This dissertation addresses the change point detection problem when either the post-change distribution has uncertainty or the post-change distribution is time inhomogeneous. In the case of post-change distribution uncertainty, attention is drawn to the construction of a family of composite stopping times.
  • arXiv:2003.06465v2 [math.PR]
    OPTIMAL STOPPING OF STOCHASTIC TRANSPORT MINIMIZING SUBMARTINGALE COSTS NASSIF GHOUSSOUB, YOUNG-HEON KIM AND AARON ZEFF PALMER Abstract. Given a stochastic state process (Xt)t and a real-valued submartingale cost process (St)t, we characterize optimal stopping times τ that minimize the expectation of Sτ while realizing given initial and target distributions µ and ν, i.e., X0 ∼ µ and Xτ ∼ ν. A dual optimization problem is considered and shown to be attained under suitable conditions. The optimal solution of the dual problem then provides a contact set, which characterizes the location where optimal stopping can occur. The optimal stopping time is uniquely determined as the first hitting time of this contact set provided we assume a natural structural assumption on the pair (Xt,St)t, which generalizes the twist condition on the cost in optimal transport theory. This paper extends the Brownian motion settings studied in [15, 16] and deals with more general costs. Contents 1. Introduction 1 2. Weak Duality and Dynamic Programming 4 2.1. Notation and Definitions 4 2.2. Dual formulation 5 2.3. Dynamic programming dual formulation 6 3. Pointwise Bounds 8 4. Dual Attainment in the Discrete Setting 10 5. Dual Attainment in Hilbert Space 11 5.1. When the cost is the expected stopping time 14 6. The General Twist Condition 14 Appendix A. Recurrent Processes 16 A.1. Dual attainment 17 A.2. When the cost is the expected stopping time 20 Appendix B. Path Monotonicity 21 References 21 arXiv:2003.06465v2 [math.PR] 22 Dec 2020 1. Introduction Given a state process (Xt)t valued in a complete metric space O, an initial distribution µ, and a target distribution ν on O, we consider the set T (µ,ν) of -possibly randomized- stopping times τ that satisfy X0 ∼ µ and Xτ ∼ ν, where here, and in the sequel, the notation Y ∼ λ means that the law of the random variable Y is the probability measure λ.
  • The Ito Integral
    Notes on the Itô Calculus Steven P. Lalley May 15, 2012 1 Continuous-Time Processes: Progressive Measurability 1.1 Progressive Measurability Measurability issues are a bit like plumbing – a bit ugly, and most of the time you would prefer that it remains hidden from view, but sometimes it is unavoidable that you actually have to open it up and work on it. In the theory of continuous-time stochastic processes, measurability problems are usually more subtle than in discrete time, primarily because sets of measure 0 can add up to something significant when you put together uncountably many of them. In stochastic integration there is another issue: we must be sure that any stochastic processes $X_t(\omega)$ that we try to integrate are jointly measurable in $(t,\omega)$, so as to ensure that the integrals are themselves random variables (that is, measurable in $\omega$). Assume that $(\Omega, \mathcal{F}, P)$ is a fixed probability space, and that all random variables are $\mathcal{F}$-measurable. A filtration is an increasing family $\mathbb{F} := \{\mathcal{F}_t\}_{t \in J}$ of sub-$\sigma$-algebras of $\mathcal{F}$ indexed by $t \in J$, where $J$ is an interval, usually $J = [0,\infty)$. Recall that a stochastic process $\{Y_t\}_{t \ge 0}$ is said to be adapted to a filtration if for each $t \ge 0$ the random variable $Y_t$ is $\mathcal{F}_t$-measurable, and that a filtration is admissible for a Wiener process $\{W_t\}_{t \ge 0}$ if for each $t > 0$ the post-$t$ increment process $\{W_{t+s} - W_t\}_{s \ge 0}$ is independent of $\mathcal{F}_t$. Definition 1. A stochastic process $\{X_t\}_{t \ge 0}$ is said to be progressively measurable if for every $T \ge 0$ it is, when viewed as a function $X(t,\omega)$ on the product space $[0,T] \times \Omega$, measurable relative to the product $\sigma$-algebra $\mathcal{B}[0,T] \times \mathcal{F}_T$.
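    A worked fact that is often paired with this definition (added for illustration; it is not part of the excerpt above): every adapted process with right-continuous paths is progressively measurable. Indeed, fix $T \ge 0$ and define $X^{(n)}(t,\omega) = X((k+1)T2^{-n},\omega)$ for $t \in (kT2^{-n}, (k+1)T2^{-n}]$, $k = 0,\dots,2^n - 1$, together with $X^{(n)}(0,\omega) = X(0,\omega)$. Each $X^{(n)}$ is piecewise constant in $t$ and uses only values of $X$ at times in $[0,T]$, so it is $\mathcal{B}[0,T] \times \mathcal{F}_T$-measurable; by right continuity $X^{(n)}(t,\omega) \to X(t,\omega)$ for every $(t,\omega)$, and a pointwise limit of product-measurable functions is product-measurable.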
  • Tight Inequalities Among Set Hitting Times in Markov Chains
    PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY TIGHT INEQUALITIES AMONG SET HITTING TIMES IN MARKOV CHAINS SIMON GRIFFITHS, ROSS J. KANG, ROBERTO IMBUZEIRO OLIVEIRA, AND VIRESH PATEL Abstract. Given an irreducible discrete-time Markov chain on a finite state space, we consider the largest expected hitting time T (α) of a set of stationary measure at least α for α ∈ (0, 1). We obtain tight inequalities among the values of T (α) for different choices of α. One consequence is that T (α) ≤ T (1/2)/α for all α < 1/2. As a corollary we have that, if the chain is lazy in a certain sense as well as reversible, then T (1/2) is equivalent to the chain’s mixing time, answering a question of Peres. We furthermore demonstrate that the inequalities we establish give an almost everywhere pointwise limiting characterisation of possible hitting time functions T (α) over the domain α ∈ (0, 1/2]. 1. Introduction Hitting times are a classical topic in the theory of finite Markov chains, with connections to mixing times, cover times and electrical network representations [5, 6]. In this paper, we consider a natural family of extremal problems for maximum expected hitting times. In contrast to most earlier work on hitting times that considered the maximum expected hitting times of individual states, we focus on hitting sets of states of at least a given stationary measure. Informally, we are interested in the following basic question: how much more difficult is it to hit a smaller set than a larger one? (We note that other, quite different extremal problems about hitting times have been considered, e.g.
  • Superdiffusions with Large Mass Creation -- Construction and Growth Estimates
    SUPERDIFFUSIONS WITH LARGE MASS CREATION — CONSTRUCTION AND GROWTH ESTIMATES ZHEN-QING CHEN AND JÁNOS ENGLÄNDER Abstract. Superdiffusions corresponding to differential operators of the form $Lu + \beta u - \alpha u^2$ with large mass creation term $\beta$ are studied. Our construction for superdiffusions with large mass creations works for the branching mechanism $\beta u - \alpha u^{1+\gamma}$, $0 < \gamma < 1$, as well. Let $D \subseteq \mathbb{R}^d$ be a domain in $\mathbb{R}^d$. When $\beta$ is large, the generalized principal eigenvalue $\lambda_c$ of $L + \beta$ in $D$ is typically infinite. Let $\{T_t;\ t \ge 0\}$ denote the Schrödinger semigroup of $L + \beta$ in $D$ with zero Dirichlet boundary condition. Under the mild assumption that there exists an $0 < h \in C^2(D)$ so that $T_t h$ is finite-valued for all $t \ge 0$, we show that there is a unique $\mathcal{M}_{loc}(D)$-valued Markov process that satisfies a log-Laplace equation in terms of the minimal nonnegative solution to a semilinear initial value problem. Although for super-Brownian motion (SBM) this assumption requires $\beta$ be less than quadratic, the quadratic case will be treated as well. When $\lambda_c = \infty$, the usual machinery, including martingale methods and PDE as well as other similar techniques cease to work effectively, both for the construction and for the investigation of the large time behavior of the superdiffusions. In this paper, we develop the following two new techniques in the study of local/global growth of mass and for the spread of the superdiffusions: • a generalization of the Fleischmann-Swart ‘Poissonization-coupling,’ linking superprocesses with branching diffusions; • the introduction of a new concept: the ‘p-generalized principal eigenvalue.’ The precise growth rate for the total population of SBM with $\alpha(x) = \beta(x) = 1 + |x|^p$ for $p \in [0, 2]$ is given in this paper.
  • First Passage Times of Diffusion Processes and Their Applications to Finance
    First Passage Times of Diffusion Processes and Their Applications to Finance A thesis presented for the degree of Doctor of Philosophy Luting Li Department of Statistics The London School of Economics and Political Science United Kingdom April 2019 Declaration I certify that the thesis I have presented for examination for the Ph.D. degree of the London School of Economics and Political Science is solely my own work other than where I have clearly indicated that it is the work of others (in which case the extent of any work carried out jointly by me and any other person is clearly identified in it). The copyright of this thesis rests with the author. Quotation from it is permitted, provided that full acknowledgement is made. This thesis may not be reproduced without the prior written consent of the author. I warrant that this authorisation does not, to the best of my belief, infringe the rights of any third party. Statement of conjoint work A version of parts of Chapters 3-6 has been submitted for publication in two articles jointly authored with A. Dassios. A version of Chapter 7 has been submitted for publication jointly authored with H. Xing. i Acknowledgements Just before writing down the words appearing on this page, I sit at my desk on the 7th floor of the Columbia House, recalling those countless yet enjoyable days and nights that I have spent on pursuing knowledge and exploring the world. To me, the marvellous beauty of this journey is once in a lifetime. I would like to, first and foremost, express my sincere gratitude to my supervisors Angelos Dassios and Hao Xing, for providing me the opportunity of carrying out my research under their supervisions, for their constant support and the freedom they granted me to conduct my work according to my preferences.