Probability Approximations Via the Poisson Clumping Heuristic


Preface

If you place a large number of points randomly in the unit square, what is the distribution of the radius of the largest circle containing no points? Of the smallest circle containing 4 points? Why do Brownian sample paths have local maxima but not points of increase, and how nearly do they have points of increase? Given two long strings of letters drawn i.i.d. from a finite alphabet, how long is the longest consecutive (resp. non-consecutive) substring appearing in both strings? If an imaginary particle performs a simple random walk on the vertices of a high-dimensional cube, how long does it take to visit every vertex? If a particle moves under the influence of a potential field and random perturbations of velocity, how long does it take to escape from a deep potential well? If cars on a freeway move with constant speed (random from car to car), what is the longest stretch of empty road you will see during a long journey? If you take a large i.i.d. sample from a 2-dimensional rotationally-invariant distribution, what is the maximum over all half-spaces of the deviation between the empirical and true distributions?

These questions cover a wide cross-section of theoretical and applied probability. The common theme is that they all deal with maxima or minima, in some sense. The purpose of this book is to explain a simple idea which enables one to write down, with little effort, approximate solutions to such questions. Let us try to say this idea in one paragraph.

(a) Problems about random extrema can often be translated into problems about sparse random sets in d ≥ 1 dimensions.

(b) Sparse random sets often resemble i.i.d. random clumps thrown down randomly (i.e., centered at points of a Poisson process).

(c) The problem of interest reduces to estimating mean clump size.
(d) This mean clump size can be estimated by approximating the underlying random process locally by a simpler, known process for which explicit calculations are possible.

(Part (b) explains the name Poisson clumping heuristic.)

This idea is known, though rarely explicitly stated, in several specific settings, but its power and range seem not to be appreciated. I assert that this idea provides the correct way to look at extrema and rare events in a wide range of probabilistic settings: to demonstrate this assertion, this book treats over 100 examples. Our arguments are informal, and we are not going to prove anything. This is a rather eccentric format for a mathematics book; some reasons for this format are indicated later. The opening list of problems was intended to persuade every probabilist to read the book! I hope it will appeal to graduate students as well as experts, to applied workers as well as theoreticians. Much of it should be comprehensible to a reader with a knowledge of stochastic processes at the non-measure-theoretic level (Markov chains, Poisson process, renewal theory, introduction to Brownian motion), as provided by the books of Karlin and Taylor (1975; 1982) and Ross (1983). Different chapters are somewhat independent, and the level of sophistication varies. Although the book ranges over many fields of probability theory, in each field we focus narrowly on examples where the heuristic is applicable, so this work does not constitute a complete account of any field. I have tried to maintain an honest "lecture notes" style through the main part of each chapter. At the end of each chapter is a "Commentary" giving references to background material and rigorous results. In giving references I try to give a book or survey article on the field in question, supplemented by recent research papers where appropriate: I do not attempt to attribute results to their discoverers.
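As a concrete taste of steps (a)-(d), here is a minimal sketch (my illustration, not taken from the book) applying the clumping heuristic to head runs in coin tossing, in the spirit of the "patterns in coin-tossing" example listed in the contents. Occurrences of k consecutive heads form clumps of overlapping hits; a clump starts where a tail precedes k heads, so the clump rate per position is (1 - p)p^k, and the heuristic gives P(no head run of length ≥ k in n tosses) ≈ exp(-n(1 - p)p^k).

```python
import math
import random

def heuristic_no_run(n, k, p=0.5):
    """Clump-heuristic approximation to P(longest head run < k in n tosses)."""
    # Declumped (clump-start) rate per position: a tail followed by k heads.
    return math.exp(-n * (1 - p) * p ** k)

def simulate_no_run(n, k, p=0.5, trials=20000, seed=1):
    """Monte Carlo estimate of the same probability, for comparison."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        run, best = 0, 0
        for _ in range(n):
            run = run + 1 if rng.random() < p else 0
            best = max(best, run)
        hits += best < k
    return hits / trials

print(heuristic_no_run(200, 8))  # about 0.677
print(simulate_no_run(200, 8))   # agrees to within Monte Carlo error
```

With n = 200 and k = 8 the heuristic gives about 0.677, and the simulation agrees to within Monte Carlo error; this is the whole pattern of the book's arguments in miniature.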
Almost all the examples are natural (rather than invented to show off the technique), though I haven't always given a thorough explanation of how they arise. The arguments in examples are sometimes deliberately concise. Most results depend on one key calculation, and it is easier to see this in a half-page argument than in a three-page argument. In rigorous treatments it is often necessary to spend much effort in showing that certain effects are ultimately negligible; we simply omit this effort, relying on intuition to see what the dominant effect is. No doubt one or two of our heuristic conclusions are wrong: if heuristic arguments always gave the right answer, then there wouldn't be any point in ever doing rigorous arguments, would there? Various problems which seem interesting and unsolved are noted as "thesis projects", though actually some are too easy, and others too hard, for an average Ph.D. thesis.

The most-studied field of application of the heuristic is to extremes of 1-parameter stationary processes. The standard reference work on this field, Leadbetter et al. (1983), gives theoretical results covering perhaps 10 of our examples. One could write ten similar books, each covering 10 examples from another field. But I don't have the energy or inclination to do so, which is one reason why this book gives only heuristics. Another reason is that connections between examples in different fields are much clearer in the heuristic treatment than in a complete technical treatment, and I hope this book will make these connections more visible.

At the risk of boring some readers and annoying others, here is a paragraph on the philosophy of approximations, heuristics and limit theorems. The proper business of probabilists is calculating probabilities. Often exact calculations are tedious or impossible, so we resort to approximations. A limit theorem is an assertion of the form: "the error in a certain approximation tends to 0 as (say) N → ∞."
Call such a limit theorem naive if there is no explicit error bound in terms of N and the parameters of the underlying process. Such theorems are so prevalent in theoretical and applied probability that people seldom stop to ask their purpose. Given a serious applied problem involving specific parameters, the natural first steps are to seek rough analytic approximations and to run computer simulations; the next step is to do careful numerical analysis. It is hard to give any argument for the relevance of a proof of a naive limit theorem, except as a vague reassurance that your approximation is sensible, and a good heuristic argument seems equally reassuring. For the theoretician, the defense of naive limit theorems is "I want to prove something, and that's all I can prove". There are fields which are sufficiently hard that this is a reasonable attitude (some of the areas in Chapters G, I, J for example), but in most of the fields in this book the theoretical tools for proving naive limit theorems have been sufficiently developed that such theorems are no longer of serious theoretical research interest (although a few books consolidating the techniques would be useful).

Most of our approximations in particular examples correspond to known naive limit theorems, mentioned in the Commentaries. I deliberately de-emphasize this aspect, since as argued above I regard the naive limit theory as irrelevant for applications and mostly trite as theory. On the other hand, explicit error bounds are plainly relevant for applications and interesting as theory (because they are difficult, for a start!). In most of our examples, explicit error bounds are not known: I regard this as an important area for future research. Stein's method is a powerful modern tool for getting explicit bounds in "combinatorial" type examples, whose potential is not widely realized. Hopefully other tools will become available in the future.
Acknowledgements: As someone unable to recollect what I had for dinner last night, I am even more unable to recollect the many people who (consciously or unconsciously) provided sources of the examples; but I thank them. Courses based on partial early drafts of the book were given in Berkeley in 1984 and Cornell in 1986, and I thank the audiences for their feedback. In particular, I thank Persi Diaconis, Rick Durrett, Harry Kesten, V. Anantharam and Jim Pitman for helpful comments. I also thank Pilar Fresnedo for drawing the diagrams, and Ed Sznyter for a great job converting my haphazard two-fingered typing into this elegant LaTeX book.

Contents

A The Heuristic
A1 The M/M/1 queue.
A2 Mosaic processes on R2.
A3 Mosaic processes on other spaces.
A4 The heuristic.
A5 Estimating clump sizes.
A6 The harmonic mean formula.
A7 Conditioning on semi-local maxima.
A8 The renewal-sojourn method.
A9 The ergodic-exit method.
A10 Limit assertions.
A11-A21 Commentary

B Markov Chain Hitting Times
B1 Introduction.
B2 The heuristic for Markov hitting times.
B3 Example: Basic single server queue.
B4 Example: Birth-and-death processes.
B5 Example: Patterns in coin-tossing.
B6 Example: Card-shuffling.
B7 Example: Random walk on Z^d mod N.
B8 Example: Random
Recommended publications
  • Introduction to Lévy Processes
    Introduction to Lévy Processes. Huang Lorick ([email protected]). Document type: lecture notes; typos, errors, and imprecisions are expected. Comments are welcome! This version is available at http://perso.math.univ-toulouse.fr/lhuang/enseignements/. Year of publication: 2021. This work is licensed under a Creative Commons Attribution 4.0 International license: https://creativecommons.org/licenses/by/4.0/

    Contents: 1. Introduction and Examples (infinitely divisible distributions; examples of infinitely divisible distributions; the Lévy-Khintchine formula; digression on relativity). 2. Lévy processes (definition of a Lévy process; examples; exploring the jumps of a Lévy process). 3. Proof of the Lévy-Khintchine formula (the Lévy-Itô decomposition; consequences of the Lévy-Itô decomposition; exercises; discussion). 4. Lévy processes as Markov processes (properties of the semi-group; the generator; recurrence and transience; fractional derivatives). 5. Elements of stochastic calculus with jumps (example of use in applications; stochastic integration; construction of the stochastic integral; quadratic variation and Itô formula with jumps; stochastic differential equations).

    In this introductory chapter, we start by defining the notion of infinitely divisible distributions. We then give examples of such distributions and end this chapter by stating the celebrated Lévy-Khintchine formula.
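The notion of infinite divisibility that opens these notes is easy to probe numerically. The sketch below (an illustration of mine, not part of the notes) checks that a Poisson(λ) variable has the same distribution as a sum of n independent Poisson(λ/n) variables, which is exactly what infinite divisibility demands; the Poisson sampler is the standard multiplication method.

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) variate by the multiplication method (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def empirical_pmf(sampler, trials, rng):
    """Empirical probability mass function from repeated sampling."""
    counts = {}
    for _ in range(trials):
        x = sampler(rng)
        counts[x] = counts.get(x, 0) + 1
    return {k: v / trials for k, v in counts.items()}

rng = random.Random(7)
lam, n = 3.0, 10
direct = empirical_pmf(lambda r: poisson_sample(lam, r), 50000, rng)
summed = empirical_pmf(
    lambda r: sum(poisson_sample(lam / n, r) for _ in range(n)), 50000, rng
)
# The two empirical pmfs agree up to Monte Carlo noise,
# e.g. both put probability near exp(-3)*3**3/3! at the value 3.
print(direct.get(3, 0), summed.get(3, 0))
```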
  • Poisson Processes Stochastic Processes
    Poisson Processes. Stochastic Processes, UC3M, Feb. 2012.

    Exponential random variables. A random variable T has exponential distribution with rate λ > 0 if its probability density function can be written as f(t) = λ e^{-λt} 1_{(0,∞)}(t). We summarize the above by T ∼ exp(λ). The cumulative distribution function of an exponential random variable is F(t) = P(T ≤ t) = (1 − e^{-λt}) 1_{(0,∞)}(t), and the tail, expectation and variance are P(T > t) = e^{-λt}, E[T] = λ^{-1}, and Var(T) = λ^{-2}. The exponential random variable has the lack of memory property P(T > t + s | T > t) = P(T > s).

    Exponential races. In what follows, T1, ..., Tn are independent r.v., with Ti ∼ exp(λi). P1: min(T1, ..., Tn) ∼ exp(λ1 + ··· + λn). P2: P(T1 < T2) = λ1/(λ1 + λ2). P3: P(Ti = min(T1, ..., Tn)) = λi/(λ1 + ··· + λn). P4: If λi = λ, then Sn = T1 + ··· + Tn ∼ Γ(n, λ); that is, Sn has probability density function f_{Sn}(s) = λ e^{-λs} (λs)^{n-1}/(n-1)! 1_{(0,∞)}(s).

    The Poisson process as a renewal process. Let T1, T2, ... be a sequence of i.i.d. nonnegative r.v. (interarrival times). Define the arrival times Sn = T1 + ··· + Tn if n ≥ 1 and S0 = 0. The process N(t) = max{n : Sn ≤ t} is called a renewal process. If the common distribution of the times is the exponential distribution with rate λ, then the process is called a Poisson process with rate λ. Lemma: N(t) ∼ Poisson(λt), and N(t + s) − N(s), t ≥ 0, is a Poisson process independent of (N(u), u ≤ s).

    The Poisson process as a Lévy process. A stochastic process {X(t), t ≥ 0} is a Lévy process if it verifies the following properties: 1.
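Facts P1 and P2 about exponential races are easy to confirm by simulation. The following sketch (mine, not from the original notes) races independent exponentials and checks the mean of the minimum and the probability that the first racer wins.

```python
import random

def race(rates, trials=100000, seed=0):
    """Race independent exponentials with the given rates.

    Returns (mean of the minimum, fraction of races won by racer 0).
    By P1 the mean of the minimum should be 1/sum(rates);
    by P2 racer 0 should win with probability rates[0]/sum(rates).
    """
    rng = random.Random(seed)
    total_min, wins_first = 0.0, 0
    for _ in range(trials):
        ts = [rng.expovariate(r) for r in rates]
        m = min(ts)
        total_min += m
        wins_first += ts[0] == m
    return total_min / trials, wins_first / trials

mean_min, p_first = race([1.0, 3.0])
# E[min] should be 1/(1+3) = 0.25, and P(T1 < T2) = 1/(1+3) = 0.25
print(mean_min, p_first)
```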
  • Detection of Time Reversibility in Time Series by Ordinal Patterns Analysis J
    Detection of time reversibility in time series by ordinal patterns analysis. J. H. Martínez (INSERM-UM1127, Sorbonne Université, Institut du Cerveau et de la Moelle Épinière, France), J. L. Herrera-Diestra (ICTP South American Institute for Fundamental Research, IFT-UNESP, Brazil), and M. Chavez (CNRS UMR7225, Hôpital Pitié-Salpêtrière, France). (Dated: 13 September 2018.)

    Time irreversibility is a common signature of nonlinear processes, and a fundamental property of non-equilibrium systems driven by non-conservative forces. A time series is said to be reversible if its statistical properties are invariant regardless of the direction of time. Here we propose the Time Reversibility from Ordinal Patterns method (TiROP) to assess time-reversibility from an observed finite time series. TiROP captures the information of scalar observations in time forward, as well as its time-reversed counterpart, by means of ordinal patterns. The method compares both underlying information contents by quantifying their (dis)similarity via Jensen-Shannon divergence. The statistic is contrasted with a population of divergences coming from a set of surrogates to unveil the temporal nature and the time scales involved. We tested TiROP on different synthetic and real, linear and nonlinear time series, juxtaposed with results from the classical Ramsey's time reversibility test. Our results depict a novel, fast-computation, and fully data-driven methodology to assess time-reversibility at different time scales with no further assumptions over data. This approach adds new insights into current nonlinear analysis techniques, and could also shed light on determining new physiological biomarkers of high reliability and computational efficiency.
  • Introduction to Lévy Processes
    Introduction to Lévy processes. Graduate lecture, 22 January 2004. Matthias Winkel, Departmental Lecturer (Institute of Actuaries and Aon Lecturer in Statistics). Outline: 1. Random walks and continuous-time limits; 2. Examples; 3. Classification and construction of Lévy processes; 4. Examples; 5. Poisson point processes and simulation.

    1. Random walks and continuous-time limits. Definition 1: Let Yk, k ≥ 1, be i.i.d. Then Sn = Σ_{k=1}^{n} Yk, n ∈ N, is called a random walk. Random walks have stationary and independent increments Yk = Sk − S_{k−1}, k ≥ 1; stationarity means the Yk have identical distribution. Definition 2: A right-continuous process Xt, t ∈ R+, with stationary independent increments is called a Lévy process.

    What are Sn, n ≥ 0, and Xt, t ≥ 0? Stochastic processes; mathematical objects, well-defined, with many nice properties that can be studied. If you don't like this, think of a model for a stock price evolving with time. There are also many other applications. If you worry about negative values, think of logs of prices. What does Definition 2 mean? Increments X_{t_k} − X_{t_{k−1}}, k = 1, ..., n, are independent, and X_{t_k} − X_{t_{k−1}} ∼ X_{t_k − t_{k−1}}, k = 1, ..., n, for all 0 = t_0 < ... < t_n. Right-continuity refers to the sample paths (realisations).

    Can we obtain Lévy processes from random walks? What happens e.g. if we let the time unit tend to zero, i.e. take a more and more remote look at our random walk? If we focus at a fixed time, 1 say, and speed up the process so as to make n steps per time unit, we know what happens; the answer is given by the Central Limit Theorem. Theorem 1 (Lindeberg-Lévy): If σ² = Var(Y1) < ∞, then (Sn − E(Sn))/√n → Z ∼ N(0, σ²) in distribution, as n → ∞.
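Theorem 1 can be watched in action: the short sketch below (illustrative, not from the lecture) simulates a ±1 random walk, rescales S_n by √n, and checks that the empirical mean and variance match the limiting N(0, σ²) with σ² = 1.

```python
import math
import random

def walk_endpoint(n, rng):
    """Endpoint S_n of a simple +/-1 random walk (E[Y] = 0, Var(Y) = 1)."""
    return sum(1 if rng.random() < 0.5 else -1 for _ in range(n))

def scaled_moments(n=400, trials=20000, seed=3):
    """Empirical mean and variance of S_n / sqrt(n) over many walks."""
    rng = random.Random(seed)
    xs = [walk_endpoint(n, rng) / math.sqrt(n) for _ in range(trials)]
    mean = sum(xs) / trials
    var = sum((x - mean) ** 2 for x in xs) / trials
    return mean, var

mean, var = scaled_moments()
print(mean, var)  # near 0 and 1, as the CLT predicts
```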
  • Chapter 1 Poisson Processes
    Chapter 1: Poisson Processes. 1.1 The Basic Poisson Process. The Poisson process is basically a counting process. A Poisson process on the interval [0, ∞) counts the number of times some primitive event has occurred during the time interval [0, t]. The following assumptions are made about the process N(t). (i) The distribution of N(t + h) − N(t) is the same for each h > 0, i.e. is independent of t. (ii) The random variables N(t_j') − N(t_j) are mutually independent if the intervals [t_j, t_j'] are nonoverlapping. (iii) N(0) = 0; N(t) is integer valued, right continuous and nondecreasing in t, with probability 1. (iv) P[N(t + h) − N(t) ≥ 2] = P[N(h) ≥ 2] = o(h) as h → 0.

    Theorem 1.1. Under the above assumptions, the process N(·) has the following additional properties. (1) With probability 1, N(t) is a step function that increases in steps of size 1. (2) There exists a number λ ≥ 0 such that the distribution of N(t + s) − N(s) has a Poisson distribution with parameter λt. (3) The gaps τ1, τ2, ... between successive jumps are independent identically distributed random variables with the exponential distribution: P{τ_j ≥ x} = e^{−λx} for x ≥ 0, and 1 for x ≤ 0. (1.1)

    Proof. Let us divide the interval [0, T] into n equal parts and compute the expected number of intervals with N((k+1)T/n) − N(kT/n) ≥ 2. This expected value is equal to n P[N(T/n) ≥ 2] = n · o(1/n) = o(1) as n → ∞, thereby proving property (1).
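The renewal construction above (i.i.d. exponential gaps τ_j between jumps) can be checked against property (2) of Theorem 1.1: N(t) should be Poisson(λt), so its mean and variance should both equal λt. A hedged numerical sketch, not part of the chapter:

```python
import random

def count_at(t, lam, rng):
    """N(t): number of arrivals by time t, built from exponential gaps."""
    s, n = 0.0, 0
    while True:
        s += rng.expovariate(lam)
        if s > t:
            return n
        n += 1

def moments(t=5.0, lam=2.0, trials=50000, seed=42):
    """Empirical mean and variance of N(t); both should be near lam * t."""
    rng = random.Random(seed)
    xs = [count_at(t, lam, rng) for _ in range(trials)]
    mean = sum(xs) / trials
    var = sum((x - mean) ** 2 for x in xs) / trials
    return mean, var

mean, var = moments()
print(mean, var)  # both near lam * t = 10
```

With t = 5 and λ = 2 both moments come out near 10, the equality of mean and variance being the fingerprint of the Poisson law.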
  • Compound Poisson Processes, Latent Shrinkage Priors and Bayesian
    Bayesian Analysis (2015) 10, Number 2, pp. 247–274. Compound Poisson Processes, Latent Shrinkage Priors and Bayesian Nonconvex Penalization. Zhihua Zhang and Jin Li.

    Abstract: In this paper we discuss Bayesian nonconvex penalization for sparse learning problems. We explore a nonparametric formulation for latent shrinkage parameters using subordinators, which are one-dimensional Lévy processes. We particularly study a family of continuous compound Poisson subordinators and a family of discrete compound Poisson subordinators. We exemplify four specific subordinators: Gamma, Poisson, negative binomial and squared Bessel subordinators. The Laplace exponents of the subordinators are Bernstein functions, so they can be used as sparsity-inducing nonconvex penalty functions. We exploit these subordinators in regression problems, yielding a hierarchical model with multiple regularization parameters. We devise ECME (Expectation/Conditional Maximization Either) algorithms to simultaneously estimate regression coefficients and regularization parameters. The empirical evaluation of simulated data shows that our approach is feasible and effective in high-dimensional data analysis. Keywords: nonconvex penalization, subordinators, latent shrinkage parameters, Bernstein functions, ECME algorithms.

    1 Introduction. Variable selection methods based on penalty theory have received great attention in high-dimensional data analysis. A principled approach is due to the lasso of Tibshirani (1996), which uses the ℓ1-norm penalty. Tibshirani (1996) also pointed out that the lasso estimate can be viewed as the mode of the posterior distribution. Indeed, the ℓ1 penalty can be transformed into the Laplace prior. Moreover, this prior can be expressed as a Gaussian scale mixture. This has thus led to Bayesian developments of the lasso and its variants (Figueiredo, 2003; Park and Casella, 2008; Hans, 2009; Kyung et al., 2010; Griffin and Brown, 2010; Li and Lin, 2010).
  • Irreversibility of Financial Time Series: a Graph-Theoretical Approach
    Irreversibility of financial time series: a graph-theoretical approach. Ryan Flanagan and Lucas Lacasa, School of Mathematical Sciences, Queen Mary University of London.

    The relation between time series irreversibility and entropy production has been recently investigated in thermodynamic systems operating away from equilibrium. In this work we explore this concept in the context of financial time series. We make use of visibility algorithms to quantify, in graph-theoretical terms, the time irreversibility of 35 financial indices evolving over the period 1998–2012. We show that this metric is complementary to standard measures based on volatility, and exploit it both to classify periods of financial stress and to rank companies accordingly. We then validate this approach by finding that a projection in principal components space of financial years, based on time irreversibility features, clusters periods of financial stress apart from stable periods. Relations between irreversibility, efficiency and predictability are briefly discussed.

    I. INTRODUCTION. The quantitative analysis of financial time series [1] is a classical field in econometrics that has received in the last decades valuable inputs from the statistical physics, nonlinear dynamics and complex systems communities (see [2–4] and references therein for seminal contributions). In particular, the presence of long-range dependence, detected via multifractal measures, has been used to quantify the level of development of a given market [5]. This approach was subsequently extended to address the problem of quantifying the degree of market inefficiency [6–10]. Here the adjective efficient refers to a market that is capable of integrating, at any given time, all available information in the price of an asset (the so-called weak form of the Efficient Market Hypothesis).
  • Bayesian Nonparametric Modeling and Its Applications
    University of Technology, Sydney. Doctoral thesis: Bayesian Nonparametric Modeling and Its Applications, by Minqi Li. A thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy in the School of Computing and Communications, Faculty of Engineering and Information Technology, October 2016.

    Declaration of Authorship: I certify that the work in this thesis has not previously been submitted for a degree nor has it been submitted as part of requirements for a degree except as fully acknowledged within the text. I also certify that the thesis has been written by me. Any help that I have received in my research work and the preparation of the thesis itself has been acknowledged. In addition, I certify that all information sources and literature used are indicated in the thesis.

    Abstract: Bayesian nonparametric methods (or nonparametric Bayesian methods) take the benefit of unlimited parameters and unbounded dimensions to reduce the constraints on the parameter assumption and avoid over-fitting implicitly. They have proven to be extremely useful due to their flexibility and applicability to a wide range of problems. In this thesis, we study Bayesian nonparametric theory with Lévy processes and completely random measures (CRM). Several Bayesian nonparametric techniques are presented for computer vision and pattern recognition problems. In particular, our research and contributions focus on the following problems. Firstly, we propose a novel example-based face hallucination method, based on a nonparametric Bayesian model with the assumption that all human faces have similar local pixel structures. We use the distance dependent Chinese restaurant process (ddCRP) to cluster the low-resolution (LR) face image patches and give a matrix-normal prior for learning the mapping dictionaries from LR to the corresponding high-resolution (HR) patches.
  • Construction of Dependent Dirichlet Processes Based on Poisson Processes
    Construction of Dependent Dirichlet Processes based on Poisson Processes. Dahua Lin, Eric Grimson, John Fisher (CSAIL, MIT; [email protected], [email protected], [email protected]).

    Abstract: We present a novel method for constructing dependent Dirichlet processes. The approach exploits the intrinsic relationship between Dirichlet and Poisson processes in order to create a Markov chain of Dirichlet processes suitable for use as a prior over evolving mixture models. The method allows for the creation, removal, and location variation of component models over time while maintaining the property that the random measures are marginally DP distributed. Additionally, we derive a Gibbs sampling algorithm for model inference and test it on both synthetic and real data. Empirical results demonstrate that the approach is effective in estimating dynamically varying mixture models.

    1 Introduction. As the cornerstone of Bayesian nonparametric modeling, Dirichlet processes (DP) [22] have been applied to a wide variety of inference and estimation problems [3, 10, 20], with Dirichlet process mixtures (DPMs) [15, 17] being one of the most successful. DPMs are a generalization of finite mixture models that allow an indefinite number of mixture components. The traditional DPM model assumes that each sample is generated independently from the same DP. This assumption is limiting in cases when samples come from many, yet dependent, DPs. HDPs [23] partially address this modeling aspect by providing a way to construct multiple DPs implicitly depending on each other via a common parent. However, their hierarchical structure may not be appropriate in some problems (e.g.
  • Fractional Poisson Process in Terms of Alpha-Stable Densities
    Fractional Poisson Process in Terms of Alpha-Stable Densities, by Dexter Odchigue Cahoy. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Statistics, Case Western Reserve University, August 2007. Dissertation advisor and committee chair: Dr. Wojbor A. Woyczynski (Professor, Department of Statistics). Committee: Dr. Joe M. Sedransk (Professor, Department of Statistics), Dr. David Gurarie (Professor, Department of Mathematics), Dr. Matthew J. Sobel (Professor, Department of Operations).

    Contents: 1. Motivation and Introduction (motivation; Poisson distribution; Poisson process; α-stable distribution and parameter estimation; outline of the remaining chapters). 2. Generalizations of the Standard Poisson Process (standard Poisson process; standard fractional generalizations I and II; non-standard fractional generalization; fractional compound Poisson process; alternative fractional generalization). 3. Fractional Poisson Process (some known properties of fPp; asymptotic behavior of the waiting time density; simulation of waiting time; the limiting scaled nth arrival time distribution).
  • Compound Poisson Approximation to Estimate the Lévy Density Céline Duval, Ester Mariucci
    Compound Poisson approximation to estimate the Lévy density. Céline Duval and Ester Mariucci. Preprint submitted on 24 Feb 2017. HAL Id: hal-01476401, https://hal.archives-ouvertes.fr/hal-01476401. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    Abstract: We construct an estimator of the Lévy density, with respect to the Lebesgue measure, of a pure jump Lévy process from high frequency observations: we observe one trajectory of the Lévy process over [0, T] at the sampling rate ∆, where ∆ → 0 as T → ∞. The main novelty of our result is that we directly estimate the Lévy density in cases where the process may present infinite activity. Moreover, we study the risk of the estimator with respect to Lp loss functions, 1 ≤ p < ∞, whereas existing results only focus on p ∈ {2, ∞}. The main idea behind the estimation procedure that we propose is to use that "every infinitely divisible distribution is the limit of a sequence of compound Poisson distributions" (see e.g.
  • PHYSICA Time-Reversal Symmetry in Dynamical Systems
    Physica D 112 (1998) 1-39 (Elsevier). Time-reversal symmetry in dynamical systems: A survey. Jeroen S.W. Lamb (Mathematics Institute, University of Warwick, Coventry CV4 7AL, UK) and John A.G. Roberts (School of Mathematics, La Trobe University, Bundoora, Vic. 3083, Australia).

    Abstract: In this paper we survey the topic of time-reversal symmetry in dynamical systems. We begin with a brief discussion of the position of time-reversal symmetry in physics. After defining time-reversal symmetry as it applies to dynamical systems, we then introduce a major theme of our survey, namely the relation of time-reversible dynamical systems to equivariant and Hamiltonian dynamical systems. We follow with a survey of the state of the art on the theory of reversible dynamical systems, including results on symmetric periodic orbits, local bifurcation theory, homoclinic orbits, and renormalization and scaling. Some areas of physics and mathematics in which reversible dynamical systems arise are discussed. In an appendix, we provide an extensive bibliography on the topic of time-reversal symmetry in dynamical systems. 1991 MSC: 58Fxx. Keywords: Dynamical systems; Time-reversal symmetry; Reversibility.

    1. Introduction. Time-reversal symmetry is one of the fundamental symmetries discussed in natural science. Consequently, it arises in many physically motivated dynamical systems, in particular in classical and quantum mechanics. [...] scope of our survey. Our survey also does not include a discussion on reversible cellular automata. For further reading in these areas we recommend the books by Brush [6], Sachs [22] and Hawking [14], and the survey paper by Toffoli and Margolus [23]. Our survey is largely self-contained and accompa-