Advanced Stochastic Processes: Part II

Total Pages: 16

File Type: PDF, Size: 1020 KB

Advanced stochastic processes: Part II
Jan A. Van Casteren

Advanced stochastic processes: Part II, 2nd edition
© 2015 Jan A. Van Casteren & bookboon.com
ISBN 978-87-403-1116-7

Preface. To see Part 1, download: Advanced stochastic processes: Part 1.

Contents

Preface i

Chapter 1. Stochastic processes: prerequisites 1
1. Conditional expectation 2
2. Lemma of Borel-Cantelli 9
3. Stochastic processes and projective systems of measures 10
4. A definition of Brownian motion 16
5. Martingales and related processes 17

Chapter 2. Renewal theory and Markov chains 35
1. Renewal theory 35
2. Some additional comments on Markov processes 61
3. More on Brownian motion 70
4. Gaussian vectors 76
5. Radon-Nikodym Theorem 78
6. Some martingales 78

Chapter 3. An introduction to stochastic processes: Brownian motion, Gaussian processes and martingales 89
1. Gaussian processes 89
2. Brownian motion and related processes 98
3. Some results on Markov processes, on Feller semigroups and on the martingale problem 117
4. Martingales, submartingales, supermartingales and semimartingales 147
5. Regularity properties of stochastic processes 151
6. Stochastic integrals, Itô's formula 162
7. Black-Scholes model 188
8. An Ornstein-Uhlenbeck process in higher dimensions 197
9. A version of Fernique's theorem 221
10. Miscellaneous 223

Chapter 4. Stochastic differential equations 243
1. Solutions to stochastic differential equations 243
2. A martingale representation theorem 272
3. Girsanov transformation 277

Chapter 5. Some related results 295
1. Fourier transforms 295
2. Convergence of positive measures 324
3. A taste of ergodic theory 340
4. Projective limits of probability distributions 357
5. Uniform integrability 369
6. Stochastic processes 373
7. Markov processes 399
8. The Doob-Meyer decomposition via Komlos theorem 409

Subjects for further research and presentations 423
Bibliography 425
Index 433
CHAPTER 4

Stochastic differential equations

Some pertinent topics in the present chapter are a discussion of martingale theory and a few relevant results on stochastic differential equations in spaces of finite dimension. In particular, unique weak solutions to stochastic differential equations give rise to strong Markov processes whose one-dimensional distributions are governed by the corresponding second-order parabolic-type differential equation. Essentially speaking, this chapter is part of Chapter 1 in [146]. (The author is thankful to WSPC for the permission to include this text also in the present book.) In this chapter we discuss weak and strong solutions to stochastic differential equations. We also discuss a version of the Girsanov transformation.

1. Solutions to stochastic differential equations

Basically, the material in this section is taken from Ikeda and Watanabe [61]. In Subsection 1.1 we begin with a discussion of strong solutions to stochastic differential equations; after that, in Subsection 1.2, we present a martingale characterization of Brownian motion. We also pay some attention to (local) exponential martingales: see Subsection 1.3. In Subsection 1.4 the notion of weak solutions is explained. First, however, we give a definition of Brownian motion which starts at a random position.

4.1. Definition. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space with filtration $(\mathcal{F}_t)_{t \ge 0}$. A $d$-dimensional Brownian motion is an almost everywhere continuous adapted process $\{B(t) = (B_1(t), \dots, B_d(t)) : t \ge 0\}$ such that for $0 < t_1 < t_2 < \cdots < t_n < \infty$ and for any Borel subset $C$ of $\left(\mathbb{R}^d\right)^n$ the following equality holds:
\[
\mathbb{P}\left[ \left( B(t_1) - B(0), \dots, B(t_n) - B(0) \right) \in C \right]
= \int \cdots \int_C p_{0,d}(t_n - t_{n-1}, x_{n-1}, x_n) \cdots p_{0,d}(t_2 - t_1, x_1, x_2)\, p_{0,d}(t_1, 0, x_1)\, dx_1 \dots dx_n. \tag{4.1}
\]
This process is called a $d$-dimensional Brownian motion with initial distribution $\mu$ if for $0 < t_1 < t_2 < \cdots < t_n < \infty$ and every Borel subset $C$ of $\left(\mathbb{R}^d\right)^{n+1}$ the following equality holds:
\[
\mathbb{P}\left[ \left( B(0), B(t_1), \dots, B(t_n) \right) \in C \right]
= \int \cdots \int_C p_{0,d}(t_n - t_{n-1}, x_{n-1}, x_n) \cdots p_{0,d}(t_2 - t_1, x_1, x_2)\, p_{0,d}(t_1, x_0, x_1)\, d\mu(x_0)\, dx_1 \dots dx_n. \tag{4.2}
\]
For the definition of $p_{0,d}(t, x, y)$ see formula (4.26). By definition a filtration $(\mathcal{F}_t)_{t \ge 0}$ is an increasing family of $\sigma$-fields, i.e. $\mathcal{F}_s \subseteq \mathcal{F}_t$ whenever $0 \le s \le t$.
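Definition 4.1 says, in effect, that the increments B(t_k) − B(t_{k−1}) are independent centered Gaussian vectors with covariance (t_k − t_{k−1})I_d, since p_{0,d} is the Gaussian transition density. As a purely illustrative sketch (not part of the book), a d-dimensional Brownian path started at 0 can be simulated on a time grid:

```python
import math
import random

def brownian_path(d, times, seed=0):
    """Simulate a d-dimensional Brownian motion B(t) started at 0 on an
    increasing time grid, using independent Gaussian increments
    B(t_k) - B(t_{k-1}) ~ N(0, (t_k - t_{k-1}) I_d)."""
    rng = random.Random(seed)
    path = [[0.0] * d]          # path[0] is B(0) = 0
    t_prev = 0.0
    for t in times:
        dt = t - t_prev
        prev = path[-1]
        # each coordinate gets an independent N(0, dt) increment
        path.append([x + rng.gauss(0.0, math.sqrt(dt)) for x in prev])
        t_prev = t
    return path                  # path[k] approximates B(times[k-1]) for k >= 1

path = brownian_path(d=2, times=[0.5, 1.0, 1.5, 2.0])
```

The helper name and grid are assumptions of the sketch; the book's own construction proceeds through the finite-dimensional distributions (4.1)-(4.2).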
Recommended publications
  • Weak Convergence of Measures
Mathematical Surveys and Monographs, Volume 234
Weak Convergence of Measures
Vladimir I. Bogachev

EDITORIAL COMMITTEE: Walter Craig, Natasa Sesum, Robert Guralnick (Chair), Benjamin Sudakov, Constantin Teleman

2010 Mathematics Subject Classification. Primary 60B10, 28C15, 46G12, 60B05, 60B11, 60B12, 60B15, 60E05, 60F05, 54A20.

For additional information and updates on this book, visit www.ams.org/bookpages/surv-234

Library of Congress Cataloging-in-Publication Data
Names: Bogachev, V. I. (Vladimir Igorevich), 1961- author.
Title: Weak convergence of measures / Vladimir I. Bogachev.
Description: Providence, Rhode Island : American Mathematical Society, [2018] | Series: Mathematical surveys and monographs ; volume 234 | Includes bibliographical references and index.
Identifiers: LCCN 2018024621 | ISBN 9781470447380 (alk. paper)
Subjects: LCSH: Probabilities. | Measure theory. | Convergence.
Classification: LCC QA273.43 .B64 2018 | DDC 519.2/3-dc23
LC record available at https://lccn.loc.gov/2018024621
  • Radon Measures
MAT 533, SPRING 2021, Stony Brook University. REAL ANALYSIS II. Folland's Real Analysis, Chapter 7: Radon Measures. Christopher Bishop.

1. Chapter 7: Radon Measures
7.1 Positive linear functionals on Cc(X)
7.2 Regularity and approximation theorems
7.3 The dual of C0(X)
7.4* Products of Radon measures
7.5 Notes and References

Chapter 7.1: Positive linear functionals. X = locally compact Hausdorff space (LCH space). Cc(X) = continuous functions with compact support.

Defn: A linear functional I on Cc(X) is positive if I(f) ≥ 0 whenever f ≥ 0.
Example: I(f) = f(x₀) (point evaluation).
Example: I(f) = ∫ f dµ, where µ gives every compact set finite measure.
We will show these are the only examples.

Prop. 7.1: If I is a positive linear functional on Cc(X), then for each compact K ⊂ X there is a constant C_K such that |I(f)| ≤ C_K ‖f‖_u for all f ∈ Cc(X) such that supp(f) ⊂ K.

Proof. It suffices to consider real-valued I. Given a compact K, choose φ ∈ Cc(X, [0,1]) such that φ = 1 on K (Urysohn's lemma). Then if supp(f) ⊂ K, |f| ≤ ‖f‖_u φ, so ‖f‖_u φ − f ≥ 0 and ‖f‖_u φ + f ≥ 0, hence ‖f‖_u I(φ) − I(f) ≥ 0 and ‖f‖_u I(φ) + I(f) ≥ 0. Thus |I(f)| ≤ I(φ) ‖f‖_u.

Defn: Let µ be a Borel measure on X and E a Borel subset of X. µ is called outer regular on E if µ(E) = inf{µ(U) : U ⊃ E, U open}, and inner regular on E if µ(E) = sup{µ(K) : K ⊂ E, K compact}.
Defn: If µ is outer and inner regular on all Borel sets, then it is called regular.
  • The Gap Between Gromov-Vague and Gromov-Hausdorff-Vague Topology
THE GAP BETWEEN GROMOV-VAGUE AND GROMOV-HAUSDORFF-VAGUE TOPOLOGY
SIVA ATHREYA, WOLFGANG LÖHR, AND ANITA WINTER

Abstract. In [ALW15] an invariance principle is stated for a class of strong Markov processes on tree-like metric measure spaces. It is shown that if the underlying spaces converge Gromov-vaguely, then the processes converge in the sense of finite dimensional distributions. Further, if the underlying spaces converge Gromov-Hausdorff-vaguely, then the processes converge weakly in path space. In this paper we systematically introduce and study the Gromov-vague and the Gromov-Hausdorff-vague topology on the space of equivalence classes of metric boundedly finite measure spaces. The latter topology is closely related to the Gromov-Hausdorff-Prohorov metric which is defined on different equivalence classes of metric measure spaces. We explain the necessity of these two topologies via several examples, and close the gap between them. That is, we show that convergence in Gromov-vague topology implies convergence in Gromov-Hausdorff-vague topology if and only if the so-called lower mass-bound property is satisfied. Furthermore, we prove and disprove Polishness of several spaces of metric measure spaces in the topologies mentioned above (summarized in Figure 1). As an application, we consider the Galton-Watson tree with critical offspring distribution of finite variance conditioned to not become extinct, and construct the so-called Kallenberg-Kesten tree as the weak limit in Gromov-Hausdorff-vague topology when the edge lengths are scaled down to zero.

Contents: 1. Introduction; 2. The Gromov-vague topology; 3. …
  • arXiv:2006.09268v2
Metrizing Weak Convergence with Maximum Mean Discrepancies
Carl-Johann Simon-Gabriel, [email protected], Institute for Machine Learning, ETH Zürich, Switzerland
Alessandro Barp, [email protected], Department of Engineering, University of Cambridge, Alan Turing Institute, United Kingdom
Bernhard Schölkopf, [email protected], Empirical Inference Department, MPI for Intelligent Systems, Tübingen, Germany
Lester Mackey, [email protected], Microsoft Research, Cambridge, MA, USA

arXiv:2006.09268v3 [cs.LG] 3 Sep 2021

Abstract. This paper characterizes the maximum mean discrepancies (MMD) that metrize the weak convergence of probability measures for a wide class of kernels. More precisely, we prove that, on a locally compact, non-compact, Hausdorff space, the MMD of a bounded continuous Borel measurable kernel k, whose RKHS functions vanish at infinity (i.e., H_k ⊂ C₀), metrizes the weak convergence of probability measures if and only if k is continuous and integrally strictly positive definite (∫s.p.d.) over all signed, finite, regular Borel measures. We also correct a prior result of Simon-Gabriel and Schölkopf (JMLR 2018, Thm. 12) by showing that there exist both bounded continuous ∫s.p.d. kernels that do not metrize weak convergence and bounded continuous non-∫s.p.d. kernels that do metrize it.

Keywords: Maximum Mean Discrepancy, Metrization of weak convergence, Kernel mean embeddings, Characteristic kernels, Integrally strictly positive definite kernels

1. Introduction. Although the mathematical and statistical literature has studied kernel mean embeddings (KMEs) and maximum mean discrepancies (MMDs) at least since the seventies (Guilbart, 1978), the machine learning community re-discovered and applied them only since the late 2000s (Smola et al.
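The MMD discussed in this abstract can be estimated directly from samples. A minimal sketch with a Gaussian kernel (an illustration under assumed names, not the paper's code): the squared MMD is the mean within-sample kernel value of each sample minus twice the cross-sample mean.

```python
import math
import random

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd_squared(xs, ys, k=gaussian_kernel):
    """Biased (V-statistic) estimate of MMD^2(P, Q) from xs ~ P, ys ~ Q:
    mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)."""
    kxx = sum(k(a, b) for a in xs for b in xs) / (len(xs) ** 2)
    kyy = sum(k(a, b) for a in ys for b in ys) / (len(ys) ** 2)
    kxy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

rng = random.Random(0)
same = [rng.gauss(0, 1) for _ in range(200)]
shifted = [x + 3.0 for x in same]        # same sample, translated by 3
d_small = mmd_squared(same, same)        # identical samples: estimate is 0
d_large = mmd_squared(same, shifted)     # well-separated distributions
```

Which kernels make this quantity a metric for weak convergence is exactly the question the paper answers.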
  • Topics in Ergodic Theory: Notes 1
Topics in ergodic theory: Notes 1
Tim Austin
[email protected]
http://www.math.brown.edu/~timaustin

1 Introduction

In the first instance, a 'dynamical system' in mathematics is a mathematical object intended to capture the intuition of a 'system' in the real world that passes through some set of different possible 'states' as time passes. Although the origins of the subject lie in physical systems (such as the motion of the planets through different mutual positions, historically the first example to be considered), the mathematical study of these concepts encompasses a much broader variety of models arising in the applied sciences, ranging from the fluctuations in the values of different stocks in economics to the 'dynamics' of population sizes in ecology.

The mathematical objects used to describe such a situation consist of some set X of possible 'states' and a function T : X → X specifying the (deterministic¹) movement of the system from one state to the next. In order to make a useful model, one must endow the set X with at least a little additional structure and then assume that T respects that structure. Four popular choices, roughly in order of increasing specificity, are the following:

• X a measurable space and T measurable — this is the setting of measurable dynamics;
• X a topological space and T continuous — topological dynamics;
• X a smooth manifold and T a smooth map — smooth dynamics;
• X a compact subset of C and T analytic — complex dynamics.

¹There are also many models in probability theory of 'stochastic dynamics', such as Markov processes for which T is replaced by a probability kernel, which is something akin to a 'randomized function' – these lie outside the scope of this course.
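The measurable setting above can be probed numerically through Birkhoff time averages. A toy sketch (my illustration, not an example from the notes) uses the irrational rotation T(x) = x + α mod 1 on the circle, which is uniquely ergodic, so the time average of an observable along any orbit approaches its space average:

```python
import math

def orbit_average(f, T, x0, n):
    """Birkhoff average (1/n) * sum_{k<n} f(T^k x0) of an observable f
    along the orbit of x0 under the map T."""
    total, x = 0.0, x0
    for _ in range(n):
        total += f(x)
        x = T(x)
    return total / n

alpha = math.sqrt(2) - 1               # irrational rotation number
T = lambda x: (x + alpha) % 1.0        # rotation on the circle X = [0, 1)
avg = orbit_average(lambda x: x, T, x0=0.1, n=100_000)
space_avg = 0.5                        # integral of x dx over [0, 1)
```

For this system the time average `avg` lands close to `space_avg`, a numerical shadow of the ergodic theorem.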
  • UNIVERSITY of CALIFORNIA, SAN DIEGO a Sufficient Condition For
    UNIVERSITY OF CALIFORNIA, SAN DIEGO A sufficient condition for stochastic stability of an Internet congestion control model in terms of fluid model stability A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Mathematics by Nam H. Lee Committee in charge: Professor Ruth Williams, Chair Professor Patrick Fitzsimmons Professor Tara Javidi Professor Jason Schweinsberg Professor Paul Siegel 2008 Copyright Nam H. Lee, 2008 All rights reserved. The dissertation of Nam H. Lee is approved, and it is acceptable in quality and form for publication on microfilm: Chair University of California, San Diego 2008 iii DEDICATION I dedicate this thesis to my beloved mother. iv TABLE OF CONTENTS Signature Page . iii Dedication . iv Table of Contents . v Acknowledgements . vii Vita and Publications . viii Abstract . viii Chapter 1 Introduction . 1 1.1 Overview . 1 1.2 Notation and terminology . 3 1.3 Borel right process and Harris recurrence . 6 Chapter 2 Internet congestion control model . 9 2.1 Network structure, bandwidth sharing policy and model description . 9 2.2 Assumptions 2.2.1 and 2.2.2 . 15 2.3 Uniform framework for random variables . 18 2.4 Descriptive processes . 18 2.4.1 Finite dimensional state descriptors . 18 2.4.2 Infinite dimensional state descriptors . 20 2.5 Age process . 22 2.6 Fluid model solutions . 23 2.7 Main theorem . 25 Chapter 3 Sequence of states and fluid limits . 26 3.1 Sequence of scaled descriptors . 27 3.2 Weak limit points of scaled residual work descriptors . 29 3.2.1 Convergence of scaled exogenous flow arrival descriptors .
  • Radon Measures
Chapter 13: Radon Measures

Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm

(13.1) ‖f‖ = sup_{x∈X} |f(x)|.

We want to identify the dual of C(X) with the space of (finite) signed Borel measures on X, also known as the space of Radon measures on X. Before identifying the dual of C(X), we will first identify the set of positive linear functionals on C(X). By definition, a linear functional

(13.2) α : C(X) → ℝ

is positive provided

(13.3) f ∈ C(X), f ≥ 0 ⟹ α(f) ≥ 0.

Clearly, if µ is a (positive) finite Borel measure on X, then

(13.4) α(f) = ∫ f dµ

is a positive linear functional. We will establish the converse, that every positive linear functional on C(X) is of the form (13.4). It is easy to see that every positive linear functional α on C(X) is bounded. In fact, applying α to f(x) − a and to b − f(x), we see that, when a and b are real numbers,

(13.5) f ∈ C(X), a ≤ f ≤ b ⟹ a·α(1) ≤ α(f) ≤ b·α(1),

so

(13.6) |α(f)| ≤ A‖f‖, A = α(1).

To begin the construction of µ, we construct a set function µ₀ on the collection O of open subsets of X by

(13.7) µ₀(U) = sup{α(f) : f ≺ U},

where we say

(13.8) f ≺ U ⟺ f ∈ C(X), 0 ≤ f ≤ 1, and supp f ⊂ U.

Here, supp f is the closure of {x : f(x) ≠ 0}. Clearly µ₀ is monotone.
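The functional (13.4) and the bound (13.6) can be checked numerically. A rough sketch (an illustration only, with µ taken to be Lebesgue measure on [0, 1], approximated by a midpoint Riemann sum):

```python
def riemann_functional(f, n=10_000):
    """alpha(f) ~ integral of f over [0, 1] with respect to Lebesgue
    measure, approximated by a midpoint Riemann sum. This is a positive
    linear functional on C([0, 1]) in the sense of (13.2)-(13.4)."""
    return sum(f((i + 0.5) / n) for i in range(n)) / n

f = lambda x: x * x                   # a nonnegative continuous function
alpha_f = riemann_functional(f)       # close to integral of x^2 = 1/3
alpha_1 = riemann_functional(lambda x: 1.0)   # A = alpha(1) in (13.6)
# positivity: f >= 0 gives alpha(f) >= 0;
# boundedness: |alpha(f)| <= alpha(1) * sup|f| = alpha_1 * 1.0 here
```

The helper name and discretization are assumptions of the sketch; the chapter's construction is of course measure-theoretic, not numerical.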
  • Lectures on Analysis John Roe 2005–2009
Lectures on Analysis
John Roe
2005–2009

Lecture 1: About Functional Analysis

The key objects of study in functional analysis are various kinds of topological vector spaces. The simplest of these are the Banach spaces. Let E be a vector space over the field of real or complex numbers. (In this course we will use 𝕜 to denote the ground field if a result applies equally to ℝ or ℂ. There is a whole subject of nonarchimedean functional analysis over ground fields like the p-adic numbers, but we won't get involved with that. See the book by Peter Schneider, 2001.)

Definition 1.1. A seminorm on a vector space E is a function p : E → ℝ₊ that satisfies p(λx) = |λ| p(x) for all λ ∈ 𝕜, and p(x + y) ≤ p(x) + p(y) (the triangle inequality). It is a norm if in addition p(x) = 0 implies x = 0. A norm, usually written ‖x‖, defines a metric in the standard way.

Definition 1.2. A Banach space is a normed vector space which is complete (every Cauchy sequence converges) in the metric arising from the norm.

Standard examples of Banach spaces include function spaces such as C(X), the continuous functions on a compact topological space X, and L¹(Y, µ), the integrable functions on a measure space (Y, µ). However one is often led to consider spaces of functions that do not admit any natural Banach space topology, such as C^∞(S¹), the smooth functions on the circle. Moreover, natural notions within the realm of Banach spaces themselves are not described directly by the norm (e.g.
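The seminorm axioms of Definition 1.1 are easy to verify numerically for the sup norm of (a finite stand-in for) C(X). A toy sketch, purely illustrative:

```python
def sup_norm(v):
    """The sup (uniform) norm of a vector of function values,
    mirroring ||f|| = sup_x |f(x)| on C(X) for a finite X."""
    return max(abs(x) for x in v)

x = [1.0, -3.0, 2.0]
y = [0.5, 4.0, -2.0]
# triangle inequality: ||x + y|| <= ||x|| + ||y||
triangle_ok = sup_norm([a + b for a, b in zip(x, y)]) <= sup_norm(x) + sup_norm(y)
# absolute homogeneity: ||lambda * x|| = |lambda| * ||x||
homogeneous = sup_norm([-2.0 * a for a in x]) == 2.0 * sup_norm(x)
```

On infinite X, the same formula defines the norm making C(X) a Banach space; the finite vectors here are only a stand-in.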
  • (Dis)Continuity of the Fourier Transform of Measures 3
ON THE (DIS)CONTINUITY OF THE FOURIER TRANSFORM OF MEASURES
TIMO SPINDELER AND NICOLAE STRUNGARU

Abstract. In this paper, we will study the continuity of the Fourier transform of measures with respect to the vague topology. We show that the Fourier transform is vaguely discontinuous on ℝ, but becomes continuous when restricting to a class of Fourier transformable measures such that either the measures, or their Fourier transforms, are equi-translation bounded. We discuss continuity of the Fourier transform in the product and norm topology. We show that vague convergence of positive definite measures implies the equi-translation boundedness of the Fourier transforms, which explains the continuity of the Fourier transform on the cone of positive definite measures. In the appendix, we characterize vague precompactness of a set of measures in arbitrary LCAG, and the necessity of the second countability property of a group for defining the autocorrelation measure.

1. Introduction. The Fourier transform of measures on locally compact Abelian groups (LCAG) and its continuity play a central role in the theory of mathematical diffraction. When restricting to the cones of positive and positive definite measures, the Fourier transform is continuous with respect to the vague topology [12]. In fact, as shown in [21], only positive definiteness is important: the Fourier transform is vaguely continuous from the cone of positive definite measures on G to the cone of positive measures on the dual group Ĝ. Introduced by Hof [16, 17], mathematical diffraction is defined as follows. Given a translation bounded measure µ which models a solid, the autocorrelation measure γ is defined as the vague limit of the 2-point correlations γ_n of finite sample approximations µ_n of µ.
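For a finite atomic measure, the kind of object that arises as a finite sample approximation in diffraction theory, the Fourier transform is just a finite exponential sum, µ̂(t) = Σ_j w_j e^{−i t x_j}. A small illustrative sketch (not from the paper):

```python
import cmath

def fourier_transform(atoms, t):
    """Fourier transform at frequency t of the finite atomic measure
    mu = sum_j w_j * delta_{x_j}:  mu_hat(t) = sum_j w_j * exp(-i t x_j).
    atoms is a list of (position, weight) pairs."""
    return sum(w * cmath.exp(-1j * t * x) for x, w in atoms)

dirac0 = [(0.0, 1.0)]                     # the Dirac measure at 0
comb = [(float(n), 1.0) for n in range(5)]  # truncated Dirac comb
v1 = fourier_transform(dirac0, 2.7)       # delta_0 has mu_hat identically 1
v2 = fourier_transform(comb, 2 * cmath.pi)  # every atom contributes 1 at t = 2*pi
```

The constructive sums here are only pointwise evaluations; the paper's subject is how such transforms behave under vague limits of the measures.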
  • Mathematical Background of the P3r2 Test
Mathematical background of the p3r2 test
Niklas Hohmann (FAU Erlangen-Nuremberg, email: [email protected])
December 14, 2017

1 Motivation

There are two pivotal principles for the construction of the test. The first one is:

• probability density functions (pdfs) are needed to construct certain statistical methods and to prove their optimality.

In the case of the p3r2 test, the statistical method used is a likelihood-ratio test. The notion of optimality for tests used here is a uniformly most powerful (UMP) test. The second principle is that

• a Poisson point process (PPP) can be modeled as a random measure.

At first sight, this complicates things. For example, the observation times

(1) y = (y_1, y_2, …, y_N)

of a PPP are not taken as elements of ℝ^N, but are interpreted as a measure µ_y on the real numbers, which is the sum of Dirac measures:

(2) µ_y = Σ_{j=1}^{N} δ_{y_j}.

But this approach has advantages that will become clear soon. The general goal in the next few pages is to establish a pdf for PPPs in order to construct a UMP test for PPPs. Ignoring more technical details, a random measure Xi is a random element with values in M(ℝ), the set of all measures on ℝ. Since a PPP can be modeled as a random measure, it can be imagined as a procedure that randomly chooses a measure similar to the one in equation (2), which can again be interpreted as observation times. Since Xi is a random element with values in M(ℝ), the distribution Pi of Xi is a probability measure on M(ℝ).
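The modeling step in equations (1) and (2) can be mimicked in code: sample a homogeneous PPP and treat the resulting observation times as the atoms of the measure µ_y. A sketch under the assumption of a constant rate (illustrative only; the names are mine, not the author's):

```python
import random

def sample_ppp(rate, t_max, rng):
    """Sample a homogeneous Poisson point process on [0, t_max] with the
    given rate, via i.i.d. exponential inter-arrival times. The returned
    list of atoms y_j represents the measure mu_y = sum_j delta_{y_j}."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > t_max:
            return times
        times.append(t)

def measure_of(atoms, a, b):
    """mu_y([a, b]): the number of atoms falling in the interval [a, b]."""
    return sum(1 for y in atoms if a <= y <= b)

rng = random.Random(42)
atoms = sample_ppp(rate=2.0, t_max=100.0, rng=rng)
count = measure_of(atoms, 0.0, 100.0)   # Poisson with mean rate * t_max = 200
```

Each call to `sample_ppp` draws one element of M(ℝ) at random, which is exactly the "randomly choosing a measure" picture in the text.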
  • Topological Measure Theory, with Applications to Probability
TOPOLOGICAL MEASURE THEORY, WITH APPLICATIONS TO PROBABILITY
by David Bruce Pollard
A thesis submitted for the degree of Doctor of Philosophy of the Australian National University, June 1976

To Camilla and Flemming Topsøe

CONTENTS
ABSTRACT (iv)
PREFACE (including Statement of Originality) (v)
SYMBOLS AND TERMS (vii)
CHAPTER ONE: Introduction
 §1. Topology and measure 1
CHAPTER TWO: Integral representation theorems
 §2. Construction of measures 6
 §3. Assumptions required for integral representations 12
 §4. The largest measure dominated by a linear functional 16
 §5. Finitely additive integral representations 20
 §6. σ- and τ-additive integral representations 25
 §7. The Alexandroff-Le Cam theorem 34
 §8. Further remarks 38
CHAPTER THREE: Weak convergence and compactness
 §9. Weak convergence theory: introduction 40
 §10. Weak convergence and compactness for more general spaces 42
 §11. Historical perspective 53
CHAPTER FOUR: τ-additive measures
 §12. The need for τ-additivity 57
 §13. Uniform spaces 61
 §14. Operations preserving τ-additivity 64
 §15. Weak convergence of τ-measures 68
 §16. Induced weak convergence 79
 §17. Infinite τ-measures 86
CHAPTER
  • Kernel Distribution Embeddings: Universal Kernels, Characteristic Kernels and Kernel Metrics on Distributions
Journal of Machine Learning Research 19 (2018) 1-29. Submitted 3/16; Revised 5/18; Published 9/18.

Kernel Distribution Embeddings: Universal Kernels, Characteristic Kernels and Kernel Metrics on Distributions
Carl-Johann Simon-Gabriel ([email protected])
Bernhard Schölkopf ([email protected])
MPI for Intelligent Systems, Spemannstrasse 41, 72076 Tübingen, Germany

Editor: Ingo Steinwart

Abstract. Kernel mean embeddings have become a popular tool in machine learning. They map probability measures to functions in a reproducing kernel Hilbert space. The distance between two mapped measures defines a semi-distance over the probability measures known as the maximum mean discrepancy (MMD). Its properties depend on the underlying kernel and have been linked to three fundamental concepts of the kernel literature: universal, characteristic and strictly positive definite kernels. The contributions of this paper are three-fold. First, by slightly extending the usual definitions of universal, characteristic and strictly positive definite kernels, we show that these three concepts are essentially equivalent. Second, we give the first complete characterization of those kernels whose associated MMD-distance metrizes the weak convergence of probability measures. Third, we show that kernel mean embeddings can be extended from probability measures to generalized measures called Schwartz distributions and analyze a few properties of these distribution embeddings.

Keywords: kernel mean embedding, universal kernel, characteristic kernel, Schwartz distributions, kernel metrics on distributions, metrisation of the weak topology

1. Introduction. During the past decades, kernel methods have risen to a major tool across various areas of machine learning. They were originally introduced via the "kernel trick" to generalize linear regression and classification tasks by effectively transforming the optimization over a set of linear functions into an optimization over a so-called reproducing kernel Hilbert space (RKHS) H_k, which is entirely defined by the kernel k.
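An empirical kernel mean embedding is itself a function in the RKHS H_k: the sample x_1, …, x_n maps to µ̂(t) = (1/n) Σ_i k(x_i, t). A hedged sketch with a Gaussian kernel (illustrative only, not the paper's code):

```python
import math
import random

def mean_embedding(sample, sigma=1.0):
    """Empirical kernel mean embedding of a sample under a Gaussian
    kernel: the RKHS function mu_hat(t) = (1/n) sum_i k(x_i, t)."""
    n = len(sample)
    def mu_hat(t):
        return sum(math.exp(-((x - t) ** 2) / (2 * sigma ** 2))
                   for x in sample) / n
    return mu_hat

rng = random.Random(1)
sample = [rng.gauss(0.0, 1.0) for _ in range(500)]
mu_hat = mean_embedding(sample)
at_zero = mu_hat(0.0)     # large: the sample's mass sits near 0
far_away = mu_hat(10.0)   # tiny: essentially no mass near t = 10
```

The RKHS distance between two such embedded functions is the MMD of the abstract; whether that distance separates, or even metrizes weak convergence of, the underlying measures depends on the kernel, which is the paper's subject.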