Some Functional (Hölderian) Limit Theorems and Their Applications


Some functional (Hölderian) limit theorems and their applications (I)
Alfredas Račkauskas, Vilnius University
Outils Statistiques et Probabilistes pour la Finance, Université de Rouen, June 1-5, Rouen 2015

Aims of the lectures

1. To introduce the weak invariance principle in a Hölderian framework (Lamperti-type invariance principle) for a large family of random structures, including independent identically distributed random variables, linear processes, random fields and others.

2. To demonstrate possible applications of the weak invariance principle. These include the detection of a changed segment in a sample, e.g. an epidemic change of a mean, of a variance or of some other parameter.

Outline of the lecture: Lamperti's invariance principle

1. Introduction
2. Hölder spaces
3. Weak convergence and tightness
4. Lamperti's invariance principle
5. Adaptive FCLT
6. FCLT for triangular arrays

Motto of the lecture

Theory without practice is empty; practice without theory is blind. (adapted from Immanuel Kant)

Introduction: first example

Suppose we have a population (e.g., monthly earnings, pass-rates, etc.) whose mean $\mu$ is unknown. In order to learn something about $\mu$, one takes a large independent sample $X_1,\dots,X_n$ from the population under consideration and constructs the sample average
$$\bar X_n := \frac{1}{n}(X_1 + \cdots + X_n).$$
By the strong law of large numbers, $\bar X_n \approx \mu$. In order to obtain a more quantitative estimate one can use the central limit theorem (CLT), which is the most rudimentary example from which one could start the story named "invariance principle".

Let $X_1, X_2, \dots$ be independent identically distributed (i.i.d.) random variables with $EX_1 = \mu$ and $\operatorname{var}(X_1) = \sigma^2$. Denote their partial sums by
$$S_0 = 0, \qquad S_k = \sum_{j=1}^{k} X_j, \quad k \ge 1.$$

CLT [classical]. If $\sigma^2 \in (0, \infty)$, then
$$\frac{1}{\sqrt{n}\,\sigma}(S_n - n\mu) \xrightarrow[n\to\infty]{D} N(0, 1),$$
where $\xrightarrow[n\to\infty]{D}$ denotes convergence in distribution and $N(m, \sigma^2)$ denotes the normal distribution with mean $m$ and variance $\sigma^2$.
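A quick numerical illustration of this CLT (my own sketch, not part of the slides; it assumes Python with NumPy and SciPy): the empirical distribution of $\sqrt{n}(\bar X_n - \mu)/\sigma$ over many repeated samples is compared with the standard normal via the Kolmogorov-Smirnov distance. The choice of Exponential(1) data, of $n$ and of the number of repetitions is arbitrary.

```python
# Sketch only (not from the lecture): empirical check of the classical CLT.
# X_1, ..., X_n are i.i.d. Exponential(1), so mu = 1 and sigma = 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 500, 20_000
mu, sigma = 1.0, 1.0

X = rng.exponential(scale=1.0, size=(reps, n))   # reps independent samples of size n
Z = np.sqrt(n) * (X.mean(axis=1) - mu) / sigma   # (S_n - n*mu) / (sqrt(n)*sigma)

# Kolmogorov-Smirnov distance between the empirical law of Z and N(0, 1);
# it shrinks as n grows, reflecting the convergence in distribution.
ks = stats.kstest(Z, "norm").statistic
print(f"KS distance to N(0,1): {ks:.3f}")
```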
By this CLT we have, for $a < b \in \mathbb{R}$,
$$\lim_{n\to\infty} P\big(\sqrt{n}(\bar X_n - \mu)/\sigma \in (a, b)\big) = \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\,dx.$$
One can then use this to derive approximate confidence bounds for $\mu$. Since $P(N(0,1) \in [-1.96, 1.96]) \approx 0.95$, we deduce
$$P\Big(\mu \in \big(\bar X_n - \tfrac{1.96\,\sigma}{\sqrt{n}},\ \bar X_n + \tfrac{1.96\,\sigma}{\sqrt{n}}\big)\Big) \approx 0.95,$$
provided the sample size $n$ is "large enough".

If $\sigma^2$ is unknown, we can use its estimator
$$\hat\sigma^2 = \frac{1}{n}\sum_{k=1}^{n}(X_k - \bar X_n)^2$$
and explore the so-called Student t-statistic
$$t_n = \frac{\sqrt{n}(\bar X_n - \mu)}{\hat\sigma}.$$
It was precisely the asymptotic analysis of $t_n$ that initiated the so-called self-normalized CLT.

Recall that $X_1$ belongs to the domain of attraction of a normal distribution (denoted $X_1 \in \mathrm{DAN}$) if there exists a norming sequence $b_n \uparrow \infty$ such that
$$b_n^{-1} S_n \xrightarrow[n\to\infty]{D} N(0, 1).$$

Theorem (Self-normalized CLT [Giné, Götze, Mason (1997)]). Let $X_1,\dots,X_n$ be i.i.d. random variables. Then the convergence
$$\frac{S_n}{\sqrt{X_1^2 + \cdots + X_n^2}} \xrightarrow[n\to\infty]{D} N(0, 1)$$
holds if and only if $X_1 \in \mathrm{DAN}$ and $EX_1 = 0$.

Second example

Now suppose you are drawing samples as time passes and wish to know whether the mean of the underlying population (e.g., annual temperature) has changed over time. More precisely, we have a sample $X_1, X_2, \dots, X_n$ and we wish to know whether there is a $k^*$, $1 < k^* < n$, such that
$$EX_1 = \cdots = EX_{k^*} = \mu, \qquad EX_{k^*+1} = \cdots = EX_n = \mu'$$
and $\mu \neq \mu'$. (Figure: annual temperature.)

The simplest way to construct a required test is to take two parts of the given sample, $X_1,\dots,X_k$ and $X_{k+1},\dots,X_n$, and to compare their sample averages
$$\frac{1}{k}(X_1 + \cdots + X_k) \quad \text{and} \quad \frac{1}{n-k}(X_{k+1} + \cdots + X_n).$$
Calculating their difference gives the quantity
$$T_n(k) := \frac{n}{k(n-k)}\Big[\sum_{j=1}^{k} X_j - k\bar X_n\Big],$$
which should be large when the equality of the means before and after the moment $k^* = k$ fails. As the point of change $k^*$ is not known, one has to look at $T_n(k)$ for all possible $k = 1,\dots,n$.
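The scan over $k$ can be coded in a few lines. The following sketch is my own illustration of the formula above, not code from the lecture; the simulated data and the change point $k^* = 120$ are arbitrary choices. It computes $T_n(k)$ for $k = 1,\dots,n-1$ and takes the maximizer of $|T_n(k)|$ as an estimate of the change point; the absolute value is used because the direction of the mean shift is unknown.

```python
# Sketch: T_n(k) = n/(k(n-k)) * (S_k - k * Xbar_n) for k = 1, ..., n-1,
# and the location of its largest absolute value. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n, k_star = 200, 120
X = np.concatenate([rng.normal(0.0, 1.0, k_star),       # mean mu before the change
                    rng.normal(1.0, 1.0, n - k_star)])   # mean mu' after the change

S = np.cumsum(X)                                   # partial sums S_1, ..., S_n
k = np.arange(1, n)                                # k = 1, ..., n-1
T = n / (k * (n - k)) * (S[:-1] - k * S[-1] / n)   # S[-1]/n is the sample mean Xbar_n

k_hat = k[np.argmax(np.abs(T))]                    # scan over all admissible k
print("estimated change point:", k_hat)
```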
Thus, it turns out that one needs to investigate the quantity
$$T_n := \max_{1 \le k \le n}(S_k - k\bar X_n).$$
This is the CUSUM test statistic, one of the most frequently used methods for detecting change points. Actually, it might be easier to start with
$$M_n := \max_{1 \le j \le n}(S_j - j\mu), \quad n \ge 1,$$
and to ask whether there is a limit distribution for $M_n$ as $n \to \infty$ under the assumption that $X_1,\dots,X_n$ all have the same distribution. It turns out that the answer is positive, and it involves an invariance principle.

Erdős [1] and Kac [2]: their contribution

The limit behavior of $M_n = \max_{1\le j\le n}(S_j - j\mu)$ was precisely the question that Erdős and Kac (1946) were interested in. They found that
$$\lim_{n\to\infty} P\Big(\max_{1\le j\le n}(S_j - j\mu) \le \sqrt{n}\,x\Big) = \sqrt{\frac{2}{\pi}} \int_0^x e^{-y^2/2}\,dy$$
for all $x \ge 0$, provided $X_1, X_2, \dots$ are i.i.d. with mean $\mu$ and $\operatorname{var}(X_1) = 1$.

Erdős and Kac described their method of proof as follows: "The proofs of all these theorems follow the same pattern. It is first proved that the limiting distribution exists and is independent of the distribution of the $X_i$'s; then the distribution of the $X_i$'s is chosen conveniently so that the limiting distribution can be calculated explicitly." Perhaps the story of the "invariance principle" really started with their providence.

[1] Paul Erdős (26 March 1913 - 20 September 1996) was a Jewish-Hungarian mathematician.
[2] Mark Kac (3 August 1914 - 26 October 1984) was a Polish-American mathematician.
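The Erdős-Kac limit is easy to check by simulation. The sketch below is an illustration only, not material from the slides; the Rademacher steps, $n$ and the number of repetitions are arbitrary choices. It estimates $P(M_n \le \sqrt{n}\,x)$ for i.i.d. $\pm 1$ steps (mean $0$, variance $1$) and compares it with the half-normal limit $\sqrt{2/\pi}\int_0^x e^{-y^2/2}\,dy = 2\Phi(x) - 1$.

```python
# Sketch: Monte Carlo check of the Erdos-Kac limit for M_n = max_j (S_j - j*mu).
# Here mu = 0 and var(X_1) = 1 (Rademacher steps), so M_n = max_j S_j.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 2_000, 5_000
X = rng.choice([-1.0, 1.0], size=(reps, n))        # i.i.d. +/-1 steps

M = np.max(np.cumsum(X, axis=1), axis=1)           # M_n for each repetition
x = 1.0
empirical = np.mean(M <= np.sqrt(n) * x)
limit = 2 * stats.norm.cdf(x) - 1                  # = sqrt(2/pi) * int_0^x exp(-y^2/2) dy
print(f"P(M_n <= sqrt(n)*{x}): empirical {empirical:.3f} vs limit {limit:.3f}")
```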