Notes from Limit Theorems 2 Mihai Nica


Notes. These are my notes from the class Limit Theorems 2, taught by Professor McKean in Spring 2012. I have tried to carefully go over the bigger theorems from the course and fill in all the details explicitly. There is also a lot of information folded in from other sources.

• The section on Martingales is supplemented with some notes from "A First Look at Rigorous Probability Theory" by Jeffrey S. Rosenthal, which has a really nice introduction to Martingales.
• The section on the law of the iterated logarithm is supplemented with some inequalities which I looked up on the internet, mostly Wikipedia and PlanetMath.
• In the section on the Ergodic theorem, I use a notation for continued fractions that I found on Wikipedia and like. In my pen-and-paper notes there is also a little section about ergodic theory for geodesics on surfaces, which is really cute; however, I couldn't figure out a good way to draw the pictures, so it hasn't been typed up yet.
• The section on Brownian Motion is supplemented by the book "Brownian Motion and Martingales in Analysis" by Richard Durrett, which is really wonderful. Some of the slick results are taken straight from there.
• I also include an appendix with results that I found myself reviewing as I went through this material.

Contents

Chapter 1. Martingales
  1. Definitions and Examples
  2. Stopping Times
  3. Martingale Convergence Theorem
  4. Applications
Chapter 2. The Law of the Iterated Logarithm
  1. First Half of the Law of the Iterated Logarithm
  2. Second Half of the Law of the Iterated Logarithm
Chapter 3. Ergodic Theorem
  1. Motivation
  2. Birkhoff's Theorem
  3. Continued Fractions
Chapter 4. Brownian Motion
  1. Motivation
  2. Lévy's Construction
  3. Construction from Durrett's Book
  4. Some Properties
Chapter 5. Appendix
  1. Conditional Random Variables
  2. Extension Theorems

CHAPTER 1

Martingales
1. Definitions and Examples

This section on Martingales makes heavy use of conditional random variables. I give a quick review of this topic from Limit Theorems 1 in the appendix.

Definition 1.1. A sequence of random variables $X_0, X_1, \ldots$ is called a martingale if $E(|X_n|) < \infty$ for all $n$ and, with probability 1,
$$E(X_{n+1} \mid X_0, X_1, \ldots, X_n) = X_n.$$
Intuitively, this says that the average value of $X_{n+1}$ is the same as that of $X_n$, even if we are given the values of $X_0$ through $X_n$. Note that conditioning on $X_0, \ldots, X_n$ is just different notation for conditioning on $\sigma(X_0, \ldots, X_n)$, the sigma algebra generated by preimages of Borel sets through $X_0, \ldots, X_n$. One can make more general martingales by replacing $\sigma(X_0, \ldots, X_n)$ with an arbitrary increasing chain of sigma algebras $\mathcal{F}_n$; the results here carry over to that setting too.

Example 1.2. Sometimes martingales are called "fair games". The analogy is that the random variable $X_n$ represents the bankroll of a gambler at time $n$. The game is fair because at any point in time the expected equity of the gambler is constant.

Definition 1.3. A submartingale is when $E(X_{n+1} \mid X_0, X_1, \ldots, X_n) \geq X_n$ (i.e. the capital is increasing), and a supermartingale is when $E(X_{n+1} \mid X_0, X_1, \ldots, X_n) \leq X_n$ (i.e. the capital is decreasing). Most of the theorems for martingales work for submartingales after changing the inequality in the right place. To avoid confusion between sub-, super-, and ordinary martingales, we will sometimes call a martingale a "fair martingale".

Example 1.4. The symmetric random walk, $X_n = Z_0 + Z_1 + \ldots + Z_n$ with each $Z_n = \pm 1$ with probability $\frac{1}{2}$, is a martingale. In terms of the fair game, this is gambling on the outcome of a fair coin.

Remark. Using the properties of conditional expectation, we see that
$$E(X_{n+2} \mid X_0, X_1, \ldots, X_n) = E\big( E(X_{n+2} \mid X_0, X_1, \ldots, X_{n+1}) \mid X_0, \ldots, X_n \big) = E(X_{n+1} \mid X_0, \ldots, X_n) = X_n.$$
With a simple argument by induction, we get that in general, for $m \geq n$,
$$E(X_m \mid X_0, X_1, \ldots, X_n) = X_n.$$
In particular, $E(X_n) = E(X_0)$ for every $n$.
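The identity $E(X_n) = E(X_0)$ is easy to check numerically. The following sketch (my own illustration, not from the notes) simulates the symmetric random walk of Example 1.4 and estimates $E(X_n)$ by Monte Carlo; the path count, step count, and seed are arbitrary choices.

```python
import random

def walk_endpoint(n_steps, rng):
    """Endpoint X_n of the symmetric random walk X_n = Z_1 + ... + Z_n, Z_i = +/-1."""
    return sum(rng.choice((-1, 1)) for _ in range(n_steps))

# Monte Carlo estimate of E(X_n) for the walk of Example 1.4.
rng = random.Random(42)
n_paths, n_steps = 20_000, 50
mean_end = sum(walk_endpoint(n_steps, rng) for _ in range(n_paths)) / n_paths
print(mean_end)  # close to E(X_0) = 0, up to a Monte Carlo error of about 0.05
```

The standard error here is roughly $\sqrt{n_{\text{steps}}/n_{\text{paths}}} = 0.05$, so the estimate should land within a few hundredths of 0.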
If $\tau$ is a random "time" (a non-negative integer valued random variable) that is independent of the $X_n$'s, then $E(X_\tau)$ is a weighted average of the $E(X_n)$'s, so we still have $E(X_\tau) = E(X_0)$. What if $\tau$ is dependent on the $X_n$'s? In general we cannot have equality: in the example of the simple symmetric random walk (coin-flip betting), taking $\tau$ to be the first time that $X_n = -1$ gives $E(X_\tau) = -1 \neq 0 = E(X_0)$. The next section gives some conditions under which equality does hold.

2. Stopping Times

Definition 2.1. For a martingale $\{X_n\}$, a non-negative integer valued random variable $\tau$ is a stopping time if it has the property that
$$\{\tau = n\} \in \sigma(X_1, X_2, \ldots, X_n).$$
Intuitively, this says that one can determine whether $\tau = n$ just by looking at the first $n$ steps of the martingale.

Example 2.2. In the coin-flipping example, if we let $\tau$ be the first time that $X_n = 10$, then $\tau$ is a stopping time.

Example 2.3. We are often interested in $X_\tau$, the value of the martingale at the random time $\tau$. This is precisely defined as $X_\tau(\omega) = X_{\tau(\omega)}(\omega)$. Another handy rewriting is $X_\tau = \sum_k X_k \mathbf{1}_{\{\tau = k\}}$.

Lemma 2.4. If $\{X_n\}$ is a submartingale and $\tau_1, \tau_2$ are bounded stopping times, i.e. there exists $M$ such that $0 \leq \tau_1 \leq \tau_2 \leq M$ with probability 1, then $E(X_{\tau_1}) \leq E(X_{\tau_2})$, with equality for fair martingales.

Proof. For fixed $k$, the event $\{\tau_1 < k \leq \tau_2\}$ can be written as
$$\{\tau_1 < k \leq \tau_2\} = \{\tau_1 \leq k-1\} \cap \{\tau_2 \leq k-1\}^C,$$
from which we see that $\{\tau_1 < k \leq \tau_2\} \in \sigma(X_0, X_1, \ldots, X_{k-1})$, because $\tau_1$ and $\tau_2$ are both stopping times. We then have the following manipulation, using a telescoping series, linearity of expectation, the fact that $E(Y \mathbf{1}_A) = E\big(E(Y \mid X_0, X_1, \ldots, X_{k-1})\mathbf{1}_A\big)$ for events $A \in \sigma(X_0, X_1, \ldots, X_{k-1})$, and finally the fact that $E(X_k \mid X_0, X_1, \ldots, X_{k-1}) - X_{k-1} \geq 0$ since $X_n$ is a (sub)martingale, with equality for fair martingales:
\begin{align*}
E(X_{\tau_2}) - E(X_{\tau_1}) &= E(X_{\tau_2} - X_{\tau_1}) \\
&= E\left( \sum_{k=\tau_1+1}^{\tau_2} (X_k - X_{k-1}) \right) \\
&= E\left( \sum_{k=1}^{M} (X_k - X_{k-1}) \mathbf{1}_{\{\tau_1 < k \leq \tau_2\}} \right) \\
&= E\left( \sum_{k=1}^{M} \big( E(X_k \mid X_0, X_1, \ldots, X_{k-1}) - X_{k-1} \big) \mathbf{1}_{\{\tau_1 < k \leq \tau_2\}} \right) \\
&= \sum_{k=1}^{M} E\Big( \big( E(X_k \mid X_0, X_1, \ldots, X_{k-1}) - X_{k-1} \big) \mathbf{1}_{\{\tau_1 < k \leq \tau_2\}} \Big) \\
&\geq \sum_{k=1}^{M} E\big( 0 \cdot \mathbf{1}_{\{\tau_1 < k \leq \tau_2\}} \big) = 0,
\end{align*}
where the inequality is an equality in the case of a fair martingale.

Theorem 2.5. Say $\{X_n\}$ is a martingale and $\tau$ a bounded stopping time (that is, there exists $M$ such that $0 \leq \tau \leq M$ with probability 1). Then
$$E(X_\tau) = E(X_0).$$

Proof. Let $\upsilon$ be the random variable which is constantly 0. This is a stopping time! So by the above lemma, since $0 \leq \upsilon \leq \tau \leq M$, we have that $E(X_\tau) = E(X_\upsilon) = E(X_0)$.

Theorem 2.6. For $\{X_n\}$ a martingale and $\tau$ a stopping time which is almost surely finite (that is, $P(\tau < \infty) = 1$), we have
$$E(X_\tau) = E(X_0) \iff E\left( \lim_{n\to\infty} X_{\min(\tau,n)} \right) = \lim_{n\to\infty} E\left( X_{\min(\tau,n)} \right).$$

Proof. It suffices to show that $E(X_\tau) = E\big(\lim_{n\to\infty} X_{\min(\tau,n)}\big)$ and $E(X_0) = \lim_{n\to\infty} E\big(X_{\min(\tau,n)}\big)$. The first equality holds since $P(\tau < \infty) = 1$ gives $P\big(\lim_{n\to\infty} X_{\min(\tau,n)} = X_\tau\big) = 1$, so the two agree almost surely. The second holds by the above theorem concerning bounded stopping times: for any $n$, $\min(\tau, n)$ is a bounded stopping time, so $E\big(X_{\min(\tau,n)}\big) = E(X_0)$, and hence equality holds in the limit too.

Remark. The above theorem can be combined with results like the monotone convergence theorem or the Lebesgue dominated convergence theorem to switch the limits and conclude that $E(X_\tau) = E(X_0)$. Here are some examples:

Example 2.7. If $\{X_n\}$ is a martingale and $\tau$ a stopping time so that $P(\tau < \infty) = 1$, $E(|X_\tau|) < \infty$, and $\lim_{n\to\infty} E(X_n \mathbf{1}_{\tau > n}) = 0$, then $E(X_\tau) = E(X_0)$.

Proof. For any $n$ we have $X_{\min(\tau,n)} = X_n \mathbf{1}_{\tau > n} + X_\tau \mathbf{1}_{\tau \leq n}$. Taking expectations and then the limit as $n \to \infty$ gives:
$$\lim_{n\to\infty} E\big(X_{\min(\tau,n)}\big) = \lim_{n\to\infty} E(X_n \mathbf{1}_{\tau > n}) + \lim_{n\to\infty} E(X_\tau \mathbf{1}_{\tau \leq n}) = 0 + E(X_\tau),$$
where the first term is 0 by hypothesis, and the second limit is justified since $X_\tau \mathbf{1}_{\tau \leq n} \to X_\tau$ pointwise almost surely (because $P(\tau < \infty) = 1$), and the integrable majorant $|X_\tau|$, with $E(|X_\tau|) < \infty$, lets us use the Lebesgue dominated convergence theorem to conclude convergence of the expectations.

Example 2.8.
Suppose $\{X_n\}$ is a martingale and $\tau$ a stopping time so that $E(\tau) < \infty$ and $|X_{n+1} - X_n| \leq M < \infty$ for some fixed $M$ and for every $n$. Then $E(X_\tau) = E(X_0)$.

Proof. Let $Y = |X_0| + M\tau$. Then $Y$ can be used as an integrable majorant in an application of the Lebesgue dominated convergence theorem, very similar to the above example, to get the conclusion.

3. Martingale Convergence Theorem

The proof of the Martingale Convergence Theorem relies on the famous upcrossing lemma:

Lemma 3.1 (The Upcrossing Lemma). Let $\{X_n\}$ be a submartingale. For fixed $\alpha, \beta \in \mathbb{R}$ with $\beta > \alpha$, and $M \in \mathbb{N}$, let $U_M^{\alpha,\beta}$ be the number of "upcrossings" that the martingale $\{X_n\}$ makes of the interval $[\alpha, \beta]$ in the time period $1 \leq n \leq M$.
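To make the count $U_M^{\alpha,\beta}$ concrete, here is a small counting routine (my own illustration, not from the notes): an upcrossing is completed each time the path, having been at or below $\alpha$, subsequently reaches $\beta$ or above.

```python
def upcrossings(path, alpha, beta):
    """Count completed upcrossings of the interval [alpha, beta] by the sequence `path`."""
    assert beta > alpha
    count = 0
    below = False  # have we touched level alpha (or lower) since the last upcrossing?
    for x in path:
        if x <= alpha:
            below = True
        elif x >= beta and below:
            count += 1  # crossed from <= alpha up to >= beta
            below = False
    return count

# Two upcrossings of [-1, 2]: the moves (-1 -> 2) and (-2 -> 3).
print(upcrossings([0, -1, 2, -2, 3, 0], -1, 2))  # 2
```

Note the convention that the path must first dip to $\alpha$ or below before a rise to $\beta$ counts; a path that stays above $\alpha$ completes no upcrossings.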