Probability Spaces and Random Variables


We have already introduced the notion of a probability space, namely a measure space $(X, \mathcal{F}, \mu)$ with the property that $\mu(X) = 1$. Here we look further at some basic notions and results of probability theory.

First, we give some terminology. A set $S \in \mathcal{F}$ is called an event, and $\mu(S)$ is called the probability of the event, often denoted $P(S)$. The image to have in mind is that one picks a point $x \in X$ at random, and $\mu(S)$ is the probability that $x \in S$. A measurable function $f : X \to \mathbb{R}$ is called a (real) random variable. (One also speaks of a measurable map $F : (X, \mathcal{F}) \to (Y, \mathcal{G})$ as a $Y$-valued random variable, given a measurable space $(Y, \mathcal{G})$.) If $f$ is integrable, one sets

$$E(f) = \int_X f\,d\mu, \tag{15.1}$$

called the expectation of $f$, or the mean of $f$. One defines the variance of $f$ as

$$\operatorname{Var}(f) = \int_X |f - a|^2\,d\mu, \qquad a = E(f). \tag{15.2}$$

The random variable $f$ has finite variance if and only if $f \in L^2(X, \mu)$.

A random variable $f : X \to \mathbb{R}$ induces a probability measure $\nu_f$ on $\mathbb{R}$, called the probability distribution of $f$:

$$\nu_f(S) = \mu(f^{-1}(S)), \qquad S \in \mathfrak{B}(\mathbb{R}), \tag{15.3}$$

where $\mathfrak{B}(\mathbb{R})$ denotes the $\sigma$-algebra of Borel sets in $\mathbb{R}$. It is clear that $f \in L^1(X, \mu)$ if and only if $\int |x|\,d\nu_f(x) < \infty$ and that

$$E(f) = \int_{\mathbb{R}} x\,d\nu_f(x). \tag{15.4}$$

We also have

$$\operatorname{Var}(f) = \int_{\mathbb{R}} (x - a)^2\,d\nu_f(x), \qquad a = E(f). \tag{15.5}$$

To illustrate some of these concepts, consider coin tosses. We start with

$$X = \{h, t\}, \qquad \mu(\{h\}) = \mu(\{t\}) = \frac{1}{2}. \tag{15.6}$$

The event $\{h\}$ is that the coin comes up heads, and $\{t\}$ gives tails. We also form

$$X_k = \prod_{j=1}^{k} X, \qquad \mu_k = \mu \times \cdots \times \mu, \tag{15.7}$$

representing the set of possible outcomes of $k$ successive coin tosses. If $H(k, \ell)$ is the event that there are exactly $\ell$ heads among the $k$ coin tosses, its probability is

$$\mu_k(H(k, \ell)) = 2^{-k} \binom{k}{\ell}. \tag{15.8}$$

If $N_k : X_k \to \mathbb{R}$ yields the number of heads that come up in $k$ tosses, i.e., $N_k(x)$ is the number of $h$'s that occur in $x = (x_1, \dots, x_k) \in X_k$, then

$$E(N_k) = \frac{k}{2}. \tag{15.9}$$

The measure $\nu_{N_k}$ on $\mathbb{R}$ is supported on the set $\{\ell \in \mathbb{Z}^+ : 0 \le \ell \le k\}$, and

$$\nu_{N_k}(\{\ell\}) = \mu_k(H(k, \ell)) = 2^{-k} \binom{k}{\ell}. \tag{15.10}$$

A central area of probability theory is the study of the large-$k$ behavior of events in spaces such as $(X_k, \mu_k)$, particularly various limits as $k \to \infty$. This leads naturally to a consideration of the infinite product space

$$Z = \prod_{j=1}^{\infty} X, \tag{15.11}$$

with $\sigma$-algebra $\mathcal{Z}$ and product measure $\nu$, constructed at the end of Chapter 6. In case $(X, \mu)$ is given by (15.6), one can consider $\widetilde{N}_k : Z \to \mathbb{R}$, the number of heads that come up in the first $k$ throws; $\widetilde{N}_k = N_k \circ \pi_k$, where $\pi_k : Z \to X_k$ is the natural projection. It is a fundamental fact of probability theory that for almost all infinite sequences of random coin tosses (i.e., with probability 1) the fraction $k^{-1}\widetilde{N}_k$ of them that come up heads tends to $1/2$ as $k \to \infty$. This is a special case of the "law of large numbers," several versions of which will be established below.

Note that $k^{-1}\widetilde{N}_k$ has the form

$$\frac{1}{k} \sum_{j=1}^{k} f_j, \tag{15.12}$$

where $f_j : Z \to \mathbb{R}$ has the form

$$f_j(x) = f(x_j), \qquad f : X \to \mathbb{R}, \quad x = (x_1, x_2, x_3, \dots) \in Z. \tag{15.13}$$

The random variables $f_j$ have the important properties of being independent and identically distributed.
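The coin-toss model (15.6)–(15.10) and the convergence $k^{-1}\widetilde{N}_k \to 1/2$ are easy to probe numerically. Here is a minimal sketch in NumPy, encoding heads as 1 and tails as 0; the seed, the choice $k = 100$, and the number of sampled sequences are arbitrary illustrative choices, not part of the text.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)  # seed fixed only for reproducibility

# Each row is one sample from (X_k, mu_k): k fair tosses, heads = 1, tails = 0.
k, trials = 100, 20_000
tosses = rng.integers(0, 2, size=(trials, k))

# N_k counts heads; by (15.9), E(N_k) = k/2.
N_k = tosses.sum(axis=1)
print(N_k.mean(), k / 2)                      # empirical mean vs. k/2

# Empirical check of (15.10): P(N_k = l) = 2^{-k} binom(k, l).
l = 50
print((N_k == l).mean(), comb(k, l) / 2**k)

# The fraction of heads k^{-1} N_k concentrates near 1/2: the law of
# large numbers, established below in several forms.
print(np.abs(N_k / k - 0.5).mean())
```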
We define these last two terms in general, for random variables on a probability space $(X, \mathcal{F}, \mu)$ that need not have a product structure. Say we have random variables $f_1, \dots, f_k : X \to \mathbb{R}$. Extending (15.3), we have a measure $\nu_{F_k}$ on $\mathbb{R}^k$ called the joint probability distribution:

$$\nu_{F_k}(S) = \mu(F_k^{-1}(S)), \qquad S \in \mathfrak{B}(\mathbb{R}^k), \quad F_k = (f_1, \dots, f_k) : X \to \mathbb{R}^k. \tag{15.14}$$

We say

$$f_1, \dots, f_k \ \text{are independent} \iff \nu_{F_k} = \nu_{f_1} \times \cdots \times \nu_{f_k}. \tag{15.15}$$

We also say

$$f_i \ \text{and} \ f_j \ \text{are identically distributed} \iff \nu_{f_i} = \nu_{f_j}. \tag{15.16}$$

If $\{f_j : j \in \mathbb{N}\}$ is an infinite sequence of random variables on $X$, we say they are independent if and only if each finite subset is. Equivalently, we can form

$$F = (f_1, f_2, f_3, \dots) : X \to \mathbb{R}^{\infty} = \prod_{j \ge 1} \mathbb{R}, \tag{15.17}$$

set

$$\nu_F(S) = \mu(F^{-1}(S)), \qquad S \in \mathfrak{B}(\mathbb{R}^{\infty}), \tag{15.18}$$

and then independence is equivalent to

$$\nu_F = \prod_{j \ge 1} \nu_{f_j}. \tag{15.19}$$

It is an easy exercise to show that the random variables in (15.13) are independent and identically distributed.

Here is a simple consequence of independence.

Lemma 15.1. If $f_1, f_2 \in L^2(X, \mu)$ are independent, then

$$E(f_1 f_2) = E(f_1)E(f_2). \tag{15.20}$$

Proof. We have $x^2, y^2 \in L^1(\mathbb{R}^2, \nu_{f_1, f_2})$, and hence $xy \in L^1(\mathbb{R}^2, \nu_{f_1, f_2})$. Then

$$E(f_1 f_2) = \int_{\mathbb{R}^2} xy\,d\nu_{f_1, f_2}(x, y) = \int_{\mathbb{R}^2} xy\,d\nu_{f_1}(x)\,d\nu_{f_2}(y) = E(f_1)E(f_2). \tag{15.21}$$

The following is a version of the weak law of large numbers. (See the exercises for a more general version.)

Proposition 15.2. Let $\{f_j : j \in \mathbb{N}\}$ be independent, identically distributed random variables on $(X, \mathcal{F}, \mu)$. Assume the $f_j$ have finite variance, and set $a = E(f_j)$. Then

$$\frac{1}{k} \sum_{j=1}^{k} f_j \longrightarrow a, \quad \text{in } L^2\text{-norm}, \tag{15.22}$$

and hence in measure, as $k \to \infty$.

Proof. Using Lemma 15.1, we have

$$\Bigl\| \frac{1}{k} \sum_{j=1}^{k} f_j - a \Bigr\|_{L^2}^2 = \frac{1}{k^2} \sum_{j=1}^{k} \| f_j - a \|_{L^2}^2 = \frac{b^2}{k}, \tag{15.23}$$

since $(f_j - a, f_\ell - a)_{L^2} = E(f_j - a)E(f_\ell - a) = 0$ for $j \ne \ell$, and $b^2 = \|f_j - a\|_{L^2}^2$ is independent of $j$. Clearly (15.23) implies (15.22). Convergence in measure then follows by Tchebychev's inequality.
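Both Lemma 15.1 and the $b^2/k$ decay in (15.23) can be checked by simulation. The following sketch assumes i.i.d. draws uniform on $[0, 1]$, so $a = 1/2$ and $b^2 = 1/12$; the seed, the sample counts, and the values of $k$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 50_000

# Lemma 15.1: for independent f1, f2, E(f1 f2) = E(f1) E(f2).
f1, f2 = rng.random(trials), rng.random(trials)
print((f1 * f2).mean(), f1.mean() * f2.mean())       # both near 1/4

# (15.23): the squared L^2 error of the sample mean decays like b^2 / k.
a, b2 = 0.5, 1.0 / 12.0
for k in (10, 100, 1000):
    means = rng.random(size=(trials, k)).mean(axis=1)  # (1/k) sum_j f_j
    print(k, ((means - a) ** 2).mean(), b2 / k)        # empirical vs. b^2/k
```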
The strong law of large numbers produces pointwise a.e. convergence and relaxes the $L^2$-hypothesis made in Proposition 15.2. Before proceeding to the general case, we first treat the product case (15.11)–(15.13).

Proposition 15.3. Let $(Z, \mathcal{Z}, \nu)$ be a product of a countable number of factors of a probability space $(X, \mathcal{F}, \mu)$, as in (15.11). Assume $p \in [1, \infty)$, let $f \in L^p(X, \mu)$, and define $f_j \in L^p(Z, \nu)$, as in (15.13). Set $a = E(f)$. Then

$$\frac{1}{k} \sum_{j=1}^{k} f_j \to a, \quad \text{in } L^p\text{-norm and } \nu\text{-a.e.}, \tag{15.24}$$

as $k \to \infty$.

Proof. This follows from ergodic theorems established in Chapter 14. In fact, note that $f_j = T^{j-1} f_1$, where $Tg(x) = g(\varphi(x))$, for

$$\varphi : Z \to Z, \qquad \varphi(x_1, x_2, x_3, \dots) = (x_2, x_3, x_4, \dots). \tag{15.25}$$

By Proposition 14.11, $\varphi$ is ergodic. We see that

$$\frac{1}{k} \sum_{j=1}^{k} f_j = \frac{1}{k} \sum_{j=0}^{k-1} T^j f_1 = A_k f_1, \tag{15.26}$$

as in (14.4). The convergence $A_k f_1 \to a$ asserted in (15.24) now follows from Proposition 14.8.

We now establish the following strong law of large numbers.

Theorem 15.4. Let $(X, \mathcal{F}, \mu)$ be a probability space, and let $\{f_j : j \in \mathbb{N}\}$ be independent, identically distributed random variables in $L^p(X, \mu)$, with $p \in [1, \infty)$. Set $a = E(f_j)$. Then

$$\frac{1}{k} \sum_{j=1}^{k} f_j \to a, \quad \text{in } L^p\text{-norm and } \mu\text{-a.e.}, \tag{15.27}$$

as $k \to \infty$.

Proof. Our strategy is to reduce this to Proposition 15.3. We have a map $F : X \to \mathbb{R}^{\infty}$ as in (15.17), yielding a measure $\nu_F$ on $\mathbb{R}^{\infty}$, as in (15.18), which is actually a product measure, as in (15.19). We have coordinate functions

$$\xi_j : \mathbb{R}^{\infty} \longrightarrow \mathbb{R}, \qquad \xi_j(x_1, x_2, x_3, \dots) = x_j, \tag{15.28}$$

and

$$f_j = \xi_j \circ F. \tag{15.29}$$

Note that $\xi_j \in L^p(\mathbb{R}^{\infty}, \nu_F)$ and

$$\int_{\mathbb{R}^{\infty}} \xi_j\,d\nu_F = \int_{\mathbb{R}} x\,d\nu_{f_j}(x) = a. \tag{15.30}$$

Now Proposition 15.3 implies

$$\frac{1}{k} \sum_{j=1}^{k} \xi_j \to a, \quad \text{in } L^p\text{-norm and } \nu_F\text{-a.e.}, \tag{15.31}$$

on $(\mathbb{R}^{\infty}, \nu_F)$, as $k \to \infty$. Since (15.29) holds and $F$ is measure-preserving, (15.27) follows.

Note that if $f_j$ are independent random variables on $X$ and $F_k = (f_1, \dots, f_k) : X \to \mathbb{R}^k$, then

$$\int_X G(f_1, \dots, f_k)\,d\mu = \int_{\mathbb{R}^k} G(x_1, \dots, x_k)\,d\nu_{F_k} = \int_{\mathbb{R}^k} G(x_1, \dots, x_k)\,d\nu_{f_1}(x_1) \cdots d\nu_{f_k}(x_k). \tag{15.32}$$

In particular, we have for the sum

$$S_k = \sum_{j=1}^{k} f_j \tag{15.33}$$

and a Borel set $B \subset \mathbb{R}$ that

$$\nu_{S_k}(B) = \int_X \chi_B(f_1 + \cdots + f_k)\,d\mu = \int_{\mathbb{R}^k} \chi_B(x_1 + \cdots + x_k)\,d\nu_{f_1}(x_1) \cdots d\nu_{f_k}(x_k). \tag{15.34}$$

Recalling the definition (9.60) of convolution of measures, we see that

$$\nu_{S_k} = \nu_{f_1} * \cdots * \nu_{f_k}. \tag{15.35}$$

Given a random variable $f : X \to \mathbb{R}$, the function

$$\chi_f(\xi) = E(e^{-i\xi f}) = \int_{\mathbb{R}} e^{-ix\xi}\,d\nu_f(x) = \sqrt{2\pi}\,\hat{\nu}_f(\xi) \tag{15.36}$$

is called the characteristic function of $f$.
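For the fair-coin variables of (15.6), with heads valued 1 and tails 0, each $\nu_{f_j}$ consists of two point masses of weight $1/2$ at $0$ and $1$, and (15.35) says $\nu_{S_k}$ is their $k$-fold convolution, which is exactly the binomial law (15.10). The sketch below checks this, along with the multiplicative behavior $E(e^{-i\xi S_k}) = \chi_f(\xi)^k$, a standard consequence of independence in the spirit of Lemma 15.1; the choices $k = 6$ and $\xi = 1.3$ are arbitrary.

```python
import numpy as np
from math import comb

# nu_f for one fair toss valued in {0, 1}: weights (1/2, 1/2) at points 0, 1.
nu_f = np.array([0.5, 0.5])

# (15.35): nu_{S_k} is the k-fold convolution nu_{f_1} * ... * nu_{f_k}.
k = 6
nu_Sk = nu_f.copy()
for _ in range(k - 1):
    nu_Sk = np.convolve(nu_Sk, nu_f)

# This reproduces the binomial weights 2^{-k} binom(k, l) of (15.10).
binom = np.array([comb(k, l) / 2**k for l in range(k + 1)])
print(np.allclose(nu_Sk, binom))                     # True

# With chi_f as in (15.36), independence gives E(e^{-i xi S_k}) = chi_f(xi)^k.
xi = 1.3                                             # arbitrary test frequency
chi_f = 0.5 * (1 + np.exp(-1j * xi))                 # integral of e^{-ix xi} d nu_f
chi_Sk = (nu_Sk * np.exp(-1j * xi * np.arange(k + 1))).sum()
print(np.isclose(chi_Sk, chi_f**k))                  # True
```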