Probability (Graduate Class) Lecture Notes

Tomasz Tkocz∗

These lecture notes were written for the graduate course 21-721 Probability that I taught at Carnegie Mellon University in Spring 2020.

∗Carnegie Mellon University; [email protected]

Contents

1 Probability space
  1.1 Definitions
  1.2 Basic examples
  1.3 Conditioning
  1.4 Exercises
2 Random variables
  2.1 Definitions and basic properties
  2.2 π–λ systems
  2.3 Properties of distribution functions
  2.4 Examples: discrete and continuous random variables
  2.5 Exercises
3 Independence
  3.1 Definitions
  3.2 Product measures and independent random variables
  3.3 Examples
  3.4 Borel–Cantelli lemmas
  3.5 Tail events and Kolmogorov's 0–1 law
  3.6 Exercises
4 Expectation
  4.1 Definitions and basic properties
  4.2 Variance and covariance
  4.3 Independence again, via product measures
  4.4 Exercises
5 More on random variables
  5.1 Important distributions
  5.2 Gaussian vectors
  5.3 Sums of independent random variables
  5.4 Density
  5.5 Exercises
6 Important inequalities and notions of convergence
  6.1 Basic probabilistic inequalities
  6.2 L^p-spaces
  6.3 Notions of convergence
  6.4 Exercises
7 Laws of large numbers
  7.1 Weak law of large numbers
  7.2 Strong law of large numbers
  7.3 Exercises
8 Weak convergence
  8.1 Definition and equivalences
  8.2 Relations to other notions of convergence and basic algebraic properties
  8.3 Compactness
  8.4 Exercises
9 Characteristic functions
  9.1 Definition and basic properties
  9.2 Inversion formulae
  9.3 Relations to convergence in distribution
  9.4 Exercises
10 Central limit theorem
  10.1 Auxiliary elementary lemmas
  10.2 Vanilla Central Limit Theorem
  10.3 Lindeberg's Central Limit Theorem
  10.4 Multidimensional case
  10.5 Poisson limit theorem
  10.6 Exercises
11 Quantitative versions of the limit theorem*
  11.1 Berry–Esseen theorem via Stein's method
  11.2 Local central limit theorem
  11.3 Poisson limit theorem
  11.4 Exercises
12 Conditional expectation
  12.1 Construction
  12.2 Important properties
  12.3 Basic examples
  12.4 Exercises
13 Martingales I
  13.1 Definitions and basic examples
  13.2 Martingale transforms and stopping times
  13.3 Convergence theorem
  13.4 Exercises
14 Martingales II
  14.1 L^2 martingales
  14.2 Uniformly integrable martingales
  14.3 Maximal inequalities
  14.4 Martingales bounded in L^p, p > 1
  14.5 Exercises
15 Applications of martingale theory
  15.1 Series of independent random variables
  15.2 Kolmogorov's 0–1 law and strong law of large numbers
  15.3 Kakutani's theorem
  15.4 The law of the iterated logarithm for Gaussians
  15.5 The Radon–Nikodym theorem
  15.6 Exercises
16 Large deviations
  16.1 Moment generating functions
  16.2 Upper bounds: Chernoff's inequality
  16.3 Cramér's theorem
  16.4 Quantitative bounds
  16.5 Bounds on the expected maximum of random variables
  16.6 A flavour of concentration inequalities
  16.7 Exercises
A Appendix: Carathéodory's theorem
B Appendix: Dynkin's theorem
C Appendix: Fubini's theorem
D Appendix: Infinite products of measures
E Appendix: Construction of expectation
  E.1 Nonnegative random variables
  E.2 General random variables
  E.3 Lebesgue's dominated convergence theorem
F Appendix: Lindeberg's swapping argument
G Appendix: The moment method
H Appendix: Feller's converse to the central limit theorem
I Appendix: Uniform integrability

1 Probability space

1.1 Definitions

Let $\Omega$ be a set.
A collection $\mathcal{F}$ of its subsets is called a σ-algebra (sometimes also a σ-field) if

(i) $\Omega \in \mathcal{F}$,

(ii) for every $A \in \mathcal{F}$ we have $A^c \in \mathcal{F}$, that is, $\mathcal{F}$ is closed under taking complements,

(iii) for all sets $A_1, A_2, \ldots$ in $\mathcal{F}$ we have $\bigcup_{n=1}^{\infty} A_n \in \mathcal{F}$, that is, $\mathcal{F}$ is closed under taking countable unions.

Note that these imply that $\emptyset \in \mathcal{F}$ and that $\mathcal{F}$ is also closed under taking set differences, countable intersections, etc. For instance, $\mathcal{F} = \{\emptyset, \Omega\}$ is the trivial σ-algebra and $\mathcal{F} = 2^{\Omega}$ (all the subsets of $\Omega$) is the largest possible σ-algebra.

Suppose $\mathcal{F}$ is a σ-algebra on the set $\Omega$. A function $\mu \colon \mathcal{F} \to [0, +\infty]$ is called a measure if

(i) $\mu(\emptyset) = 0$,

(ii) $\mu$ is countably additive, that is, for all pairwise disjoint sets $A_1, A_2, \ldots$ in $\mathcal{F}$ we have
$$\mu\left(\bigcup_{n=1}^{\infty} A_n\right) = \sum_{n=1}^{\infty} \mu(A_n).$$

The measure $\mu$ is finite if $\mu(\Omega) < \infty$, and σ-finite if $\Omega$ is a countable union of sets in $\mathcal{F}$ of finite measure. The measure $\mu$ is a probability measure if $\mu(\Omega) = 1$.

A probability space is a triple $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega$ is a set, $\mathcal{F}$ is a σ-algebra on $\Omega$ and $\mathbb{P}$ is a probability measure on $\mathcal{F}$. The sets in $\mathcal{F}$ are called events. The empty set is called an impossible event because $\mathbb{P}(\emptyset) = 0$. Set operations have natural interpretations: for "$A \cap B$" we say "$A$ and $B$ occur", for "$A \cup B$" we say "$A$ or $B$ occurs", for "$A^c$" we say "$A$ does not occur", for "$\bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} A_k$" we say "infinitely many of the events $A_k$ occur", etc. This definition is the starting point of modern probability theory. It was laid as its foundation by Kolmogorov, who presented his axiom system for probability theory in 1933.

We record some basic and useful properties of probability measures.

1.1 Theorem. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $A, B, A_1, A_2, \ldots$ be events.
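On a finite ground set the σ-algebra axioms are easy to check mechanically, since countable unions reduce to finite ones. A small illustrative sketch in Python (the helper function and the example families are an assumption of this presentation, not part of the notes):

```python
from itertools import chain, combinations

def is_sigma_algebra(omega, family):
    """Brute-force check of the sigma-algebra axioms on a finite Omega.

    On a finite ground set it suffices to verify that Omega belongs to the
    family and that the family is closed under complement and pairwise union.
    """
    fam = {frozenset(s) for s in family}
    if frozenset(omega) not in fam:
        return False
    for a in fam:
        if frozenset(omega) - a not in fam:   # closed under complements?
            return False
        for b in fam:
            if a | b not in fam:              # closed under (finite) unions?
                return False
    return True

omega = {1, 2, 3, 4}
trivial = [set(), omega]                      # the trivial sigma-algebra
power_set = [set(s) for s in chain.from_iterable(
    combinations(sorted(omega), r) for r in range(len(omega) + 1))]
not_closed = [set(), {1}, omega]              # missing the complement {2, 3, 4}

print(is_sigma_algebra(omega, trivial))       # True
print(is_sigma_algebra(omega, power_set))     # True
print(is_sigma_algebra(omega, not_closed))    # False
```

The first two families match the two extremes named above: the trivial σ-algebra and the full power set $2^{\Omega}$.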
Then

(i) $\mathbb{P}(A^c) = 1 - \mathbb{P}(A)$,

(ii) if $A \subset B$, then $\mathbb{P}(B \setminus A) = \mathbb{P}(B) - \mathbb{P}(A)$ and $\mathbb{P}(A) \leq \mathbb{P}(B)$,

(iii) $\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B)$,

(iv) $\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) = \sum_i \mathbb{P}(A_i) - \sum_{i<j} \mathbb{P}(A_i \cap A_j) + \sum_{i<j<k} \mathbb{P}(A_i \cap A_j \cap A_k) - \cdots + (-1)^{n-1}\,\mathbb{P}(A_1 \cap \cdots \cap A_n)$,

(v) $\mathbb{P}\left(\bigcup_{i=1}^{\infty} A_i\right) \leq \sum_{i=1}^{\infty} \mathbb{P}(A_i)$,

(vi) if $A_1, \ldots, A_n$ are pairwise disjoint, then $\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) = \sum_{i=1}^{n} \mathbb{P}(A_i)$.

We omit the proofs (which are rather standard). Part (iv) is the so-called inclusion-exclusion formula. Part (v) is the so-called union bound. We also have the following continuity-of-measure results for monotone events.

1.2 Theorem. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $A_1, A_2, \ldots$ be events.

(i) If the events $A_n$ are increasing, that is $A_1 \subset A_2 \subset \ldots$, then
$$\mathbb{P}\left(\bigcup_{k=1}^{\infty} A_k\right) = \lim_{n \to \infty} \mathbb{P}(A_n);$$

(ii) if the events $A_n$ are decreasing, that is $A_1 \supset A_2 \supset \ldots$, then
$$\mathbb{P}\left(\bigcap_{k=1}^{\infty} A_k\right) = \lim_{n \to \infty} \mathbb{P}(A_n).$$

Proof. (i) It helps to consider the events $B_1 = A_1$, $B_2 = A_2 \setminus A_1$, $B_3 = A_3 \setminus A_2, \ldots$, which are pairwise disjoint. We skip the details. Part (ii) can be obtained from (i) by passing to complements.

1.3 Remark. Theorem 1.1 and Theorem 1.2 (i) hold for arbitrary measures (the proofs do not need the assumption $\mathbb{P}(\Omega) = 1$ of the measure $\mathbb{P}$ being probabilistic). Theorem 1.2 (ii) holds for an arbitrary measure $\mathbb{P}$ as long as $\mathbb{P}(A_k) < \infty$ for some $k$.

1.2 Basic examples

1.4 Example. Let $\Omega = \{\omega_1, \omega_2, \ldots\}$ be a countable set and let $\mathcal{F} = 2^{\Omega}$ (all subsets). Defining a probability measure on $(\Omega, \mathcal{F})$ really amounts to specifying a nonnegative sequence $p_1, p_2, \ldots$ with $\sum_i p_i = 1$ and setting $\mathbb{P}(\{\omega_i\}) = p_i$. Then for every subset $A$ of $\Omega$,
$$\mathbb{P}(A) = \sum_{i \colon \omega_i \in A} p_i.$$
Conversely, since
$$1 = \mathbb{P}(\Omega) = \sum_i \mathbb{P}(\{\omega_i\}),$$
every probability measure on such $(\Omega, \mathcal{F})$ is of this form.

1.5 Example. Let $\Omega$ be a finite nonempty set and $\mathcal{F} = 2^{\Omega}$. The uniform probability measure on $(\Omega, \mathcal{F})$ (sometimes referred to as classical) is defined as
$$\mathbb{P}(A) = \frac{|A|}{|\Omega|}$$
for every subset $A$ of $\Omega$, where $|\cdot|$ denotes cardinality.
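The inclusion-exclusion formula (Theorem 1.1 (iv)) and the union bound (Theorem 1.1 (v)) can be verified by brute force on a finite uniform space as in Example 1.5. A minimal sketch in Python (the ground set of size 20 and the four random events are arbitrary illustrative choices):

```python
import random
from itertools import combinations

random.seed(0)
N = 20
P = lambda A: len(A) / N  # uniform (classical) measure, as in Example 1.5

# Four arbitrary events: random 7-element subsets of {0, ..., 19}.
events = [frozenset(random.sample(range(N), 7)) for _ in range(4)]
union = frozenset().union(*events)

# Theorem 1.1 (iv): inclusion-exclusion, summing over nonempty subfamilies
# with alternating signs.
incl_excl = 0.0
for r in range(1, len(events) + 1):
    for subfamily in combinations(events, r):
        incl_excl += (-1) ** (r + 1) * P(frozenset.intersection(*subfamily))
assert abs(P(union) - incl_excl) < 1e-12

# Theorem 1.1 (v): the union bound.
assert P(union) <= sum(P(A) for A in events)
print("inclusion-exclusion and union bound verified")
```

Inclusion-exclusion is an exact identity, so the first assertion holds for any choice of events, not just these random ones.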
Our next two examples will require nontrivial constructions. We wish to define two probability spaces which will be reasonable models of

1) selecting a point uniformly at random from the interval $[0, 1]$,

2) tossing a fair coin infinitely many times.

As much as choosing the ground set $\Omega$ is fairly natural, say $\Omega = [0, 1]$ for 1), defining an appropriate σ-algebra and a probability measure on it poses certain challenges. Let us first try to illustrate the possible subtleties.

Let $\Omega = [0, 1]$. If $(\Omega, \mathcal{F}, \mathbb{P})$ is meant to be a probability space modelling the selection of a point uniformly at random from $[0, 1]$, then for $0 \leq a < b \leq 1$ we should have $\mathbb{P}((a, b)) = b - a$ (the probability that the point is in the interval $(a, b)$ equals its length), and more generally, $\mathbb{P}$ should be translation-invariant. Thus $\mathcal{F}$ should at the very least contain all intervals. Thus let $\mathcal{F}$ be such a σ-algebra, that is the smallest σ-algebra containing all the intervals in $[0, 1]$; we write.
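The intended model, whose rigorous construction the notes go on to develop, can at least be sanity-checked empirically: among many independently sampled uniform points, the fraction landing in $(a, b)$ should approach $b - a$. A Monte Carlo sketch in Python (the endpoints, sample size, and tolerance 0.01 are arbitrary illustrative choices):

```python
import random

random.seed(1)

# Empirical check that a uniform point lands in (a, b) with frequency ~ b - a.
a, b, n = 0.25, 0.6, 200_000
hits = sum(1 for _ in range(n) if a < random.random() < b)
freq = hits / n

print(f"empirical frequency {freq:.3f} vs interval length {b - a:.3f}")
assert abs(freq - (b - a)) < 0.01  # generous tolerance: ~9 standard deviations
```

Of course, such sampling presupposes the uniform model rather than constructing it; the construction itself is exactly what requires the machinery of σ-algebras and Carathéodory's theorem from the appendix.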