Probability and Random Processes


Serik Sagitov, Chalmers University of Technology and Gothenburg University

Abstract

Lecture notes based on the book Probability and Random Processes by Geoffrey Grimmett and David Stirzaker. Last updated June 2, 2014.

Contents

1 Random events and random variables
  1.1 Probability space
  1.2 Conditional probability and independence
  1.3 Random variables
  1.4 Random vectors
  1.5 Filtration
2 Expectation and conditional expectation
  2.1 Expectation
  2.2 Conditional expectation and prediction
  2.3 Multinomial distribution
  2.4 Multivariate normal distribution
  2.5 Sampling from a distribution
  2.6 Probability generating, moment generating, and characteristic functions
3 Convergence of random variables
  3.1 Borel-Cantelli lemmas
  3.2 Inequalities
  3.3 Modes of convergence
  3.4 Continuity of expectation
  3.5 Weak law of large numbers
  3.6 Central limit theorem
  3.7 Strong LLN
4 Markov chains
  4.1 Simple random walks
  4.2 Markov chains
  4.3 Stationary distributions
  4.4 Reversibility
  4.5 Branching process
  4.6 Poisson process and continuous-time Markov chains
  4.7 The generator of a continuous-time Markov chain
5 Stationary processes
  5.1 Weakly and strongly stationary processes
  5.2 Linear prediction
  5.3 Linear combination of sinusoids
  5.4 The spectral representation
  5.5 Stochastic integral
  5.6 The ergodic theorem for the weakly stationary processes
  5.7 The ergodic theorem for the strongly stationary processes
  5.8 Gaussian processes
6 Renewal theory and queues
  6.1 Renewal function and excess life
  6.2 LLN and CLT for the renewal process
  6.3 Stopping times and Wald's equation
  6.4 Stationary excess lifetime distribution
  6.5 Renewal-reward processes
  6.6 Regeneration technique for queues
  6.7 M/M/1 queues
  6.8 M/G/1 queues
  6.9 G/M/1 queues
  6.10 G/G/1 queues
7 Martingales
  7.1 Definitions and examples
  7.2 Convergence in $L^2$
  7.3 Doob's decomposition
  7.4 Hoeffding's inequality
  7.5 Convergence in $L^1$
  7.6 Bounded stopping times. Optional sampling theorem
  7.7 Unbounded stopping times
  7.8 Maximal inequality
  7.9 Backward martingales. Strong LLN
8 Diffusion processes
  8.1 The Wiener process
  8.2 Properties of the Wiener process
  8.3 Examples of diffusion processes
  8.4 The Ito formula
  8.5 The Black-Scholes formula

1 Random events and random variables

1.1 Probability space

A random experiment is modeled in terms of a probability space $(\Omega, \mathcal{F}, P)$:

• the sample space $\Omega$ is the set of all possible outcomes of the experiment,

• the $\sigma$-field (or sigma-algebra) $\mathcal{F}$ is a collection of measurable subsets $A \subset \Omega$ (which are called random events) satisfying
  1. $\emptyset \in \mathcal{F}$,
  2. if $A_i \in \mathcal{F}$, $i = 1, 2, \ldots$, then $\bigcup_{i=1}^{\infty} A_i \in \mathcal{F}$ (countable unions),
  3. if $A \in \mathcal{F}$, then $A^c \in \mathcal{F}$ (complementary event),

• the probability measure $P$ is a function on $\mathcal{F}$ satisfying three probability axioms:
  1. if $A \in \mathcal{F}$, then $P(A) \geq 0$,
  2. $P(\Omega) = 1$,
  3. if $A_i \in \mathcal{F}$, $i = 1, 2, \ldots$ are all disjoint, then $P(\bigcup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i)$.
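On a finite sample space these definitions can be checked mechanically. The following Python sketch (an illustration added alongside the notes, not part of the original text; the two-coin setup and names such as `prob` and `power_set` are arbitrary choices) builds the probability space for two fair coin tosses, takes $\mathcal{F}$ to be the full power set of $\Omega$, and verifies the probability axioms by enumeration.

```python
from itertools import chain, combinations
from fractions import Fraction

# Sample space for two fair coin tosses.
omega = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]

# Probability measure on atoms: each outcome gets mass 1/4.
atom_prob = {w: Fraction(1, 4) for w in omega}

def prob(event):
    """P(A) for an event A given as a set of outcomes."""
    return sum(atom_prob[w] for w in event)

# The sigma-field: on a finite Omega we may take the whole power set.
def power_set(xs):
    return [set(s) for s in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

F = power_set(omega)

# Axiom 1: non-negativity.
assert all(prob(A) >= 0 for A in F)
# Axiom 2: normalization.
assert prob(set(omega)) == 1
# Axiom 3 (finite check): additivity on disjoint events.
A = {("H", "H"), ("H", "T")}   # heads on the first coin
B = {("T", "T")}               # tails on both coins
assert A.isdisjoint(B) and prob(A | B) == prob(A) + prob(B)
```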
De Morgan's laws:
$$\Big(\bigcap_i A_i\Big)^c = \bigcup_i A_i^c, \qquad \Big(\bigcup_i A_i\Big)^c = \bigcap_i A_i^c.$$

Properties derived from the axioms:
$$P(\emptyset) = 0, \qquad P(A^c) = 1 - P(A), \qquad P(A \cup B) = P(A) + P(B) - P(A \cap B).$$

Inclusion-exclusion rule:
$$P(A_1 \cup \ldots \cup A_n) = \sum_i P(A_i) - \sum_{i<j} P(A_i \cap A_j) + \sum_{i<j<k} P(A_i \cap A_j \cap A_k) - \ldots + (-1)^{n+1} P(A_1 \cap \ldots \cap A_n).$$

Continuity of the probability measure:

• if $A_1 \subset A_2 \subset \ldots$ and $A = \bigcup_{i=1}^{\infty} A_i = \lim_{i\to\infty} A_i$, then $P(A) = \lim_{i\to\infty} P(A_i)$,

• if $B_1 \supset B_2 \supset \ldots$ and $B = \bigcap_{i=1}^{\infty} B_i = \lim_{i\to\infty} B_i$, then $P(B) = \lim_{i\to\infty} P(B_i)$.

1.2 Conditional probability and independence

If $P(B) > 0$, then the conditional probability of $A$ given $B$ is
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$

The law of total probability and the Bayes formula. Let $B_1, \ldots, B_n$ be a partition of $\Omega$; then
$$P(A) = \sum_{i=1}^{n} P(A \mid B_i) P(B_i), \qquad P(B_j \mid A) = \frac{P(A \mid B_j) P(B_j)}{\sum_{i=1}^{n} P(A \mid B_i) P(B_i)}.$$

Definition 1.1 Events $A_1, \ldots, A_n$ are independent if for any subset of events $(A_{i_1}, \ldots, A_{i_k})$
$$P(A_{i_1} \cap \ldots \cap A_{i_k}) = P(A_{i_1}) \cdots P(A_{i_k}).$$

Example 1.2 Pairwise independence does not imply independence of three events. Toss two coins and consider three events:

• $A$ = {heads on the first coin},

• $B$ = {heads on the second coin},

• $C$ = {one head and one tail}.

Clearly, $P(A \mid C) = P(A)$ and $P(B \mid C) = P(B)$, but $P(A \cap B \mid C) = 0 \neq P(A \mid C)\,P(B \mid C)$: the three events are pairwise independent but not independent.

1.3 Random variables

A real random variable is a measurable function $X : \Omega \to \mathbb{R}$, so that different outcomes $\omega \in \Omega$ can give different values $X(\omega)$. Measurability of $X(\omega)$ means
$$\{\omega : X(\omega) \leq x\} \in \mathcal{F} \quad \text{for any real number } x.$$

The probability distribution $P_X(B) = P(X \in B)$ defines a new probability space $(\mathbb{R}, \mathcal{B}, P_X)$, where $\mathcal{B} = \sigma(\text{all open intervals})$ is the Borel sigma-algebra.

Definition 1.3 The distribution function (cumulative distribution function) is
$$F(x) = F_X(x) = P_X\{(-\infty, x]\} = P(X \leq x).$$
In terms of the distribution function we get
$$P(a < X \leq b) = F(b) - F(a), \qquad P(X < x) = F(x-), \qquad P(X = x) = F(x) - F(x-).$$
Any monotone right-continuous function with
$$\lim_{x \to -\infty} F(x) = 0 \quad \text{and} \quad \lim_{x \to \infty} F(x) = 1$$
can be a distribution function.

Definition 1.4 The random variable $X$ is called discrete if, for some countable set of possible values,
$$P(X \in \{x_1, x_2, \ldots\}) = 1.$$
Its distribution is described by the probability mass function $f(x) = P(X = x)$. The random variable $X$ is called (absolutely) continuous if its distribution has a probability density function $f(x)$:
$$F(x) = \int_{-\infty}^{x} f(y)\,dy \quad \text{for all } x,$$
so that $f(x) = F'(x)$ almost everywhere.

Example 1.5 The indicator of a random event, $1_A = 1_{\{\omega \in A\}}$, with $p = P(A)$ has a Bernoulli distribution:
$$P(1_A = 1) = p, \qquad P(1_A = 0) = 1 - p.$$
For several events, $S_n = \sum_{i=1}^{n} 1_{A_i}$ counts the number of events that occurred. If independent events $A_1, A_2, \ldots$ have the same probability $p = P(A_i)$, then $S_n$ has a binomial distribution Bin$(n, p)$:
$$P(S_n = k) = \binom{n}{k} p^k (1-p)^{n-k}, \qquad k = 0, 1, \ldots, n.$$

Example 1.6 (Cantor distribution) Consider $(\Omega, \mathcal{F}, P)$ with $\Omega = [0, 1]$, $\mathcal{F} = \mathcal{B}_{[0,1]}$, and
$$P([0, 1]) = 1,$$
$$P([0, 1/3]) = P([2/3, 1]) = 2^{-1},$$
$$P([0, 1/9]) = P([2/9, 1/3]) = P([2/3, 7/9]) = P([8/9, 1]) = 2^{-2},$$
and so on. Put $X(\omega) = \omega$; its distribution, called the Cantor distribution, is neither discrete nor continuous. Its distribution function, called the Cantor function, is continuous but not absolutely continuous.
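To see Example 1.5 in action, a short simulation (added here as a sketch, not part of the original notes; the parameters $n = 5$, $p = 0.3$, the seed, and the number of trials are arbitrary choices) generates $S_n = 1_{A_1} + \ldots + 1_{A_n}$ from independent indicators and compares the empirical frequencies with the Bin$(n, p)$ mass function.

```python
import random
from math import comb

random.seed(0)
n, p, trials = 5, 0.3, 100_000

# Simulate S_n = 1_{A_1} + ... + 1_{A_n} with independent events of probability p.
counts = [0] * (n + 1)
for _ in range(trials):
    s = sum(1 for _ in range(n) if random.random() < p)
    counts[s] += 1

# Compare the empirical distribution of S_n with the binomial pmf.
for k in range(n + 1):
    empirical = counts[k] / trials
    theoretical = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"k={k}: empirical {empirical:.4f}  vs  Bin(n,p) pmf {theoretical:.4f}")
```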
1.4 Random vectors

Definition 1.7 The joint distribution of a random vector $X = (X_1, \ldots, X_n)$ is the function
$$F_X(x_1, \ldots, x_n) = P(\{X_1 \leq x_1\} \cap \ldots \cap \{X_n \leq x_n\}).$$
The marginal distributions are
$$F_{X_1}(x) = F_X(x, \infty, \ldots, \infty), \quad F_{X_2}(x) = F_X(\infty, x, \infty, \ldots, \infty), \quad \ldots, \quad F_{X_n}(x) = F_X(\infty, \ldots, \infty, x).$$
The existence of the joint probability density function $f(x_1, \ldots, x_n)$ means that the distribution function
$$F_X(x_1, \ldots, x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} f(y_1, \ldots, y_n)\, dy_1 \ldots dy_n, \quad \text{for all } (x_1, \ldots, x_n),$$
is absolutely continuous, so that
$$f(x_1, \ldots, x_n) = \frac{\partial^n F(x_1, \ldots, x_n)}{\partial x_1 \cdots \partial x_n}$$
almost everywhere.

Definition 1.8 Random variables $(X_1, \ldots, X_n)$ are called independent if for any $(x_1, \ldots, x_n)$
$$P(X_1 \leq x_1, \ldots, X_n \leq x_n) = P(X_1 \leq x_1) \cdots P(X_n \leq x_n).$$
In the jointly continuous case this is equivalent to
$$f(x_1, \ldots, x_n) = f_{X_1}(x_1) \cdots f_{X_n}(x_n).$$

Example 1.9 In general, the joint distribution cannot be recovered from the marginal distributions. If
$$F_{X,Y}(x, y) = xy \, 1_{\{(x,y) \in [0,1]^2\}},$$
then the vectors $(X, Y)$ and $(X, X)$ have the same marginal distributions.

Example 1.10 Consider
$$F(x, y) = \begin{cases} 1 - e^{-x} - x e^{-y} & \text{if } 0 \leq x \leq y, \\ 1 - e^{-y} - y e^{-y} & \text{if } 0 \leq y \leq x, \\ 0 & \text{otherwise.} \end{cases}$$
Show that $F(x, y)$ is the joint distribution function of some pair $(X, Y)$. Find the marginal distribution functions and densities.

Solution. Three properties should be satisfied for $F(x, y)$ to be the joint distribution function of some pair $(X, Y)$:
1. $F(x, y)$ is non-decreasing in both variables,
2. $F(x, y) \to 0$ as $x \to -\infty$ and $y \to -\infty$,
3. $F(x, y) \to 1$ as $x \to \infty$ and $y \to \infty$.

Observe that
$$f(x, y) = \frac{\partial^2 F(x, y)}{\partial x \, \partial y} = e^{-y}\, 1_{\{0 \leq x \leq y\}}$$
is always non-negative. Thus the first property follows from the integral representation
$$F(x, y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f(u, v)\, dv\, du,$$
which, for $0 \leq x \leq y$, is verified as
$$\int_{-\infty}^{x} \int_{-\infty}^{y} f(u, v)\, dv\, du = \int_{0}^{x} \Big( \int_{u}^{y} e^{-v}\, dv \Big) du = 1 - e^{-x} - x e^{-y},$$
and for $0 \leq y \leq x$ as
$$\int_{-\infty}^{x} \int_{-\infty}^{y} f(u, v)\, dv\, du = \int_{0}^{y} \Big( \int_{u}^{y} e^{-v}\, dv \Big) du = 1 - e^{-y} - y e^{-y}.$$
The second and third properties are straightforward.
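The closed form in Example 1.10 can also be cross-checked numerically. The sketch below (an added illustration, not from the original notes; the grid resolution and test points are arbitrary) integrates the density $f(u, v) = e^{-v} 1_{\{0 \leq u \leq v\}}$ with a crude midpoint rule and compares the result with the stated formula for $F(x, y)$.

```python
from math import exp

def f(u, v):
    """Joint density from Example 1.10: e^{-v} on the region 0 <= u <= v."""
    return exp(-v) if 0.0 <= u <= v else 0.0

def F_formula(x, y):
    """Closed-form joint distribution function from Example 1.10."""
    if x < 0 or y < 0:
        return 0.0
    if x <= y:
        return 1.0 - exp(-x) - x * exp(-y)
    return 1.0 - exp(-y) - y * exp(-y)

def F_numeric(x, y, steps=400):
    """Midpoint-rule approximation of the integral of f over (-inf, x] x (-inf, y]."""
    if x <= 0 or y <= 0:
        return 0.0
    du, dv = x / steps, y / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * du
        for j in range(steps):
            v = (j + 0.5) * dv
            total += f(u, v) * du * dv
    return total

for (x, y) in [(0.5, 1.0), (1.0, 0.5), (2.0, 3.0)]:
    print(f"F({x},{y}): formula {F_formula(x, y):.4f}, numeric {F_numeric(x, y):.4f}")
```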