Probability Theory Oral Exam Study Notes

Total Pages: 16

File Type: pdf, Size: 1020 kB

Notes transcribed by Mihai Nica

Abstract. These are some study notes that I made while studying for my oral exams on the topic of Probability Theory. I took these notes from a few different sources, and they are not in any particular order; they tend to move around a lot. They also skip some of the basics of measure theory, which are covered in the real analysis notes. Please be extremely cautious with these notes: they are rough notes and were originally only meant to help me study. They are not complete and likely contain errors. I have made them available to help other students with their oral exams. (Note: I specialized in probability theory, so these go a bit further into a few topics than most people would for their oral exam.) See also the sections on Conditional Expectation and the Law of the Iterated Logarithm from my Limit Theorems II notes.

Contents

Independence and Weak Law of Large Numbers
  1.1. Independence
  1.2. Weak Law of Large Numbers
Borel–Cantelli Lemmas
  2.3. Borel–Cantelli Lemmas
  2.4. Bounded Convergence Theorem
Central Limit Theorems
  3.5. The De Moivre–Laplace Theorem
  3.6. Weak Convergence
  3.7. Characteristic Functions
  3.8. The Moment Problem
  3.9. The Central Limit Theorem
  3.10. Other Facts about CLT Results
  3.11. Law of the Iterated Log
Moment Methods
  4.12. Basics of the Moment Method
  4.13. Poisson RVs
  4.14. Central Limit Theorem
Martingales
  5.15. Martingales
  5.16. Stopping Times
Uniform Integrability
  6.17. An 'Absolute Continuity' Property
  6.18. Definition of a UI Family
  6.19. Two Simple Sufficient Conditions for the UI Property
  6.20. UI Property of Conditional Expectations
  6.21. Convergence in Probability
  6.22. Elementary Proof of the Bounded Convergence Theorem
  6.23. Necessary and Sufficient Conditions for L^1 Convergence
UI Martingales
  7.24. UI Martingales
  7.25. Lévy's 'Upward' Theorem
  7.26. Martingale Proof of the Kolmogorov 0-1 Law
  7.27. Lévy's 'Downward' Theorem
  7.28. Martingale Proof of the Strong Law
  7.29. Doob's Submartingale Inequality
Kolmogorov's Three Series Theorem using L^2 Martingales
A Few Different Proofs of the LLN
  9.30. Truncation Lemma
  9.31. Truncation + K3 Theorem + Kronecker Lemma
  9.32. Truncation + Sparsification
  9.33. Lévy's Downward Theorem and the 0-1 Law
  9.34. Ergodic Theorem
  9.35. Non-convergence for Infinite Mean
Ergodic Theorems
  10.36. Definitions and Examples
  10.37. Birkhoff's Ergodic Theorem
  10.38. Recurrence
  10.39. A Subadditive Ergodic Theorem
  10.40. Applications
Large Deviations
  10.41. LDP for Finite Dimensional Spaces
  10.42. Cramér's Theorem
Bibliography

Independence and Weak Law of Large Numbers

These are notes from Chapter 2 of [2].

1.1. Independence

Definition. Events A, B are independent if P(A ∩ B) = P(A)P(B). Random variables X, Y are independent if P(X ∈ C, Y ∈ D) = P(X ∈ C)P(Y ∈ D) for all Borel sets C, D ⊂ R. Two σ-algebras F and G are independent if every A ∈ F and B ∈ G are independent.

Exercise. (2.1.1) i) Show that if X, Y are independent then σ(X) and σ(Y) are independent. ii) Show that if X is F-measurable, Y is G-measurable, and F, G are independent, then X and Y are independent.

Proof. This is immediate from the definitions.

Exercise. (2.1.2) i) Show that if A, B are independent then A^c and B are independent too. ii) Show that A, B are independent iff 1_A and 1_B are independent.

Proof. i) P(B) = P(A ∩ B) + P(A^c ∩ B) = P(A)P(B) + P(A^c ∩ B); rearrange. ii) Simple, using {1_A ∈ C} = A if 1 ∈ C (and = A^c otherwise).

Remark. By the above quick exercises, all of independence is defined by independence of σ-algebras, so we will view that as the central object.

Definition. σ-algebras F_1, F_2, ... are independent if for any finite index set I ⊂ N and any choice of sets A_i ∈ F_i for i ∈ I we have P(∩_{i∈I} A_i) = ∏_{i∈I} P(A_i). Random variables X_1, X_2, ... are independent if σ(X_1), σ(X_2), ... are independent. Events A_1, A_2, ... are independent if 1_{A_1}, 1_{A_2}, ... are independent.
Exercise. (2.1.3) Same as the previous exercise, but with more than two sets.

Example. (2.1.1) The usual example of three events which are pairwise independent but not independent, on the space of three fair coin flips.

1.1.1. Sufficient Conditions for Independence. We will work our way to Theorem 2.1.3, which is the main result of this subsection.

Definition. We call a collection of families of sets A_1, A_2, ..., A_n independent if any choice of sets A_i ∈ A_i over an index set i ∈ I ⊂ {1, ..., n} is independent. (Just like the definition for σ-algebras, only we don't require each A_i to be a σ-algebra.)

Lemma. (2.1.1) If we suppose that each A_i contains Ω, then the criterion for independence works with I = {1, ..., n}.

Proof. Putting A_k = Ω doesn't change the intersection, and it doesn't change the product since P(A_k) = 1.

Definition. A π-system is a collection A which is closed under intersections, i.e. A, B ∈ A ⟹ A ∩ B ∈ A. A λ-system is a collection L that satisfies: i) Ω ∈ L; ii) A, B ∈ L and A ⊂ B ⟹ B − A ∈ L; iii) A_n ∈ L and A_n ↑ A ⟹ A ∈ L.

Remark. (Mihai - from Wiki) An equivalent definition of a λ-system is: i) Ω ∈ L; ii) A ∈ L ⟹ A^c ∈ L; iii) A_1, A_2, ... ∈ L disjoint ⟹ ∪_{n=1}^∞ A_n ∈ L. In this form, the definition of a λ-system is more comparable to the definition of a σ-algebra, and we can see that it is strictly easier to be a λ-system than a σ-algebra (it only needs to be closed under disjoint unions rather than arbitrary countable unions). The first definition, presented by Durrett, is however more useful, since it is easier to check in practice!

Theorem. (2.1.2) (Dynkin's π-λ Theorem) If P is a π-system and L is a λ-system that contains P, then σ(P) ⊂ L.

Proof. In the appendix, apparently. Will come back to this when I do measure theory.

Theorem. (2.1.3) Suppose A_1, ..., A_n are independent and each A_i is a π-system. Then σ(A_1), ..., σ(A_n) are independent.

Proof. (You can basically reduce to the case n = 2.) Fix any A_2, ..., A_n in A_2, ..., A_n respectively, and let F = A_2 ∩ ... ∩ A_n. Let L = {A : P(A ∩ F) = P(A)P(F)}. We verify that L is a λ-system by using basic properties of P. For A ⊂ B both in L we have:

P((B − A) ∩ F) = P(B ∩ F) − P(A ∩ F) = P(B)P(F) − P(A)P(F) = P(B − A)P(F)

(Increasing limits are easy by continuity of probability.) By the π-λ theorem, σ(A_1) ⊂ L; since this works for any such F, we then have that σ(A_1), A_2, A_3, ..., A_n are independent. Iterating the argument n − 1 more times gives the desired result.

Remark. (Durrett) The reason the π-λ theorem is helpful here is that it is hard to check, for A, B ∈ L, that A ∩ B ∈ L or that A ∪ B ∈ L. However, it is easy to check that if A, B ∈ L with A ⊂ B then B − A ∈ L. The π-λ theorem converts these π- and λ-systems (which are easier to work with) into σ-algebras (which are harder to work with, but more useful).

Theorem. (2.1.4) In order for X_1, X_2, ..., X_n to be independent, it is sufficient that for all x_1, x_2, ..., x_n ∈ (−∞, ∞]:

P(X_1 ≤ x_1, ..., X_n ≤ x_n) = ∏_{i=1}^n P(X_i ≤ x_i)

Proof. Let A_i be the collection of sets of the form {X_i ≤ x_i}. It is easy to check that this is a π-system, so the result is a direct application of the previous theorem.

Exercise. (2.1.4) Suppose (X_1, ..., X_n) has density f(x_1, ..., x_n) and f can be written as a product g_1(x_1) · ... · g_n(x_n). Show that the X_i's are independent.

Proof. Let g̃_i(x_i) = c_i g_i(x_i), where c_i is chosen so that ∫ g̃_i = 1. One can verify that ∏ c_i = 1 from f = ∏ g_i and ∫ f = 1. Then integrate along the margins to see that each g̃_i is in fact a pdf for X_i. Can then apply Theorem 2.1.4 after replacing the g's by the g̃'s.

Exercise. (2.1.5) Same as 2.1.4 but on a discrete space, with a probability mass function instead of a probability density function.

Proof. Works the same, but with sums instead of integrals.
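Example 2.1.1 above can be checked by direct enumeration. A minimal sketch, using the standard choice of events on three fair coin flips (A = "flips 1 and 2 agree", B = "flips 2 and 3 agree", C = "flips 1 and 3 agree"; the events are the classical ones, not taken verbatim from these notes):

```python
from itertools import product

# Sample space: three fair coin flips; each of the 8 outcomes has probability 1/8.
omega = list(product([0, 1], repeat=3))

def prob(event):
    """Exact probability of an event (a predicate on outcomes) under the uniform measure."""
    return sum(1 for w in omega if event(w)) / len(omega)

A = lambda w: w[0] == w[1]   # flips 1 and 2 agree
B = lambda w: w[1] == w[2]   # flips 2 and 3 agree
C = lambda w: w[0] == w[2]   # flips 1 and 3 agree

# Pairwise independence: P of each pairwise intersection factors.
assert prob(lambda w: A(w) and B(w)) == prob(A) * prob(B) == 0.25
assert prob(lambda w: B(w) and C(w)) == prob(B) * prob(C) == 0.25
assert prob(lambda w: A(w) and C(w)) == prob(A) * prob(C) == 0.25

# Not (mutually) independent: any two of the events force the third,
# so the triple intersection has probability 1/4, not 1/8 = P(A)P(B)P(C).
assert prob(lambda w: A(w) and B(w) and C(w)) == 0.25
```

Enumerating the full sample space keeps every probability exact, so the failure of the triple product rule is a certainty rather than a sampling artifact.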
We will now prove that functions of independent random variables are still independent (to be made more precise in a bit).

Theorem. (2.1.5) Suppose F_{i,j}, 1 ≤ i ≤ n and 1 ≤ j ≤ m(i), are independent σ-algebras, and let G_i = σ(∪_j F_{i,j}). Then G_1, ..., G_n are independent.
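In particular, functions of disjoint blocks of independent random variables remain independent. On a small discrete space this can be verified exactly; a sketch under illustrative assumptions (the uniform product space and the particular functions f, g are my choices, not from the notes):

```python
from itertools import product
from fractions import Fraction

# Product space for two independent rolls X, Y, each uniform on {1, 2, 3, 4}.
# A uniform product measure makes the coordinates X and Y independent.
omega = [(x, y) for x, y in product(range(1, 5), repeat=2)]
P = Fraction(1, len(omega))

def prob(event):
    """Exact probability of an event (a predicate on outcomes)."""
    return sum(P for w in omega if event(w))

f = lambda w: w[0] % 2     # f(X): parity of the first roll
g = lambda w: w[1] >= 3    # g(Y): indicator that the second roll is large

# f(X) and g(Y) inherit independence from X and Y:
# the product rule holds for every pair of values.
for a in {f(w) for w in omega}:
    for b in {g(w) for w in omega}:
        joint = prob(lambda w: f(w) == a and g(w) == b)
        assert joint == prob(lambda w: f(w) == a) * prob(lambda w: g(w) == b)
```

Using Fraction keeps all probabilities exact, so the product rule is checked as an identity rather than up to floating-point error.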