Distribution Summaries


Discrete distributions

| name | P(k) = P(X = k) for k in range | mean | variance |
|------|-------------------------------|------|----------|
| uniform on {a, a+1, ..., b} | 1/(b − a + 1) | (a + b)/2 | [(b − a + 1)² − 1]/12 |
| Bernoulli (p) on {0, 1} | P(1) = p; P(0) = 1 − p | p | p(1 − p) |
| binomial (n, p) on {0, 1, ..., n} | C(n, k) p^k (1 − p)^(n−k) | np | np(1 − p) |
| Poisson (μ) on {0, 1, 2, ...} | e^(−μ) μ^k / k! | μ | μ |
| hypergeometric (n, N, G) on {0, ..., n} | C(G, k) C(N − G, n − k) / C(N, n) | nG/N | n(G/N)(1 − G/N)(N − n)/(N − 1) |
| geometric (p) on {1, 2, 3, ...} | (1 − p)^(k−1) p | 1/p | (1 − p)/p² |
| geometric (p) on {0, 1, 2, ...} | (1 − p)^k p | (1 − p)/p | (1 − p)/p² |
| negative binomial (r, p) on {0, 1, 2, ...} | C(k + r − 1, r − 1) p^r (1 − p)^k | r(1 − p)/p | r(1 − p)/p² |

Continuous distributions

| name | range | density f(x) for x in range | c.d.f. F(x) for x in range | mean | variance |
|------|-------|-----------------------------|----------------------------|------|----------|
| uniform (a, b) | (a, b) | 1/(b − a) | (x − a)/(b − a) | (a + b)/2 | (b − a)²/12 |
| normal (0, 1) | (−∞, ∞) | (1/√(2π)) e^(−x²/2) | Φ(x) | 0 | 1 |
| normal (μ, σ²) | (−∞, ∞) | (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)) | Φ((x − μ)/σ) | μ | σ² |
| exponential (λ) = gamma (1, λ) | (0, ∞) | λ e^(−λx) | 1 − e^(−λx) | 1/λ | 1/λ² |
| gamma (r, λ) | (0, ∞) | Γ(r)⁻¹ λ^r x^(r−1) e^(−λx) | 1 − e^(−λx) Σ_{k=0}^{r−1} (λx)^k/k! for integer r | r/λ | r/λ² |
| chi-square (n) = gamma (n/2, 1/2) | (0, ∞) | Γ(n/2)⁻¹ (1/2)^(n/2) x^(n/2 − 1) e^(−x/2) | as above for λ = 1/2, r = n/2 if n is even | n | 2n |
| Rayleigh | (0, ∞) | x e^(−x²/2) | 1 − e^(−x²/2) | √(π/2) | 2 − π/2 |
| beta (r, s) | (0, 1) | [Γ(r + s)/(Γ(r)Γ(s))] x^(r−1) (1 − x)^(s−1) | see Exercise 4.6.5 for integer r and s | r/(r + s) | rs/[(r + s)²(r + s + 1)] |
| Cauchy | (−∞, ∞) | 1/[π(1 + x²)] | 1/2 + (1/π) arctan(x) | undefined | undefined |
| arcsine = beta (1/2, 1/2) | (0, 1) | 1/[π√(x(1 − x))] | (2/π) arcsin(√x) | 1/2 | 1/8 |

Beta

Parameters: r > 0 and s > 0

Range: x ∈ [0, 1]

Density function:

f(x) = (1/B(r, s)) x^(r−1) (1 − x)^(s−1)

where

B(r, s) = ∫₀¹ x^(r−1) (1 − x)^(s−1) dx = Γ(r)Γ(s)/Γ(r + s)

is the beta function, and Γ(r) is the gamma function (see gamma distributions).

Cumulative distribution function: (Exercises in Section 4.6.) No simple general formula for r or s not an integer. See tables of the incomplete beta function. For integers r and s

P(X_(r,s) ≤ x) = Σ_{i=r}^{r+s−1} C(r + s − 1, i) x^i (1 − x)^(r+s−1−i)  (0 ≤ x ≤ 1)

Mean and standard deviation: (4.6)

E(X_(r,s)) = r/(r + s)    SD(X_(r,s)) = [1/(r + s)] √(rs/(r + s + 1))

Special cases:

• r = s = 1: the uniform [0, 1] distribution.
• r = s = 1/2: the arcsine distribution.

Sources and applications:

• Order statistics of uniform variables (4.6).
• Ratios of gamma variables (5.4).
• Bayesian inference for unknown probabilities.

Normal approximation:

• Good for large r and s.

Binomial

Parameters:
n = number of trials (n = 1, 2, ...)
p = probability of success on each trial (0 ≤ p ≤ 1)

Range: k ∈ {0, 1, ..., n}

Probability function: (2.1)

P(k) = P(S = k) = C(n, k) p^k (1 − p)^(n−k)  (k = 0, 1, ..., n)

where S = number of successes in n independent trials with probability p of success on each trial = X₁ + ... + Xₙ, where Xᵢ is the indicator of success on trial i.

Mean and standard deviation: (3.2, 3.3)

E(S) = μ = np    SD(S) = σ = √(np(1 − p))

Mode: (2.1) int(np + p)

Consecutive odds ratios: (2.1)

P(k)/P(k − 1) = [(n − k + 1)/k] · [p/(1 − p)]  (decreasing in k)

Special case: (1.3) Binomial (1, p) = Bernoulli (p), the distribution of the indicator of an event A with probability P(A) = p.

Normal approximation: (2.2, 2.3) If σ = √(np(1 − p)) is sufficiently large,

P(k) ≈ (1/σ) φ((k − μ)/σ)
P(a ≤ S ≤ b) ≈ Φ((b + 1/2 − μ)/σ) − Φ((a − 1/2 − μ)/σ)

where φ(z) is the standard normal density function and Φ(z) is the standard normal cumulative distribution function.

Poisson approximation: (2.4) If p is close to zero,

P(k) ≈ e^(−μ) μ^k / k!  where μ = np
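Both approximations are easy to check numerically. Here is a minimal Python sketch (not part of the original summaries; function names are illustrative, standard library only) comparing the exact binomial probabilities with the normal-curve and Poisson approximations:

```python
import math

def binom_pmf(n, p, k):
    # Exact: P(S = k) = C(n, k) p^k (1 - p)^(n - k)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_approx(n, p, k):
    # P(k) ~ (1/sigma) * phi((k - mu)/sigma), good when sigma is large
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    z = (k - mu) / sigma
    return math.exp(-z * z / 2) / (sigma * math.sqrt(2 * math.pi))

def poisson_approx(n, p, k):
    # P(k) ~ e^(-mu) mu^k / k! with mu = np, good when p is close to zero
    mu = n * p
    return math.exp(-mu) * mu**k / math.factorial(k)

print(binom_pmf(100, 0.5, 50), normal_approx(100, 0.5, 50))      # ~0.0796 vs ~0.0798
print(binom_pmf(1000, 0.005, 5), poisson_approx(1000, 0.005, 5))  # both ~0.175
```

The two test cases illustrate the two regimes named above: large σ for the normal curve, small p (with μ = np moderate) for the Poisson approximation.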
Exponential

Parameter: λ > 0, the rate of an exponential random variable T.

Range: t ∈ [0, ∞)

Density function: (4.2)

P(T ∈ dt)/dt = λ e^(−λt)  (t ≥ 0)

Cumulative distribution function: (4.2)

P(T ≤ t) = 1 − e^(−λt)  (t ≥ 0)

Often T is interpreted as a lifetime.

Survival function:

P(T > t) = e^(−λt)  (t ≥ 0)

Mean and standard deviation: (4.2)

E(T) = 1/λ    SD(T) = 1/λ

Interpretation of λ: λ = P(T ∈ dt | T > t)/dt is the constant hazard rate, or chance per unit time of death given survival to time t. See Section 4.3 for a discussion of non-constant hazard rates.

Characterizations:

• Only distribution with constant hazard rate.
• Only distribution with the memoryless property P(T > t + s | T > t) = P(T > s) for all s, t > 0.

Sources:

• Time until the next arrival in a Poisson process with rate λ.
• Approximation to the geometric (p) distribution for small p.
• Approximation to the beta (1, s) distribution for large s.
• Spacings and shortest spacings of uniform order statistics.

Gamma

Parameters: r > 0 (shape); λ > 0 (rate or inverse scale)

Range: t ∈ [0, ∞)

Density function: (4.2, 5.4)

P(T_(r,λ) ∈ dt)/dt = Γ(r)⁻¹ λ^r t^(r−1) e^(−λt)  (t ≥ 0)

where Γ(r) = ∫₀^∞ t^(r−1) e^(−t) dt is the gamma function. Note: Γ(r) = (r − 1)! for integer r.

Cumulative distribution function: (4.2) No formula for non-integer r. See tables of the incomplete gamma function. For integer r

P(T_(r,λ) ≤ t) = P(N_(t,λ) ≥ r) = 1 − Σ_{k=0}^{r−1} e^(−λt) (λt)^k / k!

where N_(t,λ) denotes the number of points up to time t in a Poisson process with rate λ, and has Poisson (λt) distribution.

Mean and standard deviation: (4.2)

E(T_(r,λ)) = r/λ    SD(T_(r,λ)) = √r/λ

Special cases:

• gamma (1, λ) is exponential (λ).
• gamma (n/2, 1/2) is chi-square (n), the distribution of the sum of the squares of n independent standard normals.

Sources:

• Sum of r independent exponential (λ) variables.
• Time until the rth arrival in a Poisson process with rate λ.
• Bayesian inference for unknown Poisson rates.
• Approximation to negative binomial (r, p) for small p.
• Approximation to beta (r, s) for large s.

Transformations: (Notation: X ~ F means X is a random variable with distribution F.)

Scaling: T ~ gamma (r, λ) ⟺ λT ~ gamma (r, 1)

Sums: For independent Tᵢ ~ gamma (rᵢ, λ), Σᵢ Tᵢ ~ gamma (Σᵢ rᵢ, λ)

Ratios: For independent T_(r,λ) and T_(s,λ),

T_(r,λ) / (T_(r,λ) + T_(s,λ)) ~ beta (r, s), independent of the sum T_(r,λ) + T_(s,λ) ~ gamma (r + s, λ)

Higher moments: For s > 0,

E[(T_(r,λ))^s] = Γ(r + s) / [Γ(r) λ^s]

Normal approximation: If r is sufficiently large, the distribution of the standardized gamma variable Z_(r,λ) = [T_(r,λ) − E(T_(r,λ))] / SD(T_(r,λ)) is approximately standard normal.
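The integer-r c.d.f. formula is just the Poisson process connection: T_(r,λ) ≤ t exactly when at least r arrivals have occurred by time t. A minimal Python sketch (illustrative names, standard library only, not from the original text) checks the closed form against a crude numerical integration of the density:

```python
import math

def gamma_pdf(t, r, lam):
    # gamma(r, lam) density: lam^r t^(r-1) e^(-lam t) / Gamma(r)
    return lam**r * t**(r - 1) * math.exp(-lam * t) / math.gamma(r)

def gamma_cdf_integer_r(t, r, lam):
    # P(T <= t) = P(N >= r) for N ~ Poisson(lam * t), integer r:
    # 1 - sum_{k=0}^{r-1} e^(-lam t) (lam t)^k / k!
    return 1.0 - sum(math.exp(-lam * t) * (lam * t)**k / math.factorial(k)
                     for k in range(r))

def gamma_cdf_numeric(t, r, lam, steps=200_000):
    # Crude midpoint-rule integration of the density, for comparison only
    h = t / steps
    return h * sum(gamma_pdf((i + 0.5) * h, r, lam) for i in range(steps))

r, lam, t = 3, 2.0, 1.5
print(gamma_cdf_integer_r(t, r, lam))  # ~0.5768
print(gamma_cdf_numeric(t, r, lam))    # agrees to several decimals
```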
Geometric and Negative Binomial

Geometric

Parameter: p = success probability.

Range: n ∈ {1, 2, ...}

Definition: Distribution of the waiting time T to the first success in independent trials with probability p of success on each trial.

Probability function: (1.6, 3.4)

P(n) = P(T = n) = (1 − p)^(n−1) p  (n = 1, 2, ...)

Let F = T − 1 denote the number of failures before the first success. The distribution of F is the geometric distribution on {0, 1, 2, ...}.

Tail probabilities:

P(T > n) = P(first n trials are failures) = (1 − p)^n

Mean and standard deviation: (3.4)

E(T) = 1/p    SD(T) = √(1 − p)/p

Negative Binomial

Parameters: p = success probability; r = number of successes.

Range: n ∈ {0, 1, 2, ...}

Definition: Distribution of the number of failures F_r before the rth success in Bernoulli trials with probability p of success on each trial.

Probability function: (3.4)

P(F_r = n) = P(T_r = n + r) = C(n + r − 1, r − 1) p^r (1 − p)^n  (n = 0, 1, ...)

where T_r is the waiting time to the rth success. The distribution of T_r = F_r + r is the negative binomial distribution on {r, r + 1, ...}.

Mean and standard deviation: (3.4)

E(F_r) = r(1 − p)/p    SD(F_r) = √(r(1 − p))/p

Sum of geometrics: The sum of r independent geometric (p) random variables on {0, 1, 2, ...} has negative binomial (r, p) distribution.

Hypergeometric

Parameters:
n = sample size
N = total population size
G = number of good elements in the population

Range: g ∈ {0, 1, ..., n}

Definition: The hypergeometric (n, N, G) distribution is the distribution of the number S of good elements in a random sample of size n without replacement from a population of size N with G good elements and B = N − G bad ones.

Probability function: (2.5)

P(g) = P(S = g) = C(n, g) (G)_g (B)_b / (N)_n = C(G, g) C(B, b) / C(N, n)

where b = n − g and (N)_n = N(N − 1)···(N − n + 1). This is the chance of getting g good elements and b bad elements in the random sample of size n.
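As a quick sanity check of this probability function (not part of the original summaries; names are illustrative, standard library only), the probabilities should sum to 1 and the mean should equal nG/N, as in the discrete table above:

```python
import math

def hypergeom_pmf(g, n, N, G):
    # P(S = g) = C(G, g) C(B, n - g) / C(N, n), with B = N - G bad elements
    B = N - G
    return math.comb(G, g) * math.comb(B, n - g) / math.comb(N, n)

n, N, G = 10, 50, 15
probs = [hypergeom_pmf(g, n, N, G) for g in range(n + 1)]
print(sum(probs))                               # 1.0, up to rounding
print(sum(g * p for g, p in enumerate(probs)))  # mean n*G/N = 3.0
```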