Numerics 2D INTERPOLATION 3-2, 3-125 2-Level Full Factorial Designs 6

Total pages: 16

File type: PDF, size: 1020 KB

Numerics
2D INTERPOLATION 3-2, 3-125; 2-level full factorial designs 6-12

A
ABS 6-1, 6-3; absolute value 2-24; ALPHA 5-3; analytic derivatives 3-39; ARCCOS 7-1, 7-2; arccosecant 7-10; ARCCOSH 7-1, 7-4; arccosine 7-2; ARCCOT 7-1, 7-6; arccotangent 7-6; ARCCOTH 7-1, 7-8; ARCCSC 7-1, 7-10; ARCCSCH 7-1, 7-12; ARCSEC 7-1, 7-14; arcsecant 7-14; ARCSECH 7-1, 7-16; ARCSIN 7-1, 7-18; arcsine 7-18; ARCSINH 7-1, 7-20; ARCTAN 7-1, 7-22; arctangent 7-22; ARCTANH 7-1, 7-24; AUTOCORRELATION 2-2, 2-3; AUTOCOVARIANCE 2-2, 2-4; AVERAGE ABSOLUTE DEVIATION 2-1, 2-5

B
base 10 logarithm 6-47; base 2 logarithm 6-45; BESS0 6-2, 6-5; BESS1 6-2, 6-5; Bessel function 6-5; BESSEL functions 6-5; BETA 5-3, 6-1, 6-8; beta cumulative distribution function 8-11; beta distribution 8-1; beta function 6-8; beta percent point function 8-15; beta probability density function 8-13; BETA RANDOM NUMBERS 5-3; BETAI 6-1, 6-10; BETCDF 8-1, 8-11; BETPDF 8-1, 8-13; BETPPF 8-1, 8-15; BILINEAR INTERPOLATION 3-2, 3-4; binary coded 3-13; binary pattern 6-12; BINCDF 8-2, 8-17; binomial cumulative distribution function 8-17; binomial distribution 8-2; binomial percent point function 8-21; binomial probability density function 8-19; BINOMIAL RANDOM NUMBERS 5-4; BINPAT 6-2, 6-12; BINPDF 8-2, 8-19; BINPPF 8-2, 8-21; biplot 4-68, 4-69; Birnbaum-Saunders distribution 8-91; bisection algorithm 3-95; bisquare transformation 2-6; BIVARIATE INTERPOLATION 3-2, 3-6; bivariate normal cumulative distribution function 8-23; bivariate normal distribution 8-1; bivariate normal probability density function 8-25; BIWEIGHT 2-2, 2-6; biweight transformation 2-6; bootstrap 2-2, 2-9, 2-11, 2-25; bootstrap analysis 2-9, 2-11; BOOTSTRAP INDEX 2-2, 2-9; BOOTSTRAP SAMPLE 2-2, 2-9, 2-11, 2-53; bootstrap sample 2-11; BVNCDF 8-1, 8-23; BVNPDF 8-1, 8-25

C
canonical correlation analysis 4-3; CANTOR NUMBERS 3-1, 3-8; Cantor sets 3-8; capability index 2-17, 2-19; CAUCDF 8-2, 8-27; Cauchy cumulative distribution function 8-27; Cauchy distribution 8-2; Cauchy percent point function 8-31; Cauchy probability density function 8-29; CAUCHY RANDOM NUMBERS 5-2; Cauchy sparsity function 8-33; CAUPDF 8-2, 8-29; CAUPPF 8-2, 8-31; CAUSF 8-2, 8-33; censored 3-124; characteristic equation 4-20, 4-24; characteristic value 4-20, 4-24; CHEB0 6-2, 6-14; CHEB1 6-2, 6-14; CHEB2 6-2, 6-14; CHEB3 6-2, 6-14; CHEB4 6-2, 6-14; CHEB5 6-2, 6-15; CHEB6 6-2, 6-15; CHEB7 6-2, 6-15; CHEB8 6-2, 6-15; CHEB9 6-2, 6-15; CHEB10 6-2, 6-15; Chebychev polynomial 6-14; Chebychev's theorem 2-17, 2-19; chi-square cumulative distribution function 8-35; Chi-square distribution 8-2; chi-square distribution 8-163; chi-square percent point function 8-39; chi-square probability density function 8-37; CHI-SQUARED RANDOM NUMBERS 5-3; CHOLESKY DECOMPOSITION 4-1, 4-3; Cholesky decomposition 4-3, 4-71, 4-73; Cholesky square root 4-3; CHSCDF 8-2, 8-35; CHSPDF 8-2, 8-37; CHSPPF 8-2, 8-39; classical adjoint 4-11; COCODE 3-1, 3-10; COCOPY 3-1, 3-11; CODE 3-1, 3-12; CODE2 3-1, 3-13; CODE4 3-1, 3-14; CODE8 3-1, 3-15; CODEH 3-16; CODEN 3-17; COEFFICIENT OF VARIATION 2-13; cofactors 4-11, 4-13; command driven 1-1; COMOVEMENT 2-2, 2-14; comovement coefficient 2-14; complementary error function 6-22; complementary incomplete gamma function 6-31; COMPLEX ADDITION 3-2, 3-18; COMPLEX CONJUGATE 3-19; COMPLEX CONJUGATES 3-2; COMPLEX DIVISION 3-2, 3-20; COMPLEX EXPONENTIATION 3-2, 3-21; COMPLEX MULTIPLICATION 3-2, 3-22; COMPLEX ROOTS 3-2, 3-23; COMPLEX SQUARE ROOT 3-27; COMPLEX SQUARE ROOTS 3-2; COMPLEX SUBTRACTION 3-2, 3-28; complex variable 3-18, 3-19, 3-20, 3-21, 3-22, 3-23, 3-27, 3-28; condition number 4-3, 4-18, 4-47; constraints 4-44; CONVOLUTION 3-2, 3-29; CORRELATION 2-2, 2-15; correlation 2-2, 4-57; correlation coefficient 2-15, 4-7; CORRELATION MATRIX 4-1, 4-7; COS 7-1, 7-26; cosecant 7-34; COSH 7-1, 7-28; cosine 7-26; COSINE TRANSFORM 3-2, 3-31; COT 7-1, 7-30; cotangent 7-30; COTH 7-1, 7-32; COUNT 2-49; COVARIANCE 2-2, 2-16; covariance 2-2, 4-57; CP 2-2, 2-17; CPK 2-2, 2-19; cross product 3-118; CSC 7-1, 7-34; CSCH 7-1, 7-36; CSQRT 3-27; cubic spline interpolation 3-56; cumulative distribution function 8-4; CUMULATIVE INTEGRAL 3-2, 3-33; CUMULATIVE PRODUCT 3-1, 3-34; CUMULATIVE SUM 3-1

D
DATA 3-1; data analysis 1-1; DECILE 2-2, 2-21; decimal to octal 6-18; DECOCT 6-1, 6-18; DECONVOLUTION 3-2; definite integral 3-55; DERIVATIVE 3-2, 3-39; derivative at specific points 3-39; derivative function 3-39; derivatives 3-96; determinant 4-11, 4-13, 4-18, 4-32; DEXCDF 8-2, 8-41; DEXPDF 8-2, 8-43; DEXPPF 8-2, 8-45; DEXSF 8-2, 8-47; DIAGONAL MATRIX 4-2, 4-8; differential equations 3-96; DIM 6-1, 6-19; DIRECTION ANGLES 3-122; DIRECTION COSINES 3-122; DISCDF 8-2, 8-49; discrete convolution 3-29, 3-37; discrete cosine transform 3-31; discrete Fourier transform 3-43, 3-47, 3-58, 3-63; discrete inverse Fourier transforms 3-43, 3-47, 3-58, 3-63; discrete sine transform 3-110; discrete uniform cumulative distribution 8-49; discrete uniform distribution 8-2; discrete uniform percent point function 8-53; discrete uniform probability density function 8-51; DISCRETE UNIFORM RANDOM NUMBERS 5-3; DISPDF 8-2; DISPPF 8-2, 8-53; distance 3-119; DISTINCT 3-1, 3-42; DNFCDF 8-2, 8-55; DNFPPF 8-2, 8-57; DNTCDF 8-2, 8-59; DNTPPF 8-2, 8-61; dot product 3-120; double exponential cumulative distribution function 8-41; Double exponential distribution 8-2; double exponential percent point function 8-45; double exponential probability density function 8-43; double exponential sparsity function 8-47; doubly non-central F cumulative distribution function 8-55; doubly non-central F distribution 8-2; doubly non-central F percent point function 8-57; doubly non-central t cumulative distribution function 8-59; doubly non-central t distribution 8-2

E
eigenvalue 4-20, 4-24, 4-49, 4-50, 4-57; eigenvector 4-20, 4-24, 4-57; EISPACK 4-20; EMCDF 8-203; ERF 6-1, 6-20; ERFC 6-1, 6-22; Erlang distribution 8-95, 8-97, 8-99; error function 6-20; EV1CDF 8-2, 8-63; EV1PDF 8-2, 8-65; EV1PPF 8-2, 8-67; EV2CDF 8-2, 8-69; EV2PDF 8-2, 8-71; EV2PPF 8-2, 8-73; EXP 6-1, 6-24; EXPCDF 8-2, 8-75; EXPECTED LOSS 2-2, 2-23; exponential cumulative distribution function 8-75; Exponential distribution 8-2; exponential function 6-24; exponential percent point function 8-79; exponential probability density function 8-77; EXPONENTIAL RANDOM NUMBERS 5-2; exponential sparsity function 8-81; EXPPDF 8-2, 8-77; EXPPPF 8-2, 8-79; EXPSF 8-2, 8-81; EXTREME 2-1, 2-24; Extreme value analysis 2-36; EXTREME VALUE TYPE 1 RANDOM NUMBERS 5-2; EXTREME VALUE TYPE 2 RANDOM NUMBERS 5-3; extreme value type I cumulative distribution function 8-63; Extreme Value Type I distribution 8-2; extreme value type I percent point function 8-67; extreme value type I probability density function 8-65; extreme value type II cumulative distribution function 8-69; Extreme Value Type II distribution 8-2; extreme value type II percent point function 8-73; extreme value type II probability density function 8-71; extremes 2-1

F
F cumulative distribution function 8-83; F distribution 8-2; F percent point function 8-87; F probability density function 8-85; F RANDOM NUMBERS 5-3; factorial 6-27, 6-49; fast Fourier transform 3-43; Fatigue Life distribution 8-3; FATIGUE LIFE RANDOM NUMBERS 5-4; fatigue-life cumulative distribution function 8-89; fatigue-life percent point function 8-93; fatigue-life probability density function 8-91; FCDF 8-2, 8-83; FFT 3-2, 3-43; Fibonnacci generator 5-2; FIBONNACCI NUMBERS 3-1, 3-46; first order derivatives 3-40; first order differential equations 3-96; Fisher's discriminant analysis 4-77; FIT 1-1; fit 1-1, 2-6, 2-59; FLCDF 8-3, 8-89; FLPDF 8-3, 8-91; FLPPF 8-3, 8-93; FOURIER TRANSFORM 3-2, 3-47; Fourier transform 3-43, 3-47, 3-58, 3-63; FPDF 8-2, 8-85; FPPF 8-3, 8-87; FRACT 6-1, 6-26; FRACTAL 3-2, 3-50; fractional part of a number 6-26; FRECHET 5-4; Frechet distribution 8-2, 8-71; frequencies of distinct values 3-53; FREQUENCY 3-1, 3-53; frequency data 2-65, 2-66, 2-68; frequency domain 3-31, 3-43, 3-47, 3-58, 3-63, 3-110; functions 1-1

G
GAMCDF 8-3, 8-95; GAMMA 5-3, 6-1, 6-27; gamma cumulative distribution function 8-95; Gamma distribution 8-3; gamma function 6-27, 6-49; gamma percent point function 8-99; gamma probability density function 8-97; GAMMA RANDOM NUMBERS 5-3; GAMMAI 6-1, 6-29; GAMMAIC 6-1, 6-31; GAMMAIP 6-2, 6-33; GAMMAR 6-2, 6-35; GAMPDF 8-3, 8-97; GAMPPF 8-3, 8-99; Gaussian quadrature 3-33; generalized Pareto cumulative distribution function 8-107, 8-109, 8-111; Generalized Pareto distribution 8-3; GENERALIZED PARETO RANDOM NUMBERS 5-4; GENERATOR MULTIPLICATION 3-54; GEOCDF 8-3, 8-101; geometric cumulative distribution function 8-101; Geometric distribution 8-3; geometric percent point function 8-105; geometric probability density function 8-103; GEOMETRIC RANDOM NUMBERS 5-4; GEOPDF 8-3, 8-103; GEOPPF 8-3, 8-105; GEPCDF 8-3, 8-107; GEPPDF 8-3, 8-109; GEPPPF 8-111; graphics 1-1; GUMBEL 5-3; Gumbel distribution 8-2, 8-65

H
half-normal cumulative distribution function 8-113; Half-Normal distribution 8-3; half-normal percent point function 8-117; half-normal probability density function 8-115; HALFNORMAL RANDOM NUMBERS 5-2; HFNCDF 8-3, 8-113; HFNPDF 8-3, 8-115; HFNPPF 8-3, 8-117; higher order derivatives 3-40; hinge coded 3-16; HYPCDF 8-3, 8-119; hyperbolic arccosecant 7-12; hyperbolic arccosine 7-4; hyperbolic arccotangent 7-8; hyperbolic arcsecant 7-16; hyperbolic arcsine 7-20; hyperbolic arctangent 7-24; hyperbolic cosecant 7-36; hyperbolic cosine 7-28; hyperbolic cotangent 7-32; hyperbolic secant 7-40; hyperbolic sine 7-44; hyperbolic tangent 7-48; hypergeometric cumulative distribution function 8-119; hypergeometric distribution 8-3

J
Julia set 6-39

K
K 5-4; Koch curve 3-50; Koch snowflake 3-50; Kruskal-Wallis 3-93; KURTOSIS 2-1, 2-26

L
LAMBDA 5-3; LAMCDF 8-5, 8-131; LAMPDF 8-5, 8-133; LAMPPF 8-5, 8-135; LAMSF 8-5, 8-137; Laplace distribution 8-2, 8-43; Leigh-Perlman comovement coefficient
Recommended publications
  • Modelling Censored Losses Using Splicing: A Global Fit Strategy with Mixed Erlang and Extreme Value Distributions
    Modelling censored losses using splicing: a global fit strategy with mixed Erlang and extreme value distributions. Tom Reynkens (*, a), Roel Verbelen (b), Jan Beirlant (a, c), and Katrien Antonio (b, d). (a) LStat and LRisk, Department of Mathematics, KU Leuven, Celestijnenlaan 200B, 3001 Leuven, Belgium. (b) LStat and LRisk, Faculty of Economics and Business, KU Leuven, Naamsestraat 69, 3000 Leuven, Belgium. (c) Department of Mathematical Statistics and Actuarial Science, University of the Free State, P.O. Box 339, Bloemfontein 9300, South Africa. (d) Faculty of Economics and Business, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands. August 14, 2017. arXiv:1608.01566v4 [stat.ME]. Abstract: In risk analysis, a global fit that appropriately captures the body and the tail of the distribution of losses is essential. Modelling the whole range of the losses using a standard distribution is usually very hard and often impossible due to the specific characteristics of the body and the tail of the loss distribution. A possible solution is to combine two distributions in a splicing model: a light-tailed distribution for the body, which covers light and moderate losses, and a heavy-tailed distribution for the tail, to capture large losses. We propose a splicing model with a mixed Erlang (ME) distribution for the body and a Pareto distribution for the tail. This combines the flexibility of the ME distribution with the ability of the Pareto distribution to model extreme values. We extend our splicing approach to censored and/or truncated data. Relevant examples of such data can be found in financial risk analysis. We illustrate the flexibility of this splicing model using practical examples from risk measurement.
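The body-tail splicing idea in the abstract can be sketched numerically. The paper uses a mixed Erlang body; as a simpler stand-in, the sketch below splices a truncated exponential body (mass w up to threshold t) with a Pareto tail (mass 1 - w) and checks that the result is a proper density. All parameter values are illustrative, not from the paper.

```python
import math

def spliced_pdf(x, w=0.7, rate=1.0, t=2.0, alpha=2.5):
    """Spliced density: truncated-exponential body on [0, t] carrying mass w,
    Pareto tail on (t, inf) carrying mass 1 - w. The paper uses a mixed
    Erlang body; a single exponential keeps the sketch short."""
    if x < 0:
        return 0.0
    if x <= t:
        body_mass = 1.0 - math.exp(-rate * t)        # normalizing constant
        return w * rate * math.exp(-rate * x) / body_mass
    return (1.0 - w) * alpha * t ** alpha / x ** (alpha + 1)

def integrate(f, a, b, n=200_000):
    """Midpoint rule (avoids evaluating exactly at the splice point)."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

total = integrate(spliced_pdf, 0.0, 2.0) + integrate(spliced_pdf, 2.0, 5000.0)
print(round(total, 3))  # ≈ 1.0
```

Because each component is normalized on its own region, the spliced density integrates to one for any weight w in (0, 1).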
  • ACTS 4304 FORMULA SUMMARY, Lesson 1: Basic Probability
    ACTS 4304 FORMULA SUMMARY, Lesson 1: Basic Probability.

    Probability functions: $F(x) = \Pr(X \le x)$; $S(x) = 1 - F(x)$; $f(x) = \frac{dF(x)}{dx}$; $H(x) = -\ln S(x)$; $h(x) = \frac{dH(x)}{dx} = \frac{f(x)}{S(x)}$.

    Functions of random variables: expected value $E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx$; n-th raw moment $\mu'_n = E[X^n]$; n-th central moment $\mu_n = E[(X - \mu)^n]$; variance $\sigma^2 = E[(X - \mu)^2] = E[X^2] - \mu^2$; skewness $\gamma_1 = \frac{\mu_3}{\sigma^3} = \frac{\mu'_3 - 3\mu'_2\mu + 2\mu^3}{\sigma^3}$; kurtosis $\gamma_2 = \frac{\mu_4}{\sigma^4} = \frac{\mu'_4 - 4\mu'_3\mu + 6\mu'_2\mu^2 - 3\mu^4}{\sigma^4}$; moment generating function $M(t) = E[e^{tX}]$; probability generating function $P(z) = E[z^X]$.

    More concepts: the standard deviation ($\sigma$) is the positive square root of the variance; the coefficient of variation is $CV = \sigma/\mu$; the 100p-th percentile $\pi$ is any point satisfying $F(\pi-) \le p$ and $F(\pi) \ge p$ (if $F$ is continuous, it is the unique point satisfying $F(\pi) = p$); the median is the 50th percentile and the n-th quartile is the 25n-th percentile; the mode is the $x$ which maximizes $f(x)$; $M_X^{(n)}(0) = E[X^n]$, where $M^{(n)}$ is the n-th derivative; $P_X^{(n)}(0)/n! = \Pr(X = n)$; $P_X^{(n)}(1)$ is the n-th factorial moment of $X$.

    Bayes' theorem: $\Pr(A \mid B) = \frac{\Pr(B \mid A)\Pr(A)}{\Pr(B)}$ and $f_X(x \mid y) = \frac{f_Y(y \mid x) f_X(x)}{f_Y(y)}$.

    Law of total probability: if $\{B_i\}$ is a set of exhaustive ($\Pr(\cup_i B_i) = 1$) and mutually exclusive ($\Pr(B_i \cap B_j) = 0$ for $i \ne j$) events, then for any event $A$, $\Pr(A) = \sum_i \Pr(A \cap B_i) = \sum_i \Pr(B_i)\Pr(A \mid B_i)$. Correspondingly, for continuous distributions, $\Pr(A) = \int \Pr(A \mid x) f(x)\,dx$.

    Conditional expectation formula: $E_X[X] = E_Y[E_X[X \mid Y]]$.

    Lesson 2: Parametric Distributions. Forms of probability
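The raw-to-central moment identities for skewness and kurtosis can be checked on a concrete case. For an Exp(1) random variable the raw moments are E[X^n] = n!, so the formula summary's expressions should give skewness 2 and kurtosis 9, both known properties of the exponential distribution:

```python
import math

# Raw moments of an Exp(1) random variable: E[X^n] = n!
raw = {n: math.factorial(n) for n in range(1, 5)}
mu = raw[1]                      # mean
sigma2 = raw[2] - mu ** 2        # variance from raw moments
sigma = math.sqrt(sigma2)

# Skewness and kurtosis via the raw-moment expansions in the summary:
skewness = (raw[3] - 3 * raw[2] * mu + 2 * mu ** 3) / sigma ** 3
kurtosis = (raw[4] - 4 * raw[3] * mu + 6 * raw[2] * mu ** 2 - 3 * mu ** 4) / sigma ** 4
print(skewness, kurtosis)  # 2.0 9.0
```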
  • Identifying Probability Distributions of Key Variables in Sow Herds
    Iowa State University Capstones, Theses and Creative Components Dissertations Summer 2020 Identifying Probability Distributions of Key Variables in Sow Herds Hao Tong Iowa State University Follow this and additional works at: https://lib.dr.iastate.edu/creativecomponents Part of the Veterinary Preventive Medicine, Epidemiology, and Public Health Commons Recommended Citation Tong, Hao, "Identifying Probability Distributions of Key Variables in Sow Herds" (2020). Creative Components. 617. https://lib.dr.iastate.edu/creativecomponents/617 This Creative Component is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at Iowa State University Digital Repository. It has been accepted for inclusion in Creative Components by an authorized administrator of Iowa State University Digital Repository. For more information, please contact [email protected]. 1 Identifying Probability Distributions of Key Variables in Sow Herds Hao Tong, Daniel C. L. Linhares, and Alejandro Ramirez Iowa State University, College of Veterinary Medicine, Veterinary Diagnostic and Production Animal Medicine Department, Ames, Iowa, USA Abstract Figuring out what probability distribution your data fits is critical for data analysis as statistical assumptions must be met for specific tests used to compare data. However, most studies with swine populations seldom report information about the distribution of the data. In most cases, sow farm production data are treated as having a normal distribution even when they are not. We conducted this study to describe the most common probability distributions in sow herd production data to help provide guidance for future data analysis. In this study, weekly production data from January 2017 to June 2019 were included involving 47 different sow farms.
  • Multivariate Phase Type Distributions - Applications and Parameter Estimation
    Downloaded from orbit.dtu.dk on: Sep 28, 2021. Multivariate phase type distributions - Applications and parameter estimation. Meisch, David. Publication date: 2014. Document version: Publisher's PDF, also known as Version of Record. Link back to DTU Orbit. Citation (APA): Meisch, D. (2014). Multivariate phase type distributions - Applications and parameter estimation. Technical University of Denmark. DTU Compute PHD-2014 No. 331. General rights: Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. Users may download and print one copy of any publication from the public portal for the purpose of private study or research. You may not further distribute the material or use it for any profit-making activity or commercial gain. You may freely distribute the URL identifying the publication in the public portal. If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim. Ph.D. Thesis: Multivariate phase type distributions - Applications and parameter estimation. David Meisch, DTU Compute, Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby. PHD-2014-331. Preface: This thesis has been submitted as a partial fulfilment of the requirements for the Danish Ph.D. degree at the Technical University of Denmark (DTU). The work has been carried out during the period from June 1st, 2010, to January 31st, 2014, in the Section of Statistics in the Department of Applied Mathematics and Computer Science at DTU.
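The abstract above gives no formulas, but the standard (univariate) phase-type CDF is F(t) = 1 - α exp(Tt) 1, with initial probability vector α and sub-generator matrix T. As a minimal sketch, an Erlang-2 distribution is represented below as a two-phase phase-type law and compared against its closed-form CDF; the matrix exponential is a plain Taylor series, which is adequate for this tiny, well-scaled example.

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_exp(A, terms=40):
    """Matrix exponential by truncated Taylor series (fine for small,
    well-scaled matrices like the one below)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

def phase_type_cdf(alpha, T, t):
    """F(t) = 1 - alpha * exp(T t) * 1 for a phase-type distribution."""
    n = len(T)
    E = mat_exp([[T[i][j] * t for j in range(n)] for i in range(n)])
    return 1.0 - sum(alpha[i] * E[i][j] for i in range(n) for j in range(n))

# Erlang-2 with rate 1 as a phase-type law: start in phase 1, pass through
# phase 2, then absorb.
lam, t = 1.0, 2.0
alpha = [1.0, 0.0]
T = [[-lam, lam], [0.0, -lam]]
val = phase_type_cdf(alpha, T, t)
closed_form = 1.0 - math.exp(-lam * t) * (1.0 + lam * t)   # Erlang-2 CDF
print(abs(val - closed_form) < 1e-9)  # True
```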
  • Handbook on Probability Distributions
    Handbook on probability distributions. R-forge distributions Core Team, 2009-2010. An R-powered R-forge project, typeset in LaTeX (TeXShop on Mac OS). Contents: Introduction (4). I Discrete distributions (6): 1 Classic discrete distributions (7); 2 Not so-common discrete distributions (27). II Continuous distributions (34): 3 Finite support distributions (35); 4 The Gaussian family (47); 5 Exponential distribution and its extensions (56); 6 Chi-squared distribution and related extensions (75); 7 Student and related distributions (84); 8 Pareto family (88); 9 Logistic distribution and related extensions (108); 10 Extreme Value Theory distributions (111). III Multivariate and generalized distributions (116): 11 Generalization of common distributions (117); 12 Multivariate distributions (133); 13 Misc (135). Conclusion (137); Bibliography (137); A Mathematical tools (141). Introduction: This guide is intended to provide a reasonably exhaustive view of probability distributions. It is constructed in chapters of distribution families, with a section for each distribution. Each section focuses on the triptych definition - estimation - application. The ultimate bibles for probability distributions are Wimmer & Altmann (1999), which lists 750 univariate discrete distributions, and Johnson et al. (1994), which details continuous distributions. In the appendix, we recall the basics of probability distributions as well as "common" mathematical functions; cf. section A.2.
    For all distributions, we use the following notation: X, a random variable following a given distribution; x, a realization of this random variable; f, the density function (if it exists); F, the (cumulative) distribution function; P(X = k), the mass probability function at k; M, the moment generating function (if it exists); G, the probability generating function (if it exists); φ, the characteristic function (if it exists). Finally, all graphics are done with the open source statistical software R and its numerous packages available on the Comprehensive R Archive Network (CRAN).
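The notation list above (f, F, M, and the quantile function) can be exercised on the exponential distribution, for which all of them have closed forms. A small sanity check, recovering the mean from the MGF by a central difference:

```python
import math

# Exponential(1) gives closed forms for the handbook's notation:
f = lambda x: math.exp(-x)            # f: density function
F = lambda x: 1.0 - math.exp(-x)      # F: (cumulative) distribution function
ppf = lambda p: -math.log(1.0 - p)    # quantile function, the inverse of F
M = lambda t: 1.0 / (1.0 - t)         # M: moment generating function (t < 1)

# F and its inverse round-trip:
assert abs(F(ppf(0.95)) - 0.95) < 1e-12

# E[X] = M'(0), approximated by a central difference:
h = 1e-6
mean = (M(h) - M(-h)) / (2 * h)
print(round(mean, 6))  # 1.0
```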
  • CONTINUOUS DISTRIBUTIONS Laplace Transform
    J. Virtamo, 38.3143 Queueing Theory / Continuous Distributions.

    CONTINUOUS DISTRIBUTIONS: Laplace transform (Laplace-Stieltjes transform).

    Definition. The Laplace transform of a non-negative random variable $X \ge 0$ with probability density function $f(x)$ is defined as $f^*(s) = \int_0^\infty e^{-st} f(t)\,dt = E[e^{-sX}] = \int_0^\infty e^{-st}\,dF(t)$, also denoted as $\mathcal{L}_X(s)$. Mathematically it is the Laplace transform of the pdf function. In dealing with continuous random variables the Laplace transform has the same role as the generating function has in the case of discrete random variables: if $X$ is a discrete integer-valued ($\ge 0$) r.v., then $f^*(s) = \mathcal{G}(e^{-s})$.

    Laplace transform of a sum. Let $X$ and $Y$ be independent random variables with L-transforms $f_X^*(s)$ and $f_Y^*(s)$. Then $f_{X+Y}^*(s) = E[e^{-s(X+Y)}] = E[e^{-sX} e^{-sY}] = E[e^{-sX}]\,E[e^{-sY}]$ (by independence) $= f_X^*(s)\,f_Y^*(s)$.

    Calculating moments with the aid of the Laplace transform. By differentiation one sees $f^{*\prime}(s) = \frac{d}{ds} E[e^{-sX}] = E[-X e^{-sX}]$. Similarly, the n-th derivative is $f^{*(n)}(s) = \frac{d^n}{ds^n} E[e^{-sX}] = E[(-X)^n e^{-sX}]$. Evaluating these at $s = 0$ one gets $E[X] = -f^{*\prime}(0)$, $E[X^2] = +f^{*\prime\prime}(0)$, ..., $E[X^n] = (-1)^n f^{*(n)}(0)$.

    Laplace transform of a random sum. Consider the random sum $Y = X_1 + \cdots + X_N$, where the $X_i$ are i.i.d.
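The moment formula E[X^n] = (-1)^n f*^(n)(0) can be verified numerically. For an Exp(λ) variable the transform is f*(s) = λ/(λ + s), so E[X] = 1/λ and E[X²] = 2/λ². The sketch below approximates the derivatives at s = 0 with central finite differences:

```python
import math

def lt_exponential(s, lam=2.0):
    """Laplace transform f*(s) = lam / (lam + s) of the Exp(lam) density."""
    return lam / (lam + s)

def moment_from_lt(lt, n, h=1e-3):
    """E[X^n] = (-1)^n * (d^n/ds^n) f*(s) at s = 0, with the derivative
    approximated by an n-th order central finite difference."""
    deriv = sum((-1) ** k * math.comb(n, k) * lt((n / 2 - k) * h)
                for k in range(n + 1)) / h ** n
    return (-1) ** n * deriv

lam = 2.0
m1 = moment_from_lt(lambda s: lt_exponential(s, lam), 1)
m2 = moment_from_lt(lambda s: lt_exponential(s, lam), 2)
print(round(m1, 4), round(m2, 4))  # 0.5 0.5  (= 1/lam and 2/lam**2)
```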
  • Field Guide to Continuous Probability Distributions
    Field Guide to Continuous Probability Distributions. Gavin E. Crooks, v 1.0.0, 2019. Copyright © 2010-2019 Gavin E. Crooks. ISBN: 978-1-7339381-0-5. http://threeplusone.com/fieldguide. Berkeley Institute for Theoretical Sciences (BITS). Typeset on 2019-04-10 with XeTeX version 0.99999; fonts: Trump Mediaeval (text), Euler (math). Preface: The search for GUD. A common problem is that of describing the probability distribution of a single, continuous variable. A few distributions, such as the normal and exponential, were discovered in the 1800's or earlier. But about a century ago the great statistician, Karl Pearson, realized that the known probability distributions were not sufficient to handle all of the phenomena then under investigation, and set out to create new distributions with useful properties. During the 20th century this process continued with abandon and a vast menagerie of distinct mathematical forms were discovered and invented, investigated, analyzed, rediscovered and renamed, all for the purpose of describing the probability of some interesting variable. There are hundreds of named distributions and synonyms in current usage. The apparent diversity is unending and disorienting. Fortunately, the situation is less confused than it might at first appear. Most common, continuous, univariate, unimodal distributions can be organized into a small number of distinct families, which are all special cases of a single Grand Unified Distribution. This compendium details these hundred or so simple distributions, their properties and their interrelations.
  • A New Heavy Tailed Class of Distributions Which Includes the Pareto
    Risks, article: A New Heavy Tailed Class of Distributions Which Includes the Pareto. Deepesh Bhati (1), Enrique Calderín-Ojeda (2, *) and Mareeswaran Meenakshi (1). (1) Department of Statistics, Central University of Rajasthan, Kishangarh 305817, India; [email protected] (D.B.); [email protected] (M.M.). (2) Department of Economics, Centre for Actuarial Studies, University of Melbourne, Melbourne, VIC 3010, Australia. * Correspondence: [email protected]. Received: 23 June 2019; Accepted: 10 September 2019; Published: 20 September 2019. Abstract: In this paper, a new heavy-tailed distribution, the mixture Pareto-loggamma distribution, derived through an exponential transformation of the generalized Lindley distribution, is introduced. The resulting model is expressed as a convex sum of the classical Pareto and a special case of the loggamma distribution. A comprehensive exploration of its statistical properties and theoretical results related to insurance is provided. Estimation is performed by using the method of log-moments and maximum likelihood. Also, as the modal value of this distribution is expressed in closed form, composite parametric models are easily obtained by a mode-matching procedure. The performance of both the mixture Pareto-loggamma distribution and the composite models is tested by employing different claims datasets. Keywords: Pareto distribution; excess-of-loss reinsurance; loggamma distribution; composite models. 1. Introduction: In general insurance, pricing is one of the most complex processes; it is no easy task but a long-drawn exercise involving the crucial step of modeling past claim data. For modeling insurance claims data, finding a suitable distribution to unearth the information from the data is crucial work that helps practitioners calculate fair premiums.
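For the excess-of-loss reinsurance mentioned in the keywords, the classical Pareto component has a closed-form stop-loss premium: for a Pareto Type I with shape α > 1 and scale x_m, E[(X - d)_+] = ∫ S(x) dx from d to infinity = x_m^α d^(1-α)/(α - 1) for a retention d ≥ x_m. A quick cross-check of that identity against numerical integration (parameter values are illustrative):

```python
import math

def pareto_sf(x, alpha=2.5, xm=1.0):
    """Survival function S(x) of a Pareto Type I(alpha, xm)."""
    return (xm / x) ** alpha if x > xm else 1.0

def expected_excess(d, alpha=2.5, xm=1.0):
    """Stop-loss premium E[(X - d)_+] = xm**alpha * d**(1 - alpha) / (alpha - 1)
    for d >= xm, obtained by integrating S(x) from d to infinity."""
    return xm ** alpha * d ** (1 - alpha) / (alpha - 1)

# Midpoint-rule integration of S(x) as an independent check:
d, upper, n = 2.0, 10_000.0, 400_000
h = (upper - d) / n
numeric = h * sum(pareto_sf(d + (i + 0.5) * h) for i in range(n))
print(round(expected_excess(d), 4), round(numeric, 4))  # both ≈ 0.2357
```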
  • The Negative Binomial-Erlang Distribution with Applications
    International Journal of Pure and Applied Mathematics, Volume 92 No. 3, 2014, 389-401. ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version). url: http://www.ijpam.eu; doi: http://dx.doi.org/10.12732/ijpam.v92i3.7. THE NEGATIVE BINOMIAL-ERLANG DISTRIBUTION WITH APPLICATIONS. Siriporn Kongrod (1), Winai Bodhisuwan (2, §), Prasit Payakkapong (3). (1, 2, 3) Department of Statistics, Kasetsart University, Chatuchak, Bangkok, 10900, Thailand. § Corresponding author. Received: December 4, 2013. © 2014 Academic Publications, Ltd. Abstract: This paper introduces a new three-parameter member of the mixed negative binomial family, called the negative binomial-Erlang distribution. This distribution is obtained by mixing the negative binomial distribution with the Erlang distribution. The negative binomial-Erlang distribution can be used to describe count data with a large number of zeros. The negative binomial-exponential is presented as a special case of the negative binomial-Erlang distribution. In addition, we present some properties of the negative binomial-Erlang distribution including factorial moments, mean, variance, skewness and kurtosis. Parameter estimation for the negative binomial-Erlang distribution by maximum likelihood estimation is provided. Applications of the negative binomial-Erlang distribution are carried out on two real count data sets. The results show that the negative binomial-Erlang distribution gives a better fit than the Poisson and negative binomial distributions. AMS Subject Classification: 60E05. Key Words: mixed distribution, count data, Erlang distribution, negative binomial-Erlang distribution, overdispersion. 1. Introduction: The Poisson distribution is a discrete probability distribution for counts of events that occur randomly in a given interval of time, and data in multiple research fields often follow the Poisson distribution.
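The mixing construction and the resulting overdispersion can be illustrated generically. The paper's exact NB-Erlang parameterization is not given in this abstract, so the sketch below uses the closely related mixed Poisson setup: with an Erlang(k, θ) intensity, the law of total variance gives E[N] = kθ and Var[N] = kθ + kθ², strictly larger than the mean. All parameter values are illustrative.

```python
import math
import random
import statistics

random.seed(42)
k, theta = 3, 0.8   # Erlang shape and scale of the mixing distribution

def poisson_draw(lam):
    """Poisson sample by CDF inversion (fine for small means)."""
    u, p, n = random.random(), math.exp(-lam), 0
    c = p
    while u > c:
        n += 1
        p *= lam / n
        c += p
    return n

# Mixed Poisson: N | Lambda ~ Poisson(Lambda), Lambda ~ Erlang(k, theta).
sample = [poisson_draw(random.gammavariate(k, theta)) for _ in range(200_000)]
m = statistics.fmean(sample)
v = statistics.pvariance(sample)
# Law of total variance: E[N] = k*theta = 2.4, Var[N] = k*theta + k*theta**2 = 4.32
print(round(m, 2), round(v, 2))
```

The sample variance comes out well above the sample mean, which is exactly the overdispersion that motivates mixed count models such as the one in the paper.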
  • Continuous Random Variables: the Gamma Distribution and the Normal Distribution
    Continuous random variables: the Gamma distribution and the normal distribution. Chrysafis Vogiatzis, Lecture 8. Learning objectives: after these lectures, we will be able to give examples of Gamma, Erlang, and normally distributed random variables; recall when and how to use Gamma and Erlang distributed random variables, and normally distributed random variables; recognize when to use the exponential, the Poisson, and the Erlang distribution; and use the standard normal distribution table to calculate probabilities of normally distributed random variables. Motivation: Congratulations, you are our 100,000th customer! Last time, we discussed the probability of the next customer arriving in the next hour, next day, next year. What about the probability of the 10th customer arriving at a certain time? Or, consider a printer that starts to fail and needs maintenance after the 1000th job: what is the probability these failures start happening a month from now? Motivation: Food poisoning and how to avoid it. A chef is using a new thermometer to tell whether certain foods have been adequately cooked. For example, chicken has to be at an internal temperature of 165 Fahrenheit or more to be adequately cooked; otherwise, we run the risk of salmonella. The restaurant wants to take no chances! The chef, then, takes a look at the temperature reading on the thermometer and sees 166. What is the probability that the chicken is adequately cooked, if we assume that the thermometer is right within a margin of error? The Gamma and the Erlang distribution. Assume again that you are given a rate λ with which some events happen.
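The "10th customer" question in the motivation is exactly an Erlang probability: the n-th event of a Poisson(λ) process occurs by time t with probability 1 - Σ_{k=0}^{n-1} e^{-λt}(λt)^k/k!. A minimal sketch (the arrival rate of 6 customers per hour is an assumed value for illustration):

```python
import math

def erlang_cdf(n, lam, t):
    """P(the n-th event of a Poisson(lam) process occurs by time t):
       1 - sum_{k=0}^{n-1} exp(-lam*t) * (lam*t)**k / k!"""
    return 1.0 - sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
                     for k in range(n))

# Probability the 10th customer has arrived within 2 hours, assuming
# customers arrive at rate 6 per hour:
p10 = erlang_cdf(10, 6.0, 2.0)
print(round(p10, 4))

# Sanity check: with n = 1 the Erlang CDF reduces to the exponential CDF.
assert abs(erlang_cdf(1, 6.0, 2.0) - (1.0 - math.exp(-12.0))) < 1e-12
```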
  • Queueing Systems
    Queueing Systems. Ivo Adan and Jacques Resing, Department of Mathematics and Computing Science, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands. March 26, 2015. Contents: 1 Introduction (7); 1.1 Examples (7). 2 Basic concepts from probability theory (11); 2.1 Random variable (11); 2.2 Generating function (11); 2.3 Laplace-Stieltjes transform (12); 2.4 Useful probability distributions (12); 2.4.1 Geometric distribution (12); 2.4.2 Poisson distribution (13); 2.4.3 Exponential distribution (13); 2.4.4 Erlang distribution (14); 2.4.5 Hyperexponential distribution (15); 2.4.6 Phase-type distribution (16); 2.5 Fitting distributions (17); 2.6 Poisson process (18); 2.7 Exercises (20). 3 Queueing models and some fundamental relations (23); 3.1 Queueing models and Kendall's notation (23); 3.2 Occupation rate (25); 3.3 Performance measures (25); 3.4 Little's law (26); 3.5 PASTA property (27); 3.6 Exercises (28). 4 M/M/1 queue (29); 4.1 Time-dependent behaviour (29); 4.2 Limiting behavior
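Chapter 4's M/M/1 queue has closed-form steady-state measures that tie together the occupation rate (section 3.2), the performance measures (3.3), and Little's law (3.4). A minimal sketch of those standard formulas:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 quantities (requires rho = lam/mu < 1)."""
    rho = lam / mu
    assert rho < 1, "queue is unstable"
    L = rho / (1 - rho)     # mean number in system
    W = L / lam             # mean time in system, by Little's law L = lam * W
    Lq = L - rho            # mean number waiting in queue
    Wq = W - 1 / mu         # mean waiting time before service
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

print(mm1_metrics(0.5, 1.0))  # rho=0.5, L=1.0, W=2.0, Lq=0.5, Wq=1.0
```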
  • The Hyperlang Probability Distribution: A Generalized Traffic Headway Model
    The Hyperlang Probability Distribution: A Generalized Traffic Headway Model. R. F. Dawson and L. A. Chimini, University of Connecticut. This research is concerned with the development of the hyperlang probability distribution as a generalized time headway model for single-lane traffic flows on two-lane, two-way roadways. The study methodology involved a process that is best described as "model evolution," and included: (a) identification of salient headway properties; (b) construction of mathematical micro-components to simulate essential headway properties; (c) integration of the micro-components into a general mathematical headway model; (d) numerical evaluation of model parameters; and (e) statistical evaluation of the model. The proposed hyperlang headway model is a linear combination of a translated exponential function and a translated Erlang function. The exponential component of the distribution describes the free (unconstrained) headways in the traffic stream, and the Erlang component describes the constrained headways. It is a very flexible model that can decay to a simple exponential function, to an Erlang function, or to a hyper-exponential function as might be required by the traffic situation. It is likely, however, that a traffic stream will always contain both free and constrained vehicles, and that the general form of the hyperlang function will be required in order to effect an adequate description of the composite headways. The parameters of the hyperlang function were evaluated for data sets obtained from the 1965 Highway Capacity Manual and from a 1967 Purdue University research project. The proposed hyperlang model proved to be a sound descriptor of the reported headways for volumes ranging from about 150 vph to about 1050 vph.
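The hyperlang density described above, a convex combination p (translated exponential) + (1 - p) (translated Erlang), can be sketched directly. Parameter names and values below are illustrative assumptions, not the paper's fitted values; the check confirms the mixture integrates to one:

```python
import math

def hyperlang_pdf(t, p=0.4, lam=0.5, tau1=0.5, k=3, mu=1.5, tau2=0.5):
    """Hyperlang headway density: p * translated exponential (free headways)
    + (1 - p) * translated Erlang (constrained headways). Parameter names
    and values are illustrative, not the paper's estimates."""
    free = lam * math.exp(-lam * (t - tau1)) if t >= tau1 else 0.0
    x = t - tau2
    constrained = (mu ** k * x ** (k - 1) * math.exp(-mu * x)
                   / math.factorial(k - 1)) if x >= 0 else 0.0
    return p * free + (1 - p) * constrained

# A proper density must integrate to one; midpoint rule over [0, 60]:
n, upper = 200_000, 60.0
h = upper / n
total = h * sum(hyperlang_pdf((i + 0.5) * h) for i in range(n))
print(round(total, 3))  # ≈ 1.0
```

Setting p to 1 or 0 recovers the pure translated-exponential or pure translated-Erlang special cases the abstract mentions.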