1 Appendix: Common Distributions


This appendix provides details for common univariate and multivariate distributions, including definitions, moments, and simulation. Many distributions can be parameterized in different ways. Devroye (1986) provides a complete treatment of random number generation, although care must be taken to ensure the parameterizations are consistent.

Uniform

A random variable $X$ has a uniform distribution on the interval $[\alpha, \beta]$, denoted $X \sim \mathcal{U}[\alpha, \beta]$, if the probability density function (pdf) is

$$p(x \mid \alpha, \beta) = \frac{1}{\beta - \alpha} \tag{1}$$

for $x \in [\alpha, \beta]$ and $0$ otherwise. The mean and variance of a uniform random variable are $E(X) = (\alpha + \beta)/2$ and $\mathrm{var}(X) = (\beta - \alpha)^2/12$, respectively.

The uniform distribution plays a foundational role in random number generation. In particular, uniform random numbers are required for the inverse transform simulation method, accept-reject algorithms, and the Metropolis algorithm. Fast and accurate pre-programmed algorithms are available in most statistical software packages and programming languages.

Bernoulli

A random variable $X \in \{0, 1\}$ has a Bernoulli distribution with parameter $\theta$, denoted $X \sim \mathcal{B}er(\theta)$, if the probability mass function is

$$\mathrm{Prob}(X = x \mid \theta) = \theta^x (1 - \theta)^{1 - x}. \tag{2}$$

The mean and variance of a Bernoulli random variable are $E(X) = \theta$ and $\mathrm{var}(X) = \theta(1 - \theta)$, respectively. To simulate $X \sim \mathcal{B}er(\theta)$,

1. Draw $U \sim \mathcal{U}(0, 1)$
2. Set $X = 1$ if $U < \theta$ (equivalently, if $U > 1 - \theta$), and $X = 0$ otherwise.

Binomial

A random variable $X \in \{0, 1, \ldots, n\}$ has a binomial distribution with parameters $n$ and $\theta$, denoted $X \sim \mathcal{B}in(n, \theta)$, if the probability mass function is

$$\mathrm{Prob}(X = x \mid n, \theta) = \frac{n!}{x!(n - x)!} \theta^x (1 - \theta)^{n - x}, \tag{3}$$

where $n! = n(n-1)! = n(n-1)\cdots 2 \cdot 1$. The mean and variance of a binomial random variable are $E(X) = n\theta$ and $\mathrm{var}(X) = n\theta(1 - \theta)$, respectively.

The binomial distribution arises as the distribution of a sum of $n$ independent Bernoulli trials, and it is closely related to a number of other distributions. If $W_1, \ldots, W_n$ are i.i.d. $\mathcal{B}er(p)$, then $\sum_{i=1}^{n} W_i \sim \mathcal{B}in(n, p)$. As $n \to \infty$ and $p \to 0$ with $np = \lambda$, $\mathcal{B}in(n, p)$ converges in distribution to a Poisson distribution with parameter $\lambda$.

To simulate $X \sim \mathcal{B}in(n, \theta)$,

1. Draw $W_1, \ldots, W_n$ independently, $W_i \sim \mathcal{B}er(\theta)$
2. Set $X = \#(W_i = 1)$.

Poisson

A random variable $X \in \mathbb{N}_0$ (the non-negative integers) has a Poisson distribution with parameter $\lambda$, denoted $X \sim \mathcal{P}oi(\lambda)$, if the probability mass function is

$$\mathrm{Prob}(X = x \mid \lambda) = e^{-\lambda} \frac{\lambda^x}{x!}. \tag{4}$$

The mean and variance of a Poisson random variable are $E(X) = \lambda$ and $\mathrm{var}(X) = \lambda$, respectively. To simulate $X \sim \mathcal{P}oi(\lambda)$,

1. Draw $\{Z_i\}_{i \geq 1}$ independently, $Z_i \sim \exp(1)$
2. Set $X = \inf\left\{ n \geq 0 : \sum_{i=1}^{n+1} Z_i > \lambda \right\}$.

Exponential

A random variable $X \in \mathbb{R}_+$ has an exponential distribution with parameter $\beta$, denoted $X \sim \exp(\beta)$, if the pdf is

$$p(x \mid \beta) = \frac{1}{\beta} \exp\left( -\frac{x}{\beta} \right). \tag{5}$$

The mean and variance of an exponential random variable are $E(X) = \beta$ and $\mathrm{var}(X) = \beta^2$, respectively.

The inverse transform method is the easiest way to simulate exponential random variables, since the cumulative distribution function is $F(x) = 1 - e^{-x/\beta}$. To simulate $X \sim \exp(\beta)$,

1. Draw $U \sim \mathcal{U}[0, 1]$
2. Set $X = -\beta \ln(1 - U)$.
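The recipes above translate directly into code. The following is a minimal Python sketch, assuming only NumPy; the function names are ours for illustration, not part of any package. It implements the inverse transform draw for the exponential, the Bernoulli indicator draw, the binomial as a sum of Bernoulli trials, and the Poisson arrival-counting construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_exponential(beta, rng):
    """Inverse transform: X = -beta * ln(1 - U) has cdf F(x) = 1 - exp(-x/beta)."""
    u = rng.uniform()
    return -beta * np.log1p(-u)  # log1p(-u) = ln(1 - u), stable for small u

def draw_bernoulli(theta, rng):
    """X = 1 with probability theta, else 0."""
    return int(rng.uniform() < theta)

def draw_binomial(n, theta, rng):
    """Sum of n independent Bernoulli(theta) trials."""
    return sum(draw_bernoulli(theta, rng) for _ in range(n))

def draw_poisson(lam, rng):
    """Count exp(1) arrivals until their cumulative sum first exceeds lam."""
    total, n = 0.0, 0
    while True:
        total += draw_exponential(1.0, rng)
        if total > lam:
            return n
        n += 1

# Sanity check against the stated moments: mean and variance both near 3.
draws = np.array([draw_poisson(3.0, rng) for _ in range(20_000)])
print(draws.mean(), draws.var())
```

These naive loops are for exposition; in practice one would use the vectorized built-in generators, which rely on the same constructions.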
Gamma

A random variable $X \in \mathbb{R}_+$ has a gamma distribution with parameters $\alpha$ and $\beta$, denoted $X \sim \mathcal{G}(\alpha, \beta)$, if the pdf is

$$p(x \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} \exp(-\beta x), \tag{6}$$

where the gamma function $\Gamma(\alpha)$ is defined in Appendix 4. The mean and variance of a gamma random variable are $E(X) = \alpha \beta^{-1}$ and $\mathrm{var}(X) = \alpha \beta^{-2}$, respectively.

It is important to note that there are different parameterizations of the gamma distribution. For example, some authors (and MATLAB) parameterize the gamma density as

$$p(x \mid \alpha, \beta) = \frac{1}{\Gamma(\alpha) \beta^\alpha} x^{\alpha - 1} \exp(-x/\beta).$$

Notice that if $Y \sim \mathcal{G}(\alpha, 1)$ and $X = Y/\beta$, then $X \sim \mathcal{G}(\alpha, \beta)$. To see this, note that the inverse transform is $Y = \beta X$ and $dY/dX = \beta$, which implies that

$$p(x \mid \alpha, \beta) = \frac{1}{\Gamma(\alpha)} (\beta x)^{\alpha - 1} \exp(-\beta x) \, \beta = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} \exp(-\beta x),$$

which is the density of a $\mathcal{G}(\alpha, \beta)$ random variable. The exponential distribution is a special case of the gamma distribution when $\alpha = 1$: $X \sim \mathcal{G}(1, \beta)$ implies that $X \sim \exp(1/\beta)$, an exponential with mean $1/\beta$.

Gamma random variable simulation is standard, with built-in generators in most software packages. These generators typically use accept-reject algorithms customized to the specific values of $\alpha$ and $\beta$. To simulate $X \sim \mathcal{G}(\alpha, \beta)$ when $\alpha$ is integer-valued,

1. Draw $X_1, \ldots, X_\alpha$ independently, $X_i \sim \exp(1)$
2. Set $X = \beta^{-1} \sum_{i=1}^{\alpha} X_i$.

For non-integer $\alpha$, accept-reject methods provide fast and accurate algorithms for gamma simulation. To avoid confusion over parameterizations, the transformation method can be used. To simulate $X \sim \mathcal{G}(\alpha, \beta)$,

1. Draw $Y \sim \mathcal{G}(\alpha, 1)$
2. Set $X = Y/\beta$.

Beta

A random variable $X \in [0, 1]$ has a beta distribution with parameters $\alpha$ and $\beta$, denoted $X \sim \mathcal{B}(\alpha, \beta)$, if the pdf is

$$p(x \mid \alpha, \beta) = \frac{x^{\alpha - 1} (1 - x)^{\beta - 1}}{B(\alpha, \beta)}, \tag{7}$$

where

$$B(\alpha, \beta) = \frac{\Gamma(\alpha) \Gamma(\beta)}{\Gamma(\alpha + \beta)}$$

is the beta function. Since $\int_0^1 p(x \mid \alpha, \beta) \, dx = 1$,

$$B(\alpha, \beta) = \int_0^1 x^{\alpha - 1} (1 - x)^{\beta - 1} \, dx.$$

The mean and variance of a beta random variable are

$$E(X) = \frac{\alpha}{\alpha + \beta} \quad \text{and} \quad \mathrm{var}(X) = \frac{\alpha \beta}{(\alpha + \beta)^2 (\alpha + \beta + 1)}, \tag{8}$$

respectively. If $\alpha = \beta = 1$, then $X \sim \mathcal{U}(0, 1)$.

If $\alpha$ and $\beta$ are integers, to simulate $X \sim \mathcal{B}(\alpha, \beta)$,

1. Draw $X_1 \sim \mathcal{G}(\alpha, 1)$ and $X_2 \sim \mathcal{G}(\beta, 1)$
2. Set $X = \dfrac{X_1}{X_1 + X_2}$.

For the general case, fast algorithms involving accept-reject, composition, and transformation methods are available in standard software packages.

Chi-squared

A random variable $X \in \mathbb{R}_+$ has a chi-squared distribution with parameter $\nu$, denoted $X \sim \chi^2_\nu$, if the pdf is

$$p(x \mid \nu) = \frac{1}{2^{\nu/2} \, \Gamma(\nu/2)} x^{\nu/2 - 1} \exp\left( -\frac{x}{2} \right). \tag{9}$$

The mean and variance of $X$ are $E(X) = \nu$ and $\mathrm{var}(X) = 2\nu$, respectively. The $\chi^2$-distribution is a special case of the gamma distribution: $\chi^2_\nu = \mathcal{G}\left( \tfrac{\nu}{2}, \tfrac{1}{2} \right)$.

Simulating chi-squared random variables typically uses the transformation method. For integer values of $\nu$, the following two-step procedure simulates a $\chi^2_\nu$ random variable:

1. Draw $Z_1, \ldots, Z_\nu$ independently, $Z_i \sim \mathcal{N}(0, 1)$
2. Set $X = \sum_{i=1}^{\nu} Z_i^2$.

When $\nu$ is large, simulating using normal random variables is computationally costly, and alternative, more efficient algorithms use gamma random variable generation.

Inverse gamma

A random variable $X \in \mathbb{R}_+$ has an inverse gamma distribution, denoted $X \sim \mathcal{IG}(\alpha, \beta)$, if the pdf is

$$p(x \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} \frac{\exp(-\beta/x)}{x^{\alpha + 1}}. \tag{10}$$

The mean and variance of the inverse gamma distribution are

$$E(X) = \frac{\beta}{\alpha - 1} \quad \text{and} \quad \mathrm{var}(X) = \frac{\beta^2}{(\alpha - 1)^2 (\alpha - 2)} \tag{11}$$

for $\alpha > 2$. If $Y \sim \mathcal{G}(\alpha, \beta)$, then $X = Y^{-1} \sim \mathcal{IG}(\alpha, \beta)$. To see this, note that

$$1 = \int_0^\infty \frac{\beta^\alpha}{\Gamma(\alpha)} y^{\alpha - 1} \exp(-\beta y) \, dy = \int_0^\infty \frac{\beta^\alpha}{\Gamma(\alpha)} \left( \frac{1}{x} \right)^{\alpha - 1} \exp\left( -\frac{\beta}{x} \right) \frac{1}{x^2} \, dx = \int_0^\infty \frac{\beta^\alpha}{\Gamma(\alpha)} \frac{\exp(-\beta/x)}{x^{\alpha + 1}} \, dx,$$

using the change of variables $y = 1/x$, for which $|dy| = dx/x^2$ and the limits of integration swap. The following two steps simulate an $\mathcal{IG}(\alpha, \beta)$ random variable:

1. Draw $Y \sim \mathcal{G}(\alpha, 1)$
2. Set $X = \beta / Y$.

Again, as in the case of the gamma distribution, some authors use a different parameterization for this distribution, so it is important to make sure you are drawing using the correct parameters. In the case of prior distributions over scale parameters such as $\sigma^2$, matters are further complicated because some authors, such as Zellner (1971), parameterize models in terms of $\sigma$ instead of $\sigma^2$.
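The transformation methods just described admit a compact implementation. Below is a minimal Python sketch, assuming NumPy's `standard_gamma` generator for $\mathcal{G}(\alpha, 1)$ draws; the wrapper names are ours. It draws a gamma by rescaling, a beta as a ratio of independent gammas, and an inverse gamma by inverting a gamma draw.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_gamma(alpha, beta, rng):
    """G(alpha, beta) in the rate parameterization of equation (6):
    draw Y ~ G(alpha, 1) and rescale, X = Y / beta."""
    return rng.standard_gamma(alpha) / beta

def draw_beta(alpha, beta, rng):
    """B(alpha, beta) as X1 / (X1 + X2) with X1 ~ G(alpha, 1), X2 ~ G(beta, 1)."""
    x1 = rng.standard_gamma(alpha)
    x2 = rng.standard_gamma(beta)
    return x1 / (x1 + x2)

def draw_inverse_gamma(alpha, beta, rng):
    """IG(alpha, beta): if Y ~ G(alpha, 1), then beta / Y ~ IG(alpha, beta)."""
    return beta / rng.standard_gamma(alpha)

# Sanity check: G(2, 5) should have mean 2/5 = 0.4 and variance 2/25 = 0.08.
x = np.array([draw_gamma(2.0, 5.0, rng) for _ in range(50_000)])
print(x.mean(), x.var())
```

Working through a $\mathcal{G}(\alpha, 1)$ draw and rescaling, as here, sidesteps the rate-versus-scale ambiguity noted above: only the final division by $\beta$ depends on the chosen parameterization.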
Normal

A random variable $X \in \mathbb{R}$ has a normal distribution with parameters $\mu$ and $\sigma^2$, denoted $X \sim \mathcal{N}(\mu, \sigma^2)$, if the pdf is

$$p(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right). \tag{12}$$

The mean and variance are $E(X) = \mu$ and $\mathrm{var}(X) = \sigma^2$.

Given the importance of normal random variables, all software packages have functions to draw normal random variables. The algorithms typically use transformation methods, drawing uniform and exponential random variables, or look-up tables.

Lognormal

A random variable $X \in \mathbb{R}_+$ has a lognormal distribution with parameters $\mu$ and $\sigma^2$, denoted $X \sim \mathcal{LN}(\mu, \sigma^2)$, if the pdf is

$$p(x \mid \mu, \sigma^2) = \frac{1}{x\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(\ln x - \mu)^2}{2\sigma^2} \right). \tag{13}$$

The mean and variance of the lognormal distribution are $E(X) = e^{\mu + \frac{1}{2}\sigma^2}$ and $\mathrm{var}(X) = \exp(2\mu + \sigma^2)\left(\exp(\sigma^2) - 1\right)$, respectively. The lognormal is related to a normal distribution via the transformation $X = e^{\mu + \sigma Z}$, where $Z \sim \mathcal{N}(0, 1)$. Although all finite moments of the lognormal exist, the distribution does not admit a moment-generating function.

Simulating lognormal random variables via the transformation method is straightforward, since $X = e^{\mu + \sigma \varepsilon}$, where $\varepsilon \sim \mathcal{N}(0, 1)$, is $\mathcal{LN}(\mu, \sigma^2)$.

Truncated Normal

A random variable $X$ has a truncated normal distribution with parameters $\mu$, $\sigma^2$ and truncation region $(a, b)$ if the pdf is

$$p(x \mid a < x < b) = \frac{\phi(x \mid \mu, \sigma^2)}{\Phi(b \mid \mu, \sigma^2) - \Phi(a \mid \mu, \sigma^2)},$$

where $\phi$ is the normal pdf and $\Phi(b \mid \mu, \sigma^2) = \int_{-\infty}^{b} \phi(x \mid \mu, \sigma^2) \, dx$ is the normal cumulative distribution function (CDF). The mean of a truncated normal distribution is

$$E(X \mid a < X < b) = \mu + \sigma \, \frac{\phi\left( \frac{a - \mu}{\sigma} \right) - \phi\left( \frac{b - \mu}{\sigma} \right)}{\Phi\left( \frac{b - \mu}{\sigma} \right) - \Phi\left( \frac{a - \mu}{\sigma} \right)},$$

where $\phi(\cdot)$ is the standard normal density and $\Phi(\cdot)$ is the standard normal CDF.

The inversion method can be used to simulate a truncated normal random variable. A two-step algorithm provides a draw from a truncated standard normal:

1. Draw $U \sim \mathcal{U}[0, 1]$
2. Set $X = \Phi^{-1}\left[ \Phi(a) + U \left( \Phi(b) - \Phi(a) \right) \right]$,

where $\Phi(a) = \int_{-\infty}^{a} (2\pi)^{-1/2} \exp(-x^2/2) \, dx$. For a general truncated normal, $X \sim \mathcal{TN}_{[a,b]}(\mu, \sigma^2)$,

1. Draw $U \sim \mathcal{U}[0, 1]$
2. Set $X = \mu + \sigma \, \Phi^{-1}\left[ \Phi\left( \frac{a - \mu}{\sigma} \right) + U \left( \Phi\left( \frac{b - \mu}{\sigma} \right) - \Phi\left( \frac{a - \mu}{\sigma} \right) \right) \right]$,

where $\Phi^{-1}$ is the inverse of the standard normal CDF, computed in practice from the inverse of the error function.
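The general inversion algorithm can be written compactly. The sketch below is a minimal illustration in Python, assuming SciPy's `norm.cdf` and `norm.ppf` for $\Phi$ and $\Phi^{-1}$; the function name is ours.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def draw_truncated_normal(mu, sigma, a, b, rng):
    """Inverse-CDF draw from N(mu, sigma^2) truncated to (a, b)."""
    u = rng.uniform()
    # Standard normal CDF values at the standardized truncation points
    p_a = norm.cdf((a - mu) / sigma)
    p_b = norm.cdf((b - mu) / sigma)
    # Map U uniformly onto [Phi(a~), Phi(b~)], then invert
    return mu + sigma * norm.ppf(p_a + u * (p_b - p_a))

# Check against the truncated-mean formula above for N(0, 1) on (-1, 2):
a_t, b_t = -1.0, 2.0
theory = (norm.pdf(a_t) - norm.pdf(b_t)) / (norm.cdf(b_t) - norm.cdf(a_t))
draws = np.array([draw_truncated_normal(0.0, 1.0, a_t, b_t, rng)
                  for _ in range(50_000)])
print(theory, draws.mean())  # should agree closely
```

Note that when the truncation region lies far in a tail, $\Phi(\tilde{a})$ and $\Phi(\tilde{b})$ become numerically indistinguishable and this naive inversion loses accuracy; specialized accept-reject samplers are preferable in that regime.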