F19: Introduction to Monte Carlo Simulations
Ebrahim Shayesteh


Agenda
. Introduction and repetition
. Monte Carlo methods: background, introduction, motivation
. Example 1: Buffon's needle
. Simple sampling
. Example 2: Travel time from A to B
. Accuracy: variance reduction techniques
. VRT 1: Complementary random numbers
. Example 3: DC OPF problem

Repetition: fault models
. Note: "Non-repairable systems"
. A summary of the functions describing the stochastic variable T (the time at which a failure occurs):
  • Cumulative distribution function: F(t) = P(T ≤ t)
  • Survivor function: R(t) = P(T > t) = 1 − F(t)
  • Probability density function: f(t) = F'(t) = −R'(t)
  • Failure rate: z(t) = f(t)/R(t) = f(t)/(1 − F(t))

Repetition: repairable systems
. Repairable systems are described by an "alternating renewal process": the state X(t) alternates between up (X = 1) and down (X = 0), with up-times T1, T2, T3, ... and down-times D1, D2, D3, ...
. Mean Time To Failure (MTTF)
. Mean Down Time (MDT)
. Mean Time To Repair (MTTR): sometimes, but not always, the same as MDT
. Mean Time Between Failures (MTBF = MTTF + MDT)

. The "availability" of a unit is defined as the probability that the unit is operational at a given time t.
. Note:
  – if the unit cannot be repaired, A(t) = R(t)
  – if the unit can be repaired, the availability depends on both the lifetime distribution and the repair-time distribution.

. The share of time during which the unit has been working over n cycles thus becomes
  A_n = (T1 + T2 + ... + Tn) / ((T1 + D1) + (T2 + D2) + ... + (Tn + Dn))
. As n → ∞, this results in the average availability
  A_av = E[T] / (E[T] + E[D]) = MTTF / (MTTF + MDT)

. The failure frequency (referred to as either ω or f) over the total time interval is given by
  f = 1 / (MTTF + MDT)
. Note the difference between the failure rate (λ = 1/MTTF) and the failure frequency (f = 1/MTBF). For down times that are short compared to the operating time (MDT << MTTF), the difference is negligible: λ ≈ f.
  o This assumption is often applicable, and is commonly used in reliability analyses of power distribution!
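The relation A_av = MTTF/(MTTF + MDT) can be checked by simulating the alternating renewal process directly; a minimal sketch (the exponential up- and down-time distributions are an assumption of this example, not of the slides):

```python
import random

def simulated_availability(mttf, mdt, n_cycles=100_000, seed=1):
    """Estimate the average availability A_av by sampling an
    alternating renewal process: n_cycles up-times followed by
    down-times, here drawn from exponential distributions
    (a modelling assumption for illustration)."""
    rng = random.Random(seed)
    up_time = sum(rng.expovariate(1.0 / mttf) for _ in range(n_cycles))
    down_time = sum(rng.expovariate(1.0 / mdt) for _ in range(n_cycles))
    return up_time / (up_time + down_time)

mttf, mdt = 8760.0, 24.0                  # hours: about a year up, a day down
a_sim = simulated_availability(mttf, mdt)
a_exact = mttf / (mttf + mdt)             # A_av = MTTF / (MTTF + MDT)
```

With enough cycles the simulated share of up-time converges to the analytical value, which is the Law of Large Numbers at work.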
System of components
. A technical system can be described as a number of interconnected functional blocks that together perform the required functions, where the components are modeled as blocks. There are two fundamental system categories:
  1. Serial systems (in power distribution contexts often referred to as radial systems/lines/feeders)
  2. Parallel systems
• Often, a system can be seen as a composition of several subsystems of these two fundamental types.

Methods: approximate equations
. Approximate equations for a serial system (MDT << MTTF is assumed):
  • Failure rate [failures/year]: λs = Σ λi
  • Unavailability [hours/year]: Us = Σ λi·ri
  • Average repair time [hours]: rs = Us / λs

. Approximate equations for a parallel system of two components (MDT << MTTF is assumed):
  • Failure rate [failures/year]: λp = λ1·λ2·(r1 + r2) / (1 + λ1·r1 + λ2·r2) ≈ λ1·λ2·(r1 + r2)
  • Average repair time [hours]: rp = r1·r2 / (r1 + r2)
  • Unavailability [hours/year]: Up = λp·rp ≈ λ1·λ2·r1·r2

System indices
Additional reliability measures: system indices
. The previously calculated system reliability measures λs, Us and rs are expected values (means) of probability distributions. They do not, however, describe the impact of a fault, which can differ considerably between load points:
  • compare, for example, a load point with one customer and a load of 10 kW with a load point with 100 customers and a load of 500 MW.
. To take more aspects into account, system indices are calculated.

System indices: customer-oriented reliability indices
. System average interruption frequency index, SAIFI [interruptions/year, customer]:
  SAIFI = Σ λi·Ni / Σ Ni
. Customer average interruption frequency index, CAIFI [interruptions/year, affected customer]:
  CAIFI = Σ λi·Ni / Σ Ni, where the denominator sums only over customers affected by interruptions
. Here λi is the failure rate of load point i (LPi) and Ni is the number of customers in LPi.
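The approximate serial and parallel equations translate directly into code; a sketch assuming consistent time units for λ and r (e.g. failures per year and repair times in years):

```python
def series_system(components):
    """Approximate indices for a series (radial) system.
    components: list of (failure_rate, repair_time) pairs.
    Valid when MDT << MTTF."""
    lam_s = sum(lam for lam, _ in components)
    u_s = sum(lam * r for lam, r in components)
    return lam_s, u_s, u_s / lam_s          # λs, Us, rs = Us/λs

def parallel_pair(lam1, r1, lam2, r2):
    """Approximate indices for two components in parallel;
    λ and r must be expressed in the same time unit."""
    lam_p = lam1 * lam2 * (r1 + r2) / (1 + lam1 * r1 + lam2 * r2)
    r_p = r1 * r2 / (r1 + r2)
    return lam_p, lam_p * r_p, r_p          # λp, Up = λp·rp, rp

# Illustrative numbers (not taken from the slides):
lam_s, u_s, r_s = series_system([(0.1, 4.0), (0.2, 2.0)])
lam_p, u_p, r_p = parallel_pair(0.5, 0.01, 0.2, 0.02)
```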
. System average interruption duration index, SAIDI [hours/year, customer]:
  SAIDI = Σ Ui·Ni / Σ Ni
. Customer average interruption duration index, CAIDI [hours/interruption]:
  CAIDI = Σ Ui·Ni / Σ λi·Ni
. Here Ui is the outage time of load point i (LPi) and Ni is the number of customers in LPi.

. Average service availability index, ASAI [probability between 0 and 1, or %]:
  ASAI = (Σ Ni·8760 − Σ Ui·Ni) / (Σ Ni·8760)
  where 8760 is the number of hours per year.
. The average service unavailability index (ASUI) is also used: ASUI = 1 − ASAI.

System indices: energy-oriented reliability indices
. Energy not supplied index, ENS [kWh/year]:
  ENS = Σ Ui·La(i)
. Average energy not supplied index, AENS [kWh/year, customer]:
  AENS = Σ Ui·La(i) / Σ Ni
. Here Ui is the outage time of load point i (LPi), Ni is the number of customers in LPi, and La(i) is the average load of LPi.

Agenda
. Introduction and repetition
. Monte Carlo methods: background, introduction, motivation
. Example 1: Buffon's needle
. Simple sampling
. Example 2: Travel time from A to B
. Accuracy: variance reduction techniques
. VRT 1: Complementary random numbers
. Example 3: DC OPF problem

Monte Carlo methods: background
. A class of methods used to solve mathematical problems by studying random samples; in other words, an experimental approach to solving a problem.
. The theoretical basis of Monte Carlo is the Law of Large Numbers:
  • The average of several independent stochastic variables with the same expected value m is close to m when the number of stochastic variables is large enough.
  • The result is that (1/N) Σ xi → m as N → ∞.
. The second most important (i.e., useful) theoretical result for Monte Carlo is the Central Limit Theorem (CLT): the sum of a sufficiently large number of independent, identically distributed random variables becomes approximately normally distributed as N increases.
. This is useful because it lets us draw conclusions from the results of a large number of samples (e.g., about 68.3% of outcomes lie within one standard deviation of the mean, etc.).
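The Law of Large Numbers and the one-standard-deviation rule can both be illustrated numerically; a sketch (the exponential and standard-normal test distributions are this example's own choices):

```python
import random

def sample_mean_convergence(m=2.0, ns=(100, 10_000, 1_000_000), seed=5):
    """Law of Large Numbers: sample means of exponential variables
    with expected value m approach m as N grows."""
    rng = random.Random(seed)
    return [sum(rng.expovariate(1.0 / m) for _ in range(n)) / n for n in ns]

def fraction_within_one_sigma(n=100_000, seed=6):
    """CLT rule of thumb: roughly 68.3% of standard-normal draws
    fall within one standard deviation of the mean."""
    rng = random.Random(seed)
    return sum(abs(rng.gauss(0.0, 1.0)) <= 1.0 for _ in range(n)) / n

means = sample_mean_convergence()
frac = fraction_within_one_sigma()
```

The spread of the estimates shrinks roughly as 1/√N, which is what the CLT predicts for the sample mean.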
Monte Carlo methods: simulation
. The word "simulation" in Monte Carlo simulation is derived from the Latin simulare, which means "to make like". A simulation is thus an attempt to imitate a natural or technical system.
. Different simulation methods:
  • Physical simulation: study a copy of the original system, which is usually smaller and less expensive than the real system.
  • Computer simulation: study a mathematical model of the original system.
  • Interactive simulation: study a system (either a physical or a computer simulation) together with its human operators.

. The simulation setup is Y → g(Y) → X.
. Inputs:
  • The inputs are random variables with known probability distributions.
  • For convenience, we collect all input variables in a vector Y.
. Model:
  • The model is represented by the mathematical function g(Y).
  • The random behavior of the system is captured by the inputs, i.e., the model itself is deterministic! Hence, if x1 = g(y1), x2 = g(y2) and y1 = y2, then x1 = x2.
. Outputs:
  • The outputs are random variables with unknown probability distributions.
  • For convenience, we collect all output variables in a vector X. The objective of the simulation is to study the probability distribution of X!

Monte Carlo methods: simulation example
. Inputs: the status of all primary lines and all lateral lines, and the power consumption and number of customers at each load point.
. Model: the structure of the distribution system, given the above inputs.
. Outputs: the reliability measures, e.g., the values of the system indices.

Monte Carlo methods: motivation
. Assume that we want to calculate the expected value E[X] of the system X = g(Y). By the definition of the expected value we get
  E[X] = E[g(Y)] = ∫ g(y)·f_Y(y) dy
. What reasons are there to solve this problem using Monte Carlo methods rather than analytical methods?
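Simple sampling of a deterministic model g(Y) then amounts to averaging g over random draws of Y; a toy sketch (the quadratic g and uniform inputs are illustrative assumptions, not the lecture's travel-time model):

```python
import random

def g(y1, y2):
    """Toy deterministic model X = g(Y): identical inputs always
    give identical outputs."""
    return y1 + y2 ** 2

def estimate_mean(n_samples=200_000, seed=3):
    """Simple-sampling estimate of E[X] for Y1, Y2 ~ U(0, 1)."""
    rng = random.Random(seed)
    total = sum(g(rng.random(), rng.random()) for _ in range(n_samples))
    return total / n_samples

# Analytically E[X] = E[Y1] + E[Y2^2] = 1/2 + 1/3, so the estimate
# should land close to 0.833.
x_hat = estimate_mean()
```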
. Complexity: the model g(y) may not be an explicit function.
  • Example: the outputs x are given by the solution to an optimization problem in which the inputs y appear as parameters.
. Problem size: the model may have too many inputs or outputs.
  • Example: 10 inputs ⇒ integrate over 10 dimensions!

Monte Carlo methods: motivation
Analytic model or simulation method?
. Analytic models are usually valid only under certain restrictive assumptions, such as independence of the inputs, a limited number of states, etc. The MC method can be used for large problems with many states.
. The physical visibility of a complex system is higher in the simulation method.
. Analytical methods are more accurate than simulation as long as no simplifying assumptions are made; once such assumptions are introduced, the accuracies cannot be directly compared.
. When future developments of the system are of interest, simulation methods are more appropriate, since such developments are more tractable in a simulation.
. For small systems the analytic methods are faster, whereas the MC method must simulate enough random scenarios, which takes more time.

. Advantages of each method:
  Analytical:
  • Exact results if only limited assumptions are needed
  • The outputs are obtained quickly once the model is available
  • A computer is not necessarily needed
  Monte Carlo:
  • The analyses are very flexible
  • Extending the model is easy
  • It is easy to understand

Agenda
. Introduction and repetition
. Monte Carlo methods: background, introduction, motivation
. Example 1: Buffon's needle
. Simple sampling
. Example 2: Travel time from A to B
. Accuracy: variance reduction techniques
. VRT 1: Complementary random numbers
. Example 3: DC OPF problem

Example 1: Buffon's needle
. The position of the needle can be described using two parameters:
  • a = least distance from the needle center to one of the parallel lines (0 ≤ a ≤ d/2)
  • ϑ = least angle between the needle direction and the parallel lines (0 ≤ ϑ ≤ π/2)
. The needle (of length l, with l ≤ d) will cross a line if its projection on an axis perpendicular to the parallel lines is larger than the distance to the closest line, i.e., if a ≤ (l/2)·sin ϑ.
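The crossing condition leads to the classic Monte Carlo estimate of π, using the standard Buffon result P(cross) = 2l/(πd) for needle length l ≤ d (the probability itself is not derived in this excerpt):

```python
import math
import random

def buffon_pi(l=1.0, d=2.0, n=1_000_000, seed=7):
    """Estimate pi by sampling the needle position (a, theta) and
    counting crossings, which occur when a <= (l/2)*sin(theta).
    Requires l <= d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        a = rng.uniform(0.0, d / 2.0)            # centre-to-line distance
        theta = rng.uniform(0.0, math.pi / 2.0)  # needle angle
        if a <= (l / 2.0) * math.sin(theta):
            hits += 1
    # P(cross) = 2l / (pi*d), so pi is estimated as 2*l*n / (d*hits)
    return 2.0 * l * n / (d * hits)

pi_hat = buffon_pi(n=200_000)
```

The accuracy improves only as 1/√n, which is exactly why the variance reduction techniques on the agenda are of interest.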