Chapter 4 Multivariate Distributions

Multivariate Distributions (k ≥ 2)

All the results derived for the bivariate case can be generalized to k random variables. The joint distribution of X_1, X_2, …, X_k is described by a joint probability function p(x_1, x_2, …, x_k) when the RVs are discrete and by a joint density function f(x_1, x_2, …, x_k), with CDF F(x_1, x_2, …, x_k), when the RVs are continuous.

Joint Probability Function

Definition: Let X_1, X_2, …, X_k denote k discrete random variables. Then p(x_1, x_2, …, x_k) is the joint probability function of X_1, X_2, …, X_k if

1. 0 \le p(x_1, \ldots, x_k) \le 1
2. \sum_{x_1} \cdots \sum_{x_k} p(x_1, \ldots, x_k) = 1
3. P[(X_1, \ldots, X_k) \in A] = \sum_{(x_1, \ldots, x_k) \in A} p(x_1, \ldots, x_k)

Joint Density Function

Definition: Let X_1, X_2, …, X_k denote k continuous random variables. Then

  f(x_1, x_2, \ldots, x_k) = \frac{\partial^k}{\partial x_1 \, \partial x_2 \cdots \partial x_k} F(x_1, x_2, \ldots, x_k)

is the joint density function of X_1, X_2, …, X_k if

1. f(x_1, \ldots, x_k) \ge 0
2. \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1, \ldots, x_k) \, dx_1 \cdots dx_k = 1
3. P[(X_1, \ldots, X_k) \in A] = \int \cdots \int_A f(x_1, \ldots, x_k) \, dx_1 \cdots dx_k

Example: The Multinomial distribution

Suppose that we observe an experiment that has k possible outcomes {O_1, O_2, …, O_k} independently n times. Let p_1, p_2, …, p_k denote the probabilities of O_1, O_2, …, O_k respectively, and let X_i denote the number of times that outcome O_i occurs in the n repetitions of the experiment. Then the joint probability function of the random variables X_1, X_2, …, X_k is

  p(x_1, x_2, \ldots, x_k) = \frac{n!}{x_1! \, x_2! \cdots x_k!} \, p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k}

Note: p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k} is the probability of any particular sequence of length n containing x_1 outcomes O_1, x_2 outcomes O_2, …, x_k outcomes O_k, while

  \frac{n!}{x_1! \, x_2! \cdots x_k!} = \binom{n}{x_1 \, x_2 \cdots x_k}

is the number of ways of choosing the positions for the x_1 outcomes O_1, x_2 outcomes O_2, …, x_k outcomes O_k:

  \binom{n}{x_1} \binom{n - x_1}{x_2} \binom{n - x_1 - x_2}{x_3} \cdots \binom{x_k}{x_k}
  = \frac{n!}{x_1!\,(n - x_1)!} \cdot \frac{(n - x_1)!}{x_2!\,(n - x_1 - x_2)!} \cdot \frac{(n - x_1 - x_2)!}{x_3!\,(n - x_1 - x_2 - x_3)!} \cdots
  = \frac{n!}{x_1! \, x_2! \cdots x_k!}

Hence

  p(x_1, x_2, \ldots, x_k) = \frac{n!}{x_1! \, x_2! \cdots x_k!} \, p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k} = \binom{n}{x_1 \, x_2 \cdots x_k} p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k}

This is called the Multinomial distribution.

Example: Suppose that an earnings announcement has three possible outcomes:

O_1 – Positive stock price reaction (30% chance)
O_2 – No stock price reaction (50% chance)
O_3 – Negative stock price reaction (20% chance)

Hence p_1 = 0.30, p_2 = 0.50, p_3 = 0.20. Suppose today 4 firms released earnings announcements (n = 4). Let X = the number that result in a positive stock price reaction, Y = the number that result in no reaction, and Z = the number that result in a negative reaction. Find the distribution of X, Y and Z, and compute P[X + Y ≥ Z].

The joint probability function is

  p(x, y, z) = \frac{4!}{x! \, y! \, z!} \, (0.30)^x (0.50)^y (0.20)^z, \qquad x + y + z = 4.
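The table of joint probabilities and the value of P[X + Y ≥ Z] worked out below can be verified by brute-force enumeration. The following Python snippet is a minimal sketch, not part of the original notes, and the helper name multinomial_pmf is illustrative:

```python
# Hypothetical sketch, not part of the original notes: brute-force check of the
# earnings-announcement example (names such as multinomial_pmf are illustrative).
from math import factorial

n = 4
p = (0.30, 0.50, 0.20)  # P(positive), P(no reaction), P(negative)

def multinomial_pmf(counts, n, probs):
    """p(x1,...,xk) = n!/(x1!...xk!) * p1^x1 * ... * pk^xk for counts summing to n."""
    denom = 1
    for c in counts:
        denom *= factorial(c)
    coef = factorial(n) // denom
    prob = float(coef)
    for c, pi in zip(counts, probs):
        prob *= pi ** c
    return prob

prob_event = 0.0
for x in range(n + 1):
    for y in range(n + 1 - x):
        z = n - x - y                        # counts must sum to n = 4
        pxyz = multinomial_pmf((x, y, z), n, p)
        print((x, y, z), round(pxyz, 4))     # reproduces the table below
        if x + y >= z:
            prob_event += pxyz

print("P[X + Y >= Z] =", round(prob_event, 4))  # 0.9728
```

A library routine such as scipy.stats.multinomial could be used instead; the direct formula is kept here so the dependence on the pmf above stays explicit.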
The table below lists p(x, y, z) for every feasible outcome; all combinations with x + y + z ≠ 4 have probability 0:

  (x, y, z)    p(x, y, z)
  (0, 0, 4)    0.0016
  (0, 1, 3)    0.0160
  (0, 2, 2)    0.0600
  (0, 3, 1)    0.1000
  (0, 4, 0)    0.0625
  (1, 0, 3)    0.0096
  (1, 1, 2)    0.0720
  (1, 2, 1)    0.1800
  (1, 3, 0)    0.1500
  (2, 0, 2)    0.0216
  (2, 1, 1)    0.1080
  (2, 2, 0)    0.1350
  (3, 0, 1)    0.0216
  (3, 1, 0)    0.0540
  (4, 0, 0)    0.0081

Since x + y + z = 4, the event X + Y ≥ Z is the same as Z ≤ 2, so the only excluded outcomes are (0, 0, 4), (0, 1, 3) and (1, 0, 3):

  P[X + Y \ge Z] = 1 - (0.0016 + 0.0160 + 0.0096) = 0.9728

Example: The Multivariate Normal distribution

Recall the univariate normal distribution

  f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2}

and the bivariate normal distribution

  f(x, y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1 - \rho^2}} \exp\left\{ -\frac{1}{2(1 - \rho^2)} \left[ \left(\frac{x - \mu_x}{\sigma_x}\right)^2 - 2\rho \left(\frac{x - \mu_x}{\sigma_x}\right)\left(\frac{y - \mu_y}{\sigma_y}\right) + \left(\frac{y - \mu_y}{\sigma_y}\right)^2 \right] \right\}

The k-variate Normal distribution is given by

  f(x_1, \ldots, x_k) = f(\mathbf{x}) = \frac{1}{(2\pi)^{k/2} \, |\Sigma|^{1/2}} \, e^{-\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})' \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})}

where

  \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}, \qquad
  \boldsymbol{\mu} = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_k \end{pmatrix}, \qquad
  \Sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1k} \\ \sigma_{12} & \sigma_{22} & \cdots & \sigma_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{1k} & \sigma_{2k} & \cdots & \sigma_{kk} \end{pmatrix}

Marginal joint probability function

Definition: Let X_1, X_2, …, X_q, X_{q+1}, …, X_k denote k discrete random variables with joint probability function p(x_1, x_2, …, x_q, x_{q+1}, …, x_k). Then the marginal joint probability function of X_1, X_2, …, X_q is

  p_{12 \cdots q}(x_1, \ldots, x_q) = \sum_{x_{q+1}} \cdots \sum_{x_k} p(x_1, \ldots, x_k)

When X_1, X_2, …, X_q, X_{q+1}, …, X_k are continuous, the marginal joint density function of X_1, X_2, …, X_q is

  f_{12 \cdots q}(x_1, \ldots, x_q) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1, \ldots, x_k) \, dx_{q+1} \cdots dx_k

Conditional joint probability function

Definition: Let X_1, X_2, …, X_q, X_{q+1}, …, X_k denote k discrete random variables with joint probability function p(x_1, x_2, …, x_q, x_{q+1}, …, x_k). Then the conditional joint probability function of X_1, X_2, …, X_q given X_{q+1} = x_{q+1}, …, X_k = x_k is

  p_{1 \cdots q \mid q+1 \cdots k}(x_1, \ldots, x_q \mid x_{q+1}, \ldots, x_k) = \frac{p(x_1, \ldots, x_k)}{p_{q+1 \cdots k}(x_{q+1}, \ldots, x_k)}

For the continuous case, we have

  f_{1 \cdots q \mid q+1 \cdots k}(x_1, \ldots, x_q \mid x_{q+1}, \ldots, x_k) = \frac{f(x_1, \ldots, x_k)}{f_{q+1 \cdots k}(x_{q+1}, \ldots, x_k)}

Definition: Independence of sets of random variables. Let X_1, X_2, …, X_q, X_{q+1}, …, X_k denote k continuous random variables with joint probability density function f(x_1, x_2, …, x_q, x_{q+1}, …, x_k). Then the variables X_1, X_2, …, X_q are independent of X_{q+1}, …, X_k if

  f(x_1, \ldots, x_k) = f_{1 \cdots q}(x_1, \ldots, x_q) \, f_{q+1 \cdots k}(x_{q+1}, \ldots, x_k)

A similar definition applies to discrete random variables.

Definition: Mutual independence. Let X_1, X_2, …, X_k denote k continuous random variables with joint probability density function f(x_1, x_2, …, x_k). Then the variables X_1, X_2, …, X_k are called mutually independent if

  f(x_1, \ldots, x_k) = f_1(x_1) \, f_2(x_2) \cdots f_k(x_k)

A similar definition applies to discrete random variables.

Multivariate marginal pdfs - Example

Let X, Y, Z denote 3 jointly distributed random variables with joint density function

  f(x, y, z) = \begin{cases} K(x^2 + yz) & 0 \le x \le 1, \; 0 \le y \le 1, \; 0 \le z \le 1 \\ 0 & \text{otherwise} \end{cases}

Find the value of K, determine the marginal distributions of X, Y and Z, and determine the joint marginal distributions of (X, Y), (X, Z) and (Y, Z).
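Before working through the integrals by hand (next), the normalization constant can be sanity-checked numerically. The snippet below is a hypothetical sketch, not part of the original notes, approximating the triple integral of x^2 + yz over the unit cube with a simple midpoint rule:

```python
# Hypothetical numerical check, not part of the original notes: approximate
# I = integral of (x^2 + y*z) over the unit cube with a midpoint rule; then K = 1/I.
import numpy as np

m = 100                                    # grid points per axis
pts = (np.arange(m) + 0.5) / m             # midpoint of each cell
x, y, z = np.meshgrid(pts, pts, pts, indexing="ij")
integral = np.sum(x**2 + y * z) / m**3     # midpoint-rule approximation of I

print(integral)         # ~0.5833, i.e. 7/12
print(1.0 / integral)   # ~1.7143, i.e. K = 12/7 as derived below
```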
Solution: determining the value of K.

  \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y, z) \, dx \, dy \, dz
  = K \int_0^1 \int_0^1 \int_0^1 (x^2 + yz) \, dx \, dy \, dz
  = K \int_0^1 \int_0^1 \left[ \frac{x^3}{3} + xyz \right]_{x=0}^{x=1} dy \, dz
  = K \int_0^1 \int_0^1 \left( \frac{1}{3} + yz \right) dy \, dz
  = K \int_0^1 \left[ \frac{y}{3} + \frac{y^2 z}{2} \right]_{y=0}^{y=1} dz
  = K \int_0^1 \left( \frac{1}{3} + \frac{z}{2} \right) dz
  = K \left[ \frac{z}{3} + \frac{z^2}{4} \right]_0^1
  = K \left( \frac{1}{3} + \frac{1}{4} \right) = \frac{7}{12} K

Setting this equal to 1 gives K = 12/7.

The marginal distribution of X:

  f_1(x) = \int_0^1 \int_0^1 f(x, y, z) \, dy \, dz
  = \frac{12}{7} \int_0^1 \int_0^1 (x^2 + yz) \, dy \, dz
  = \frac{12}{7} \int_0^1 \left[ x^2 y + \frac{y^2 z}{2} \right]_{y=0}^{y=1} dz
  = \frac{12}{7} \int_0^1 \left( x^2 + \frac{z}{2} \right) dz
  = \frac{12}{7} \left( x^2 + \frac{1}{4} \right), \qquad 0 \le x \le 1

The joint marginal distribution of (X, Y):

  f_{12}(x, y) = \int_0^1 f(x, y, z) \, dz
  = \frac{12}{7} \int_0^1 (x^2 + yz) \, dz
  = \frac{12}{7} \left[ x^2 z + \frac{y z^2}{2} \right]_{z=0}^{z=1}
  = \frac{12}{7} \left( x^2 + \frac{y}{2} \right), \qquad 0 \le x \le 1, \; 0 \le y \le 1

Next, find the conditional distribution of:
1. Z given X = x, Y = y
2. Y given X = x, Z = z
3. X given Y = y, Z = z
4. Y, Z given X = x
5. X, Z given Y = y
6. X, Y given Z = z
7. Y given X = x
8. X given Y = y
9. X given Z = z
10. Z given X = x
11. Z given Y = y
12. Y given Z = z

Using the joint marginal distribution of (X, Y),

  f_{12}(x, y) = \frac{12}{7} \left( x^2 + \frac{y}{2} \right), \qquad 0 \le x \le 1, \; 0 \le y \le 1,

the conditional distribution of Z given X = x, Y = y is

  f(z \mid x, y) = \frac{f(x, y, z)}{f_{12}(x, y)} = \frac{\frac{12}{7}(x^2 + yz)}{\frac{12}{7}\left(x^2 + \frac{y}{2}\right)} = \frac{x^2 + yz}{x^2 + \frac{y}{2}}, \qquad 0 \le z \le 1

Using the marginal distribution of X,

  f_1(x) = \frac{12}{7} \left( x^2 + \frac{1}{4} \right), \qquad 0 \le x \le 1,

the conditional distribution of (Y, Z) given X = x is

  f(y, z \mid x) = \frac{f(x, y, z)}{f_1(x)} = \frac{x^2 + yz}{x^2 + \frac{1}{4}}, \qquad 0 \le y \le 1, \; 0 \le z \le 1

Expectations for Multivariate Distributions

Definition: Let X_1, X_2, …, X_n denote n jointly distributed random variables with joint density function f(x_1, x_2, …, x_n). Then

  E[g(X_1, \ldots, X_n)] = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1, \ldots, x_n) \, f(x_1, \ldots, x_n) \, dx_1 \cdots dx_n

Example: Let X, Y, Z denote 3 jointly distributed random variables with joint density function

  f(x, y, z) = \begin{cases} \frac{12}{7}(x^2 + yz) & 0 \le x \le 1, \; 0 \le y \le 1, \; 0 \le z \le 1 \\ 0 & \text{otherwise} \end{cases}

Determine E[XYZ].

Solution:

  E[XYZ] = \int_0^1 \int_0^1 \int_0^1 xyz \cdot \frac{12}{7}(x^2 + yz) \, dx \, dy \, dz
  = \frac{12}{7} \int_0^1 \int_0^1 \int_0^1 \left( x^3 yz + x y^2 z^2 \right) dx \, dy \, dz
  = \frac{12}{7} \int_0^1 \int_0^1 \left[ \frac{x^4}{4} yz + \frac{x^2}{2} y^2 z^2 \right]_{x=0}^{x=1} dy \, dz
  = \frac{3}{7} \int_0^1 \int_0^1 \left( yz + 2 y^2 z^2 \right) dy \, dz
  = \frac{3}{7} \int_0^1 \left[ \frac{y^2 z}{2} + \frac{2 y^3 z^2}{3} \right]_{y=0}^{y=1} dz
  = \frac{3}{7} \int_0^1 \left( \frac{z}{2} + \frac{2 z^2}{3} \right) dz
  = \frac{3}{7} \left[ \frac{z^2}{4} + \frac{2 z^3}{9} \right]_0^1
  = \frac{3}{7} \left( \frac{1}{4} + \frac{2}{9} \right) = \frac{3}{7} \cdot \frac{17}{36} = \frac{17}{84}

(A symbolic check of this result appears in the sketch at the end of this section.)

Some Rules for Expectations – Rule 1
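As referenced above, the expectation example can also be reproduced symbolically. The snippet below is a hypothetical sketch, not part of the original notes, using sympy to verify E[XYZ] = 17/84 and the marginal density of X:

```python
# Hypothetical symbolic check, not part of the original notes, of the example above:
# E[XYZ] = 17/84 for f(x, y, z) = (12/7)(x^2 + y*z) on the unit cube.
import sympy as sp

x, y, z = sp.symbols("x y z", positive=True)
f = sp.Rational(12, 7) * (x**2 + y * z)

# E[XYZ] = triple integral of x*y*z*f(x, y, z) over [0, 1]^3
E_xyz = sp.integrate(x * y * z * f, (x, 0, 1), (y, 0, 1), (z, 0, 1))
print(E_xyz)            # 17/84

# The same approach reproduces the marginal of X used earlier:
f1 = sp.integrate(f, (y, 0, 1), (z, 0, 1))
print(sp.expand(f1))    # 12*x**2/7 + 3/7, i.e. (12/7)(x^2 + 1/4)
```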