Alpha-Skew Generalized Normal Distribution and Its Applications


Alpha-Skew Generalized Normal Distribution and its Applications

Available at http://pvamu.edu/aam
Applications and Applied Mathematics: An International Journal (AAM), ISSN: 1932-9466
Vol. 14, Issue 2 (December 2019), pp. 784-804

Eisa Mahmoudi*, Hamideh Jafari and RahmatSadat Meshkat
Department of Statistics, Yazd University, Yazd, Iran
[email protected]; [email protected]; [email protected]
* Corresponding Author

Received: July 8, 2018; Accepted: November 11, 2019

Abstract

The main object of this paper is to introduce a new family of distributions that is flexible enough to fit both unimodal and bimodal shapes. The new family, called the alpha-skew generalized normal (ASGN) family, skews symmetric distributions; throughout this paper it is applied to the generalized normal distribution. Some properties of the new distribution, including the cumulative distribution function, survival function, hazard rate function and moments, are derived. To estimate the model parameters, the maximum likelihood estimators and the asymptotic distribution of the estimators are discussed, and the observed information matrix is derived. Finally, the flexibility of the new distribution, as well as its effectiveness in comparison with other distributions, is shown via an application.

Keywords: Alpha-skew distribution; Generalized normal distribution; Maximum likelihood estimator; Skew-normal distribution

MSC 2010 No.: 60E05, 62F10

1. Introduction

Recently, skew-symmetric distributions have received a considerable amount of attention; for example, the alpha-skew-normal (ASN) distribution was studied by Elal-Olivero (2010). He first introduced a new symmetric bimodal-normal distribution with pdf

f(y) = y^2 \phi(y),

where \phi is the standard normal density function, and then defined the alpha-skew-normal distribution as

f(y; \alpha) = \frac{(1 - \alpha y)^2 + 1}{2 + \alpha^2}\, \phi(y), \qquad y \in \mathbb{R},\ \alpha \in \mathbb{R},

where \alpha is the skew parameter. A random variable with this distribution is denoted by X ~ ASN(\alpha). Shams Harandi and Alamatsaz (2012) introduced the alpha-skew-Laplace (ASL) distribution with pdf

f(y; \alpha) = \frac{(1 - \alpha y)^2 + 1}{4(1 + \alpha^2)}\, e^{-|y|}, \qquad y \in \mathbb{R},\ \alpha \in \mathbb{R},

investigated some of its properties, and denoted the corresponding random variable by X ~ ASL(\alpha). In this paper, a more general case of these two distributions, called the alpha-skew generalized normal (ASGN) distribution, is presented. It is obtained by adding a skew parameter to the generalized normal distribution (GND), proposed first by Lee et al. (2013), with pdf

f(y; \beta) = \frac{1}{2\beta^{1/\beta}\,\Gamma(1 + 1/\beta)}\, e^{-|y|^{\beta}/\beta}, \qquad y \in \mathbb{R},\ \beta > 0.

The motivations for considering this density are:

1. The alpha-skew generalized normal distribution, with at most two modes, is very flexible and includes five classes of important distributions as special cases: normal, Laplace, alpha-skew-normal, alpha-skew-Laplace and bimodal-generalized normal.

2. It seems that the admissible intervals for the skewness and kurtosis indices are (-0.685, 3.803) and (1.250, 3.325), respectively, which are wider than those of Azzalini's skew-normal and the ASN distributions, (-0.811, 0.811) and (1.300, 0.749), respectively.

3. In many applied studies, data may be skewed and bimodal, with thinner or thicker tails than the normal. We believe that the ASGN distribution can provide a better fit than the usually fitted models for such real data sets.
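To make the densities recalled above concrete, here is a minimal Python sketch (illustrative only; asn_pdf, asl_pdf and gnd_pdf are not names from the paper) that evaluates the three pdfs and checks numerically that each integrates to one.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import gamma

def asn_pdf(y, alpha):
    """Alpha-skew-normal density of Elal-Olivero (2010)."""
    return ((1 - alpha * y) ** 2 + 1) / (2 + alpha ** 2) * norm.pdf(y)

def asl_pdf(y, alpha):
    """Alpha-skew-Laplace density of Shams Harandi and Alamatsaz (2012)."""
    return ((1 - alpha * y) ** 2 + 1) / (4 * (1 + alpha ** 2)) * np.exp(-np.abs(y))

def gnd_pdf(y, beta):
    """Generalized normal density; beta=1 gives Laplace(0,1), beta=2 the standard normal."""
    return np.exp(-np.abs(y) ** beta / beta) / (2 * beta ** (1 / beta) * gamma(1 + 1 / beta))

y = np.linspace(-10, 10, 4001)
for a in (-1.0, 0.0, 2.0):
    print(f"alpha={a:+.1f}: ASN mass={np.trapz(asn_pdf(y, a), y):.4f}, "
          f"ASL mass={np.trapz(asl_pdf(y, a), y):.4f}")
print(f"GND(beta=1.5) mass={np.trapz(gnd_pdf(y, 1.5), y):.4f}")
```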
Obviously, the newly introduced model cannot be expected to provide the best fit compared with every other model, but it should at least provide a better fit than its own sub-models.

The remainder of the paper is organized as follows: In Section 2, we define the ASGN distribution, outline some special cases, and investigate some properties of the distribution. The cumulative distribution, survival and hazard rate functions are obtained in Section 3. We provide a general expansion for the moments of the ASGN distribution in Section 4. Skewness and kurtosis indices are presented in Section 5. A stochastic representation of this model is discussed in Section 6. We introduce the location-scale version of this distribution in Section 7. In Section 8, we focus on maximum likelihood estimation (MLE) and calculate the elements of the observed information matrix. A simulation study is performed in Section 9. An application of the ASGN distribution to a real data set is given in Section 10. Finally, Section 11 concludes the paper.

2. Alpha-skew generalized normal distribution

Definition 2.1.

If a random variable Y has the density function

f(y; \beta) = \frac{y^2 e^{-|y|^{\beta}/\beta}}{2\beta^{3/\beta - 1}\,\Gamma(3/\beta)}, \qquad y \in \mathbb{R},

where \beta > 0, then we say that Y follows the bimodal-generalized normal BGN(\beta) distribution.

Definition 2.2.

If a random variable X has the density function

f(x; \alpha, \beta) = \frac{\left[(1 - \alpha x)^2 + 1\right] e^{-|x|^{\beta}/\beta}}{2\beta^{1/\beta - 1}\left[\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)\right]}, \qquad x \in \mathbb{R},    (1)

where \alpha \in \mathbb{R} and \beta > 0, then we say that X follows the alpha-skew generalized normal distribution with parameters \alpha and \beta, which is denoted by X ~ ASGN(\alpha, \beta).

If X ~ ASGN(\alpha, \beta), the following properties follow directly from the definition:

- If \alpha = 0 and \beta = 1, then X ~ Laplace(0, 1).
- If \alpha = 0 and \beta = 2, then X ~ N(0, 1).
- If \beta = 1, then X ~ ASL(\alpha).
- If \beta = 2, then X ~ ASN(\alpha).
- If \alpha \to \pm\infty, then X converges in distribution to BGN(\beta).

Theorem 2.1.

The alpha-skew generalized normal density function has at most two modes.

Proof:

Differentiating (1), we have

f'(x) = \frac{e^{-|x|^{\beta}/\beta}\left[2\alpha(\alpha x - 1) - \mathrm{sgn}(x)\,|x|^{\beta - 1}\left((1 - \alpha x)^2 + 1\right)\right]}{2\beta^{1/\beta - 1}\left[\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)\right]}.

For general \beta this equation cannot be solved in closed form, so it suffices to examine the special cases. Putting \beta = 1 in the above equation, we get

f'(x; \alpha) =
\begin{cases}
-\dfrac{e^{-x}\left[\alpha^2 x^2 - 2\alpha(1 + \alpha)x + 2(1 + \alpha)\right]}{4(1 + \alpha^2)}, & x \ge 0, \\
\dfrac{e^{x}\left[\alpha^2 x^2 + 2\alpha(\alpha - 1)x + 2(1 - \alpha)\right]}{4(1 + \alpha^2)}, & x < 0.
\end{cases}    (2)

It is obvious that each expression above has at most two roots. The roots of the second expression occur at

x_1 = \frac{1 - \alpha - \sqrt{\alpha^2 - 1}}{\alpha}, \qquad x_2 = \frac{1 - \alpha + \sqrt{\alpha^2 - 1}}{\alpha},

and the roots of the first expression occur at

x_1^{*} = \frac{1 + \alpha - \sqrt{\alpha^2 - 1}}{\alpha}, \qquad x_2^{*} = \frac{1 + \alpha + \sqrt{\alpha^2 - 1}}{\alpha}.

When \alpha^2 - 1 \ge 0, we have the following cases:

a) for \alpha \ge 1: it is easily observed that x_2 \ge 0 (and is therefore not admissible), while x_1 \le 0 and x_1^{*}, x_2^{*} > 0;

b) for \alpha \le -1: it is observed that x_2^{*} \le 0 (and is therefore not admissible), while x_1, x_2 < 0 and x_1^{*} \ge 0.

Therefore, there are only three acceptable roots in each case, and hence f(x) can have at most two modes. For -1 < \alpha < 1, f'(x) has no real root; it is positive for x < 0 and negative for x > 0, so in this case f is unimodal.

Putting \beta = 2, we obtain

f'(x) = \frac{e^{-x^2/2}\left[-\alpha^2 x^3 + 2\alpha x^2 + 2(\alpha^2 - 1)x - 2\alpha\right]}{\sqrt{2\pi}\,(\alpha^2 + 2)},

so f'(x) has at most three roots and, again, f has at most two modes. For other values of \beta, more complicated mathematical computations are needed, but the claim can be checked by drawing plots of the density function for different values of \beta.

Plots of the alpha-skew generalized normal density function for selected parameter values are given in Figure 1.

Figure 1. Plots of the alpha-skew generalized normal density function for different values of \alpha and \beta
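As a complement to Theorem 2.1, the following Python sketch (my own illustration, not code from the paper; asgn_pdf and count_modes are hypothetical helper names) evaluates the density (1) as reconstructed above, verifies numerically that it integrates to one, and counts its modes on a grid for a few choices of the parameters.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def asgn_pdf(x, alpha, beta):
    """ASGN(alpha, beta) density as given in equation (1)."""
    x = np.asarray(x, dtype=float)
    norm = 2 * beta ** (1 / beta - 1) * (alpha ** 2 * beta ** (2 / beta) * gamma(3 / beta)
                                         + 2 * gamma(1 / beta))
    return ((1 - alpha * x) ** 2 + 1) * np.exp(-np.abs(x) ** beta / beta) / norm

def count_modes(alpha, beta, grid=np.linspace(-8, 8, 4001)):
    """Count local maxima of the density on a fine grid (numerical check of Theorem 2.1)."""
    f = asgn_pdf(grid, alpha, beta)
    interior = (f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])
    return int(interior.sum())

for alpha, beta in [(0.5, 1.0), (2.0, 1.0), (-1.5, 2.0), (1.0, 3.0)]:
    total, _ = quad(lambda x: asgn_pdf(x, alpha, beta), -np.inf, np.inf)
    print(f"alpha={alpha:+.1f}, beta={beta:.1f}: integral={total:.4f}, modes={count_modes(alpha, beta)}")
```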
3. The cumulative distribution, survival and hazard rate functions

If X ~ ASGN(\alpha, \beta), then the cumulative distribution function of X is given by

F(x; \alpha, \beta) =
\begin{cases}
\dfrac{\alpha^2\beta^{2/\beta}\,\Gamma\!\left(\frac{3}{\beta}, \frac{(-x)^{\beta}}{\beta}\right) + 2\alpha\beta^{1/\beta}\,\Gamma\!\left(\frac{2}{\beta}, \frac{(-x)^{\beta}}{\beta}\right) + 2\,\Gamma\!\left(\frac{1}{\beta}, \frac{(-x)^{\beta}}{\beta}\right)}{2\left[\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)\right]}, & x < 0, \\
\dfrac{2\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 4\,\Gamma(1/\beta) - \alpha^2\beta^{2/\beta}\,\Gamma\!\left(\frac{3}{\beta}, \frac{x^{\beta}}{\beta}\right) + 2\alpha\beta^{1/\beta}\,\Gamma\!\left(\frac{2}{\beta}, \frac{x^{\beta}}{\beta}\right) - 2\,\Gamma\!\left(\frac{1}{\beta}, \frac{x^{\beta}}{\beta}\right)}{2\left[\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)\right]}, & x \ge 0,
\end{cases}

where the (upper) incomplete gamma function is defined as \Gamma(a, x) = \int_x^{\infty} t^{a-1} e^{-t}\,dt.

The survival and hazard rate functions of a random variable from the ASGN distribution are as follows. If x < 0,

S(x) = \frac{2\left[\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)\right] - \alpha^2\beta^{2/\beta}\,\Gamma\!\left(\frac{3}{\beta}, \frac{(-x)^{\beta}}{\beta}\right) - 2\alpha\beta^{1/\beta}\,\Gamma\!\left(\frac{2}{\beta}, \frac{(-x)^{\beta}}{\beta}\right) - 2\,\Gamma\!\left(\frac{1}{\beta}, \frac{(-x)^{\beta}}{\beta}\right)}{2\left[\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)\right]},

H(x) = \frac{\left[(1 - \alpha x)^2 + 1\right] e^{-(-x)^{\beta}/\beta}}{\beta^{1/\beta - 1}\left\{2\left[\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)\right] - \alpha^2\beta^{2/\beta}\,\Gamma\!\left(\frac{3}{\beta}, \frac{(-x)^{\beta}}{\beta}\right) - 2\alpha\beta^{1/\beta}\,\Gamma\!\left(\frac{2}{\beta}, \frac{(-x)^{\beta}}{\beta}\right) - 2\,\Gamma\!\left(\frac{1}{\beta}, \frac{(-x)^{\beta}}{\beta}\right)\right\}},

and if x \ge 0,

S(x) = \frac{\alpha^2\beta^{2/\beta}\,\Gamma\!\left(\frac{3}{\beta}, \frac{x^{\beta}}{\beta}\right) - 2\alpha\beta^{1/\beta}\,\Gamma\!\left(\frac{2}{\beta}, \frac{x^{\beta}}{\beta}\right) + 2\,\Gamma\!\left(\frac{1}{\beta}, \frac{x^{\beta}}{\beta}\right)}{2\left[\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)\right]},

H(x) = \frac{\left[(1 - \alpha x)^2 + 1\right] e^{-x^{\beta}/\beta}}{\beta^{1/\beta - 1}\left[\alpha^2\beta^{2/\beta}\,\Gamma\!\left(\frac{3}{\beta}, \frac{x^{\beta}}{\beta}\right) - 2\alpha\beta^{1/\beta}\,\Gamma\!\left(\frac{2}{\beta}, \frac{x^{\beta}}{\beta}\right) + 2\,\Gamma\!\left(\frac{1}{\beta}, \frac{x^{\beta}}{\beta}\right)\right]}.

4. Moments and moment generating function

If X ~ ASGN(\alpha, \beta), then the r-th moment of X can be obtained as

E(X^r) = \frac{\beta^{r/\beta}\left[\left((-1)^r + 1\right)\left(\alpha^2\beta^{2/\beta}\,\Gamma\!\left(\tfrac{r+3}{\beta}\right) + 2\,\Gamma\!\left(\tfrac{r+1}{\beta}\right)\right) - 2\alpha\beta^{1/\beta}\left(1 - (-1)^r\right)\Gamma\!\left(\tfrac{r+2}{\beta}\right)\right]}{2\left[\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)\right]},

and the moment generating function of X can be written as

M_X(t) = E\!\left(e^{tX}\right) = \sum_{n=0}^{\infty} \frac{t^n\, E(X^n)}{n!}.

By separating the even and odd terms of this series, we get

M_X(t) = \sum_{n=0}^{\infty} \frac{\beta^{2n/\beta}\, t^{2n}\left[2\alpha^2\beta^{2/\beta}\,\Gamma\!\left(\tfrac{2n+3}{\beta}\right) + 4\,\Gamma\!\left(\tfrac{2n+1}{\beta}\right)\right] - 4\alpha\beta^{(2n+2)/\beta}\,\dfrac{t^{2n+1}}{2n+1}\,\Gamma\!\left(\tfrac{2n+3}{\beta}\right)}{(2n)!\; 2\left[\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)\right]}.

5. The indices of skewness and kurtosis

Suppose X ~ ASGN(\alpha, \beta). From the first four moments given below, the skewness and kurtosis indices \sqrt{\beta_1} and \beta_2 of the ASGN distribution can be obtained:

E(X) = \frac{-2\alpha\beta^{2/\beta}\,\Gamma(3/\beta)}{\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)},

E(X^2) = \frac{\beta^{2/\beta}\left[\alpha^2\beta^{2/\beta}\,\Gamma(5/\beta) + 2\,\Gamma(3/\beta)\right]}{\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)},

E(X^3) = \frac{-2\alpha\beta^{4/\beta}\,\Gamma(5/\beta)}{\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)},

E(X^4) = \frac{\beta^{4/\beta}\left[\alpha^2\beta^{2/\beta}\,\Gamma(7/\beta) + 2\,\Gamma(5/\beta)\right]}{\alpha^2\beta^{2/\beta}\,\Gamma(3/\beta) + 2\,\Gamma(1/\beta)}.
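The closed-form moments above are easy to evaluate numerically. The sketch below (illustrative, based on the formulas as reconstructed here; asgn_raw_moment and asgn_skew_kurt are hypothetical names) computes E(X^r) and the resulting skewness and kurtosis indices; for alpha = 0, beta = 2 it reproduces the standard-normal values 0 and 3.

```python
import numpy as np
from scipy.special import gamma

def asgn_raw_moment(r, alpha, beta):
    """r-th raw moment E(X^r) of ASGN(alpha, beta) from the closed form in Section 4."""
    D = alpha ** 2 * beta ** (2 / beta) * gamma(3 / beta) + 2 * gamma(1 / beta)
    even = ((-1) ** r + 1) * (alpha ** 2 * beta ** (2 / beta) * gamma((r + 3) / beta)
                              + 2 * gamma((r + 1) / beta))
    odd = 2 * alpha * beta ** (1 / beta) * (1 - (-1) ** r) * gamma((r + 2) / beta)
    return beta ** (r / beta) * (even - odd) / (2 * D)

def asgn_skew_kurt(alpha, beta):
    """Skewness sqrt(beta1) and kurtosis beta2 from the first four raw moments."""
    m1, m2, m3, m4 = (asgn_raw_moment(r, alpha, beta) for r in (1, 2, 3, 4))
    var = m2 - m1 ** 2
    mu3 = m3 - 3 * m1 * m2 + 2 * m1 ** 3                      # third central moment
    mu4 = m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4   # fourth central moment
    return mu3 / var ** 1.5, mu4 / var ** 2

for alpha, beta in [(0.0, 2.0), (1.0, 2.0), (2.0, 1.0)]:
    g1, b2 = asgn_skew_kurt(alpha, beta)
    print(f"alpha={alpha:+.1f}, beta={beta:.1f}: skewness={g1:+.3f}, kurtosis={b2:.3f}")
```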
Recommended publications
  • Summary Notes for Survival Analysis
    Summary Notes for Survival Analysis. Instructor: Mei-Cheng Wang, Department of Biostatistics, Johns Hopkins University, Spring 2006.

    1 Introduction

    1.1 Introduction

    Definition: A failure time (survival time, lifetime), T, is a nonnegative-valued random variable. For most of the applications, the value of T is the time from a certain event to a failure event. For example, a) in a clinical trial, time from start of treatment to a failure event; b) time from birth to death = age at death; c) to study an infectious disease, time from onset of infection to onset of disease; d) to study a genetic disease, time from birth to onset of a disease = onset age.

    1.2 Definitions

    Definition. Cumulative distribution function F(t): F(t) = Pr(T ≤ t).

    Definition. Survival function S(t): S(t) = Pr(T > t) = 1 − Pr(T ≤ t). Characteristics of S(t): a) S(t) = 1 if t < 0; b) S(∞) = lim_{t→∞} S(t) = 0; c) S(t) is non-increasing in t. In general, the survival function S(t) provides useful summary information, such as the median survival time, t-year survival rate, etc.

    Definition. Density function f(t). a) If T is a discrete random variable, f(t) = Pr(T = t). b) If T is (absolutely) continuous, the density function is f(t) = lim_{Δt→0+} Pr(failure occurring in [t, t + Δt)) / Δt = rate of occurrence of failure at t. Note that f(t) = dF(t)/dt = −dS(t)/dt.

    Definition. Hazard function λ(t).
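    As a quick illustration of these definitions (not part of the notes), the following Python snippet estimates S(t) and F(t) empirically from simulated exponential failure times and compares them with the exact survival function.

```python
import numpy as np

rng = np.random.default_rng(0)
t_obs = rng.exponential(scale=2.0, size=100_000)  # simulated failure times with mean 2

def survival_empirical(t, sample):
    """Empirical S(t) = Pr(T > t)."""
    return np.mean(sample > t)

for t in (0.5, 1.0, 2.0):
    s_hat = survival_empirical(t, t_obs)
    s_true = np.exp(-t / 2.0)  # exact S(t) for an exponential with mean 2
    print(f"t={t}: empirical S(t)={s_hat:.3f}, true S(t)={s_true:.3f}, F(t)={1 - s_hat:.3f}")
```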
  • A Skew Extension of the T-Distribution, with Applications
    J. R. Statist. Soc. B (2003) 65, Part 1, pp. 159–174. A skew extension of the t-distribution, with applications. M. C. Jones, The Open University, Milton Keynes, UK, and M. J. Faddy, University of Birmingham, UK. [Received March 2000. Final revision July 2002]

    Summary. A tractable skew t-distribution on the real line is proposed. This includes as a special case the symmetric t-distribution, and otherwise provides skew extensions thereof. The distribution is potentially useful both for modelling data and in robustness studies. Properties of the new distribution are presented. Likelihood inference for the parameters of this skew t-distribution is developed. Application is made to two data modelling examples.

    Keywords: Beta distribution; Likelihood inference; Robustness; Skewness; Student's t-distribution

    1. Introduction

    Student's t-distribution occurs frequently in statistics. Its usual derivation and use is as the sampling distribution of certain test statistics under normality, but increasingly the t-distribution is being used in both frequentist and Bayesian statistics as a heavy-tailed alternative to the normal distribution when robustness to possible outliers is a concern. See Lange et al. (1989) and Gelman et al. (1995) and references therein. It will often be useful to consider a further alternative to the normal or t-distribution which is both heavy tailed and skew. To this end, we propose a family of distributions which includes the symmetric t-distributions as special cases, and also includes extensions of the t-distribution, still taking values on the whole real line, with non-zero skewness. Let a > 0 and b > 0 be parameters.
  • Supplementary Materials
    Int. J. Environ. Res. Public Health 2018, 15, 2042 — Supplementary Materials

    S1) Re-parametrization of the log-Laplace distribution. The three-parameter log-Laplace distribution LL(δ, a, b) of the relative risk μ is given by the probability density function f(μ) = (ab/(a+b)) (1/δ) (μ/δ)^{b−1} for 0 < μ < δ, and f(μ) = (ab/(a+b)) (1/δ) (δ/μ)^{a+1} for μ ≥ δ. Let b = (1−τ)/σ, a = τ/σ and δ = exp(μ_τ). Hence, the probability density function of LL(μ_τ, τ, σ) can be re-written as f(μ) = (1/μ)(ab/(a+b)) exp(b(log μ − μ_τ)) if log μ < μ_τ, and (1/μ)(ab/(a+b)) exp(a(μ_τ − log μ)) if log μ ≥ μ_τ; that is, f(μ) = (1/μ)(ab/(a+b)) exp(−b|log μ − μ_τ|) if log μ < μ_τ, and (1/μ)(ab/(a+b)) exp(−a|log μ − μ_τ|) if log μ ≥ μ_τ; equivalently, f(μ) = (τ(1−τ)/(σμ)) exp(−(1−τ)|log μ − μ_τ|/σ) if log μ < μ_τ, and (τ(1−τ)/(σμ)) exp(−τ|log μ − μ_τ|/σ) if log μ ≥ μ_τ.

    S2) Bayesian quantile estimation. Let Y_i be the continuous response variable for unit i and X_i be the corresponding vector of covariates with the first element equal to one. At a given quantile level τ ∈ (0,1), the τ-th conditional quantile of Y_i given X_i is Q_τ(Y_i | X_i) = X_i^T β(τ), where β(τ) is a vector of quantile coefficients. Contrary to mean regression, which generally aims to minimize a squared loss function, quantile regression is linked to a special class of check (loss) functions. The τ-th conditional quantile can be estimated by any solution β̂(τ) such that β̂(τ) = argmin_β Σ_i ρ_τ(Y_i − X_i^T β(τ)), where ρ_τ(z) = z(τ − I(z < 0)) is the quantile loss function given in (1) and I(·) is the indicator function.
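    For illustration only (this snippet is not from the supplementary materials), the check loss ρ_τ(z) = z(τ − I(z < 0)) can be minimized numerically to recover the τ-th quantile of a sample:

```python
import numpy as np

def check_loss(z, tau):
    """Quantile (check) loss rho_tau(z) = z * (tau - I(z < 0))."""
    return z * (tau - (z < 0))

rng = np.random.default_rng(1)
y = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)

tau = 0.9
grid = np.linspace(y.min(), y.max(), 2000)
losses = [check_loss(y - q, tau).mean() for q in grid]
q_hat = grid[int(np.argmin(losses))]
print(f"check-loss minimizer ~ {q_hat:.3f}, empirical 90% quantile = {np.quantile(y, tau):.3f}")
```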
  • 1. How Different Is the T Distribution from the Normal?
    Statistics 101–106, Lecture 7 (20 October 98), © David Pollard, Page 1. Read M&M §7.1 and §7.2, ignoring starred parts. Reread M&M §3.2. The effects of estimated variances on normal approximations. t-distributions. Comparison of two means: pooling of estimates of variances, or paired observations.

    In Lecture 6, when discussing comparison of two Binomial proportions, I was content to estimate unknown variances when calculating statistics that were to be treated as approximately normally distributed. You might have worried about the effect of variability of the estimate. W. S. Gosset ("Student") considered a similar problem in a very famous 1908 paper, where the role of Student's t-distribution was first recognized. Gosset discovered that the effect of estimated variances could be described exactly in a simplified problem where n independent observations X1, ..., Xn are taken from a normal distribution, N(μ, σ). The sample mean, X̄ = (X1 + ... + Xn)/n, has a N(μ, σ/√n) distribution. The random variable Z = (X̄ − μ)/(σ/√n) has a standard normal distribution. If we estimate σ² by the sample variance, s² = Σ_i (Xi − X̄)²/(n − 1), then the resulting statistic, T = (X̄ − μ)/(s/√n), no longer has a normal distribution. It has a t-distribution on n − 1 degrees of freedom.

    Remark. I have written T, instead of the t used by M&M page 505. I find it causes confusion that t refers to both the name of the statistic and the name of its distribution. As you will soon see, the estimation of the variance has the effect of spreading out the distribution a little beyond what it would be if σ were used.
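    A small simulation (mine, not from the lecture notes) makes Gosset's point visible: replacing σ by the sample standard deviation s spreads out the distribution of the statistic, so T exceeds large thresholds more often than Z.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 0.0, 1.0, 5
reps = 200_000

x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)

z = (xbar - mu) / (sigma / np.sqrt(n))  # known variance: standard normal
t = (xbar - mu) / (s / np.sqrt(n))      # estimated variance: t on n-1 degrees of freedom

# heavier tails of T show up in the exceedance probabilities
for c in (2.0, 3.0):
    print(f"P(|Z|>{c}) ~ {np.mean(np.abs(z) > c):.4f}, P(|T|>{c}) ~ {np.mean(np.abs(t) > c):.4f}")
```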
  • SOLUTION for HOMEWORK 1, STAT 3372 Welcome to Your First
    SOLUTION FOR HOMEWORK 1, STAT 3372. Welcome to your first homework. Remember that you are always allowed to use the Tables allowed on SOA/CAS exam 4. You can find them on my webpage. Another remark: 6 minutes per problem is your "speed". Now you probably will not be able to solve your problems so fast, but this is the goal. Try to find mistakes (and get extra points) in my solutions. Typically they are silly arithmetic mistakes (not methodological ones). They allow me to check that you did your HW on your own. Please do not e-mail me about your findings; just mention them on the first page of your solution and count extra points. You can use them to compensate for wrongly solved problems in this homework. They cannot be counted beyond 10 maximum points for each homework. Now let us look at your problems.

    General Remark: It is always prudent to begin with writing down what is given and what you need to establish. This step can help you to understand/guess a possible solution. Also, your solution should be written neatly so, if you have extra time, it is easier to check your solution.

    1. Problem 2.4. Given: The hazard function is h(x) = (A + e^{2x}) I(x ≥ 0) and S(0.4) = 0.5. Find: A. Solution: I need to relate h(x) and S(x) to find A. There is a formula on page 17, also discussed in class, which is helpful here. Namely, S(x) = exp(−∫_0^x h(u) du). It allows us to find A.
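    As an illustrative numerical check of this relation (not part of the posted solution), one can solve S(0.4) = 0.5 for A directly:

```python
import numpy as np
from scipy.integrate import quad

# Solve S(0.4) = 0.5 with h(u) = A + exp(2u) on u >= 0, using S(x) = exp(-int_0^x h(u) du)
target_x, target_s = 0.4, 0.5

# int_0^x (A + e^{2u}) du = A*x + (e^{2x} - 1)/2, so A = (-log(S) - (e^{2x} - 1)/2) / x
A = (-np.log(target_s) - (np.exp(2 * target_x) - 1) / 2) / target_x
print(f"A = {A:.4f}")

# sanity check by numerical integration
integral, _ = quad(lambda u: A + np.exp(2 * u), 0, target_x)
print(f"S(0.4) recomputed = {np.exp(-integral):.4f}")
```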
  • Study of Generalized Lomax Distribution and Change Point Problem
    STUDY OF GENERALIZED LOMAX DISTRIBUTION AND CHANGE POINT PROBLEM. Amani Alghamdi. A Dissertation Submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, August 2018. Committee: Arjun K. Gupta, Committee Co-Chair; Wei Ning, Committee Co-Chair; Jane Chang, Graduate Faculty Representative; John Chen. Copyright © 2018 Amani Alghamdi. All rights reserved.

    ABSTRACT. Arjun K. Gupta and Wei Ning, Committee Co-Chairs.

    Generalizations of univariate distributions are often of interest as models for real-life phenomena. These generalized distributions are very useful in many fields such as medicine, physics, engineering and biology. The Lomax distribution (Pareto-II) is one of the well-known univariate distributions that is considered as an alternative to the exponential, gamma, and Weibull distributions for heavy-tailed data. However, this distribution does not grant great flexibility in modeling data. In this dissertation, we introduce a generalization of the Lomax distribution called the Rayleigh Lomax (RL) distribution, using the form obtained by El-Bassiouny et al. (2015). This distribution provides a great fit in modeling a wide range of real data sets. It is a very flexible distribution that is related to some of the useful univariate distributions such as the exponential, Weibull and Rayleigh distributions. Moreover, this new distribution can also be transformed to a lifetime distribution which is applicable in many situations. For example, we obtain the inverse estimation and confidence intervals in the case of a progressively Type-II right censored situation. We also apply the Schwarz information criterion (SIC) and modified information criterion (MIC) approaches to detect changes in the parameters of the RL distribution.
  • A Study of Non-Central Skew T Distributions and Their Applications in Data Analysis and Change Point Detection
    A STUDY OF NON-CENTRAL SKEW T DISTRIBUTIONS AND THEIR APPLICATIONS IN DATA ANALYSIS AND CHANGE POINT DETECTION. Abeer M. Hasan. A Dissertation Submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, August 2013. Committee: Arjun K. Gupta, Co-advisor; Wei Ning, Advisor; Mark Earley, Graduate Faculty Representative; Junfeng Shang. Copyright © August 2013 Abeer M. Hasan. All rights reserved.

    ABSTRACT. Arjun K. Gupta, Co-advisor; Wei Ning, Advisor.

    Over the past three decades there has been a growing interest in searching for distribution families that are suitable to analyze skewed data with excess kurtosis. The search started with numerous papers on the skew normal distribution. Multivariate t distributions started to catch attention shortly after the development of the multivariate skew normal distribution. Many researchers proposed alternative methods to generalize the univariate t distribution to the multivariate case. Recently, the skew t distribution started to become popular in research. Skew t distributions provide more flexibility and better ability to accommodate long-tailed data than skew normal distributions. In this dissertation, a new non-central skew t distribution is studied and its theoretical properties are explored. Applications of the proposed non-central skew t distribution in data analysis and model comparisons are studied. An extension of our distribution to the multivariate case is presented and properties of the multivariate non-central skew t distribution are discussed. We also discuss the distribution of quadratic forms of the non-central skew t distribution. In the last chapter, the change point problem of the non-central skew t distribution is discussed under different settings.
  • 1 One Parameter Exponential Families
    1 One parameter exponential families

    The world of exponential families bridges the gap between the Gaussian family and general distributions. Many properties of Gaussians carry through to exponential families in a fairly precise sense.

    • In the Gaussian world, there are exact small-sample distributional results (i.e. t, F, χ²).
    • In the exponential family world, there are approximate distributional results (i.e. deviance tests).
    • In the general setting, we can only appeal to asymptotics.

    A one-parameter exponential family, F, is a one-parameter family of distributions of the form P_η(dx) = exp(η · t(x) − Λ(η)) P_0(dx) for some probability measure P_0. The parameter η is called the natural or canonical parameter and the function Λ is called the cumulant generating function, and is simply the normalization needed to make f_η(x) = dP_η/dP_0(x) = exp(η · t(x) − Λ(η)) a proper probability density. The random variable t(X) is the sufficient statistic of the exponential family. Note that P_0 does not have to be a distribution on ℝ, but these are of course the simplest examples.

    1.0.1 A first example: Gaussian with linear sufficient statistic

    Consider the standard normal distribution P_0(A) = ∫_A e^{−z²/2}/√(2π) dz and let t(x) = x. Then, the exponential family is P_η(dx) ∝ e^{η·x − x²/2}/√(2π) dx, and we see that Λ(η) = η²/2.

    import numpy as np
    import matplotlib.pyplot as plt

    # plot the cumulant generating function Lambda(eta) = eta^2 / 2
    eta = np.linspace(-2, 2, 101)
    CGF = eta**2 / 2.
    plt.plot(eta, CGF)
    A = plt.gca()
    A.set_xlabel(r'$\eta$', size=20)
    A.set_ylabel(r'$\Lambda(\eta)$', size=20)
    f = plt.gcf()

    Thus, the exponential family in this setting is the collection F = {N(η, 1) : η ∈ ℝ}.

    1.0.2 Normal with quadratic sufficient statistic on ℝ^d

    As a second example, take P_0 = N(0, I_{d×d}), i.e.
  • On the Meaning and Use of Kurtosis
    Psychological Methods, 1997, Vol. 2, No. 3, 292-307. Copyright 1997 by the American Psychological Association, Inc. 1082-989X/97/$3.00. On the Meaning and Use of Kurtosis. Lawrence T. DeCarlo, Fordham University.

    For symmetric unimodal distributions, positive kurtosis indicates heavy tails and peakedness relative to the normal distribution, whereas negative kurtosis indicates light tails and flatness. Many textbooks, however, describe or illustrate kurtosis incompletely or incorrectly. In this article, kurtosis is illustrated with well-known distributions, and aspects of its interpretation and misinterpretation are discussed. The role of kurtosis in testing univariate and multivariate normality; as a measure of departures from normality; in issues of robustness, outliers, and bimodality; in generalized tests and estimators; as well as limitations of and alternatives to the kurtosis measure β₂, are discussed.

    It is typically noted in introductory statistics courses that distributions can be characterized in terms of central tendency, variability, and shape. With respect to shape, virtually every textbook defines and illustrates skewness. On the other hand, another aspect of shape, which is kurtosis, is either not discussed or, worse yet, is often described or illustrated incorrectly. Kurtosis is also frequently not reported in research articles, in spite of the fact that virtually every statistical package provides a measure of kurtosis. [...] standard deviation. The normal distribution has a kurtosis of 3, and β₂ − 3 is often used so that the reference normal distribution has a kurtosis of zero (β₂ − 3 is sometimes denoted as γ₂). A sample counterpart to β₂ can be obtained by replacing the population moments with the sample moments, which gives b₂ = [Σ(Xᵢ − X̄)⁴/n] / [(Σ(Xᵢ − X̄)²/n)²].
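    For illustration (not from the article), the sample counterpart b₂ above is easy to compute; simulated normal, Laplace and uniform samples recover the familiar population kurtosis values 3, 6 and 1.8.

```python
import numpy as np

def sample_kurtosis(x):
    """b2 = [sum (x_i - xbar)^4 / n] / [ (sum (x_i - xbar)^2 / n)^2 ]."""
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    return np.mean(dev ** 4) / np.mean(dev ** 2) ** 2

rng = np.random.default_rng(3)
print(f"normal sample:  b2 ~ {sample_kurtosis(rng.normal(size=100_000)):.3f}  (population value 3)")
print(f"Laplace sample: b2 ~ {sample_kurtosis(rng.laplace(size=100_000)):.3f}  (population value 6)")
print(f"uniform sample: b2 ~ {sample_kurtosis(rng.uniform(size=100_000)):.3f}  (population value 1.8)")
```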
  • A Tail Quantile Approximation Formula for the Student T and the Symmetric Generalized Hyperbolic Distribution
    A service of econstor (Leibniz-Informationszentrum Wirtschaft / Leibniz Information Centre for Economics).

    Schlüter, Stephan; Fischer, Matthias J. Working Paper: A tail quantile approximation formula for the student t and the symmetric generalized hyperbolic distribution. IWQW Discussion Papers, No. 05/2009. Provided in Cooperation with: Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.

    Suggested Citation: Schlüter, Stephan; Fischer, Matthias J. (2009): A tail quantile approximation formula for the student t and the symmetric generalized hyperbolic distribution, IWQW Discussion Papers, No. 05/2009, Friedrich-Alexander-Universität Erlangen-Nürnberg, Institut für Wirtschaftspolitik und Quantitative Wirtschaftsforschung (IWQW), Nürnberg.

    This version is available at: http://hdl.handle.net/10419/29554
  • Chapter 5 Sections
    Chapter 5 sections

    Discrete univariate distributions:
    5.2 Bernoulli and Binomial distributions
    5.3 Hypergeometric distributions (just skim)
    5.4 Poisson distributions
    5.5 Negative Binomial distributions (just skim)

    Continuous univariate distributions:
    5.6 Normal distributions
    5.7 Gamma distributions
    5.8 Beta distributions (just skim)

    Multivariate distributions:
    5.9 Multinomial distributions (just skim)
    5.10 Bivariate normal distributions

    5.1 Introduction: Families of distributions. How: parameter and parameter space; pf/pdf and cdf, with new notation f(x | parameters); mean, variance and the m.g.f. ψ(t); features, connections to other distributions, approximation. Why: the reasoning behind a distribution; a natural justification for certain experiments; a model for the uncertainty in an experiment. "All models are wrong, but some are useful" – George Box.

    5.2 Bernoulli and Binomial distributions

    Def: Bernoulli distributions – Bernoulli(p). A r.v. X has the Bernoulli distribution with parameter p if P(X = 1) = p and P(X = 0) = 1 − p. The pf of X is f(x | p) = p^x (1 − p)^{1−x} for x = 0, 1, and f(x | p) = 0 otherwise. Parameter space: p ∈ [0, 1]. In an experiment with only two possible outcomes, "success" and "failure", let X = number of successes. Then X ~ Bernoulli(p), where p is the probability of success. E(X) = p, Var(X) = p(1 − p) and ψ(t) = E(e^{tX}) = p e^t + (1 − p). The cdf is F(x | p) = 0 for x < 0; 1 − p for 0 ≤ x < 1; 1 for x ≥ 1.

    Def: Binomial distributions – Binomial(n, p). A r.v.
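    A tiny illustrative check of these Bernoulli formulas (not from the lecture slides):

```python
import numpy as np

p = 0.3

def bernoulli_pmf(x, p):
    """f(x | p) = p^x (1-p)^(1-x) for x in {0, 1}, and 0 otherwise."""
    return p ** x * (1 - p) ** (1 - x) if x in (0, 1) else 0.0

def bernoulli_mgf(t, p):
    """psi(t) = E[e^{tX}] = p e^t + (1 - p)."""
    return p * np.exp(t) + (1 - p)

mean = sum(x * bernoulli_pmf(x, p) for x in (0, 1))
var = sum((x - mean) ** 2 * bernoulli_pmf(x, p) for x in (0, 1))
print(f"E(X)={mean:.2f} (=p), Var(X)={var:.2f} (=p(1-p)), psi(1)={bernoulli_mgf(1.0, p):.3f}")
```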
  • Univariate and Multivariate Skewness and Kurtosis 1
    Univariate and Multivariate Skewness and Kurtosis for Measuring Nonnormality: Prevalence, Influence and Estimation. Meghan K. Cain, Zhiyong Zhang, and Ke-Hai Yuan, University of Notre Dame.

    Author Note: This research is supported by a grant from the U.S. Department of Education (R305D140037). However, the contents of the paper do not necessarily represent the policy of the Department of Education, and you should not assume endorsement by the Federal Government. Correspondence concerning this article can be addressed to Meghan Cain ([email protected]), Ke-Hai Yuan ([email protected]), or Zhiyong Zhang ([email protected]), Department of Psychology, University of Notre Dame, 118 Haggar Hall, Notre Dame, IN 46556.

    Abstract

    Nonnormality of univariate data has been extensively examined previously (Blanca et al., 2013; Micceri, 1989). However, less is known of the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74% of univariate distributions and 68% of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting type I error rates were 17% in a t-test and 30% in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances.