Use of Proc Iml to Calculate L-Moments for the Univariate Distributional Shape Parameters Skewness and Kurtosis


NESUG '93 Proceedings, Statistics, p. 573

Michael A. Walega
Berlex Laboratories, Wayne, New Jersey

Introduction

Exploratory data analysis statistics, such as those generated by the SAS procedure PROC UNIVARIATE (1990), are useful tools to characterize the underlying distribution of data prior to more rigorous statistical analyses. Assessment of the distributional shape of data is usually accomplished by careful examination of the values of the third and fourth central moments, skewness and kurtosis. However, when the sample size is small or the underlying distribution is non-normal, the information obtained from the sample skewness and kurtosis can be misleading.

One alternative to the central moment shape statistics is the use of linear combinations of order statistics (L-moments) to examine the distributional shape characteristics of data. L-moments have several theoretical advantages over the central moment shape statistics: characterization of a wider range of distributions, robustness to outliers, and more accurate estimates in small sample sizes.

This paper focuses on the development of a macro program that uses SAS/IML (1989) to generate the central moment and L-moment distributional shape parameters. In addition, the results of simulations, conducted with various sample sizes and distributions, will be presented.

Background

Largely through the influence of John Tukey's work (1977), statisticians have increasingly emphasized the exploratory analysis of data prior to more formal statistical analyses (t-tests, ANOVA, etc.). Tukey has suggested that to fully understand the nature of a variable and its measurement, characteristics other than the central tendency (mean) and variability (standard deviation) need to be examined. Many classical statistical tests rely on the assumption that the underlying distribution of the data (or residuals) is Gaussian. Bickel (1988) and Van der Laan and Verdooren (1987) discuss the concept of robustness and how it pertains to the assumption of normality.

As discussed by Glass et al. (1972), incorrect conclusions may be reached when the normality assumption is not valid, especially when one-tail tests are employed or the sample size or significance level are very small. Hopkins and Weeks (1990) also discuss the effects of highly non-normal data on hypothesis testing of variances. Thus, it is apparent that examination of the skewness (departure from symmetry) and kurtosis (deviation from a normal curve) is an important component of exploratory data analyses.

Various methods to estimate skewness and kurtosis have been proposed (MacGillivray and Balanda, 1988). For many years, the conventional coefficients of skewness and kurtosis, γ and κ (Hosking, 1990), have been used to describe the shape characteristics of distributions. However, as pointed out by Hosking (1990) and Royston (1992), these coefficients are not without limitations. Both are sensitive to minute changes in the tails of a distribution, susceptible to moderate outliers, and biased in small to moderately sized samples from skew distributions. Also, the information conveyed by the third and fourth central moments with regard to the shape of a distribution can be difficult to assess. Thus, it would be appropriate to determine if other, more robust measures of skewness and kurtosis can be used to assess the shape of a distribution.

L-moments

One more robust measure is the use of linear combinations of order statistics, or L-moments. In theory, L-moments are less prone to the effects of sampling variability as compared to conventional moments.
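As a concrete illustration of the outlier sensitivity noted above, the conventional moment-based coefficients can be sketched in a few lines of Python. This is an illustrative translation only (the paper itself works in SAS), using the population-style definitions γ = E(X − μ)³/σ³ and κ = E(X − μ)⁴/σ⁴ with sample moments plugged in:

```python
# Illustrative sketch (not the paper's SAS macro): conventional moment-based
# skewness and kurtosis, and their sensitivity to a single outlier.
def central_moment(x, k):
    n = len(x)
    m = sum(x) / n
    return sum((v - m) ** k for v in x) / n

def conventional_shape(x):
    """Return (skewness g1, kurtosis g2) computed from sample central moments."""
    m2 = central_moment(x, 2)
    g1 = central_moment(x, 3) / m2 ** 1.5   # skewness: 0 for symmetric data
    g2 = central_moment(x, 4) / m2 ** 2     # kurtosis: 3 for a normal distribution
    return g1, g2

clean = [1.0, 2.0, 3.0, 4.0, 5.0]
spiked = clean + [50.0]                     # one moderate outlier
print(conventional_shape(clean))            # → (0.0, 1.7)
print(conventional_shape(spiked))           # both coefficients inflate sharply
```

A single added point dominates both coefficients because the deviations enter at the third and fourth powers, which is exactly the tail sensitivity the paper is motivating against.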
Hosking (1990) provides an excellent overview of the theory behind the derivation and application of L-moments as summary statistics for univariate probability distributions. Royston (1992) compares the properties of the conventional shape parameters to their L-moment counterparts for two lognormal distributions. Rather than discuss the detailed theory behind L-moments, the reader is referred to the two aforementioned papers. Instead, a brief overview of the development of the equations necessary to apply L-moments is described below. As with the paper by Royston, the notation of Hosking (1990) will be employed.

For the random variables X₁, ..., Xₙ of sample size n drawn from the distribution of a random variable X with mean μ and variance σ², let X_{1:n} ≤ ... ≤ X_{n:n} be the order statistics such that the L-moments of X are defined by

    λ_r = (1/r) Σ_{k=0}^{r−1} (−1)^k C(r−1, k) E(X_{r−k:r}),   r = 1, 2, ...,

where λ_r is the r-th L-moment of a distribution, C(r−1, k) is the binomial coefficient, and E(X_{i:r}) is the expected value of the i-th smallest observation in a sample of size r.

The first four central moment shape parameters of a random variable X can be written as

    μ = E(X),
    σ² = E(X − μ)²,
    γ = E(X − μ)³ / σ³, and
    κ = E(X − μ)⁴ / σ⁴.

In a similar fashion, the first four L-moments of a random variable X can be written as

    λ₁ = E(X),
    λ₂ = (1/2) E(X_{2:2} − X_{1:2}),
    λ₃ = (1/3) E(X_{3:3} − 2X_{2:3} + X_{1:3}), and
    λ₄ = (1/4) E(X_{4:4} − 3X_{3:4} + 3X_{2:4} − X_{1:4}).

It can be seen that λ₁ is equivalent to the usual measure of central tendency, μ. λ₂ is similar to σ in that both measure the difference between two randomly selected values of X; however, by its nature σ assigns more weight to extreme sample values than does λ₂. λ₃ is a scale-dependent measure of skewness for a sample of size 3, and λ₄ is proportional to a weighted difference between the outer extremes and the central portion in samples of size 4 (Royston, 1992).

Scale-free versions of the L-moments for skewness, τ₃, and kurtosis, τ₄, can be written as

    τ₃ = λ₃ / λ₂ and τ₄ = λ₄ / λ₂.

An alternative measure of skewness, τ′₃, is defined as (1 + τ₃) / (1 − τ₃). This measure is the ratio of the expected length of the upper tail to that of the lower tail in samples of size 3, and as such may be easier to interpret than τ₃. λ₂, τ₃, τ′₃ and τ₄ are subject to the constraints

    λ₂ > 0,   −1 < τ₃ < 1,   0 < τ′₃ < ∞, and   (5τ₃² − 1)/4 ≤ τ₄ < 1.

If a random sample of size n is drawn from a distribution of the random variable X and x_{1:n} ≤ ... ≤ x_{n:n} are the ordered sample values, then estimates of the L-moments λ₁, λ₂, λ₃ and λ₄, namely l₁, l₂, l₃ and l₄, can be calculated as follows. First, define w₂, w₃ and w₄ as

    w₂ = [1 / (n(n−1))] Σ_{i=1}^{n} (i−1) x_{i:n},
    w₃ = [1 / (n(n−1)(n−2))] Σ_{i=1}^{n} (i−1)(i−2) x_{i:n}, and
    w₄ = [1 / (n(n−1)(n−2)(n−3))] Σ_{i=1}^{n} (i−1)(i−2)(i−3) x_{i:n}.

Then the L-moments and the corresponding shape statistics can be estimated as

    l₁ = x̄,
    l₂ = 2w₂ − l₁,
    l₃ = 6w₃ − 6w₂ + l₁,
    l₄ = 20w₄ − 30w₃ + 12w₂ − l₁,

and

    t₃ = l₃ / l₂ and t₄ = l₄ / l₂.

t₃ and t₄ are the sample L-skewness and L-kurtosis, respectively. The sample estimate of the alternative measure of skewness, t′₃, is defined as (1 + t₃) / (1 − t₃).

The Program

The macro program L_MOMENTS was written using SAS v6.08 under the VMS operating environment. With slight modification (detailed below), the program should run on any operating system. The user is required to provide the name of the SAS dataset (macro variable INDAT) to be used in the analyses and the name(s) of the variables (macro variable VARS), separated by spaces, to be analyzed. There is no limit on the number of variables that can be analyzed. Options available to the user include:

· Specify the location of the SAS data library (macro variable LIB). Default is the current user location.

· BY group processing (macro variable BYVAR). No limit on the number of BY variables, delimited by a space. Default is no BY group processing.

· Generate stem-leaf, box and normal probability plots (macro variable PLOTS). Default is no plots.

· Generate a hardcopy of the usual PROC UNIVARIATE output, with the central moment and L-moment shape statistics.

Two VMS-specific SAS functions, FINDFILE and FINDEND, are used to search for the analysis data set in a permanent SAS data library or in the SAS$WORK data library. Users on other operating systems may have equivalent functions that could be substituted. If no analysis data set is found, the program reports this error to the .LOG file and terminates. Otherwise, the calculation macro begins.

If BY variable processing is requested, the data are sorted before submission to PROC UNIVARIATE for analysis. PROC PRINTTO is used to capture the usual output and send it to the file 'UNI.DAT'. Also, an output data set from PROC UNIVARIATE is used to store the number of non-missing observations for each analysis.

If the user chooses to generate a hardcopy of the results, a DATA step is used to process the UNI.DAT file. The functions PUT and SUBSTR are used in conjunction with the $BINARY8. format to search for pagebreaks (the CC=CR option is specified in an OPTIONS statement) and set a flag that will be used to fire a PUT _PAGE_ in a DATA _NULL_ step at the end of the program. Next, a flag is set if BY variable box plots are created. For each page of output that does not contain BY variable box plots, a counter is incremented. The counter is used to facilitate direct read access of the shape statistics data set created by PROC IML for use in the DATA _NULL_ that generates the hardcopy output. Next, that part of the output line that displays the values for skewness and kurtosis is removed. Finally, a flag is set that indicates the last line of the tabular portion of the PROC UNIVARIATE output.

For each analysis variable, the raw data are sorted, then merged and transposed.