Probability Distributions and Combination of Random Variables


Appendix A: Probability Distributions and Combination of Random Variables

A.1 Probability Density Functions

Normal (Gaussian) Distribution

Fig. A.1 Probability density function of normal (Gaussian) random variables for different values of the parameters $\mu$ and $\sigma$.

The starting point of most of the probability distributions that arise when dealing with noise in medical imaging is the Normal distribution, also known as the Gaussian distribution. Let $X$ be a random variable (RV) that follows a normal distribution. This is usually represented as follows:

$$X \sim N(\mu, \sigma^2)$$

where $\mu$ is the mean of the distribution and $\sigma$ is the standard deviation.

© Springer International Publishing Switzerland 2016. S. Aja-Fernández and G. Vegas-Sánchez-Ferrero, Statistical Analysis of Noise in MRI, DOI 10.1007/978-3-319-39934-8

PDF (probability density function):

$$p(x|\mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \quad (A.1)$$

MGF (moment generating function):

$$M_X(t) = \exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right)$$

Main parameters:

Mean $= \mu$, Median $= \mu$, Mode $= \mu$, Variance $= \sigma^2$

Main moments:

$$\mu_1 = E\{x\} = \mu$$
$$\mu_2 = E\{x^2\} = \mu^2 + \sigma^2$$
$$\mu_3 = E\{x^3\} = \mu^3 + 3\mu\sigma^2$$
$$\mu_4 = E\{x^4\} = \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4$$

Rayleigh Distribution

Fig. A.2 Probability density function of Rayleigh random variables for different values of the parameter $\sigma$.

The Rayleigh distribution can be seen as the distribution that models the square root of the sum of squares of two independent and identically distributed Gaussian RVs with the same $\sigma$ and $\mu = 0$.
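As a quick numerical illustration (my own sketch, not from the book), the Rayleigh construction above can be checked by simulation: the modulus of two i.i.d. zero-mean Gaussians should have mean $\sigma\sqrt{\pi/2}$ and second moment $2\sigma^2$. The parameter values below are arbitrary.

```python
import math
import random

# Monte Carlo sketch: R = sqrt(X1^2 + X2^2) with X1, X2 ~ N(0, sigma^2)
# should follow a Rayleigh distribution with mean sigma*sqrt(pi/2)
# and second raw moment 2*sigma^2.
random.seed(0)
sigma, n = 1.5, 200_000

r = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
     for _ in range(n)]

mean_mc = sum(r) / n
m2_mc = sum(v * v for v in r) / n

print(mean_mc, sigma * math.sqrt(math.pi / 2))  # close to each other
print(m2_mc, 2 * sigma**2)                      # close to each other
```

With 200,000 samples the Monte Carlo estimates typically agree with the closed-form values to two decimal places.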
This is the case, for instance, of the modulus of a complex Gaussian RV with independent components:

$$R(\sigma) = \sqrt{X_1^2 + X_2^2}, \quad X_i \sim N(0, \sigma^2)$$
$$R(\sigma) = \sqrt{(X_1 - \mu_1)^2 + (X_2 - \mu_2)^2}, \quad X_i \sim N(\mu_i, \sigma^2)$$
$$R(\sigma) = |X|, \quad X = N(0, \sigma^2) + jN(0, \sigma^2)$$

PDF:

$$p(x|\sigma) = \frac{x}{\sigma^2} \exp\left(-\frac{x^2}{2\sigma^2}\right) \quad (A.2)$$

MGF:

$$M_X(t) = 1 + \sigma t\, e^{\sigma^2 t^2/2}\, \sqrt{\frac{\pi}{2}} \left(\operatorname{erf}\left(\frac{\sigma t}{\sqrt{2}}\right) + 1\right)$$

Raw moments:

$$\mu_k = \sigma^k\, 2^{k/2}\, \Gamma(1 + k/2)$$

Main parameters:

Mean $= \sigma\sqrt{\pi/2}$, Median $= \sigma\sqrt{\ln 4}$, Mode $= \sigma$, Variance $= \frac{4-\pi}{2}\sigma^2$

Main moments:

$$\mu_1 = \sigma\sqrt{\frac{\pi}{2}}, \quad \mu_2 = 2\sigma^2, \quad \mu_3 = 3\sigma^3\sqrt{\frac{\pi}{2}}, \quad \mu_4 = 8\sigma^4$$

Rician Distribution

Fig. A.3 Probability density function of Rician random variables for different values of the parameters $A$ and $\sigma$.

The Rician distribution can be seen as the distribution that models the square root of the sum of squares of two independent and identically distributed Gaussian RVs with the same $\sigma$:

$$R(A, \sigma) = \sqrt{X_1^2 + X_2^2}, \quad X_i \sim N(\mu_i, \sigma^2)$$
$$R(A, \sigma) = |X|, \quad X = N(\mu_1, \sigma^2) + jN(\mu_2, \sigma^2)$$

PDF:

$$p(x|A, \sigma) = \frac{x}{\sigma^2}\, e^{-\frac{x^2 + A^2}{2\sigma^2}}\, I_0\left(\frac{Ax}{\sigma^2}\right) u(x), \quad (A.3)$$

where $A = \sqrt{\mu_1^2 + \mu_2^2}$.

For $A = 0$ the Rician distribution simplifies to a Rayleigh.

Raw moments:

$$\mu_k = \sigma^k\, 2^{k/2}\, \Gamma(1 + k/2)\, L_{k/2}\left(-\frac{A^2}{2\sigma^2}\right)$$

where $L_n(x)$ is the Laguerre polynomial. It is related to the confluent hypergeometric function of the first kind: $L_n(x) = {}_1F_1(-n; 1; x)$. The even moments become simple polynomials.

Main moments:

$$\mu_1 = \sqrt{\frac{\pi}{2}}\, L_{1/2}\left(-\frac{A^2}{2\sigma^2}\right) \sigma$$
$$\mu_2 = A^2 + 2\sigma^2$$
$$\mu_3 = 3\sqrt{\frac{\pi}{2}}\, L_{3/2}\left(-\frac{A^2}{2\sigma^2}\right) \sigma^3$$
$$\mu_4 = A^4 + 8\sigma^2 A^2 + 8\sigma^4$$

$$\operatorname{Var} = 2\sigma^2 + A^2 - \frac{\pi\sigma^2}{2}\, L_{1/2}^2\left(-\frac{A^2}{2\sigma^2}\right) \approx \sigma^2\left(1 - \frac{1}{4x} - \frac{1}{8x^2}\right) + O(x^{-3}) \quad \text{with } x = \frac{A^2}{2\sigma^2}$$

Series Expansion of Hypergeometric Functions:

$$L_{1/2}(-x) = \frac{2\sqrt{x}}{\sqrt{\pi}} + \frac{1}{2\sqrt{\pi}\sqrt{x}} + \frac{1}{16\sqrt{\pi}\, x^{3/2}} + O\left(x^{-5/2}\right)$$
$$L_{3/2}(-x) = \frac{4x^{3/2}}{3\sqrt{\pi}} + \frac{3\sqrt{x}}{\sqrt{\pi}} + \frac{3}{8\sqrt{\pi}\sqrt{x}} + \frac{1}{32\sqrt{\pi}\, x^{3/2}} + O\left(x^{-5/2}\right)$$

Central Chi Distribution (c-χ)
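As a deterministic sanity check (my own, not from the book), the Rayleigh raw-moment formula $\mu_k = \sigma^k 2^{k/2} \Gamma(1 + k/2)$ reproduces the four listed main moments exactly:

```python
import math

def rayleigh_moment(k, sigma):
    """Raw moment mu_k = sigma^k * 2^(k/2) * Gamma(1 + k/2) of a Rayleigh RV."""
    return sigma**k * 2**(k / 2) * math.gamma(1 + k / 2)

sigma = 2.0
expected = [
    sigma * math.sqrt(math.pi / 2),        # mu_1
    2 * sigma**2,                          # mu_2
    3 * sigma**3 * math.sqrt(math.pi / 2), # mu_3
    8 * sigma**4,                          # mu_4
]
for k, ref in enumerate(expected, start=1):
    assert math.isclose(rayleigh_moment(k, sigma), ref)
print("Rayleigh raw-moment formula matches mu_1..mu_4")
```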
Fig. A.4 Probability density function of c-χ random variables for different values of the parameters $L$ and $\sigma$.

The central-χ (c-χ) distribution can be seen as the distribution that models the square root of the sum of squares of several independent and identically distributed Gaussian RVs with the same $\sigma$ and $\mu = 0$:

$$R(L, \sigma) = \sqrt{\sum_{i=1}^{2L} X_i^2}, \quad X_i \sim N(0, \sigma^2)$$
$$R(L, \sigma) = \sqrt{\sum_{i=1}^{2L} (X_i - \mu_i)^2}, \quad X_i \sim N(\mu_i, \sigma^2)$$
$$R(L, \sigma) = \sqrt{\sum_{i=1}^{L} |Y_i|^2}, \quad Y_i = N(0, \sigma^2) + jN(0, \sigma^2)$$

PDF:

$$p(x|\sigma, L) = \frac{2^{1-L}}{\Gamma(L)}\, \frac{x^{2L-1}}{\sigma^{2L}}\, e^{-\frac{x^2}{2\sigma^2}} \quad (A.4)$$

Note that this definition of the (nonnormalized) PDF uses parameters related to the MRI configuration. In other texts, $m = 2L$ degrees of freedom is used instead. The Rayleigh distribution is a special case of the c-χ when $L = 1$.

Raw moments:

$$\mu_k = \sigma^k\, 2^{k/2}\, \frac{\Gamma(k/2 + L)}{\Gamma(L)}$$

For $n$ an integer, $\Gamma(n) = (n-1)!$ and

$$\Gamma\left(n + \frac{1}{2}\right) = \frac{\sqrt{\pi}}{2^n} \prod_{k=1}^{n} (2k - 1)$$

Main moments:

$$\mu_1 = \sqrt{2}\, \sigma\, \frac{\Gamma(L + 1/2)}{\Gamma(L)}$$
$$\mu_2 = 2L\sigma^2$$
$$\mu_3 = 2\sqrt{2}\, \sigma^3\, \frac{\Gamma(L + 3/2)}{\Gamma(L)}$$
$$\mu_4 = 4L(L+1)\sigma^4$$
$$\operatorname{Var} = 2\sigma^2 \left(L - \left(\frac{\Gamma(L + 1/2)}{\Gamma(L)}\right)^2\right)$$

Noncentral Chi Distribution (nc-χ)

Fig. A.5 Probability density function of nc-χ random variables for different values of the parameters $A$, $L$ and $\sigma$.

The noncentral-χ (nc-χ) distribution can be seen as the distribution that models the square root of the sum of squares of several independent and identically distributed Gaussian RVs with the same $\sigma$:

$$R = \sqrt{\sum_{i=1}^{2L} X_i^2}, \quad X_i \sim N(\mu_i, \sigma^2)$$
$$R = \sqrt{\sum_{i=1}^{L} |Y_i|^2}, \quad Y_i = N(\mu_{1,i}, \sigma^2) + jN(\mu_{2,i}, \sigma^2)$$

PDF:

$$p(x|A_L, \sigma, L) = \frac{A_L^{1-L}}{\sigma^2}\, x^L\, e^{-\frac{x^2 + A_L^2}{2\sigma^2}}\, I_{L-1}\left(\frac{A_L x}{\sigma^2}\right) u(x), \quad (A.5)$$

with

$$A_L = \sqrt{\sum_{i=1}^{L} |\mu_i|^2}.$$
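The c-χ raw-moment formula can likewise be verified in closed form (my own check): since $\Gamma(L+1)/\Gamma(L) = L$ and $\Gamma(L+2)/\Gamma(L) = L(L+1)$, it must reproduce $\mu_2 = 2L\sigma^2$ and $\mu_4 = 4L(L+1)\sigma^4$, and reduce to the Rayleigh moments at $L = 1$:

```python
import math

def cchi_moment(k, L, sigma):
    """Raw moment mu_k = sigma^k * 2^(k/2) * Gamma(k/2 + L) / Gamma(L) of a c-chi RV."""
    return sigma**k * 2**(k / 2) * math.gamma(k / 2 + L) / math.gamma(L)

sigma = 1.3
for L in (1, 2, 4, 8):
    assert math.isclose(cchi_moment(2, L, sigma), 2 * L * sigma**2)
    assert math.isclose(cchi_moment(4, L, sigma), 4 * L * (L + 1) * sigma**4)

# L = 1 reduces to the Rayleigh case, e.g. mu_1 = sigma * sqrt(pi/2):
assert math.isclose(cchi_moment(1, 1, sigma), sigma * math.sqrt(math.pi / 2))
print("c-chi moment formula consistent with mu_2, mu_4 and the Rayleigh case")
```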
When $A_L = 0$ the nc-χ simplifies into a c-χ. The Rician distribution is a special case of the nc-χ for $L = 1$.

Main moments:

$$\mu_1 = \sqrt{2}\, \frac{\Gamma(L + 1/2)}{\Gamma(L)}\, {}_1F_1\left(-\frac{1}{2}; L; -\frac{A_L^2}{2\sigma^2}\right) \sigma$$
$$\mu_2 = A_L^2 + 2L\sigma^2$$
$$\mu_3 = 2\sqrt{2}\, \frac{\Gamma(L + 3/2)}{\Gamma(L)}\, {}_1F_1\left(-\frac{3}{2}; L; -\frac{A_L^2}{2\sigma^2}\right) \sigma^3$$
$$\mu_4 = A_L^4 + 4(L+1)A_L^2\sigma^2 + 4L(L+1)\sigma^4$$

$$\operatorname{Var} = 2\sigma^2\left(L + x - \left(\frac{\Gamma(L + 1/2)}{\Gamma(L)}\right)^2 {}_1F_1^2\left(-\frac{1}{2}; L; -x\right)\right) \approx \sigma^2\left(1 + \frac{1 - 2L}{4x} + \frac{3 - 8L + 4L^2}{8x^2}\right) + O(x^{-3}) \quad \text{with } x = \frac{A_L^2}{2\sigma^2}$$

Central Chi Square Distribution (c-χ²)

Fig. A.6 Probability density function of c-χ² random variables for different values of the parameters $L$ and $\sigma$.

The central-χ square (c-χ²) distribution can be seen as the distribution that models the sum of the squares of several independent and identically distributed Gaussian RVs with the same $\sigma$ and $\mu = 0$:

$$R(L, \sigma) = \sum_{i=1}^{2L} X_i^2, \quad X_i \sim N(0, \sigma^2)$$
$$R(L, \sigma) = \sum_{i=1}^{2L} (X_i - \mu_i)^2, \quad X_i \sim N(\mu_i, \sigma^2)$$
$$R(L, \sigma) = \sum_{i=1}^{L} |Y_i|^2, \quad Y_i = N(0, \sigma^2) + jN(0, \sigma^2)$$

PDF:

$$p(x|\sigma, L) = \frac{1}{2\sigma^2\Gamma(L)} \left(\frac{x}{2\sigma^2}\right)^{L-1} e^{-\frac{x}{2\sigma^2}}\, u(x), \quad (A.6)$$

Raw moments:

$$\mu_k = \sigma^{2k}\, 2^k\, \frac{\Gamma(k + L)}{\Gamma(L)}$$
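The nc-χ second moment $\mu_2 = A_L^2 + 2L\sigma^2$ can be corroborated by a small simulation (my own illustration; the means and $\sigma$ below are arbitrary hypothetical values):

```python
import math
import random

# Monte Carlo sketch: for R^2 = sum of 2L squared Gaussians with means mu_i
# and common std sigma, E[R^2] should equal A_L^2 + 2*L*sigma^2,
# where A_L^2 = sum(mu_i^2).
random.seed(1)
L, sigma = 2, 1.0
mu = [1.0, 0.5, 0.5, 0.0]       # 2L means (arbitrary)
A2 = sum(m * m for m in mu)     # A_L^2

n = 200_000
m2_mc = 0.0
for _ in range(n):
    m2_mc += sum(random.gauss(m, sigma) ** 2 for m in mu)
m2_mc /= n

print(m2_mc, A2 + 2 * L * sigma**2)  # close to each other
```

The same simulation with all means set to zero would corroborate the c-χ moment $\mu_2 = 2L\sigma^2$ as a special case.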