Appendix: The Multivariate Normal Distribution


In dealing with multivariate distributions such as the multivariate normal, it is convenient to extend the expectation and variance operators to random vectors. The expectation of a random vector $X = (X_1, \dots, X_n)^t$ is defined componentwise by

$$E(X) = \begin{pmatrix} E[X_1] \\ \vdots \\ E[X_n] \end{pmatrix}.$$

Linearity carries over from the scalar case in the sense that

$$E(X + Y) = E(X) + E(Y), \qquad E(MX) = M\,E(X)$$

for a compatible random vector $Y$ and a compatible matrix $M$. The same componentwise conventions hold for the expectation of a random matrix and the variances and covariances of a random vector. Thus, we can express the variance-covariance matrix of a random vector $X$ as

$$\mathrm{Var}(X) = E\{[X - E(X)][X - E(X)]^t\} = E(XX^t) - E(X)\,E(X)^t.$$

These notational choices produce many other compact formulas. For instance, the random quadratic form $X^t M X$ has expectation

$$E(X^t M X) = \mathrm{tr}[M\,\mathrm{Var}(X)] + E(X)^t M\,E(X). \qquad (A.1)$$

To verify this assertion, observe that

$$\begin{aligned}
E(X^t M X) &= E\Big(\sum_i \sum_j X_i m_{ij} X_j\Big) \\
&= \sum_i \sum_j m_{ij}\, E(X_i X_j) \\
&= \sum_i \sum_j m_{ij}\,[\mathrm{Cov}(X_i, X_j) + E(X_i)\,E(X_j)] \\
&= \mathrm{tr}[M\,\mathrm{Var}(X)] + E(X)^t M\,E(X).
\end{aligned}$$

Among the many possible definitions of the multivariate normal distribution, we adopt the one most widely used in stochastic simulation. Our point of departure will be random vectors with independent standard normal components. If such a random vector $X$ has $n$ components, then its density is

$$\prod_{j=1}^n \frac{1}{\sqrt{2\pi}}\, e^{-x_j^2/2} = \left(\frac{1}{2\pi}\right)^{n/2} e^{-x^t x/2}.$$

Because the standard normal distribution has mean 0, variance 1, and characteristic function $e^{-s^2/2}$, it follows that $X$ has mean vector $0$, variance matrix $I$, and characteristic function

$$E(e^{i s^t X}) = \prod_{j=1}^n e^{-s_j^2/2} = e^{-s^t s/2}.$$

We now define any affine transformation $Y = AX + \mu$ of $X$ to be multivariate normal [1, 2]. This definition has several practical consequences. First, it is clear that $E(Y) = \mu$ and $\mathrm{Var}(Y) = A\,\mathrm{Var}(X)\,A^t = AA^t = \Omega$.
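As an illustrative sanity check (not from the text), the quadratic-form identity (A.1) can be verified numerically by Monte Carlo. The mean vector, covariance matrix, and symmetric matrix $M$ below are arbitrary choices for the demonstration; note that (A.1) requires only the first two moments of $X$, not normality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative mean, covariance, and symmetric matrix M.
mu = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((3, 3))
Sigma = A @ A.T  # A A^t is a valid (positive semidefinite) covariance matrix
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])

# Exact value from (A.1): tr[M Var(X)] + E(X)^t M E(X).
exact = np.trace(M @ Sigma) + mu @ M @ mu

# Monte Carlo estimate of E(X^t M X); einsum computes x^t M x for each sample row.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
estimate = np.mean(np.einsum('ij,jk,ik->i', X, M, X))

print(exact, estimate)  # the two values should agree closely
```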
Second, any affine transformation $BY + \nu = BAX + B\mu + \nu$ of $Y$ is also multivariate normal. Third, any subvector of $Y$ is multivariate normal. Fourth, the characteristic function of $Y$ is

$$E(e^{i s^t Y}) = e^{i s^t \mu}\, E(e^{i s^t A X}) = e^{i s^t \mu - s^t A A^t s/2} = e^{i s^t \mu - s^t \Omega s/2}.$$

Fifth, the sum of two independent multivariate normal random vectors is multivariate normal. Indeed, if $Z = BU + \nu$ is suitably dimensioned and $X$ is independent of $U$, then we can represent the sum

$$Y + Z = \begin{pmatrix} A & B \end{pmatrix} \begin{pmatrix} X \\ U \end{pmatrix} + \mu + \nu$$

in the required form.

This enumeration omits two more subtle issues. One is whether $Y$ possesses a density. Observe that $Y$ lives in an affine subspace of dimension equal to or less than the rank of $A$. Thus, if $Y$ has $m$ components, then $n \ge m$ must hold in order for $Y$ to possess a density. A second issue is the existence and nature of the conditional density of a set of components of $Y$ given the remaining components. We can clarify both of these issues by making canonical choices of $X$ and $A$ based on the QR decomposition of a matrix. Assuming that $n \ge m$, we can write

$$A^t = Q \begin{pmatrix} R \\ 0 \end{pmatrix},$$

where $Q$ is an $n \times n$ orthogonal matrix and $R = L^t$ is an $m \times m$ upper-triangular matrix with nonnegative diagonal entries. (If $n = m$, we omit the zero matrix in the QR decomposition.) It follows that

$$AX = \begin{pmatrix} L & 0^t \end{pmatrix} Q^t X = \begin{pmatrix} L & 0^t \end{pmatrix} Z.$$

In view of the usual change-of-variables formula for probability densities and the facts that the orthogonal matrix $Q^t$ preserves inner products and has determinant $\pm 1$, the random vector $Z$ has $n$ independent standard normal components and serves as a substitute for $X$. Not only is this true, but we can dispense with the last $n - m$ components of $Z$ because they are multiplied by the matrix $0^t$. Thus, we can safely assume $n = m$ and calculate the density of $Y = LZ + \mu$ when $L$ is invertible.
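The construction $Y = LZ + \mu$ is exactly how multivariate normal vectors are simulated in practice. A minimal sketch with NumPy, using an illustrative covariance matrix $\Omega$ and its lower-triangular Cholesky factor $L$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative target mean and covariance (any positive definite Omega works).
mu = np.array([0.0, 1.0, -1.0])
Omega = np.array([[4.0, 2.0, 0.5],
                  [2.0, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])

# Lower-triangular Cholesky factor L, so that Omega = L L^t.
L = np.linalg.cholesky(Omega)

# Y = L Z + mu with Z standard normal: each row of Y is one multivariate
# normal draw with mean mu and variance matrix Omega.
Z = rng.standard_normal((500_000, 3))
Y = Z @ L.T + mu

print(Y.mean(axis=0))           # sample mean, close to mu
print(np.cov(Y, rowvar=False))  # sample covariance, close to Omega
```

The sample mean and sample covariance of the simulated draws should approximate $\mu$ and $\Omega$ to within Monte Carlo error.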
The change-of-variables formula then shows that $Y$ has density

$$f(y) = \left(\frac{1}{2\pi}\right)^{n/2} |\det L^{-1}|\, e^{-(y-\mu)^t (L^{-1})^t L^{-1} (y-\mu)/2} = \left(\frac{1}{2\pi}\right)^{n/2} |\det \Omega|^{-1/2}\, e^{-(y-\mu)^t \Omega^{-1} (y-\mu)/2},$$

where $\Omega = LL^t$ is the variance matrix of $Y$. By definition $LL^t$ is the Cholesky decomposition of $\Omega$.

To address the issue of conditional densities, consider the compatibly partitioned vectors $Y^t = (Y_1^t, Y_2^t)$, $X^t = (X_1^t, X_2^t)$, and $\mu^t = (\mu_1^t, \mu_2^t)$ and matrices

$$L = \begin{pmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{pmatrix}, \qquad \Omega = \begin{pmatrix} \Omega_{11} & \Omega_{12} \\ \Omega_{21} & \Omega_{22} \end{pmatrix}.$$

Now suppose that $X$ is standard normal, that $Y = LX + \mu$, and that $L_{11}$ has full rank. For $Y_1 = y_1$ fixed, the equation $y_1 = L_{11} X_1 + \mu_1$ shows that $X_1$ is fixed at the value $x_1 = L_{11}^{-1}(y_1 - \mu_1)$. Because no restrictions apply to $X_2$, we have

$$Y_2 = L_{22} X_2 + L_{21} L_{11}^{-1}(y_1 - \mu_1) + \mu_2.$$

Thus, $Y_2$ given $Y_1$ is normal with mean $L_{21} L_{11}^{-1}(y_1 - \mu_1) + \mu_2$ and variance $L_{22} L_{22}^t$. To express these in terms of the blocks of $\Omega = LL^t$, observe that

$$\Omega_{11} = L_{11} L_{11}^t, \qquad \Omega_{21} = L_{21} L_{11}^t, \qquad \Omega_{22} = L_{21} L_{21}^t + L_{22} L_{22}^t.$$

The first two of these equations imply that $L_{21} L_{11}^{-1} = \Omega_{21} \Omega_{11}^{-1}$. The last equation then gives

$$\begin{aligned}
L_{22} L_{22}^t &= \Omega_{22} - L_{21} L_{21}^t \\
&= \Omega_{22} - \Omega_{21} (L_{11}^t)^{-1} L_{11}^{-1} \Omega_{12} \\
&= \Omega_{22} - \Omega_{21} \Omega_{11}^{-1} \Omega_{12}.
\end{aligned}$$

These calculations do not require that $Y_2$ possess a density. In summary, the conditional distribution of $Y_2$ given $Y_1$ is normal with mean and variance

$$E(Y_2 \mid Y_1) = \Omega_{21} \Omega_{11}^{-1}(Y_1 - \mu_1) + \mu_2, \qquad \mathrm{Var}(Y_2 \mid Y_1) = \Omega_{22} - \Omega_{21} \Omega_{11}^{-1} \Omega_{12}. \qquad (A.2)$$

A.1 References

[1] Rao CR (1973) Linear Statistical Inference and Its Applications, 2nd ed. Wiley, New York
[2] Severini TA (2005) Elements of Distribution Theory.
Cambridge University Press, Cambridge
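As a numerical illustration of the conditional moment formulas (A.2) above, the sketch below computes $E(Y_2 \mid Y_1 = y_1)$ and $\mathrm{Var}(Y_2 \mid Y_1)$ from the blocks of $\Omega$ and cross-checks them against the Cholesky-block expressions $L_{21} L_{11}^{-1}$ and $L_{22} L_{22}^t$. The covariance matrix, mean, and observed value $y_1$ are illustrative choices.

```python
import numpy as np

# Illustrative partitioned covariance matrix Omega and mean mu.
Omega = np.array([[4.0, 2.0, 0.5],
                  [2.0, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])
mu = np.array([1.0, 0.0, -1.0])
k = 1  # dimension of the conditioning block Y1

O11, O12 = Omega[:k, :k], Omega[:k, k:]
O21, O22 = Omega[k:, :k], Omega[k:, k:]
mu1, mu2 = mu[:k], mu[k:]

y1 = np.array([2.0])  # observed value of Y1

# Conditional moments from (A.2).
cond_mean = O21 @ np.linalg.solve(O11, y1 - mu1) + mu2
cond_var = O22 - O21 @ np.linalg.solve(O11, O12)

# Cross-check via the Cholesky blocks of Omega = L L^t:
# L21 L11^{-1} = Omega21 Omega11^{-1} and Var(Y2 | Y1) = L22 L22^t.
L = np.linalg.cholesky(Omega)
L11, L21, L22 = L[:k, :k], L[k:, :k], L[k:, k:]
assert np.allclose(O21 @ np.linalg.inv(O11), L21 @ np.linalg.inv(L11))
assert np.allclose(cond_var, L22 @ L22.T)

print(cond_mean)
print(cond_var)
```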