Maximum Likelihood Covariance Matrix Estimation from Two Possibly Mismatched Data Sets Olivier Besson


Maximum likelihood covariance matrix estimation from two possibly mismatched data sets

Olivier Besson, ISAE-SUPAERO, 10 Avenue Edouard Belin, Toulouse 31055, France

To cite this version: Olivier Besson. Maximum likelihood covariance matrix estimation from two possibly mismatched data sets. Signal Processing, Elsevier, 2020, 167, pp. 107285-107294. ISSN 0165-1684. doi: 10.1016/j.sigpro.2019.107285. HAL Id: hal-02572461, https://hal.archives-ouvertes.fr/hal-02572461, submitted on 13 May 2020.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Open Archive Toulouse Archive Ouverte (OATAO) is an open access repository that collects the work of some Toulouse researchers and makes it freely available over the web where possible. This is an author's version published in: https://oatao.univ-toulouse.fr/25984. Official URL: https://doi.org/10.1016/j.sigpro.2019.107285. Any correspondence concerning this service should be sent to the repository administrator: [email protected]. E-mail address: [email protected]

Abstract

We consider estimating the covariance matrix from two data sets: one whose covariance matrix R1 is the sought one, and another set of samples whose covariance matrix R2 slightly differs from the sought one, due e.g. to different measurement configurations. We assume however that the two matrices are rather close, which we formulate by assuming that R1^{1/2} R2^{−1} R1^{1/2} | R1 follows a Wishart distribution around the identity matrix. It turns out that this assumption results in two data sets with different marginal distributions, hence the problem becomes that of covariance matrix estimation from two data sets which are distribution-mismatched. The maximum likelihood estimator (MLE) is derived and is shown to depend on the number of samples in each set. We show that it involves whitening of one data set by the other one, shrinkage of eigenvalues and colorization, at least when one data set contains more samples than the size p of the observation space. When both data sets have fewer than p samples but the total number is larger than p, the MLE again entails eigenvalue shrinkage, but this time after a projection operation. Simulation results compare the new estimator to state-of-the-art techniques.

Keywords: Covariance matrix estimation, Maximum likelihood, Mismatch
1. Problem statement

Analysis or processing of multichannel data most often relies on the covariance matrix, which is a fundamental tool e.g. for principal component analysis, spectral analysis, adaptive filtering, detection, direction of arrival estimation, among others [1–3]. In practical applications, the p × p covariance matrix R needs to be estimated from a finite number n of samples. When the latter are independent and Gaussian distributed, the maximum likelihood estimator of R is n^{−1}S, where X is the p × n data matrix and S = XX^T is the sample covariance matrix (SCM) [1]. However, in low sample support or when deviation from the Gaussian assumption is at hand, the SCM tends to behave poorly. In particular, it was observed that the sample covariance matrix is usually less well-conditioned than the true covariance matrix, and therefore considerable effort has been dedicated to regularizing it with a view to improving its performance.

One of the most important approaches in this respect is due to Stein [4–6] who, instead of maximizing the likelihood function, advocated minimizing a meaningful loss function within a given class of estimators. Stein hence introduced the concepts of admissible estimation and minimax estimators under the so-called Stein's loss. He showed that the SCM-based estimator is not minimax and derived minimax estimators in two important classes, namely estimators of the form R̂ = GDG^T, where D is a diagonal matrix and G is the Cholesky factor of S, or of the form R̂ = U diag(ϕ(λ)) U^T, where U diag(λ) U^T is the eigenvalue decomposition of S and ϕ(λ) is a non-linear function of λ. This seminal work of Stein gave rise to a great number of studies, see for instance [7–13] and references therein. A second class of robust estimates is based on linear shrinkage of the SCM to a target matrix (an approach which can be interpreted as an empirical Bayes technique), i.e., estimates of the form R̂ = αRt + βS, where Rt = I is the most widespread choice, see e.g. [14–20]. Note that these techniques applied with Rt = I achieve an affine transformation of the eigenvalues of S while retaining the eigenvectors, and therefore bear resemblance to Stein's method, although the selection of α, β may not be driven by the same principle. Robustness to a possibly non-Gaussian distribution has also been a topic of considerable interest, and many papers have focused on robust estimation for elliptically distributed data, see e.g. [21–30] and references therein.

Most of the above-cited works deal with estimation of a covariance matrix from a single data set. In this paper, we consider a situation where two data sets X1 and X2 are available, with respective covariance matrices R1 and R2. This situation typically arises in radar applications when one wishes to detect a target buried in clutter with unknown statistics [31,32]. In order to infer the latter, training samples are generally used, which hopefully share the same statistics as the clutter in the cell under test (CUT). However, it has been evidenced that clutter is most often heterogeneous [31], with a discrepancy compared to the CUT that may grow with the distance to the CUT [33]. Therefore, one is led to use some clustering that separates training samples, either based on their proximity to the CUT or by means of some statistical criterion, such as the power selected training [34]. The samples so selected are deemed to be representative of the clutter in the CUT while others are less reliable, which corresponds to the situation considered herein. A second example is in the field of synthetic aperture radar, in the case where a scene is imaged on two consecutive days, with possible changes in between [35]. Finally, in hyperspectral imagery, the problem of target or anomaly detection leads to a very similar framework. Indeed, the background in a pixel under test has to be estimated from the local pixels around it and from pixels located further apart [36].

In the present paper, we assume that R2 is close to R1, the covariance matrix we wish to estimate. Since R2 differs from but is close to R1, we investigate using both X1 and X2 to estimate R1. The reason for also using X2 is that, although its covariance matrix is not R1, it is close to it. Additionally, one might face situations where the number of samples in X1 is very small. This paper constitutes a first approach to this specific problem, and we focus herein on the most natural approach, namely maximum likelihood estimation. The objective is to figure out the pros and cons of the latter and the conditions under which it is an accurate estimator. The paper is organized as follows.

[…] (X1, X2) would be maximized under the constraint that the distance between W and I is smaller than some value. Alternatively, and this is what we elect here, one can resort to an empirical Bayes approach where the random matrix W follows some prior distribution rather concentrated around I. For mathematical tractability, we choose a conjugate prior for W and we assume that W follows a Wishart distribution with ν degrees of freedom and parameter matrix μ^{−1}I, i.e., W ∼ W_p(ν, μ^{−1}I). Of course, this is a rather strong assumption whose validity would be difficult to check, e.g. on real data. However, it is in accordance with the mere knowledge we have about the relation between R1 and R2, and it allows for tractable derivations.

Using the fact that X1|R1 and X2|R2 are independent and Gaussian distributed with respective covariance matrices R1 and R2, and since R2 = G1 W^{−1} G1^T, where G1 is a square root of R1 (R1 = G1 G1^T), we thus assume the following stochastic model:

p(X1, X2 | R1, W) = (2π)^{−p(n1+n2)/2} |R1|^{−n1/2} |G1 W^{−1} G1^T|^{−n2/2} etr{−(1/2) X1^T R1^{−1} X1 − (1/2) X2^T G1^{−T} W G1^{−1} X2}   (1a)

p(W) = [μ^{νp/2} / (2^{νp/2} Γ_p(ν/2))] |W|^{(ν−p−1)/2} etr{−(1/2) μW}   (1b)

Note that E{W^{−1}} = (ν − p − 1)^{−1} μ I, so that E{R2} = (ν − p − 1)^{−1} μ R1.
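To make the prior model concrete, here is a minimal simulation sketch (assuming numpy and scipy; the AR(1)-type R1 and all variable names are illustrative choices of ours, not the paper's). It draws W ∼ W_p(ν, μ^{−1}I), forms R2 = G1 W^{−1} G1^T, and checks numerically that E{R2} = R1 when μ = ν − p − 1:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, nu = 5, 25                  # dimension and prior degrees of freedom
mu = nu - p - 1                # this choice makes E{R2} = R1

# An arbitrary positive definite "true" covariance (AR(1) correlation)
R1 = np.fromfunction(lambda i, j: 0.9 ** np.abs(i - j), (p, p))
G1 = np.linalg.cholesky(R1)    # R1 = G1 G1^T

# Average R2 = G1 W^{-1} G1^T over many prior draws of W ~ W_p(nu, mu^{-1} I)
R2_mean = np.zeros((p, p))
n_draws = 20000
for _ in range(n_draws):
    W = wishart.rvs(df=nu, scale=np.eye(p) / mu, random_state=rng)
    R2_mean += G1 @ np.linalg.inv(W) @ G1.T
R2_mean /= n_draws

print(np.max(np.abs(R2_mean - R1)))  # small: E{R2} is approximately R1
```

This illustrates why μ = ν − p − 1 is the natural calibration: it centers the prior of R2 on R1, with ν controlling how tightly R2 concentrates around it.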
Recommended publications
  • Ph 21.5: Covariance and Principal Component Analysis (PCA)
Ph 21.5: Covariance and Principal Component Analysis (PCA) -v20150527-

Introduction. Suppose we make a measurement for which each data sample consists of two measured quantities. A simple example would be temperature (T) and pressure (P) taken at time (t) at constant volume (V). The data set is {T_i, P_i | t_i}_N, which represents a set of N measurements. We wish to make sense of the data and determine the dependence of, say, P on T. Suppose P and T were for some reason independent of each other; then the two variables would be uncorrelated. (Of course we are well aware that P and T are correlated and we know the ideal gas law: PV = nRT.) How might we infer the correlation from the data?

The tool for quantifying correlations between random variables is the covariance. For two real-valued random variables (X, Y), the covariance is defined as (under certain rather non-restrictive assumptions):

Cov(X, Y) ≡ σ²_{XY} ≡ ⟨(X − ⟨X⟩)(Y − ⟨Y⟩)⟩

where ⟨…⟩ denotes the expectation (average) value of the quantity in brackets. For the case of P and T, we have

Cov(P, T) = ⟨(P − ⟨P⟩)(T − ⟨T⟩)⟩ = ⟨P × T⟩ − ⟨P⟩ × ⟨T⟩ = (1/N) Σ_{i=0}^{N−1} P_i T_i − [(1/N) Σ_{i=0}^{N−1} P_i] [(1/N) Σ_{i=0}^{N−1} T_i]

The extension of this to real-valued random vectors (X⃗, Y⃗) is straightforward:

Cov(X⃗, Y⃗) ≡ σ²_{X⃗Y⃗} ≡ ⟨(X⃗ − ⟨X⃗⟩)(Y⃗ − ⟨Y⃗⟩)^T⟩

This is a matrix, resulting from the product of one vector and the transpose of another vector, where X⃗^T denotes the transpose of X⃗.
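A short sketch (assuming numpy; the synthetic P, T data are our own illustration) that evaluates Cov(P, T) via the sample formula above and checks it against numpy's built-in estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
T = rng.normal(300.0, 10.0, N)           # temperatures
P = 0.05 * T + rng.normal(0.0, 0.2, N)   # pressures, correlated with T

# <PT> - <P><T>, exactly the estimator derived above
cov_PT = np.mean(P * T) - np.mean(P) * np.mean(T)

# np.cov with bias=True uses the same 1/N normalization
print(cov_PT, np.cov(P, T, bias=True)[0, 1])  # the two values agree
```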
  • 6 Probability Density Functions (Pdfs)
CSC 411 / CSC D11 / CSC C11

6 Probability Density Functions (PDFs)

In many cases, we wish to handle data that can be represented as a real-valued random variable, or a real-valued vector x = [x1, x2, …, xn]^T. Most of the intuitions from discrete variables transfer directly to the continuous case, although there are some subtleties. We describe the probabilities of a real-valued scalar variable x with a Probability Density Function (PDF), written p(x). Any real-valued function p(x) that satisfies:

p(x) ≥ 0 for all x   (1)
∫_{−∞}^{∞} p(x) dx = 1   (2)

is a valid PDF. I will use the convention of upper-case P for discrete probabilities, and lower-case p for PDFs. With the PDF we can specify the probability that the random variable x falls within a given range:

P(x0 ≤ x ≤ x1) = ∫_{x0}^{x1} p(x) dx   (3)

This can be visualized by plotting the curve p(x). Then, to determine the probability that x falls within a range, we compute the area under the curve for that range. The PDF can be thought of as the infinite limit of a discrete distribution, i.e., a discrete distribution with an infinite number of possible outcomes. Specifically, suppose we create a discrete distribution with N possible outcomes, each corresponding to a range on the real number line. Then, suppose we increase N towards infinity, so that each outcome shrinks to a single real number; a PDF is defined as the limiting case of this discrete distribution.
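A minimal sketch (assuming scipy and a standard normal as the example PDF, our choice) illustrating Eqs. (2) and (3): the normalization integral and the probability of a range as the area under p(x):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

p = norm(loc=0.0, scale=1.0).pdf          # a valid PDF: p(x) >= 0 everywhere

total, _ = quad(p, -np.inf, np.inf)       # Eq. (2): integrates to 1
prob, _ = quad(p, 0.0, 1.0)               # Eq. (3): P(0 <= x <= 1)

print(total, prob, norm.cdf(1.0) - norm.cdf(0.0))  # prob matches the CDF difference
```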
  • Covariance of Cross-Correlations: Towards Efficient Measures for Large-Scale Structure
Mon. Not. R. Astron. Soc. 400, 851–865 (2009). doi:10.1111/j.1365-2966.2009.15490.x

Covariance of cross-correlations: towards efficient measures for large-scale structure

Robert E. Smith, Institute for Theoretical Physics, University of Zurich, Zurich CH 8037, Switzerland. Accepted 2009 August 4. Received 2009 July 17; in original form 2009 June 13.

ABSTRACT. We study the covariance of the cross-power spectrum of different tracers for the large-scale structure. We develop the counts-in-cells framework for the multitracer approach, and use this to derive expressions for the full non-Gaussian covariance matrix. We show that for the usual autopower statistic, besides the off-diagonal covariance generated through gravitational mode-coupling, the discreteness of the tracers and their associated sampling distribution can generate strong off-diagonal covariance, and that this becomes the dominant source of covariance as spatial frequencies become larger than the fundamental mode of the survey volume. On comparison with the derived expressions for the cross-power covariance, we show that the off-diagonal terms can be suppressed if one cross-correlates a high tracer-density sample with a low one. Taking the effective estimator efficiency to be proportional to the signal-to-noise ratio (S/N), we show that, to probe clustering as a function of physical properties of the sample, i.e. cluster mass or galaxy luminosity, the cross-power approach can outperform the autopower one by factors of a few.
  • Lecture 4 Multivariate Normal Distribution and Multivariate CLT
Lecture 4: Multivariate normal distribution and multivariate CLT

We start with several simple observations. If X = (x1, …, xk)^T is a k × 1 random vector then its expectation is EX = (Ex1, …, Exk)^T and its covariance matrix is

Cov(X) = E(X − EX)(X − EX)^T.

Notice that a covariance matrix is always symmetric, Cov(X)^T = Cov(X), and nonnegative definite, i.e. for any k × 1 vector a,

a^T Cov(X) a = E a^T (X − EX)(X − EX)^T a = E |a^T (X − EX)|² ≥ 0.

We will often use that for any vector X its squared length can be written as |X|² = X^T X. If we multiply a random k × 1 vector X by an n × k matrix A then the covariance of Y = AX is an n × n matrix

Cov(Y) = E A(X − EX)(X − EX)^T A^T = A Cov(X) A^T.

Multivariate normal distribution. Let us consider a k × 1 vector g = (g1, …, gk)^T of i.i.d. standard normal random variables. The covariance of g is, obviously, a k × k identity matrix, Cov(g) = I. Given an n × k matrix A, the covariance of Ag is an n × n matrix

Σ := Cov(Ag) = A I A^T = AA^T.

Definition. The distribution of a vector Ag is called a (multivariate) normal distribution with covariance Σ and is denoted N(0, Σ). One can also shift this distribution: the distribution of Ag + a is called a normal distribution with mean a and covariance Σ and is denoted N(a, Σ). There is one potential problem with the above definition - we assume that the distribution depends only on the covariance matrix Σ and does not depend on the construction, i.e.
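A sketch (assuming numpy; the particular A and a are arbitrary illustrations) of the construction in the definition: Ag + a has mean a, and the sample covariance of many draws approaches Σ = AA^T:

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 3, 2
A = rng.normal(size=(n, k))              # an arbitrary n x k matrix
a = np.array([1.0, -2.0])                # mean shift
Sigma = A @ A.T                          # theoretical covariance of Ag

g = rng.normal(size=(k, 100000))         # i.i.d. standard normals, Cov(g) = I
X = A @ g + a[:, None]                   # draws from N(a, Sigma)

print(Sigma)
print(np.cov(X))                         # sample covariance, approximately Sigma
print(X.mean(axis=1))                    # approximately a
```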
  • The Variance Ellipse
The Variance Ellipse

For bivariate data, like velocity, the variability can be spread out in not one but two dimensions. In this case, the variance is now a matrix, and the spread of the data is characterized by an ellipse. This variance ellipse eccentricity indicates the extent to which the variability is anisotropic or directional, and the orientation tells the direction in which the variability is concentrated.

Variance Ellipse Example. Variance ellipses are a very useful way to analyze velocity data. This example compares velocities observed by a mooring array in Fram Strait with velocities in two numerical models. From Hattermann et al. (2016), "Eddy-driven recirculation of Atlantic Water in Fram Strait", Geophysical Research Letters. Variance ellipses can be powerfully combined with lowpassing and bandpassing to reveal the geometric structure of variability in different frequency bands.

Understanding Ellipses. This section will focus on understanding the properties of the variance ellipse. To do this, it is not really possible to avoid matrix algebra. Therefore we will first review some relevant mathematical background.

Review: Rotations. The most important action on a vector z ≡ [u, v]^T is a ninety-degree rotation. This is carried out through the matrix multiplication

[[0, −1], [1, 0]] z = [[0, −1], [1, 0]] [u, v]^T = [−v, u]^T.

Note the mathematically positive direction is counterclockwise. A general rotation is carried out by the rotation matrix

J(θ) ≡ [[cos θ, −sin θ], [sin θ, cos θ]]

so that

J(θ) z = [[cos θ, −sin θ], [sin θ, cos θ]] [u, v]^T = [u cos θ − v sin θ, u sin θ + v cos θ]^T.
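A small sketch (assuming numpy; the test vector is arbitrary) of J(θ) in action; θ = π/2 reproduces the ninety-degree rotation [−v, u]^T:

```python
import numpy as np

def J(theta):
    """Counterclockwise rotation matrix J(theta)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

z = np.array([2.0, 1.0])      # [u, v]^T
print(J(np.pi / 2) @ z)       # approximately [-1, 2], i.e. [-v, u]^T
print(J(np.pi / 6) @ z)       # a general rotation by 30 degrees
```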
  • Principal Components Analysis (Pca)
PRINCIPAL COMPONENTS ANALYSIS (PCA)

Steven M. Holland, Department of Geology, University of Georgia, Athens, GA 30602-2501. 3 December 2019

Introduction. Suppose we had measured two variables, length and width, and plotted them as shown below. Both variables have approximately the same variance and they are highly correlated with one another. We could pass a vector through the long axis of the cloud of points and a second vector at right angles to the first, with both vectors passing through the centroid of the data. Once we have made these vectors, we could find the coordinates of all of the data points relative to these two perpendicular vectors and re-plot the data, as shown here (both of these figures are from Swan and Sandilands, 1995). In this new reference frame, note that variance is greater along axis 1 than it is on axis 2. Also note that the spatial relationships of the points are unchanged; this process has merely rotated the data. Finally, note that our new vectors, or axes, are uncorrelated. By performing such a rotation, the new axes might have particular explanations. In this case, axis 1 could be regarded as a size measure, with samples on the left having both small length and width and samples on the right having large length and width. Axis 2 could be regarded as a measure of shape, with samples at any axis 1 position (that is, of a given size) having different length to width ratios. PC axes will generally not coincide exactly with any of the original variables.
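A minimal PCA sketch (assuming numpy, with synthetic length/width data of our own making): the eigenvectors of the covariance matrix define the new axes, and the rotated coordinates are uncorrelated, just as described above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
length = rng.normal(50.0, 5.0, n)
width = 0.6 * length + rng.normal(0.0, 2.0, n)   # strongly correlated with length
X = np.column_stack([length, width])

Xc = X - X.mean(axis=0)                    # center the data on its centroid
evals, evecs = np.linalg.eigh(np.cov(Xc.T))
order = np.argsort(evals)[::-1]            # axis 1 = direction of largest variance ("size")
scores = Xc @ evecs[:, order]              # coordinates relative to the new axes

print(evals[order])                        # variance along axis 1 exceeds axis 2
print(np.cov(scores.T))                    # approximately diagonal: new axes uncorrelated
```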
  • 3 Random Vectors
3 Random Vectors

3.1 Definition: A random vector is a vector of random variables X = [X1, …, Xn]^T.

3.2 Definition: The mean or expectation of X is defined as E[X] = [E[X1], …, E[Xn]]^T.

3.3 Definition: A random matrix is a matrix of random variables Z = (Zij). Its expectation is given by E[Z] = (E[Zij]).

3.4 Theorem: A constant vector a (vector of constants) and a constant matrix A (matrix of constants) satisfy E[a] = a and E[A] = A.

3.5 Theorem: E[X + Y] = E[X] + E[Y].

3.6 Theorem: E[AX] = A E[X] for a constant matrix A.

3.7 Theorem: E[AZB + C] = A E[Z] B + C if A, B, C are constant matrices.

3.8 Definition: If X is a random vector, the covariance matrix of X is defined as

cov(X) ≡ [cov(Xi, Xj)] ≡
[ var(X1)       cov(X1, X2)   ⋯   cov(X1, Xn) ]
[ cov(X2, X1)   var(X2)       ⋯   cov(X2, Xn) ]
[ ⋮             ⋮             ⋱   ⋮           ]
[ cov(Xn, X1)   cov(Xn, X2)   ⋯   var(Xn)     ]

An alternative form is

cov(X) = E[(X − E[X])(X − E[X])^T] = E[ [X1 − E[X1], …, Xn − E[Xn]]^T (X1 − E[X1], …, Xn − E[Xn]) ].

3.9 Example: If X1, …, Xn are independent, then the covariances are 0 and the covariance matrix is equal to diag(σ1², …, σn²), or σ²In if the Xi have common variance σ².

Properties of covariance matrices:

3.10 Theorem: Symmetry: cov(X) = [cov(X)]^T.
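A numerical check (assuming numpy; the particular Σ and A are arbitrary illustrations) of the linear-transformation rule cov(AX) = A cov(X) A^T together with the symmetry property of 3.10:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])      # cov(X), symmetric by construction
A = rng.normal(size=(2, n))              # a constant 2 x n matrix

X = rng.multivariate_normal(np.zeros(n), Sigma, size=200000).T  # n x N samples
Y = A @ X                                # the transformed random vector

print(A @ Sigma @ A.T)                   # theoretical cov(AX)
print(np.cov(Y))                         # sample estimate: approximately equal, and symmetric
```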
  • Covariance Matrix Estimation in Time Series Wei Biao Wu and Han Xiao June 15, 2011
Covariance Matrix Estimation in Time Series. Wei Biao Wu and Han Xiao. June 15, 2011

Abstract. Covariances play a fundamental role in the theory of time series and they are critical quantities that are needed in both spectral and time domain analysis. Estimation of covariance matrices is needed in the construction of confidence regions for unknown parameters, hypothesis testing, principal component analysis, prediction, discriminant analysis among others. In this paper we consider both low- and high-dimensional covariance matrix estimation problems and present a review for asymptotic properties of sample covariances and covariance matrix estimates. In particular, we shall provide an asymptotic theory for estimates of high dimensional covariance matrices in time series, and a consistency result for covariance matrix estimates for estimated parameters.

1 Introduction. Covariances and covariance matrices play a fundamental role in the theory and practice of time series. They are critical quantities that are needed in both spectral and time domain analysis. One encounters the issue of covariance matrix estimation in many problems, for example, the construction of confidence regions for unknown parameters, hypothesis testing, principal component analysis, prediction, discriminant analysis among others. It is particularly relevant in time series analysis in which the observations are dependent and the covariance matrix characterizes the second order dependence of the process. If the underlying process is Gaussian, then the covariances completely capture its dependence structure. In this paper we shall provide an asymptotic distributional theory for sample covariances and convergence rates for covariance matrix estimates of time series. In Section 2 we shall present a review for asymptotic theory for sample covariances of stationary processes.
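As a concrete illustration (assuming numpy and scipy, with a synthetic AR(1) series of our own; this is a generic sample-autocovariance computation, not the estimators studied in the paper), the usual sample autocovariances of a stationary series can be assembled into a Toeplitz covariance matrix estimate:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(5)
n, phi = 2000, 0.7
x = np.zeros(n)                          # AR(1): x_t = phi * x_{t-1} + e_t
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

def acov(x, k):
    """Sample autocovariance at lag k (1/n normalization)."""
    xc = x - x.mean()
    return np.dot(xc[:len(x) - k], xc[k:]) / len(x)

gammas = np.array([acov(x, k) for k in range(10)])
Sigma_hat = toeplitz(gammas)             # Toeplitz covariance estimate, truncated at lag 9

print(gammas[:4])                        # close to gamma(k) = phi^k / (1 - phi^2)
```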
  • The Multivariate Normal Distribution
The multivariate normal distribution. Patrick Breheny, September 2. Likelihood Theory (BIOS 7110)

Introduction
• Today we will introduce the multivariate normal distribution and attempt to discuss its properties in a fairly thorough manner
• The multivariate normal distribution is by far the most important multivariate distribution in statistics
• It's important for all the reasons that the one-dimensional Gaussian distribution is important, but even more so in higher dimensions because many distributions that are useful in one dimension do not easily extend to the multivariate case

Inverse
• Before we get to the multivariate normal distribution, let's review some important results from linear algebra that we will use throughout the course, starting with inverses
• Definition: The inverse of an n × n matrix A, denoted A^{−1}, is the matrix satisfying AA^{−1} = A^{−1}A = In, where In is the n × n identity matrix.
• Note: We're sort of getting ahead of ourselves by saying that A^{−1} is "the" matrix satisfying
  • Portfolio Allocation with Skewness Risk: a Practical Guide∗
Portfolio Allocation with Skewness Risk: A Practical Guide*

Edmond Lezmi, Quantitative Research, Amundi Asset Management, Paris, [email protected]. Hassan Malongo, Quantitative Research, Amundi Asset Management, Paris, [email protected]. Thierry Roncalli, Quantitative Research, Amundi Asset Management, Paris, [email protected]. Raphaël Sobotka, Multi-Asset Management, Amundi Asset Management, Paris, [email protected]. February 2019 (First version: June 2018)

Abstract. In this article, we show how to take into account skewness risk in portfolio allocation. Until recently, this issue has been seen as a purely statistical problem, since skewness corresponds to the third statistical moment of a probability distribution. However, in finance, the concept of skewness is more related to extreme events that produce portfolio losses. More precisely, the skewness measures the outcome resulting from bad times and adverse scenarios in financial markets. Based on this interpretation of the skewness risk, we focus on two approaches that are closely connected. The first one is based on the Gaussian mixture model with two regimes: a 'normal' regime and a 'turbulent' regime. The second approach directly incorporates a stress scenario using jump-diffusion modeling. This second approach can be seen as a special case of the first approach. However, it has the advantage of being clearer and more in line with the experience of professionals in financial markets: skewness is due to negative jumps in asset prices. After presenting the mathematical framework, we analyze an investment portfolio that mixes risk premia, more specifically risk parity, momentum and carry strategies. We show that traditional portfolio management based on the volatility risk measure is biased and corresponds to a short-sighted approach to bad times.
  • Multivariate Distributions
IEOR E4602: Quantitative Risk Management, Spring 2016. © 2016 by Martin Haugh

Multivariate Distributions

We will study multivariate distributions in these notes, focusing in particular on multivariate normal, normal-mixture, spherical and elliptical distributions. In addition to studying their properties, we will also discuss techniques for simulating and, very briefly, estimating these distributions. Familiarity with these important classes of multivariate distributions is important for many aspects of risk management. We will defer the study of copulas until later in the course.

1 Preliminary Definitions

Let X = (X1, …, Xn) be an n-dimensional vector of random variables. We have the following definitions and statements.

Definition 1 (Joint CDF). For all x = (x1, …, xn)^T ∈ R^n, the joint cumulative distribution function (CDF) of X satisfies F_X(x) = F_X(x1, …, xn) = P(X1 ≤ x1, …, Xn ≤ xn).

Definition 2 (Marginal CDF). For a fixed i, the marginal CDF of Xi satisfies F_{Xi}(xi) = F_X(∞, …, ∞, xi, ∞, …, ∞).

It is straightforward to generalize the previous definition to joint marginal distributions. For example, the joint marginal distribution of Xi and Xj satisfies F_{ij}(xi, xj) = F_X(∞, …, ∞, xi, ∞, …, ∞, xj, ∞, …, ∞). If the joint CDF is absolutely continuous, then it has an associated probability density function (PDF) so that

F_X(x1, …, xn) = ∫_{−∞}^{x1} ⋯ ∫_{−∞}^{xn} f(u1, …, un) du1 ⋯ dun.

Similar statements also apply to the marginal CDFs. A collection of random variables is independent if the joint CDF (or PDF if it exists) can be factored into the product of the marginal CDFs (or PDFs). If X1 = (X1, …, Xk)^T and X2 = (Xk+1, …, Xn)^T is a partition of X then the conditional CDF satisfies

F_{X2|X1}(x2 | x1) = P(X2 ≤ x2 | X1 = x1).

If X has a PDF, f(·), then it satisfies

F_{X2|X1}(x2 | x1) = ∫_{−∞}^{xk+1} ⋯ ∫_{−∞}^{xn} [f(x1, …, xk, uk+1, …, un) / f_{X1}(x1)] du_{k+1} ⋯ dun

where f_{X1}(·) is the joint marginal PDF of X1.
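A sketch (assuming numpy; the bivariate normal and evaluation points are our own illustration) of Definitions 1–2: the joint CDF estimated empirically, with the marginal recovered by sending the other coordinate to ∞:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100000
X = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=N)

def F_joint(x1, x2):
    """Empirical joint CDF: F_X(x1, x2) = P(X1 <= x1, X2 <= x2)."""
    return np.mean((X[:, 0] <= x1) & (X[:, 1] <= x2))

print(F_joint(0.5, 1.0))            # joint CDF at (0.5, 1.0)
print(F_joint(0.5, np.inf))         # marginal F_{X1}(0.5): let x2 -> infinity
print(np.mean(X[:, 0] <= 0.5))      # agrees with the direct marginal estimate
```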
  • The Multivariate Normal Distribution
The Multivariate Normal Distribution

Why should we consider the multivariate normal distribution? It would seem that applied problems are so complex that it would only be interesting from a mathematical perspective.

1. It is mathematically tractable for a large number of problems, and, therefore, progress towards answers to statistical questions can be provided, even if only approximately so.

2. Because it is tractable for so many problems, it provides insight into techniques based upon other distributions or even non-parametric techniques. For this, it is often a benchmark against which other methods are judged.

3. For some problems it serves as a reasonable model of the data. In other instances, transformations can be applied to the set of responses to have the set conform well to multivariate normality.

4. The sampling distributions of many (multivariate) statistics are normal, regardless of the parent distribution (Multivariate Central Limit Theorems). Thus, for large sample sizes, we may be able to make use of results from the multivariate normal distribution to answer our statistical questions, even when the parent distribution is not multivariate normal.

Consider first the univariate normal distribution with parameters µ (the mean) and σ² (the variance) for the random variable x,

f(x) = (1/√(2πσ²)) e^{−(1/2)(x−µ)²/σ²}   (1)

for −∞ < x < ∞, −∞ < µ < ∞, and σ² > 0. Now rewrite the exponent (x − µ)²/σ² using the linear algebra formulation (x − µ)(σ²)^{−1}(x − µ). This formulation matches that for the generalized or Mahalanobis squared distance (x − µ)^T Σ^{−1} (x − µ), where both x and µ are vectors.
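A sketch (assuming numpy; µ, Σ, and x are arbitrary illustrations) of the Mahalanobis squared distance (x − µ)^T Σ^{−1} (x − µ), with a check that in one dimension it reduces to (x − µ)²/σ²:

```python
import numpy as np

mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
x = np.array([2.0, 0.5])

# (x - mu)^T Sigma^{-1} (x - mu), using solve instead of an explicit inverse
d2 = (x - mu) @ np.linalg.solve(Sigma, x - mu)
print(d2)

# 1-D check: with Sigma = [[sigma^2]] this reduces to (x - mu)^2 / sigma^2
d2_1d = np.array([3.0 - 1.0]) @ np.linalg.solve(np.array([[4.0]]), np.array([3.0 - 1.0]))
print(d2_1d, (3.0 - 1.0) ** 2 / 4.0)   # the two values agree
```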