Fitting a Mixture Distribution to Data: Tutorial


Benyamin Ghojogh ([email protected])
Department of Electrical and Computer Engineering, Machine Learning Laboratory, University of Waterloo, Waterloo, ON, Canada

Aydin Ghojogh ([email protected])

Mark Crowley ([email protected])
Department of Electrical and Computer Engineering, Machine Learning Laboratory, University of Waterloo, Waterloo, ON, Canada

Fakhri Karray ([email protected])
Department of Electrical and Computer Engineering, Centre for Pattern Analysis and Machine Intelligence, University of Waterloo, Waterloo, ON, Canada

arXiv:1901.06708v2 [stat.OT] 11 Oct 2020

Abstract

This paper is a step-by-step tutorial for fitting a mixture distribution to data. It merely assumes that the reader has a background in calculus and linear algebra; other required background is briefly reviewed before explaining the main algorithm. In explaining the main algorithm, fitting a mixture of two distributions is detailed first, and examples of fitting two Gaussians and two Poissons, for the continuous and discrete cases respectively, are introduced. Thereafter, fitting several distributions in the general case is explained, and examples with several Gaussians (the Gaussian Mixture Model) and with Poissons are again provided. Model-based clustering, as one of the applications of mixture distributions, is also introduced. Numerical simulations are provided for both the Gaussian and Poisson examples for the sake of better clarification.

1. Introduction

Every random variable can be considered as a sample from a distribution, whether a well-known distribution or a not very well-known (or "ugly") distribution. Some random variables are drawn from one single distribution, such as a normal distribution. But life is not always so easy! Most real-life random variables might have been generated from a mixture of several distributions rather than a single distribution. The mixture distribution is a weighted summation of K distributions {g_1(x; Θ_1), ..., g_K(x; Θ_K)} where the weights {w_1, ..., w_K} sum to one. As is obvious, every distribution in the mixture has its own parameter Θ_k. The mixture distribution is formulated as:

    f(x; \Theta_1, \ldots, \Theta_K) = \sum_{k=1}^{K} w_k \, g_k(x; \Theta_k), \quad \text{subject to} \quad \sum_{k=1}^{K} w_k = 1.    (1)

The distributions can be from different families, for example beta and normal distributions. However, this makes the problem very complex and sometimes useless; therefore, the distributions in a mixture are mostly taken from one family (e.g., all normal distributions) but with different parameters. This paper aims to find the parameters of the distributions in the mixture distribution f(x; Θ) as well as the weights (also called "mixing probabilities") w_k.

The remainder of the paper is organized as follows. Section 2 reviews some technical background required for explaining the main algorithm. Afterwards, the methodology of fitting a mixture distribution to data is explained in Section 3. In that section, first the mixture of two distributions, as a special case of mixture distributions, is introduced and analyzed; then the general mixture distribution is discussed. Meanwhile, examples of mixtures of Gaussians (for the continuous case) and of Poissons (for the discrete case) are given for better clarification. Section 4 briefly introduces clustering as one of the applications of mixture distributions. In Section 5, the discussed methods are implemented through some simulations in order to give a better sense of how these algorithms work. Finally, Section 6 concludes the paper.
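To make equation (1) concrete, the following minimal sketch in Python evaluates and samples a two-component Gaussian mixture. The parameter values, function names, and use of scipy are assumptions chosen only for illustration, not part of the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical two-component Gaussian mixture: f(x) = w1*N(x; mu1, sd1) + w2*N(x; mu2, sd2)
weights = np.array([0.3, 0.7])   # w_k, must sum to one
means = np.array([-2.0, 3.0])    # mu_k
stds = np.array([1.0, 1.5])      # sigma_k

def mixture_pdf(x):
    """Evaluate f(x; Theta_1, ..., Theta_K) = sum_k w_k g_k(x; Theta_k), as in equation (1)."""
    x = np.atleast_1d(x)
    return sum(w * norm.pdf(x, loc=m, scale=s)
               for w, m, s in zip(weights, means, stds))

def sample_mixture(n, rng=np.random.default_rng(0)):
    """Draw n samples: pick a component k with probability w_k, then sample from g_k."""
    ks = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(loc=means[ks], scale=stds[ks])

if __name__ == "__main__":
    xs = np.linspace(-6.0, 8.0, 5)
    print("f(x) at a few points:", mixture_pdf(xs))
    data = sample_mixture(1000)
    print("sample mean:", data.mean())  # should be close to sum_k w_k mu_k = 1.5
```

The two-stage sampling (first a component, then a draw from it) is exactly the generative reading of the weighted sum in equation (1).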
2. Background

This section reviews some technical background required for explaining the main algorithm. This review includes probability and the Bayes rule, probability mass/density functions, expectation, maximum likelihood estimation, expectation maximization, and the Lagrange multiplier.

2.1. Probability and Bayes Rule

If S denotes the total sample space and A denotes an event in this sample space, the probability of event A is:

    P(A) = \frac{|A|}{|S|}.    (2)

The conditional probability, i.e., the probability of occurrence of event A given that event B happens, is:

    P(A|B) = \frac{P(A, B)}{P(B)}    (3)
           = \frac{P(B|A)\, P(A)}{P(B)},    (4)

where P(A|B), P(B|A), P(A), and P(B) are called the posterior, likelihood, prior, and marginal probabilities, respectively. If we assume that the event A consists of some cases A = {A_1, ..., A_n}, we can write:

    P(A_i|B) = \frac{P(B|A_i)\, P(A_i)}{\sum_{j=1}^{n} P(B|A_j)\, P(A_j)}.    (5)

Equations (4) and (5) are two versions of the Bayes rule.

2.2. Probability Mass/Density Function

In discrete cases, the probability mass function is defined as:

    f(x) = P(X = x),    (6)

where X and x are a random variable and a number, respectively. In continuous cases, the probability density function is:

    f(x) = \lim_{\Delta x \to 0} \frac{P(x \le X \le x + \Delta x)}{\Delta x} = \frac{\partial P(X \le x)}{\partial x}.    (7)

In this work, by a mixture of distributions we mean a mixture of mass/density functions.

2.3. Expectation

Expectation is the value of a random variable X on average; it is a weighted average where the weights are the probabilities of the random variable X taking its different values. In the discrete and continuous cases, the expectation is:

    E(X) = \sum_{\text{dom}\, x} x\, f(x),    (8)
    E(X) = \int_{\text{dom}\, x} x\, f(x)\, dx,    (9)

respectively, where dom x is the domain of X. The conditional expectation is defined as:

    E_{X|Y}(X|Y) = \sum_{\text{dom}\, x} x\, f(x|y),    (10)
    E_{X|Y}(X|Y) = \int_{\text{dom}\, x} x\, f(x|y)\, dx,    (11)

for the discrete and continuous cases, respectively.

2.4. Maximum Likelihood Estimation

Assume we have a sample of size n, i.e., {x_1, ..., x_n}. Also assume that we know the distribution from which this sample has been randomly drawn, but we do not know the parameters of that distribution. For example, we know it is drawn from a normal distribution, but the mean and variance of this distribution are unknown. The goal is to estimate the parameters of the distribution using the sample {x_1, ..., x_n} available from it. This estimation of parameters from the available sample is called "point estimation". One of the approaches to point estimation is Maximum Likelihood Estimation (MLE). As its name suggests, MLE deals with the likelihood of the data.

We postulate that the values of the sample, i.e., x_1, ..., x_n, are independent random variates from the sample distribution. In other words, the data have a joint distribution f_X(x_1, ..., x_n | Θ) with parameter Θ, and we assume the variates are independent and identically distributed (iid), i.e., x_i ∼ f_X(x_i; Θ) with the same parameter Θ. Considering the Bayes rule, equation (4), we have:

    L(\Theta | x_1, \ldots, x_n) = \frac{f_X(x_1, \ldots, x_n | \Theta)\, \pi(\Theta)}{f_X(x_1, \ldots, x_n)}.    (12)

MLE aims to find the parameter Θ which maximizes the likelihood:

    \widehat{\Theta} = \arg\max_{\Theta} L(\Theta).    (13)

According to the definition, the likelihood can be written as:

    L(\Theta | x_1, \ldots, x_n) := f(x_1, \ldots, x_n; \Theta) \overset{(a)}{=} \prod_{i=1}^{n} f(x_i; \Theta),    (14)

where (a) is because x_1, ..., x_n are iid. Note that in the literature, L(Θ | x_1, ..., x_n) is also denoted by L(Θ) for simplicity.

Usually, for more convenience, we use the log-likelihood rather than the likelihood:

    \ell(\Theta) := \log L(\Theta)    (15)
    = \log \prod_{i=1}^{n} f(x_i; \Theta) = \sum_{i=1}^{n} \log f(x_i; \Theta).    (16)

Often, the logarithm is the natural logarithm, for the sake of compatibility with the exponential in the well-known normal density function. Notice that as the logarithm is a monotonic function, it does not change the location of the maximum of the likelihood.
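As a concrete illustration of equations (13)-(16), the sketch below estimates the mean and standard deviation of a normal sample by numerically maximizing the log-likelihood and compares the result with the closed-form estimates. The synthetic data, the log-sigma parametrization, and the use of scipy's optimizer are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=500)  # sample assumed drawn from N(mu, sigma^2)

def neg_log_likelihood(params):
    """-l(Theta) = -sum_i log f(x_i; Theta), equation (16), with Theta = (mu, log sigma)."""
    mu, log_sigma = params
    return -np.sum(norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)))

# Equation (13): Theta_hat = argmax L(Theta) = argmin of the negative log-likelihood
res = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

print("numerical MLE:  mu =", mu_hat, " sigma =", sigma_hat)
print("closed form:    mu =", x.mean(), " sigma =", x.std())  # MLE of sigma uses ddof=0
```

For the normal distribution the MLE has the closed form \widehat{\mu} = \bar{x} and \widehat{\sigma}^2 = \frac{1}{n}\sum_i (x_i - \bar{x})^2, so the numerical optimizer should reproduce these up to optimization tolerance.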
2.5. Expectation Maximization

Sometimes, the data are not fully observable. For example, the data may be known only to be either zero or greater than zero. As an illustration, assume data are collected for a particular disease but, for the convenience of the patients who participated in the survey, the severity of the disease is not recorded; only the existence or non-existence of the disease is reported. So the data do not give us complete information, as X_i > 0 does not tell us whether X_i = 2 or X_i = 1000.

In this case, MLE cannot be applied directly, as we do not have access to the complete information and some data are missing. Here, Expectation Maximization (EM) is useful. The main idea of EM can be summarized in this short, friendly conversation:

– What shall we do? The data is missing! The log-likelihood is not known completely, so MLE cannot be used.
– Mmm, probably we can replace the missing data with something...
– Aha! Let us replace it with its mean.

Replacing the missing part of the log-likelihood with its conditional expectation given the observed data is the E-step; the resulting expected log-likelihood is denoted by Q(Θ). In the M-step, the MLE approach is used where the log-likelihood is replaced with its expectation, i.e., Q(Θ); therefore:

    \widehat{\Theta} = \arg\max_{\Theta} Q(\Theta).    (18)

These two steps are iteratively repeated until convergence of the estimated parameters \widehat{\Theta}.

2.6. Lagrange Multiplier

Suppose we have a multivariate function Q(Θ_1, ..., Θ_K) (called the "objective function") and we want to maximize (or minimize) it. However, this optimization is constrained, and its constraint is the equality P(Θ_1, ..., Θ_K) = c, where c is a constant. So the constrained optimization problem is:

    \underset{\Theta_1, \ldots, \Theta_K}{\text{maximize}} \; Q(\Theta_1, \ldots, \Theta_K), \quad \text{subject to} \; P(\Theta_1, \ldots, \Theta_K) = c.    (19)

To solve this problem, we can introduce a new variable α, called the "Lagrange multiplier", and a new function L(Θ_1, ..., Θ_K, α), called the "Lagrangian":

    L(\Theta_1, \ldots, \Theta_K, \alpha) = Q(\Theta_1, \ldots, \Theta_K) - \alpha \big( P(\Theta_1, \ldots, \Theta_K) - c \big).    (20)

Maximizing (or minimizing) this Lagrangian function gives us the solution to the optimization problem (Boyd & Vandenberghe, 2004):

    \nabla_{\Theta_1, \ldots, \Theta_K, \alpha}\, L \overset{\text{set}}{=} 0,    (21)

which gives us:

    \nabla_{\Theta_1, \ldots, \Theta_K}\, L \overset{\text{set}}{=} 0 \implies \nabla_{\Theta_1, \ldots, \Theta_K}\, Q = \alpha\, \nabla_{\Theta_1, \ldots, \Theta_K}\, P,
    \nabla_{\alpha}\, L \overset{\text{set}}{=} 0 \implies P(\Theta_1, \ldots, \Theta_K) = c.
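To see how these background pieces fit together, the following minimal sketch runs EM for a mixture of two Gaussians. It is only an illustration under assumptions (synthetic data, a crude initialization, and a simple stopping rule), not the paper's exact derivation, which is developed later in Section 3. The E-step computes expected component memberships (responsibilities), and the M-step maximizes the expected log-likelihood Q; the weight update w_k = (1/n) Σ_i γ_ik is exactly what the Lagrange-multiplier argument of Section 2.6 yields for the constraint Σ_k w_k = 1.

```python
import numpy as np
from scipy.stats import norm

def em_two_gaussians(x, n_iter=100, tol=1e-8, seed=0):
    """Minimal EM sketch for f(x) = w1*N(mu1, s1^2) + w2*N(mu2, s2^2)."""
    rng = np.random.default_rng(seed)
    # crude initialization (an assumption; any reasonable scheme works)
    w = np.array([0.5, 0.5])
    mu = rng.choice(x, size=2, replace=False)
    sd = np.array([x.std(), x.std()])
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: w_k * g_k(x_i) per component, then responsibilities gamma_ik
        dens = np.stack([w[k] * norm.pdf(x, mu[k], sd[k]) for k in range(2)], axis=1)
        ll = np.log(dens.sum(axis=1)).sum()          # log-likelihood under current parameters
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted MLE updates; the w update is the Lagrange-multiplier result
        nk = gamma.sum(axis=0)
        w = nk / len(x)
        mu = (gamma * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((gamma * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return w, mu, sd

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # synthetic data from a known two-component mixture (hypothetical parameters)
    x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])
    print(em_two_gaussians(x))  # recovered weights, means, and standard deviations
```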