A Bayesian Latent Variable Mixture Model for Filtering Firm Profit Rates


SCHWARTZ CENTER FOR ECONOMIC POLICY ANALYSIS, THE NEW SCHOOL
WORKING PAPER 2014-1, February 2014

Gregor Semieniuk and Ellis Scharfenaker†
Schwartz Center for Economic Policy Analysis and Department of Economics, The New School for Social Research
6 East 16th Street, New York, NY 10003
www.economicpolicyresearch.org

Suggested Citation: Semieniuk, Gregor and Scharfenaker, Ellis. (2014) "A Bayesian Latent Variable Mixture Model for Filtering Firm Profit Rates." Schwartz Center for Economic Policy Analysis and Department of Economics, The New School for Social Research, Working Paper Series.

* This research benefited from discussions with Duncan Foley and Sid Dalal, as well as from suggestions by Mishael Milaković, Amos Golan, Anwar Shaikh, Sid Chib, Fernando Rugitsky, and participants in Sid Dalal's Columbia course "Bayesian Statistics", at the ASSRU seminar in Trento, at the New School Economics Dissertation Workshop, and at the meeting of the International Society for Bayesian Analysis at Duke University. We are grateful to Columbia University Library for access to the COMPUSTAT dataset. Gregor gratefully acknowledges financial support from INET.

† Both Department of Economics, New School for Social Research, New York City, USA. Email addresses: [email protected] (Ellis) and [email protected] (Gregor).

Abstract: By using Bayesian Markov chain Monte Carlo methods we select the proper subset of competitive firms and find striking evidence for Laplace-shaped firm profit rate distributions. Our approach enables us to extract more information from the data than previous research. We filter US firm-level data into signal and noise distributions by Gibbs sampling from a latent variable mixture distribution, extracting a sharply peaked, negatively skewed Laplace-type profit rate distribution. A Bayesian change point analysis yields the subset of large firms with symmetric and stationary Laplace distributed profit rates, adding to the evidence for statistical equilibrium at the economy-wide and sectoral levels.

Keywords: Firm competition, Laplace distribution, Gibbs sampler, Profit rate, Statistical equilibrium

JEL codes: C15, D20, E10, L11

1 Firm Competition and Bayesian Methods

We propose Bayesian computational methods, in particular Markov chain Monte Carlo simulations, for selecting the subset of firms for the analysis of firm profitability under competition. This enables us to extract more information from firm-level data than the exogenously selected subsets used in previous research by Alfarano et al. (2012) on profit rates and, for example, by Stanley et al. (1996), Bottazzi et al. (2001), and Malevergne et al. (2013) on firm growth rates. We obtain striking evidence of a Laplace-shaped profit rate distribution by applying our selection method to US firms in the COMPUSTAT dataset from 1950-2012. This confirms the findings of Alfarano and Milaković (2008) and Alfarano et al. (2012) for a much larger, economy-wide dataset. Moreover, we extend the sectoral results of Bottazzi and Secchi (2003a) for growth rates to profit rates.

The formation of a general rate of profit, around which profit rates gravitate, takes place through competition and capital mobility.
This point was stressed by classical political economists beginning with Adam Smith, who theorized that through competition capital would instinctively move from one area to another in its endless search for higher rates of profit. From this instinctive movement, a tendency toward the equalization of profit rates across all industries would emerge. Excluded from this process of equalization would be industries that are shielded from competition by legislative measures, and monopolies or public enterprises that strive for objectives other than profitability.

Recently, Alfarano et al. (2012) in this journal, and also Alfarano and Milaković (2008), examine firm-level data and find that private firms in all but the financial services participate in the competitive process described by Adam Smith, which they call "classical competition." Interested in determining the shape of the distribution of profit rates, Alfarano et al. (2012) fit a generalized error distribution, also known as the Subbotin distribution, to a sample of firm-level data. They find that throughout their sample the Subbotin shape parameter is close to one for a subset of long-lived firms, implying profit rates are distributed as a Laplace distribution.

The Laplace distribution is already a common result in research on firm growth rates. Stanley et al. (1996); Amaral et al. (1997); Bottazzi et al. (2001); Bottazzi and Secchi (2003a,b) find Laplace distributed, or nearly Laplace distributed (Buldyrev et al., 2007), firm growth rates in particular industries. These studies are part of a larger literature that examines distributions of various firm characteristics (Farmer and Lux, 2008; Lux, 2009). A preview of our results in Figure 1 shows that for our data we find strong evidence for Laplace distributed profit rates by sector as well as for the total economy.

Figure 1: Profit rate histograms on a semi-log plot with maximum likelihood fits in the class of Laplace distributions. Data is pooled over the years 1961-2012, with filtering by the Bayesian mixture and change point models having endogenously removed noise and entry and exit dynamics, as described in sections 3 and 4 of this paper. Results look similar for plots of single years. [Two panels plot histograms against the profit rate r: the upper panel for Agriculture, Construction, Manufacturing, and Mining; the lower panel for Services, Trade, Transport, and All Industries.]
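The maximum likelihood fit in the class of Laplace distributions referenced in the Figure 1 caption has a simple closed form: the location estimate is the sample median and the scale estimate is the mean absolute deviation from that median. A minimal sketch follows, using hypothetical names and simulated data rather than the paper's COMPUSTAT sample:

```python
import numpy as np

def laplace_mle(r):
    """Closed-form maximum likelihood fit of a Laplace distribution.

    For the density f(r) = exp(-|r - mu| / b) / (2 b), the MLE of the
    location mu is the sample median and the MLE of the scale b is the
    mean absolute deviation from that median.
    """
    r = np.asarray(r, dtype=float)
    mu_hat = np.median(r)
    b_hat = np.mean(np.abs(r - mu_hat))
    return mu_hat, b_hat

# Illustrative example with simulated profit rates (not the paper's data).
rng = np.random.default_rng(0)
r_sim = rng.laplace(loc=0.1, scale=0.15, size=10_000)
mu_hat, b_hat = laplace_mle(r_sim)
print(f"location = {mu_hat:.3f}, scale = {b_hat:.3f}")
```

On a semi-log plot the log-density of a Laplace distribution is piecewise linear, a "tent" with its peak at the location parameter, which is why the logarithmic vertical axes in Figure 1 make the fit easy to judge by eye.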
One problem in many such analyses is selecting the proper subset of the data. Some papers examine specific sectors, e.g. the pharmaceutical sector in Bottazzi et al. (2001), or use preselected datasets such as the Fortune 2000 (Alfarano and Milaković, 2008). For more general datasets, subsets are commonly selected using an exogenously determined lower size bound or minimum age. For example, in their seminal paper on the statistical theory of firm growth rates, Simon and Bonini (1958) disregard firms with a capital stock below a lower bound. Subsequently, Hymer and Pashigian (1962) investigate only the thousand largest U.S. firms, and Amaral et al. (1997) and Malevergne et al. (2013) subset their data using various lower bounds on firm revenue. These exogenous cut-offs appear to be necessary to rid the dataset of noise as well as of new or dying firms, which distort the distribution of firms engaged in the competitive process. In their analysis of profit rates, Alfarano et al. (2012) consider only firms that are alive for at least 27 years or for the entire period spanned by the dataset. While their approach is a good first approximation, they necessarily discard valuable information.

In this paper we argue that instead of a more or less arbitrary cut-off, a Bayesian approach enables us to extract more information from the data. As a natural starting point, our prior is based on previous research, which takes the distribution of profit rates to be Laplace. This we take as one of two components of a mixture distribution into which all firms are mapped, regardless of whether they partake in competition or are disruptive noise. In order to decide for each firm whether it belongs to the Laplace distribution, which we designate as the signal distribution, or to the confounding noise, we give a latent variable to each firm observation that is one if the firm belongs to the signal distribution and zero if it does not. We then compute each latent variable's posterior probability of taking on the value one or zero, and thus the firm's probability of being part of the competitive process. If the probability of its being in the signal distribution is not sufficiently high, we discard it. For our dataset, after discarding the noise, we retain nearly 97% of all observations regardless of age or size. Unlike previous studies, this allows us to analyze the profit rate distribution throughout the entire competitive process, preserving significantly more information.
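A minimal sketch of the kind of Gibbs sampler this paragraph describes is given below. It is not the authors' implementation: the excerpt does not specify the noise component or the priors, so the sketch assumes a broad Gaussian noise component with fixed parameters, a Beta(1, 1) prior on the mixture weight, an inverse-gamma prior on the Laplace scale, and a Metropolis-within-Gibbs step for the Laplace location (whose full conditional is non-standard). All names are hypothetical.

```python
import numpy as np

def laplace_logpdf(r, mu, b):
    return -np.abs(r - mu) / b - np.log(2.0 * b)

def normal_logpdf(r, m, s):
    return -0.5 * ((r - m) / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))

def gibbs_signal_noise(r, n_iter=2000, burn_in=500, seed=0,
                       noise_mean=0.0, noise_sd=2.0):
    """Gibbs sampler for a two-component signal/noise mixture.

    Signal: Laplace(mu, b); noise: broad Gaussian with fixed parameters
    (an assumption of this sketch). Latent z_i = 1 marks signal membership.
    Returns each observation's posterior signal probability and the last
    draws of mu, b and the mixture weight pi.
    """
    rng = np.random.default_rng(seed)
    r = np.asarray(r, dtype=float)
    n = r.size

    # Initial values: start with everything treated as signal.
    mu = np.median(r)
    b = np.mean(np.abs(r - mu))
    pi = 0.9
    z_prob_sum = np.zeros(n)
    kept_draws = 0

    for it in range(n_iter):
        # 1. Sample latent indicators z_i | mu, b, pi.
        log_sig = np.log(pi) + laplace_logpdf(r, mu, b)
        log_noi = np.log(1.0 - pi) + normal_logpdf(r, noise_mean, noise_sd)
        p_sig = 1.0 / (1.0 + np.exp(np.clip(log_noi - log_sig, -700, 700)))
        z = rng.random(n) < p_sig

        # 2. Sample mixture weight pi | z with a Beta(1, 1) prior.
        pi = rng.beta(1.0 + z.sum(), 1.0 + n - z.sum())

        # 3. Sample Laplace scale b | mu, z (conjugate inverse-gamma update).
        r_sig = r[z]
        s_abs = np.abs(r_sig - mu).sum()
        b = 1.0 / rng.gamma(shape=1.0 + r_sig.size, scale=1.0 / (1.0 + s_abs))

        # 4. Random-walk Metropolis step for the Laplace location mu.
        mu_prop = mu + 0.01 * rng.standard_normal()
        log_acc = (laplace_logpdf(r_sig, mu_prop, b).sum()
                   - laplace_logpdf(r_sig, mu, b).sum())
        if np.log(rng.random()) < log_acc:
            mu = mu_prop

        if it >= burn_in:
            z_prob_sum += p_sig
            kept_draws += 1

    return z_prob_sum / kept_draws, mu, b, pi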
As a second step, we apply a Bayesian change point analysis to filter out small firms subject to entry-exit dynamics, again using the Laplace distribution as our prior, to demonstrate an alternative way of obtaining a subset of stationary Laplace distributions.¹ Both steps are made possible by using Markov chain Monte Carlo (MCMC) methods.

¹ After this more restrictive selection we discard 45% of our original data, compared to Alfarano et al.
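The excerpt does not detail the change point model, so the following is only a rough sketch of one way such an analysis could look: firms are ordered by a size variable, both segments around a single break are assumed Laplace, the break location gets a uniform prior, and segment parameters are profiled out with their MLEs rather than integrated, which is an approximation to a fully Bayesian treatment. Names and defaults are hypothetical.

```python
import numpy as np

def laplace_profile_loglik(r):
    """Maximized (plug-in MLE) Laplace log-likelihood of a sample."""
    mu = np.median(r)
    b = max(np.mean(np.abs(r - mu)), 1e-12)  # guard degenerate segments
    return -r.size * np.log(2.0 * b) - np.abs(r - mu).sum() / b

def change_point_posterior(r_sorted, min_seg=30):
    """Approximate posterior over a single break in size order.

    r_sorted: profit rates of firms sorted by the size variable.
    Assumes Laplace segments on both sides of the break and a uniform
    prior on the break index; parameters are profiled, not integrated.
    """
    r_sorted = np.asarray(r_sorted, dtype=float)
    n = r_sorted.size
    ks = np.arange(min_seg, n - min_seg)
    loglik = np.array([laplace_profile_loglik(r_sorted[:k])
                       + laplace_profile_loglik(r_sorted[k:]) for k in ks])
    post = np.exp(loglik - loglik.max())
    return ks, post / post.sum()
```

Under these assumptions, `ks[np.argmax(post)]` gives the most probable break index, and firms above it form the candidate subset of large firms with stationary Laplace distributed profit rates.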