A Comparison of Two MCMC Algorithms for Hierarchical Mixture Models

Russell G. Almond*
Florida State University

* Paper presented at the Bayesian Application Workshop at Uncertainty in Artificial Intelligence Conference 2014, Quebec City, Canada.

Abstract

Mixture models form an important class of models for unsupervised learning, allowing data points to be assigned labels based on their values. However, standard mixture model procedures do not deal well with rare components. For example, pause times in student essays have different lengths depending on what cognitive processes a student engages in during the pause. However, instances of student planning (and hence very long pauses) are rare, and thus it is difficult to estimate those parameters from a single student's essays. A hierarchical mixture model eliminates some of those problems by pooling data across several of the higher-level units (in the example, students) to estimate the parameters of the mixture components. One way to estimate the parameters of a hierarchical mixture model is to use MCMC, but these models have several issues, such as non-identifiability under label switching, that make them difficult to estimate using off-the-shelf MCMC tools. This paper looks at the steps necessary to estimate these models using two popular MCMC packages: JAGS (random walk Metropolis algorithm) and Stan (Hamiltonian Monte Carlo). JAGS, Stan and R code to estimate the models and model fit statistics are published along with the paper.

Key words: Mixture Models, Markov Chain Monte Carlo, JAGS, Stan, WAIC

1 Introduction

Mixture models (McLachlan & Peel, 2000) are a frequently used method of unsupervised learning. They sort data points into clusters based on just their values. One of the most frequently used mixture models is a mixture of normal distributions; often the mean and variance of each cluster are learned along with the classification of each data point.

As an example, Almond, Deane, Quinlan, Wagner, and Sydorenko (2012) fit a mixture of lognormal distributions to the pause times of students typing essays as part of a pilot writing assessment. (Alternatively, this model can be described as a mixture of normals fit to the log pause times.) Almond et al. found that mixture models seemed to fit the data fairly well. The mixture components could correspond to different cognitive processes used in writing (Deane, 2012), where each cognitive process takes a different amount of time (i.e., students pause longer when planning than when simply typing).

Mixture models are difficult to fit because they display a number of pathologies. One problem is component identification: simply swapping the labels of Components 1 and 2 produces a model with an identical likelihood. Even if a prior distribution is placed on the mixture component parameters, the posterior is multimodal. Second, it is easy to get a pathological solution in which a mixture component consists of a single point. These solutions are not desirable, and some estimation tools constrain the minimum size of a mixture component (Gruen & Leisch, 2008). Furthermore, if a separate variance is to be estimated for each mixture component, several data points must be assigned to each component in order for the variance estimate to have a reasonable standard error. Therefore, fitting a model with rare components requires a large data size.

Almond et al. (2012) noted these limitations in their conclusions.
First, events corresponding to the highest-level planning components in Deane (2012)'s cognitive model would be relatively rare, and hence would be lumped in with other mixture components due to size restrictions. Second, some linguistic contexts (e.g., between-sentence pauses) were rare enough that fitting a separate mixture model to each student would not be feasible.

One work-around is a hierarchical mixture model. As with all hierarchical models, it requires units at two different levels (in this example, students or essays are Level 2 and individual pauses are Level 1). The assumption behind the hierarchical mixture model is that the mixture components will look similar across the second-level units. Thus, the mean and variance of Mixture Component 1 for Student 1 will look similar to those for Student 2. Li (2013) tried this on some of the writing data.

One problem that frequently arises in estimating mixture models is determining how many mixture components to use. What is commonly done is to estimate models for K = 2, 3, 4, ... up to some small maximum number of components (depending on the size of the data). Then a measure of model–data fit, such as AIC, DIC or WAIC (see Gelman et al., 2013, Chapter 7), is calculated for each model, and the model with the best fit index is chosen. These methods look at the deviance (minus twice the log likelihood of the data) and adjust it with a penalty for model complexity. Both DIC and WAIC require Markov chain Monte Carlo (MCMC) to compute, and require some custom coding for mixture models because of the component identification issue.

This paper is a tutorial for replicating the method used by Li (2013). The paper walks through a script written in the R language (R Core Team, 2014) which performs most of the steps. The actual estimation is done with MCMC using either Stan (Stan Development Team, 2013) or JAGS (Plummer, 2012). The R scripts, along with the Stan and JAGS models and some sample data, are available at http://pluto.coe.fsu.edu/mcmc-hierMM/.

2 Mixture Models

Let i ∈ {1, ..., I} be a set of indexes over the second-level units (students in the example) and let j ∈ {1, ..., J_i} index the first-level units (pause events in the example). A hierarchical mixture model is built by adding a Level 2 (across-student) distribution over the parameters of the Level 1 (within-student) mixture model. Section 2.1 describes the base Level 1 mixture model, and Section 2.2 describes the Level 2 model. Often MCMC requires reparameterization to achieve better mixing (Section 2.3). Also, there are certain parameter values which result in infinite likelihoods; Section 2.4 describes prior distributions and constraints on parameters which keep the Markov chain away from those points.

2.1 Mixture of Normals

Consider a collection of observations, Y_i = (Y_{i,1}, ..., Y_{i,J_i}), for a single student, i. Assume that the process that generated these data is a mixture of K normals. Let Z_{i,j} ~ cat(π_i) be a categorical latent index variable indicating which component Observation j comes from, and let Y*_{i,j,k} ~ N(μ_{i,k}, σ_{i,k}) be the value of Y_{i,j} which would be realized when Z_{i,j} = k.

[Figure 1: Non-hierarchical Mixture Model]

Figure 1 shows this model graphically. The plates indicate replication over categories (k), Level 1 (pauses, j), and Level 2 (students, i) units. Note that there is no connection across plates, so the model fit to each Level 2 unit is independent of all the other Level 2 units. This is what Gelman et al. (2013) call the no pooling case.

The latent variables Z and Y* can be removed from the likelihood for Y_{i,j} by summing over the possible values of Z. The likelihood for one student's data, Y_i, is

L_i(Y_i \mid \pi_i, \mu_i, \sigma_i) = \prod_{j=1}^{J_i} \sum_{k=1}^{K} \pi_{i,k}\,\phi\!\left(\frac{Y_{i,j} - \mu_{i,k}}{\sigma_{i,k}}\right)    (1)

where φ(·) is the unit normal density.
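To make Equation 1 concrete, the R sketch below (mine, not part of the paper's published scripts) evaluates the log of this likelihood for one student. It uses dnorm(), which supplies the normalized normal density, and the log-sum-exp trick to avoid underflow when the number of pauses is large.

```r
## Log-likelihood of one student's data under a K-component normal
## mixture (Equation 1).  y is the vector of (log) pause times;
## pi, mu, and sigma are length-K parameter vectors.
mixLogLik <- function(y, pi, mu, sigma) {
  ## lp[j, k] = log(pi_k) + log f(y_j | mu_k, sigma_k)
  lp <- sapply(seq_along(pi), function(k)
    log(pi[k]) + dnorm(y, mu[k], sigma[k], log = TRUE))
  ## log-sum-exp over components for each observation, then sum over j
  m <- apply(lp, 1, max)
  sum(m + log(rowSums(exp(lp - m))))
}

## Example: a rare second component, as in the planning-pause setting
y <- c(rnorm(95, 0, 1), rnorm(5, 3, 0.5))
mixLogLik(y, pi = c(0.95, 0.05), mu = c(0, 3), sigma = c(1, 0.5))
```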
Although conceptually simple, there are a number of issues that mixture models can have. The first issue is that the component labels cannot be identified from the data. Consider the case with two components. The model created by swapping the labels for Components 1 and 2, with new parameters π′_i = (π_{i,2}, π_{i,1}), μ′_i = (μ_{i,2}, μ_{i,1}), and σ′_i = (σ_{i,2}, σ_{i,1}), has an identical likelihood. For the K-component model, any permutation of the component labels produces a model with identical likelihood. Consequently, the likelihood surface is multimodal. A common solution is to identify the components by placing an ordering constraint on one of the three parameter vectors: π, μ or σ (see the short relabeling sketch below). Section 4.1 returns to this issue in practice.

A second issue involves degenerate solutions in which a mixture component contains only a single data point. If a mixture component has only a single data point, its standard deviation, σ_{i,k}, will go to zero, and π_{i,k} will approach 1/J_i, which is close to zero for large Level 1 data sets. Note that if either π_{i,k} → 0 or σ_{i,k} → 0 for any k, then the likelihood will become singular.

Estimating σ_{i,k} requires a minimum number of data points from Component k (π_{i,k} J_i > 5 is a rough minimum). If π_{i,k} is believed to be small for some k, then a large (Level 1) sample is needed. As K increases, the smallest value of π_{i,k} becomes smaller, so the minimum sample size increases.

2.2 Hierarchical mixtures

[Figure 2: Hierarchical Mixture Model]

Gelman et al. (2013) explains the concept of a hierarchical model using a SAT coaching experiment that [...]. To make the hierarchical model, add across-student prior distributions for the student-specific parameters π_i, μ_i and σ_i. Because JAGS parameterizes the normal distribution with precisions (reciprocal variances) rather than standard deviations, the precisions τ_i are substituted for the standard deviations σ_i. Figure 2 shows the hierarchical model. Completing the model requires specifying distributions for the three Level 1 (student-specific) parameters.
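Before turning to estimation, it can help to see the hierarchical model as a data-generating process. The R sketch below simulates data of this general shape; the hyperparameter values (and the use of a lognormal for the student-specific standard deviations) are illustrative assumptions of mine, not the priors specified in the paper.

```r
## Simulate from a two-level (hierarchical) normal mixture: each
## student i gets its own mixing weights, component means, and
## component SDs, all drawn from shared Level 2 distributions.
## Hyperparameter values below are illustrative only.
set.seed(42)
I <- 20; J <- 100; K <- 2        # students, pauses per student, components
alpha0 <- c(8, 2)                # Dirichlet parameter for mixing weights
mu0    <- c(0, 2)                # Level 2 means of the component means
tau0   <- 4                      # Level 2 precision of the component means

pauses <- lapply(1:I, function(i) {
  g    <- rgamma(K, shape = alpha0)     # Dirichlet draw via normalized gammas
  pi_i <- g / sum(g)                    # student-specific mixing weights
  mu_i <- rnorm(K, mu0, 1 / sqrt(tau0)) # student-specific component means
  sd_i <- rlnorm(K, -0.5, 0.3)          # student-specific component SDs
  z    <- sample(K, J, replace = TRUE, prob = pi_i)  # latent labels Z_ij
  rnorm(J, mu_i[z], sd_i[z])            # observed log pause times Y_ij
})
```

Fitting reverses this process: JAGS or Stan sees only the simulated pauses and must recover both the student-specific parameters and the Level 2 distributions.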

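Returning to the label-switching issue from Section 2.1: one common remedy is an ordering constraint on the component means, applied either inside the model or after sampling. A minimal post-hoc version in R (my sketch, not the paper's published code) reorders a posterior draw so the means are increasing:

```r
## Relabel one posterior draw so that mu is sorted increasingly;
## pi and sigma are permuted consistently.  Applying this to every
## draw imposes the identifiability constraint mu_1 < ... < mu_K.
relabelDraw <- function(pi, mu, sigma) {
  ord <- order(mu)
  list(pi = pi[ord], mu = mu[ord], sigma = sigma[ord])
}

## Example: a draw whose labels have switched
relabelDraw(pi = c(0.3, 0.7), mu = c(4, 0), sigma = c(0.5, 1))
## returns pi = (0.7, 0.3), mu = (0, 4), sigma = (1, 0.5)
```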