Machine Learning with Dirichlet and Beta Process Priors: Theory and Applications

Total Pages: 16

File Type: PDF, Size: 1020 KB

Machine Learning with Dirichlet and Beta Process Priors: Theory and Applications

by John Paisley
Department of Electrical & Computer Engineering, Duke University

Approved: Lawrence Carin (Advisor), David Brady, Rebecca Willett Lu, David Dunson, Mauro Maggioni

Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Electrical & Computer Engineering in the Graduate School of Duke University, 2010.

Copyright © 2010 by John Paisley. All rights reserved except the rights granted by the Creative Commons Attribution-Noncommercial License.

Abstract

Bayesian nonparametric methods are useful for modeling data without having to define the complexity of the entire model a priori, instead allowing this complexity to be determined by the data. Two problems considered in this dissertation are the number of components in a mixture model and the number of factors in a latent factor model, for which the Dirichlet process and the beta process are the two respective Bayesian nonparametric priors selected for handling these issues.

The flexibility of Bayesian nonparametric priors arises from the prior's definition over an infinite-dimensional parameter space. Therefore, there are theoretically an infinite number of latent components and an infinite number of latent factors. Nevertheless, draws from each respective prior will produce only a small number of components or factors that appear in a given data set. As mentioned, the number of these components and factors, and their corresponding parameter values, are left for the data to decide.

This dissertation is split between novel practical applications and novel theoretical results for these priors. For the Dirichlet process, we investigate stick-breaking representations for the finite Dirichlet process and their application to novel sampling techniques, as well as a novel mixture modeling framework that incorporates multiple modalities within a data set. For the beta process, we present a new stick-breaking construction for the infinite-dimensional prior, and consider applications to image interpolation problems and dictionary learning for compressive sensing.

Contents

Abstract
List of Tables
List of Figures
Acknowledgements
Introduction
1 The Dirichlet Process for Mixture Models
  1.1 Abstract
  1.2 The Dirichlet Distribution
    1.2.1 Calculating the Posterior of π
  1.3 The Pólya Urn Process
  1.4 Constructing the Finite-Dimensional Dirichlet Distribution
    1.4.1 Proof of the Construction
  1.5 The Extension to Infinite-Dimensional Spaces
  1.6 Inference for the Dirichlet Process Mixture Model
    1.6.1 Dirichlet Process Mixture Models
    1.6.2 Gibbs Sampling for Dirichlet Process Mixture Models
  1.7 The Expectation and Variance of the Entropy of Dirichlet Processes
    1.7.1 Derivation of the Expectation
    1.7.2 Derivation of the Variance
  1.8 Appendix
2 Sethuraman's Constructive Definition for Finite Mixture Models
  2.1 Abstract
  2.2 Comparing Dir(αg0) and DP(αG0) Priors Using Constructive Definitions
    2.2.1 Statistical Properties of …
    2.2.2 Connecting Dir(αg0) and DP(αG0) Priors Via …
  2.3 Applications of the Construction of Dir(αg0)
    2.3.1 Inference for α Using the Constructive Definition of Dir(αg0)
    2.3.2 Inference for the Hierarchical Dirichlet Process
  2.4 Appendix
3 Dirichlet Processes with Product Base Measures
  3.1 Abstract
  3.2 Introduction
  3.3 The Dirichlet Process with Product Base Measure
    3.3.1 Predicting Values for Missing Modalities
  3.4 MCMC Inference for DP-PBM Mixture Models
  3.5 Applications: The Gaussian-HMM Mixture Model
    3.5.1 Experiment with Synthesized Data
    3.5.2 Major League Baseball Data Set
  3.6 Conclusions
4 The Beta Process for Latent Factor Models
  4.1 Abstract
  4.2 Introduction
  4.3 The Beta Process
    4.3.1 The Marginalized Beta Process and the Indian Buffet Process
    4.3.2 Finite Approximation to the Beta Process
  4.4 Beta Process Factor Analysis
  4.5 Variational Bayesian Inference
    4.5.1 The VB-E Step
    4.5.2 The VB-M Step
    4.5.3 Accelerated VB Inference
    4.5.4 Prediction for New Observations
  4.6 Experiments
    4.6.1 A Synthetic Example
    4.6.2 MNIST Handwritten Digits Dataset
    4.6.3 HGDP-CEPH Cell Line Panel
    4.6.4 Learning Dictionaries for Compressive Sensing Applications
  4.7 Conclusion
5 A Stick-Breaking Construction of the Beta Process
  5.1 Abstract
  5.2 Introduction
  5.3 The Beta Process
    5.3.1 A Construction of the Beta Distribution
    5.3.2 Related Work
  5.4 A Stick-Breaking Construction of the Beta Process
    5.4.1 Derivation of the Construction
  5.5 Inference for the Stick-Breaking Construction
    5.5.1 Inference for d_k
    5.5.2 Inference for γ
    5.5.3 Inference for α
    5.5.4 Inference for p(z_nk = 1 | α, d_k, Z_prev)
  5.6 Experiments
    5.6.1 Synthetic Data
    5.6.2 MNIST Handwritten Digits
    5.6.3 Time-Evolving Gene Expression Data
  5.7 Conclusion
  5.8 Appendix
6 Image Interpolation Using Dirichlet and Beta Process Priors
  6.1 Abstract
  6.2 Introduction
  6.3 The Model
    6.3.1 Handling Missing Data
  6.4 Model Inference
    6.4.1 Maximum A Posteriori Updates and Collapsed Probabilities
    6.4.2 Gibbs Sampling of Latent Indicators
  6.5 Related Algorithms
    6.5.1 Orthogonal Matching Pursuits
    6.5.2 Method of Optimal Directions
    6.5.3 K-SVD
    6.5.4 Iterative Minimum Mean Squared Error
  6.6 Experiments
  6.7 Conclusion
7 Conclusion
Bibliography
Biography

List of Tables

6.1 Average per-iteration run time for algorithms as a function of percent missing data (castle image). The comparison is not meaningful for the iMMSE algorithm, which is significantly faster.
6.2 Average per-iteration run time for algorithms as a function of percent missing data for the hyperspectral image problem using 3 × 3 × 210 patches.
6.3 Average per-iteration run time for algorithms as a function of percent missing data for the hyperspectral image problem using 4 × 4 × 210 patches.

List of Figures

1 (a) The original data set. (b) Clustering results for a Gaussian mixture model using a sparsity-promoting Dirichlet prior on the mixing weights, learned using VB-EM inference. Of the initial 20 components, only 3 are ultimately used and shown. (c) Clustering results for a Gaussian mixture model using the maximum likelihood EM algorithm to learn the model parameters. All 20 components are used by the data, resulting in clear overfitting.
2 The empirical Kullback-Leibler divergence between the true underlying HMM and the learned HMM using the ML-EM algorithm (blue) and the fully Bayesian VB-EM algorithm (red). This figure is taken from [52].
3 The RMSE of the interpolated matrix values to the true values for a matrix completion problem where the symmetric matrix is modeled as X = Φ^T Φ + E. The x-axis is a function of an increasing number of measurements. This figure is taken from [55].
1.1 10,000 samples from a 3-dimensional Dirichlet distribution with g0 uniform and (a) α = 1, (b) α = 3, (c) α = 10. As can be seen, (a) when α < 3, the samples concentrate near the vertices and edges of ∆3; (b) when α = 3, the density is uniform; and (c) when …
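The abstract above leans on two constructions that are simple to prototype: truncated stick-breaking for the Dirichlet process and a finite approximation to the beta process. The sketch below is a hedged illustration only (it is not code from the dissertation); the truncation levels, the Gaussian base measure, and the Beta(a/K, b(K−1)/K) parameterization of the finite beta process approximation are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_stick_breaking(alpha, base_sampler, truncation=100):
    """Truncated stick-breaking draw from DP(alpha * G0): weights plus atoms."""
    v = rng.beta(1.0, alpha, size=truncation)                   # stick proportions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    weights = v * remaining                                     # pi_k = v_k * prod_{j<k} (1 - v_j)
    return weights, base_sampler(truncation)

def finite_beta_process(a, b, K, base_sampler):
    """One common K-atom approximation to the beta process: pi_k ~ Beta(a/K, b(K-1)/K)."""
    pis = rng.beta(a / K, b * (K - 1) / K, size=K)
    return pis, base_sampler(K)

# Gaussian base measure; only a few DP weights are non-negligible, matching the
# "small number of components appear in a given data set" point in the abstract.
w, atoms = dp_stick_breaking(alpha=3.0, base_sampler=lambda n: rng.normal(size=n))
print(np.sort(w)[::-1][:5].round(3))

pi, factors = finite_beta_process(a=2.0, b=2.0, K=50, base_sampler=lambda n: rng.normal(size=n))
print(pi[:5].round(3))
```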
Recommended publications
  • Connection Between Dirichlet Distributions and a Scale-Invariant Probabilistic Model Based on Leibniz-Like Pyramids
    Journal of Statistical Mechanics: Theory and Experiment, J. Stat. Mech. (2014) P12027. Connection between Dirichlet distributions and a scale-invariant probabilistic model based on Leibniz-like pyramids. A Rodríguez1 and C Tsallis2,3. 1 Departamento de Matemática Aplicada a la Ingeniería Aeroespacial, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros s/n, 28040 Madrid, Spain. 2 Centro Brasileiro de Pesquisas Físicas and Instituto Nacional de Ciência e Tecnologia de Sistemas Complexos, Rua Xavier Sigaud 150, 22290-180 Rio de Janeiro, Brazil. 3 Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA. E-mail: [email protected] and [email protected]. Received 4 November 2014, accepted for publication 30 November 2014, published 22 December 2014. Online at stacks.iop.org/JSTAT/2014/P12027, doi:10.1088/1742-5468/2014/12/P12027.
    Abstract. We show that the N → ∞ limiting probability distributions of a recently introduced family of d-dimensional scale-invariant probabilistic models based on Leibniz-like (d + 1)-dimensional hyperpyramids (Rodríguez and Tsallis 2012 J. Math. Phys. 53 023302) are given by Dirichlet distributions for d = 1, 2, ...
  • Distribution of Mutual Information
    Distribution of Mutual Information. Marcus Hutter, IDSIA, Galleria 2, CH-6928 Manno-Lugano, Switzerland. [email protected], http://www.idsia.ch/~marcus
    Abstract. The mutual information of two random variables ı and ȷ with joint probabilities {π_ij} is commonly used in learning Bayesian nets as well as in many other fields. The chances π_ij are usually estimated by the empirical sampling frequency n_ij/n, leading to a point estimate I(n_ij/n) for the mutual information. To answer questions like "is I(n_ij/n) consistent with zero?" or "what is the probability that the true mutual information is much larger than the point estimate?" one has to go beyond the point estimate. In the Bayesian framework one can answer these questions by utilizing a (second order) prior distribution p(π) comprising prior information about π. From the prior p(π) one can compute the posterior p(π|n), from which the distribution p(I|n) of the mutual information can be calculated. We derive reliable and quickly computable approximations for p(I|n). We concentrate on the mean, variance, skewness, and kurtosis, and non-informative priors. For the mean we also give an exact expression. Numerical issues and the range of validity are discussed.
    1 Introduction. The mutual information I (also called cross entropy) is a widely used information theoretic measure for the stochastic dependency of random variables [CT91, Soo00]. It is used, for instance, in learning Bayesian nets [Bun96, Hec98], where stochastically dependent nodes shall be connected. The mutual information defined in (1) can be computed if the joint probabilities {π_ij} of the two random variables ı and ȷ are known.
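To make the quantity being discussed concrete, here is a minimal sketch of the plug-in point estimate I(n_ij/n) computed from a table of joint counts; the example counts are hypothetical, and this is only the point estimate that the paper's Bayesian treatment goes beyond.

```python
import numpy as np

def mutual_information_plugin(counts):
    """Plug-in estimate I(n_ij / n) of mutual information from a contingency table of counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()               # empirical joint frequencies pi_ij ~ n_ij / n
    pi = p.sum(axis=1, keepdims=True)       # row marginals
    pj = p.sum(axis=0, keepdims=True)       # column marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / (pi * pj)), 0.0)
    return terms.sum()

# Hypothetical 2x2 table of joint counts; result is in nats.
print(mutual_information_plugin([[30, 10], [10, 30]]))
```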
  • Partnership As Experimentation: Business Organization and Survival in Egypt, 1910–1949
    Yale University, EliScholar – A Digital Platform for Scholarly Publishing at Yale. Discussion Papers, Economic Growth Center. 5-1-2017.
    Partnership as Experimentation: Business Organization and Survival in Egypt, 1910–1949. Cihan Artunç and Timothy Guinnane.
    Recommended Citation: Artunç, Cihan and Guinnane, Timothy, "Partnership as Experimentation: Business Organization and Survival in Egypt, 1910–1949" (2017). Discussion Papers. 1065. https://elischolar.library.yale.edu/egcenter-discussion-paper-series/1065
    This Discussion Paper is brought to you for free and open access by the Economic Growth Center at EliScholar – A Digital Platform for Scholarly Publishing at Yale. It has been accepted for inclusion in Discussion Papers by an authorized administrator of EliScholar. For more information, please contact [email protected].
    ECONOMIC GROWTH CENTER, YALE UNIVERSITY, P.O. Box 208269, New Haven, CT 06520-8269, http://www.econ.yale.edu/~egcenter. Economic Growth Center Discussion Paper No. 1057.
    Partnership as Experimentation: Business Organization and Survival in Egypt, 1910–1949. Cihan Artunç, University of Arizona; Timothy W. Guinnane, Yale University. This Draft: May 2017.
    Notes: Center discussion papers are preliminary materials circulated to stimulate discussion and critical comments. This paper can be downloaded without charge from the Social Science Research Network Electronic Paper Collection: https://ssrn.com/abstract=2973315
    Abstract. The relationship between legal forms of firm organization and economic development remains poorly understood. Recent research disputes the view that the joint-stock corporation played a crucial role in historical economic development, but retains the view that the costless firm dissolution implicit in non-corporate forms is detrimental to investment.
  • A Model of Gene Expression Based on Random Dynamical Systems Reveals Modularity Properties of Gene Regulatory Networks†
    A Model of Gene Expression Based on Random Dynamical Systems Reveals Modularity Properties of Gene Regulatory Networks†
    Fernando Antoneli1,4,*, Renata C. Ferreira3, Marcelo R. S. Briones2,4. 1 Departamento de Informática em Saúde, Escola Paulista de Medicina (EPM), Universidade Federal de São Paulo (UNIFESP), SP, Brasil. 2 Departamento de Microbiologia, Imunologia e Parasitologia, Escola Paulista de Medicina (EPM), Universidade Federal de São Paulo (UNIFESP), SP, Brasil. 3 College of Medicine, Pennsylvania State University (Hershey), PA, USA. 4 Laboratório de Genômica Evolutiva e Biocomplexidade, EPM, UNIFESP, Ed. Pesquisas II, Rua Pedro de Toledo 669, CEP 04039-032, São Paulo, Brasil.
    Abstract. Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network, given that the dynamics of each node is modeled by an RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis, that allows one to extract relevant dynamical and statistical information without solving ...
  • POISSON PROCESSES 1.1. the Rutherford-Chadwick-Ellis
    POISSON PROCESSES. 1. THE LAW OF SMALL NUMBERS.
    1.1. The Rutherford-Chadwick-Ellis Experiment. About 90 years ago Ernest Rutherford and his collaborators at the Cavendish Laboratory in Cambridge conducted a series of pathbreaking experiments on radioactive decay. In one of these, a radioactive substance was observed in N = 2608 time intervals of 7.5 seconds each, and the number of decay particles reaching a counter during each period was recorded. The table below shows the number N_k of these time periods in which exactly k decays were observed for k = 0, 1, 2, ..., 9. Also shown is N·p_k, where
        p_k = (3.87)^k exp(−3.87) / k!
    The parameter value 3.87 was chosen because it is the mean number of decays per period for Rutherford's data.
        k     N_k    N·p_k        k      N_k    N·p_k
        0      57     54.4        6      273    253.8
        1     203    210.5        7      139    140.3
        2     383    407.4        8       45     67.9
        3     525    525.5        9       27     29.2
        4     532    508.4      ≥10       16     17.1
        5     408    393.5
    This is typical of what happens in many situations where counts of occurrences of some sort are recorded: the Poisson distribution often provides an accurate – sometimes remarkably accurate – fit. Why?
    1.2. Poisson Approximation to the Binomial Distribution. The ubiquity of the Poisson distribution in nature stems in large part from its connection to the Binomial and Hypergeometric distributions. The Binomial-(N, p) distribution is the distribution of the number of successes in N independent Bernoulli trials, each with success probability p.
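A quick sketch reproducing the expected-count column N·p_k of the table from the stated Poisson formula; the observed counts are copied from the excerpt, and everything else follows directly from p_k = (3.87)^k e^{−3.87}/k!.

```python
import math

N, lam = 2608, 3.87
observed = {0: 57, 1: 203, 2: 383, 3: 525, 4: 532,
            5: 408, 6: 273, 7: 139, 8: 45, 9: 27}

for k, Nk in observed.items():
    pk = lam**k * math.exp(-lam) / math.factorial(k)          # Poisson probability p_k
    print(f"k={k:2d}  observed={Nk:4d}  expected={N * pk:6.1f}")

# The ">= 10" row is N times the Poisson tail probability.
tail = 1.0 - sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(10))
print(f"k>=10 observed=  16  expected={N * tail:6.1f}")
```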
  • Lecture 24: Dirichlet Distribution and Dirichlet Process
    Stat260: Bayesian Modeling and Inference. Lecture Date: April 28, 2010. Lecture 24: Dirichlet distribution and Dirichlet Process. Lecturer: Michael I. Jordan. Scribe: Vivek Ramamurthy.
    1 Introduction. In the last couple of lectures, in our study of Bayesian nonparametric approaches, we considered the Chinese Restaurant Process, Bayesian mixture models, stick breaking, and the Dirichlet process. Today, we will try to gain some insight into the connection between the Dirichlet process and the Dirichlet distribution.
    2 The Dirichlet distribution and Pólya urn. First, we note an important relation between the Dirichlet distribution and the Gamma distribution, which is used to generate random vectors which are Dirichlet distributed. If, for i ∈ {1, 2, ..., K}, Z_i ∼ Gamma(α_i, β) independently, then
        S = Σ_{i=1}^{K} Z_i ∼ Gamma(Σ_{i=1}^{K} α_i, β)
    and
        V = (V_1, ..., V_K) = (Z_1/S, ..., Z_K/S) ∼ Dir(α_1, ..., α_K).
    Now, consider the following Pólya urn model. Suppose that X_i is the color of the ith draw, X is the (discrete) space of colors, and α(k) is the number of balls of color k initially in the urn. We then have that
        p(X_i = k | X_1, ..., X_{i−1}) = (α(k) + Σ_{j<i} δ_{X_j}(k)) / (α(X) + i − 1)
    where δ_{X_j}(k) = 1 if X_j = k and 0 otherwise, and α(X) = Σ_k α(k). It may then be shown that
        p(X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = (α(x_1) / α(X)) Π_{i=2}^{n} (α(x_i) + Σ_{j<i} δ_{X_j}(x_i)) / (α(X) + i − 1)
          = α(1)[α(1)+1]···[α(1)+m_1−1] · α(2)[α(2)+1]···[α(2)+m_2−1] ··· α(C)[α(C)+1]···[α(C)+m_C−1] / (α(X)[α(X)+1]···[α(X)+n−1])
    where 1, 2, ..., C are the distinct colors that appear in x_1, ..., x_n and m_k = Σ_{i=1}^{n} 1{X_i = k}.
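Both relations in the excerpt are short to simulate. The sketch below (an illustration, not the lecture's code) draws a Dirichlet vector by normalizing independent Gamma variables and runs the Pólya urn with the stated conditional probabilities; the hyperparameter values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def dirichlet_via_gamma(alphas, beta=1.0):
    """V_i = Z_i / S with Z_i ~ Gamma(alpha_i, beta) gives V ~ Dir(alpha_1, ..., alpha_K)."""
    z = rng.gamma(shape=np.asarray(alphas, dtype=float), scale=1.0 / beta)
    return z / z.sum()

def polya_urn(alpha, n_draws):
    """P(X_i = k | past) = (alpha(k) + #{j < i : X_j = k}) / (alpha(X) + i - 1)."""
    counts = np.asarray(alpha, dtype=float).copy()
    draws = []
    for _ in range(n_draws):
        k = rng.choice(len(counts), p=counts / counts.sum())
        counts[k] += 1.0                     # reinforce the drawn color
        draws.append(int(k))
    return draws

print(dirichlet_via_gamma([2.0, 1.0, 0.5]).round(3))
print(polya_urn([1.0, 1.0, 1.0], n_draws=10))
```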
  • The Exciting Guide to Probability Distributions – Part 2
    The Exciting Guide To Probability Distributions – Part 2. Jamie Frost – v1.1.
    Contents (Part 2): A revisit of the multinomial distribution; The Dirichlet Distribution; The Beta Distribution; Conjugate Priors; The Gamma Distribution.
    We saw in the last part that the multinomial distribution was over counts of outcomes, given the probability of each outcome and the total number of outcomes:
        f(x_1, ..., x_k | n, p_1, ..., p_k) = [n! / ∏ x_i!] ∏ p_i^{x_i}
    where the x_i are the counts of each outcome and the p_i the probability of each outcome. That's all smashing, but suppose we wanted to know the reverse, i.e. the probability that the distribution underlying our random variable has outcome probabilities of p_1, ..., p_k, given that we observed each outcome x_1, ..., x_k times. In other words, we are considering all the possible probability distributions (p_1, ..., p_k) that could have generated these counts, rather than all the possible counts given a fixed distribution.
    Initial attempt at a probability mass function: just swap the domain and the parameters (the RHS is exactly the same):
        f(p_1, ..., p_k | n, x_1, ..., x_k) = [n! / ∏ x_i!] ∏ p_i^{x_i}
    Notational convention is that we define the support as a vector x, so let's relabel p as x, and the counts x as α:
        f(x_1, ..., x_k | n, α_1, ..., α_k) = [n! / ∏ α_i!] ∏ x_i^{α_i}
    We can define n just as the sum of the counts:
        f(x_1, ..., x_k | α_1, ..., α_k) = [(∑ α_i)! / ∏ α_i!] ∏ x_i^{α_i}
    But wait, we're not quite there yet. We know that probabilities have to sum to 1, so we need to restrict the domain we can draw from: s.t. …
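For comparison with the informal construction above: the properly normalized Dirichlet density that this argument is heading toward replaces the factorial-based constant with Gamma functions and uses exponents α_i − 1,

    f(x_1, ..., x_K | α_1, ..., α_K) = [ Γ(∑_i α_i) / ∏_i Γ(α_i) ] ∏_i x_i^{α_i − 1},   with x_i ≥ 0 and ∑_i x_i = 1.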
  • Hyperprior on Symmetric Dirichlet Distribution
    Hyperprior on symmetric Dirichlet distribution. Jun Lu, Computer Science, EPFL, Lausanne. [email protected]. November 5, 2018.
    Abstract. In this article we introduce how to put a vague hyperprior on the Dirichlet distribution, and we update its parameter by adaptive rejection sampling (ARS). Finally we analyze this hyperprior in an over-fitted mixture model with some synthetic experiments.
    1 Introduction. It has become popular to use over-fitted mixture models in which the number of clusters K is chosen as a conservative upper bound on the number of components, under the expectation that only relatively few of the components K0 will be occupied by data points in the samples X. This kind of over-fitted mixture model has been successful due to the ease in computation. Previously, Rousseau & Mengersen (2011) proved that, quite generally, the posterior behaviour of overfitted mixtures depends on the chosen prior on the weights and on the number of free parameters in the emission distributions (here D, i.e. the dimension of the data). Specifically, they proved that (a) if α = min(α_k, k ≤ K) > D/2 and if the number of components is larger than it should be, asymptotically two or more components in an overfitted mixture model will tend to merge with non-negligible weights; (b) in contrast, if α = max(α_k, k ≤ K) < D/2, the extra components are emptied at a rate of N^{−1/2}. Hence, if none of the components are small, it implies that K is probably not larger than K0. In the intermediate case, if min(α_k, k ≤ K) ≤ D/2 ≤ max(α_k, k ≤ K), then the situation varies depending on the α_k's and on the difference between K and K0.
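As a rough numerical illustration of the Rousseau & Mengersen behaviour described above (not the paper's ARS-based method), the sketch below fits a deliberately over-fitted variational Bayesian Gaussian mixture with a symmetric Dirichlet prior on the weights, once with concentration well below D/2 and once well above it; the data set, thresholds, and hyperparameter values are assumptions made for this example.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
# Synthetic data: K0 = 3 true clusters in D = 2 dimensions, so D/2 = 1
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2))
               for c in ([0.0, 0.0], [3.0, 0.0], [0.0, 3.0])])

for alpha in (0.01, 10.0):                       # below vs. above D/2
    model = BayesianGaussianMixture(
        n_components=10,                         # over-fitted: K much larger than K0
        weight_concentration_prior_type="dirichlet_distribution",
        weight_concentration_prior=alpha,
        max_iter=500,
        random_state=0,
    ).fit(X)
    used = int(np.sum(model.weights_ > 0.01))
    print(f"alpha = {alpha:5.2f}: components with weight > 1% -> {used}")
```

With the small concentration the extra components are essentially emptied, while the large concentration keeps many components with non-negligible weight; this is only a variational caricature of the asymptotic result, not a substitute for it.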
  • Hierarchical Dirichlet Process Hidden Markov Model for Unsupervised Bioacoustic Analysis
    Hierarchical Dirichlet Process Hidden Markov Model for Unsupervised Bioacoustic Analysis. Marius Bartcus, Faicel Chamroukhi, Herve Glotin.
    Abstract. Hidden Markov Models (HMMs) are one of the most popular and successful models in statistics and machine learning for modeling sequential data. However, one main issue in HMMs is the one of choosing the number of hidden states. The Hierarchical Dirichlet Process (HDP)-HMM is a Bayesian non-parametric alternative for standard HMMs that offers a principled way to tackle this challenging problem by relying on a Hierarchical Dirichlet Process (HDP) prior. We investigate the HDP-HMM in a challenging problem of unsupervised learning from bioacoustic data by using Markov-Chain Monte Carlo (MCMC) sampling techniques, namely the Gibbs sampler. We consider a real problem of fully unsupervised humpback whale song decomposition. It consists in simultaneously finding the structure of hidden whale song units, and automatically inferring the unknown number of the hidden units from the Mel …
    The standard HDP-HMM Gibbs sampling has the limitation of an inadequate modeling of the temporal persistence of states [9]. This problem has been addressed in [9] by relying on a sticky extension which allows a more robust learning. Other solutions for the inference of the hidden Markov model in this infinite state space models are using the Beam sampling [10] rather than Gibbs sampling. We investigate the BNP formulation for the HMM, that is the HDP-HMM, in a challenging problem of unsupervised learning from bioacoustic data. The problem consists of extracting and classifying, in a fully unsupervised way, unknown number of whale song units. We use the Gibbs sampler to infer the HDP-HMM from the bioacoustic data.
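To make the HDP prior on the HMM concrete, here is a minimal weak-limit (truncated) sketch of how the transition structure is generated: global state weights from a stick-breaking draw, and each transition row drawn as Dir(α·β). The truncation level and hyperparameter values are assumptions, and the sketch shows only the prior, not the Gibbs sampler or the sticky extension discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def hdp_hmm_transition_prior(gamma, alpha, L=20):
    """Weak-limit sample of the HDP-HMM transition structure with truncation L."""
    v = rng.beta(1.0, gamma, size=L)                                 # stick-breaking for global weights
    beta = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    beta /= beta.sum()                                               # renormalize after truncation
    P = np.vstack([rng.dirichlet(alpha * beta) for _ in range(L)])   # row j: pi_j ~ Dir(alpha * beta)
    return beta, P

beta, P = hdp_hmm_transition_prior(gamma=2.0, alpha=5.0)
print("states with global weight > 1%:", int(np.sum(beta > 0.01)))
print(P[0].round(3))
```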
  • A Dirichlet Process Mixture Model of Discrete Choice Arxiv:1801.06296V1
    A Dirichlet Process Mixture Model of Discrete Choice. 19 January, 2018. arXiv:1801.06296v1 [stat.AP].
    Rico Krueger (corresponding author), Research Centre for Integrated Transport Innovation, School of Civil and Environmental Engineering, UNSW Australia, Sydney NSW 2052, Australia, [email protected]. Akshay Vij, Institute for Choice, University of South Australia, 140 Arthur Street, North Sydney NSW 2060, Australia, [email protected]. Taha H. Rashidi, Research Centre for Integrated Transport Innovation, School of Civil and Environmental Engineering, UNSW Australia, Sydney NSW 2052, Australia, [email protected].
    Abstract. We present a mixed multinomial logit (MNL) model, which leverages the truncated stick-breaking process representation of the Dirichlet process as a flexible nonparametric mixing distribution. The proposed model is a Dirichlet process mixture model and accommodates discrete representations of heterogeneity, like a latent class MNL model. Yet, unlike a latent class MNL model, the proposed discrete choice model does not require the analyst to fix the number of mixture components prior to estimation, as the complexity of the discrete mixing distribution is inferred from the evidence. For posterior inference in the proposed Dirichlet process mixture model of discrete choice, we derive an expectation maximisation algorithm. In a simulation study, we demonstrate that the proposed model framework can flexibly capture differently-shaped taste parameter distributions. Furthermore, we empirically validate the model framework in a case study on motorists' route choice preferences and find that the proposed Dirichlet process mixture model of discrete choice outperforms a latent class MNL model and mixed MNL models with common parametric mixing distributions in terms of both in-sample fit and out-of-sample predictive ability.
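A hedged sketch of the core idea, not the authors' estimator: mixture weights come from a truncated stick-breaking representation of the Dirichlet process, each component carries its own MNL taste vector, and the mixed MNL choice probabilities are the weight-averaged component probabilities. All dimensions, priors, and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def truncated_stick_breaking(alpha, K):
    """Truncated stick-breaking mixing weights for a DP with concentration alpha."""
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0                              # close the last stick so the weights sum to 1
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

def mnl_probs(beta, X):
    """Multinomial logit choice probabilities over the rows (alternatives) of X."""
    u = X @ beta                             # systematic utilities
    e = np.exp(u - u.max())
    return e / e.sum()

K, J, D = 5, 3, 2                            # components, alternatives, attributes (hypothetical)
weights = truncated_stick_breaking(alpha=1.0, K=K)
betas = rng.normal(size=(K, D))              # component-specific taste parameters
X = rng.normal(size=(J, D))                  # attributes of the J alternatives

# Mixed MNL: average the component-level MNL probabilities under the mixing weights.
probs = sum(w * mnl_probs(b, X) for w, b in zip(weights, betas))
print(weights.round(3), probs.round(3))
```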
  • Contents-Preface
    Stochastic Processes: From Applications to Theory. CHAPMAN & HALL/CRC Texts in Statistical Science Series.
    Series Editors: Francesca Dominici, Harvard School of Public Health, USA; Julian J. Faraway, University of Bath, UK; Martin Tanner, Northwestern University, USA; Jim Zidek, University of British Columbia, Canada.
    Titles in the series include: Statistical Theory: A Concise Introduction, F. Abramovich and Y. Ritov; Practical Multivariate Analysis, Fifth Edition, A. Afifi, S. May, and V.A. Clark; Practical Statistics for Medical Research, D.G. Altman; Interpreting Data: A First Course in Statistics, A.J.B. Anderson; Introduction to Probability with R, K. Baclawski; Linear Algebra and Matrix Analysis for Statistics, S. Banerjee and A. Roy; Mathematical Statistics: Basic Ideas and Selected Topics, Volume I, Second Edition, P. J. Bickel and K. A. Doksum; Mathematical Statistics: Basic Ideas and Selected Topics, Volume II, P. J. Bickel and K. A. Doksum; Analysis of Categorical Data with R, C. R. Bilder and T. M. Loughin; Statistical Methods for SPC and TQM, …; Statistics for Technology: A Course in Applied Statistics, Third Edition, C. Chatfield; Analysis of Variance, Design, and Regression: Linear Modeling for Unbalanced Data, Second Edition, R. Christensen; Bayesian Ideas and Data Analysis: An Introduction for Scientists and Statisticians, R. Christensen, W. Johnson, A. Branscum, and T.E. Hanson; Modelling Binary Data, Second Edition, D. Collett; Modelling Survival Data in Medical Research, Third Edition, D. Collett; Introduction to Statistical Methods for Clinical Trials, T.D. Cook and D.L. DeMets; Applied Statistics: Principles and Examples, D.R. Cox and E.J. Snell; Multivariate Survival Analysis and Competing Risks, M. …
  • Notes on Stochastic Processes
    Notes on stochastic processes. Paul Keeler. March 20, 2018. This work is licensed under a "CC BY-SA 3.0" license.
    Abstract. A stochastic process is a type of mathematical object studied in mathematics, particularly in probability theory, which can be used to represent some type of random evolution or change of a system. There are many types of stochastic processes with applications in various fields outside of mathematics, including the physical sciences, social sciences, finance and economics, as well as engineering and technology. This survey aims to give an accessible but detailed account of various stochastic processes by covering their history, various mathematical definitions, and key properties, as well as detailing various terminology and applications of the process. An emphasis is placed on non-mathematical descriptions of key concepts, with recommendations for further reading.
    1 Introduction. In probability and related fields, a stochastic or random process, which is also called a random function, is a mathematical object usually defined as a collection of random variables. Historically, the random variables were indexed by some set of increasing numbers, usually viewed as time, giving the interpretation of a stochastic process representing numerical values of some random system evolving over time, such as the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule [120, page 7][51, pages 46 and 47][66, page 1]. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. They have applications in many disciplines including physical sciences such as biology [67, 34], chemistry [156], ecology [16][104], neuroscience [102], and physics [63], as well as technology and engineering fields such as image and signal processing [53], computer science [15], information theory [43, page 71], and telecommunications [97][11][12].