Latent Gaussian Processes for Distribution Estimation of Multivariate Categorical Data


Latent Gaussian Processes for Distribution Estimation of Multivariate Categorical Data

Yarin Gal [email protected] · Yutian Chen [email protected] · Zoubin Ghahramani [email protected]
University of Cambridge

arXiv:1503.02182v1 [stat.ML] 7 Mar 2015

Abstract

Multivariate categorical data occur in many applications of machine learning. One of the main difficulties with these vectors of categorical variables is sparsity. The number of possible observations grows exponentially with vector length, but dataset diversity might be poor in comparison. Recent models have gained significant improvement in supervised tasks with this data. These models embed observations in a continuous space to capture similarities between them. Building on these ideas we propose a Bayesian model for the unsupervised task of distribution estimation of multivariate categorical data. We model vectors of categorical variables as generated from a non-linear transformation of a continuous latent space. Non-linearity captures multi-modality in the distribution. The continuous representation addresses sparsity. Our model ties together many existing models, linking the linear categorical latent Gaussian model, the Gaussian process latent variable model, and Gaussian process classification. We derive inference for our model based on recent developments in sampling based variational inference. We show empirically that the model outperforms its linear and discrete counterparts in imputation tasks of sparse data.

1 Introduction

Multivariate categorical data is common in fields ranging from language processing to medical diagnosis. Recently proposed supervised models have gained significant improvement in tasks involving big labelled data of this form (see for example Bengio et al. (2006); Collobert & Weston (2008)). These models rely on information that had been largely ignored before: similarity between vectors of categorical variables. But what should we do in the unsupervised setting, when we face small unlabelled data of this form?

Medical diagnosis provides good examples of small unlabelled data. Consider a dataset composed of test results of a relatively small number of patients. Each patient has a medical record composed often of dozens of examinations, taking various categorical test results. We might be faced with the task of deciding which tests are necessary for a patient under examination to take, and which examination results could be deduced from the existing tests. This can be achieved with distribution estimation.

Several tools in the Bayesian framework could be used for this task of distribution estimation of unlabelled small datasets. Tools such as the Dirichlet-Multinomial distribution and its extensions are an example of such. These rely on relative frequencies of categorical variables appearing with others, with the addition of various smoothing techniques. But when faced with long multivariate sequences, these models run into problems of sparsity. This occurs when the data consists of vectors of categorical variables with most configurations of categorical values not in the dataset. In medical diagnosis this happens when there is a large number of possible examinations compared to a small number of patients.

Building on ideas used for big labelled discrete data, we propose a Bayesian model for distribution estimation of small unlabelled data. Existing supervised models for discrete labelled data embed the observations in a continuous space. This is used to find the similarity between vectors of categorical variables. We extend this idea to the small unlabelled domain by modelling the continuous embedding as a latent variable. A generative model is used to find a distribution over the discrete observations by modelling them as dependent on the continuous latent variables.

Following the medical diagnosis example, patient n would be modelled by a continuous latent variable x_n ∈ X. For each examination d, the latent x_n induces a vector of probabilities f = (f_{nd1}, …, f_{ndK}), one probability for each possible test result k. A categorical distribution returns test result y_{nd} based on these probabilities, resulting in a patient's medical assessment y_n = (y_{n1}, …, y_{nD}). We need to decide how to model the distribution over the latent space X and the vectors of probabilities f.
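As a rough sketch of this generative picture (all shapes and numbers are illustrative; the model described below places a Gaussian process prior on the latent-to-probability map, for which a random linear map stands in here):

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, D, K = 5, 2, 3, 4                # patients, latent dims, examinations, results per exam

X = rng.standard_normal((N, Q))        # latent patient locations x_n ~ N(0, I)
W = rng.standard_normal((D, K, Q))     # stand-in for the latent-to-f map; the paper
                                       # draws this map from a Gaussian process instead

def softmax(f):
    e = np.exp(f - f.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

F = np.einsum('dkq,nq->ndk', W, X)     # f_nd = (f_nd1, ..., f_ndK) for every exam d
P = softmax(F)                         # probability of each possible test result
Y = np.array([[rng.choice(K, p=P[n, d]) for d in range(D)] for n in range(N)])
print(Y)                               # y_n = (y_n1, ..., y_nD): one result per exam
```

Swapping the linear map for a non-linear function drawn from a GP is what gives the model described next its multi-modal capacity.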
We would like to capture sparse multi-modal categorical distributions. A possible approach would be to model the continuous representation with a simple latent space and a non-linear transformation of points in the space to obtain probabilities. In this approach we place a standard normal distribution prior on a latent space, and feed the output of a non-linear transformation of the latent space into a Softmax to obtain probabilities. We use sparse Gaussian processes (GPs) to transform the latent space non-linearly. Sparse GPs form a distribution over functions supported on a small number of points with linear time complexity (Quiñonero-Candela & Rasmussen, 2005; Titsias, 2009). We use a covariance function that is able to transform the latent space non-linearly. We name this model the Categorical Latent Gaussian Process (CLGP). Using a Gaussian process with a linear covariance function recovers the linear Gaussian model (LGM, Marlin et al., 2011), which linearly transforms a continuous latent space resulting in discrete observations.

The Softmax likelihood is not conjugate to our Gaussian prior, and integrating the latent variables with a Softmax distribution is intractable. A similar problem exists with LGMs. Marlin et al. (2011) solved this by using variational inference and various bounds for the likelihood in the binary case, or alternative likelihoods to the Softmax in the categorical case (Khan et al., 2012). Many bounds have been studied in the literature for the binary case: Jaakkola and Jordan's bound (Jaakkola & Jordan, 1997), the tilted bound (Knowles & Minka, 2011), piecewise linear and quadratic bounds (Marlin et al., 2011), and others. But for categorical data fewer bounds exist, since the multivariate Softmax is hard to approximate in high dimensions. The Böhning bound (Böhning, 1992) and Blei and Lafferty's bound (Blei & Lafferty, 2006) give poor approximations (Khan et al., 2012).

Instead we use recent developments in sampling-based variational inference (Blei et al., 2012) to avoid integrating the latent variables with the Softmax analytically. Our approach takes advantage of this tool to obtain a simple yet powerful model and inference. We use Monte Carlo integration to approximate the non-conjugate likelihood, obtaining noisy gradients (Kingma & Welling, 2013; Rezende et al., 2014; Titsias & Lázaro-Gredilla, 2014). We then use learning-rate free stochastic optimisation (Tieleman & Hinton, 2012) to optimise the noisy objective. We leverage symbolic differentiation (Theano, Bergstra et al., 2010) to obtain simple and modular code.
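A rough illustration of the Monte Carlo step with toy variational parameters (not the paper's code; in the paper the reparameterisation f = µ + σε lets the same samples also provide gradients for µ and σ):

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([0.5, -0.2, 1.1])   # toy variational means for f (K = 3 results)
sd = np.array([0.3, 0.4, 0.2])    # toy variational standard deviations
y = 2                             # observed category

def log_softmax(f):
    m = f.max()
    return f - m - np.log(np.exp(f - m).sum())

# E_q[log softmax(f)_y] has no closed form under a Gaussian q; estimate by sampling.
eps = rng.standard_normal((10_000, 3))
samples = mu + sd * eps           # reparameterised draws f = mu + sd * eps
est = np.mean([log_softmax(f)[y] for f in samples])
print(est)
```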
We experimentally show the advantages of using non-linear transformations for the latent space. We follow the ideas brought in Paccanaro & Hinton (2001) and evaluate the models on relation embedding and relation learning. We then demonstrate the utility of the model in the real-world sparse data domain. We use a medical diagnosis dataset where data is scarce, comparing our model to discrete frequency based models. We use the estimated distribution for a task similar to the above, where we attempt to infer which test results can be deduced from the others. We compare the models on the task of imputation of raw data studying the effects of government terror warnings on political attitudes. We then evaluate the continuous models on a binary Alphadigits dataset composed of binary images of handwritten digits and letters, where each class contains a small number of images. We inspect the latent space embeddings and separation of the classes. Lastly, we evaluate the robustness of our inference, inspecting the variance of the Monte Carlo estimate over time.

2 Related Work

Our model (CLGP) relates to some key probabilistic models (fig. 1). It can be seen as a non-linear version of the latent Gaussian model (LGM, Khan et al. (2012)) as discussed above. In the LGM we have a standard normal prior placed on a latent space, which is transformed linearly and fed into a Softmax likelihood function. The probability vector output is then used to sample a single categorical value for each categorical variable (e.g. medical test results) in a list of categorical variables (e.g. medical assessment). These categorical variables correspond to elements in a multivariate categorical vector. The parameters of the linear transformation are optimised directly within an EM framework. Khan et al. (2012) avoid the hard task of approximating the Softmax likelihood by using an alternative function (product of sigmoids) which is approximated using numerical techniques. Our approach avoids this cumbersome inference.

[Figure 1. The diagram pairs each linear model with its non-linear Gaussian process counterpart: linear regression with GP regression, factor analysis with the GP latent variable model, logistic regression with GP classification, and latent Gaussian models with the Categorical Latent Gaussian Process, along axes linear vs. non-linear, continuous vs. discrete, and observed vs. latent input. Caption: "Relations between existing models and the model proposed in this paper (Categorical Latent Gaussian Process); the model can be seen as a non-linear version of the latent Gaussian model (left to right, Khan …"]
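In GP terms, the left-to-right (linear to non-linear) axis of fig. 1 is the choice of covariance function; a small sketch of the two standard kernels under their usual definitions (illustrative, not the paper's code):

```python
import numpy as np

def linear_cov(X1, X2, var=1.0):
    # k(x, x') = var * x^T x'. A GP with this kernel is a linear model,
    # so the CLGP collapses to the latent Gaussian model (LGM).
    return var * X1 @ X2.T

def rbf_cov(X1, X2, var=1.0, ls=1.0):
    # k(x, x') = var * exp(-||x - x'||^2 / (2 ls^2)): a non-linear map,
    # giving the CLGP its multi-modal capacity.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

X = np.random.default_rng(2).standard_normal((4, 2))
print(linear_cov(X, X).shape, rbf_cov(X, X).shape)   # (4, 4) (4, 4)
```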
Recommended publications
  • Package 'Distributional'
    Package 'distributional', February 2, 2021. Title: Vectorised Probability Distributions. Version: 0.2.2. Description: Vectorised distribution objects with tools for manipulating, visualising, and using probability distributions. Designed to allow model prediction outputs to return distributions rather than their parameters, allowing users to directly interact with predictive distributions in a data-oriented workflow. In addition to providing generic replacements for p/d/q/r functions, other useful statistics can be computed including means, variances, intervals, and highest density regions. License: GPL-3. Imports: vctrs (>= 0.3.0), rlang (>= 0.4.5), generics, ellipsis, stats, numDeriv, ggplot2, scales, farver, digest, utils, lifecycle. Suggests: testthat (>= 2.1.0), covr, mvtnorm, actuar, ggdist. RdMacros: lifecycle. URL: https://pkg.mitchelloharawild.com/distributional/, https://github.com/mitchelloharawild/distributional. BugReports: https://github.com/mitchelloharawild/distributional/issues. Encoding: UTF-8. Language: en-GB. LazyData: true. Roxygen: list(markdown = TRUE, roclets=c('rd', 'collate', 'namespace')). RoxygenNote: 7.1.1. R topics documented: autoplot.distribution, cdf, density.distribution, dist_bernoulli, dist_beta, dist_binomial, dist_burr, dist_cauchy, dist_chisq, dist_degenerate, dist_exponential, dist_f, dist_gamma, dist_geometric, dist_gumbel, dist_hypergeometric, dist_inflated, dist_inverse_exponential, dist_inverse_gamma
  • Fisher Information Matrix for Gaussian and Categorical Distributions
    Fisher information matrix for Gaussian and categorical distributions. Jakub M. Tomczak. November 28, 2012.

    1 Notations. Let x be a random variable. Consider a parametric distribution of x with parameters θ, p(x|θ). A continuous random variable x ∈ ℝ can be modelled by the normal distribution (Gaussian distribution): p(x|θ) = 1/√(2πσ²) · exp{−(x − µ)²/(2σ²)} = N(x|µ, σ²), (1) where θ = (µ, σ²)ᵀ. A discrete (categorical) variable x ∈ X, where X is a finite set of K values, can be modelled by the categorical distribution (using the 1-of-K encoding [1]): p(x|θ) = ∏_{k=1}^K θ_k^{x_k} = Cat(x|θ), (2) where 0 ≤ θ_k ≤ 1 and ∑_k θ_k = 1. For X = {0, 1} we get a special case of the categorical distribution, the Bernoulli distribution: p(x|θ) = θˣ(1 − θ)¹⁻ˣ = Bern(x|θ). (3)

    2 Fisher information matrix. 2.1 Definition. The Fisher score is determined as follows [1]: g(θ, x) = ∇_θ ln p(x|θ). (4) The Fisher information matrix is defined as follows [1]: F = E_x[g(θ, x) g(θ, x)ᵀ]. (5)

    2.2 Example 1: Bernoulli distribution. Let us calculate the Fisher matrix for the Bernoulli distribution (3). First, we need to take the logarithm: ln Bern(x|θ) = x ln θ + (1 − x) ln(1 − θ). (6) Second, we need to calculate the derivative: (d/dθ) ln Bern(x|θ) = x/θ − (1 − x)/(1 − θ) = (x − θ)/(θ(1 − θ)). (7) Hence, we get the following Fisher score for the Bernoulli distribution: g(θ, x) = (x − θ)/(θ(1 − θ)). (8) The Fisher information matrix (here it is a scalar) for the Bernoulli distribution is as follows: F = E_x[g(θ, x) g(θ, x)] = E_x[(x − θ)²/(θ(1 − θ))²] = (E_x[x²] − 2θ E_x[x] + θ²)/(θ(1 − θ))² = (θ − 2θ² + θ²)/(θ(1 − θ))² = θ(1 − θ)/(θ(1 − θ))² = 1/(θ(1 − θ)). (9)

    2.3 Example 2: Categorical distribution. Let us calculate the Fisher matrix for the categorical distribution (2).
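    The closed form in (9) is easy to verify numerically; a small sketch (not part of the note):

```python
import numpy as np

theta = 0.3
x = np.array([0, 1])                          # support of the Bernoulli
p = np.array([1 - theta, theta])              # probabilities of each outcome

score = (x - theta) / (theta * (1 - theta))   # Fisher score g(theta, x) from (8)
F = np.sum(p * score**2)                      # E_x[g^2], the expectation in (5)

print(F, 1 / (theta * (1 - theta)))           # both ≈ 4.7619, matching (9)
```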
  • Categorical Distributions in Natural Language Processing Version 0.1
    Categorical Distributions in Natural Language Processing, Version 0.1. MURAWAKI Yugo, 12 May 2016.

    Categorical distribution: suppose random variable x takes one of K values. x is generated according to a categorical distribution Cat(θ), where θ = (0.1, 0.6, 0.3) (over the values R, Y, G). In many task settings, we do not know the true θ and need to infer it from observed data x = (x₁, …, x_N). Once we infer θ, we often want to predict a new variable x′. NOTE: in Bayesian settings, θ is usually integrated out and x′ is predicted directly from x.

    Categorical distributions are a building block of natural language models: the n-gram language model (predicting the next word), POS tagging based on a Hidden Markov Model (HMM), probabilistic context-free grammars (PCFG), and topic models (Latent Dirichlet Allocation, LDA).

    Example: HMM-based POS tagging. BOS DT NN VBZ VBN EOS | the sun has risen. Let K be the number of POS tags and V be the vocabulary size (ignore BOS and EOS for simplicity). The transition probabilities can be computed using K categorical distributions (θ_DT^TRANS, θ_NN^TRANS, …), each with dimension K, e.g. θ_DT^TRANS = (0.21, 0.27, 0.09, …) over NN, NNS, ADJ, …. Similarly, the emission probabilities can be computed using K categorical distributions (θ_DT^EMIT, θ_NN^EMIT, …), each with dimension V, e.g. θ_NN^EMIT = (0.012, 0.002, 0.005, …) over sun, rose, cat, ….

    Outline: categorical and multinomial distributions; conjugacy and the posterior predictive distribution; LDA (Latent Dirichlet Allocation) as an application; Gibbs sampling for inference.

    Categorical distribution: 1 observation. Suppose θ is known.
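    A toy sketch of sampling from this HMM, with made-up probabilities filling in the slide's partial numbers (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)

tags = ['DT', 'NN', 'VBZ']
vocab = ['the', 'sun', 'has', 'risen']

# One categorical distribution per current tag (each row sums to 1) -- toy numbers
trans = {'DT':  [0.0, 0.9, 0.1],
         'NN':  [0.0, 0.1, 0.9],
         'VBZ': [0.3, 0.5, 0.2]}
emit  = {'DT':  [0.9, 0.0, 0.05, 0.05],
         'NN':  [0.05, 0.8, 0.05, 0.1],
         'VBZ': [0.05, 0.05, 0.8, 0.1]}

tag = 'DT'                                           # say BOS -> DT
for _ in range(4):
    word = vocab[rng.choice(len(vocab), p=emit[tag])]   # emit a word from theta_tag^EMIT
    print(tag, word)
    tag = tags[rng.choice(len(tags), p=trans[tag])]     # move via theta_tag^TRANS
```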
  • Package 'Extradistr'
    Package 'extraDistr', September 7, 2020. Type: Package. Title: Additional Univariate and Multivariate Distributions. Version: 1.9.1. Date: 2020-08-20. Author: Tymoteusz Wolodzko. Maintainer: Tymoteusz Wolodzko <[email protected]>. Description: Density, distribution function, quantile function and random generation for a number of univariate and multivariate distributions. This package implements the following distributions: Bernoulli, beta-binomial, beta-negative binomial, beta prime, Bhattacharjee, Birnbaum-Saunders, bivariate normal, bivariate Poisson, categorical, Dirichlet, Dirichlet-multinomial, discrete gamma, discrete Laplace, discrete normal, discrete uniform, discrete Weibull, Frechet, gamma-Poisson, generalized extreme value, Gompertz, generalized Pareto, Gumbel, half-Cauchy, half-normal, half-t, Huber density, inverse chi-squared, inverse-gamma, Kumaraswamy, Laplace, location-scale t, logarithmic, Lomax, multivariate hypergeometric, multinomial, negative hypergeometric, non-standard beta, normal mixture, Poisson mixture, Pareto, power, reparametrized beta, Rayleigh, shifted Gompertz, Skellam, slash, triangular, truncated binomial, truncated normal, truncated Poisson, Tukey lambda, Wald, zero-inflated binomial, zero-inflated negative binomial, zero-inflated Poisson. License: GPL-2. URL: https://github.com/twolodzko/extraDistr. BugReports: https://github.com/twolodzko/extraDistr/issues. Encoding: UTF-8. LazyData: TRUE. Depends: R (>= 3.1.0). LinkingTo: Rcpp. Imports: Rcpp. Suggests: testthat, LaplacesDemon, VGAM, evd, hoa,
  • Chapter 9. Exponential Family of Distributions
    Machine Learning for Engineers: Chapter 9. Exponential Family of Distributions. Osvaldo Simeone, King's College London, January 26, 2021.

    This Chapter: In the previous chapters, we have adopted a limited range of probabilistic models, namely Bernoulli and categorical for discrete rvs and Gaussian for continuous rvs. While these are the most common modelling choices, they clearly do not represent many important situations. Examples: discrete data may a priori take arbitrarily large values, making Bernoulli and categorical models not suitable (e.g., waiting times for the next arrival in a queue); continuous data may be non-negative, making Gaussian models not suitable (e.g., measurements of weights or heights).

    Furthermore, we have seen that Bernoulli, categorical, and Gaussian distributions share several common features: the gradient of the log-loss with respect to the model parameters can be expressed in terms of a mean error that measures the difference between the mean under the model and the observation (see Chapters 4 and 6); ML learning can be solved in closed form by evaluating empirical averages (see Chapters 3, 4, and 6); and information-theoretic quantities such as (differential) entropy and KL divergence can be computed in closed form (see Chapter 3).

    In this chapter, we will introduce a general family of distributions that includes Bernoulli, categorical, and Gaussian as special cases: the exponential family of distributions. The family is much larger, and it also encompasses distributions such as the Poisson and geometric distributions, whose support is discrete and includes all integers; and the exponential and gamma distributions, whose support is continuous and includes only non-negative values.
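    The mean-error property of the log-loss gradient mentioned above can be checked directly for the Bernoulli distribution written in exponential-family form; a minimal sketch under the standard natural-parameter definitions (not the book's code):

```python
import numpy as np

# Bernoulli in exponential-family form: p(x|eta) = exp(eta*x - A(eta)),
# with natural parameter eta = log(theta/(1-theta)) and
# log-partition function A(eta) = log(1 + e^eta).
def A(eta):
    return np.log1p(np.exp(eta))

def mean(eta):
    # A'(eta) = sigmoid(eta) = E[x]; the derivative of the log-partition
    # is the mean, a general property of exponential families.
    return 1 / (1 + np.exp(-eta))

eta, x = 0.4, 1.0
grad = mean(eta) - x          # gradient of -log p(x|eta) w.r.t. eta: mean error
print(grad)

# numerical check via central finite differences
h = 1e-6
nll = lambda e: A(e) - e * x
print((nll(eta + h) - nll(eta - h)) / (2 * h))
```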
  • Discrete Categorical Distribution
    Discrete Categorical Distribution. Carl Edward Rasmussen, November 11th, 2016.

    Key concepts: we generalize the concepts from binary variables to multiple discrete outcomes: discrete and multinomial distributions, and the Dirichlet distribution.

    The multinomial distribution (1): a generalisation of the binomial distribution from 2 outcomes to m outcomes, useful for random variables that take one of a finite set of possible outcomes. Throw a die n = 60 times, and count the observed (6 possible) outcomes: X = 1 observed k₁ = 12 times, X = 2: k₂ = 7, X = 3: k₃ = 11, X = 4: k₄ = 8, X = 5: k₅ = 9, X = 6: k₆ = 13. Note that we have one parameter too many: we don't need to know all the kᵢ and n, because ∑_{i=1}^6 kᵢ = n.

    The multinomial distribution (2): consider a discrete random variable X that can take one of m values x₁, …, x_m. Out of n independent trials, let kᵢ be the number of times X = xᵢ was observed. It follows that ∑_{i=1}^m kᵢ = n. Denote by πᵢ the probability that X = xᵢ, with ∑_{i=1}^m πᵢ = 1. The probability of observing a vector of occurrences k = [k₁, …, k_m]ᵀ is given by the multinomial distribution parametrised by π = [π₁, …, π_m]ᵀ: p(k|π, n) = p(k₁, …, k_m|π₁, …, π_m, n) = (n! / (k₁! k₂! ⋯ k_m!)) ∏_{i=1}^m πᵢ^{kᵢ}. Note that we can write p(k|π) since n is redundant.
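    The die example above plugs straight into this formula; a sketch using scipy's multinomial (assuming a fair die, πᵢ = 1/6):

```python
from scipy.stats import multinomial

n = 60
pi = [1 / 6] * 6                       # fair die
k = [12, 7, 11, 8, 9, 13]              # observed counts from the example
print(sum(k))                          # 60, as required: counts sum to n
print(multinomial.pmf(k, n=n, p=pi))   # probability of exactly these counts
```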
  • The Continuous Categorical: a Novel Simplex-Valued Exponential Family
    The Continuous Categorical: A Novel Simplex-Valued Exponential Family. Elliott Gordon-Rodriguez, Gabriel Loaiza-Ganem, John P. Cunningham.

    Abstract: Simplex-valued data appear throughout statistics and machine learning, for example in the context of transfer learning and compression of deep networks. Existing models for this class of data rely on the Dirichlet distribution or other related loss functions; here we show these standard choices suffer systematically from a number of limitations, including bias and numerical issues that frustrate the use of flexible network models upstream of these distributions. We resolve these limitations by introducing a novel exponential family of distributions for modeling simplex-valued data – the continuous categorical, which arises as a nontrivial multivariate generalization of the recently discovered continuous Bernoulli.

    …cal relevance across the natural and social sciences (see Pawlowsky-Glahn & Egozcue (2006); Pawlowsky-Glahn & Buccianti (2011); Pawlowsky-Glahn et al. (2015) for an overview). Prominent examples appear in highly cited work ranging from geology (Pawlowsky-Glahn & Olea, 2004; Buccianti et al., 2006), chemistry (Buccianti & Pawlowsky-Glahn, 2005), microbiology (Gloor et al., 2017), genetics (Quinn et al., 2018), psychiatry (Gueorguieva et al., 2008), ecology (Douma & Weedon, 2019), environmental science (Filzmoser et al., 2009), materials science (Na et al., 2014), political science (Katz & King, 1999), public policy (Breunig & Busemeyer, 2012), economics (Fry et al., 2000), and the list goes on. An application of particular interest in machine learning arises in the context of model compression, where the class probabilities outputted by a large model are used as 'soft targets' to train a small neural network (Bu-
  • Latent Gaussian Processes for Distribution Estimation of Multivariate Categorical Data
    Latent Gaussian Processes for Distribution Estimation of Multivariate Categorical Data. Yarin Gal, Yutian Chen, Zoubin Ghahramani. University of Cambridge. {yg279, yc373, zoubin}[email protected]

    Abstract: Multivariate categorical data occur in many applications of machine learning, such as data analysis and language processing. Here we develop a flexible class of models for distribution estimation in such multivariate (i.e. vectors of) categorical data. Multivariate categorical data is challenging because the number of possible discrete observation vectors grows exponentially with the number of categorical variables in the vector. In particular, we address the problem of estimating the distribution when the data is sparsely sampled—i.e. in the typical case when the diversity of the data points is poor compared to the exponentially many possible observations. We make use of a continuous latent Gaussian space, but unlike previous linear approaches, we learn a non-linear transformation between this latent space and the multivariate categorical observation space. Non-linearity is essential for capturing multi-modality in the distribution. Our model ties together many existing models, linking the categorical linear latent Gaussian model, the Gaussian process latent variable model, and Gaussian process classification. We derive effective inference for our model based on recent developments in sampling-based variational inference and stochastic optimisation.

    1 Introduction: Categorical distribution estimation (CDE) forms one of the core problems in machine learning, and can be used to perform tasks ranging from survey analysis (Inoguchi, 2008) to cancer prediction (Zwitter and Soklic, 1988). One of the major challenges in CDE is sparsity. Sparsity can occur either when there is a single categorical variable with many possible values, some appearing scarcely, or when the data consists of vectors of categorical variables, with most configurations of categorical values not in the dataset.
  • A Compound Dirichlet-Multinomial Model for Provincial Level Covid-19 Predictions in South Africa
    medRxiv preprint doi: https://doi.org/10.1101/2020.06.15.20131433; this version posted June 17, 2020.

    A compound Dirichlet-Multinomial model for provincial level Covid-19 predictions in South Africa. Alta de Waal (1,2), Daan de Waal (1,3). 1 Department of Statistics, University of Pretoria, Pretoria, South Africa; 2 Center for Artificial Intelligence (CAIR), Pretoria, South Africa; 3 Mathematical Statistics and Actuarial Science, University of the Free State, Bloemfontein, South Africa. * [email protected]

    Abstract: Accurate prediction of COVID-19 related indicators such as confirmed cases, deaths and recoveries plays an important role in understanding the spread and impact of the virus, as well as resource planning and allocation. In this study, we approach the prediction problem from a statistical perspective and predict confirmed cases and deaths on a provincial level. We propose the compound Dirichlet-Multinomial distribution to estimate the proportion parameter of each province as mutually exclusive outcomes. Furthermore, we make an assumption of exponential growth of the total cumulative counts in order to predict future total counts. The outcome of this approach is not only prediction: the variation of the proportion parameter is characterised by the Dirichlet distribution, which provides insight into the movement of the pandemic across provinces over time.

    Introduction: The global COVID-19 (C19) pandemic has urged governments globally to rapidly implement measures to reduce the number of infected cases and reduce pressure on the
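    A sketch of the compound mechanism with invented counts for nine provinces (purely illustrative; neither the paper's data nor its code):

```python
import numpy as np

rng = np.random.default_rng(4)

# observed confirmed-case counts per province (made-up numbers)
counts = np.array([120, 80, 15, 300, 45, 9, 60, 22, 349])
alpha = np.ones(9)                 # symmetric Dirichlet prior

# posterior over the proportion parameter is Dirichlet(alpha + counts)
post = alpha + counts

# predict how T new cases distribute across provinces: draw proportions
# from the Dirichlet, then counts from a multinomial (the compound model)
T = 1000
theta = rng.dirichlet(post)
pred = rng.multinomial(T, theta)
print(theta.round(3))
print(pred)
```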
  • An R Package for Multivariate Categorical Data Analysis by Juhyun Kim, Yiwen Zhang, Joshua Day, Hua Zhou
    MGLM: An R Package for Multivariate Categorical Data Analysis, by Juhyun Kim, Yiwen Zhang, Joshua Day, Hua Zhou.

    Abstract: Data with multiple responses is ubiquitous in modern applications. However, few tools are available for regression analysis of multivariate counts. The most popular multinomial-logit model has a very restrictive mean-variance structure, limiting its applicability to many data sets. This article introduces an R package MGLM, short for multivariate response generalized linear models, that expands the current tools for regression analysis of polytomous data. Distribution fitting, random number generation, regression, and sparse regression are treated in a unifying framework. The algorithm, usage, and implementation details are discussed.

    Introduction: Multivariate categorical data arises in many fields, including genomics, image analysis, text mining, and sports statistics. The multinomial-logit model (Agresti, 2002, Chapter 7) has been the most popular tool for analyzing such data. However, it is limiting due to its specific mean-variance structure and the strong assumption that the counts are negatively correlated. Models that address over-dispersion relative to a multinomial distribution and incorporate positive and/or negative correlation structures would offer greater flexibility for analysis of polytomous data. In this article, we introduce an R package MGLM, short for multivariate response generalized linear models. The MGLM package provides a unified framework for random number generation, distribution fitting, regression, hypothesis testing, and variable selection for multivariate response generalized linear models, particularly the four models listed in Table 1. These models considerably broaden the class of generalized linear models (GLM) for analysis of multivariate categorical data. MGLM overlaps little with existing packages in R and other software.
  • Probability Distributions: Multinomial and Poisson
    Probability Distributions: Multinomial and Poisson. Data Science: Jordan Boyd-Graber, University of Maryland, January 21, 2018.

    Multinomial distribution. Recall: the binomial distribution is the number of successes from multiple Bernoulli success/fail events. The multinomial distribution is the number of different outcomes from multiple categorical events. It is a generalization of the binomial distribution to more than two possible outcomes. As with the binomial distribution, each categorical event is assumed to be independent. Bernoulli : binomial :: categorical : multinomial. Examples: the number of times each face of a die turned up after 50 rolls; the number of times each suit is drawn from a deck of cards after 10 draws.

    Notation: let X⃗ be a vector of length K, where X_k is a random variable that describes the number of times that the k-th value was the outcome out of N categorical trials. The possible values of each X_k are integers from 0 to N, and all X_k values must sum to N: ∑_{k=1}^K X_k = N. Example: if we roll a die 10 times, suppose it comes up with the following values: X₁ = 1, X₂ = 0, X₃ = 3, X₄ = 2, X₅ = 1, X₆ = 3, i.e. X⃗ = ⟨1, 0, 3, 2, 1, 3⟩. The multinomial distribution is a joint distribution over multiple random variables: P(X₁, X₂, …, X_K).

    Suppose we roll a die 3 times.
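    The probability of the count vector X⃗ = ⟨1, 0, 3, 2, 1, 3⟩ follows directly from the multinomial formula; a sketch assuming a fair die:

```python
import numpy as np
from math import factorial

# counts for the 10 die rolls from the example: X = <1, 0, 3, 2, 1, 3>
X = np.array([1, 0, 3, 2, 1, 3])
N = int(X.sum())                              # 10 trials in total
pi = np.full(6, 1 / 6)                        # fair die

coef = factorial(N) / np.prod([factorial(int(k)) for k in X])
prob = coef * np.prod(pi ** X)
print(prob)                                   # joint probability P(X1, ..., X6)
```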
  • CHAPTER 2 Estimating Probabilities
    CHAPTER 2. Estimating Probabilities. Machine Learning. Copyright © 2017 Tom M. Mitchell. All rights reserved. *DRAFT OF January 26, 2018* *PLEASE DO NOT DISTRIBUTE WITHOUT AUTHOR'S PERMISSION* This is a rough draft chapter intended for inclusion in the upcoming second edition of the textbook Machine Learning, T.M. Mitchell, McGraw Hill. You are welcome to use this for educational purposes, but do not duplicate or repost it on the internet. For online copies of this and other materials related to this book, visit the web site www.cs.cmu.edu/~tom/mlbook.html. Please send suggestions for improvements, or suggested exercises, to [email protected].

    Many machine learning methods depend on probabilistic approaches. The reason is simple: when we are interested in learning some target function f : X → Y, we can more generally learn the probabilistic function P(Y|X). By using a probabilistic approach, we can design algorithms that learn functions with uncertain outcomes (e.g., predicting tomorrow's stock price) and that incorporate prior knowledge to guide learning (e.g., a bias that tomorrow's stock price is likely to be similar to today's price). This chapter describes joint probability distributions over many variables, and shows how they can be used to calculate a target P(Y|X). It also considers the problem of learning, or estimating, probability distributions from training data, presenting the two most common approaches: maximum likelihood estimation and maximum a posteriori estimation.

    1 Joint Probability Distributions. The key to building probabilistic models is to define a set of random variables, and to consider the joint probability distribution over them.
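    The chapter's two approaches can be made concrete for a Bernoulli parameter with a Beta prior; a sketch using the standard closed-form estimators (not the chapter's code):

```python
import numpy as np

data = np.array([1, 1, 0, 1, 0, 1, 1, 1])      # 6 ones, 2 zeros
n1 = int(data.sum())
n0 = len(data) - n1

# maximum likelihood: argmax_theta P(data | theta) = fraction of ones
mle = n1 / (n1 + n0)

# maximum a posteriori with a Beta(a, b) prior: mode of the posterior,
# which acts like (a - 1) and (b - 1) extra pseudo-observations
a, b = 2.0, 2.0
map_ = (n1 + a - 1) / (n1 + n0 + a + b - 2)

print(mle, map_)                               # 0.75 vs 0.7
```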