Identifiability of Finite Mixtures of Multinomial Logit Models With
Recommended publications
-
Maximum Likelihood Estimation in Latent Class Models for Contingency Table Data
Authors: Stephen E. Fienberg (Department of Statistics, Machine Learning Department, and CyLab, Carnegie Mellon University, Pittsburgh, PA 15213-3890, USA), Patricia Hersh (Department of Mathematics, Indiana University, Bloomington, IN 47405-7000, USA), Alessandro Rinaldo (Department of Statistics, Carnegie Mellon University), and Yi Zhou (Machine Learning Department, Carnegie Mellon University)
Abstract: Statistical models with latent structure have a history going back to the 1950s and have seen widespread use in the social sciences and, more recently, in computational biology and in machine learning. Here we study the basic latent class model proposed originally by the sociologist Paul F. Lazarsfeld for categorical variables, and we explain its geometric structure. We draw parallels between the statistical and geometric properties of latent class models, and we illustrate geometrically the causes of many problems associated with maximum likelihood estimation and related statistical inference. In particular, we focus on issues of non-identifiability and determination of the model dimension, on maximization of the likelihood function, and on the effect of symmetric data. We illustrate these phenomena with a variety of synthetic and real-life tables, of different dimensions and complexities. Much of the motivation for this work stems from the "100 Swiss Franks" problem, which we introduce and describe in detail.
Keywords: latent class model, algebraic geometry, variety, Segre variety, secant variety, effective dimension.
Contents (excerpt): 1 Introduction; 2 Latent Class Models for Contingency Tables; 3 Geometric Description of Latent Class Models; 4 Examples Involving Synthetic Data; 4.1 Effective Dimension and Polynomials ...
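The latent class model factors the contingency table through a discrete latent variable, and maximum likelihood estimates are typically computed with the EM algorithm. Below is a minimal sketch (not taken from the paper) for binary items; the function name, random initialization, and all parameter names are illustrative. Running it from several starting points illustrates the paper's theme: different starts can converge to permuted or genuinely different local maxima of the likelihood.

```python
import numpy as np

def lcm_em(X, K, n_iter=200, seed=0):
    """EM for a latent class model with binary items.
    X: (n, J) array of 0/1 responses; K: number of latent classes."""
    rng = np.random.default_rng(seed)
    n, J = X.shape
    pi = np.full(K, 1.0 / K)               # class weights
    theta = rng.uniform(0.2, 0.8, (K, J))  # P(item j = 1 | class k)
    for _ in range(n_iter):
        # E-step: posterior class memberships (responsibilities)
        log_p = (np.log(pi)
                 + X @ np.log(theta).T
                 + (1 - X) @ np.log(1 - theta).T)   # shape (n, K)
        log_p -= log_p.max(axis=1, keepdims=True)   # stabilize exponentials
        post = np.exp(log_p)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class weights and item probabilities
        nk = post.sum(axis=0)
        pi = nk / n
        theta = ((post.T @ X) / nk[:, None]).clip(1e-6, 1 - 1e-6)
    return pi, theta
```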
The Mixed Logit Model: the State of Practice and Warnings for the Unwary
Authors: David A. Hensher (Institute of Transport Studies, Faculty of Economics and Business, The University of Sydney, NSW 2006, Australia, [email protected]) and William H. Greene (Department of Economics, Stern School of Business, New York University, New York, USA, [email protected]). 28 November 2001.
Abstract: The mixed logit model is considered to be the most promising state-of-the-art discrete choice model currently available. Increasingly, researchers and practitioners are estimating mixed logit models of various degrees of sophistication with mixtures of revealed preference and stated preference data. It is timely to review progress in model estimation, since the learning curve is steep and the unwary are likely to fall into a chasm if not careful. These chasms are very deep indeed, given the complexity of the mixed logit model. Although the theory is relatively clear, estimation and data issues are far from clear. Indeed, there is a great deal of potential mis-inference consequent on trying to extract increased behavioural realism from data that are often not able to comply with the demands of mixed logit models. Possibly for the first time, we now have an estimation method that requires extremely high quality data if the analyst wishes to take advantage of the extended behavioural capabilities of such models. This paper focuses on the new opportunities offered by mixed logit models and some issues to be aware of to avoid misuse of such advanced discrete choice methods by the practitioner.
Keywords: Mixed logit, Random Parameters, Estimation, Simulation, Data Quality, Model Specification, Distributions
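At the core of mixed logit estimation is a choice probability that integrates the logit formula over the distribution of the random taste parameters; in practice the integral is simulated. A minimal sketch, assuming a single normally distributed coefficient and plain pseudo-random draws (applied work often uses Halton sequences instead); all names are illustrative:

```python
import numpy as np

def simulated_probs(X, mu, sigma, n_draws=500, seed=0):
    """Simulated mixed logit choice probabilities.
    X: (n, J) array with one attribute per alternative; the taste
    coefficient is random, beta ~ N(mu, sigma^2).
    Returns (n, J) choice probabilities averaged over the draws."""
    rng = np.random.default_rng(seed)
    betas = mu + sigma * rng.standard_normal(n_draws)  # R draws of beta
    v = betas[:, None, None] * X[None, :, :]           # utilities, (R, n, J)
    v -= v.max(axis=2, keepdims=True)                  # numerical stability
    p = np.exp(v)
    p /= p.sum(axis=2, keepdims=True)                  # logit probs per draw
    return p.mean(axis=0)                              # average over draws
```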
-
An ANOVA Test for Parameter Estimability Using Data Cloning with Application to Statistical Inference for Dynamic Systems
Authors: David Campbell (corresponding author; Department of Statistics and Actuarial Science, Simon Fraser University, Surrey, BC V3T 0A3, Canada, [email protected]) and Subhash Lele (Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, AB T6G 2G1, Canada, [email protected])
Abstract: Models for complex systems are often built with more parameters than can be uniquely identified by available data. Because of the variety of causes, identifying a lack of parameter identifiability typically requires mathematical manipulation of models, Monte Carlo simulations, and examination of the Fisher information matrix. A simple test for parameter estimability is introduced, using data cloning, a Markov chain Monte Carlo based algorithm. Together, data cloning and the ANOVA-based test determine if the model parameters are estimable and, if so, determine their maximum likelihood estimates and provide asymptotic standard errors. When not all model parameters are estimable, the data cloning results and the ANOVA test can be used to determine estimable parameter combinations or infer identifiability problems in the model structure. The method is illustrated using three different real data systems that are known to be difficult to analyze.
Keywords: Maximum Likelihood Estimation, Over-Parametrized Models, Markov Chain Monte Carlo, Parameter Identifiability, Differential Equation Models
1. Introduction: The ability of dynamic system models to succinctly describe complex behavior with a few, readily interpretable parameters has led to their widespread popularity. Dynamic system models often try to model all the inherent behavior of the underlying process, but the observations do not always have adequate information to support such complexity, leading to parameter inestimability.
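The idea behind data cloning is to run MCMC on a posterior whose likelihood is raised to the K-th power, as if the data had been replicated K times: posterior variances of estimable parameters shrink roughly like 1/K, while those of inestimable parameters do not. A toy illustration of that diagnostic (not the authors' ANOVA test), using a deliberately non-identifiable model y ~ N(a + b, 1) with flat priors; all names and settings are illustrative:

```python
import numpy as np

def clone_posterior(y, K, n_samp=20000, step=0.3, seed=0):
    """Random-walk Metropolis on a K-cloned posterior for y ~ N(a + b, 1).
    Only the sum a + b is identifiable; a and b separately are not."""
    rng = np.random.default_rng(seed)
    def log_post(a, b):
        # flat priors; likelihood raised to the K-th power (K clones)
        return -0.5 * K * np.sum((y - (a + b)) ** 2)
    a, b = 0.0, 0.0
    lp = log_post(a, b)
    draws = np.empty((n_samp, 2))
    for t in range(n_samp):
        a_new = a + step * rng.standard_normal()
        b_new = b + step * rng.standard_normal()
        lp_new = log_post(a_new, b_new)
        if np.log(rng.uniform()) < lp_new - lp:        # accept/reject
            a, b, lp = a_new, b_new, lp_new
        draws[t] = a, b
    return draws

y = np.random.default_rng(1).normal(3.0, 1.0, 50)
for K in (1, 10, 100):
    d = clone_posterior(y, K)[5000:]                   # discard burn-in
    # var(a + b) shrinks roughly like 1/K; var(a) alone does not
    print(K, d.sum(axis=1).var(), d[:, 0].var())
```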
-
Using Latent Class Analysis and Mixed Logit Model to Explore Risk Factors on Driver Injury Severity in Single-Vehicle Crashes
Accident Analysis and Prevention 129 (2019) 230–240
Authors: Zhenning Li (a), Qiong Wu (a), Yusheng Ci (b), Cong Chen (c), Xiaofeng Chen (d), and Guohui Zhang (a). Affiliations: (a) Department of Civil and Environmental Engineering, University of Hawaii at Manoa, 2540 Dole Street, Honolulu, HI 96822, USA; (b) Department of Transportation Science and Engineering, Harbin Institute of Technology, 73 Huanghe Road, Harbin, Heilongjiang 150090, China; (c) Center for Urban Transportation Research, University of South Florida, 4202 East Fowler Avenue, CUT100, Tampa, FL 33620, USA; (d) School of Automation, Northwestern Polytechnical University, Xi'an, 710129, China
Abstract: The single-vehicle crash has been recognized as a critical crash type due to its high fatality rate. In this study, a two-year crash dataset including all single-vehicle crashes in New Mexico is adopted to analyze the impact of contributing factors on driver injury severity. In order to capture the across-class heterogeneous effects, a latent class approach is designed to classify the whole dataset by maximizing the homogeneous effects within each cluster. The mixed logit model is subsequently developed on each cluster to account for the within-class unobserved heterogeneity and to further analyze the dataset. According to the estimation results, several variables, including overturn, fixed object, and snowing, are found to be normally distributed in the observations in the overall sample, indicating that there exist some heterogeneous effects in the dataset.
Keywords: Unobserved heterogeneity, Latent class analysis, Mixed logit model, Driver injury severity
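The two-stage design described here first assigns each crash record to a latent class and then fits a separate severity model within each class. A hypothetical sketch of the assignment step, reusing the pi and theta outputs of the lcm_em sketch earlier in this list; a mixed logit (as in the sketch above) would then be fit per class:

```python
import numpy as np

def assign_classes(X, pi, theta):
    """Hard-assign each record to its most probable latent class.
    X: (n, J) binary indicators; pi, theta: fitted latent class parameters
    (e.g., from the lcm_em sketch). Returns an (n,) array of class labels."""
    log_p = (np.log(pi)
             + X @ np.log(theta).T
             + (1 - X) @ np.log(1 - theta).T)
    return log_p.argmax(axis=1)   # fit one severity model per label value
```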
-
Lecture 5: Identifiability
The Maximum Likelihood method is essentially the only estimation method used in tag recovery or capture-recapture models. The theory goes back to R. A. Fisher in 1912 (when he was a third-year undergraduate!) and a flurry of later papers. Actually, Edwards (1974) traces the history of such thinking back to Daniel Bernoulli and Johann Lambert in the 18th century. Maximum likelihood is the primary estimation method in statistics. Likelihood theory is the foundation of statistics! The probability of mortality (1 - S) includes harvest mortality (including fish killed but not retrieved) and "natural" mortality. S is the probability of survival from one tagging period to the next; 1 - S is the probability of mortality and includes both harvest and "natural" mortality. This seems odd, at first, because the data relate only to harvest mortality that is reported. Generally, tag recovery data do not allow estimates of the two components (i.e., harvest vs. natural mortality). Note that the estimates of survival (S) or mortality (1 - S) are based on the data from the harvest! Haunting question: if one had only R_1 and m_11, m_12, m_13, m_14, and m_15, could one compute MLEs of S and r? Why not? Under what model? Can you see where the assumptions in these models and estimation methods come from? Why are the assumptions explicit? In the analysis of real data, one must be very concerned about the validity of the assumptions made. If an assumption fails, the method may be very dependent upon the truth of the assumption, and large bias in the MLEs may result.
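Under the simplest one-cohort tag recovery parameterization, a fish survives each year with probability S and a dead tagged fish is recovered and reported with probability r, so the probability of recovery in year j is S^(j-1) (1 - S) r. A minimal sketch of the multinomial MLE for that model; the model form and the counts below are illustrative, not the lecture's:

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(params, R1, m):
    """Multinomial negative log-likelihood for a one-cohort tag recovery
    study: P(recovered in year j) = S**(j-1) * (1 - S) * r."""
    S, r = params
    j = np.arange(1, len(m) + 1)
    p = S ** (j - 1) * (1 - S) * r          # recovery cell probabilities
    p_never = 1 - p.sum()                   # tagged but never recovered
    return -(m * np.log(p)).sum() - (R1 - m.sum()) * np.log(p_never)

R1 = 1000                                   # hypothetical number tagged
m = np.array([120, 90, 70, 50, 35])         # hypothetical recoveries by year
fit = minimize(negloglik, x0=[0.7, 0.2], args=(R1, m),
               bounds=[(0.01, 0.99), (0.01, 0.99)])
print(fit.x)                                # MLEs of (S, r)
```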
-
Using a Laplace Approximation to Estimate the Random Coefficients Logit Model by Non-linear Least Squares
Authors: Matthew C. Harding (Department of Economics, MIT; and Institute for Quantitative Social Science, Harvard University; [email protected]) and Jerry Hausman (Department of Economics, MIT; [email protected]). September 13, 2006.
Acknowledgments: We thank Ketan Patel for excellent research assistance. We thank Ronald Butler, Kenneth Train, Joan Walker and participants at the MIT Econometrics Lunch Seminar and at the Harvard Applied Statistics Workshop for comments.
Abstract: Current methods of estimating the random coefficients logit model employ simulations of the distribution of the taste parameters through pseudo-random sequences. These methods suffer from difficulties in estimating correlations between parameters and computational limitations such as the curse of dimensionality. This paper provides a solution to these problems by approximating the integral expression of the expected choice probability using a multivariate extension of the Laplace approximation. Simulation results reveal that our method performs very well, both in terms of accuracy and computational time.
1. Introduction: Understanding discrete economic choices is an important aspect of modern economics. McFadden (1974) introduced the multinomial logit model as a model of choice behavior derived from a random utility framework. An individual i faces a choice among K different goods j = 1, ..., K. The utility to individual i from consuming good j is given by U_ij = x_ij' beta + e_ij, where x_ij corresponds to a set of choice-relevant characteristics specific to the consumer-good pair (i, j). The error component e_ij is assumed to be independently identically distributed with an extreme value distribution f(e_ij) = exp(-e_ij) exp(-exp(-e_ij)). If the individual i is constrained to choose a single good within the available set, utility maximization implies that good j will be chosen over all other goods l != j, i.e., whenever U_ij > U_il for all l != j.
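For reference, the fixed-coefficient multinomial logit probabilities implied by the extreme value assumption have a closed softmax form; it is the random coefficients version, which integrates this expression over beta, that the paper approximates by Laplace's method. A minimal sketch of the fixed-beta case (names are illustrative):

```python
import numpy as np

def mnl_probs(X, beta):
    """McFadden multinomial logit choice probabilities.
    X: (n, K, d) attributes of K goods for n individuals; beta: (d,) tastes.
    With i.i.d. extreme value errors, P(i chooses j) is the softmax of
    the systematic utilities x_ij' beta."""
    v = X @ beta                            # (n, K) systematic utilities
    v -= v.max(axis=1, keepdims=True)       # stabilize the exponentials
    e = np.exp(v)
    return e / e.sum(axis=1, keepdims=True)
```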
-
Identifiability of Mixtures of Power-Series Distributions and Related Characterizations
Ann. Inst. Statist. Math., Vol. 47, No. 3, 447-459 (1995)
Authors: Theofanis Sapatinas (Department of Mathematical Statistics and Operational Research, Exeter University, Exeter EX4 4QE, U.K.). Received January 28, 1994; revised October 24, 1994.
Abstract: The concept of the identifiability of mixtures of distributions is discussed, and a sufficient condition for the identifiability of the mixture of a large class of discrete distributions, namely that of the power-series distributions, is given. Specifically, by using probabilistic arguments, an elementary and shorter proof of Lüxmann-Ellinghaus's (1987, Statist. Probab. Lett., 5, 375-378) result is obtained. Moreover, it is shown that this result is a special case of a stronger result connected with the Stieltjes moment problem. Some recent observations due to Singh and Vasudeva (1984, J. Indian Statist. Assoc., 22, 93-96) and Johnson and Kotz (1989, Ann. Inst. Statist. Math., 41, 13-17) concerning characterizations based on conditional distributions are also revealed as special cases of this latter result. Exploiting the notion of the identifiability of power-series mixtures, characterizations based on regression functions (posterior expectations) are obtained. Finally, multivariate generalizations of the preceding results are also addressed.
Keywords: Univariate and multivariate power-series distributions, mixtures of distributions, the moment problem, infinite divisibility, posterior expectations.
1. Introduction: Mixtures of distributions are used in building probability models quite frequently in biological and physical sciences. For instance, in order to study certain characteristics in natural populations of fish, a random sample might be taken and the characteristic measured for each member of the sample; since the characteristic varies with the age of the fish, the distribution of the characteristic in the total population will be a mixture of the distribution at different ages.
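For concreteness, a sketch of the objects involved, in standard notation; the paper's specific sufficient condition is not reproduced here:

```latex
% A power-series distribution with series function A(\theta) = \sum_k a_k \theta^k, a_k \ge 0:
\[
P_\theta(X = k) = \frac{a_k \theta^k}{A(\theta)}, \qquad k = 0, 1, 2, \dots
\]
% A mixture over \theta with mixing distribution G:
\[
P_G(X = k) = \int \frac{a_k \theta^k}{A(\theta)} \, dG(\theta).
\]
% The mixture family is identifiable when P_{G_1} = P_{G_2} implies G_1 = G_2.
% Dividing out a_k shows that each P_G(X = k) is a generalized moment of G,
% which is how the Stieltjes moment problem enters the argument.
```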
-
Multilevel Logistic Regression for Polytomous Data and Rankings
Psychometrika, Vol. 68, No. 2, 267-287, June 2003
Authors: Anders Skrondal (Division of Epidemiology, Norwegian Institute of Public Health, Oslo) and Sophia Rabe-Hesketh (Department of Biostatistics and Computing, Institute of Psychiatry, London)
Abstract: We propose a unifying framework for multilevel modeling of polytomous data and rankings, accommodating dependence induced by factor and/or random coefficient structures at different levels. The framework subsumes a wide range of models proposed in disparate methodological literatures. Partial and tied rankings, alternative-specific explanatory variables, and alternative sets varying across units are handled. The problem of identification is addressed. We develop an estimation and prediction methodology for the model framework which is implemented in the generally available gllamm software. The methodology is applied to party choice and rankings from the 1987-1992 panel of the British Election Study. Three levels are considered: elections, voters and constituencies.
Keywords: multilevel models, generalized linear latent and mixed models, factor models, random coefficient models, polytomous data, rankings, first choice, discrete choice, permutations, nominal data, gllamm.
Introduction: Two kinds of nominal outcome variable are considered in this article: unordered polytomous variables and permutations. The outcome for a unit in the unordered polytomous case is one among several objects, whereas the outcome in the permutation case is a particular ordering of objects. The objects are nominal in the sense that they do not possess an inherent ordering shared by all units, as is assumed for ordinal variables.
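On the rankings side, a common building block subsumed by such frameworks is the exploded (rank-ordered) logit, in which a full ranking is treated as a sequence of logit choices from successively smaller alternative sets. A minimal sketch for one complete ranking, with utilities taken as given; the paper's actual likelihood handles covariates, multiple levels, and partial or tied rankings:

```python
import numpy as np

def ranking_loglik(v, ranking):
    """Log-likelihood of one full ranking under an exploded (rank-ordered)
    logit: each stage is a logit choice among the not-yet-ranked objects.
    v: (K,) utilities; ranking: list of object indices, best first."""
    remaining = list(ranking)
    ll = 0.0
    for obj in ranking[:-1]:                  # last object is forced
        u = np.array([v[r] for r in remaining])
        ll += v[obj] - np.logaddexp.reduce(u) # log softmax over the set
        remaining.remove(obj)
    return ll

print(ranking_loglik(np.array([1.0, 0.5, -0.2]), [0, 2, 1]))
```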
-
Statistical Identification of Synchronous Spiking
Authors: Matthew T. Harrison (Division of Applied Mathematics, Brown University, Providence, RI 02912, USA), Asohan Amarasingham (Department of Mathematics, City College of New York, New York, NY 10031, USA), and Robert E. Kass (Department of Statistics, Carnegie Mellon University, Pittsburgh, PA 15213, USA)
Contents:
1 Introduction
2 Synchrony and time scale
3 Spike trains and firing rate (3.1 Point processes, conditional intensities, and firing rates; 3.2 Models of conditional intensities)
4 Models for coarse temporal dependence (4.1 Cross-correlation histogram (CCH); 4.2 Statistical hypothesis testing; 4.3 Independent homogeneous Poisson process (HPP) model: bootstrap approximation, Monte Carlo approximation, bootstrap, acceptance bands, conditional inference, uniform model and conditional modeling, model-based correction; 4.4 Identical trials models: joint peri-stimulus time histogram (JPSTH), independent inhomogeneous Poisson process (IPP) model, exchangeable model and trial shuffling, trial-to-trial variability (TTV) models; 4.5 Temporal smoothness models: independent inhomogeneous slowly-varying Poisson model, ∆-uniform model and jitter, heuristic spike resampling methods; 4.6 Generalized regression models; 4.7 Multiple spike trains and more general precise spike timing; 4.8 Comparisons across conditions)
5 Discussion
A Probability and random variables (A.1 Statistical models and scientific questions ...)
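As an anchor for section 4.1 of the outline, a minimal sketch of the cross-correlation histogram: counts of pairwise spike-time differences binned by lag (the bin width and lag window below are arbitrary choices). Judging whether a central peak indicates synchrony beyond what the firing rates explain is exactly what the later null models (trial shuffling, jitter, and the like) are for.

```python
import numpy as np

def cross_correlation_histogram(t1, t2, max_lag=0.05, bin_width=0.001):
    """Cross-correlation histogram for two spike trains.
    t1, t2: 1-D arrays of spike times in seconds.
    Returns counts of spike-time differences t2 - t1 per lag bin."""
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    diffs = t2[None, :] - t1[:, None]            # all pairwise lags
    diffs = diffs[np.abs(diffs) <= max_lag]      # keep lags in the window
    counts, _ = np.histogram(diffs, bins=edges)
    return counts, edges
```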
-
The Sufficient and Necessary Condition for the Identifiability and Estimability of the DINA Model
Authors: Yuqi Gu and Gongjun Xu (University of Michigan)
Abstract: Cognitive Diagnosis Models (CDMs) are useful statistical tools in cognitive diagnosis assessment. However, like many other latent variable models, CDMs often suffer from the non-identifiability issue. This work gives the sufficient and necessary condition for identifiability of the basic DINA model, which not only addresses the open problem in Xu and Zhang (2016, Psychometrika, 81:625-649) on the minimal requirement for identifiability, but also sheds light on the study of more general CDMs, which often cover DINA as a submodel. Moreover, we show that the identifiability condition ensures the consistent estimation of the model parameters. From a practical perspective, the identifiability condition only depends on the Q-matrix structure and is easy to verify, which provides a guideline for designing statistically valid and estimable cognitive diagnosis tests.
Keywords: cognitive diagnosis models, identifiability, estimability, Q-matrix.
1. Introduction: Cognitive Diagnosis Models (CDMs), also called Diagnostic Classification Models (DCMs), are useful statistical tools in cognitive diagnosis assessment, which aims to achieve a fine-grained decision on an individual's latent attributes, such as skills, knowledge, personality traits, or psychological disorders, based on his or her observed responses to some designed diagnostic items. The CDMs fall into the more general regime of restricted latent class models in the statistics literature, and model the complex relationships among the items, the latent attributes and the item responses for a set of items and a sample of respondents.
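In the DINA ("deterministic input, noisy and") model, an examinee answers item j correctly with probability 1 - s_j if she masters every attribute the Q-matrix requires for that item, and with guessing probability g_j otherwise. A minimal sketch with a hypothetical Q-matrix; the identifiability question is when the item parameters and the attribute population are recoverable from probabilities like these:

```python
import numpy as np

def dina_probs(Q, alpha, slip, guess):
    """DINA item response probabilities.
    Q: (J, A) 0/1 Q-matrix; alpha: (A,) 0/1 mastery profile;
    slip, guess: (J,) item parameters.
    Ideal response xi_j = 1 iff alpha masters every attribute item j
    requires; then P(correct) = 1 - slip_j if xi_j else guess_j."""
    xi = (Q @ alpha >= Q.sum(axis=1)).astype(float)  # conjunctive "and" rule
    return xi * (1 - slip) + (1 - xi) * guess

Q = np.array([[1, 0], [0, 1], [1, 1]])               # hypothetical Q-matrix
print(dina_probs(Q, np.array([1, 0]), np.full(3, 0.1), np.full(3, 0.2)))
# -> [0.9, 0.2, 0.2]: item 1 is mastered; items 2 and 3 fall back to guessing
```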
-
Approaches to Highly Parameterized Inversion: A Guide to Using PEST for Model-Parameter and Predictive-Uncertainty Analysis
Authors: John E. Doherty, Randall J. Hunt, and Matthew J. Tonkin
Groundwater Resources Program, Global Change Research & Development. U.S. Geological Survey Scientific Investigations Report 2010-5211, U.S. Department of the Interior, 2011.
Cover figure: Processing steps required for generation of a sequence of calibration-constrained parameter fields by use of the null-space Monte Carlo methodology: 1. Calibrate the model. 2. Generate a parameter set using C(p). 3. Take the difference with the calibrated parameter field (the total parameter error splits into null-space and solution-space components). 4. Project the difference onto the null space. 5. Add it to the calibrated field. 6. Adjust the solution-space components. 7. Repeat.
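The cover-figure recipe can be sketched in a few lines of linear algebra: the null space of the Jacobian at the calibrated field spans the parameter combinations the observations cannot constrain, so random parameter differences are projected onto it and added back to the calibrated field. A schematic sketch under simplifying assumptions (linearity; a user-chosen solution-space dimension n_solution); step 6, the re-adjustment of solution-space components, requires model reruns and is omitted here:

```python
import numpy as np

def nsmc_field(J, p_cal, cov_chol, n_solution, rng):
    """One calibration-constrained parameter field via the null-space
    Monte Carlo recipe sketched on the cover figure.
    J: Jacobian of observations w.r.t. parameters at the calibrated field;
    p_cal: calibrated parameters; cov_chol: Cholesky factor of C(p);
    n_solution: dimension of the solution (calibration) space."""
    _, _, Vt = np.linalg.svd(J)
    V_null = Vt[n_solution:].T                       # null-space basis
    p_rand = p_cal + cov_chol @ rng.standard_normal(p_cal.size)  # step 2
    diff = p_rand - p_cal                            # step 3
    return p_cal + V_null @ (V_null.T @ diff)        # steps 4-5

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 6))   # hypothetical: 3 observations, 6 parameters
p_new = nsmc_field(J, np.zeros(6), np.eye(6), n_solution=3, rng=rng)
```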
-
The Identification Zoo - Meanings of Identification in Econometrics
Author: Arthur Lewbel (Boston College). First version January 2016; final preprint version October 2019. Published version: Journal of Economic Literature, December 2019, 57(4).
Abstract: Over two dozen different terms for identification appear in the econometrics literature, including set identification, causal identification, local identification, generic identification, weak identification, identification at infinity, and many more. This survey: 1. gives a new framework unifying existing definitions of point identification; 2. summarizes and compares the zooful of different terms associated with identification that appear in the literature; and 3. discusses concepts closely related to identification, such as normalizations and the differences in identification between structural models and causal, reduced form models.
JEL codes: C10, B16
Keywords: Identification, Econometrics, Coherence, Completeness, Randomization, Causal inference, Reduced Form Models, Instrumental Variables, Structural Models, Observational Equivalence, Normalizations, Nonparametrics, Semiparametrics.
Acknowledgments: I would like to thank Steven Durlauf, Jim Heckman, Judea Pearl, Krishna Pendakur, Frederic Vermeulen, Daniel Ben-Moshe, Xun Tang, Juan-Carlos Escanciano, Jeremy Fox, Eric Renault, Yingying Dong, Laurens Cherchye, Matthew Gentzkow, Fabio Schiantarelli, Andrew Pua, Ping Yu, and five anonymous referees for many helpful suggestions. All errors are my own. Corresponding address: Arthur Lewbel, Dept of Economics, Boston College,