An Introduction to MM Algorithms for Machine Learning and Statistical Estimation


Hien D. Nguyen

November 12, 2016

School of Mathematics and Physics, University of Queensland, St. Lucia.
Centre for Advanced Imaging, University of Queensland, St. Lucia.

arXiv:1611.03969v1 [stat.CO] 12 Nov 2016

Abstract

MM (majorization–minimization) algorithms are an increasingly popular tool for solving optimization problems in machine learning and statistical estimation. This article introduces the MM algorithm framework in general and via three popular example applications: Gaussian mixture regressions, multinomial logistic regressions, and support vector machines. Specific algorithms for the three examples are derived and numerical demonstrations are presented. Theoretical and practical aspects of MM algorithm design are discussed.

1 Introduction

Let $X \in \mathbb{X} \subset \mathbb{R}^{p}$ and $Y \in \mathbb{Y} \subset \mathbb{R}^{q}$ be random variables, which we shall refer to as the input and target variables, respectively. We shall denote a sample of $n$ independent and identically distributed (IID) pairs of variables $D = \{(X_i, Y_i)\}_{i=1}^{n}$ as the data, and $\bar{D} = \{(x_i, y_i)\}_{i=1}^{n}$ as an observed realization of the data. Under the empirical risk minimization (ERM) framework of Vapnik (1998, Ch. 1) or the extremum estimation (EE) framework of Amemiya (1985, Ch. 4), a large number of machine learning and statistical estimation problems can be phrased as the computation of

$$\min_{\theta \in \Theta} \mathcal{R}(\theta; \bar{D}) \quad \text{or} \quad \hat{\theta} = \arg\min_{\theta \in \Theta} \mathcal{R}(\theta; \bar{D}), \qquad (1)$$

where $\mathcal{R}(\theta; \bar{D})$ is a risk function defined over the observed data $\bar{D}$ and is dependent on some parameter $\theta \in \Theta$.

Common risk functions that are used in practice are the negative log-likelihood functions, which can be expressed as

$$\mathcal{R}(\theta; \bar{D}) = -\frac{1}{n} \sum_{i=1}^{n} \log f(x_i, y_i; \theta),$$

where $f(x, y; \theta)$ is a density function over the support of $X$ and $Y$, which takes parameter $\theta$. The minimization of the risk in this case yields the maximum likelihood (ML) estimate for the data $\bar{D}$, given the parametric family $f$. Another common risk function is the $|\cdot|^{d}$-norm difference between the target variable and some function $f$ of the input:

$$\mathcal{R}(\theta; \bar{D}) = \frac{1}{n} \sum_{i=1}^{n} |y_i - f(x_i; \theta)|^{d},$$

where $d \in [1, 2]$, $y_i \in \mathbb{R}$, and $f(x; \theta)$ is some predictive function of $y_i$ with parameter $\theta$ that takes $x$ as an input. Setting $d = 1$ and $d = 2$ yields the common least-absolute-deviation and least-squares criteria, respectively. Furthermore, taking $f(x; \theta) = \theta^{\top} x$ and $d = 2$ simultaneously yields the classical ordinary least-squares criterion. Here $\Theta \subset \mathbb{R}^{p}$ and the superscript $\top$ indicates matrix transposition.

When $\mathbb{Y} = \{-1, 1\}$, a common problem in machine learning is to construct a classification function $f(x_i; \theta)$ that minimizes the classification (0-1) risk

$$\mathcal{R}(\theta; \bar{D}) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}\{y_i \neq f(x_i; \theta)\},$$

where $f: \mathbb{X} \to \mathbb{Y}$ and $\mathbb{I}\{A\} = 1$ if proposition $A$ is true and $0$ otherwise. Unfortunately, the form of the classification risk is combinatorial and thus necessitates the use of surrogate classification risks of the form

$$\mathcal{R}(\theta; \bar{D}) = \frac{1}{n} \sum_{i=1}^{n} \psi(x_i, y_i, f(x_i; \theta)),$$

where $\psi: \mathbb{R}^{p} \times \{-1, 1\} \times \mathbb{R} \to [0, \infty)$ and $\psi(x, y, y) = 0$ for all $x$ and $y$ [cf. Scholkopf & Smola (2002, Def. 3.1)]. An example of a machine learning algorithm that minimizes a surrogate classification risk is the support vector machine (SVM) of Cortes & Vapnik (1995). The linear-basis SVM utilizes a surrogate risk function, where $\psi(x, y, f(x; \theta)) = \max\{0, 1 - y f(x; \theta)\}$ is the hinge loss function, $f(x; \theta) = \alpha + \beta^{\top} x$, and $\theta^{\top} = (\alpha, \beta^{\top}) \in \Theta \subset \mathbb{R}^{p+1}$.
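To make the preceding risk functions concrete, here is a minimal Python sketch that evaluates the $|\cdot|^{d}$-norm risk and the hinge surrogate risk for a linear predictor; the function names and synthetic data are illustrative assumptions, not code from the article.

```python
import numpy as np

def lp_risk(theta, X, y, d=2.0):
    # Empirical |.|^d risk: (1/n) sum_i |y_i - theta'x_i|^d. Setting d = 2
    # gives the least-squares criterion; d = 1 gives least absolute deviations.
    return np.mean(np.abs(y - X @ theta) ** d)

def hinge_risk(theta, X, y):
    # Empirical hinge-loss surrogate risk for the linear-basis SVM with
    # f(x; theta) = alpha + beta'x and labels y in {-1, +1}.
    alpha, beta = theta[0], theta[1:]
    return np.mean(np.maximum(0.0, 1.0 - y * (alpha + X @ beta)))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y_reg = X @ beta_true + rng.normal(size=100)
y_cls = np.where(X @ beta_true > 0, 1.0, -1.0)

print(lp_risk(np.zeros(3), X, y_reg, d=2.0))  # least-squares risk at theta = 0
print(hinge_risk(np.zeros(4), X, y_cls))      # hinge risk at theta = 0 equals 1
```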
The task of computing (1) may be complicated by various factors that fall outside the scope of the traditional calculus formulation for optimization [cf. Khuri (2003, Ch. 7)]. Such factors include the lack of differentiability of $\mathcal{R}$ or difficulty in obtaining closed-form solutions to the first-order condition (FOC) equation $\nabla_{\theta} \mathcal{R} = \mathbf{0}$, where $\nabla_{\theta}$ is the gradient operator with respect to $\theta$, and $\mathbf{0}$ is a zero vector.

The MM (majorization–minimization) algorithm framework is a unifying paradigm for simplifying the computation of (1) when such difficulties arise, via iterative minimization of surrogate functions. MM algorithms are particularly attractive due to the monotonicity, and thus stability, of their objective sequences, as well as the global convergence of their limits, in general settings.
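As an illustration of these points, the following sketch applies MM to least-absolute-deviation regression, a case where $\mathcal{R}$ is non-differentiable at zero residuals. It uses the standard quadratic majorizer $|r| \le r^2/(2|r^{(k)}|) + |r^{(k)}|/2$ that underlies IRLS-type algorithms such as those of Becker et al. (1997) and Lange et al. (2000); the function name lad_mm and the numerical safeguard eps are our assumptions, not the article's.

```python
import numpy as np

def lad_mm(X, y, iters=100, eps=1e-10):
    # MM for least-absolute-deviation regression: at iterate theta^(k), each
    # |r_i| is majorized by r_i^2 / (2|r_i^(k)|) + |r_i^(k)| / 2, a quadratic
    # touching |r_i| at the current residual. Minimizing the summed majorizer
    # is a weighted least-squares solve with weights 1 / |r_i^(k)|.
    theta = np.linalg.lstsq(X, y, rcond=None)[0]  # initialize at the OLS fit
    risks = [np.mean(np.abs(y - X @ theta))]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ theta), eps)  # IRLS weights
        WX = X * w[:, None]                                # row-scaled design
        theta = np.linalg.solve(X.T @ WX, X.T @ (w * y))   # weighted LS step
        risks.append(np.mean(np.abs(y - X @ theta)))
    return theta, risks

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([0.5, 1.0, -1.0]) + rng.standard_t(df=2, size=200)
theta, risks = lad_mm(X, y)
print(theta)
print(risks[0], risks[-1])
```

Each iteration costs only a weighted least-squares solve, and the recorded risk sequence is non-increasing, illustrating the monotonicity property noted above.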
A comprehensive treatment of the theory and implementation of MM algorithms can be found in Lange (2016). Summaries and tutorials on MM algorithms for various problems can be found in Becker et al. (1997), Hunter & Lange (2004), Lange (2013, Ch. 8), Lange et al. (2000), Lange et al. (2014), McLachlan & Krishnan (2008, Sec. 7.7), Wu & Lange (2010), and Zhou et al. (2010). Some theoretical analyses of MM algorithms can be found in de Leeuw & Lange (2009), Lange (2013, Sec. 12.4), Mairal (2015), and Vaida (2005).

It is known that MM algorithms are generalizations of the EM (expectation–maximization) algorithms of Dempster et al. (1977) [cf. Lange (2013, Ch. 9)]. The recently established connection between MM algorithms and the successive upper-bound maximization (SUM) algorithms of Razaviyayn et al. (2013) further shows that the MM algorithm framework also covers the concave-convex procedures (CCCP) [Yuille & Rangarajan (2003)], proximal algorithms (Parikh & Boyd, 2013), forward-backward splitting (FBS) algorithms [Combettes & Pesquet (2011)], as well as various incarnations of iteratively-reweighted least-squares (IRLS) algorithms such as those of Becker et al. (1997) and Lange et al. (2000); see Hong et al. (2016) for details.

It is not possible to provide a complete list of applications of MM algorithms to machine learning, statistical estimation, and signal processing problems. We present a comprehensive albeit incomplete summary of applications of MM algorithms in Table 1. Further examples and references can be found in Hong et al. (2016) and Lange (2016).

In this article, we will present the MM algorithm framework via applications to three examples that span the scope of statistical estimation and machine learning problems: Gaussian mixtures of regressions (GMR), multinomial-logistic regressions (MLR), and SVM estimations. The three estimation problems will firstly be presented in Section 2. The MM algorithm framework will be presented in Section 3 along with some theoretical results. MM algorithms for the three estimation problems are presented in Section 4. Numerical demonstrations of the MM algorithms are presented in Section 5. Conclusions are then drawn in Section 6.

Table 1: MM algorithm applications and references.

Bradley-Terry models estimation (Hunter, 2004; Lange et al., 2000)
Convex and shape-restricted regressions (Chi et al., 2014)
Dirichlet-multinomial distributions estimation (Zhou & Lange, 2010; Zhou & Zhang, 2012)
Elliptical symmetric distributions estimation (Becker et al., 1997)
Fully-visible Boltzmann machines estimation (Nguyen & Wood, 2016)
Gaussian mixtures estimation (Nguyen & McLachlan, 2015)
Geometric and sigmoidal programming (Lange & Zhou, 2014)
Heteroscedastic regressions (Daye et al., 2012; Nguyen et al., 2016a)
Laplace regression models estimation (Nguyen & McLachlan, 2014, 2016a)
Least $|\cdot|^{d}$-norm regressions (Becker et al., 1997; Lange et al., 2000)
Linear mixed models estimation (Lange et al., 2014)
Logistic and multinomial regressions (Bohning & Lindsay, 1988; Bohning, 1992)
Markov random field estimation (Nguyen et al., 2016c)
Matrix completion and imputation (Mazumder et al., 2010)
Mixture of experts models estimation (Nguyen & McLachlan, 2016a; Nguyen et al., 2016b)
Multidimensional scaling (Lange et al., 2014)
Multivariate t distributions estimation (Wu & Lange, 2010)
Non-negative matrix factorization (Lee & Seung, 1999)
Poisson regression estimation (Lange et al., 2000)
Point to set minimization problems (Chi & Lange, 2014; Chi et al., 2014)
Polygonal distributions estimation (Nguyen & McLachlan, 2016b)
Quantile regression estimation (Hunter & Lange, 2000)
SVM estimation (de Leeuw & Lange, 2009; Wu & Lange, 2010)
Transmission tomography image reconstruction (De Pierro, 1993; Becker et al., 1997)
Variable selection in regression via regularization (Hunter & Li, 2005)

2 Example problems

2.1 Gaussian mixture of regressions

Let $X$ arise from a distribution with unknown density function $f_X(x)$, which does not depend on the parameter $\theta$ ($X$ can be non-stochastic). Conditioned on $X = x$, suppose that $Y$ can arise from one of $g \in \mathbb{N}$ possible component regression regimes. Let $Z$ be a random variable that indicates the component from which $Y$ arises, such that $\mathbb{P}(Z = c) = \pi_c$, where $c \in [g]$ ($[g] = \{1, 2, \dots, g\}$), $\pi_c > 0$, and $\sum_{c=1}^{g} \pi_c = 1$. Write the conditional probability density of $Y$ given $X = x$ and $Z = c$ as

$$f_{Y|X,c}(y \mid x; B_c, \Sigma_c) = \phi(y; B_c x, \Sigma_c), \qquad (2)$$

where $B_c \in \mathbb{R}^{q \times p}$, $\Sigma_c$ is a positive-definite $q \times q$ covariance matrix, and

$$\phi(y; \mu, \Sigma) = (2\pi)^{-q/2} |\Sigma|^{-1/2} \exp\left[-\frac{1}{2}(y - \mu)^{\top} \Sigma^{-1} (y - \mu)\right] \qquad (3)$$

is the multivariate Gaussian density with mean vector $\mu$ and covariance matrix $\Sigma$. The conditional (in $Z$) characterization (2) leads to the marginal characterization

$$f_{Y|X}(y \mid x; \theta) = \sum_{c=1}^{g} \pi_c \phi(y; B_c x, \Sigma_c), \qquad (4)$$

where $\theta$ contains the parameter elements $\pi_c$, $B_c$, and $\Sigma_c$, for $c \in [g]$. We refer to the characterization (4) as the GMR model. The GMR model was first proposed by Quandt (1972) for the $q = 1$ case, and an EM algorithm for the same case was proposed in DeSarbo & Cron (1988). To the best of our knowledge, the general multivariate case ($q > 1$) of characterization (4) was first considered in Jones & McLachlan (1992). See McLachlan & Peel (2000) regarding mixture models in general.
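A direct numerical transcription of the mixture density (4) may help fix ideas. The sketch below assumes SciPy's multivariate normal density, and gmr_density is an illustrative name rather than code from the article.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmr_density(y, x, pis, Bs, Sigmas):
    # Conditional GMR density (4): a g-component Gaussian mixture in y whose
    # component means B_c x depend linearly on the input x.
    return sum(pi_c * multivariate_normal.pdf(y, mean=B_c @ x, cov=Sigma_c)
               for pi_c, B_c, Sigma_c in zip(pis, Bs, Sigmas))

# A small g = 2, p = q = 2 example with equal mixing proportions.
pis = [0.5, 0.5]
Bs = [np.eye(2), -np.eye(2)]
Sigmas = [np.eye(2), 0.5 * np.eye(2)]
print(gmr_density(y=np.zeros(2), x=np.ones(2), pis=pis, Bs=Bs, Sigmas=Sigmas))
```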
Given data $\bar{D}$, the estimation of a GMR model requires the minimization of the negative log-likelihood risk

$$\mathcal{R}(\theta; \bar{D}) = -\frac{1}{n} \sum_{i=1}^{n} \log f_{Y|X}(y_i \mid x_i; \theta) = -\frac{1}{n} \sum_{i=1}^{n} \log \sum_{c=1}^{g} \pi_c \phi(y_i; B_c x_i, \Sigma_c).$$
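A sketch of how this risk might be evaluated numerically is given below; the log-sum-exp stabilization of the inner sum is our own implementation choice, not a detail prescribed by the article, and the parameter values are arbitrary illustrations.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def gmr_nll_risk(pis, Bs, Sigmas, X, Y):
    # Negative log-likelihood risk of the GMR model. The inner sum over the
    # g components is computed on the log scale via log-sum-exp so that
    # small component densities do not underflow.
    n = X.shape[0]
    log_liks = np.empty(n)
    for i in range(n):
        log_terms = [np.log(pi_c)
                     + multivariate_normal.logpdf(Y[i], mean=B_c @ X[i], cov=Sigma_c)
                     for pi_c, B_c, Sigma_c in zip(pis, Bs, Sigmas)]
        log_liks[i] = logsumexp(log_terms)
    return -np.mean(log_liks)

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))
Y = rng.normal(size=(50, 2))
print(gmr_nll_risk(pis=[0.5, 0.5], Bs=[np.eye(2), -np.eye(2)],
                   Sigmas=[np.eye(2), 0.5 * np.eye(2)], X=X, Y=Y))
```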