Computing Functions of Random Variables Via Reproducing Kernel Hilbert Space Representations


Computing functions of random variables via reproducing kernel Hilbert space representations

Stat Comput (2015) 25:755–766, DOI 10.1007/s11222-015-9558-5

Bernhard Schölkopf¹ · Krikamol Muandet¹ · Kenji Fukumizu² · Stefan Harmeling³ · Jonas Peters⁴

¹ Max Planck Institute for Intelligent Systems, 72076 Tübingen, Germany
² Institute of Statistical Mathematics, 10-3 Midori-cho, Tachikawa, Tokyo
³ Institut für Informatik, Heinrich-Heine-Universität, 40225 Düsseldorf, Germany
⁴ ETH Zürich, Seminar für Statistik, 8092 Zürich, Switzerland

Accepted: 8 March 2015 / Published online: 11 June 2015
© The Author(s) 2015. This article is published with open access at Springerlink.com

Abstract — We describe a method to perform functional operations on probability distributions of random variables. The method uses reproducing kernel Hilbert space representations of probability distributions, and it is applicable to all operations which can be applied to points drawn from the respective distributions. We refer to our approach as kernel probabilistic programming. We illustrate it on synthetic data and show how it can be used for nonparametric structural equation models, with an application to causal inference.

Keywords — Kernel methods · Probabilistic programming · Causal inference

1 Introduction

Data types, derived structures, and associated operations play a crucial role for programming languages and the computations we carry out using them. Choosing a data type, such as Integer, Float, or String, determines the possible values, as well as the operations that an object permits. Operations typically return their results in the form of data types. Composite or derived data types may be constructed from simpler ones, along with specialized operations applicable to them.

The goal of the present paper is to propose a way to represent distributions over data types, and to generalize operations built originally for the data types to operations applicable to those distributions. Our approach is nonparametric and thus not concerned with what distribution models make sense on which statistical data type (e.g., binary, ordinal, categorical). It is also general, in the sense that in principle, it applies to all data types and functional operations. The price to pay for this generality is that

• our approach will, in most cases, provide approximate results only; however, we include a statistical analysis of the convergence properties of our approximations, and
• for each data type involved (as either input or output), we require a positive definite kernel capturing a notion of similarity between two objects of that type. Some of our results require, moreover, that the kernels be characteristic in the sense that they lead to injective mappings into associated Hilbert spaces.

In a nutshell, our approach represents distributions over objects as elements of a Hilbert space generated by a kernel and describes how those elements are updated by operations available for sample points. If the kernel is trivial in the sense that each distinct object is only similar to itself, the method reduces to a Monte Carlo approach where the operations are applied to sample points which are propagated to the next step. Computationally, we represent the Hilbert space elements as finite weighted sets of objects, and all operations reduce to finite expansions in terms of kernel functions between objects.

The remainder of the present article is organized as follows.
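The "in a nutshell" description above — a distribution carried as a finite weighted set of sample points, with operations applied to the points and the weights propagated — can be made concrete with a small sketch. This is our illustration, not the paper's code, and it shows the trivial-kernel case, which the text identifies with a plain Monte Carlo approach:

```python
import numpy as np

def push_forward(points, weights, f):
    """Apply a pointwise operation f to a weighted-sample representation
    of a distribution: the expansion points are transformed, while the
    weights are propagated unchanged to the next step."""
    return np.asarray([f(x) for x in points]), weights

# Represent p = Uniform[0, 1] by m equally weighted sample points ...
rng = np.random.default_rng(0)
m = 100_000
points = rng.random(m)
weights = np.full(m, 1.0 / m)

# ... and push the representation through the operation x -> x**2.
sq_points, sq_weights = push_forward(points, weights, lambda x: x**2)

# The weighted mean of the new expansion approximates E[x^2] = 1/3.
print(np.sum(sq_weights * sq_points))
```

With a nontrivial kernel the same weighted expansion would instead be read as an RKHS element, which is the generalization the paper develops.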
After describing the necessary preliminaries, we provide an exposition of our approach (Sect. 3). Section 4 analyzes an application to the problem of cause–effect inference using structural equation models. We conclude with a limited set of experimental results.

2 Kernel maps

2.1 Positive definite kernels

The concept of representing probability distributions in a reproducing kernel Hilbert space (RKHS) has recently attracted attention in statistical inference and machine learning (Berlinet and Agnan 2004; Smola et al. 2007). One of the advantages of this approach is that it allows us to apply RKHS methods to probability distributions, often with strong theoretical guarantees (Sriperumbudur et al. 2008, 2010). It has been applied successfully in many domains such as graphical models (Song et al. 2010, 2011), two-sample testing (Gretton et al. 2012), domain adaptation (Huang et al. 2007; Gretton et al. 2009; Muandet et al. 2013), and supervised learning on probability distributions (Muandet et al. 2012; Szabó et al. 2014). We begin by briefly reviewing these methods, starting with some prerequisites.

We assume that our input data {x₁, ..., x_m} live in a nonempty set 𝒳 and are generated i.i.d. by a random experiment with Borel probability distribution p. By k, we denote a positive definite kernel on 𝒳 × 𝒳, i.e., a symmetric function

  k : 𝒳 × 𝒳 → ℝ           (1)
      (x, x') ↦ k(x, x')   (2)

satisfying the following nonnegativity condition: for any m ∈ ℕ and a₁, ..., a_m ∈ ℝ,

  ∑_{i,j=1}^m a_i a_j k(x_i, x_j) ≥ 0.   (3)

If equality in (3) implies that a₁ = ··· = a_m = 0, the kernel is called strictly positive definite.

2.2 Kernel maps for points

Kernel methods in machine learning, such as Support Vector Machines or Kernel PCA, are based on mapping the data into a reproducing kernel Hilbert space (RKHS) ℋ (Boser et al. 1992; Schölkopf and Smola 2002; Shawe-Taylor and Cristianini 2004; Hofmann et al. 2008; Steinwart and Christmann 2008),

  Φ : 𝒳 → ℋ          (4)
      x ↦ Φ(x),       (5)

where the feature map (or kernel map) Φ satisfies

  k(x, x') = ⟨Φ(x), Φ(x')⟩   (6)

for all x, x' ∈ 𝒳. One can show that every k taking the form (6) is positive definite, and every positive definite k allows the construction of ℋ and Φ satisfying (6). The canonical feature map, which is what by default we think of whenever we write Φ, is

  Φ : 𝒳 → ℝ^𝒳        (7)
      x ↦ k(x, ·),    (8)

with an inner product satisfying the reproducing kernel property

  k(x, x') = ⟨k(x, ·), k(x', ·)⟩.   (9)

Mapping observations x ∈ 𝒳 into a Hilbert space is rather convenient in some cases. If the original domain 𝒳 has no linear structure to begin with (e.g., if the x are strings or graphs), then the Hilbert space representation provides us with the possibility to construct geometric algorithms using the inner product of ℋ. Moreover, even if 𝒳 is a linear space in its own right, it can be helpful to use a nonlinear feature map in order to construct algorithms that are linear in ℋ while corresponding to nonlinear methods in 𝒳.

2.3 Kernel maps for sets and distributions

One can generalize the map Φ to accept as inputs not only single points, but also sets of points or distributions. It was pointed out that the kernel map of a set of points X := {x₁, ..., x_m},

  μ[X] := (1/m) ∑_{i=1}^m Φ(x_i),   (10)

corresponds to a kernel density estimator in the input domain (Schölkopf and Smola 2002; Schölkopf et al. 2001), provided the kernel is nonnegative and integrates to 1. However, the map (10) can be applied for all positive definite kernels, including ones that take negative values or that are not normalized. Moreover, the fact that μ[X] lives in an RKHS and the use of the associated inner product and norm will have a number of subsequent advantages. For these reasons, it would be misleading to think of (10) simply as a kernel density estimator.

The kernel map of a distribution p can be defined as the expectation of the feature map (Berlinet and Agnan 2004; Smola et al. 2007; Gretton et al. 2012),

  μ[p] := E_{x∼p}[Φ(x)],   (11)

where we overload the symbol μ and assume, here and below, that p is a Borel probability measure, and E_{x,x'∼p} k(x, x') < ∞. This is related to the moment generating function of a random variable x with distribution p, M_p(·) = E_{x∼p}[e^{⟨x, ·⟩}], which equals (11) for k(x, x') = e^{⟨x, x'⟩} using (8).

Table 1 — What information about a sample X does the kernel map μ[X] [see (10)] contain?

  k(x, x') = ⟨x, x'⟩                  Mean of X
  k(x, x') = (⟨x, x'⟩ + 1)^n          Moments of X up to order n ∈ ℕ
  k(x, x') strictly p.d.              All of X (i.e., μ injective)

Table 2 — What information about p does the kernel map μ[p] [see (11)] contain? For the notions of characteristic/universal kernels, see Steinwart (2002); Fukumizu et al. (2008, 2009); an example thereof is the Gaussian kernel (26)

  k(x, x') = ⟨x, x'⟩                  Expectation of p
  k(x, x') = (⟨x, x'⟩ + 1)^n          Moments of p up to order n ∈ ℕ
  k(x, x') characteristic/universal   All of p (i.e., μ injective)

We can use the map to test for equality of data sets,

  ‖μ[X] − μ[X']‖ = 0 ⟺ X = X',   (15)

or distributions,

  ‖μ[p] − μ[p']‖ = 0 ⟺ p = p'.   (16)

Two applications of this idea lead to tests for homogeneity and independence. In the latter case, we estimate ‖μ[p_x p_y] − μ[p_{xy}]‖ (Bach and Jordan 2002; Gretton et al. 2005); in the former case, we estimate ‖μ[p] − μ[p']‖ (Gretton et al. 2012).

Estimators for these applications can be constructed in terms of the empirical mean estimator (the kernel mean of the empirical distribution)

  μ[p̂_m] = (1/m) ∑_{i=1}^m Φ(x_i) = μ[X],   (17)

where X = {x₁, ..., x_m} is an i.i.d. sample from p [cf. (10)]. As an aside, note that using ideas from James–Stein estimation (James and Stein 1961), we can construct shrinkage estimators that improve upon the standard empirical estimator.
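The empirical kernel mean (17) and the distribution-equality test (16) can be sketched numerically. The following is our illustration, not the authors' code: it uses a Gaussian kernel (characteristic, so the RKHS distance vanishes only for equal distributions) and expands ‖μ[X] − μ[Y]‖² via the kernel trick into Gram-matrix averages — the (biased) MMD statistic of Gretton et al. (2012):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gram matrix K[i, j] = exp(-(a_i - b_j)^2 / (2 sigma^2)) for 1-D samples."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Squared RKHS distance ||mu[X] - mu[Y]||^2 between the empirical kernel
    means (10) of two samples, expanded via the kernel trick:
    mean K(x, x') + mean K(y, y') - 2 mean K(x, y)."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)   # sample from p
y = rng.normal(0.0, 1.0, size=500)   # second sample from the same p
z = rng.normal(2.0, 1.0, size=500)   # sample from a shifted p'

# Equal distributions give a squared distance near 0; unequal ones do not.
print(mmd2(x, y))   # small
print(mmd2(x, z))   # clearly positive
```

This biased estimator includes the diagonal Gram terms, so it is slightly above zero even for identical samples; the unbiased variant in Gretton et al. (2012) removes them.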
Recommended publications
  • Categorical and Numerical Data Examples
Statistical data broadly divide into numerical (quantitative) and categorical (qualitative) types. Nominal- and ordinal-scale variables are considered qualitative or categorical variables. Categorical data describes categories or groups; car brands like Mercedes, BMW, and Audi are different categories. Categorical data can also take on numerical values (for example, 1 for female and 0 for male), but those numbers don't have mathematical meaning. Quantitative variables take numerical values and represent some kind of measurement: a GPA of 3.3 and a GPA of 4.0 can be added together, and it makes sense to speak of age in numerical form, since a person's age can be, say, 1 year or 80 years.
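The entry's central point — arithmetic is meaningful for numerical values but not for numerically coded nominal categories — can be illustrated with a small sketch (our example, not from the cited text; one-hot encoding is the usual way to make nominal categories computable):

```python
import numpy as np

# Numerical data: arithmetic is meaningful (GPAs of 3.3 and 4.0 average to 3.65).
gpas = np.array([3.3, 4.0])
print(gpas.mean())

# Categorical data coded as numbers (e.g. 1 = female, 0 = male) carries no
# arithmetic meaning, so nominal variables are typically one-hot encoded.
brands = ["Mercedes", "BMW", "Audi", "BMW"]
levels = sorted(set(brands))                     # ['Audi', 'BMW', 'Mercedes']
one_hot = np.array([[b == lvl for lvl in levels] for b in brands], dtype=float)
print(one_hot)

# Column means of the one-hot matrix are category frequencies, not "averages".
print(one_hot.mean(axis=0))                      # [0.25, 0.5, 0.25]
```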
  • Contributions to Biostatistics: Categorical Data Analysis, Data Modeling and Statistical Inference Mathieu Emily
Contributions to biostatistics: categorical data analysis, data modeling and statistical inference. Mathieu Emily. Mathematics [math]. Université de Rennes 1, 2016. HAL Id: tel-01439264, https://hal.archives-ouvertes.fr/tel-01439264, submitted on 18 Jan 2017. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

HABILITATION À DIRIGER DES RECHERCHES, Université de Rennes 1, defended November 15th, 2016. Jury: Christophe Ambroise (Professeur, Université d'Evry Val d'Essonne, France, Rapporteur); Gérard Biau (Professeur, Université Pierre et Marie Curie, France, Président); David Causeur (Professeur, Agrocampus Ouest, France, Examinateur); Heather Cordell (Professor, University of Newcastle upon Tyne, United Kingdom, Rapporteur); Jean-Michel Marin (Professeur, Université de Montpellier, France, Rapporteur); Valérie Monbet (Professeur, Université de Rennes 1, France, Examinateur); Korbinian Strimmer (Professor, Imperial College London, United Kingdom, Examinateur); Jean-Philippe Vert (Directeur de recherche, Mines ParisTech, France, Examinateur). Pour M.H.A.M. Acknowledgments [translated from French]: First of all, I wish to thank all the members of the jury, who did me the honor of evaluating my work.
  • Compliance, Safety, Accountability: Analyzing the Relationship of Scores to Crash Risk October 2012
Compliance, Safety, Accountability: Analyzing the Relationship of Scores to Crash Risk. October 2012. Micah D. Lueck, Research Associate, American Transportation Research Institute, Saint Paul, MN. ATRI: 950 N. Glebe Road, Suite 210, Arlington, Virginia 22203, www.atri-online.org.

ATRI Board of Directors: Mr. Steve Williams (Chairman of the ATRI Board; Chairman & CEO, Maverick USA, Inc., Little Rock, AR); Mr. William J. Logue (President & CEO, FedEx Freight, Memphis, TN); Ms. Judy McReynolds (President & CEO, Arkansas Best Corporation, Fort Smith, AR); Mr. Michael S. Card (President, Combined Transport, Inc., Central Point, OR); Mr. Jeffrey J. McCaig (President & CEO, Trimac Transportation, Inc., Houston, TX); Mr. Edward Crowell (President & CEO, Georgia Motor Trucking Association, Atlanta, GA); Mr. Gregory L. Owen (Head Coach & CEO, Ability/Tri-Modal Transportation Services, Carson, CA); Mr. Rich Freeland (President – Engine Business, Cummins Inc., Columbus, IN); Mr. Hugh H. Fugleberg (President & COO, Great West Casualty Company, South Sioux City, NE); Mr. Douglas W. Stotlar (President & CEO, Con-way Inc., Ann Arbor, MI); Mr. Jack Holmes (President, UPS Freight, Richmond, VA); Ms. Rebecca M. Brewster (President & COO, American Transportation Research Institute, Atlanta, GA); Mr. Ludvik F. Koci (Director, Penske Transportation Components, Bloomfield Hills, MI); Honorable Bill Graves (President & CEO, American Trucking Associations, Arlington, VA); Mr. Chris Lofgren (President & CEO, Schneider National, Inc., Green Bay, WI).

ATRI Research Advisory Committee: Mr. Philip L. Byrd, Sr. (RAC Chairman; President & CEO, Bulldog Hiway Express); Ms. Patti Gillette (Safety Director, Colorado Motor Carriers Association); Ms. Kendra Adams (Executive Director, New York State Motor Truck Association); Mr. John Hancock (Director, Prime, Inc.)
  • General Latent Feature Models for Heterogeneous Datasets
Journal of Machine Learning Research 21 (2020) 1-49. Submitted 6/17; Revised 2/20; Published 4/20. General Latent Feature Models for Heterogeneous Datasets. Isabel Valera ([email protected]), Department of Computer Science, Saarland University, Saarbrücken, Germany; and Max Planck Institute for Intelligent Systems, Tübingen, Germany. Melanie F. Pradier ([email protected]), School of Engineering and Applied Sciences, Harvard University, Cambridge, USA. Maria Lomeli ([email protected]), Department of Engineering, University of Cambridge, Cambridge, UK. Zoubin Ghahramani ([email protected]), Department of Engineering, University of Cambridge, Cambridge, UK; and Uber AI, San Francisco, California, USA. Editor: Samuel Kaski. Abstract: Latent variable models allow capturing the hidden structure underlying the data. In particular, feature allocation models represent each observation by a linear combination of latent variables. These models are often used to make predictions either for new observations or for missing information in the original data, as well as to perform exploratory data analysis. Although there is an extensive literature on latent feature allocation models for homogeneous datasets, where all the attributes that describe each object are of the same (continuous or discrete) type, there is no general framework for practical latent feature modeling for heterogeneous datasets. In this paper, we introduce a general Bayesian nonparametric latent feature allocation model suitable for heterogeneous datasets, where the attributes describing each object can be arbitrary combinations of real-valued, positive real-valued, categorical, ordinal and count variables. The proposed model presents several important properties. First, it is suitable for heterogeneous data while keeping the properties of conjugate models, which enables us to develop an inference algorithm that presents linear complexity with respect to the number of objects and attributes per MCMC iteration.
  • Advanced Quantitative Data Analysis: Methodology for the Social Sciences
Advanced Quantitative Data Analysis: Methodology for the Social Sciences. Thomas Plümper, Professor of Quantitative Social Research, Vienna University of Economics. [email protected] © Thomas Plümper 2017–2018.

Credits: transparencies from Vera Troeger's advanced regression course; transparencies from Richard Traunmueller's causal inference course; Wikipedia; Institute for Digital Research and Education (http://stats.idre.ucla.edu/stata/dae/); Stata Corp.; FiveThirtyEight (https://fivethirtyeight.com/).

Structure: estimators chosen by the scale of the dependent variable (OLS, probit/logit, multinomial, ordered, Poisson/negative binomial, survival) are crossed with complications (error/residuals, coefficient/effect significance, functional form, conditionality, selection, truncation/censoring, dynamics, heterogeneity, spatial dependence), with each combination mapped to a chapter.

Table of contents: Chapter 1: Empirical Research and the Inference Problem; Chapter 2: Probabilistic Causal Mechanisms and Modes of Inference; Chapter 3: Statistical Inference and the Logic of Regression Analysis; Chapter 4: Linear Models: OLS; Chapter 5: Minor Complications and Extensions; Chapter 6: More Complications: Selection, Truncation, Censoring; Chapter 7: Maximum Likelihood Estimation of Categorical Variables; Chapter 8: ML Estimation of Count Data; Chapter 9: Dynamics and the Estimation; Chapter 10: Temporal Heterogeneity; Chapter 11: Causal Heterogeneity; Chapter 12: Spatial Dependence; Chapter 13: The Analysis of Dyadic Data; Chapter 14: Effect Strengths and Cases in Quantitative Research.

Literature: useful textbooks (among others). Chapter 1: Cohen, M.F., 2013.
  • A Probabilistic Programming Approach to Probabilistic Data Analysis
A Probabilistic Programming Approach To Probabilistic Data Analysis. Feras Saad, MIT Probabilistic Computing Project, [email protected]; Vikash Mansinghka, MIT Probabilistic Computing Project, [email protected]. Abstract: Probabilistic techniques are central to data analysis, but different approaches can be challenging to apply, combine, and compare. This paper introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques. Examples include discriminative machine learning, hierarchical Bayesian models, multivariate kernel methods, clustering algorithms, and arbitrary probabilistic programs. We demonstrate the integration of CGPMs into BayesDB, a probabilistic programming platform that can express data analysis tasks using a modeling definition language and structured query language. The practical value is illustrated in two ways. First, the paper describes an analysis on a database of Earth satellites, which identifies records that probably violate Kepler's Third Law by composing causal probabilistic programs with non-parametric Bayes in 50 lines of probabilistic code. Second, it reports the lines of code and accuracy of CGPMs compared with baseline solutions from standard machine learning libraries. 1 Introduction. Probabilistic techniques are central to data analysis, but can be difficult to apply, combine, and compare. Such difficulties arise because families of approaches such as parametric statistical modeling, machine learning and probabilistic programming are each associated with different formalisms and assumptions.
The contributions of this paper are (i) a way to address these challenges by defining CGPMs, a new family of composable probabilistic models; (ii) an integration of this family into BayesDB [10], a probabilistic programming platform for data analysis; and (iii) empirical illustrations of the efficacy of the framework for analyzing a real-world database of Earth satellites.
  • Master Thesis Automatic Data
    Eindhoven University of Technology MASTER Automatic data cleaning Zhang, J. Award date: 2018 Link to publication Disclaimer This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration. General rights Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. • Users may download and print one copy of any publication from the public portal for the purpose of private study or research. • You may not further distribute the material or use it for any profit-making activity or commercial gain Department of Mathematics and Computer Science Data Mining Research Group Automatic Data Cleaning Master Thesis Ji Zhang Supervisor: Dr. ir. Joaquin Vanschoren Assessment Committee Members: Dr. ir. Joaquin Vanschoren Dr. ir. Nikolay Yakovets Dr. ir. Michel Westenberg Eindhoven, November 2018 Abstract In the machine learning field, data quality is essential for developing an effective machine learning model. However, raw data always contain various kinds of data problems such as duplicated records, missing values and outliers, which may weaken the model power severely. Therefore, it is vital to clean data thoroughly before proceeding with the data analysis step.
  • Poisson Regression Model with Application to Doctor Visits
International Journal of Applied Science and Mathematical Theory, E-ISSN 2489-009X, P-ISSN 2695-1908, Vol 6, No. 1, 2020, www.iiardpub.org. Poisson Regression Model with Application to Doctor Visits. Uti, Uju & Isaac Didi Essi, Department of Mathematics, Faculty of Science, Rivers State University, Nkpolu Oroworokku, Rivers State, Nigeria. [email protected]

ABSTRACT: The problem was to apply Poisson regression and its variants to modeling doctor visits. The set of explanatory variables under consideration was tested, and subsequently the final model was determined. Maximum likelihood estimation was used, implemented in the Stata software package. The research resulted in selecting the count-model variant with the best estimates using information criteria. In addition, where there was overdispersion, negative binomial regression gave better estimates than the Poisson model. KEYWORDS: Poisson regression, model, application.

INTRODUCTION. 1.1 Background of study. A visit to the doctor, also known as a physician office visit or ward round, is a meeting between a patient and a physician to get health advice or treatment for a symptom or condition. According to a survey in the United States, a physician typically sees between fifty and one hundred patients per week; the rate of visitation may vary with medical specialty but differs little by community size. The four great cornerstones of diagnostic medicine are anatomy (structure: what is there), physiology (how the structure works), pathology (what goes wrong with anatomy and physiology) and psychology (mind and behavior); the physician should consider the patient in their "well" context rather than simply as a walking medical condition.
  • Computing Functions of Random Variables Via Reproducing Kernel Hilbert Space Representations
Computing Functions of Random Variables via Reproducing Kernel Hilbert Space Representations. Bernhard Schölkopf, Krikamol Muandet, Kenji Fukumizu, Jonas Peters. September 5, 2018. arXiv:1501.06794v1 [stat.ML], 27 Jan 2015. Abstract: We describe a method to perform functional operations on probability distributions of random variables. The method uses reproducing kernel Hilbert space representations of probability distributions, and it is applicable to all operations which can be applied to points drawn from the respective distributions. We refer to our approach as kernel probabilistic programming. We illustrate it on synthetic data, and show how it can be used for nonparametric structural equation models, with an application to causal inference. 1 Introduction. Data types, derived structures, and associated operations play a crucial role for programming languages and the computations we carry out using them. Choosing a data type, such as Integer, Float, or String, determines the possible values, as well as the operations that an object permits. Operations typically return their results in the form of data types. Composite or derived data types may be constructed from simpler ones, along with specialised operations applicable to them. The goal of the present paper is to propose a way to represent distributions over data types, and to generalize operations built originally for the data types to operations applicable to those distributions. Our approach is nonparametric and thus not concerned with what distribution models make sense on which statistical data type (e.g., binary, ordinal, categorical). It is also general, in the sense that in principle, it applies to all data types and functional operations.
  • Statistical Diagnostics of Models for Count Data
International Journal of Mathematics Trends and Technology (IJMTT) – Volume 63 Number 1 – November 2018. Statistical Diagnostics of Models for Count Data. Zamir Ali, Yu Feng, Ali Choo, Munsir Ali, Department of Probability and Mathematical Statistics, School of Science, Nanjing University of Science and Technology, Nanjing 210094, P.R. China.

Abstract: In statistical analysis, a problem that often arises is that there may exist some extremely small or large observations (outliers). To deal with this problem, diagnostics of the observations is the best option for the model-building process. Most analysts use the ordinary least squares (OLS) method, which fails badly at identifying outliers. In this paper, we use diagnostic methods to detect outliers and influential points in models for count data. Gauss–Newton and likelihood-distance approaches are used to detect outliers in parameter estimation in non-linear regression analysis. We use these techniques to analyze the behavior of residuals and influence in the non-linear regression model. The results show detection of single and multiple outlier cases in count data. Key words: Poisson regression, outliers, residuals, non-linear regression model, likelihood distance.

I. INTRODUCTION. Count data is a statistical data type in which the observations can take only non-negative integer values, i.e., {0, 1, 2, ...}, where these integers come from counting rather than ranking. Count data is distinct from binary data and ordinal data, and its statistical treatment is also different: in binary data an observation can take only two values, usually represented by 0 and 1.
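As a companion to the entry's description of count data, a Poisson regression with log link can be fit by maximum likelihood. The sketch below is ours, not the paper's Stata workflow: a plain-NumPy Newton–Raphson/IRLS fit on data simulated with known coefficients (all names and values are illustrative):

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Maximum-likelihood fit of a Poisson regression with log link,
    E[y | x] = exp(x @ beta), via Newton-Raphson / IRLS."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())            # stable starting intercept
    for _ in range(n_iter):
        mu = np.exp(X @ beta)             # current fitted means
        grad = X.T @ (y - mu)             # score vector
        hess = X.T @ (X * mu[:, None])    # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Simulate visit counts with a known rate: log E[y] = 0.5 + 1.2 * x.
rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 1.2 * x))

print(poisson_irls(X, y))  # close to the true coefficients [0.5, 1.2]
```

Because the Poisson log-likelihood is concave in beta, the Newton iteration converges reliably from this starting point; overdispersed data would call for the negative binomial variant the entry mentions.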
  • Probabilistic Data Analysis with Probabilistic Programming
Probabilistic Data Analysis with Probabilistic Programming. Feras Saad ([email protected]), Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Vikash Mansinghka ([email protected]), Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Abstract: Probabilistic techniques are central to data analysis, but different approaches can be difficult to apply, combine, and compare. This paper introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques. Examples include hierarchical Bayesian models, multivariate kernel methods, discriminative machine learning, clustering algorithms, dimensionality reduction, and arbitrary probabilistic programs. We also demonstrate the integration of CGPMs into BayesDB, a probabilistic programming platform that can express data analysis tasks using a modeling language and a structured query language. The practical value is illustrated in two ways. First, CGPMs are used in an analysis that identifies satellite data records which probably violate Kepler's Third Law, by composing causal probabilistic programs with non-parametric Bayes in under 50 lines of probabilistic code. Second, for several representative data analysis tasks, we report on lines of code and accuracy measurements of various CGPMs, plus comparisons with standard baseline solutions from Python and MATLAB libraries. Keywords: probabilistic programming, non-parametric Bayesian inference, probabilistic databases, hybrid modeling, multivariate statistics. Acknowledgments: The authors thank Taylor Campbell, Gregory Marton, and Alexey Radul for engineering support, and Anthony Lu, Richard Tibbetts, and Marco Cusumano-Towner for helpful contributions, feedback and discussions.