Modular Meta-Learning with Shrinkage
Recommended publications
Shrinkage Priors for Bayesian Penalized Regression
Sara van Erp (Tilburg University), Daniel L. Oberski (Utrecht University), and Joris Mulder (Tilburg University). Published in the Journal of Mathematical Psychology; please cite the published version: Van Erp, S., Oberski, D. L., & Mulder, J. (2019). Shrinkage priors for Bayesian penalized regression. Journal of Mathematical Psychology, 89, 31–50. doi:10.1016/j.jmp.2018.12.004

Abstract: In linear regression problems with many predictors, penalized regression techniques are often used to guard against overfitting and to select variables relevant for predicting an outcome variable. Bayesian penalization, in which the prior distribution performs a function similar to that of the penalty term in classical penalization, has recently become increasingly popular. Specifically, the so-called shrinkage priors in Bayesian penalization aim to shrink small effects to zero while maintaining true large effects. Compared to classical penalization techniques, Bayesian penalization techniques perform similarly or sometimes even better, and they offer additional advantages such as readily available uncertainty estimates, automatic estimation of the penalty parameter, and more flexibility in terms of penalties that can be considered. However, many different shrinkage priors exist, and the available, often quite technical, literature primarily focuses on presenting one shrinkage prior, with comparisons against only one or two other shrinkage priors. This can make it difficult for researchers to navigate the many prior options and choose a shrinkage prior for the problem at hand. The aim of this paper is therefore to provide a comprehensive overview of the literature on Bayesian penalization.
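To make the penalty/prior correspondence concrete, here is a minimal sketch (an illustration, not code from the paper): ridge regression arises as the MAP estimate under an i.i.d. Gaussian shrinkage prior beta_j ~ N(0, tau^2), with the equivalent classical penalty lambda = sigma^2/tau^2. All data and parameter values below are made up.

```python
# A minimal sketch (illustration only, not from the paper): ridge regression
# as the MAP estimate under an i.i.d. Gaussian shrinkage prior
# beta_j ~ N(0, tau^2); the implied classical penalty is lambda = sigma^2/tau^2.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 20
X = rng.standard_normal((n, p))
beta_true = np.r_[np.ones(3), np.zeros(p - 3)]      # a few true effects, many nulls
sigma = 1.0
y = X @ beta_true + sigma * rng.standard_normal(n)

tau = 0.5                             # prior scale: smaller tau = stronger shrinkage
lam = sigma**2 / tau**2               # implied classical penalty parameter
beta_map = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

print("OLS coefficient norm:", np.linalg.norm(beta_ols))
print("MAP coefficient norm:", np.linalg.norm(beta_map))  # shrunk toward zero
```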
Advanced Topics in Learning and Vision
Ming-Hsuan Yang ([email protected]). Lecture 6 (draft).

Announcements:
• More course material is available on the course web page.
• Project web pages: everyone needs to set one up (format details will soon be available).
• Reading (due Nov 1): RVM application for 3D human pose estimation [1]; Viola and Jones, Adaboost-based real-time face detector [25]; Viola et al., Adaboost-based real-time pedestrian detector [26].

References: A. Agarwal and B. Triggs. 3D human pose from silhouettes by relevance vector regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 882–888, 2004. P. Viola and M. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004. P. Viola, M. Jones, and D. Snow. Detecting pedestrians using patterns of motion and appearance. In Proceedings of the Ninth IEEE International Conference on Computer Vision, volume 2, pages 734–741, 2003.

Overview:
• Linear classifiers: Fisher linear discriminant (FLD), linear support vector machine (SVM).
• Nonlinear classifiers: nonlinear support vector machine, SVM regression, relevance vector machine (RVM).
• Kernel methods: kernel principal component analysis, kernel discriminant analysis.
• Ensembles of classifiers: Adaboost, bagging, ensembles of homogeneous/heterogeneous classifiers.

Fisher Linear Discriminant:
• Based on Gaussian distributions: between-class and within-class scatter matrices.
• Supervised learning for multiple classes.
• A trick often used in ridge regression: S'_w = S_w + λI, which makes the within-class scatter invertible without resorting to PCA (see the sketch after this list).
• Fisherfaces vs. Eigenfaces (FLD vs. PCA).
• Further study: Aleix Martinez's papers analyzing FLD for object recognition [10][11]: A. M. Martinez and A. Kak, PCA versus LDA.
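A small two-class sketch (mine, not from the lecture notes) of the regularization trick above: replacing S_w with S_w + λI keeps the within-class scatter invertible even in high dimensions, so no PCA preprocessing is needed.

```python
# A small two-class sketch (not from the lecture notes) of the trick
# S_w' = S_w + lambda*I: the regularized within-class scatter is invertible
# even in high dimensions, avoiding a PCA preprocessing step.
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.standard_normal((30, 5)) + 1.0    # samples from class 1
X2 = rng.standard_normal((30, 5)) - 1.0    # samples from class 2

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)  # within-class scatter

lam = 1e-3                                 # ridge-style regularizer
w = np.linalg.solve(Sw + lam * np.eye(5), m1 - m2)      # FLD projection direction
print("projected class means:", w @ m1, w @ m2)
```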
Shrinkage Estimation of Rate Statistics Arxiv:1810.07654V1 [Stat.AP] 17 Oct
Published in CS-BIGS 7(1):14–25, http://www.csbigs.fr. Einar Holsbø, Department of Computer Science, UiT – The Arctic University of Norway; Vittorio Perduca, Laboratory of Applied Mathematics MAP5, Université Paris Descartes.

This paper presents a simple shrinkage estimator of rates based on Bayesian methods, with crime rates as a motivating example. The estimator shrinks each town's observed crime rate toward the country-wide average crime rate according to town size. Realistic simulations confirm that the proposed estimator outperforms the maximum likelihood estimator in terms of global risk; it also has better coverage properties.

Keywords: official statistics, crime rates, inference, Bayes, shrinkage, James–Stein estimator, Monte Carlo simulations.

1. Introduction
1.1. Two counterintuitive random phenomena
It is a classic result in statistics that the smaller the sample, the more variable the sample mean. The result is due to Abraham de Moivre, and it tells us that the standard deviation of the mean is s_x̄ = s/√n, where n is the sample size and s is the standard deviation of the random variable of interest. It turned out that most of the best schools, according to a variety of performance measures, were small. As it turns out, there is nothing special about small schools except that they are small: their over-representation among the best schools is a consequence of their more variable performance, which is counterbalanced by their over-representation among the worst schools. The observed superiority of small schools was simply a statistical fluke.
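A hedged sketch of this kind of estimator (the paper's exact prior and tuning differ): shrink each town's raw rate toward the country-wide rate, with small towns shrunk more, via a beta-binomial posterior mean. The prior "sample size" below is a made-up constant.

```python
# A hedged sketch of a rate-shrinkage estimator in the spirit of the paper
# (the paper's exact prior and tuning differ): shrink each town's raw rate
# toward the country-wide rate via a beta-binomial posterior mean; small
# towns are shrunk more. The prior "sample size" is a made-up constant.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 0.02
n_town = rng.integers(100, 100_000, size=500)        # simulated town sizes
crimes = rng.binomial(n_town, true_rate)             # observed crime counts

overall = crimes.sum() / n_town.sum()                # country-wide average rate
strength = 1_000                                     # hypothetical prior weight
a, b = overall * strength, (1 - overall) * strength  # Beta(a, b) prior

mle = crimes / n_town                                # raw per-town rates
shrunk = (crimes + a) / (n_town + a + b)             # posterior-mean rates

print("MLE    global risk:", np.mean((mle - true_rate) ** 2))
print("shrunk global risk:", np.mean((shrunk - true_rate) ** 2))  # typically smaller
```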
Linear Methods for Regression and Shrinkage Methods
Reference: The Elements of Statistical Learning, by T. Hastie, R. Tibshirani, and J. Friedman, Springer.

Linear regression models and least squares. Input vectors X = (X_1, ..., X_p), where each X_j is an attribute / feature / predictor (independent variable). The linear regression model is f(X) = β_0 + Σ_{j=1}^p X_j β_j; the output Y is called the response (dependent variable), and the β_j's are unknown parameters (coefficients).

Given a set of training data (x_1, y_1), ..., (x_N, y_N), where each x_i is a vector of attribute values and each y_i is a class attribute value / label, we wish to estimate the parameters. One common approach is the method of least squares: pick the coefficients β to minimize the residual sum of squares RSS(β) = Σ_{i=1}^N (y_i − f(x_i))². This criterion is reasonable if the training observations represent independent draws; even if the x_i's were not drawn randomly, the criterion is still valid if the y_i's are conditionally independent given the inputs x_i. It makes no assumption about the validity of the model and simply finds the best linear fit to the data.

Finding the minimizer: denote by X the N × (p+1) matrix with each row an input vector (with a 1 in the first position), and let y be the N-vector of outputs in the training set. The RSS is a quadratic function of the parameters, RSS(β) = (y − Xβ)ᵀ(y − Xβ). Setting the first derivative to zero, Xᵀ(y − Xβ) = 0, we obtain the unique solution β̂ = (XᵀX)⁻¹Xᵀy.
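A minimal sketch of the derivation above on synthetic data: solve the normal equations Xᵀ(y − Xβ) = 0 for β̂ = (XᵀX)⁻¹Xᵀy.

```python
# A minimal sketch of the least-squares derivation above: solve the normal
# equations X^T (y - X beta) = 0 for beta_hat = (X^T X)^{-1} X^T y.
import numpy as np

rng = np.random.default_rng(3)
N, p = 100, 3
X = np.hstack([np.ones((N, 1)), rng.standard_normal((N, p))])  # 1 in first position
beta = np.array([2.0, 1.0, -1.0, 0.5])
y = X @ beta + 0.1 * rng.standard_normal(N)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # unique least-squares solution
rss = np.sum((y - X @ beta_hat) ** 2)          # residual sum of squares at the optimum
print(beta_hat, rss)
```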
Linear, Ridge Regression, and Principal Component Analysis
Jia Li, Department of Statistics, The Pennsylvania State University. Email: [email protected], http://www.stat.psu.edu/~jiali

Introduction to regression. Input vector: X = (X_1, X_2, ..., X_p); the output Y is real-valued. We predict Y from X by f(X) so that the expected loss E(L(Y, f(X))) is minimized. For the square loss L(Y, f(X)) = (Y − f(X))², the optimal predictor is f*(X) = argmin_{f(X)} E(Y − f(X))² = E(Y | X); the function E(Y | X) is the regression function.

Example: the number of active physicians in a Standard Metropolitan Statistical Area (SMSA), denoted by Y, is expected to be related to total population (X_1, measured in thousands), land area (X_2, measured in square miles), and total personal income (X_3, measured in millions of dollars). Data are collected for 141 SMSAs, as shown in the following table:

  i  :     1      2      3   ...    139    140    141
  X1 :  9387   7031   7017   ...    233    232    231
  X2 :  1348   4069   3719   ...   1011    813    654
  X3 : 72100  52737  54542   ...   1337   1589   1148
  Y  : 25627  15389  13326   ...    264    371    140

Goal: predict Y from X_1, X_2, and X_3.

Linear methods: the linear regression model is f(X) = β_0 + Σ_{j=1}^p X_j β_j.
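An illustrative sketch on synthetic stand-in data (not the actual SMSA data set): fit the linear model by ridge regression, which stabilizes the fit when predictors such as population and income are highly correlated. The penalty value is arbitrary.

```python
# An illustrative sketch on synthetic stand-in data (not the SMSA data set):
# fit f(X) = beta_0 + sum_j X_j beta_j by ridge regression, which stabilizes
# the fit when predictors such as population and income are highly correlated.
import numpy as np

rng = np.random.default_rng(4)
n = 141
pop = rng.lognormal(6, 1, n)                       # stand-in for X1 (population)
area = rng.lognormal(7, 0.8, n)                    # stand-in for X2 (land area)
income = 7.5 * pop * rng.lognormal(0, 0.1, n)      # X3, correlated with population
X = np.column_stack([pop, area, income])
y = 2.5 * pop + 0.01 * area + 0.05 * income + rng.normal(0, 50, n)

Xs = (X - X.mean(0)) / X.std(0)                    # standardize before penalizing
A = np.hstack([np.ones((n, 1)), Xs])
I = np.eye(4); I[0, 0] = 0.0                       # do not penalize the intercept
lam = 10.0                                         # arbitrary penalty value
beta_ridge = np.linalg.solve(A.T @ A + lam * I, A.T @ y)
print(beta_ridge)
```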
Kernel Mean Shrinkage Estimators
Journal of Machine Learning Research 17 (2016) 1–41. Submitted 5/14; revised 9/15; published 4/16.

Krikamol Muandet ([email protected]), Empirical Inference Department, Max Planck Institute for Intelligent Systems, Spemannstraße 38, 72076 Tübingen, Germany; Bharath Sriperumbudur ([email protected]), Department of Statistics, Pennsylvania State University, University Park, PA 16802, USA; Kenji Fukumizu ([email protected]), The Institute of Statistical Mathematics, 10-3 Midoricho, Tachikawa, Tokyo 190-8562, Japan; Arthur Gretton ([email protected]), Gatsby Computational Neuroscience Unit, CSML, University College London, Alexandra House, 17 Queen Square, London WC1N 3AR, United Kingdom; Bernhard Schölkopf ([email protected]), Empirical Inference Department, Max Planck Institute for Intelligent Systems, Spemannstraße 38, 72076 Tübingen, Germany. Editor: Ingo Steinwart.

Abstract: A mean function in a reproducing kernel Hilbert space (RKHS), or a kernel mean, is central to kernel methods in that it is used by many classical algorithms such as kernel principal component analysis, and it also forms the core inference step of modern kernel methods that rely on embedding probability distributions in RKHSs. Given a finite sample, an empirical average has commonly been used as a standard estimator of the true kernel mean. Despite the widespread use of this estimator, we show that it can be improved thanks to the well-known Stein phenomenon. We propose a new family of estimators called kernel mean shrinkage estimators (KMSEs), which benefit from both theoretical justification and good empirical performance. The results demonstrate that the proposed estimators outperform the standard one, especially in a "large d, small n" paradigm.
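A rough sketch of the shrinkage idea (a simplified construction, not one of the paper's actual KMSE estimators): scale the empirical kernel mean toward zero, with the weight set from an assumed plug-in estimate of the estimator's sampling variability.

```python
# A rough sketch (simplified, not one of the paper's actual KMSE estimators):
# scale the empirical kernel mean mu_hat(.) = (1/n) sum_i k(x_i, .) toward
# zero; the shrinkage weight uses an assumed plug-in estimate of E||mu_hat - mu||^2.
import numpy as np

def rbf(a, b, gamma=0.5):
    d = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d)

rng = np.random.default_rng(5)
n, d = 20, 10                          # a "large d, small n" flavored setting
X = rng.standard_normal((n, d))

K = rbf(X, X)
mu_norm2 = K.mean()                    # ||mu_hat||^2 in the RKHS
var_term = (np.trace(K) / n - K.mean()) / n   # plug-in estimate of E||mu_hat - mu||^2
alpha = var_term / (var_term + mu_norm2)      # shrink more when the estimate is noisy
weights = (1 - alpha) * np.ones(n) / n        # mu_check(.) = sum_i weights_i k(x_i, .)
print("shrinkage factor 1 - alpha:", 1 - alpha)
```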
Generalized Shrinkage Methods for Forecasting Using Many Predictors
This revision: February 2011. James H. Stock, Department of Economics, Harvard University, and the National Bureau of Economic Research; Mark W. Watson, Woodrow Wilson School and Department of Economics, Princeton University, and the National Bureau of Economic Research.

Author's footnote: James H. Stock is the Harold Hitchings Burbank Professor of Economics, Harvard University (e-mail: [email protected]). Mark W. Watson is the Howard Harrison and Gabrielle Snyder Beck Professor of Economics and Public Affairs, Princeton University ([email protected]). The authors thank Jean Boivin, Domenico Giannone, Lutz Kilian, Serena Ng, Lucrezia Reichlin, Mark Steel, and Jonathan Wright for helpful discussions, Anna Mikusheva for research assistance, and the referees for helpful suggestions. An earlier version of the theoretical results in this paper was circulated under the title "An Empirical Comparison of Methods for Forecasting Using Many Predictors." Replication files for the results in this paper can be downloaded from http://www.princeton.edu/~mwatson. This research was funded in part by NSF grants SBR-0214131 and SBR-0617811.

Abstract: This paper provides a simple shrinkage representation that describes the operational characteristics of various forecasting methods designed for a large number of orthogonal predictors (such as principal components). These methods include pretest methods, Bayesian model averaging, empirical Bayes, and bagging. We compare forecasts from these methods empirically to dynamic factor model (DFM) forecasts using a U.S. macroeconomic data set with 143 quarterly variables spanning 1960–2008.
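A rough sketch of the shrinkage representation, under a simplified reading of the abstract: regress the target on orthogonal predictors, then downweight each OLS coefficient by a function ψ of its t-statistic. Here ψ is a pretest rule; the data and the 1.96 cutoff are made up.

```python
# A rough sketch of the shrinkage representation under a simplified reading:
# regress the target on orthogonal predictors, then downweight each OLS
# coefficient by a function psi of its t-statistic (here: a pretest rule).
import numpy as np

rng = np.random.default_rng(6)
T, k = 200, 40
F = np.linalg.qr(rng.standard_normal((T, k)))[0] * np.sqrt(T)  # orthogonal predictors
y = F[:, :5] @ np.array([1.0, 0.8, 0.6, 0.4, 0.2]) + rng.standard_normal(T)

delta = F.T @ y / T                     # OLS coefficients (since F'F = T*I)
sigma2 = np.var(y - F @ delta)          # residual variance
t = delta / np.sqrt(sigma2 / T)         # t-statistics of the coefficients
psi = (np.abs(t) > 1.96).astype(float)  # pretest shrinkage function psi(t)
shrunk = psi * delta                    # coefficients used for forecasting
print("components kept:", int(psi.sum()))
```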
Dropout-Based Automatic Relevance Determination
Dmitry Molchanov (Skolkovo Institute of Science and Technology), Arseniy Ashuha (Moscow Institute of Physics and Technology), and Dmitry Vetrov (National Research University Higher School of Economics, and Yandex), Moscow, Russia. [email protected], [email protected], [email protected]

1. Introduction. Over the past several years, several works treating dropout as a Bayesian model have appeared [1, 3, 4]. The authors of [1] proposed a method that represents dropout as a special case of a Bayesian model and allows tuning individual dropout rates for each weight. Another direction of research is the exploration of ways to train neural networks with sparse weight matrices [6, 9] and ways to prune excessive neurons [8]. One possible approach is so-called Automatic Relevance Determination (ARD); a typical example of such models is the Relevance Vector Machine [4]. However, these models are usually relatively hard to train. In this work, we introduce a new dropout-based way to obtain the ARD effect that can be applied to complex models via the stochastic variational inference framework.

2. Theory. Variational dropout [1] is a recent technique that generalizes Gaussian dropout [7]. It allows setting individual dropout rates for each weight/neuron/layer and tuning them with a simple gradient-descent-based method. As in most modern Bayesian models, the goal is to obtain a variational approximation to the posterior distribution via optimization of the variational lower bound (1):

    L(q) = E_{q(w)}[log p(T | X, w)] − D_KL(q(w) ‖ p_prior(w)) → max_q    (1)

Put simply, the main idea of variational dropout is to search for the posterior approximation in a specific family of distributions: q(w_i) = N(θ_i, α_i θ_i²).
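A minimal sketch of sampling from the variational posterior above using the reparameterization w_i = θ_i(1 + √α_i ε_i), ε_i ~ N(0, 1). The parameter values are made up and no ELBO optimization is shown; a large α_i (high dropout rate) effectively prunes weight i, which is the ARD effect.

```python
# A minimal sketch of sampling from q(w_i) = N(theta_i, alpha_i * theta_i^2)
# via the reparameterization w_i = theta_i * (1 + sqrt(alpha_i) * eps_i).
# Parameter values are made up; no ELBO optimization is shown.
import numpy as np

rng = np.random.default_rng(7)

def sample_weights(theta, log_alpha):
    alpha = np.exp(log_alpha)
    eps = rng.standard_normal(theta.shape)
    return theta * (1.0 + np.sqrt(alpha) * eps)   # w ~ N(theta, alpha * theta^2)

theta = np.array([1.5, -0.7, 0.02])       # variational means
log_alpha = np.array([-3.0, -3.0, 4.0])   # last weight: alpha >> 1, effectively pruned
for _ in range(3):
    print(sample_weights(theta, log_alpha))
```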
Cross-Validation, Shrinkage and Variable Selection in Linear Regression Revisited
Open Journal of Statistics, 2013, 3, 79–102, http://dx.doi.org/10.4236/ojs.2013.32011. Published online April 2013 (http://www.scirp.org/journal/ojs). Hans C. van Houwelingen, Department of Medical Statistics and Bioinformatics, Leiden University Medical Center, Leiden, The Netherlands; Willi Sauerbrei, Institut fuer Medizinische Biometrie und Medizinische Informatik, Universitaetsklinikum Freiburg, Freiburg, Germany. Email: [email protected]. Received December 8, 2012; revised January 10, 2013; accepted January 26, 2013. Copyright © 2013 Hans C. van Houwelingen, Willi Sauerbrei. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract: In deriving a regression model, analysts often have to use variable selection, despite the problems introduced by data-dependent model building. Resampling approaches have been proposed to handle some of the critical issues. In order to assess and compare several strategies, we conduct a simulation study with 15 predictors and a complex correlation structure in the linear regression model. Using sample sizes of 100 and 400 and estimates of the residual variance corresponding to R² of 0.50 and 0.71, we consider four scenarios with varying amounts of information. We also consider two examples with 24 and 13 predictors, respectively. We discuss the value of cross-validation, shrinkage, and backward elimination (BE) with varying significance levels. We assess whether two-step approaches using global or parameterwise shrinkage (PWSF) can improve selected models and compare the results to models derived with the LASSO procedure.
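A hedged sketch of global cross-validation shrinkage in the spirit of the paper (the exact procedure there differs): fit OLS on each training fold, regress the held-out outcomes on the held-out linear predictor, and use the calibration slope c, typically below 1, as the global shrinkage factor.

```python
# A hedged sketch of global cross-validation shrinkage (the paper's exact
# procedure differs): fit OLS on each training fold, regress held-out
# outcomes on the held-out linear predictor, and take the calibration slope
# c (typically < 1) as the global shrinkage factor.
import numpy as np

rng = np.random.default_rng(8)
n, p = 100, 15
X = rng.standard_normal((n, p))
beta = np.r_[0.5 * np.ones(4), np.zeros(p - 4)]
y = X @ beta + rng.standard_normal(n)

lp, yy = [], []
for test in np.array_split(rng.permutation(n), 5):      # 5-fold cross-validation
    train = np.setdiff1d(np.arange(n), test)
    b = np.linalg.lstsq(X[train], y[train], rcond=None)[0]
    lp.append(X[test] @ b)
    yy.append(y[test])
lp, yy = np.concatenate(lp), np.concatenate(yy)

c = (lp @ yy) / (lp @ lp)          # calibration slope = global shrinkage factor
print("shrinkage factor c:", c)    # final model: shrink full-data coefficients by c
```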
Healing the Relevance Vector Machine Through Augmentation
Carl Edward Rasmussen ([email protected]), Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany; Joaquin Quiñonero-Candela ([email protected]), Friedrich Miescher Laboratory, Max Planck Society, 72076 Tübingen, Germany.

Abstract: The Relevance Vector Machine (RVM) is a sparse approximate Bayesian kernel method. It provides full predictive distributions for test cases. However, the predictive uncertainties have the unintuitive property that they get smaller the further you move away from the training cases. We give a thorough analysis. Inspired by the analogy to non-degenerate Gaussian Processes, we suggest augmentation to solve the problem. The purpose of the resulting model, RVM*, is primarily to corroborate the theoretical and experimental analysis. Although RVM* could be used in practical applications, it is no longer a truly sparse model. Experiments show that ...

From the introduction: ... minimizing KL divergences between the approximated and true posterior, by Smola and Schölkopf (2000) and Smola and Bartlett (2001) by making low-rank approximations to the posterior. In (Gibbs & MacKay, 1997) and in (Williams & Seeger, 2001) sparseness arises from matrix approximations, and in (Tresp, 2000) from neglecting correlations. The use of subsets of regressors has also been suggested by Wahba et al. (1999). The Relevance Vector Machine (RVM) introduced by Tipping (2001) produces sparse solutions using an improper hierarchical prior and optimizing over hyperparameters. The RVM is exactly equivalent to a Gaussian Process, where the RVM hyperparameters are parameters of the GP covariance function (more on this in the discussion section).
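A small numerical sketch (not the paper's code) of the pathology described: for a degenerate model with RBF basis functions centred on the training inputs, the predictive variance φ(x)ᵀA⁻¹φ(x) decays to zero far from the data, where a non-degenerate GP would instead revert to its prior variance. The posterior weight covariance A⁻¹ here is a stand-in identity matrix.

```python
# A small numerical sketch (not the paper's code): for a degenerate model with
# RBF basis functions centred on the training inputs, the predictive variance
# phi(x)^T A^{-1} phi(x) decays to zero far from the data, where a
# non-degenerate GP would revert to its prior variance. A^{-1} is a stand-in.
import numpy as np

X_train = np.array([-1.0, 0.0, 1.0])       # training input locations

def phi(x0):
    # RBF features centred at the training inputs
    return np.exp(-0.5 * (x0 - X_train) ** 2)

A_inv = np.eye(3)                          # assumed posterior covariance of weights
for x0 in [0.0, 3.0, 10.0]:
    f = phi(x0)
    print(x0, f @ A_inv @ f)               # shrinks toward 0 as x0 leaves the data
```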
Sparse Bayesian Classification with the Relevance Vector Machine
In a pattern recognition setting, the Relevance Vector Machine (RVM) [3] can be viewed as a simple logistic regression model with a Bayesian Automatic Relevance Determination (ARD) prior [1] over the weights associated with each feature, in order to achieve a parsimonious model. Let A = {A, C, G, T} represent an alphabet of symbols for the bases adenine, cytosine, guanine, and thymine. The RVM constructs a decision rule from a set of labelled training data, D = {(x_i, t_i)}_{i=1}^ℓ, x_i ∈ A^{n_i}, t_i ∈ {0, 1}, where the input patterns x_i consist of strings drawn from A of varying lengths, describing the upstream promoter regions of a set of co-regulated genes. The target patterns t_i indicate whether the corresponding gene is up-regulated (class C_1, t_i = 1) or down-regulated (class C_2, t_i = 0) under a given treatment. The RVM constructs a logistic regression model based on a set of sequence features derived from the input patterns, i.e.

    p(C_1 | x) ≈ σ{y(x; w)},  where  y(x; w) = Σ_{i=1}^N w_i φ_i(x) + w_0,    (1)

and σ{y} = (1 + exp{−y})⁻¹ is the logistic inverse link function. In this study a feature, φ_i(x), represents the number of times an arbitrary substring, s_i ∈ A^d, occurs in a promoter sequence x. A sufficiently large set of features is used such that it is reasonable to expect that some of these features will represent oligonucleotides forming a relevant promoter protein binding site, and so provide discriminatory information for the pattern recognition task at hand.
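A minimal sketch of the feature map and logistic link in Eq. (1), using all 2-mers over A = {A, C, G, T} as the substring set; the weights are random placeholders rather than values fitted by the RVM's ARD procedure.

```python
# A minimal sketch of the feature map and logistic link in Eq. (1), with all
# 2-mers over {A, C, G, T} as the substring set. The weights are random
# placeholders, not values fitted by the RVM's ARD procedure.
import numpy as np
from itertools import product

ALPHABET = "ACGT"
SUBSTRINGS = ["".join(s) for s in product(ALPHABET, repeat=2)]   # all 16 2-mers

def features(x):
    # phi_i(x): number of (possibly overlapping) occurrences of s_i in x
    return np.array([sum(x[j:j + 2] == s for j in range(len(x) - 1))
                     for s in SUBSTRINGS], dtype=float)

def p_up_regulated(x, w, w0):
    y = w @ features(x) + w0               # y(x; w) = sum_i w_i phi_i(x) + w_0
    return 1.0 / (1.0 + np.exp(-y))        # sigma(y), the logistic inverse link

rng = np.random.default_rng(9)
w = rng.normal(0.0, 0.1, len(SUBSTRINGS))  # hypothetical weights
print(p_up_regulated("ACGTACGGTA", w, 0.0))
```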
Mining Big Data Using Parsimonious Factor, Machine Learning, Variable Selection and Shrinkage Methods
Hyun Hak Kim (Bank of Korea) and Norman R. Swanson (Rutgers University), February 2016.

Abstract: A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using "big data" (see Bai and Ng (2008), Dufour and Stevanovic (2010), Forni et al. (2000, 2005), Kim and Swanson (2014a), Stock and Watson (2002b, 2006, 2012), and the references cited therein). We add to this literature by analyzing whether "big data" are useful for modelling low-frequency macroeconomic variables such as unemployment, inflation, and GDP. In particular, we analyze the predictive benefits associated with the use of principal component analysis (PCA), independent component analysis (ICA), and sparse principal component analysis (SPCA). We also evaluate machine learning, variable selection, and shrinkage methods, including bagging, boosting, ridge regression, least angle regression, the elastic net, and the non-negative garotte. Our approach is to carry out a forecasting "horse race" using prediction models constructed with a variety of model specification approaches, factor estimation methods, and data windowing methods, in the context of predicting 11 macroeconomic variables relevant for monetary policy assessment. In many instances, we find that various of our benchmark models, including autoregressive (AR) models, AR models with exogenous variables, and (Bayesian) model averaging, do not dominate specifications based on factor-type dimension reduction combined with various machine learning, variable selection, and shrinkage methods (called "combination" models). We find that forecast combination methods are mean square forecast error (MSFE) "best" for only 3 of 11 variables when the forecast horizon h = 1, and for 4 variables when h = 3 or 12.
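A rough sketch of one "combination" model of the kind compared in the paper (a simplified version, on simulated data): extract principal-component factors from a large predictor panel, then fit the h-step-ahead forecast by ridge regression on the estimated factors. Dimensions loosely echo the paper's 143-variable data set; the penalty value is arbitrary.

```python
# A rough sketch of one "combination" model of the kind compared in the paper
# (simplified, simulated data): PCA factor extraction from a large predictor
# panel, followed by a ridge-regression forecast on the estimated factors.
import numpy as np

rng = np.random.default_rng(10)
T, N, r, h = 200, 143, 5, 1
common = rng.standard_normal((T, r))                      # latent factors
panel = common @ rng.standard_normal((r, N)) + rng.standard_normal((T, N))
target = common[:, 0] + 0.5 * rng.standard_normal(T)      # variable to forecast

Z = (panel - panel.mean(0)) / panel.std(0)                # standardized panel
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
F = U[:, :r] * S[:r]                                      # PCA factor estimates

X, y = F[:-h], target[h:]                                 # align for h-step forecast
lam = 1.0                                                 # arbitrary ridge penalty
b = np.linalg.solve(X.T @ X + lam * np.eye(r), X.T @ y)
print("forecast for period T + h:", F[-1] @ b)
```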