Model Selection Techniques — An Overview. Jie Ding, Vahid Tarokh, and Yuhong Yang

Total Pages: 16

File Type: pdf, Size: 1020 KB

Model Selection Techniques — An Overview
Jie Ding, Vahid Tarokh, and Yuhong Yang
arXiv:1810.09583v1 [stat.ML] 22 Oct 2018

Abstract—In the era of "big data", analysts usually explore various statistical models or machine learning methods for observed data in order to facilitate scientific discoveries or gain predictive power. Whatever data and fitting procedures are employed, a crucial step is to select the most appropriate model or method from a set of candidates. Model selection is a key ingredient in data analysis for reliable and reproducible statistical inference or prediction, and thus central to scientific studies in fields such as ecology, economics, engineering, finance, political science, biology, and epidemiology. There has been a long history of model selection techniques that arise from research in statistics, information theory, and signal processing. A considerable number of methods have been proposed, following different philosophies and exhibiting varying performances. The purpose of this article is to provide a comprehensive overview of them, in terms of their motivation, large sample performance, and applicability. We provide integrated and practically relevant discussions on theoretical properties of state-of-the-art model selection approaches. We also share our thoughts on some controversial views on the practice of model selection.

Index Terms—Asymptotic efficiency; Consistency; Cross-validation; High dimension; Information criteria; Model selection; Optimal prediction; Variable selection.

I. WHY MODEL SELECTION

Vast developments in hardware storage, precision instrument manufacture, economic globalization, etc. have generated huge volumes of data that can be analyzed to extract useful information. Typical statistical inference or machine learning procedures learn from and make predictions on data by fitting parametric or nonparametric models (in a broad sense). However, there exists no model that is universally suitable for any data and goal. An improper choice of model or method can lead to purely noisy "discoveries", severely misleading conclusions, or disappointing predictive performances. Therefore, a crucial step in a typical data analysis is to consider a set of candidate models (referred to as the model class), and then select the most appropriate one. In other words, model selection is the task of selecting a statistical model from a model class, given a set of data. For example, we may be interested in the selection of
• variables for linear regression,
• basis terms such as polynomials, splines, or wavelets in function estimation,
• order of an autoregressive process,
• number of components in a mixture model,
• most appropriate parametric family among a number of alternatives,
• number of change points in time series models,
• number of neurons and layers in neural networks,
• best choice among logistic regression, support vector machine, and neural networks,
• best machine learning techniques for solving real-data challenges on an online competition platform.

There have been many overview papers on model selection scattered across the communities of signal processing [1], statistics [2], machine learning [3], epidemiology [4], chemometrics [5], and ecology and evolution [6]. Despite the abundant literature on model selection, existing overviews usually focus on derivations, descriptions, or applications of particular model selection principles. In this paper, we aim to provide an integrated understanding of the properties and practical performances of various approaches, by reviewing their theoretical and practical advantages, disadvantages, and relations. Our overview is characterized by the following highlighted perspectives, which we believe will bring deeper insights for the research community.

First of all, we provide a technical overview of foundational model selection approaches and their relations, by centering around two statistical goals, namely prediction and inference (see Section II). We then introduce rigorous developments in the understanding of two fundamentally representative model selection criteria (see Section III). We also review research developments in the problem of high-dimensional variable selection, including penalized regression and stepwise variable selection, which have been widely used in practice (see Section IV). Moreover, we review recent developments in modeling procedure selection, which, differently from model selection in the narrow sense of choosing among parametric models, aims to select the better statistical or machine learning procedure (see Section V). Finally, we address some common misconceptions and controversies about model selection techniques (see Section VI), and provide a few general recommendations on the application of model selection (in Section VII). For derivations and implementations of some popular information criteria, we refer the readers to monographs and review papers such as [1], [7].

This research was funded in part by the Defense Advanced Research Projects Agency (DARPA) under grant number W911NF-18-1-0134. J. Ding and Y. Yang are with the School of Statistics, University of Minnesota, Minneapolis, Minnesota 55455, United States. V. Tarokh is with the Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina 27708, United States. Digital Object Identifier 10.1109/MSP.2018.2867638. © 2018 IEEE.

A. Some Basic Concepts

Notation: We use $M_m = \{p_{\theta_m} : \theta_m \in H_m\}$ to denote a model (in the formal probabilistic sense), which is a set of probability density functions to describe the data $z_1, \ldots, z_n$. Here, $H_m$ is the parameter space associated with $M_m$. A model class, $\{M_m\}_{m \in \mathcal{M}}$, is a collection of models indexed by $m \in \mathcal{M}$. The number of models (or the cardinality of $\mathcal{M}$) can be fixed, or depend on the sample size $n$. For each model $M_m$, we denote by $d_m$ the dimension of the parameter in model $M_m$. Its log-likelihood function is written as $\theta_m \mapsto \ell_{n,m}(\theta_m) = \log p_{\theta_m}(z_1, \ldots, z_n)$, and the maximized log-likelihood value is

$$\ell_{n,m}(\hat{\theta}_m) \quad \text{with} \quad \hat{\theta}_m = \arg\max_{\theta_m \in H_m} p_{\theta_m}(z_1, \ldots, z_n) \qquad (1)$$

being the maximum likelihood estimator (MLE) under model $M_m$. We will write $\ell_{n,m}(\hat{\theta}_m)$ as $\hat{\ell}_{n,m}$ for simplicity. We use $p_*$ and $E_*$ to denote the true data-generating distribution and the expectation with respect to the true data-generating distribution, respectively. In the parametric framework, there exists some $m \in \mathcal{M}$ and some $\theta_* \in H_m$ such that $p_*$ is exactly $p_{\theta_*}$. In the nonparametric framework, $p_*$ is excluded from the model class. We sometimes call a model class $\{M_m\}_{m \in \mathcal{M}}$ well-specified (resp. mis-specified) if the data-generating process is in a parametric (resp. nonparametric) framework. We use $\to_p$ and $\to_d$ to denote convergence in probability and in distribution (under $p_*$), respectively. We use $\mathcal{N}(\mu, V)$ to denote a Gaussian distribution with mean $\mu$ and covariance $V$, $\chi^2_d$ to denote a chi-squared distribution with $d$ degrees of freedom, and $\|\cdot\|_2$ to denote the Euclidean norm. The word "variable" is often referred to as the "covariate" in a regression setting.

A typical data analysis can be thought of as consisting of two steps:

Step 1: For each candidate model $M_m = \{p_{\theta_m} : \theta_m \in H_m\}$, fit all the observed data to that model by estimating its parameter $\theta_m \in H_m$;

Step 2: Once we have a set of estimated candidate models $p_{\tilde{\theta}_m}$ ($m \in \mathcal{M}$), select the most appropriate one for either interpretation or prediction.

We note that not every data analysis and its associated model selection procedure formally relies on probability distributions. Examples of model-free methods are nearest neighbor learning, reinforcement learning, and expert learning. Before we proceed, it is helpful to first introduce the following two concepts.

The "model fitting": The fitting procedure (also called parameter estimation) given a certain candidate model $M_m$ is usually achieved by minimizing the following (cumulative) loss:

$$\tilde{\theta}_m = \arg\min_{\theta_m \in H_m} \sum_{t=1}^{n} s(p_{\theta_m}, z_t). \qquad (2)$$

In the objective (2), each $p_{\theta_m}$ represents a distribution for the data, and $s(\cdot, \cdot)$, referred to as the loss function (or scoring function), is used to evaluate the "goodness of fit" between a distribution and the observation. A commonly used loss function is the logarithmic loss

$$s(p, z_t) = -\log p(z_t), \qquad (3)$$

the negative logarithm of the distribution of $z_t$. Then, the objective (2) produces the MLE for a parametric model. For time series data, (3) is written as $-\log p(z_t \mid z_1, \ldots, z_{t-1})$, and the quadratic loss $s(p, z_t) = \{z_t - E_p(z_t \mid z_1, \ldots, z_{t-1})\}^2$ is often used, where the expectation is taken over the joint distribution $p$ of $z_1, \ldots, z_t$.

The "best model": Let $\hat{p}_m = p_{\tilde{\theta}_m}$ denote the estimated distribution under model $M_m$. The predictive performance can be assessed via the out-sample prediction loss, defined as

$$E_*(s(\hat{p}_m, Z)) = \int s(\hat{p}_m(z), z)\, p_*(z)\, dz, \qquad (4)$$

where $Z$ is independent of, and identically distributed as, the data used to obtain $\hat{p}_m$. Here, $Z$ does not have the subscript $t$ as it is the out-sample data used to evaluate the predictive performance. There can be a number of variations of this in terms of the prediction loss function [8] and time dependency. In view of this definition, the best model can be naturally defined as the candidate model with the smallest out-sample prediction loss, i.e.,

$$\hat{m}_0 = \arg\min_{m \in \mathcal{M}} E_*(s(\hat{p}_m, Z)).$$

In other words, $M_{\hat{m}_0}$ is the model whose predictive power is the best offered by the candidate models. We note that the "best" is in the scope of the available data, the class of models, and the loss function.

In a parametric framework, typically the true data-generating model, if not too complicated, is the best model.

[…] sample size. For the latter, however, the selected model may be simply the lucky winner among a few close competitors, yet the predictive performance can still be (nearly) the best possible. If so, the model selection is […]
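
To make Steps 1 and 2 concrete, here is a minimal sketch (not from the paper) that fits a few assumed candidate parametric families by maximum likelihood, i.e., by minimizing the cumulative log loss (3), and then approximates the out-sample prediction loss (4) by the average log loss on held-out data, since the expectation under $p_*$ is unavailable in practice:

```python
import numpy as np
from scipy import stats

# Sketch of Steps 1-2 under the logarithmic loss (3): each assumed candidate
# family is fit by maximum likelihood (the minimizer of the cumulative loss (2)),
# and the unknown out-sample loss (4) is approximated by the average log loss
# on a held-out portion of the data.
rng = np.random.default_rng(0)
z = rng.standard_t(df=3, size=400)              # "observed" data z_1, ..., z_n
z_train, z_test = z[:300], z[300:]

candidates = {                                  # a small, assumed model class
    "Gaussian":  stats.norm,
    "Laplace":   stats.laplace,
    "Student-t": stats.t,
}

loss = {}
for name, family in candidates.items():
    params = family.fit(z_train)                              # Step 1: MLE within the family
    loss[name] = -family.logpdf(z_test, *params).mean()       # estimated prediction loss

best = min(loss, key=loss.get)                                # Step 2: smallest loss wins
print(loss, "->", best)
```

Within this toy candidate set, the family with the smallest held-out loss plays the role of $M_{\hat{m}_0}$.
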
Recommended publications
  • A Stepwise Regression Method and Consistent Model Selection for High-Dimensional Sparse Linear Models
Submitted to the Annals of Statistics (arXiv: math.PR/0000000)

A Stepwise Regression Method and Consistent Model Selection for High-Dimensional Sparse Linear Models

By Ching-Kang Ing and Tze Leung Lai, Academia Sinica and Stanford University

We introduce a fast stepwise regression method, called the orthogonal greedy algorithm (OGA), that selects input variables to enter a p-dimensional linear regression model (with $p \gg n$, the sample size) sequentially so that the selected variable at each step minimizes the residual sum of squares. We derive the convergence rate of OGA as $m = m_n$ becomes infinite, and also develop a consistent model selection procedure along the OGA path so that the resultant regression estimate has the oracle property of being equivalent to least squares regression on an asymptotically minimal set of relevant regressors under a strong sparsity condition.

1. Introduction. Consider the linear regression model

$$y_t = \alpha + \sum_{j=1}^{p} \beta_j x_{tj} + \varepsilon_t, \qquad t = 1, 2, \ldots, n, \qquad (1.1)$$

with $p$ predictor variables $x_{t1}, x_{t2}, \ldots, x_{tp}$ that are uncorrelated with the mean-zero random disturbances $\varepsilon_t$. When $p$ is larger than $n$, there are computational and statistical difficulties in estimating the regression coefficients by standard regression methods. Major advances to resolve these difficulties have been made in the past decade with the introduction of $L_2$-boosting [2], [3], [14], LARS [11], and the Lasso [22], which has an extensive literature because much recent attention has focused on its underlying principle, namely $\ell_1$-penalized least squares. It has also been shown that consistent estimation of the regression function

$$y(x) = \alpha + \beta^\top x, \qquad \text{where } \beta = (\beta_1, \ldots, \beta_p)^\top,\ x = (x_1, \ldots, x_p)^\top, \qquad (1.2)$$

is still possible under a sparseness condition on the regression coefficients.
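
A minimal NumPy sketch of the OGA selection path described above (illustration only; the paper's consistent selection rule along the path is not reproduced here, and the toy settings below are assumed):

```python
import numpy as np

def oga(X, y, max_steps):
    """Orthogonal greedy algorithm sketch: at each step pick the predictor most
    correlated with the current residual, then recompute the residual by
    least-squares projection of y onto all predictors selected so far."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                      # center predictors
    yc = y - y.mean()                            # center response
    selected, residual = [], yc.copy()
    for _ in range(max_steps):
        scores = np.abs(Xc.T @ residual) / np.linalg.norm(Xc, axis=0)
        scores[selected] = -np.inf               # do not reselect
        j = int(np.argmax(scores))
        selected.append(j)
        beta, *_ = np.linalg.lstsq(Xc[:, selected], yc, rcond=None)
        residual = yc - Xc[:, selected] @ beta   # orthogonalization step
    return selected

# toy usage: sparse linear model with p > n
rng = np.random.default_rng(1)
n, p = 50, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.standard_normal(n)
print(oga(X, y, max_steps=5))                    # first few indices are typically 0, 1, 2
```
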
  • Model Selection for Optimal Prediction in Statistical Machine Learning
Model Selection for Optimal Prediction in Statistical Machine Learning

Ernest Fokoué

Introduction. At the core of all our modern-day advances in artificial intelligence is the emerging field of statistical machine learning (SML). From a very general perspective, SML can be thought of as a field of mathematical sciences that combines mathematics, probability, statistics, and computer science with several ideas from cognitive neuroscience and psychology to inspire the creation, invention, and discovery of abstract models that attempt to learn and extract patterns from the data. One could think of SML as a field of science dedicated to building models endowed with the ability to learn from the data in ways similar to the ways humans learn, with the ultimate goal of understanding and then mastering our complex world well enough to predict its unfolding as accurately as possible. One of the earliest applications of statistical machine learning centered around the now ubiquitous MNIST benchmark task, which consists of building statistical models (also known as learning machines) that automatically learn and accurately recognize handwritten digits from the United States Postal Service (USPS). A typical deployment of an […]

Theoretical Foundations. It is typical in statistical machine learning that a given problem will be solved in a wide variety of different ways.

Ernest Fokoué is a professor of statistics at Rochester Institute of Technology. His email address is [email protected]. Communicated by Notices Associate Editor Emilie Purvine. DOI: https://doi.org/10.1090/noti2014
  • Stepwise Versus Hierarchical Regression: Pros and Cons
Stepwise versus Hierarchical Regression: Pros and Cons

Mitzi Lewis, University of North Texas. Paper presented at the annual meeting of the Southwest Educational Research Association, February 7, 2007, San Antonio.

Introduction. Multiple regression is commonly used in social and behavioral data analysis (Fox, 1991; Huberty, 1989). In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested in explaining the most variability in the dependent variable with the fewest possible predictors, perhaps as part of a cost analysis. Two approaches to determining the quality of predictors are (1) stepwise regression and (2) hierarchical regression. This paper will explore the advantages and disadvantages of these methods and use a small SPSS dataset for illustration purposes.

Stepwise Regression. Stepwise methods are sometimes used in educational and psychological research to evaluate the order of importance of variables and to select useful subsets of variables (Huberty, 1989; Thompson, 1995). Stepwise regression involves developing a sequence of linear models that, according to Snyder (1991), can be viewed as a variation of the forward selection method, since predictor variables are entered one at a time; but true stepwise entry differs from forward entry in that at each step of a stepwise analysis the removal of each entered predictor is also considered: entered predictors are deleted in subsequent steps if they no longer contribute appreciable unique predictive power to the regression when considered in combination with newly entered predictors (Thompson, 1989).
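
As an illustration of the entry-and-removal logic described above, here is a sketch of bidirectional stepwise selection; it uses AIC as the entry and removal criterion rather than the p-value thresholds of classical stepwise routines, and all names and settings are ours:

```python
import numpy as np
import statsmodels.api as sm

def stepwise_aic(X, y, candidates):
    """Bidirectional stepwise sketch: try adding each unused predictor and
    dropping each included one; take the move that lowers AIC the most, and
    stop when no move improves it."""
    included = []

    def fit_aic(cols):
        design = sm.add_constant(X[:, cols]) if cols else np.ones((len(y), 1))
        return sm.OLS(y, design).fit().aic

    current = fit_aic(included)
    while True:
        moves = []
        for j in candidates:
            if j not in included:
                moves.append((fit_aic(included + [j]), included + [j]))   # entry moves
        for j in included:
            reduced = [k for k in included if k != j]
            moves.append((fit_aic(reduced), reduced))                     # removal moves
        best_aic, best_set = min(moves, key=lambda t: t[0])
        if best_aic >= current:          # no move improves AIC: stop
            return included
        current, included = best_aic, best_set

# toy usage on simulated data where only the first two predictors matter
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 10))
y = 1.5 * X[:, 0] - X[:, 1] + rng.standard_normal(80)
print(stepwise_aic(X, y, candidates=list(range(10))))    # typically [0, 1]
```
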
  • A Robust Hybrid of Lasso and Ridge Regression
A robust hybrid of lasso and ridge regression

Art B. Owen, Stanford University, October 2006

Abstract. Ridge regression and the lasso are regularized versions of least squares regression using $L_2$ and $L_1$ penalties, respectively, on the coefficient vector. To make these regressions more robust we may replace least squares with Huber's criterion, which is a hybrid of squared error (for relatively small errors) and absolute error (for relatively large ones). A reversed version of Huber's criterion can be used as a hybrid penalty function. Relatively small coefficients contribute their $L_1$ norm to this penalty while larger ones cause it to grow quadratically. This hybrid sets some coefficients to 0 (as the lasso does) while shrinking the larger coefficients the way ridge regression does. Both the Huber and reversed Huber penalty functions employ a scale parameter. We provide an objective function that is jointly convex in the regression coefficient vector and these two scale parameters.

1. Introduction. We consider here the regression problem of predicting $y \in \mathbb{R}$ based on $z \in \mathbb{R}^d$. The training data are pairs $(z_i, y_i)$ for $i = 1, \ldots, n$. We suppose that each vector of predictors $z_i$ gets turned into a feature vector $x_i \in \mathbb{R}^p$ via $x_i = \phi(z_i)$ for some fixed function $\phi$. The predictor for $y$ is linear in the features, taking the form $\mu + x^\top\beta$ where $\beta \in \mathbb{R}^p$. In ridge regression (Hoerl and Kennard, 1970) we minimize over $\beta$ a criterion of the form

$$\sum_{i=1}^{n} (y_i - \mu - x_i^\top\beta)^2 + \lambda \sum_{j=1}^{p} \beta_j^2, \qquad (1)$$

for a ridge parameter $\lambda \in [0, \infty]$.
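
A rough numerical sketch of the hybrid described in the abstract, combining Huber's loss with a reversed-Huber ("berhu") penalty; unlike the paper, the two scale parameters are held fixed here rather than optimized jointly, and all settings are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def huber(r, m):
    """Huber's criterion: quadratic for |r| <= m, linear beyond."""
    a = np.abs(r)
    return np.where(a <= m, r**2, 2 * m * a - m**2)

def berhu(b, m):
    """Reversed Huber penalty: L1 for small |b|, quadratic beyond."""
    a = np.abs(b)
    return np.where(a <= m, a, (a**2 + m**2) / (2 * m))

def hybrid_objective(params, X, y, lam, m_loss=1.0, m_pen=1.0):
    mu, beta = params[0], params[1:]
    r = y - mu - X @ beta
    return huber(r, m_loss).sum() + lam * berhu(beta, m_pen).sum()

# toy usage (scale parameters fixed; the paper instead makes them part of a
# jointly convex objective)
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
beta_true = np.r_[2.0, -1.5, np.zeros(8)]
y = X @ beta_true + rng.standard_normal(100)
res = minimize(hybrid_objective, x0=np.zeros(11), args=(X, y, 5.0), method="Powell")
print(np.round(res.x[1:], 2))      # small true coefficients are pushed to ~0
```
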
  • Stepwise Model Fitting and Statistical Inference: Turning Noise Into Signal Pollution
Stepwise Model Fitting and Statistical Inference: Turning Noise into Signal Pollution

Citation: Mundry, Roger and Charles Nunn. 2009. Stepwise model fitting and statistical inference: turning noise into signal pollution. American Naturalist 173(1): 119–123. doi:10.1086/593303. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:5344225

Roger Mundry (1) and Charles Nunn (1,2,3)
1: Max Planck Institute for Evolutionary Anthropology, Deutscher Platz 6, D-04103 Leipzig
2: Department of Integrative Biology, University of California, Berkeley, CA 94720-3140, USA
3: Department of Anthropology, Harvard University, Cambridge, MA 02138, USA
Roger Mundry: [email protected]; Charles L. Nunn: [email protected]
Keywords: stepwise modeling, statistical inference, multiple testing, error rate, multiple regression, GLM. Article type: note.

Abstract. Statistical inference based on stepwise model selection is applied regularly in ecological, evolutionary and behavioral research. In addition to fundamental shortcomings with regard to finding the 'best' model, stepwise procedures are known to suffer from a multiple testing problem, yet the method is still widely used.
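
A small simulation sketch (ours, not the paper's) of the multiple-testing problem the note describes: forward selection driven by per-step p-values, applied to a response that is pure noise, still reports "significant" predictors in a large share of replications:

```python
import numpy as np
from scipy import stats

def forward_select_pvalues(X, y, alpha=0.05):
    """Forward selection sketch: repeatedly add the predictor whose coefficient
    has the smallest t-test p-value, as long as it is below alpha."""
    n, p = X.shape
    included = []
    while True:
        best_p, best_j = 1.0, None
        for j in range(p):
            if j in included:
                continue
            design = np.column_stack([np.ones(n), X[:, included + [j]]])
            beta, *_ = np.linalg.lstsq(design, y, rcond=None)
            resid = y - design @ beta
            df = n - design.shape[1]
            sigma2 = resid @ resid / df
            cov = sigma2 * np.linalg.inv(design.T @ design)
            t = beta[-1] / np.sqrt(cov[-1, -1])          # test the newly added predictor
            pval = 2 * stats.t.sf(abs(t), df)
            if pval < best_p:
                best_p, best_j = pval, j
        if best_j is None or best_p >= alpha:
            return included
        included.append(best_j)

# pure-noise demonstration: y is unrelated to all 30 predictors, yet forward
# selection still "discovers" predictors in most replications
rng = np.random.default_rng(2)
hits = sum(bool(forward_select_pvalues(rng.standard_normal((40, 30)),
                                       rng.standard_normal(40)))
           for _ in range(200))
print(f"replications with at least one 'significant' predictor: {hits}/200")
```
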
  • Linear Regression: Goodness of Fit and Model Selection
Linear Regression: Goodness of Fit and Model Selection

Goodness of Fit
• Goodness of fit measures for linear regression are attempts to understand how well a model fits a given set of data.
• Models almost never describe the process that generated a dataset exactly; models approximate reality.
• However, even models that approximate reality can be used to draw useful inferences or to predict future observations. "All models are wrong, but some are useful" (George Box).
• We have seen how to check the modelling assumptions of linear regression: checking the linearity assumption, checking for outliers, checking the normality assumption, and checking that the distribution of the residuals does not depend on the predictors.
• These are essential qualitative checks of goodness of fit.

Sample Size
• When making visual checks of data for goodness of fit, it is important to consider sample size.
• From a multiple regression model with 2 predictors: on the left is a histogram of the residuals; on the right is a residual vs. predictor plot for each of the two predictors.
• The histogram doesn't look normal, but there are only 20 data points, so we should not expect a better visual fit; inferences from the linear model should be valid.

Outliers
• Often (particularly when a dataset is large), the majority of the residuals will satisfy the model-checking assumptions, while a small number of residuals will violate the normality assumption: they will be very big or very small.
• Outliers are often generated by a process distinct from those which we are primarily interested in.
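
A minimal sketch of the visual checks listed above, on simulated data with two predictors and n = 20 as in the slides' example (all settings assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

# Fit a two-predictor linear model, then inspect the residuals visually.
rng = np.random.default_rng(3)
n = 20
X = rng.standard_normal((n, 2))
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.standard_normal(n)

design = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
resid = y - design @ beta

fig, axes = plt.subplots(1, 3, figsize=(10, 3))
axes[0].hist(resid, bins=8)                    # normality check (rough with n = 20)
axes[0].set_title("residual histogram")
for k in (0, 1):
    axes[k + 1].scatter(X[:, k], resid)        # residuals should show no pattern
    axes[k + 1].set_title(f"residuals vs predictor {k + 1}")
plt.tight_layout()
plt.show()
```
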
  • Lasso Reference Manual Release 17
STATA LASSO REFERENCE MANUAL, RELEASE 17

A Stata Press Publication, StataCorp LLC, College Station, Texas

Copyright © 1985–2021 StataCorp LLC. All rights reserved. Version 17. Published by Stata Press, 4905 Lakeway Drive, College Station, Texas 77845. Typeset in TeX. ISBN-10: 1-59718-337-7. ISBN-13: 978-1-59718-337-6.

This manual is protected by copyright. All rights are reserved. No part of this manual may be reproduced, stored in a retrieval system, or transcribed, in any form or by any means (electronic, mechanical, photocopy, recording, or otherwise) without the prior written permission of StataCorp LLC unless permitted subject to the terms and conditions of a license granted to you by StataCorp LLC to use the software and documentation. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted by this document. StataCorp provides this manual "as is" without warranty of any kind, either expressed or implied, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. StataCorp may make improvements and/or changes in the product(s) and the program(s) described in this manual at any time and without notice. The software described in this manual is furnished under a license agreement or nondisclosure agreement. The software may be copied only in accordance with the terms of the agreement. It is against the law to copy the software onto DVD, CD, disk, diskette, tape, or any other medium for any purpose other than backup or archival purposes. The automobile dataset appearing on the accompanying media is Copyright © 1979 by Consumers Union of U.S., Inc., Yonkers, NY 10703-1057 and is reproduced by permission from CONSUMER REPORTS, April 1979.
  • Overfitting Can Be Harmless for Basis Pursuit, but Only to a Degree
Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree

Peizhong Ju (School of ECE, Purdue University, West Lafayette, IN 47906), Xiaojun Lin (School of ECE, Purdue University, West Lafayette, IN 47906), and Jia Liu (Department of ECE, The Ohio State University, Columbus, OH 43210)

Abstract. Recently, there have been significant interests in studying the so-called "double-descent" of the generalization error of linear regression models under the overparameterized and overfitting regime, with the hope that such analysis may provide the first step towards understanding why overparameterized deep neural networks (DNN) still generalize well. However, to date most of these studies focused on the min $\ell_2$-norm solution that overfits the data. In contrast, in this paper we study the overfitting solution that minimizes the $\ell_1$-norm, which is known as Basis Pursuit (BP) in the compressed sensing literature. Under a sparse true linear regression model with $p$ i.i.d. Gaussian features, we show that for a large range of $p$ up to a limit that grows exponentially with the number of samples $n$, with high probability the model error of BP is upper bounded by a value that decreases with $p$. To the best of our knowledge, this is the first analytical result in the literature establishing the double-descent of overfitting BP for finite $n$ and $p$. Further, our results reveal significant differences between the double-descent of BP and min $\ell_2$-norm solutions. Specifically, the double-descent upper-bound of BP is independent of the signal strength, and for high SNR and sparse models the descent-floor of BP can be much lower and wider than that of min $\ell_2$-norm solutions.
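
For concreteness, a sketch of Basis Pursuit as the minimum-$\ell_1$-norm interpolating solution, computed via the standard linear-programming reformulation (illustrative settings; this is not the paper's code):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(X, y):
    """Basis Pursuit sketch: min ||beta||_1 s.t. X beta = y, solved as the LP
    min 1'(u+v) s.t. X(u - v) = y, u, v >= 0, with beta = u - v."""
    n, p = X.shape
    c = np.ones(2 * p)
    A_eq = np.hstack([X, -X])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * p), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

# overparameterized toy example: p >> n, sparse truth, mild noise (BP still interpolates)
rng = np.random.default_rng(4)
n, p = 30, 300
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:4] = [2.0, -1.0, 1.5, 0.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_bp = basis_pursuit(X, y)
print("model error:", np.linalg.norm(beta_bp - beta_true))
```
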
  • Scalable Model Selection for Spatial Additive Mixed Modeling: Application to Crime Analysis
Scalable model selection for spatial additive mixed modeling: application to crime analysis

Daisuke Murakami (1,2,*), Mami Kajita (1), Seiji Kajita (1)
1: Singular Perturbations Co. Ltd., 1-5-6 Risona Kudan Building, Kudanshita, Chiyoda, Tokyo, 102-0074, Japan
2: Department of Statistical Data Science, Institute of Statistical Mathematics, 10-3 Midori-cho, Tachikawa, Tokyo, 190-8562, Japan
*: Corresponding author (Email: [email protected])

Abstract: A rapid growth in spatial open datasets has led to a huge demand for regression approaches accommodating spatial and non-spatial effects in big data. Regression model selection is particularly important to stably estimate flexible regression models. However, conventional methods can be slow for large samples. Hence, we develop a fast and practical model-selection approach for spatial regression models, focusing on the selection of coefficient types that include constant, spatially varying, and non-spatially varying coefficients. A pre-processing approach, which replaces data matrices with small inner products through dimension reduction, dramatically accelerates the computation speed of model selection. Numerical experiments show that our approach selects the model accurately and computationally efficiently, highlighting the importance of model selection in the spatial regression context. Then, the present approach is applied to open data to investigate local factors affecting crime in Japan. The results suggest that our approach is useful not only for selecting factors influencing crime risk but also for predicting crime events. This scalable model selection will be key to appropriately specifying flexible and large-scale spatial regression models in the era of big data. The developed model selection approach was implemented in the R package spmoran.

Keywords: model selection; spatial regression; crime; fast computation; spatially varying coefficient modeling
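
The general idea of replacing a tall data matrix with small inner products can be illustrated generically (this is not the paper's algorithm or the spmoran implementation): once $X^\top X$, $X^\top y$ and $y^\top y$ are formed in a single pass, every candidate subset model can be scored without revisiting the $n \times p$ data:

```python
import numpy as np
from itertools import combinations

# Generic illustration of the inner-product trick (assumed toy setup): after one
# pass over the tall data to form G = X'X, g = X'y and y'y, the residual sum of
# squares of every candidate subset model comes from these small arrays alone.
rng = np.random.default_rng(5)
n, p = 100_000, 8
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - X[:, 3] + rng.standard_normal(n)

G, g, yty = X.T @ X, X.T @ y, y @ y            # the only pass over the big data

def rss(subset):
    idx = list(subset)
    beta = np.linalg.solve(G[np.ix_(idx, idx)], g[idx])
    return yty - g[idx] @ beta

sigma2 = rss(range(p)) / (n - p)               # error variance from the full model
candidates = [s for k in range(1, p + 1) for s in combinations(range(p), k)]
best = min(candidates, key=lambda s: rss(s) + 2 * len(s) * sigma2)   # Mallows-Cp-style score
print(best)                                    # expected: (0, 3)
```
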
  • Model Selection Techniques: an Overview
Model Selection Techniques: An Overview

Jie Ding, Vahid Tarokh, and Yuhong Yang

In the era of big data, analysts usually explore various statistical models or machine-learning methods for observed data to facilitate scientific discoveries or gain predictive power. Whatever data and fitting procedures are employed, a crucial step is to select the most appropriate model or method from a set of candidates. Model selection is a key ingredient in data analysis for reliable and reproducible statistical inference or prediction, and thus it is central to scientific studies in such fields as ecology, economics, engineering, finance, political science, biology, and epidemiology. There has been a long history of model selection techniques that arise from research in statistics, information theory, and signal processing. A considerable number of methods has been proposed, following different philosophies and exhibiting varying performances. The purpose of this article is to provide a comprehensive overview of them, in terms of their motivation, large sample performance, and applicability. We provide integrated and practically relevant discussions on theoretical properties of state-of-the-art model selection approaches. We also share our thoughts on some controversial views on the practice of model selection.

Why model selection
Vast developments in hardware storage, precision instrument manufacturing, economic globalization, and so forth have generated huge volumes of data that can be analyzed to extract useful information. Typical statistical inference or machine-learning procedures learn from and make predictions on data by fitting parametric or nonparametric models (in a broad sense).

Digital Object Identifier 10.1109/MSP.2018.2867638. Date of publication: 13 November 2018. IEEE Signal Processing Magazine, November 2018.
  • Least Squares After Model Selection in High-Dimensional Sparse Models
Bernoulli 19(2), 2013, 521–547. DOI: 10.3150/11-BEJ410

Least squares after model selection in high-dimensional sparse models

Alexandre Belloni (100 Fuqua Drive, Durham, North Carolina 27708, USA; e-mail: [email protected]) and Victor Chernozhukov (50 Memorial Drive, Cambridge, Massachusetts 02142, USA; e-mail: [email protected])

In this article we study post-model selection estimators that apply ordinary least squares (OLS) to the model selected by first-step penalized estimators, typically Lasso. It is well known that Lasso can estimate the nonparametric regression function at nearly the oracle rate, and is thus hard to improve upon. We show that the OLS post-Lasso estimator performs at least as well as Lasso in terms of the rate of convergence, and has the advantage of a smaller bias. Remarkably, this performance occurs even if the Lasso-based model selection "fails" in the sense of missing some components of the "true" regression model. By the "true" model, we mean the best s-dimensional approximation to the nonparametric regression function chosen by the oracle. Furthermore, the OLS post-Lasso estimator can perform strictly better than Lasso, in the sense of a strictly faster rate of convergence, if the Lasso-based model selection correctly includes all components of the "true" model as a subset and also achieves sufficient sparsity. In the extreme case, when Lasso perfectly selects the "true" model, the OLS post-Lasso estimator becomes the oracle estimator. An important ingredient in our analysis is a new sparsity bound on the dimension of the model selected by Lasso, which guarantees that this dimension is at most of the same order as the dimension of the "true" model.
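
A sketch of the two-step idea the abstract describes, with Lasso used only to select the support and OLS refit on the selected columns; the tuning by cross-validation and all settings below are our illustrative choices, not the paper's:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

# OLS post-Lasso sketch: step 1 selects a model with Lasso, step 2 refits the
# selected columns by ordinary least squares to reduce the shrinkage bias.
rng = np.random.default_rng(6)
n, p, s = 200, 500, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:s] = [1.0, -1.0, 0.5, 2.0, -1.5]
y = X @ beta_true + rng.standard_normal(n)

lasso = LassoCV(cv=5).fit(X, y)
support = np.flatnonzero(lasso.coef_)            # model selected in the first step

post = LinearRegression().fit(X[:, support], y)  # second-step OLS refit
beta_post = np.zeros(p); beta_post[support] = post.coef_

print("Lasso error:     ", np.linalg.norm(lasso.coef_ - beta_true).round(3))
print("post-Lasso error:", np.linalg.norm(beta_post - beta_true).round(3))
```
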
  • Modern Regression 2: the Lasso
Modern regression 2: The lasso

Ryan Tibshirani, Data Mining: 36-462/36-662, March 21, 2013. Optional reading: ISL 6.2.2; ESL 3.4.2, 3.4.3.

Reminder: ridge regression and variable selection. Recall our setup: we are given a response vector $y \in \mathbb{R}^n$ and a matrix $X \in \mathbb{R}^{n \times p}$ of predictor variables (predictors on the columns). Last time we saw that ridge regression,

$$\hat{\beta}^{\mathrm{ridge}} = \operatorname*{argmin}_{\beta \in \mathbb{R}^p} \|y - X\beta\|_2^2 + \lambda\|\beta\|_2^2,$$

can have better prediction error than linear regression in a variety of scenarios, depending on the choice of $\lambda$. It worked best when there was a subset of the true coefficients that are small or zero. But it will never set coefficients to zero exactly, and therefore cannot perform variable selection in the linear model. While this didn't seem to hurt its prediction ability, it is not desirable for the purposes of interpretation (especially if the number of variables $p$ is large).

Recall our example: $n = 50$, $p = 30$; true coefficients: 10 are nonzero and pretty big, 20 are zero. [Figure: left panel, MSE of linear regression and ridge regression, with ridge squared bias and variance, plotted against $\lambda$; right panel, ridge coefficient paths against $\lambda$, with true nonzero and true zero coefficients distinguished.]

Example: prostate data. Recall the prostate data example: we are interested in the level of prostate-specific antigen (PSA), elevated in men who have prostate cancer. We have measurements of PSA on $n = 97$ men with prostate cancer, and $p = 8$ clinical predictors.
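
A short sketch in the spirit of the slide's simulated comparison (settings assumed: n = 50, p = 30, 10 sizable nonzero coefficients, 20 zero ones), showing that the lasso zeroes out coefficients exactly while ridge only shrinks them:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Ridge vs lasso on a sparse truth: count coefficients set exactly to zero.
rng = np.random.default_rng(7)
n, p = 50, 30
X = rng.standard_normal((n, p))
beta_true = np.r_[rng.uniform(0.5, 1.0, 10), np.zeros(20)]
y = X @ beta_true + rng.standard_normal(n)

ridge = Ridge(alpha=5.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
print("ridge coefficients exactly zero:", np.sum(ridge.coef_ == 0))   # typically 0
print("lasso coefficients exactly zero:", np.sum(lasso.coef_ == 0))   # typically many
```
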