Empirical Priors for Prediction in Sparse High-Dimensional Linear Regression∗
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Efficient Empirical Bayes Variable Selection and Estimation in Linear Models
Ming Yuan and Yi Lin.

We propose an empirical Bayes method for variable selection and coefficient estimation in linear regression models. The method is based on a particular hierarchical Bayes formulation, and the empirical Bayes estimator is shown to be closely related to the LASSO estimator. Such a connection allows us to take advantage of the recently developed quick LASSO algorithm to compute the empirical Bayes estimate, and provides a new way to select the tuning parameter in the LASSO method. Unlike previous empirical Bayes variable selection methods, which in most practical situations can be implemented only through a greedy stepwise algorithm, our method gives a global solution efficiently. Simulations and real examples show that the proposed method is very competitive in terms of variable selection, estimation accuracy, and computation speed compared with other variable selection and estimation methods. KEY WORDS: Hierarchical model; LARS algorithm; LASSO; Model selection; Penalized least squares.

1. INTRODUCTION. We consider the problem of variable selection and coefficient estimation in the common normal linear regression model where we have n observations on a dependent variable Y and p predictors (x_1, x_2, ..., x_p), and Y = Xβ + ε, (1) where ε ~ N_n(0, σ²I) and β = (β_1, ..., β_p)'. [...] because the number of candidate models grows exponentially as the number of predictors increases. In practice, this type of method is implemented in a stepwise fashion through forward selection or backward elimination. In doing so, one contents oneself with the locally optimal solution instead of the globally optimal solution. A number of other variable selection methods have been introduced in recent years (George and McCulloch 1993; Foster ...).
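As a rough companion to the model in (1), the sketch below simulates a sparse regression and fits the LASSO with a data-driven tuning parameter using scikit-learn. It is only a generic illustration of the setting, not the Yuan-Lin empirical Bayes estimator; the dimensions, sparsity pattern, and noise level are invented for the example.

```python
# A generic illustration of the setting in (1): simulate a sparse normal
# linear model and fit the LASSO with a cross-validated tuning parameter.
# This is NOT the Yuan-Lin empirical Bayes estimator; sizes, sparsity, and
# noise level are invented for the example.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]            # only five nonzero coefficients
y = X @ beta + rng.normal(size=n)                 # Y = X beta + eps, eps ~ N_n(0, I)

fit = LassoCV(cv=5).fit(X, y)                     # tuning parameter chosen from the data
print("selected variables:", np.flatnonzero(fit.coef_))
print("chosen regularization parameter:", fit.alpha_)
```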
Paradoxes and Priors in Bayesian Regression
Dissertation presented in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Graduate School of The Ohio State University. By Agniva Som, B. Stat., M. Stat. Graduate Program in Statistics, The Ohio State University, 2014. Dissertation Committee: Dr. Christopher M. Hans, Advisor; Dr. Steven N. MacEachern, Co-advisor; Dr. Mario Peruggia. © Copyright by Agniva Som 2014.

Abstract: The linear model has been by far the most popular and most attractive choice of a statistical model over the past century, ubiquitous in both frequentist and Bayesian literature. The basic model has been gradually improved over the years to deal with stronger features in the data like multicollinearity, non-linear or functional data patterns, violation of underlying model assumptions, etc. One valuable direction pursued in the enrichment of the linear model is the use of Bayesian methods, which blend information from the data likelihood and suitable prior distributions placed on the unknown model parameters to carry out inference. This dissertation studies the modeling implications of many common prior distributions in linear regression, including the popular g prior and its recent ameliorations. Formalization of desirable characteristics for model comparison and parameter estimation has led to the growth of appropriate mixtures of g priors that conform to the seven standard model selection criteria laid out by Bayarri et al. (2012). The existence of some of these properties (or lack thereof) is demonstrated by examining the behavior of the prior under suitable limits on the likelihood or on the prior itself.
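For readers unfamiliar with the g prior mentioned above, the following minimal sketch shows the shrinkage it induces on the ordinary least squares estimate: under the standard Zellner setup with prior mean zero, the posterior mean of β is the OLS estimate multiplied by g/(1+g). The simulated data and the choices of g are illustrative.

```python
# A minimal sketch of the shrinkage induced by Zellner's g prior: with prior
# beta ~ N(0, g * sigma^2 * (X'X)^{-1}), the posterior mean is the OLS estimate
# scaled by g / (1 + g). Data and the values of g are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 4
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, 0.0, -1.0]) + rng.normal(size=n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
for g in (1.0, float(n), 1e6):                    # g = n is the unit-information choice
    beta_post = (g / (1.0 + g)) * beta_ols        # posterior mean under the g prior
    print(f"g = {g:9.0f}: posterior mean = {np.round(beta_post, 3)}")
```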
A Compendium of Conjugate Priors
Daniel Fink, Environmental Statistics Group, Department of Biology, Montana State University, Bozeman, MT 59717. May 1997.

Abstract: This report reviews conjugate priors and priors closed under sampling for a variety of data generating processes where the prior distributions are univariate, bivariate, and multivariate. The effects of transformations on conjugate prior relationships are considered and cases where conjugate prior relationships can be applied under transformations are identified. Univariate and bivariate prior relationships are verified using Monte Carlo methods.

Contents. 1 Introduction. Experimenters are often in the position of having had collected some data from which they desire to make inferences about the process that produced that data. Bayes' theorem provides an appealing approach to solving such inference problems. Bayes' theorem,

    g(θ | x_1, ..., x_n) = π(θ) L(θ | x_1, ..., x_n) / ∫ π(θ) L(θ | x_1, ..., x_n) dθ,   (1)

is commonly interpreted in the following way. We want to make some sort of inference on the unknown parameter(s), θ, based on our prior knowledge of θ and the data collected, x_1, ..., x_n. Our prior knowledge is encapsulated by the probability distribution on θ, π(θ). The data that has been collected is combined with our prior through the likelihood function, L(θ | x_1, ..., x_n). The normalized product of these two components yields a probability distribution of θ conditional on the data. This distribution, g(θ | x_1, ..., x_n), is known as the posterior distribution of θ. Bayes' theorem is easily extended to cases where θ is multivariate, a vector of parameters.
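A standard conjugate instance of the update in (1) is the Beta-Binomial pair, one of the relationships a compendium like this reviews; the sketch below (with an arbitrary prior and data) shows that the posterior is available in closed form, so the normalizing integral never needs to be evaluated numerically.

```python
# A standard conjugate instance of (1): a Beta(a, b) prior on a Bernoulli
# success probability combined with binomial data gives a Beta posterior in
# closed form, so the normalizing integral never needs numerical evaluation.
# Prior parameters and data are illustrative.
from scipy import stats

a, b = 2.0, 2.0                        # prior Beta(a, b)
successes, trials = 7, 10              # observed data

posterior = stats.beta(a + successes, b + (trials - successes))
print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```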
A Review of Bayesian Optimization
Taking the Human Out of the Loop: A Review of Bayesian Optimization. Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando de Freitas.

Citation: Shahriari, Bobak, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando de Freitas. 2016. "Taking the Human Out of the Loop: A Review of Bayesian Optimization." Proc. IEEE 104 (1) (January): 148–175. doi:10.1109/JPROC.2015.2494218. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:27769882

Abstract: Big data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. [...] and the analytics company that sits between them. The analytics company must develop procedures to automatically design game variants across millions of users; the objective is to enhance user experience and maximize the content provider's revenue. The preceding examples highlight the importance of ...
Bayes and Empirical-Bayes Multiplicity Adjustment in the Variable-Selection Problem
The Annals of Statistics, 2010, Vol. 38, No. 5, 2587–2619. DOI: 10.1214/10-AOS792. © Institute of Mathematical Statistics, 2010. By James G. Scott and James O. Berger, University of Texas at Austin and Duke University.

This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches are found. In particular, we prove a theorem that characterizes a surprising asymptotic discrepancy between fully Bayes and empirical Bayes. This discrepancy arises from a different source than the failure to account for hyperparameter uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when the empirical-Bayes estimate converges asymptotically to the true variable-inclusion probability, the potential for a serious difference remains.

1. Introduction. This paper addresses concerns about multiplicity in the traditional variable-selection problem for linear models. We focus on Bayesian and empirical-Bayesian approaches to the problem. These methods both have the attractive feature that they can, if set up correctly, account for multiplicity automatically, without the need for ad-hoc penalties. Given the huge number of possible predictors in many of today's scientific problems, these concerns about multiplicity are becoming ever more relevant.
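The automatic multiplicity correction discussed above can be made concrete with a small computation: placing a Beta(a, b) prior on the common variable-inclusion probability gives each specific model with k of p variables a marginal prior probability of B(a+k, b+p-k)/B(a, b), which shrinks rapidly as p grows. The sketch below, with a = b = 1 and illustrative values of p and k, is a minimal illustration of that effect, not the paper's analysis.

```python
# A minimal illustration of automatic multiplicity correction (not the paper's
# analysis): with a Beta(a, b) prior on the common inclusion probability, one
# specific model with k of p variables has marginal prior probability
# B(a + k, b + p - k) / B(a, b), which shrinks quickly as p grows.
from scipy.special import betaln

def log_prior_model(k, p, a=1.0, b=1.0):
    """Log marginal prior probability of one specific model with k of p variables."""
    return betaln(a + k, b + p - k) - betaln(a, b)

for p in (10, 100, 1000):
    print(f"p = {p:4d}: log prior probability of a fixed 5-variable model = "
          f"{log_prior_model(5, p):.2f}")
```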
9 Introduction to Hierarchical Models
One of the important features of a Bayesian approach is the relative ease with which hierarchical models can be constructed and estimated using Gibbs sampling. In fact, one of the key reasons for the recent growth in the use of Bayesian methods in the social sciences is that the use of hierarchical models has also increased dramatically in the last two decades. Hierarchical models serve two purposes. One purpose is methodological; the other is substantive. Methodologically, when units of analysis are drawn from clusters within a population (communities, neighborhoods, city blocks, etc.), they can no longer be considered independent. Individuals who come from the same cluster will be more similar to each other than they will be to individuals from other clusters. Therefore, unobserved variables may induce statistical dependence between observations within clusters that may be uncaptured by covariates within the model, violating a key assumption of maximum likelihood estimation as it is typically conducted when independence of errors is assumed. Recall that a likelihood function, when observations are independent, is simply the product of the density functions for each observation taken over all the observations. However, when independence does not hold, we cannot construct the likelihood as simply. Thus, one reason for constructing hierarchical models is to compensate for the biases, largely in the standard errors, that are introduced when the independence assumption is violated. See Ezell, Land, and Cohen (2003) for a thorough review of the approaches that have been used to correct standard errors in hazard modeling applications with repeated events, one class of models in which repeated measurement yields hierarchical clustering.
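The clustering problem described above is easy to see in a simulation: the sketch below draws data from a random-intercept model and compares the theoretical intraclass correlation τ²/(τ²+σ²) with a rough ANOVA-style estimate. The cluster and individual-level standard deviations are made up for the illustration.

```python
# A small simulation of clustered data from a random-intercept model
# y_ij = u_j + e_ij: the induced intraclass correlation tau^2 / (tau^2 + sigma^2)
# is far from zero, so treating the observations as independent is wrong.
# All variance components and sizes are invented for the illustration.
import numpy as np

rng = np.random.default_rng(2)
n_clusters, m = 50, 20
tau, sigma = 1.0, 1.0                                   # cluster-level and individual-level SDs

u = rng.normal(scale=tau, size=n_clusters)              # cluster effects
y = u[:, None] + rng.normal(scale=sigma, size=(n_clusters, m))

between = y.mean(axis=1).var(ddof=1)                    # variance of cluster means
within = y.var(axis=1, ddof=1).mean()                   # average within-cluster variance
tau2_hat = between - within / m                         # rough ANOVA-style estimate of tau^2
print("theoretical ICC:", tau**2 / (tau**2 + sigma**2))
print("estimated ICC  :", tau2_hat / (tau2_hat + within))
```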
On the Robustness to Misspecification of α-Posteriors and Their Variational Approximations
Marco Avella Medina (Department of Statistics, Columbia University), José Luis Montiel Olea (Department of Economics, Columbia University), Cynthia Rush (Department of Statistics, Columbia University), Amilcar Velez (Department of Economics, Northwestern University). arXiv:2104.08324v1 [stat.ML] 16 Apr 2021.

Abstract: α-posteriors and their variational approximations distort standard posterior inference by downweighting the likelihood and introducing variational approximation errors. We show that such distortions, if tuned appropriately, reduce the Kullback-Leibler (KL) divergence from the true, but perhaps infeasible, posterior distribution when there is potential parametric model misspecification. To make this point, we derive a Bernstein-von Mises theorem showing convergence in total variation distance of α-posteriors and their variational approximations to limiting Gaussian distributions. We use these distributions to evaluate the KL divergence between true and reported posteriors. We show this divergence is minimized by choosing α strictly smaller than one, assuming there is a vanishingly small probability of model misspecification. The optimized value becomes smaller as the misspecification becomes ...
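The downweighting that defines an α-posterior is easiest to see in a conjugate case. The sketch below uses a normal mean with known variance, where raising the likelihood to the power α simply multiplies the data precision by α; the prior, data, and values of α are illustrative and unrelated to the paper's asymptotic analysis.

```python
# A conjugate sketch of an alpha-posterior for a normal mean with known
# variance: raising the likelihood to the power alpha multiplies the data
# precision by alpha, so the downweighting is visible in closed form.
# Prior, data, and alpha values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0
x = rng.normal(loc=2.0, scale=sigma, size=30)

m0, s0 = 0.0, 2.0                                       # prior N(m0, s0^2)
for alpha in (1.0, 0.5, 0.1):
    prec = 1.0 / s0**2 + alpha * len(x) / sigma**2      # tempered posterior precision
    mean = (m0 / s0**2 + alpha * x.sum() / sigma**2) / prec
    print(f"alpha = {alpha:.1f}: posterior mean = {mean:.3f}, posterior sd = {prec ** -0.5:.3f}")
```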
Approximate Empirical Bayes for Deep Neural Networks
Han Zhao, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Geoff Gordon. {hanzhao, yaohungt, rsalakhu, ggordon}@cs.cmu.edu. Machine Learning Department, Carnegie Mellon University.

Abstract: We propose an approximate empirical Bayes framework and an efficient algorithm for learning the weight matrix of deep neural networks. Empirically, we show the proposed method works as a regularization approach that helps generalization when training neural networks on small datasets.

1 Introduction. The empirical Bayes methods provide us a powerful tool to obtain Bayesian estimators even if we do not have complete information about the prior distribution. The literature on the empirical Bayes methods and its applications is abundant [2, 6, 8, 9, 10, 15, 21, 23]. [...] Dropout [22] and the more recent DeCov [5] method, in this paper we propose a regularization approach for the weight matrix in neural networks through the lens of the empirical Bayes method. We aim to address the problem of overfitting when training large networks on small datasets. Our key insight stems from the famous argument by Efron [6]: it is beneficial to learn from the experience of others. Specifically, from an algorithmic perspective, we argue that the connection weights of neurons in the same layer (row/column vectors of the weight matrix) will be correlated with each other through the backpropagation learning, and hence by learning from other neurons in the same layer, a neuron can "borrow statistical strength" from other neurons to reduce overfitting. As an illustrating example, consider the simple setting where the input x ∈ R^d is fully connected to a hidden layer h ∈ R^p, which is further fully connected to the ...
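The "borrowing statistical strength" idea in the excerpt can be illustrated, very loosely, with a classical empirical Bayes shrinkage of the rows of a weight matrix toward a shared mean. The sketch below is not the authors' algorithm; the matrix, assumed noise level, and moment-based variance estimate are all invented for the illustration.

```python
# A loose, generic illustration of "borrowing statistical strength" across the
# rows of a weight matrix: treat each row as drawn around a shared mean,
# estimate the shared mean and spread from the rows themselves (the empirical
# Bayes step), and shrink noisy rows toward that mean. This is NOT the
# algorithm proposed in the paper; all quantities are invented.
import numpy as np

rng = np.random.default_rng(4)
p, d = 32, 64                                   # hidden units and inputs
noise_var = 0.25                                # assumed known per-weight "estimation noise"

true_rows = rng.normal(scale=0.3, size=(p, d))
W_noisy = true_rows + rng.normal(scale=noise_var ** 0.5, size=(p, d))

mu_hat = W_noisy.mean(axis=0)                                           # shared row-level mean
tau2_hat = max(W_noisy.var(axis=0, ddof=1).mean() - noise_var, 1e-8)    # EB spread estimate
shrink = tau2_hat / (tau2_hat + noise_var)                              # shrinkage factor
W_shrunk = mu_hat + shrink * (W_noisy - mu_hat)

print("shrinkage factor:", round(shrink, 3))
print("MSE before shrinkage:", np.mean((W_noisy - true_rows) ** 2))
print("MSE after shrinkage :", np.mean((W_shrunk - true_rows) ** 2))
```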
Empirical Bayes Least Squares Estimation Without an Explicit Prior
Computer Science Technical Report TR2007-900, May 2007. Courant Institute of Mathematical Sciences, New York University. http://cs.nyu.edu/web/Research/TechReports/reports.html. Martin Raphan (Courant Institute of Mathematical Sciences, New York University) and Eero P. Simoncelli (Howard Hughes Medical Institute, Center for Neural Science, and Courant Institute of Mathematical Sciences, New York University).

Bayesian estimators are commonly constructed using an explicit prior model. In many applications, one does not have such a model, and it is difficult to learn since one does not have access to uncorrupted measurements of the variable being estimated. In many cases however, including the case of contamination with additive Gaussian noise, the Bayesian least squares estimator can be formulated directly in terms of the distribution of noisy measurements. We demonstrate the use of this formulation in removing noise from photographic images. We use a local approximation of the noisy measurement distribution by exponentials over adaptively chosen intervals, and derive an estimator from this approximate distribution. We demonstrate through simulations that this adaptive Bayesian estimator performs as well or better than previously published estimators based on simple prior models.

1 Introduction. Denoising is a classic signal processing problem, and Bayesian methods can provide well formulated and highly successful solutions. Bayesian estimators are derived from three fundamental components: 1) the likelihood function, which characterizes the distortion process, 2) the prior, which characterizes the distribution of the variable to be estimated in the absence of any measurement data, and 3) the loss function, which expresses the cost of making errors.
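For additive Gaussian noise, a "prior-free" formulation of this kind can be written through a classical identity (often attributed to Miyasawa or Tweedie): the posterior-mean estimate is y + σ² d/dy log p_Y(y), where p_Y is the density of the noisy observations. The sketch below estimates p_Y with a kernel density estimate on simulated one-dimensional data; it is a toy stand-in, not the report's adaptive local-exponential approximation.

```python
# A toy stand-in for the "prior-free" formulation under additive Gaussian
# noise, using the identity (often attributed to Miyasawa or Tweedie)
# E[x | y] = y + sigma^2 * d/dy log p_Y(y), with p_Y estimated by a kernel
# density estimate. Signal and noise level are illustrative.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
sigma = 0.5
x = rng.choice([-2.0, 2.0], size=2000) + 0.1 * rng.normal(size=2000)   # clean signal
y = x + rng.normal(scale=sigma, size=x.size)                            # noisy observations

kde = gaussian_kde(y)                     # density of the *noisy* measurements
eps = 1e-3

def denoise(v):
    score = (np.log(kde(v + eps)) - np.log(kde(v - eps))) / (2 * eps)   # d/dy log p_Y(y)
    return v + sigma**2 * score

print("mean squared error, noisy   :", np.mean((y - x) ** 2))
print("mean squared error, denoised:", np.mean((denoise(y) - x) ** 2))
```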
Nonparametric Empirical Bayes for the Dirichlet Process Mixture Model
Jon D. McAuliffe, David M. Blei, Michael I. Jordan. October 4, 2004.

Abstract: The Dirichlet process prior allows flexible nonparametric mixture modeling. The number of mixture components is not specified in advance and can grow as new data come in. However, the behavior of the model is sensitive to the choice of the parameters, including an infinite-dimensional distributional parameter G0. Most previous applications have either fixed G0 as a member of a parametric family or treated G0 in a Bayesian fashion, using parametric prior specifications. In contrast, we have developed an adaptive nonparametric method for constructing smooth estimates of G0. We combine this method with a technique for estimating α, the other Dirichlet process parameter, that is inspired by an existing characterization of its maximum-likelihood estimator. Together, these estimation procedures yield a flexible empirical Bayes treatment of Dirichlet process mixtures. Such a treatment is useful in situations where smooth point estimates of G0 are of intrinsic interest, or where the structure of G0 cannot be conveniently modeled with the usual parametric prior families. Analysis of simulated and real-world datasets illustrates the robustness of this approach.

1 INTRODUCTION. Mixture modeling is an effective and widely practiced density estimation method, capable of representing the phenomena that underlie many real-world datasets. However, a long-standing difficulty in mixture analysis is choosing the number of mixture components. Model selection methods, such as cross-validation or methods based on Bayes factors, treat the number of components as an unknown constant and set its value based on the observed data.
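One standard handle on the concentration parameter α mentioned above is that, for a Dirichlet process with n observations, the expected number of distinct clusters is Σ_{i=1}^{n} α/(α+i-1); matching that expectation to an observed cluster count gives a quick estimate, in the spirit of (though not necessarily identical to) the maximum-likelihood characterization the authors build on. The sample size and cluster count below are invented.

```python
# A quick estimate of the Dirichlet process concentration parameter alpha by
# matching the expected number of distinct clusters,
# E[K] = sum_{i=1}^{n} alpha / (alpha + i - 1), to an observed cluster count.
# This is an illustration, not the authors' full procedure; n and K are invented.
import numpy as np
from scipy.optimize import brentq

def expected_num_clusters(alpha, n):
    i = np.arange(1, n + 1)
    return np.sum(alpha / (alpha + i - 1))

n_obs, k_observed = 500, 12
alpha_hat = brentq(lambda a: expected_num_clusters(a, n_obs) - k_observed, 1e-6, 1e3)
print("alpha matching E[K] = 12:", round(alpha_hat, 3))
```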
Prior Distributions for Variance Parameters in Hierarchical Models
Bayesian Analysis (2006) 1, Number 3, pp. 515–533. Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University.

Abstract: Various noninformative prior distributions have been suggested for scale parameters in hierarchical models. We construct a new folded-noncentral-t family of conditionally conjugate priors for hierarchical standard deviation parameters, and then consider noninformative and weakly informative priors in this family. We use an example to illustrate serious problems with the inverse-gamma family of "noninformative" prior distributions. We suggest instead to use a uniform prior on the hierarchical standard deviation, using the half-t family when the number of groups is small and in other settings where a weakly informative prior is desired. We also illustrate the use of the half-t family for hierarchical modeling of multiple variance parameters such as arise in the analysis of variance.

Keywords: Bayesian inference, conditional conjugacy, folded-noncentral-t distribution, half-t distribution, hierarchical model, multilevel model, noninformative prior distribution, weakly informative prior distribution.

1 Introduction. Fully-Bayesian analyses of hierarchical linear models have been considered for at least forty years (Hill, 1965, Tiao and Tan, 1965, and Stone and Springer, 1965) and have remained a topic of theoretical and applied interest (see, e.g., Portnoy, 1971, Box and Tiao, 1973, Gelman et al., 2003, Carlin and Louis, 1996, and Meng and van Dyk, 2001). Browne and Draper (2005) review much of the extensive literature in the course of comparing Bayesian and non-Bayesian inference for hierarchical models. As part of their article, Browne and Draper consider some different prior distributions for variance parameters; here, we explore the principles of hierarchical prior distributions in the context of a specific class of models.
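A small numerical sketch of the contrast drawn above: the half-Cauchy (a half-t with one degree of freedom) places substantial, smoothly varying mass near τ = 0, while an inverse-gamma(ε, ε) prior on τ², mapped to the τ scale, places essentially none there and is sensitive to ε. The scale and ε values below are illustrative.

```python
# A numerical sketch of the contrast above: the half-Cauchy (half-t with one
# degree of freedom) has a finite, smoothly varying density near tau = 0,
# while an inverse-gamma(eps, eps) prior on tau^2, mapped to the tau scale,
# puts essentially no mass there. Scale and eps values are illustrative.
import numpy as np
from scipy import stats

def invgamma_density_on_tau(tau, eps=0.001):
    # density of tau when tau^2 ~ Inverse-Gamma(eps, eps), via the Jacobian 2 * tau
    return stats.invgamma(eps, scale=eps).pdf(tau ** 2) * 2 * tau

tau = np.array([0.01, 0.1, 0.5, 1.0, 5.0])
half_cauchy = stats.halfcauchy(scale=5.0).pdf(tau)       # half-Cauchy(0, 5) prior on tau
inv_gamma = invgamma_density_on_tau(tau)
for t, hc, ig in zip(tau, half_cauchy, inv_gamma):
    print(f"tau = {t:4.2f}: half-Cauchy density = {hc:.4f}, inverse-gamma density = {ig:.3e}")
```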
The Empirical Bayes Estimation of an Instantaneous Spike Rate with a Gaussian Process Prior
Shinsuke Koyama (Department of Physics, Kyoto University; Department of Statistics and Center for Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213), Takeaki Shimokawa (Department of Physics, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan), Shigeru Shinomoto (Department of Physics, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan).

Abstract: We investigate a Bayesian method for systematically capturing the underlying firing rate of a neuron. Any method of rate estimation requires a prior assumption about the flatness of the underlying rate, which can be represented by a Gaussian process prior. This Bayesian framework enables us to adjust the very assumption by taking into account the distribution of raw data: a hyperparameter of the Gaussian process prior is selected so that the marginal likelihood is maximized. It turns out that this hyperparameter diverges for spike sequences derived from a moderately fluctuating rate. By utilizing the path integral method, we demonstrate two cases that exhibit the divergence continuously and discontinuously.

1 Introduction. Bayesian methods based on Gaussian process priors have recently become quite popular, and provide flexible non-parametric methods for machine learning tasks such as regression and classification problems [1, 2, 3]. The important advantage of Gaussian process models is the explicit probabilistic formulation, which not only provides probabilistic predictions but also gives the ability to infer hyperparameters of models [4]. One of the interesting applications of the Gaussian process is the analysis of neuronal spike sequences.
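The empirical Bayes step described above, choosing a Gaussian process hyperparameter by maximizing the marginal likelihood, can be imitated on binned spike counts with off-the-shelf tools. The sketch below uses a square-root transform and scikit-learn's GP regressor, which maximizes the log marginal likelihood over kernel hyperparameters by default; it is a rough Gaussian-approximation stand-in, not the path-integral treatment in the paper, and the rate function and binning are invented.

```python
# A rough imitation of the empirical Bayes step above with off-the-shelf tools:
# bin simulated spikes, apply a square-root (variance-stabilizing) transform,
# and let scikit-learn's GP regressor pick kernel hyperparameters by maximizing
# the log marginal likelihood. This Gaussian approximation is a stand-in, not
# the paper's path-integral treatment; the rate function and binning are invented.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
T, dt = 50.0, 0.5
t = np.arange(0.0, T, dt)
rate = 20.0 + 10.0 * np.sin(2 * np.pi * t / 20.0)        # slowly fluctuating rate (Hz)
counts = rng.poisson(rate * dt)                           # binned spike counts

y = np.sqrt(counts)                                       # roughly Gaussian, stabilized variance
kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel).fit(t[:, None], y)

print("selected kernel:", gp.kernel_)
print("maximized log marginal likelihood:", gp.log_marginal_likelihood_value_)
```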