Nonparametric Regression
Recommended publications
Least Squares Regression Principal Component Analysis: A Supervised Dimensionality Reduction Method for Machine Learning in Scientific Applications
Universitat Politècnica de Catalunya, Bachelor's Degree in Engineering Physics. Bachelor's Thesis: Least Squares Regression Principal Component Analysis, a supervised dimensionality reduction method for machine learning in scientific applications. Author: Héctor Pascual Herrero. Supervisors: Dr. Xin Yee, Dr. Joan Torras. June 29, 2020.
Abstract: Dimension reduction is an important technique in surrogate modeling and machine learning. In this thesis, we present three existing dimension reduction methods in detail and then we propose a novel supervised dimension reduction method, "Least Squares Regression Principal Component Analysis" (LSR-PCA), applicable to both classification and regression dimension reduction tasks. To show the efficacy of this method, we present different examples in visualization, classification and regression problems, comparing it to state-of-the-art dimension reduction methods. Furthermore, we present the kernel version of LSR-PCA for problems where the inputs are correlated non-linearly. The examples demonstrated that LSR-PCA can be a competitive dimension reduction method.
Acknowledgements: I would like to express my gratitude to my thesis supervisor, Professor Xin Yee. I would like to thank her for giving me this wonderful opportunity and for her guidance and support throughout this semester, putting herself at my disposal during the difficult times of the COVID-19 situation. Without her, the making of this thesis would not have been possible. I would like to extend my thanks to Mr. Pere Balsells, for allowing students like me to conduct their thesis abroad, as well as to the Balsells Foundation for its help and support throughout the whole stay.
Nonparametric Regression 1 Introduction
Nonparametric Regression, 10/36-702, Larry Wasserman.
1 Introduction. Now we focus on the following problem: Given a sample (X1, Y1), ..., (Xn, Yn), where Xi ∈ R^d and Yi ∈ R, estimate the regression function
m(x) = E(Y | X = x)     (1)
without making parametric assumptions (such as linearity) about the regression function m(x). Estimating m is called nonparametric regression or smoothing. We can write Y = m(X) + ε where E(ε) = 0. This follows since ε = Y − m(X) and E(ε) = E(E(ε | X)) = E(m(X) − m(X)) = 0. A related problem is nonparametric prediction. Given a pair (X, Y), we want to predict Y from X. The optimal predictor (under squared error loss) is the regression function m(X). Hence, estimating m is of interest for its own sake and for the purposes of prediction.
Example 1. Figure 1 shows data on bone mineral density. The plots show the relative change in bone density over two consecutive visits, for men and women. The smooth estimates of the regression functions suggest that a growth spurt occurs two years earlier for females. In this example, Y is change in bone mineral density and X is age.
Example 2. Figure 2 shows an analysis of some diabetes data from Efron, Hastie, Johnstone and Tibshirani (2004). The outcome Y is a measure of disease progression after one year. We consider four covariates (ignoring, for now, six other variables): age, bmi (body mass index), and two variables representing blood serum measurements. A nonparametric regression model in this case takes the form Y = m(x1, x2, x3, x4) + ε.
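As a quick illustration of the smoothing problem (a minimal sketch on simulated data, not taken from the notes themselves), a kernel estimate of m(x) = E(Y | X = x) can be computed in R with the base function ksmooth:

    # minimal sketch: kernel (Nadaraya-Watson) estimate of the regression function m
    # simulated data; the bandwidth 0.2 is chosen by eye, not by any optimality criterion
    set.seed(10)
    x <- runif(400)
    y <- (x - 0.5)^2 + rnorm(400, sd = 0.05)
    fit <- ksmooth(x, y, kernel = "normal", bandwidth = 0.2)
    plot(x, y, pch = 20, col = "grey")
    lines(fit, lwd = 2, col = "blue")   # smooth estimate of m(x)

No parametric form for m is assumed here; the estimate at each point is a locally weighted average of the observed responses.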
Nonparametric Regression with Complex Survey Data
CHAPTER 11. NONPARAMETRIC REGRESSION WITH COMPLEX SURVEY DATA
R. L. Chambers, Department of Social Statistics, University of Southampton
A. H. Dorfman, Office of Survey Methods Research, Bureau of Labor Statistics
M. Yu. Sverchkov, Department of Statistics, Hebrew University of Jerusalem
June 2002
11.1 Introduction. The problem considered here is one familiar to analysts carrying out exploratory data analysis (EDA) of data obtained via a complex sample survey design. How does one adjust for the effects, if any, induced by the method of sampling used in the survey when applying EDA methods to these data? In particular, are adjustments to standard EDA methods necessary when the analyst's objective is identification of "interesting" population (rather than sample) structures? A variety of methods for adjusting for complex sample design when carrying out parametric inference have been suggested. See, for example, SHS, Pfeffermann (1993) and Breckling et al. (1994). However, comparatively little work has been done to date on extending these ideas to EDA, where a parametric formulation of the problem is typically inappropriate. We focus on a popular EDA technique, nonparametric regression or scatterplot smoothing. The literature contains a limited number of applications of this type of analysis to survey data, usually based on some form of sample weighting. The design-based theory set out in chapter 10, with associated references, provides an introduction to this work. See also Chesher (1997). The approach taken here is somewhat different. In particular, it is model-based, building on the sample distribution concept discussed in section 2.3. Here we develop this idea further, using it to motivate a number of methods for adjusting for the effect of a complex sample design when estimating a population regression function.
Nonparametric Bayesian Inference
Nonparametric Bayesian Methods
1 What is Nonparametric Bayes? In parametric Bayesian inference we have a model M = {f(y | θ) : θ ∈ Θ} and data Y1, ..., Yn ∼ f(y | θ). We put a prior distribution π(θ) on the parameter θ and compute the posterior distribution using Bayes' rule:
π(θ | Y) = Ln(θ) π(θ) / m(Y)     (1)
where Y = (Y1, ..., Yn), Ln(θ) = ∏i f(Yi | θ) is the likelihood function and
m(y) = m(y1, ..., yn) = ∫ f(y1, ..., yn | θ) π(θ) dθ = ∫ ∏_{i=1}^n f(yi | θ) π(θ) dθ
is the marginal distribution for the data induced by the prior and the model. We call m the induced marginal. The model may be summarized as:
θ ∼ π,  Y1, ..., Yn | θ ∼ f(y | θ).
We use the posterior to compute a point estimator such as the posterior mean of θ. We can also summarize the posterior by drawing a large sample θ1, ..., θN from the posterior π(θ | Y) and then plotting the samples.
In nonparametric Bayesian inference, we replace the finite-dimensional model {f(y | θ) : θ ∈ Θ} with an infinite-dimensional model such as
F = { f : ∫ (f''(y))^2 dy < ∞ }     (2)
Typically, neither the prior nor the posterior has a density function with respect to a dominating measure. But the posterior is still well defined. On the other hand, if there is a dominating measure for a set of densities F, then the posterior can be found by Bayes' theorem:
πn(A) ≡ P(f ∈ A | Y) = ∫_A Ln(f) dπ(f) / ∫_F Ln(f) dπ(f)     (3)
where A ⊂ F, Ln(f) = ∏i f(Yi) is the likelihood function and π is a prior on F.
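For the parametric case in (1), the posterior can be approximated numerically by evaluating the likelihood and prior on a grid; a small sketch in R, assuming a hypothetical Normal(θ, 1) model with a N(0, 1) prior:

    # grid approximation of the posterior pi(theta | Y) for a Normal(theta, 1) model
    set.seed(5)
    Y      <- rnorm(20, mean = 1.5, sd = 1)                  # hypothetical data
    theta  <- seq(-3, 5, length.out = 2000)                  # grid of parameter values
    loglik <- sapply(theta, function(th) sum(dnorm(Y, mean = th, sd = 1, log = TRUE)))
    prior  <- dnorm(theta, 0, 1)
    post   <- exp(loglik) * prior                            # numerator L_n(theta) * pi(theta)
    post   <- post / (sum(post) * diff(theta)[1])            # normalizing plays the role of m(Y)
    plot(theta, post, type = "l")                            # approximate posterior density

The nonparametric case admits no such finite-dimensional grid over Θ, which is exactly why constructions like (3) are needed.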
Nonparametric Regression and the Bootstrap Yen-Chi Chen December 5, 2016
Nonparametric Regression and the Bootstrap
Yen-Chi Chen, December 5, 2016

Nonparametric regression. We first generate a dataset using the following code:

    set.seed(1)
    X <- runif(500)
    Y <- sin(X*2*pi) + rnorm(500, sd=0.2)
    plot(X, Y)

[scatter plot of Y against X]

This is a sine-shaped regression dataset.

Simple linear regression. We first fit a linear regression to this dataset:

    fit <- lm(Y ~ X)
    summary(fit)

    ## Call:
    ## lm(formula = Y ~ X)
    ##
    ## Residuals:
    ##      Min       1Q   Median       3Q      Max
    ## -1.38395 -0.35908  0.04589  0.38541  1.05982
    ##
    ## Coefficients:
    ##             Estimate Std. Error t value Pr(>|t|)
    ## (Intercept)  1.03681    0.04306   24.08   <2e-16 ***
    ## X           -1.98962    0.07545  -26.37   <2e-16 ***
    ## ---
    ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    ##
    ## Residual standard error: 0.4775 on 498 degrees of freedom
    ## Multiple R-squared: 0.5827, Adjusted R-squared: 0.5819
    ## F-statistic: 695.4 on 1 and 498 DF, p-value: < 2.2e-16

We observe a negative slope, and here is the scatter plot with the regression line:

    plot(X, Y, pch=20, cex=0.7, col="black")
    abline(fit, lwd=3, col="red")

[scatter plot of Y against X with the fitted regression line]

After fitting a regression, we need to examine the residual plot to see if there is any pattern left in the fit:

    plot(X, fit$residuals)

[plot of fit$residuals against X]

We see a clear pattern in the residual plot; this suggests that there is a nonlinear relationship between X and Y.
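A natural follow-up (a sketch, not part of the excerpt above) is to fit a nonparametric smoother to the same data and check that the residual pattern disappears:

    # fit a local polynomial smoother (loess) to the same X, Y generated above
    fit_np <- loess(Y ~ X, span = 0.3)
    xs <- seq(0, 1, length.out = 200)
    plot(X, Y, pch = 20, cex = 0.7, col = "grey")
    lines(xs, predict(fit_np, newdata = data.frame(X = xs)), lwd = 3, col = "blue")
    plot(X, residuals(fit_np))   # residuals should now show no systematic pattern

The span value 0.3 is an illustrative choice; smaller spans track the sine shape more closely at the cost of higher variance.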
Testing for No Effect in Nonparametric Regression Via Spline Smoothing Techniques
Ann. Inst. Statist. Math. Vol. 46, No. 2, 251-265 (1994)
TESTING FOR NO EFFECT IN NONPARAMETRIC REGRESSION VIA SPLINE SMOOTHING TECHNIQUES
Juei-Chao Chen, Graduate Institute of Statistics and Actuarial Science, Feng Chia University, Taichung, Taiwan, ROC
(Received June 8, 1992; revised August 2, 1993)
Abstract. We propose three statistics for testing that a predictor variable has no effect on the response variable in regression analysis. The test statistics are integrals of squared derivatives of various orders of a periodic smoothing spline fit to the data. The large sample properties of the test statistics are investigated under the null hypothesis and sequences of local alternatives, and a Monte Carlo study is conducted to assess finite sample power properties.
Key words and phrases: Asymptotic distribution, local alternatives, nonparametric regression, Monte Carlo.
1. Introduction. Regression analysis is used to study relationships between the response variable and a predictor variable; therefore, all of the inference is based on the assumption that the response variable actually depends on the predictor variable. Testing for no effect is the same as checking this assumption. In this paper, we propose three statistics for testing that a predictor variable has no effect on the response variable and derive their asymptotic distributions. Suppose we have an experiment which yields observations (t1n, y1), ..., (tnn, yn), where yj represents the value of a response variable y at equally spaced values tjn = (j − 1)/n, j = 1, ..., n, of the predictor variable t. The regression model is
(1.1)  yj = μ(tjn) + εj,  j = 1, ..., n,
where the εj are iid random errors with E[ε1] = 0, Var[ε1] = σ² and 0 < E[ε1^4] < ∞.
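To make the ingredients concrete (an illustration only, not the paper's exact statistics), a smoothing spline can be fit in R and the integral of a squared derivative approximated numerically:

    # rough sketch: integral of the squared first derivative of a spline fit,
    # on hypothetical, equally spaced data on [0, 1]
    n    <- 200
    tj   <- (seq_len(n) - 1) / n
    y    <- 0.3 * sin(2 * pi * tj) + rnorm(n, sd = 0.1)
    fit  <- smooth.spline(tj, y)
    grid <- seq(0, 1, length.out = 1000)
    d1   <- predict(fit, grid, deriv = 1)$y     # first derivative of the fitted spline
    T1   <- mean(d1^2)                          # approximates the integral of (f'(t))^2 over [0, 1]
    T1                                          # values near zero are consistent with "no effect"

Under the null of no effect the fitted curve is essentially flat, so statistics of this form should be small; large values point to a real dependence on t.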
Enabling Deeper Learning on Big Data for Materials Informatics Applications
www.nature.com/scientificreports
Enabling deeper learning on big data for materials informatics applications
Dipendra Jha, Vishu Gupta, Logan Ward, Zijiang Yang, Christopher Wolverton, Ian Foster, Wei-keng Liao, Alok Choudhary & Ankit Agrawal
The application of machine learning (ML) techniques in materials science has attracted significant attention in recent years, due to their impressive ability to efficiently extract data-driven linkages from various input materials representations to their output properties. While the application of traditional ML techniques has become quite ubiquitous, there have been limited applications of more advanced deep learning (DL) techniques, primarily because big materials datasets are relatively rare. Given the demonstrated potential and advantages of DL and the increasing availability of big materials datasets, it is attractive to go for deeper neural networks in a bid to boost model performance, but in reality, it leads to performance degradation due to the vanishing gradient problem. In this paper, we address the question of how to enable deeper learning for cases where big materials data is available. Here, we present a general deep learning framework based on Individual Residual learning (IRNet) composed of very deep neural networks that can work with any vector-based materials representation as input to build accurate property prediction models. We find that the proposed IRNet models can not only successfully alleviate the vanishing gradient problem and enable deeper learning, but also lead to significantly (up to 47%) better model accuracy as compared to plain deep neural networks and traditional ML techniques for a given input materials representation in the presence of big data.
Chapter 5 Nonparametric Regression
Chapter 5. Nonparametric Regression
Contents
1 Introduction
  1.1 Example: Prestige data
  1.2 Smoothers
2 Local polynomial regression: Lowess
  2.1 Window-span
  2.2 Inference
3 Kernel smoothing
4 Splines
  4.1 Number and position of knots
  4.2 Smoothing splines
5 Penalized splines

1 Introduction
A linear model is desirable because it is simple to fit, it is easy to understand, and there are many techniques for testing the assumptions. However, in many cases, data are not linearly related; therefore, we should not use linear regression. The traditional nonlinear regression model fits the model
y = f(X, β) + ε
where β = (β1, ..., βp)′ is a vector of parameters to be estimated and X is the matrix of predictor variables. The function f(.), relating the average value of the response y to the predictors, is specified in advance, as it is in a linear regression model. But, in some situations, the structure of the data is so complicated that it is very difficult to find a function that estimates the relationship correctly. A solution is nonparametric regression. The general nonparametric regression model is written in a similar manner, but f is left unspecified:
y = f(X) + ε = f(x1, ..., xp) + ε
Most nonparametric regression methods assume that f(.) is a smooth, continuous function, and that εi ∼ NID(0, σ²). An important case of the general nonparametric model is nonparametric simple regression, where we only have one predictor:
y = f(x) + ε
Nonparametric simple regression is often called scatterplot smoothing, because an important application is tracing a smooth curve through a scatterplot of y against x to display the underlying structure of the data.
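As an illustration of the first two items in the contents (a sketch assuming the Prestige data frame that ships with the car package), a lowess curve can be traced through the scatterplot of prestige against income:

    # minimal sketch: lowess smoothing of the Prestige data (assumes the car package is installed)
    library(car)                                  # makes the Prestige data frame available
    with(Prestige, {
      plot(income, prestige, pch = 20)
      lines(lowess(income, prestige, f = 2/3), lwd = 2, col = "red")   # f is the window-span
    })

Shrinking f makes the curve more wiggly; enlarging it pushes the fit toward a straight line.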
COVID-19 and Machine Learning: Investigation and Prediction Team #5
COVID-19 and Machine Learning: Investigation and Prediction
Team #5: Kiran Brar, Olivia Alexander, Kimberly Segura
Abstract. The COVID-19 virus has affected over four million people in the world. In the U.S. alone, the number of positive cases has exceeded one million, making it the most affected country. There is clear urgency to predict and ultimately decrease the spread of this infectious disease. This project was therefore motivated to test and determine various machine learning models that can accurately predict the number of confirmed COVID-19 cases in the U.S. using available time-series data. COVID-19 data was coupled with state demographic data to investigate the distribution of cases and potential correlations between demographic features. Concerning the four machine learning models tested, it was hypothesized that LSTM and XGBoost would result in the lowest errors due to the complexity and power of these models, followed by SVR and linear regression. However, linear regression and SVR had the best performance in this study, which demonstrates the importance of testing simpler models and only adding complexity if the data requires it. We note that LSTM's low performance was most likely due to the size of the training dataset available at the time of this research, as deep learning requires a vast amount of data. Additionally, each model's accuracy improved after implementing time-series preprocessing techniques of power transformations, normalization, and the overall restructuring of the time-series problem to a supervised machine learning problem using lagged values. This research can be furthered by predicting the number of deaths and recoveries as well as extending the models by integrating healthcare capacity and social restrictions in order to increase accuracy or to forecast infection, death, and recovery rates for future dates.
Kernel Smoothing Methods
Kernel Smoothing Methods
Hanchen Wang, Ph.D. Candidate in Information Engineering, University of Cambridge ([email protected])
September 29, 2019

Overview
1. 6.0 what is kernel smoothing?
2. 6.1 one-dimensional kernel smoothers
3. 6.2 selecting the width λ of the kernel
4. 6.3 local regression in R^p
5. 6.4 structured local regression models in R^p
6. 6.5 local likelihood and other models
7. 6.6 kernel density estimation and classification
8. 6.7 radial basis functions and kernels
9. 6.8 mixture models for density estimation and classification
10. 6.9 computational considerations
11. Q & A: relationship between kernel smoothing methods and kernel methods
12. one more thing: solution manual to these textbooks

6.0 What is a kernel smoothing method?
- a class of regression techniques that achieve flexibility in estimating the function f(X) over the domain R^p by fitting a different but simple model separately at each query point x0
- the resulting estimated function f(X) is smooth in R^p
- fitting gets done at evaluation time; memory-based methods require in principle little or no training, similar to kNN / lazy learning
- they require hyperparameter settings such as the metric window size λ
- kernels are mostly used as a device for localization, rather than as the high-dimensional (implicit) feature extractors of kernel methods

6.1 One-dimensional kernel smoothers: overview
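As a concrete complement to item 6.2 above (an illustration, not taken from the slides), the width λ of a Nadaraya-Watson smoother can be chosen by leave-one-out cross-validation:

    # sketch: Gaussian-kernel Nadaraya-Watson smoother with bandwidth chosen by LOOCV
    # (simulated data; variable names are illustrative)
    set.seed(2)
    x <- runif(200); y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
    nw <- function(x0, x, y, h) {
      w <- dnorm((x - x0) / h)          # kernel weights centred at the query point x0
      sum(w * y) / sum(w)
    }
    loocv <- function(h) {
      mean(sapply(seq_along(x), function(i) (y[i] - nw(x[i], x[-i], y[-i], h))^2))
    }
    hs     <- seq(0.02, 0.30, by = 0.02)
    h_best <- hs[which.min(sapply(hs, loocv))]
    h_best                               # bandwidth with the smallest leave-one-out error

Small bandwidths overfit (low bias, high variance); large ones oversmooth, and the cross-validation score trades the two off.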
Input Output Kernel Regression: Supervised and Semi-Supervised Structured Output Prediction with Operator-Valued Kernels
Journal of Machine Learning Research 17 (2016) 1-48. Submitted 11/15; Revised 7/16; Published 9/16
Input Output Kernel Regression: Supervised and Semi-Supervised Structured Output Prediction with Operator-Valued Kernels
Céline Brouard ([email protected]), Helsinki Institute for Information Technology HIIT, Department of Computer Science, Aalto University, 02150 Espoo, Finland; IBISC, Université d'Évry Val d'Essonne, 91037 Évry cedex, France
Marie Szafranski ([email protected]), ENSIIE & LaMME, Université d'Évry Val d'Essonne, CNRS, INRA, 91037 Évry cedex, France; IBISC, Université d'Évry Val d'Essonne, 91037 Évry cedex, France
Florence d'Alché-Buc ([email protected]), LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 46, rue Barrault, 75013 Paris, France; IBISC, Université d'Évry Val d'Essonne, 91037 Évry cedex, France
Editor: Koji Tsuda
Abstract: In this paper, we introduce a novel approach, called Input Output Kernel Regression (IOKR), for learning mappings between structured inputs and structured outputs. The approach belongs to the family of Output Kernel Regression methods devoted to regression in feature space endowed with some output kernel. In order to take into account structure in input data and benefit from kernels in the input space as well, we use the Reproducing Kernel Hilbert Space theory for vector-valued functions. We first recall the ridge solution for supervised learning and then study the regularized hinge loss-based solution used in Maximum Margin Regression. Both models are also developed in the context of the semi-supervised setting. In addition we derive an extension of Generalized Cross Validation for model selection in the case of the least-square model.
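For orientation, the ridge solution mentioned in the abstract reduces, in the ordinary scalar-output case, to kernel ridge regression; a rough sketch on made-up data (this does not implement the paper's operator-valued kernels):

    # plain scalar-output kernel ridge regression with an RBF kernel (illustrative data)
    set.seed(3)
    X <- matrix(runif(100), ncol = 2)                  # 50 hypothetical input points in R^2
    y <- sin(3 * X[, 1]) + X[, 2]^2 + rnorm(50, sd = 0.1)
    rbf <- function(A, B, gamma = 1) {
      d2 <- outer(rowSums(A^2), rowSums(B^2), "+") - 2 * A %*% t(B)
      exp(-gamma * d2)                                 # Gaussian kernel matrix
    }
    lambda <- 0.1
    K      <- rbf(X, X)
    alpha  <- solve(K + lambda * diag(nrow(K)), y)     # ridge solution in feature space
    Xnew   <- matrix(runif(10), ncol = 2)
    y_hat  <- rbf(Xnew, X) %*% alpha                   # predictions at new inputs

IOKR generalizes this picture by letting the outputs themselves live in a feature space defined by an output kernel.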
Nonparametric-Regression.Pdf
Nonparametric Regression
John Fox, Department of Sociology, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4M4, [email protected]
February 2004
Abstract: Nonparametric regression analysis traces the dependence of a response variable on one or several predictors without specifying in advance the function that relates the predictors to the response. This article discusses several common methods of nonparametric regression, including kernel estimation, local polynomial regression, and smoothing splines. Additive regression models and semiparametric models are also briefly discussed.
Keywords: kernel estimation; local polynomial regression; smoothing splines; additive regression; semiparametric regression
Nonparametric regression analysis traces the dependence of a response variable (y) on one or several predictors (xs) without specifying in advance the function that relates the response to the predictors:
E(yi) = f(x1i, ..., xpi)
where E(yi) is the mean of y for the ith of n observations. It is typically assumed that the conditional variance of y, Var(yi | x1i, ..., xpi), is a constant, and that the conditional distribution of y is normal, although these assumptions can be relaxed. Nonparametric regression is therefore distinguished from linear regression, in which the function relating the mean of y to the xs is linear in the parameters,
E(yi) = α + β1 x1i + ··· + βp xpi
and from traditional nonlinear regression, in which the function relating the mean of y to the xs, though nonlinear in its parameters, is specified explicitly,
E(yi) = f(x1i, ..., xpi; γ1, ..., γk)
In traditional regression analysis, the object is to estimate the parameters of the model, the βs or γs. In nonparametric regression, the object is to estimate the regression function directly.
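To make this concrete (an illustrative sketch, not Fox's own code), local polynomial regression estimates the regression function at a focal value x0 by a weighted least-squares fit in a neighbourhood of x0:

    # sketch: local linear regression evaluated on a grid of focal points (simulated data)
    set.seed(4)
    x <- runif(300); y <- cos(2 * pi * x) + rnorm(300, sd = 0.25)
    local_linear <- function(x0, x, y, h) {
      w   <- dnorm((x - x0) / h)            # Gaussian weights; a tricube kernel is also common
      fit <- lm(y ~ I(x - x0), weights = w) # weighted linear fit centred at x0
      unname(coef(fit)[1])                  # intercept = estimate of f(x0)
    }
    xs <- seq(0, 1, length.out = 100)
    fx <- sapply(xs, local_linear, x = x, y = y, h = 0.08)
    plot(x, y, pch = 20, col = "grey")
    lines(xs, fx, lwd = 2)                  # estimated regression function

Repeating the weighted fit at many focal values and joining the estimates yields the smooth curve; the bandwidth h controls how local the fits are.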