Model Selection and Model Over-Fitting


STATISTICS COLUMN

Shengping Yang PhD, Gilbert Berdine MD

Corresponding author: Shengping Yang PhD
Contact Information: [email protected]
DOI: 10.12746/swrccc2015.0312.160
The Southwest Respiratory and Critical Care Chronicles 2015;3(12)

I have collected some demographic, clinical, and blood test data from colon cancer patients treated from 2010 to 2012. I am interested in factors that potentially affect serum CRP levels shortly after diagnosis. I am thinking of using all the data (variables) in a regression model to evaluate such relationships. Is model over-fitting a concern?

Real-world data very often consist of true signal and random noise. Although it is ideal to fit only the true signal with a perfect statistical model that captures the underlying relationship between the outcome y = f(x) + ε and the independent variables (predictors) x, many times the true signal f(x) and the random noise ε cannot be separated from each other. In other words, we can only observe y and x; we cannot observe f(x), where f() is the function that defines the underlying relationship between y and x.

Let us recall the process of data collection: in reality, it is rarely feasible to evaluate the underlying relationships using data from an entire population. The common approach is to take random samples from the population and then use the sample data to infer such relationships. However, because of the random nature of sampling, as well as the random noise inherent in individual observations, statistical models fitted to different random samples (collected from the same population) will commonly have different parameters. Moreover, in situations where ε is substantial and the number of independent variables is large, a statistical model might mainly describe the random noise rather than the underlying relationship; this effect is called over-fitting. For example, if one fits n data points with n parameters, the fit will be exact, but it is unlikely to work well with another sample of n points from the same population. The opposite problem, under-fitting, occurs when too few parameters are used. A model that is under-fitted will work well only if the parameter left out is homogeneously distributed throughout the population.

The classical example of model over-fitting is fitting data with a polynomial regression. Suppose the true underlying relationship between x and y in a population is y = 30 + x^2 (this is rarely known in reality; we assume it is known for demonstration purposes). First, we randomly sample 9 observations from the population (panel A) and fit the data with both a second-order polynomial (red curve; the "correct" model) and a 10th-order polynomial (blue curve; the "over-fitted" model). Although the 10th-order polynomial fits the data perfectly, it substantially deviates from the true relationship (black curve). By comparison, although the second-order polynomial does not fit the data perfectly, it agrees well with the true relationship. Next, we take another independent random sample of 9 observations from the same population (panel B). The second-order polynomial developed on the first sample also has an acceptable fit to this second, independent sample. In contrast, the 10th-order polynomial fits it very poorly. In other words, the 10th-order polynomial models not only the underlying relationship but also the variation associated with random error; it is an over-fitted model. It is therefore not surprising that such a model fits one sample well but another one poorly.

[Figure: panel A shows the first sample (x, y); panel B shows the second, independent sample (newx, newy), each with the true relationship and the second- and 10th-order polynomial fits.]
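The over-fitting pattern in this example is easy to reproduce numerically. The sketch below is not the authors' actual simulation: it assumes an illustrative noise level and x range, and uses an 8th-degree polynomial (which interpolates 9 points exactly) in place of the article's 10th-order fit. It draws one sample of 9 points, fits both models, and scores them on a second, independent sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n=9):
    # Assumed settings: true signal y = 30 + x^2 plus Gaussian noise
    x = rng.uniform(-5, 5, n)
    return x, 30 + x**2 + rng.normal(0, 10, n)

x, y = sample()          # first sample (panel A)
newx, newy = sample()    # second, independent sample (panel B)

fit2 = np.polyfit(x, y, 2)   # the "correct" second-order model
fit8 = np.polyfit(x, y, 8)   # high-order model: interpolates all 9 points exactly

def mse(coefs, x, y):
    return np.mean((np.polyval(coefs, x) - y) ** 2)

print("first sample MSE :  deg 2 =", round(mse(fit2, x, y), 1),
      " deg 8 =", round(mse(fit8, x, y), 1))
print("second sample MSE:  deg 2 =", round(mse(fit2, newx, newy), 1),
      " deg 8 =", round(mse(fit8, newx, newy), 1))
```

On typical runs the high-order fit has near-zero error on the first sample and a far larger error than the quadratic on the second sample, mirroring panels A and B.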
Since the goal of any biomedical/clinical study is to disclose the true underlying relationship, it is critical to find the "correct" model in data analysis.

To find the "correct" model and avoid over-fitting, many methods have been proposed. The majority adopt one of the following strategies: 1) penalize models with more parameters - since an increased number of parameters in a model is associated with a higher probability of modeling the random error, penalizing extra parameters reduces the risk of over-fitting; 2) use validation data set(s) to evaluate the performance of the fitted model - since the true underlying relationship (but not the random noise) is consistent across samples randomly collected from the same population, models that consistently perform well on different samples are more likely to capture the true underlying relationship rather than the random noise.

1. Penalize models with more parameters

Penalizing additional parameters (predictors) can be achieved either by performing traditional model selection (based on different criteria) or by applying penalized regression models.

(A) Traditional model selection

i. Selection based on adjusted R-squared, Mallows' Cp, AIC, and BIC

R-squared is the percentage of outcome-variable variation explained by the model and describes how close the data are to the fitted regression. In general, the higher the R-squared value, the better the model fits. However, R-squared always increases when predictors are added, so models with more predictors always have higher R-squared values. The adjusted R-squared adjusts the R-squared value for the number of predictors and increases only if an additional predictor improves the model more than would be expected by chance. Thus, models with a higher adjusted R-squared are generally considered better.

Mallows' Cp estimates the mean squared prediction error and represents a compromise among factors including sample size, collinearity, and predictor effect sizes. Adequate models are those with Cp less than or equal to the number of parameters (including the constant) in the model.

AIC is Akaike's Information Criterion, and BIC is Schwarz's Bayesian Criterion. Both aim at a compromise between model goodness of fit and model complexity. The only difference between AIC and BIC is the penalty term; BIC is more stringent than AIC. The preferred models are those with the minimum AIC/BIC.
ii. Best subset / forward / backward / step-wise selection

Assuming that the total number of predictors is p, best subset selection fits 2^p models in total and chooses the best model based on criteria such as adjusted R-squared, Mallows' Cp, or AIC/BIC. However, if the total number of predictors is large, for example greater than 30, the computation becomes a serious issue.

In forward selection, the most significant variable (based on a pre-set significance level) is added to the model, one variable at a time, until no additional variable meets the criterion.

Backward selection starts with the full model that includes all the variables of interest and then drops non-significant variables one at a time, until all the variables left in the model are significant.

Step-wise selection allows both adding and dropping variables, so that dropped variables can be reconsidered.

As an alternative to these traditional model selection methods, penalized regressions achieve coefficient estimation and model selection simultaneously.

(B) Penalized regressions (LASSO regression and Elastic Net)

The LASSO (Least Absolute Shrinkage and Selection Operator) achieves model selection by penalizing the absolute size of the regression coefficients. In other words, LASSO includes a penalty term that constrains the size of the estimated coefficients. As a result, LASSO solutions have many coefficients set exactly to zero, and the larger the penalty applied, the more the estimates are shrunk towards zero. In general, the penalty parameter is chosen by cross validation to maximize out-of-sample fit.

Elastic Net regression was developed to overcome the limitations of LASSO and in general outperforms LASSO when the predictors are highly correlated.

2. Cross validation

Any random sample will differ from its population. From a given population, two independent samples share the true underlying relationship, but not the sample-specific variation. It is therefore important to assess how well a model developed on one sample performs on another independent sample, and to fine-tune the model parameters. Cross validation is an easy-to-implement tool for making such assessments.

(A) k-fold cross validation

k-fold cross validation is one of the most commonly used methods. The sample is randomly partitioned into k equal-size subsets; one of the k subsets is used as the validation set, and the other k-1 subsets are used as the training set. This process is repeated k times (folds), so that each of the k subsets is used exactly once as the validation set. Results from the k validations can then be averaged to produce a single estimate. As a special case, if k equals N, the total number of observations, k-fold cross validation is called leave-one-out cross validation.

(B) Random sub-sampling validation

In k-fold cross validation, the proportion of training to validation observations depends on the number of iterations (folds). In situations where the total number of observations is small, it makes sense to use random sub-sampling validation, such as the bootstrap. The disadvantages of random sub-sampling are that some observations might never be sampled and that the results might vary if the randomization is repeated.
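The two strategies are often combined in practice: a penalized regression is fitted, and cross validation chooses how strong the penalty should be. The sketch below does this with scikit-learn's LassoCV on simulated data; the data-generating model, the 5-fold setting, and the library choice are illustrative assumptions rather than anything specified in the column.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
n, p = 80, 10
X = rng.normal(size=(n, p))
# Only the first three predictors carry signal; the remaining seven are noise
y = 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(size=n)

# LASSO with the penalty parameter chosen by 5-fold cross validation
lasso = LassoCV(cv=5).fit(X, y)
print("penalty chosen by CV :", round(lasso.alpha_, 3))
print("estimated coefficients:", np.round(lasso.coef_, 2))  # many shrunk exactly to 0
```

A larger penalty forces more coefficients to zero; the cross-validated choice targets the value that predicts best on the held-out folds, which is exactly the out-of-sample criterion described above.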