Weighted Least Squares


ST 430/514: Introduction to Regression Analysis / Statistics for Management and the Social Sciences II — Special Topics.

Recall the linear regression equation

$$E(Y) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k.$$

We have estimated the parameters $\beta_0, \beta_1, \ldots, \beta_k$ by minimizing the sum of squared residuals

$$\mathrm{SSE} = \sum_{i=1}^{n} (y_i - \hat y_i)^2 = \sum_{i=1}^{n} \left[ y_i - \left( \hat\beta_0 + \hat\beta_1 x_{i,1} + \hat\beta_2 x_{i,2} + \cdots + \hat\beta_k x_{i,k} \right) \right]^2.$$

Sometimes we want to give some observations more weight than others. We achieve this by minimizing a weighted sum of squares:

$$\mathrm{WSSE} = \sum_{i=1}^{n} w_i (y_i - \hat y_i)^2 = \sum_{i=1}^{n} w_i \left[ y_i - \left( \hat\beta_0 + \hat\beta_1 x_{i,1} + \hat\beta_2 x_{i,2} + \cdots + \hat\beta_k x_{i,k} \right) \right]^2.$$

The resulting $\hat\beta$s are called weighted least squares (WLS) estimates, and the WLS residuals are $\sqrt{w_i}\,(y_i - \hat y_i)$.

Why use weights?

Suppose that the variance is not constant: $\mathrm{var}(Y_i) = \sigma_i^2$. If we use weights $w_i \propto 1/\sigma_i^2$, the WLS estimates have smaller standard errors than the ordinary least squares (OLS) estimates. That is, the OLS estimates are inefficient relative to the WLS estimates.

In fact, using weights proportional to $1/\sigma_i^2$ is optimal: no other choice of weights gives smaller standard errors. When you specify weights, regression software calculates standard errors on the assumption that the weights are proportional to $1/\sigma_i^2$.

How to choose the weights

If you have many replicates for each unique combination of the $x$s, use the sample variance $s_i^2$ to estimate $\mathrm{var}(Y \mid x_i)$. Often you will not have enough replicates to give good variance estimates; the text suggests grouping observations that are "nearest neighbors". Alternatively, you can use the regression diagnostic plots.

Example: Florida road contracts

    dot11 <- read.table("Text/Exercises&Examples/DOT11.txt", header = TRUE)
    l1 <- lm(BIDPRICE ~ LENGTH, dot11)
    summary(l1)
    plot(l1)

The first diagnostic plot uses unweighted residuals $y_i - \hat y_i$, but the others use weighted residuals. Recall also that they are "standardized residuals"

$$z_i^* = \frac{y_i - \hat y_i}{s\sqrt{1 - h_i}},$$

which are called Studentized residuals in the text. With weights, the standardized residuals are

$$z_i^* = \sqrt{w_i}\,\frac{y_i - \hat y_i}{s\sqrt{1 - h_i}}.$$

Note that the "Scale-Location" plot shows an increasing trend. Try weights that are proportional to powers of x = LENGTH:

    # Try power -1:
    plot(lm(BIDPRICE ~ LENGTH, dot11, weights = 1/LENGTH))
    # Still slightly increasing; try power -2:
    plot(lm(BIDPRICE ~ LENGTH, dot11, weights = 1/LENGTH^2))
    # Now slightly decreasing.

summary() shows that the fitted equations are all very similar; weights = 1/LENGTH gives the smallest standard errors.
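As a quick check of that claim, the comparison can be tabulated directly from the summary() output of each fit. The following is a minimal sketch, not part of the original slides; it assumes the dot11 data frame and variable names used above.

    # Sketch: compare the slope estimate and its standard error across
    # the candidate weighting schemes (assumes dot11 is already loaded).
    fits <- list(
      ols         = lm(BIDPRICE ~ LENGTH, dot11),
      inv_length  = lm(BIDPRICE ~ LENGTH, dot11, weights = 1/LENGTH),
      inv_length2 = lm(BIDPRICE ~ LENGTH, dot11, weights = 1/LENGTH^2)
    )
    # Pull the LENGTH coefficient and its standard error from each summary()
    sapply(fits, function(f) coef(summary(f))["LENGTH", c("Estimate", "Std. Error")])

Per the slides' conclusion, the slope estimates should be nearly identical across the three fits, with the weights = 1/LENGTH fit reporting the smallest standard error.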
Often the weights are determined by fitted values, not by the independent variable:

    # Try power -1:
    plot(lm(BIDPRICE ~ LENGTH, dot11, weights = 1/fitted(l1)))
    # About flat; but try power -2:
    plot(lm(BIDPRICE ~ LENGTH, dot11, weights = 1/fitted(l1)^2))
    # Now definitely decreasing.

summary() shows that the fitted equations are again very similar; weights = 1/fitted(l1) gives the smallest standard errors.

Note

Standard errors are computed as if the weights are known constants. In the last case, we used weights based on a preliminary OLS fit. Theory shows that in large samples the standard errors are also valid with estimated weights.

Note

When you specify weights $w_i$, lm() fits the model

$$\sigma_i^2 = \frac{\sigma^2}{w_i},$$

and the "Residual standard error" $s$ is an estimate of $\sigma$:

$$s^2 = \frac{\sum_{i=1}^{n} w_i (y_i - \hat y_i)^2}{n - p}.$$

If you change the weights, the meaning of $\sigma$ (and $s$) changes, so you cannot compare the residual standard errors for different weighting schemes (cf. page 488, foot).
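To make the meaning of $s$ concrete, it can be recomputed by hand from a weighted fit and compared with the value summary() reports. This is a minimal sketch, not part of the original slides; it assumes the dot11 data frame used above.

    # Sketch: verify s^2 = sum(w_i * (y_i - yhat_i)^2) / (n - p) for a weighted fit
    w2 <- lm(BIDPRICE ~ LENGTH, dot11, weights = 1/LENGTH)
    s2 <- sum(weights(w2) * residuals(w2)^2) / df.residual(w2)
    sqrt(s2)           # recomputed residual standard error
    summary(w2)$sigma  # should match the value printed by summary()

Because the weights enter this sum directly, rescaling or changing the weighting scheme changes what $s$ estimates, which is why residual standard errors from differently weighted fits are not comparable.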