Performance Metrics (Error Measures) in Machine Learning Regression, Forecasting and Prognostics: Properties and Typology
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Evaluation of Several Error Measures Applied to the Sales Forecast System of Chemicals Supply Enterprises
http://ijba.sciedupress.com — International Journal of Business Administration, Vol. 11, No. 4, 2020.
Ma. del Rocío Castillo Estrada 1, Marco Edgar Gómez Camarillo 1, María Eva Sánchez Parraguirre 1, Marco Edgar Gómez Castillo 2, Efraín Meneses Juárez 1 & M. Javier Cruz Gómez 3. 1 Facultad de Ciencias Básicas, Ingeniería y Tecnología, Universidad Autónoma de Tlaxcala, Mexico; 2 Escuela de Graduados en Administración, Instituto Tecnológico y de Estudios Superiores de Monterrey, Mexico; 3 Department of Chemical Engineering, Universidad Nacional Autónoma de México (UNAM), Coyoacán, México. Correspondence: M. Javier Cruz Gómez, Full Professor, Department of Chemical Engineering, UNAM, Coyoacán, México. Tel: 52-55-5622-5359.
Received: May 2, 2020. Accepted: June 12, 2020. Published online: June 30, 2020. doi:10.5430/ijba.v11n4p39. URL: https://doi.org/10.5430/ijba.v11n4p39
Abstract: The objective of industry in general, and of the chemical industry in particular, is to satisfy consumer demand for products, and the best way to do so is to forecast future sales and plan operations accordingly. Since the choice of the best sales forecast model depends largely on the accuracy of the selected indicator (Tofallis, 2015), this work compares seven techniques for quantifying the error of sales forecast models, in order to select the most appropriate one. These error evaluation techniques are: Mean Percentage Error (MPE), Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Mean Absolute Scaled Error (MASE), Symmetric Mean Absolute Percentage Error (SMAPE) and Mean Absolute Arctangent Percentage Error (MAAPE).
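For concreteness, here is a minimal NumPy sketch (an editorial illustration, not code from the paper) of how these seven measures can be computed; the SMAPE variant and the in-sample naive scaling for MASE are common conventions and not necessarily the exact ones the authors use:

```python
import numpy as np

def error_measures(y, f, y_train):
    """Seven forecast-error measures for actuals y and forecasts f.

    y_train is the history used to scale MASE via the mean absolute
    error of the in-sample naive (previous-value) forecast."""
    e = y - f
    pe = 100 * e / y                      # percentage errors (y must be nonzero)
    mpe = np.mean(pe)
    mse = np.mean(e**2)
    mae = np.mean(np.abs(e))
    mape = np.mean(np.abs(pe))
    smape = np.mean(200 * np.abs(e) / (np.abs(y) + np.abs(f)))
    scale = np.mean(np.abs(np.diff(y_train)))   # MAE of in-sample naive forecast
    mase = np.mean(np.abs(e) / scale)
    maape = np.mean(np.arctan(np.abs(e / y)))   # bounded arctangent variant of MAPE
    return dict(MPE=mpe, MSE=mse, MAE=mae, MAPE=mape,
                SMAPE=smape, MASE=mase, MAAPE=maape)
```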
Another Look at Measures of Forecast Accuracy
Rob J. Hyndman, Department of Econometrics and Business Statistics, Monash University, VIC 3800, Australia. Telephone: +61-3-9905-2358. Email: [email protected]
Anne B. Koehler, Department of Decision Sciences & Management Information Systems, Miami University, Oxford, Ohio 45056, USA. Telephone: +1-513-529-4826. E-mail: [email protected]
2 November 2005
Abstract: We discuss and compare measures of accuracy of univariate time series forecasts. The methods used in the M-competition and the M3-competition, and many of the measures recommended by previous authors on this topic, are found to be degenerate in commonly occurring situations. Instead, we propose that the mean absolute scaled error become the standard measure for comparing forecast accuracy across multiple time series.
Keywords: forecast accuracy, forecast evaluation, forecast error measures, M-competition, mean absolute scaled error.
1 Introduction
Many measures of forecast accuracy have been proposed in the past, and several authors have made recommendations about what should be used when comparing the accuracy of forecast methods applied to univariate time series data. It is our contention that many of these proposed measures of forecast accuracy are not generally applicable, can be infinite or undefined, and can produce misleading results. We provide our own recommendations of what should be used in empirical comparisons. In particular, we do not recommend the use of any of the measures that were used in the M-competition and the M3-competition.
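Hyndman and Koehler's objection that percentage-based measures "can be infinite or undefined" is easy to reproduce; a small hypothetical example (not from the paper) where the actuals contain zeros:

```python
import numpy as np

y      = np.array([0.0, 2.0, 0.0, 3.0])        # actuals containing zeros
f      = np.array([1.0, 2.0, 1.0, 2.0])        # some forecast
y_hist = np.array([1.0, 0.0, 2.0, 0.0, 3.0])   # training history

with np.errstate(divide="ignore"):
    mape = np.mean(np.abs((y - f) / y)) * 100   # division by zero -> inf

scale = np.mean(np.abs(np.diff(y_hist)))        # MAE of in-sample naive forecast
mase  = np.mean(np.abs(y - f)) / scale

print(mape)  # inf: MAPE degenerates when actuals contain zeros
print(mase)  # finite: scaled errors remain well defined
```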
Lecture 13: Simple Linear Regression in Matrix Format
36-401, Section B, Fall 2015. 13 October 2015. See updates and corrections at http://www.stat.cmu.edu/~cshalizi/mreg/
Contents: 1 Least Squares in Matrix Form (1.1 The Basic Matrices; 1.2 Mean Squared Error; 1.3 Minimizing the MSE); 2 Fitted Values and Residuals (2.1 Residuals; 2.2 Expectations and Covariances); 3 Sampling Distribution of Estimators; 4 Derivatives with Respect to Vectors (4.1 Second Derivatives; 4.2 Maxima and Minima); 5 Expectations and Variances with Vectors and Matrices; 6 Further Reading.
So far, we have not used any notions, or notation, that goes beyond basic algebra and calculus (and probability). This has forced us to do a fair amount of book-keeping, as it were, by hand. This is just about tolerable for the simple linear model, with one predictor variable. It will get intolerable if we have multiple predictor variables. Fortunately, a little application of linear algebra will let us abstract away from a lot of the book-keeping details, and make multiple linear regression hardly more complicated than the simple version. These notes will not remind you of how matrix algebra works. However, they will review some results about calculus with matrices, and about expectations and variances with vectors and matrices. Throughout, bold-faced letters will denote matrices, as $\mathbf{a}$ as opposed to a scalar $a$.
1 Least Squares in Matrix Form
Our data consists of $n$ paired observations of the predictor variable $X$ and the response variable $Y$, i.e., $(x_1, y_1), \ldots, (x_n, y_n)$.
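The destination of these notes is the matrix least-squares solution $\hat{\beta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}$; a minimal sketch of that computation (an editorial illustration, not code from the notes), using a linear solve rather than an explicit matrix inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 3.0 * x + rng.normal(0, 1, n)        # true intercept 2, slope 3

X = np.column_stack([np.ones(n), x])           # design matrix with intercept column
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y via a linear solve

fitted    = X @ beta_hat
residuals = y - fitted
mse = np.mean(residuals**2)
print(beta_hat, mse)    # estimates near (2, 3); MSE near the noise variance 1
```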
In This Segment, We Discuss a Little More the Mean Squared Error
MITOCW | MITRES6_012S18_L20-04_300k
In this segment, we discuss a little more the mean squared error. Consider some estimator. It can be any estimator, not just the sample mean. We can decompose the mean squared error as a sum of two terms. Where does this formula come from? Well, we know that for any random variable Z, this formula is valid. And if we let Z be equal to the difference between the estimator and the value that we're trying to estimate, then we obtain this formula here. The expected value of our random variable Z squared is equal to the variance of that random variable plus the square of its mean. Let us now rewrite these two terms in a more suggestive way. We first notice that theta is a constant. When you add or subtract a constant from a random variable, the variance does not change. So this term is the same as the variance of theta hat. This quantity here, we will call it the bias of the estimator. It tells us whether theta hat is systematically above or below the unknown parameter theta that we're trying to estimate. And using this terminology, this term here is just equal to the square of the bias. So the mean squared error consists of two components, and these capture different aspects of an estimator's performance. Let us see what they are in a concrete setting. Suppose that we're estimating the unknown mean of some distribution, and that our estimator is the sample mean. In this case, the mean squared error is the variance, which we know to be sigma squared over n, plus the bias term.
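The decomposition described in the segment, MSE = variance + bias², can be checked by simulation; a sketch for the sample-mean case the transcript discusses (illustrative code, not part of the original course materials):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n = 5.0, 2.0, 20          # true mean, std deviation, sample size
reps = 200_000

samples = rng.normal(theta, sigma, size=(reps, n))
theta_hat = samples.mean(axis=1)        # the sample mean as estimator

mse  = np.mean((theta_hat - theta)**2)
var  = np.var(theta_hat)
bias = np.mean(theta_hat) - theta

print(mse, var + bias**2)   # both approximately sigma^2 / n = 0.2
print(sigma**2 / n)
```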
Minimum Mean Squared Error Model Averaging in Likelihood Models
Statistica Sinica 26 (2016), 809-840. doi: http://dx.doi.org/10.5705/ss.202014.0067
Ali Charkhi 1, Gerda Claeskens 1 and Bruce E. Hansen 2. 1 KU Leuven and 2 University of Wisconsin, Madison.
Abstract: A data-driven method for frequentist model averaging weight choice is developed for general likelihood models. We propose to estimate the weights which minimize an estimator of the mean squared error of a weighted estimator in a local misspecification framework. We find that in general there is not a unique set of such weights, meaning that predictions from multiple model averaging estimators might not be identical. This holds in both the univariate and multivariate case. However, we show that a unique set of empirical weights is obtained if the candidate models are appropriately restricted. In particular, a suitable class of models are the so-called singleton models, where each model includes only one parameter from the candidate set. This restriction results in a drastic reduction in the computational cost of model averaging weight selection relative to methods which include weights for all possible parameter subsets. We investigate the performance of our methods in both linear models and generalized linear models, and illustrate the methods in two empirical applications.
Key words and phrases: Frequentist model averaging, likelihood regression, local misspecification, mean squared error, weight choice.
1. Introduction
We study a focused version of frequentist model averaging where the mean squared error plays a central role. Suppose we have a collection of models $S \in \mathcal{S}$ with which to estimate a population quantity $\mu$ (this is the focus), leading to a set of estimators $\{\hat{\mu}_S : S \in \mathcal{S}\}$.
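As a toy illustration of the weight-choice idea only (deliberately simplified: two candidate estimators with assumed bias and variance estimates treated as independent, not the paper's local-misspecification machinery):

```python
import numpy as np

# Hypothetical candidate estimates of a scalar focus mu, with assumed
# (estimated) biases and variances -- all values are made up for illustration.
mu_hat = np.array([1.9, 2.3])
bias   = np.array([0.0, 0.3])
var    = np.array([0.25, 0.04])

w = np.linspace(0, 1, 1001)          # weight on model 1; model 2 gets 1 - w
# Estimated MSE of w*mu1 + (1-w)*mu2, assuming independent estimators
mse = (w * bias[0] + (1 - w) * bias[1])**2 + w**2 * var[0] + (1 - w)**2 * var[1]

w_opt = w[np.argmin(mse)]            # weight minimizing the estimated MSE
mu_avg = w_opt * mu_hat[0] + (1 - w_opt) * mu_hat[1]
print(w_opt, mu_avg)
```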
Calibration: Calibrate Your Model
DUE TODAY: COMPUTER FILES AND QUESTIONS for Assgn #6
Assignment #6: Steady State Model Calibration. Calibrate your model. If you want to conduct a transient calibration, talk with me first. Perform calibration using UCODE. Be sure your report addresses global, graphical, and spatial measures of error as well as common sense. Consider more than one conceptual model and compare the results. Remember to make a prediction with your calibrated models and evaluate confidence in your prediction. Be sure to save your files because you will want to use them later in the semester.
Suggested Calibration Report Outline:
- Title
- Introduction: describe the system to be calibrated (use portions of your previous report as appropriate)
- Observations to be matched in calibration: type of observations; locations of observations; observed values; uncertainty associated with observations; explain specifically what the observation will be matched to in the model
- Calibration Procedure
- Evaluation of calibration: residuals; parameter values; quality of calibrated model
- Calibrated model results
- Predictions
- Uncertainty associated with predictions
- Problems encountered, if any
- Comparison with uncalibrated model results
- Assessment of future work needed, if appropriate
- Summary/Conclusions
- References
Submit the paper as hard copy and include it in your zip file of model input and output. Submit the model files (input and output for both simulations) in a zip file labeled ASSGN6_LASTNAME.ZIP.
Calibration (Parameter Estimation, Optimization, Inversion, Regression): adjusting parameter values, boundary conditions, model conceptualization, and/or model construction until the model simulation matches field observations. We calibrate because (1) the field measurements are not accurate reflections of the model scale properties, and (2)
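UCODE performs this kind of nonlinear regression for you; purely as an illustration of the underlying idea (a hypothetical toy model, not UCODE and not a groundwater simulation), calibration by least squares might look like the sketch below:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical "model": a two-parameter response curve standing in for
# a simulation whose parameters (a, b) we want to calibrate.
def simulate(params, t):
    a, b = params
    return a * (1 - np.exp(-b * t))

t_obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
h_obs = np.array([1.1, 1.9, 2.9, 3.6, 3.9])   # made-up field observations

def residuals(params):
    return simulate(params, t_obs) - h_obs    # misfit to be minimized

fit = least_squares(residuals, x0=[1.0, 0.1]) # initial parameter guess
print(fit.x)                                  # calibrated parameter values
print(residuals(fit.x))                       # calibration residuals to evaluate
```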
Bayes Estimator Recap - Example
Biostatistics 602 - Statistical Inference. Lecture 16: Evaluation of Bayes Estimator. Hyun Min Kang. March 14th, 2013.
Last Lecture:
- What is a Bayes estimator?
- Is a Bayes estimator the best unbiased estimator?
- Compared to other estimators, what are advantages of a Bayes estimator?
- What is a conjugate family?
- What are the conjugate families of the Binomial, Poisson, and Normal distributions?
Recap - Bayes Estimator:
- $\theta$: parameter; $\pi(\theta)$: prior distribution; $X \mid \theta \sim f_X(x \mid \theta)$: sampling distribution.
- Posterior distribution of $\theta \mid x$: $\pi(\theta \mid x) = \dfrac{f_X(x \mid \theta)\,\pi(\theta)}{m(x)}$ (joint over marginal), where $m(x) = \int f_X(x \mid \theta)\,\pi(\theta)\,d\theta$ (Bayes' rule).
- The Bayes estimator of $\theta$ is $E(\theta \mid x) = \int_{\theta \in \Omega} \theta\,\pi(\theta \mid x)\,d\theta$.
Recap - Example:
- $X_1, \ldots, X_n \overset{\text{i.i.d.}}{\sim} \text{Bernoulli}(p)$, with prior $\pi(p) \sim \text{Beta}(\alpha, \beta)$.
- Prior guess: $\hat{p} = \frac{\alpha}{\alpha + \beta}$.
- Posterior distribution: $\pi(p \mid x) \sim \text{Beta}\left(\sum x_i + \alpha,\; n - \sum x_i + \beta\right)$.
- Bayes estimator: $\hat{p} = \dfrac{\alpha + \sum x_i}{\alpha + \beta + n} = \dfrac{\sum x_i}{n} \cdot \dfrac{n}{\alpha + \beta + n} + \dfrac{\alpha}{\alpha + \beta} \cdot \dfrac{\alpha + \beta}{\alpha + \beta + n}$.
Loss Function: Let $L(\theta, \hat{\theta})$ be a function of $\theta$ and $\hat{\theta}$.
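A quick numerical check of the Beta-Bernoulli recap above (illustrative code, not part of the slides); it verifies that the Bayes estimator is the stated weighted average of the sample proportion and the prior guess:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, n, p_true = 2.0, 2.0, 50, 0.3

x = rng.binomial(1, p_true, n)    # i.i.d. Bernoulli(p) sample
s = x.sum()

p_bayes = (alpha + s) / (alpha + beta + n)   # posterior mean of Beta(alpha+s, beta+n-s)
p_mle   = s / n                              # sample proportion
prior   = alpha / (alpha + beta)             # prior guess

# The Bayes estimator is a weighted average of the MLE and the prior guess,
# with weight n / (alpha + beta + n) on the data.
w = n / (alpha + beta + n)
assert np.isclose(p_bayes, w * p_mle + (1 - w) * prior)
print(prior, p_mle, p_bayes)
```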
01031-9781451878004.Pdf
© 2005 International Monetary Fund. March 2005. IMF Country Report No. 05/116.
Canada: Selected Issues. This Selected Issues paper for Canada was prepared by a staff team of the International Monetary Fund as background documentation for the periodic consultation with the member country. It is based on the information available at the time it was completed on February 1, 2005. The views expressed in this document are those of the staff team and do not necessarily reflect the views of the government of Canada or the Executive Board of the IMF. The policy of publication of staff reports and other documents by the IMF allows for the deletion of market-sensitive information. To assist the IMF in evaluating the publication policy, reader comments are invited and may be sent by e-mail to [email protected]. Copies of this report are available to the public from International Monetary Fund Publication Services, 700 19th Street, N.W., Washington, D.C. 20431. Telephone: (202) 623-7430. Telefax: (202) 623-7201. E-mail: [email protected]. Internet: http://www.imf.org. Price: $15.00 a copy.
INTERNATIONAL MONETARY FUND. CANADA: Selected Issues. Prepared by T. Bayoumi, M. Mühleisen, I. Ivaschenko, A. Justiniano, K. Krajnyák, B. Sutton and A. Swiston (all WHD), D. Botman, S. Danninger, and D. Hauner (all FAD), G. De Nicoló, R. Corker, and A. Tieman (all MFD), and R. Cardarelli (RES). Approved by the Western Hemisphere Department, February 1, 2005.
Contents: PART I: REAL SECTOR ISSUES. I.
1.2 Examples of Descriptive Statistics in Statistics, a Summary Statistic Is a Single Numerical Measure of an At- Tribute of a Sample
Arkansas Tech University, MATH 3513: Applied Statistics I, Dr. Marcel B. Finan
1.2 Examples of Descriptive Statistics
In statistics, a summary statistic is a single numerical measure of an attribute of a sample. In this section we discuss commonly used summary statistics.
Sample Mean. The sample mean (also known as average or arithmetic mean) is a measure of the "center" of the data. Suppose that a sample consists of the data values $x_1, x_2, \ldots, x_n$. Then the sample mean is given by the formula
$$\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i.$$
Example 1.2.1. A random sample of 10 ATU students reported missing school 7, 6, 8, 4, 2, 7, 6, 7, 6, 5 days. Find the sample mean.
Solution. The sample mean is
$$\bar{x} = \frac{7+6+8+4+2+7+6+7+6+5}{10} = 5.8 \text{ days}.$$
The Sample Variance and Standard Deviation. The standard deviation is a measure of the spread of the data around the sample mean. When the spread of data is large we expect most of the sample values to be far from the mean, and when the spread is small we expect most of the sample values to be close to the mean. Suppose that a sample consists of the data values $x_1, x_2, \ldots, x_n$. One way of measuring how these values are spread around the mean is to compute the deviations of these values from the mean, i.e., $x_1 - \bar{x},\ x_2 - \bar{x},\ \ldots,\ x_n - \bar{x}$, and then take their average, i.e., find how far, on average, each data value is from the mean.
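The same computation in NumPy (an illustrative aside, not from the notes); the sample variance here uses the usual n − 1 denominator, a convention the excerpt cuts off before defining:

```python
import numpy as np

days = np.array([7, 6, 8, 4, 2, 7, 6, 7, 6, 5])   # missed school days

mean = days.mean()                         # sample mean: 5.8
dev  = days - mean                         # deviations from the mean
var  = np.sum(dev**2) / (len(days) - 1)    # sample variance (n - 1 denominator)
std  = np.sqrt(var)                        # sample standard deviation

print(mean, var, std)
```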
Predictive Analytics
L. Torgo, [email protected], Faculdade de Ciências / LIAAD-INESC TEC, LA, Universidade do Porto. Dec, 2014.
Introduction: What is Prediction?
Definition. Prediction (forecasting) is the ability to anticipate the future. Prediction is possible if we assume that there is some regularity in what we observe, i.e., if the observed events are not random.
Example. Medical diagnosis: given a historical record containing the symptoms observed in several patients and the respective diagnosis, try to forecast the correct diagnosis for a new patient for whom we know the symptoms.
Prediction Models. Prediction models are obtained on the basis of the assumption that there is an unknown mechanism that maps the characteristics of the observations into conclusions/diagnoses. The goal of prediction models is to discover this mechanism. Going back to the medical diagnosis, what we want is to know how symptoms influence the diagnosis. We have access to a data set with "examples" of this mapping, e.g., this patient had symptoms $x, y, z$ and the conclusion was that he had disease $p$. We try to obtain, using the available data, an approximation of the unknown function that maps the observation descriptors into the conclusions, i.e., Prediction = $f$(Descriptors).
"Entities" Involved in Predictive Modelling. Descriptors of the observation: the set of variables that describe the properties (features, attributes) of the cases in the data set. Target variable: what we want to predict/conclude about the observations. The goal is to obtain an approximation of the function $Y = f(X_1, X_2, \ldots, X_p)$, where $Y$ is the target variable and $X_1, X_2, \ldots, X_p$ are the variables describing the characteristics of the cases.
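As a minimal sketch of Prediction = f(Descriptors) (hypothetical data and a nearest-neighbour stand-in for the unknown mechanism, not code from the slides):

```python
import numpy as np

# Toy past cases: descriptors (temperature, cough) and known conclusions.
X = np.array([[37.0, 1], [39.5, 0], [38.2, 1], [36.8, 0]])  # made-up descriptors
y = np.array([0, 1, 1, 0])                                  # conclusion: disease yes/no

# Nearest-neighbour "model": approximate the unknown mapping by
# returning the conclusion of the most similar past case.
def predict(x_new):
    distances = np.linalg.norm(X - x_new, axis=1)
    return y[np.argmin(distances)]

print(predict(np.array([39.0, 1])))   # forecast for a new patient
```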
A Note on the Mean Absolute Scaled Error
Philip Hans Franses, Econometric Institute, Erasmus School of Economics
Abstract: Hyndman and Koehler (2006) recommend that the Mean Absolute Scaled Error (MASE) becomes the standard when comparing forecast accuracy. This note supports their claim by showing that the MASE nicely fits within the standard statistical procedures to test equal forecast accuracy initiated in Diebold and Mariano (1995). Various other criteria do not fit, as they do not imply the relevant moment properties, and this is illustrated in some simulation experiments.
Keywords: Forecast accuracy, Forecast error measures, Statistical testing
This revised version: February 2015. Address for correspondence: Econometric Institute, Erasmus School of Economics, POB 1738, NL-3000 DR Rotterdam, the Netherlands, [email protected]. Thanks to the Editor, an anonymous Associate Editor and an anonymous reviewer for helpful comments and to Victor Hoornweg for excellent research assistance.
1 Introduction
Consider the case where an analyst has two competing one-step-ahead forecasts for a time series variable $y_t$, namely $\hat{y}_{1,t}$ and $\hat{y}_{2,t}$, for a sample $t = 1, 2, \ldots, T$. The forecasts bring along the forecast errors $\hat{\varepsilon}_{1,t}$ and $\hat{\varepsilon}_{2,t}$, respectively. To examine which of the two sets of forecasts provides the most accuracy, the analyst can use criteria based on some average or median of loss functions of the forecast errors. Well-known examples are the Root Mean Squared Error (RMSE) or the Median Absolute Error (MAE); see Hyndman and Koehler (2006) for an exhaustive list of criteria, and see also Table 1 below.
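Franses's argument is that absolute scaled errors plug directly into the Diebold-Mariano (1995) testing framework; a bare-bones sketch of that test for one-step-ahead errors (illustrative only, omitting the autocorrelation correction needed at longer horizons):

```python
import numpy as np
from scipy import stats

def dm_test(e1, e2):
    """Diebold-Mariano test on a loss differential, here absolute errors
    of one-step-ahead forecasts (no autocorrelation correction)."""
    d = np.abs(e1) - np.abs(e2)                     # loss differential series
    T = len(d)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / T)      # t-type statistic
    p = 2 * (1 - stats.norm.cdf(abs(dm)))           # two-sided normal p-value
    return dm, p

rng = np.random.default_rng(3)
e1 = rng.normal(0, 1.0, 100)   # errors of forecast 1
e2 = rng.normal(0, 1.3, 100)   # errors of forecast 2 (noisier)
print(dm_test(e1, e2))
```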
Error Measures for Generalizing About Forecasting Methods: Empirical Comparisons
By J. Scott Armstrong and Fred Collopy. Reprinted with permission from International Journal of Forecasting, 8 (1992), 69-80.
Abstract: This study evaluated measures for making comparisons of errors across time series. We analyzed 90 annual and 101 quarterly economic time series. We judged error measures on reliability, construct validity, sensitivity to small changes, protection against outliers, and their relationship to decision making. The results lead us to recommend the Geometric Mean of the Relative Absolute Error (GMRAE) when the task involves calibrating a model for a set of time series. The GMRAE compares the absolute error of a given method to that from the random walk forecast. For selecting the most accurate methods, we recommend the Median RAE (MdRAE) when few series are available and the Median Absolute Percentage Error (MdAPE) otherwise. The Root Mean Square Error (RMSE) is not reliable, and is therefore inappropriate for comparing accuracy across series.
Keywords: Forecast accuracy, M-Competition, Relative absolute error, Theil's U.
1. Introduction
Over the past two decades, many studies have been conducted to identify which method will provide the most accurate forecasts for a given class of time series. Such generalizations are important because organizations often rely upon a single method for a given type of data. For example, a company might find that the Holt-Winters exponential smoothing method is accurate for most of its series, and thus decide to base its cash flow forecasts on this method. This paper examines error measures for drawing conclusions about the relative accuracy of extrapolation methods.
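One plain reading of the recommended measures in code (an editorial illustration, not from the paper; note the authors also control extreme RAEs by trimming, which this sketch omits):

```python
import numpy as np

def accuracy_vs_random_walk(y, f):
    """Relative absolute errors against the random-walk forecast,
    summarized as GMRAE, MdRAE, and MdAPE."""
    e_model = np.abs(y[1:] - f[1:])
    e_rw    = np.abs(y[1:] - y[:-1])    # random walk: forecast = previous actual
    rae     = e_model / e_rw            # relative absolute errors
    gmrae   = np.exp(np.mean(np.log(rae)))   # geometric mean of RAEs
    mdrae   = np.median(rae)
    mdape   = np.median(np.abs((y[1:] - f[1:]) / y[1:])) * 100
    return gmrae, mdrae, mdape

y = np.array([100.0, 104.0, 103.0, 108.0, 110.0])    # made-up actuals
f = np.array([np.nan, 101.0, 105.0, 106.0, 111.0])   # forecasts for t >= 1
print(accuracy_vs_random_walk(y, f))
```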