Evaluation of Forecasting Techniques and Forecast Errors

Total Pages: 16

File Type: PDF, Size: 1020 KB

Evaluation of Forecasting Techniques and Forecast Errors
With Focus on Intermittent Demand

LICENTIATE THESIS

Peter Wallström
Luleå, May 2009
Luleå University of Technology
Department of Business Administration and Social Sciences
Division of Industrial Marketing, e-commerce and Logistics
Industrial Logistics
Printed by Universitetstryckeriet, Luleå 2009
ISSN: 1402-1757
ISBN 978-91-86233-63-1
www.ltu.se

I'm just doing my rock'n'roll duty, creating a buzz buzz buzz (Mitchell/Dubois)
Like the fly on the wheel, who says "What a lot of dust we're raising" (Lee/Lifeson/Peart)
The hypothesis has vanished into thin air (Johannes Kepler)

Abstract

Deciding in advance the amount of resources required next week or next month can be both a complicated and hazardous task depending on the situation, even when it is known when the resources will be needed. Intermittent demand, or slow-moving demand, in which periods without any demand are interrupted by occasional periods with demand, is even more difficult to forecast. If the demand is underestimated, sales and therefore revenues are lost. If the demand is overestimated, the stock increases in the best case; in the worst case, the items lie unsold until they become obsolete. Items with intermittent demand can account for up to 60% of the total stock value for all items.
This thesis addresses the topic of forecasting intermittent demand and how to measure the accuracy of the chosen forecast method or methods. Four forecasting methods are tested on almost 18 months of empirical demand data from a manufacturing company. The tested forecasting methods are single exponential smoothing, Croston, and two modifications of the Croston method: one by Syntetos and Boylan, the other by Segerstedt (modified Croston). Four start values and eight smoothing constants are tested. The methods are evaluated with different accuracy measures: variance (MSE and MAD), bias (CFE and the maximum and minimum values of CFE) and sMAPE. In addition, a new complementary measure of bias is introduced: Periods in Stock (PIS). PIS considers the time aspect, that is, when the forecast error occurred, not just the size of the error. Two variants each of MAD and MSE are also tested. To improve the evaluation of the bias measures, the percentage of demand occasions that cannot be fulfilled is used.

The relationship between the different errors for a given method is examined with principal component analysis (PCA). The errors are also examined with logistic regression to find out whether a particular forecasting method is favoured by particular accuracy measures. The logistic regression is based on descriptive statistics for the time series plus the mean absolute change, which considers the sequence of the time series as well as its variation. Ranking and error quotients between different methods are other applied techniques.

The results of the research both confirm and contradict earlier findings. Among the confirming results is the bias of the different methods: Croston and modified Croston overestimate the demand, while Syntetos and Boylan's Croston variant tends to underestimate it. Single exponential smoothing is relatively bias-free when low smoothing constants are used.
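The measures named above can be made concrete in a short sketch. This is an illustrative implementation assuming the conventional definitions (error = demand - forecast, CFE as the running sum of errors, and PIS as the negated accumulation of CFE over time); the exact MAD/MSE variants tested in the thesis may differ.

```python
# Illustrative accuracy measures for an intermittent demand series.
# Assumes e_t = demand - forecast; PIS accumulates CFE over time, so it
# weighs an error by how long it persists, not only by its size.

def error_measures(demand, forecast):
    """Return a dict of accuracy measures for paired demand/forecast series."""
    errors = [d - f for d, f in zip(demand, forecast)]
    n = len(errors)
    cfe_series = []            # running cumulated forecast error
    running = 0.0
    for e in errors:
        running += e
        cfe_series.append(running)
    mad = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    # sMAPE variant that skips periods where demand and forecast are both zero
    smape_terms = [abs(d - f) / ((abs(d) + abs(f)) / 2)
                   for d, f in zip(demand, forecast) if (abs(d) + abs(f)) > 0]
    smape = 100 * sum(smape_terms) / len(smape_terms)
    pis = -sum(cfe_series)     # periods in stock: time-weighted bias
    return {"CFE": cfe_series[-1], "MaxCFE": max(cfe_series),
            "MinCFE": min(cfe_series), "MAD": mad, "MSE": mse,
            "sMAPE": smape, "PIS": pis}

demand   = [0, 0, 5, 0, 0, 3, 0, 4]   # intermittent demand series
forecast = [1, 1, 1, 1, 1, 1, 1, 1]   # naive constant forecast
m = error_measures(demand, forecast)
```

On this toy series the constant forecast under-forecasts overall (CFE = 4, PIS = -7), while the modest MAD of 1.75 on its own says nothing about bias; that kind of discrepancy between measures is what the thesis examines.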
The contradictory results are that CFE is not a suitable measure of bias, at least when the number of forecasting periods is limited. The value of CFE can indicate an unbiased forecast when both PIS and the percentage of unmet demand indicate a biased forecast. PIS is also less sensitive to transient demand events that can distort CFE. PIS is recommended as a bias measure for limited time series, especially for intermittent demand, along with the percentage of unmet demand. Another result is that MAD is not reliable, since the measure in certain circumstances favours methods that underestimate the demand.

Abstract in Swedish

Deciding in advance the amount of resources required next week or next month can be both a complicated and risky task depending on the situation, even though it is known when the resources are needed. Intermittent demand, or slow-moving demand, is when many periods lack demand and then demand suddenly occurs in a single period; this makes forecasting harder. If demand is underestimated, it leads to lost sales and thereby lost revenues. If demand is overestimated, the stock increases in the best case; in the worst case, products remain unsold and eventually become obsolete. Items with intermittent demand can account for up to 60% of the total stock value for all items. This thesis addresses forecasting of intermittent demand and how the forecast errors of the chosen forecasting method or methods should be measured. Four forecasting methods are evaluated on almost 18 months of empirical demand data from a manufacturing company. The evaluated methods are exponential smoothing, Croston, and two modified Croston variants: the method of Syntetos and Boylan, and Segerstedt's modified Croston. Four different start values and eight smoothing constants are used.
The forecasting methods are evaluated with different types of forecast errors: variance (MSE and MAD), bias (CFE and the maximum and minimum values of CFE) and sMAPE. In addition, a complementary measure of bias is used: Periods in Stock (PIS). PIS takes the time aspect into account, not just the size of the forecast errors. Two variants each of MAD and MSE are also examined. To improve the evaluation of the bias measures, the percentage of demand occasions that a forecasting method cannot fulfil is examined.

The relationship between the different forecast errors is examined using principal component analysis (PCA). The forecast errors are also examined with binary logistic regression to determine whether certain forecasting methods are favoured by certain accuracy measures. The logistic regression is based on descriptive statistics for the time series plus the mean absolute change, which takes the order of the time series into account as well as its variation. Ranking and quotients between forecast errors from different methods are other applied techniques.

The results of the research both confirm and contradict earlier research. Among the confirming results is the bias of the different methods: Croston and modified Croston overestimate the demand, while Syntetos and Boylan's method underestimates it. Exponential smoothing is relatively free from bias when the smoothing constants are low. The contradictory results are that CFE is not suitable to use when the number of forecast periods is limited. The value of CFE can indicate that the forecast is free from bias when both PIS and the percentage of unmet demand indicate a biased forecast. PIS is also less sensitive to transient demand events that can distort CFE. PIS is recommended as a bias measure when the time series is limited, especially for intermittent demand, together with the percentage of unmet demand.
Another result is that MAD is not reliable, since the measure, under certain conditions, favours forecasting methods that underestimate the demand.

Contents

Abstract
Abstract in Swedish
Contents
Preface
Abbreviations
1 Introduction
  1.1 Background
  1.2 The Complexity of Forecasting
  1.3 Aim and Objective
  1.4 Research Questions
  1.5 Limitations
  1.6 Definitions
2 Theoretical Framework
  2.1 Introduction and Notation
  2.2 Forecasting Methods
    2.2.1 Single Exponential Smoothing (SES)
    2.2.2 The Croston Method (Croston)
    2.2.3 Croston According to Syntetos and Boylan (SyBo)
    2.2.4 Modified Croston (ModCr)
Recommended publications
  • Canada: Selected Issues (IMF Country Report No. 05/116)
    © 2005 International Monetary Fund. March 2005. IMF Country Report No. 05/116. Canada: Selected Issues. This Selected Issues paper for Canada was prepared by a staff team of the International Monetary Fund as background documentation for the periodic consultation with the member country. It is based on the information available at the time it was completed on February 1, 2005. The views expressed in this document are those of the staff team and do not necessarily reflect the views of the government of Canada or the Executive Board of the IMF. The policy of publication of staff reports and other documents by the IMF allows for the deletion of market-sensitive information. Copies of this report are available to the public from IMF Publication Services, 700 19th Street, N.W., Washington, D.C. 20431. Prepared by T. Bayoumi, M. Mühleisen, I. Ivaschenko, A. Justiniano, K. Krajnyák, B. Sutton and A. Swiston (all WHD), D. Botman, S. Danninger, and D. Hauner (all FAD), G. De Nicoló, R. Corker, and A. Tieman (all MFD), and R. Cardarelli (RES). Approved by the Western Hemisphere Department, February 1, 2005. Contents: Part I, Real Sector Issues.
  • A Note on the Mean Absolute Scaled Error
    A note on the Mean Absolute Scaled Error. Philip Hans Franses, Econometric Institute, Erasmus School of Economics. Abstract: Hyndman and Koehler (2006) recommend that the Mean Absolute Scaled Error (MASE) become the standard when comparing forecast accuracy. This note supports their claim by showing that the MASE fits nicely within the standard statistical procedures for testing equal forecast accuracy initiated by Diebold and Mariano (1995). Various other criteria do not fit, as they do not imply the relevant moment properties, and this is illustrated in some simulation experiments. Keywords: Forecast accuracy, Forecast error measures, Statistical testing. This revised version: February 2015. Introduction: Consider the case where an analyst has two competing one-step-ahead forecasts for a time series variable y_t, namely ŷ_{1,t} and ŷ_{2,t}, for a sample t = 1, 2, ..., T. The forecasts bring along the forecast errors ε̂_{1,t} and ε̂_{2,t}, respectively. To examine which of the two sets of forecasts provides the most accuracy, the analyst can use criteria based on some average or median of loss functions of the forecast errors. Well-known examples are the Root Mean Squared Error (RMSE) or the Median Absolute Error (MAE); see Hyndman and Koehler (2006) for an exhaustive list of criteria.
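The MASE computation this note builds on can be sketched in a few lines, following the Hyndman and Koehler definition (out-of-sample MAE scaled by the in-sample MAE of the one-step naive forecast); the numbers are illustrative.

```python
# Mean Absolute Scaled Error: out-of-sample forecast errors scaled by the
# in-sample mean absolute error of the one-step naive (random-walk) forecast.

def mase(train, actual, predicted):
    scale = sum(abs(train[t] - train[t - 1])
                for t in range(1, len(train))) / (len(train) - 1)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    return mae / scale

train = [10, 12, 11, 13, 12]             # in-sample history: naive MAE = 1.5
actual, predicted = [14, 13], [12, 14]   # out-of-sample MAE = 1.5
m = mase(train, actual, predicted)       # 1.0: no better than naive
```

A MASE below 1 means the forecast beats the in-sample naive method on average; the measure is scale-free and well defined even with zero-demand periods, which is why the note can embed it in Diebold-Mariano-type tests.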
  • Demand Forecasting
    LUNCH & LEARN: DEMAND FORECASTING. Dr. Andre Murphy, [email protected], 619.591.9715. Definitions: demand forecasting is the process of making predictions of the future based on past and present data and analysis of trends for a product or service; demand management includes demand planning, prioritizing and management. Demand types are independent and dependent; forecast types are qualitative and quantitative. Independent demand is demand for an end item and is forecasted (examples: uniforms, tail hooks). Dependent demand is demand for the assemblies, components, or ingredients necessary to make an end item, and is calculated from independent demand (examples: Ferrium M54 and C64 steel; wheel, tire, nut). Qualitative forecasting is used when there is little historical data to rely on, such as when establishing sparing for a completely new and unique item, and intuition or expert judgement is required; it is a subjective prediction of demand, often used in the absence of historical data, that may use mathematical modelling and expert judgement such as the Delphi method (a panel of subject-matter experts). Qualitative methods include grass roots (deriving future demand by asking the end user or an organization closest to the end user), market research (attempting to identify end-user patterns, emerging trends, and new products to support), panel consensus (deriving future estimates from the synergy of a panel of experts in the subject area), the Delphi method (similar to panel consensus but anonymous), and historical analogy (identifying another, similar commodity to the one being forecasted). Quantitative forecasting is used when historical data exists and is helpful in calculating future demand; it predicts demand based upon historical data.
  • A Note on the Mean Absolute Scaled Error (Philip Hans Franses, Econometric Institute, Erasmus School of Economics, The Netherlands)
    International Journal of Forecasting 32 (2016) 20–22. A note on the Mean Absolute Scaled Error. Philip Hans Franses, Econometric Institute, Erasmus School of Economics, The Netherlands. Abstract: Hyndman and Koehler (2006) recommend that the Mean Absolute Scaled Error (MASE) should become the standard when comparing forecast accuracies. This note supports their claim by showing that the MASE fits nicely within the standard statistical procedures initiated by Diebold and Mariano (1995) for testing equal forecast accuracies. Various other criteria do not fit, as they do not imply the relevant moment properties, and this is illustrated in some simulation experiments. Keywords: Forecast accuracy, Forecast error measures, Statistical testing. © 2015 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved. Introduction: Consider the case where an analyst has two competing one-step-ahead forecasts for a time series variable y_t, namely ŷ_{1,t} and ŷ_{2,t}, for a sample t = 1, 2, ..., T. The forecasts have the associated forecast errors ε̂_{1,t} and ε̂_{2,t}, respectively. To examine which of the two sets of forecasts provides the best accuracy, the analyst can use criteria based on some average or median of loss functions of the forecast errors. Denoting the estimate of the standard deviation of the mean loss differential d̄_{12} by σ̂_{d̄12}, the DM test for one-step-ahead forecasts is DM = d̄_{12} / σ̂_{d̄12} ~ N(0, 1) under the null hypothesis of equal forecast accuracy. Even though Diebold and Mariano (1995, p. 254) claim that this result holds for any arbitrary function f, it is quite clear that the function should allow for proper moment conditions in the forecast errors.
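The DM statistic in the excerpt can be sketched as follows, for squared-error loss and one-step-ahead forecasts; this simple version uses the plain sample variance and omits the HAC correction used for multi-step horizons, and the error series are illustrative.

```python
# Diebold-Mariano statistic for squared-error loss, one-step-ahead forecasts.
# Simplified sketch: no autocorrelation (HAC) correction of the variance.
import math

def dm_statistic(e1, e2):
    d = [a * a - b * b for a, b in zip(e1, e2)]   # loss differential series
    n = len(d)
    dbar = sum(d) / n                             # mean loss differential
    var_dbar = sum((x - dbar) ** 2 for x in d) / (n - 1) / n
    return dbar / math.sqrt(var_dbar)             # ~ N(0, 1) under H0

e1 = [1.0, -2.0, 1.0, 2.0, -1.0]   # errors of forecast 1 (illustrative)
e2 = [0.5, -1.0, 1.0, 1.0, -0.5]   # errors of forecast 2
dm = dm_statistic(e1, e2)
```

A DM value above 1.96 in absolute terms rejects equal accuracy at the 5% level; here forecast 2's smaller squared errors drive the statistic positive.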
  • Forecasting Analysts' Forecast Errors
    Forecasting Analysts' Forecast Errors. By Jing Liu and Wei Su, Anderson School of Management, University of California, Los Angeles. Liu is from the Anderson School at UCLA and Cheung Kong Graduate School of Business; Su is from the Anderson School at UCLA. We thank David Aboody, Carla Hayn, Jack Hughes, and Stan Markov for helpful suggestions. All errors are our own. Abstract: In this paper, we examine whether analysts' forecast errors are predictable out of sample. Following market-efficiency studies by Ou and Penman (1989) and Lev and Thiagarajan (1993), we employ a comprehensive list of forecasting variables. Our estimation procedures include traditional OLS as well as a more robust procedure that minimizes the sum of absolute errors (LAD). While in sample we find significant prediction power using both OLS and LAD, we find far stronger results using LAD out of sample, with an average reduction in forecast errors of over thirty percent measured by the mean squared error, or near ten percent measured by the mean absolute error. Most of the prediction power comes from firms whose forecasts are predicted to be too optimistic. The stock market seems to understand the inefficiencies in analyst forecasts: a trading strategy based on the predicted analyst forecast errors does not generate abnormal profits. Conversely, analysts seem to fail to understand the inefficiencies present in stock prices: a trading strategy directly based on the predicted stock returns generates significant abnormal returns, and the abnormal returns are associated with predictable analyst forecast errors.
  • How to Calculate Forecast Accuracy for Stocked Items with a Lumpy Demand - a Case Study at Alfa Laval
    School of Innovation, Design and Engineering. How to calculate forecast accuracy for stocked items with a lumpy demand: a case study at Alfa Laval. Master thesis, advanced level, 30 credits. Product and process development, production and logistics. Elsa Ragnerstam. Abstract: Inventory management is an important part of well-functioning logistics. Nearly all the literature on optimal inventory management uses criteria of cost minimization and profit maximization. For a forecasting system to work well, the inventory must be kept in balance, but different factors can create uncertainty and make this balance difficult to maintain. One important factor is customer demand. Over half of the stocked items are held in stock to cover irregular orders and uncertain demand. Customer demand can be categorized into four classes: smooth, erratic, intermittent and lumpy. Items with a lumpy demand, i.e. items that are both intermittent and erratic, are the hardest to manage and to forecast, because both the quantity demanded and the timing of demand vary greatly, and these items may also have periods of zero demand. Forecasting them is therefore a challenge for companies: the demanded quantities appear at random intervals, leaving many periods with zero demand. Due to the lumpy demand, an ongoing problem for most organizations is the inaccuracy of forecasts. It is almost impossible to produce exact forecasts: no matter how good the forecasts are or how complex the forecasting techniques, the instability of the markets ensures that forecasts will always be wrong and that errors will therefore always exist.
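The four-way categorisation in this abstract (smooth, erratic, intermittent, lumpy) is commonly operationalised with the average inter-demand interval (ADI) and the squared coefficient of variation of the demand sizes; the cutoffs 1.32 and 0.49 below are the widely cited Syntetos-Boylan thresholds, an assumption rather than something stated in the thesis itself.

```python
# Classify a demand series as smooth / erratic / intermittent / lumpy
# using ADI (average inter-demand interval) and CV^2 of demand sizes.
# Cutoffs 1.32 and 0.49 are the commonly cited Syntetos-Boylan values.

def classify_demand(series):
    sizes = [x for x in series if x > 0]        # nonzero demand occasions
    adi = len(series) / len(sizes)              # avg periods per demand occasion
    mean = sum(sizes) / len(sizes)
    var = sum((x - mean) ** 2 for x in sizes) / len(sizes)
    cv2 = var / mean ** 2                       # squared coefficient of variation
    if adi < 1.32:
        return "smooth" if cv2 < 0.49 else "erratic"
    return "intermittent" if cv2 < 0.49 else "lumpy"
```

Lumpy items combine both problems: long demand-free gaps (high ADI) and highly variable sizes (high CV²), which is why they are the hardest class to forecast.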
  • Lecture 15 Forecasting
    RS – EC2 - Lecture 15: Forecasting. A shock is often used to describe an unexpected change in a variable or in the value of the error terms at a particular time period; it is defined as the difference between the expected value (a forecast) and what actually happened. One of the most important objectives in time series analysis is to forecast future values; it is the primary objective of ARIMA modeling. There are two types of forecasts: in-sample (prediction), the expected value of the random variable given the estimates of the parameters; and out-of-sample (forecasting), the value of a future random variable that is not observed by the sample. Basic concepts: any forecast needs an information set I_T, which includes the data, models and/or assumptions available at time T, and the forecasts will be conditional on I_T. The variable to forecast, Y_{T+l}, is a random variable that can be fully characterized by a pdf. In general it is difficult to obtain the pdf of the forecast, so in practice we report a point estimate (the forecast) and a confidence interval. Notation: the forecast of Y_{T+l} made at time T is Ŷ_{T+l|T}; the T+l forecast error is e_{T+l} = Y_{T+l} − Ŷ_{T+l|T}; the mean squared error is MSE = E[e_{T+l}²]. To get a point estimate, we need a cost function to judge the various alternatives; this cost function is called a loss function, and since we are working with forecasts, we work with an expected loss function. A popular loss function is the MSE, which is quadratic and symmetric. We can also use asymmetric functions, for example functions that penalize positive errors more than negative errors.
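The notation above (point forecast, forecast error, MSE loss) can be traced through a toy example; the AR(1)-style forecast rule and its coefficient are assumed for illustration, not estimated from data.

```python
# One-step-ahead point forecasts from an assumed AR(1) rule
# y_hat[t+1|t] = phi * y[t], the resulting forecast errors
# e[t+1] = y[t+1] - y_hat[t+1|t], and their mean squared error (MSE).
phi = 0.8                              # illustrative AR(1) coefficient
series = [2.0, 2.5, 2.2, 2.0, 2.4]
forecasts = [phi * series[t] for t in range(len(series) - 1)]
errors = [series[t + 1] - forecasts[t] for t in range(len(forecasts))]
mse = sum(e * e for e in errors) / len(errors)
```

Under the quadratic, symmetric MSE loss the optimal point forecast is the conditional mean; an asymmetric loss, as the notes mention, would shift the optimal forecast away from the mean.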
  • Performance Metrics (Error Measures) in Machine Learning Regression, Forecasting and Prognostics: Properties and Typology
    Performance Metrics (Error Measures) in Machine Learning Regression, Forecasting and Prognostics: Properties and Typology. Alexei Botchkarev, Principal, GS Research & Consulting, Adjunct Prof., Department of Computer Science, Ryerson University, Toronto, Ontario, Canada. Abstract: Performance metrics (error measures) are vital components of the evaluation frameworks in various fields. The intention of this study was to provide an overview of a variety of performance metrics and of approaches to their classification. The main goal of the study was to develop a typology that will help to improve our knowledge and understanding of metrics and facilitate their selection in machine learning regression, forecasting and prognostics. Based on an analysis of the structure of numerous performance metrics, we propose a framework of metrics with four categories: primary metrics, extended metrics, composite metrics, and hybrid sets of metrics. The paper identifies three key components (dimensions) that determine the structure and properties of primary metrics: the method of determining point distance, the method of normalization, and the method of aggregating point distances over a data set. The paper proposes a new primary-metrics typology designed around these key components. The suggested typology has been shown to cover most of the commonly used primary metrics, over 40 in total. The main contribution of this paper is in ordering knowledge of performance metrics and enhancing understanding of their structure and properties by proposing a new typology, a generic mathematical formula for primary metrics, and a visualization chart. Keywords: performance metrics, error measures, accuracy measures, distance, similarity, dissimilarity, properties, typology, classification, machine learning, regression, forecasting, prognostics, prediction, evaluation, estimation, modeling. Note: This is a draft paper submitted to a peer-reviewed journal.
  • Has Macroeconomic Forecasting Changed After the Great Recession? Panel-Based Evidence on Accuracy and Forecaster Behaviour from Germany
    A service of ZBW, the Leibniz Information Centre for Economics (EconStor). Döpke, Jörg; Fritsche, Ulrich; Müller, Karsten. Working Paper: Has macroeconomic forecasting changed after the Great Recession? Panel-based evidence on accuracy and forecaster behaviour from Germany. DEP (Socioeconomics) Discussion Papers, Macroeconomics and Finance Series, No. 3/2018. Provided in cooperation with Hamburg University, Department Socioeconomics. Suggested citation: Döpke, Jörg; Fritsche, Ulrich; Müller, Karsten (2018): Has macroeconomic forecasting changed after the Great Recession? Panel-based evidence on accuracy and forecaster behaviour from Germany, DEP (Socioeconomics) Discussion Papers, Macroeconomics and Finance Series, No. 3/2018, Hamburg University, Department Socioeconomics, Hamburg. This version is available at: http://hdl.handle.net/10419/194020. Terms of use: documents in EconStor may be saved and copied for personal and scholarly purposes; you are not to copy documents for public or commercial purposes, to exhibit them publicly, to make them publicly available on the internet, or to distribute or otherwise use them in public. If the documents have been made available under an Open Content Licence (especially Creative Commons Licences), you may exercise further usage rights as specified in the indicated licence.
  • Has Macroeconomic Forecasting Changed After the Great Recession? – Panel-Based Evidence on Accuracy and Forecaster Behaviour from Germany
    Department Socioeconomics. Has Macroeconomic Forecasting changed after the Great Recession? Panel-based Evidence on Accuracy and Forecaster Behaviour from Germany. Jörg Döpke, Ulrich Fritsche, Karsten Müller. DEP (Socioeconomics) Discussion Papers, Macroeconomics and Finance Series, 3/2018. Hamburg, 2018. May 2, 2018. Abstract: Based on a panel of annual data for 17 growth and inflation forecasts from 14 institutions for Germany, we analyse forecast accuracy for the periods before and after the Great Recession, including measures of directional-change accuracy based on Receiver Operating Characteristic (ROC) curves. We find only small differences in forecast accuracy between the two time periods. We test whether the conditions for forecast rationality hold in both time periods. We document an increased cross-section variance of forecasts and a changed correlation between inflation and growth forecast errors after the crisis, which might hint at changed forecaster behaviour. This is also supported by estimated loss functions before and after the crisis, which suggest a stronger incentive to avoid overestimations (growth) and underestimations (inflation) after the crisis. Estimating loss functions for a 10-year rolling window also reveals shifts in the level and direction of loss asymmetry and strengthens the impression of changed forecaster behaviour after the Great Recession.