Demand Forecasting


DEMAND FORECASTING - LUNCH & LEARN
Dr. Andre Murphy | [email protected] | 619.591.9715

Agenda: Demands & Forecasts | Forecasting Models

DEFINITIONS
- Demand Forecasting: the process of predicting the future demand for a product or service based on past and present data and analysis of trends.
- Demand Management: includes demand planning, prioritizing, and management.

DEMANDS AND FORECASTS
Demand types: Independent, Dependent
Forecast types: Qualitative, Quantitative

INDEPENDENT DEMAND
- Demand for an end item; it is forecasted.
- Examples: uniforms, tail hook.

DEPENDENT DEMAND
- Demand for the assemblies, components, or ingredients needed to make an end item; it is calculated from independent demand.
- Examples: Ferrium M54 and C64 steel; wheel, tire, nut, ...

DEFINITIONS
- Qualitative Forecasting: used when there is little historical data to rely on (such as when establishing sparing for a completely new and unique item) and intuition or expert judgement is required.

QUALITATIVE FORECAST
A subjective prediction of demand.
- Often used in the absence of historical data.
- May still use mathematical modeling.
- May include expert judgment or the "Delphi" method (a panel of SMEs).

QUALITATIVE METHODS
- Grass Roots: derive future demand by asking the end user, or the organization closest to the end user.
- Market Research: identify end-user patterns, emerging trends, and new products to support.
- Panel Consensus: derive future estimates from the synergy of a panel of experts in the subject area.
- Delphi Method: like Panel Consensus, but the panelists remain anonymous.
- Historical Analogy: identify a similar commodity to the one being forecasted.

DEFINITIONS
- Quantitative Forecasting: used when historical data exists and is helpful in calculating future demand; forecasting based on numbers.

QUANTITATIVE FORECAST
Predicts demand based upon historical data.
- Often uses math models. Examples: Moving Average, Exponential Smoothing, ...

QUANTITATIVE METHODS
- Time Series: models that predict future demand based on past history.
- Causal Relationships: models that use statistical techniques to establish relationships between demand and various outside factors.
- Simulation: models that can incorporate some randomness and non-linear effects.

FORECASTING MODELS
How much history you use changes the picture. Given the same ten periods of demand:

Period:  1   2   3   4   5   6   7   8   9   10
Demand:  10  10  15  20  15  10  15  20  25  10

- Using only the first 3 periods: highest demand = 15, lowest = 10, average ~ 12.
- Using all 10 periods: highest demand = 25, lowest = 10, average = 15.

FORECAST MODELS
- Moving Average Forecast: predicts future need based upon a "rolling" average of historical data.
- Weighted Moving Average Forecast: predicts future need based upon weighted historical data; a factor is applied to each historical period to weight anticipated demand.

3-period moving average formula:

Ft+1 = (Dt-2 + Dt-1 + Dt)/3

MOVING AVERAGE FORECAST MODEL
3-month moving average, each forecast based on the previous 3 months of data:

Period:             1   2   3   4   5   6   7   8   9   10
Historical Demand:  10  10  15  20  15  10  15  20  25  10
3-Month Mov. Avg.:              12  15  17  15  13  15  20

4-month moving average, each forecast based on the previous 4 months of data:

Period:             1   2   3   4   5   6   7   8   9   10
Historical Demand:  10  10  15  20  15  10  15  20  25  10
4-Month Mov. Avg.:                  14  15  15  15  15  18

6-month moving average, each forecast based on the previous 6 months of data:

Period:             1   2   3   4   5   6   7   8   9   10
Historical Demand:  10  10  15  20  15  10  15  20  25  10
6-Month Mov. Avg.:                          13  14  16  18

Choosing the number of previous periods:
- Many periods: the forecast is slow to respond to changing conditions; growth and seasonality can result in shortages of supply.
- Few periods: very little smoothing occurs; nervous forecasts create problems for production and material planners.

EXPONENTIAL SMOOTHING FORECAST MODEL
Simple exponential smoothing:
1. Use historical data to create an initial forecast.
2. Weight the prior forecast (Ft) and prior demand (Dt) to create each future forecast:

Ft+1 = αDt + (1 – α)Ft

where α is the exponential smoothing constant.

Forecasts for the next 8 periods with α = 0.6:

Period:    Prev  Curr  1   2   3   4   5   6   7   8
Forecast:  15    18    13  14  18  16  12  14  18  22
Demand:    10    10    15  20  15  10  15  20  25  10

Forecasts for the next 8 periods with α = 0.3:

Period:    Prev  Curr  1   2   3   4   5   6   7   8
Forecast:  15    18    16  15  17  16  14  15  16  19
Demand:    10    10    15  20  15  10  15  20  25  10

Choosing α:
- High α (close to 1): the weight applied to the previous period's actual demand is large; responsive to change, but risks nervous forecasts.
- Low α (close to 0): the weight applied to the previous period's actual demand is small, so most weight stays on the prior forecast; slow to respond to change, but stable.

Worked example, with D27 = 900 (actual demand), F27 = 1,000 (the forecast made for period 27), and α = 0.1:

F28 = αD27 + (1 – α)F27
F28 = 0.1 x 900 + (1 – 0.1) x 1,000
F28 = 90 + 900
F28 = 990

DEMAND FORECAST ACCURACY
Factors affecting forecast accuracy include:
- The level at which you calculate: product, family, business unit, or total?
- The date used to calculate: ship date, invoice date, or receipt date?
- If comparing against contractor/supplier dates, which dates are they using?
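The moving average and exponential smoothing calculations above can be sketched in Python. This is a minimal sketch using the deck's demand series and worked example; the function names are illustrative, not from the slides:

```python
def moving_average_forecast(demand, window):
    # Each forecast is the plain average of the previous `window` demands
    return [sum(demand[t - window:t]) / window
            for t in range(window, len(demand) + 1)]

def exponential_smoothing_forecast(demand, initial_forecast, alpha):
    # F(t+1) = alpha * D(t) + (1 - alpha) * F(t)
    forecasts = [initial_forecast]
    for d in demand:
        forecasts.append(alpha * d + (1 - alpha) * forecasts[-1])
    return forecasts

demand = [10, 10, 15, 20, 15, 10, 15, 20, 25, 10]

# 3-period moving average: forecasts for periods 4-10, plus one step beyond
print([round(f) for f in moving_average_forecast(demand, 3)])
# -> [12, 15, 17, 15, 13, 15, 20, 18]

# The worked example from the slides: D27 = 900, F27 = 1000, alpha = 0.1
print(exponential_smoothing_forecast([900], 1000, 0.1)[-1])
# -> 990.0
```

The first seven moving-average values match the 3-month table above; the final value (18) is the model's next-period forecast beyond the table.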
DEMAND FORECAST ACCURACY
"The devil is in the details":

               Location A   Location B   Total
Forecast       25           75           100
Actual         75           25           100
Error (%)      67%          100%         0%
Accuracy (%)   33%          0%           100%

In aggregate, the forecast accuracy is 100%. But if it isn't the proper mix or location, your customers may not be happy.

Forecast accuracy measures the goodness of fit of the demand model:
- How well did the forecast model predict actual demand?
- How do you measure a good fit? What is bias?
- Examples of forecast error measurements: Forecast Error, Percent Error; Mean Error, Mean Percent Error; Mean Absolute Error, Mean Absolute Percent Error; Mean Squared Error; Mean Absolute Deviation, % Mean Absolute Deviation; ...

FORECAST ACCURACY (CONT.)
The same example, scored two ways:

               Depot A   Depot B   Total
Forecast       25        75        100
Actual         75        25        100

- Example provided: percent forecast accuracy of 33% (Depot A), 0% (that is, -100% floored at 0%) (Depot B), and 100% in total.
- Alternative: weight by demand volume using absolute errors (50 + 50 = 100), giving 0% accuracy overall; a simple item average of 33% and -100%, with no weighting, gives -33.5%.

DEMAND FORECAST ACCURACY
Forecast error measurements, where Y = actual demand, F = forecast, e = error, ē = mean error, and n = number of periods:

Forecast Error:           et = Yt – Ft
Percent Forecast Error:   PEt = (Yt – Ft)/Yt x 100
Mean Error:               ME = (1/n) Σ ei
Mean Absolute Error:      MAE = (1/n) Σ |ei|
Mean Squared Error:       MSE = (1/n) Σ ei^2
Mean Absolute Deviation:  MAD = (1/n) Σ |ei – ē|
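Applying these error measures to the Location A/B example shows why aggregate accuracy hides a bad mix. A sketch, with an illustrative function name:

```python
def forecast_errors(actual, forecast):
    # Per-period error e(t) = Y(t) - F(t), plus common summary measures
    e = [y - f for y, f in zip(actual, forecast)]
    n = len(e)
    mean_e = sum(e) / n
    return {
        "ME": mean_e,                                # Mean Error (bias)
        "MAE": sum(abs(x) for x in e) / n,           # Mean Absolute Error
        "MSE": sum(x * x for x in e) / n,            # Mean Squared Error
        "MAD": sum(abs(x - mean_e) for x in e) / n,  # Mean Absolute Deviation
    }

actual = [75, 25]    # actual demand at Locations A and B
forecast = [25, 75]  # forecast demand at Locations A and B

m = forecast_errors(actual, forecast)
print(m["ME"])   # -> 0.0: the errors cancel, so in aggregate the forecast looks perfect
print(m["MAE"])  # -> 50.0: yet each location was off by 50 units
```

A near-zero ME alongside a large MAE is exactly the mix problem in the table: no aggregate bias, but large errors at each individual location.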