
Why is Weather Forecasting Still Under a Cloud?

Abstract: Weather forecasts are now impressively accurate, with short-term predictions achieving success rates of around 85 per cent. Despite this, forecasting remains the butt of much folklore, such as the notion that taking an umbrella on the advice of a forecast makes rain less likely to fall. Using probability theory, I show that such folklore has some basis in reality via the so-called "base-rate effect". The public's intuitive recognition of this effect may well explain their continued scepticism about weather forecasts.

INTRODUCTION

Weather forecasting is one of the triumphs of applied mathematics. World-wide data collection, sophisticated numerical models and state-of-the-art computing have now been combined by meteorologists to forecast the behaviour of this complex and non-linear phenomenon with impressive accuracy. The UK Meteorological Office, widely regarded as one of the best forecasting services in the world, currently reports computer-model accuracies of 71 per cent for its 24 hr forecasts of rain, rising to 83 per cent following input from human forecasters [1].

Many people, however, remain resolutely sceptical of the reliability of weather forecasting: opinion polls show that dissatisfaction with Met Office forecasts currently runs at around 15-20 per cent [1]. Given the non-linearity of the weather system - and thus the sensitivity of forecasts to the unavoidably imperfect state of meteorological data - some level of dissatisfaction is, of course, inevitable. This source of public scepticism might best be tackled by promoting a wider awareness of the implications of chaos for weather forecasting.

Another source of dissatisfaction lies in the limit on accuracy imposed by computer technology. Even using supercomputers capable of tens of gigaflops (thousand million floating-point operations a second), today's numerical weather models are still too broad-brush to permit truly "local" forecasts to be made.
As a result, it will be some years yet before forecasters are able to end the frustration of predictions proving accurate in one region, yet failing miserably just a few kilometres away.

There is, however, a third source of dissatisfaction which appears not to have been widely recognised by meteorologists. At its heart is a probabilistic concept known as the "base-rate effect", which ties the value of a forecast to the frequency of the phenomenon being forecast. In what follows, I use probability theory to show how even today's impressively accurate forecasting methods can fall foul of the base-rate effect, with consequences that can seem distinctly paradoxical.

FORECASTS AND BAYES'S THEOREM

The forecasting of any complex non-linear system such as the weather is inevitably a probabilistic process. The aim of the forecaster is thus to produce predictions that are significantly more reliable than those achieved by random guessing. Mathematically, the forecasting process can be modelled by Bayes's Theorem, which shows how the odds on the occurrence of a specific event - say, a rain-shower - are improved (i.e. increased) in the light of a forecast:

Odds(Event | Forecast) = LR x Odds(Event)    (1)

where Odds(Event) = Pr(Event)/Pr(~Event) etc., "~" denotes negation, "|" denotes "given", and LR is the Likelihood Ratio, defined by

LR = Pr(Forecast | Event) / Pr(Forecast | ~Event)    (2)

Forecasts based on random guessing are as likely to be right as wrong, and the odds of the event occurring in the light of such a forecast, Odds(Event | Guessing), are thus no higher than the base-rate, Odds(Event). By (1), guessed forecasts can thus be characterised by a likelihood ratio of unity: they add no information about the chances of the event taking place. If a forecasting technique is to be useful, therefore, it must give LR > 1.

The success of forecasting techniques is not usually stated in terms of likelihood ratios.
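The odds form of Bayes's Theorem in (1) is easily checked numerically. A minimal Python sketch (the function names are my own, not part of any standard library) is:

```python
def odds(p):
    # Convert a probability into odds: Odds = Pr / (1 - Pr)
    return p / (1.0 - p)

def posterior_odds(prior_odds, lr):
    # Equation (1): Odds(Event | Forecast) = LR x Odds(Event)
    return lr * prior_odds

# A guessed forecast (LR = 1) leaves the odds at the base-rate,
# while any useful technique (LR > 1) raises them.
base = odds(0.4)
guessed = posterior_odds(base, 1.0)   # unchanged: equal to base
useful = posterior_odds(base, 4.9)    # raised above base
```

The sketch makes the point about guessing directly: multiplying the base-rate odds by a likelihood ratio of unity returns them unchanged.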
Instead, it is typically given in terms of a somewhat vague concept such as the "percentage of accurate forecasts". We can, however, convert such percentages into the corresponding likelihood ratios. Let R represent the event of rain, F the event of rain being forecast, and Q the observed probability of the forecast proving correct, i.e. the frequency with which a forecast correctly predicts rain or correctly predicts no rain. As these latter events are mutually exclusive, we have

Q = Pr(F & R) + Pr(~F & ~R)    (3)
  = Pr(F | R).Pr(R) + Pr(~F | ~R).Pr(~R)    (4)

Met Office data for rain forecasting [2] show that Pr(F | R) ~ Pr(~F | ~R), so that (4) reduces to

Q = Pr(F | R) = Pr(~F | ~R)    (5)

and the likelihood ratio LR becomes

LR = Pr(F | R) / Pr(F | ~R) = Q / (1 - Q)    (6)

and so, by (1),

Odds(Rain | Forecast) = Q . Odds(Rain) / (1 - Q)    (7)

The UK Meteorological Office's stated accuracy rate for its 24 hr forecasts is 83 per cent; putting Q = 0.83 into (6) implies that for Met Office forecasts we have LR = 4.9. To illustrate the implications of this, we note that the daily probability of rain for England and Wales is about 0.4, thus giving Odds(Rain) of 0.67. By (7), this implies that a forecast of rain made using the Met Office's 83 per cent accurate techniques leads to odds of rain taking place of 4.9 x 0.67 = 3.3; i.e. the forecast can be expected to be correct about 77 per cent of the time.

This highlights two crucial aspects of the interpretation of forecasting. First, our perception of forecast accuracy is not determined solely by the accuracy rate Q. The quantity of real importance to users of forecasting data is the conditional probability Odds(Event | Forecast) - and as (1) shows, this depends crucially on the base-rate for the phenomenon being forecast, Odds(Event). Second, when this fact is taken into account, the probability of a specific forecast proving correct can be significantly lower than the accuracy figure quoted for the forecasting technique.
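The arithmetic of (6) and (7) can be verified with a short Python sketch (the variable names are mine; the figures Q = 0.83 and a daily base-rate of 0.4 are those quoted above):

```python
Q = 0.83                 # Met Office 24 hr accuracy for rain forecasts
daily_rain = 0.4         # daily probability of rain, England and Wales

lr = Q / (1.0 - Q)                            # equation (6): LR, approx 4.9
prior_odds = daily_rain / (1.0 - daily_rain)  # Odds(Rain), approx 0.67
post_odds = lr * prior_odds                   # equation (7): approx 3.3
p_correct = post_odds / (1.0 + post_odds)     # approx 0.77, i.e. 77 per cent
```

Converting the posterior odds back to a probability confirms the 77 per cent figure quoted in the text.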
For the Met Office forecasts of rain, for example, the accuracy figure of 83 per cent becomes, after allowance for the UK daily rain base-rate, a conditional probability of rain of 77 per cent. This reduction in perceived accuracy is the so-called "base-rate effect": the ability of a low base-rate to dilute the reliability of an accurate forecasting method. Neglect of the base-rate effect has been shown to have serious implications in fields as diverse as cancer screening [3] and DNA profiling [4]. Its implications for weather forecasting, however, appear to have been largely overlooked. Yet as I now show, the base-rate effect can seriously - and negatively - affect public perception of the reliability of even highly accurate weather forecasts.

BASE-RATES AND UMBRELLA-TAKING

The essence of the base-rate effect is simply put. If an event is sufficiently rare, then even highly accurate forecasting methods can still fail to raise the chances of the event taking place above 50:50. From (7), this will happen for any phenomenon whose base-rate falls below a minimum value Pr(min), where

Pr(min) = 1 - Q    (8)

With Q = 0.83, this leads to a Pr(min) for Met Office forecasts of 0.17; predictions of weather events with a frequency below this are more likely to be wrong than right - despite the impressively high accuracy of the forecasting technology.

For example, consider the well-known problem of deciding whether or not to take an umbrella in the light of a forecast of rain. At first glance, it would seem that Met Office forecasts are well able to provide reliable advice on which to base such a decision. The daily base-rate for rain in England and Wales is 0.4, which exceeds the critical value of 0.17 by a comfortable margin. However, this probability is not appropriate for the umbrella-carrying problem; what we require is the probability of rain occurring on the hourly timescale relevant to umbrella-taking [5]: this is 0.08 - a much lower base-rate, which falls below the critical value given by (8).
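The critical base-rate in (8), and its consequences for daily versus hourly rain, can be checked in a few lines of Python (a sketch using the figures quoted above):

```python
Q = 0.83
pr_min = 1.0 - Q      # equation (8): critical base-rate, 0.17

daily_rain = 0.4      # daily base-rate: comfortably above the threshold
hourly_rain = 0.08    # hourly base-rate: below it

daily_ok = daily_rain > pr_min      # daily forecasts beat 50:50
hourly_ok = hourly_rain > pr_min    # hourly forecasts do not
```

The comparison shows why the same forecasting accuracy gives such different results on the two timescales.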
From (7) we then find that the probability that we shall require our umbrella, following even an 83 per cent accurate forecast, is just 0.3. In other words, those in the UK who take an umbrella in response to a forecast will typically find themselves needlessly burdened with it about two times out of three. The situation will be even worse for those living in South-East England, where the hourly probability of rain is only about half the national average.

To this extent, the folklore that taking an umbrella reduces the chances of rain falling is borne out. It is not, of course, that the weather "knows" that one is carrying an umbrella. It is simply that placing one's complete faith in the forecast alone fails to take into account the relatively low base-rate of hourly rain in the UK. Indeed, in a recent paper [6], I showed that decision theory leads to the conclusion that unless one is quite concerned about getting wet, the optimal decision is never to take an umbrella on walks, even if showers are forecast.

It thus seems that public scepticism of weather forecasting may be an example of where ordinary people have a good, intuitive grasp of the impact of base-rates on their decision-making [7]: experience has told them that for the relatively short time they are out on a walk or shopping trip, the chances of rain falling are relatively low. Perhaps the Met Office could consider making this clearer in its forecasts, especially during showery weather. Certainly it would be wrong to respond by blaming the current inability of computer models to forecast rain accurately on hourly, rather than daily, timescales: the low base-rate of hourly rain will still lead to apparently poor forecast reliability even if the Met Office succeeds in predicting hourly showers with the same accuracy as its current forecasts.
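The figure of 0.3 follows directly from applying (7) to the hourly base-rate. A Python check (the UK hourly figure of 0.08 is as quoted above; the South-East figure of 0.04 is my reading of the "about half the national average" estimate, not a published value):

```python
Q = 0.83
lr = Q / (1.0 - Q)    # likelihood ratio, approx 4.9, as before

def p_rain_given_forecast(base_rate):
    # Equation (7) applied to a given base-rate,
    # with the posterior odds converted back to a probability.
    post_odds = lr * base_rate / (1.0 - base_rate)
    return post_odds / (1.0 + post_odds)

uk_hourly = p_rain_given_forecast(0.08)  # approx 0.30: wrong ~2 times in 3
se_hourly = p_rain_given_forecast(0.04)  # assumed South-East figure: lower still
```

With the assumed South-East base-rate, the probability of rain following a forecast falls to roughly the critical value 0.17 itself.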