Radio Occultation Data: Its Utility in NWP and Climate Fingerprinting

Stephen S. Leroy, Yi Huang, and James G. Anderson
Harvard School of Engineering and Applied Sciences
Anderson Group, 12 Oxford St., Link Building, Cambridge, Massachusetts, USA

1 Introduction

My work in the Anderson Group at Harvard is centered on support for the Climate Absolute Radiance and Refractivity Observatory (CLARREO), a climate mission, so I must begin my talk with an apology: I will say little, if anything, about numerical weather prediction and will instead focus on climate. I think you will still find some interesting and very recognizable concepts. In this talk I will give a brief overview of climate benchmarking, how one can interpret time series of climate benchmarks by scalar prediction, and how an objective method for formulating mission accuracy requirements naturally falls out. Then I will discuss what makes radio occultation a climate benchmark data type, how retrieval is performed on radio occultation data, and the information a time series of radio occultation data can yield on the climate system. Next I will discuss the climate information content of high-spectral-resolution infrared spectra and what might be gained from considering radio occultation data and infrared spectral data jointly. Finally, I will summarize.

The deployment of space-based climate benchmarks was demanded by the recent decadal review of the U.S. National Aeronautics and Space Administration (NASA) and National Oceanic and Atmospheric Administration (NOAA) by the National Research Council.
In particular, it recommended the CLARREO mission:

CLARREO addresses three key societal objectives: 1) the essential responsibility to present and future generations to put in place a benchmark climate record that is global, accurate in perpetuity, tested against independent strategies that reveal systematic errors, and pinned to international standards; 2) the development of an operational climate forecast that is tested and trusted through a disciplined strategy using state-of-the-art observations with mathematically rigorous techniques to systematically improve those forecasts to establish credibility; and 3) disciplined decision structures that assimilate accurate data and forecasts into intelligible and specific products that promote international commerce as well as societal stability and security.

A bit of translation is in order. "Accurate in perpetuity" is not a requirement that the present and all future generations of NASA engineers fly CLARREO satellites; rather, it is a requirement that the data obtained by CLARREO be useful for measuring climate change to all future generations of climate scientists. "International standards" refers to S.I. traceability, a metrological approach to instrument design that assures an unbroken and testable chain of calibration to the international standards that define the units of measurement. "Tested against independent strategies that reveal systematic error" simply requires that climate benchmark instruments be able to obtain their own error bars empirically. Lastly, "mathematically rigorous techniques" refers to Bayesian statistics (in which optimal detection methods are implicit) used to infer climate trends underlying long time series of data. My particular specialty is the last.

ECMWF Seminar on Diagnosis of Forecasting and Data Assimilation Systems, 7–10 September 2009

2 Climate benchmarks

S.I.
traceability is described in the International Vocabulary of Basic and General Terms in Metrology (ISO 1993): "Traceability [is] a property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties." In short, "S.I. traceability" is far more than contemporary scientific marketplace jargon; it points toward the measurement practice we learned in grade school, that observations demand error estimates based on the overall accuracy of the observing apparatus. The only way error estimates can be obtained is by a documented and reproducible chain of comparisons back to the international standard that defines the units of the observations. Two organizations that maintain such standards are the Bureau International des Poids et Mesures in Paris and the U.S. National Institute of Standards and Technology.

Climate benchmarks must be S.I. traceable. The great advantage gained is that a climate benchmark can be used for observing climate change even in the case of a discontinuous time series of data. The community's experience in constructing "climate data records" from instruments whose calibrations were deemed "stable" has not been good. Take, for example, the records of microwave brightness temperature constructed from measurements of the NOAA satellites' Microwave Sounding Units (MSU) and of total solar irradiance (TSI). The time series can only be formed by bias-correcting the records of individual instruments so that they match the records of overlapping preceding and succeeding instruments. These efforts have failed because multiple versions of climate data records based on the same data have yielded different long-term trends and because even a minor break in the time series of observations renders most of the record useless. With S.I.
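The fragility of overlap-based record construction can be made concrete with a toy sketch. The scenario below is entirely synthetic and is not from the paper: two instruments with unknown constant biases observe the same slowly trending quantity, and the offset between them can only be estimated during their period of overlap. If that overlap were lost, the relative bias would be unrecoverable without independent S.I. traceability.

```python
import numpy as np

# Toy illustration (synthetic data, illustrative values only): stitching two
# biased instrument records into one "climate data record" via their overlap.
rng = np.random.default_rng(2)
t = np.arange(120)                      # months
truth = 0.01 * t                        # underlying climate trend

bias_a, bias_b = 1.3, -0.7              # unknown constant instrument biases
rec_a = truth[:70] + bias_a + rng.normal(0, 0.05, 70)   # instrument A, months 0-69
rec_b = truth[50:] + bias_b + rng.normal(0, 0.05, 70)   # instrument B, months 50-119

# The relative bias is measurable only where the records overlap (months 50-69):
offset = rec_a[50:70].mean() - rec_b[:20].mean()
stitched = np.concatenate([rec_a[:50], rec_b + offset])  # B tied to A's scale

print(f"estimated A-B offset: {offset:.2f} (true: {bias_a - bias_b:.2f})")
```

Note that the stitched record is pinned to instrument A's (still unknown) absolute bias: bias correction recovers only relative offsets, which is why independent versions of such records can drift to different long-term trends.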
traceability, the record survives breaks in the time series of observations. Moreover, independent efforts at obtaining long-term trends based on S.I.-traceable observations will yield the same trends to within empirically determined error.

Others have written on the design of S.I.-traceable instrumentation, and since it is not my specialty, I will not do so here; for reference, see Pollock et al. (2000) and Pollock et al. (2003). One such design is given by Dykema and Anderson (2006). Not every known data type can be obtained by instrumentation of S.I.-traceable design, but once we have sorted out those that can be, it is interesting to find out what can be learned about the climate system from long-term trends in those data types.

Scalar prediction is one way to determine the information content of S.I.-traceable data types. The goal of any investigation of information content in climate monitoring must be the reduction of uncertainty in climate prediction. With credible time series of information-rich data, one ought to be able to obtain improved accuracy and precision in climate projection. (I have used "projection" to refer to prediction aided by intelligent application of data.) A brief description of scalar prediction follows.

3 Scalar prediction

Scalar prediction is two levels of Bayesian inference applied to long-term trends. It is closely related to linear multi-pattern regression (Hasselmann 1997; Allen and Tett 1999) and optimal detection (Bell 1986; Hasselmann 1993; North et al. 1995). Scalar prediction, or any method used to extract information from long-term trends in climate, requires a model for the data that is at least minimally credible and must account for the natural inter-annual variations of climate as a source of error.
One central tenet of climate science is that climate responds approximately linearly to subtle changes in external forcing (the forcing generally being a perturbation to the radiative balance of the system not typically associated with a steady-state climate). This response cannot be observed directly because the atmosphere-ocean-cryosphere-biosphere system varies from year to year even in the absence of external forcing. Information content studies should seek to minimize the effects of these natural fluctuations and count the residuals as uncertainty in the inference of "climate trends".

In the first level of inference, that of optimal detection, one has a long-term trend in data dd/dt which is linearly related to a climate trend d\alpha/dt in some as yet unnamed (and arbitrary) variable \alpha:

    dd/dt = (dg/d\alpha)_i \, d\alpha/dt + d(dn)/dt    (1)

in which g(x) is the model and data operator, a function of the atmosphere-ocean-cryosphere-biosphere state x, and (dg/d\alpha)_i is the total derivative of that model and data operator with respect to the arbitrary variable \alpha. Natural variability enters through the random inter-annual fluctuations dn and the statistically insignificant trends d(dn)/dt to which they give rise. In the first level of inference, the operator g(x) is considered linear and its derivative (dg/d\alpha)_i constant and certain, while the climate trend d\alpha/dt is completely unknown. The solution for the most likely value of the climate trend d\alpha/dt is

    (d\alpha/dt)_{ml} = (s_i^T \Sigma_{dn/dt}^{-1} s_i)^{-1} s_i^T \Sigma_{dn/dt}^{-1} \, dd/dt    (2)

where s_i = (dg/d\alpha)_i and \Sigma_{dn/dt} is the covariance of the random quantity d(dn)/dt as determined from a steady-state simulation of climate. The posterior uncertainty in the determination of d\alpha/dt is

    \sigma_{d\alpha/dt}^2 = (s_i^T \Sigma_{dn/dt}^{-1} s_i)^{-1}.    (3)

Equations 1, 2, and 3 are those of optimal detection, the first level of Bayesian inference.
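The first level of inference is an ordinary generalized-least-squares fit, which can be sketched in a few lines of NumPy. All inputs below (the signal pattern, the covariance, the true trend) are synthetic placeholders, not values from the paper; the sketch only demonstrates the algebra of Eqs. (2) and (3).

```python
import numpy as np

# Optimal detection sketch, Eqs. (2)-(3): project the observed data trend
# dd/dt onto a single known signal pattern s_i, weighting by the inverse
# covariance of natural-variability trends Sigma_{dn/dt}.
rng = np.random.default_rng(0)
m = 20                                   # number of data channels/levels

s_i = rng.normal(size=m)                 # signal pattern (dg/dalpha)_i (synthetic)
Sigma_dn = np.eye(m) * 0.25              # natural-variability trend covariance

true_trend = 0.3                         # climate trend dalpha/dt to be recovered
noise = rng.multivariate_normal(np.zeros(m), Sigma_dn)
dd_dt = s_i * true_trend + noise         # synthetic data trend, per Eq. (1)

Sigma_inv = np.linalg.inv(Sigma_dn)
precision = s_i @ Sigma_inv @ s_i        # s_i^T Sigma^{-1} s_i
trend_ml = (s_i @ Sigma_inv @ dd_dt) / precision   # Eq. (2), most likely trend
sigma2 = 1.0 / precision                           # Eq. (3), posterior variance

print(f"most likely trend: {trend_ml:.3f} +/- {np.sqrt(sigma2):.3f}")
```

For a single scalar trend the matrix inverse in Eq. (2) collapses to the scalar division above; with several simultaneous signals, s_i becomes a matrix and the same expression yields the multi-pattern regression of Hasselmann (1997).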
In the second level of inference, one must account for the existence of uncertainty in modeling. The Jacobian (dg/d\alpha)_i, while minimally credible, is most definitely uncertain. This uncertainty is factored in by weighting each model in an ensemble of climate models according to the quality of its fit to the data. The final result for the most probable trend (d\alpha/dt)_{mp} and its uncertainty \sigma_{d\alpha/dt} is given by

    (d\alpha/dt)_{mp} = (s^T \Sigma^{-1} s)^{-1} s^T \Sigma^{-1} \, dd/dt    (4)

    \sigma_{d\alpha/dt}^2 = (s^T \Sigma^{-1} s)^{-1}    (5)

    \Sigma \equiv \Sigma_{dn/dt} + \Sigma_{\delta s} (d\alpha/dt)^2.    (6)

The equations are the same as those of optimal detection with the exceptions that \Sigma_{dn/dt} is replaced by \Sigma and s_i by s. Both of these new quantities are derived from an ensemble of climate models, the quantity s being the mean of the s_i over the ensemble of climate models, and \Sigma_{\delta s} the covariance of \delta s = s_i - s over the ensemble of models.
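The second level can be sketched the same way, now building the signal and covariance from an ensemble. The ensemble of signal patterns below is synthetic, and a fixed prior trend value is used to scale the model-spread term in Eq. (6) (in practice this scaling depends on the trend being estimated, so the equations are solved self-consistently); both are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Second level of inference sketch, Eqs. (4)-(6): the signal s is the ensemble
# mean of the models' s_i, and model disagreement Sigma_{delta s}, scaled by
# the squared trend, inflates the total covariance Sigma.
rng = np.random.default_rng(1)
m, n_models = 20, 8

s_ensemble = rng.normal(size=(n_models, m)) * 0.2 + np.linspace(1, 2, m)
s_bar = s_ensemble.mean(axis=0)              # ensemble-mean signal s
Sigma_ds = np.cov(s_ensemble, rowvar=False)  # Sigma_{delta s}, spread of s_i - s

Sigma_dn = np.eye(m) * 0.25                  # natural-variability trend covariance
prior_trend = 0.3                            # assumed dalpha/dt scaling (hypothetical)

Sigma = Sigma_dn + Sigma_ds * prior_trend**2   # Eq. (6)

dd_dt = s_bar * prior_trend                  # noise-free synthetic data trend

Sigma_inv = np.linalg.inv(Sigma)
precision = s_bar @ Sigma_inv @ s_bar
trend_mp = (s_bar @ Sigma_inv @ dd_dt) / precision   # Eq. (4), most probable trend
sigma2 = 1.0 / precision                             # Eq. (5), posterior variance

print(f"most probable trend: {trend_mp:.3f} +/- {np.sqrt(sigma2):.3f}")
```

Comparing sigma2 here with the first-level result shows the cost of model uncertainty: the extra Sigma_{delta s} term can only widen the posterior error bar on the inferred climate trend.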