The Impact of Unique Meteorological Phenomena Detected by the Oklahoma Mesonet and ARS Micronet on Automated Quality Control

Christopher A. Fiebrich and Kenneth C. Crawford
Oklahoma Climatological Survey, Norman, Oklahoma

Corresponding author address: Christopher A. Fiebrich, Oklahoma Climatological Survey, 100 E. Boyd St., Suite 1210, Norman, OK 73019. E-mail: [email protected]
In final form 27 April 2001.
©2001 American Meteorological Society

ABSTRACT

To ensure quality data from a meteorological observing network, a well-designed quality control system is vital. Automated quality assurance (QA) software developed by the Oklahoma Mesonetwork (Mesonet) provides an efficient means to sift through over 500 000 observations ingested daily from the Mesonet and from a Micronet sponsored by the Agricultural Research Service (ARS) of the United States Department of Agriculture (USDA). However, some of nature's most interesting meteorological phenomena produce data that fail many automated QA tests, which means perfectly good observations are flagged as erroneous. Cold air pooling, "inversion poking," mesohighs, mesolows, heat bursts, variations in snowfall and snow cover, and microclimatic effects produced by variations in vegetation are meteorological phenomena that pose a problem for the Mesonet's automated QA tests. Although the QA software has been engineered so that most observations of real meteorological phenomena pass the various tests while remaining stringent enough to catch malfunctioning sensors, erroneous flags are often placed on data during extreme events. This manuscript describes how the Mesonet's automated QA tests responded to data captured from microscale meteorological events that, in turn, were flagged as erroneous by the tests. The Mesonet's operational plan is to catalog these extreme events in a database so QA flags can be changed manually by expert eyes.

1. Introduction

The Oklahoma Mesonet, funded and maintained by the state of Oklahoma, is an environmental monitoring network of 115 stations deployed across the state. The United States Department of Agriculture, through its Agricultural Research Service (ARS), funds the operation of a Micronet of 42 stations in the Little Washita Watershed in southwest Oklahoma. Data from both networks are quality assured by software and personnel at the central processing site for the Oklahoma Mesonet in Norman, Oklahoma.

The task of manually inspecting each datum from 157 automated stations, each recording observations at 5-min intervals, is insurmountable. Hence, automated quality assurance (QA) software is an essential tool. For each datum, the Mesonet's automated software generates a QA flag that indicates the quality of each observation. However, some of nature's most interesting meteorological conditions provide data that fail many QA tests. As a result, some good observations are flagged as erroneous.

The purpose of this manuscript is to illustrate how automated quality control procedures can fail when microscale phenomena are detected by meteorological observing networks. A remedy for this problem involves the use of a database to catalog unique meteorological events. The meteorological phenomena that create special problems for automated QA software, and that will be discussed here, include the following:

• cold air pooling and "inversion poking";
• mesohighs and mesolows;
• heat bursts;
• snowfall and snow cover; and
• microclimatic effects produced by variations in vegetation.

2. The Oklahoma Mesonet's quality assurance system

The automated quality control performed on Mesonet and ARS data is one part of an extensive QA system. Together, four components compose the Mesonet's QA system: 1) laboratory calibration and testing, 2) on-site intercomparison, 3) automated QA, and 4) manual QA. During laboratory calibration and testing, all sensors are calibrated in the Mesonet lab to validate or improve upon the calibrations sent from the instrument manufacturer. Next, through on-site intercomparisons, instruments deployed to stations across the state are periodically compared (on average, once a year) to ensure accurate performance by the sensors. Automated QA software (described in detail later in this section) evaluates the data received from remote stations. Finally, a meteorologist, trained in state-of-the-art QA procedures, reviews each day the suspicious observations detected by other components of the QA system. Human judgment is thus added to complement the automated QA. For a complete description of the Oklahoma Mesonet's QA system, see Shafer et al. (2000).

a. Automated QA

The Mesonet's automated QA software consists of five tests: 1) range, 2) step, 3) persistence, 4) spatial, and 5) like-instrument. The software evaluates data calendar day by calendar day. At the end of the process, one of four QA flags, in increasing order of severity, is assigned to each datum: 0) "good," 1) "suspect," 2) "warning," or 3) "failure." Such stratification allows data users to decide the level of QA they prefer. Most users request that data receiving warning or failure flags from the QA system be removed from the datasets they obtain from Mesonet archives.
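The Mesonet's QA code is not reproduced in this paper; the Python sketch below is only a minimal illustration of the four-level flag hierarchy and of the filtering most users request. The names QAFlag and filter_observations are hypothetical and do not come from the Mesonet software.

```python
from enum import IntEnum

class QAFlag(IntEnum):
    """QA flag severity levels assigned to each datum (0 = best, 3 = worst)."""
    GOOD = 0
    SUSPECT = 1
    WARNING = 2
    FAILURE = 3

def filter_observations(observations, max_flag=QAFlag.SUSPECT):
    """Keep only observations whose flag does not exceed max_flag.

    `observations` is assumed to be a list of (value, QAFlag) pairs; with the
    default, data flagged as warning or failure are dropped, as most users request.
    """
    return [(value, flag) for value, flag in observations if flag <= max_flag]

# Example usage: keep "good" and "suspect" data, drop "warning" and "failure".
obs = [(21.3, QAFlag.GOOD), (48.9, QAFlag.FAILURE), (21.5, QAFlag.SUSPECT)]
print(filter_observations(obs))
```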
Meek and Hatfield (1994) documented algorithms similar to the Mesonet's range, step, and persistence tests, as well as a system of QA flags. They also noted that a flagging system could serve to find real but unusual meteorological events. In addition, the use of data quality flags to denote potential data problems is discussed in Snyder and Pruitt (1992).

1) RANGE TEST

The range test is an algorithm that determines whether an observation lies within a predetermined range. The allowable ranges are based on sensor specifications and annual climate extremes in Oklahoma; each parameter has a unique set of limits (Table 1). If a datum is observed outside of the allowable range, it receives a failure flag.

2) STEP TEST

The step test uses sequential observations to determine which data represent unrealistic "jumps" during the observation time interval for each parameter. Observations that exceed the maximum allowed step (Table 1) receive a warning flag. The thresholds for the step test were determined through repeated testing of the algorithm on Mesonet data.

3) PERSISTENCE TEST

The persistence test analyzes data on a calendar-day basis to determine whether any parameter underwent little or no variation. If the daily standard deviation of a parameter's observations is less than or equal to twice the standard deviation threshold (Table 1), all data for that parameter are flagged as suspect for the entire day. The severity of the flags is increased to warning when the daily standard deviation is less than or equal to the standard deviation threshold.

A second algorithm in the persistence test compares the daily range of each parameter to a predetermined "delta" threshold (Table 1). When the daily range of observations is less than or equal to the threshold value, all observations of that parameter receive a warning flag for the day. All data are flagged as suspect when the range of values exceeds the minimum threshold but is less than 1.5 times the minimum threshold. Wind speeds that remain stuck near 0 m s⁻¹ (because of freezing rain) or barometers with loose wires are often detected by the persistence test. As with the thresholds set for the step test, the thresholds used for the persistence tests were determined by repeated testing of the algorithms on Mesonet data.
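As a minimal sketch, and not the Mesonet's actual implementation, the Python below illustrates the logic of the range, step, and persistence tests described above, using the relative humidity thresholds from Table 1 as defaults. The function names and the choice to report the more severe of the two persistence flags are assumptions of the sketch.

```python
import statistics
from enum import IntEnum

class QAFlag(IntEnum):  # same severity scale as in the previous sketch
    GOOD = 0
    SUSPECT = 1
    WARNING = 2
    FAILURE = 3

def range_test(value, lower=0.0, upper=103.0):
    """Failure if the observation falls outside its allowed range
    (defaults are the relative humidity limits from Table 1)."""
    return QAFlag.GOOD if lower <= value <= upper else QAFlag.FAILURE

def step_test(previous, current, max_step=20.0):
    """Warning if consecutive observations jump by more than the allowed step."""
    return QAFlag.WARNING if abs(current - previous) > max_step else QAFlag.GOOD

def persistence_test(day_values, std_dev_threshold=0.1, delta_threshold=0.1):
    """Flag a calendar day whose observations vary too little.

    Mirrors the two persistence checks described above: one based on the
    daily standard deviation, one based on the daily range ("delta").
    Returning the more severe of the two flags is an assumption of this sketch.
    """
    std_dev = statistics.pstdev(day_values)
    day_range = max(day_values) - min(day_values)

    if std_dev <= std_dev_threshold:
        sd_flag = QAFlag.WARNING
    elif std_dev <= 2 * std_dev_threshold:
        sd_flag = QAFlag.SUSPECT
    else:
        sd_flag = QAFlag.GOOD

    if day_range <= delta_threshold:
        delta_flag = QAFlag.WARNING
    elif day_range < 1.5 * delta_threshold:
        delta_flag = QAFlag.SUSPECT
    else:
        delta_flag = QAFlag.GOOD

    return max(sd_flag, delta_flag)
```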
4) SPATIAL TEST

The spatial test performs a Barnes objective analysis (Barnes 1964) for each parameter at each observation time. The analysis is repeated with each station successively excluded from the field being analyzed. For each analysis, an expected value, based on data from surrounding stations, is calculated for each site. The expected value is then compared with the actual observation at that site. If the difference between the estimate and the observed value is greater than 2 times the standard deviation of the sample (known bad observations are excluded from the sample), the observation receives a suspect flag from the test. If the difference exceeds 3 times the standard deviation, a warning flag is issued. Minimum standard deviations are also established so that, during quiescent times, a good observation is not flagged because of a very small departure from its expected value (Table 1). A similar method of performing objective intercomparisons between neighboring stations is described in Wade (1987).

5) LIKE-INSTRUMENT TEST

The like-instrument test compares the air temperatures at 1.5 and 9 m at Mesonet sites equipped with temperature sensors at both levels. When the two sensor values differ by more than 10°C, the data are flagged as suspect. This threshold was determined by analyzing a climatology of extreme inversions detected within the depth of the Mesonet tower (Fiebrich and Crawford 1998). The like-instrument test is planned to expand to compare soil temperatures at varying depths and wind speeds at different heights.

TABLE 1. Ranges and thresholds for the automated QA tests.

| Parameter | Observation interval | Range allowed | Step allowed | Persistence std dev | Persistence delta | Min spatial std dev |
|---|---|---|---|---|---|---|
| Relative humidity (%) | 5 min | 0–103 | 20 | 0.1 | 0.1 | 20.0 |
| 1.5-m air temperature (°C) | | | | | | |
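To make the spatial and like-instrument tests concrete, the sketch below pairs a Gaussian distance-weighted estimate with the 2 and 3 standard deviation comparison and the 10°C like-instrument threshold described above. The weighted estimate is only a simplified stand-in for the full Barnes (1964) objective analysis, and the weighting parameter, the function names, and the use of the relative humidity minimum standard deviation as a default are assumptions of the sketch, not values from the Mesonet software.

```python
import math
from enum import IntEnum

class QAFlag(IntEnum):  # same severity scale as in the earlier sketches
    GOOD = 0
    SUSPECT = 1
    WARNING = 2
    FAILURE = 3

def weighted_estimate(target_xy, neighbors, kappa=2500.0):
    """Gaussian distance-weighted estimate of the value at target_xy.

    A crude stand-in for the Barnes (1964) objective analysis; `neighbors`
    is a list of ((x_km, y_km), value) pairs with the target station itself
    excluded, mirroring the leave-one-out procedure described above.  The
    weighting parameter kappa (km^2) is an assumed value.
    """
    total_weight = 0.0
    weighted_sum = 0.0
    for (x, y), value in neighbors:
        dist_sq = (x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2
        weight = math.exp(-dist_sq / kappa)
        total_weight += weight
        weighted_sum += weight * value
    return weighted_sum / total_weight

def spatial_test(observed, estimate, sample_std_dev, min_std_dev=20.0):
    """Compare an observation with its leave-one-out estimate.

    Departures beyond 2 (3) sample standard deviations earn a suspect
    (warning) flag; min_std_dev is a floor that keeps quiescent periods
    from being flagged (20.0 is the relative humidity value in Table 1).
    """
    sigma = max(sample_std_dev, min_std_dev)
    difference = abs(observed - estimate)
    if difference > 3 * sigma:
        return QAFlag.WARNING
    if difference > 2 * sigma:
        return QAFlag.SUSPECT
    return QAFlag.GOOD

def like_instrument_test(temp_1p5m_c, temp_9m_c, max_difference_c=10.0):
    """Suspect flag when the 1.5-m and 9-m air temperatures differ by more than 10 deg C."""
    return QAFlag.SUSPECT if abs(temp_1p5m_c - temp_9m_c) > max_difference_c else QAFlag.GOOD
```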