
Quality Assurance Procedures in the Oklahoma Mesonetwork

MARK A. SHAFER, CHRISTOPHER A. FIEBRICH, AND DEREK S. ARNDT
Oklahoma Climatological Survey, Norman, Oklahoma

SHERMAN E. FREDRICKSON*
National Severe Storms Laboratory, Norman, Oklahoma

TIMOTHY W. HUGHES
Environmental Verification and Analysis Center, Norman, Oklahoma

(Manuscript received 6 April 1999, in final form 14 June 1999)

ABSTRACT

High quality data sources are critical to scientists, engineers, and decision makers alike. The models that scientists develop and test with quality-assured data eventually become used by a wider community, from policy makers' long-term strategies based upon weather and climate predictions to emergency managers' decisions to deploy response crews. The process of developing high quality data in one network, the Oklahoma Mesonetwork (Mesonet), is detailed in this manuscript.

The Oklahoma Mesonet quality-assurance procedures consist of four principal components: an instrument laboratory, field visits, automated computer routines, and manual inspection. The instrument laboratory ensures that all sensors that are deployed in the network measure up to high standards established by the Mesonet Steering Committee. Routine and emergency field visits provide a manual inspection of the performance of the sensors and replacement as necessary. Automated computer routines monitor data each day, set data flags as appropriate, and alert personnel of potential errors in the data. Manual inspection provides human judgment to the process, catching subtle errors that automated techniques may miss.

The quality-assurance (QA) process is tied together through efficient communication links. A QA manager serves as the conduit through whom all questions concerning data quality flow. The QA manager receives daily reports from the automated system, issues trouble tickets to guide the technicians in the field, and issues summary reports to the broader community of data users. Technicians and other Mesonet staff remain in contact through cellular communications, pagers, and the World Wide Web. Together, these means of communication provide a seamless system: from identifying suspicious data, to field investigations, to feedback on action taken by the technician.

* Additional affiliation: Oklahoma Climatological Survey, Mesonet Project, Norman, Oklahoma.

Corresponding author address: Mark A. Shafer, Oklahoma Climatological Survey, 100 E. Boyd St., Suite 1210, Norman, OK 73019. E-mail: [email protected]

1. Introduction

The Oklahoma Mesonetwork (Mesonet), developed through a partnership between the University of Oklahoma and Oklahoma State University, is a permanent mesoscale weather observation network. Care was taken along every step of the process to ensure that the Oklahoma Mesonet would provide research-quality data. The procedures documented in this manuscript are designed to ensure this quality, from the earliest planning stages of the network through operational data monitoring and long-term analyses. This manuscript details quality assurance (QA) procedures developed through the course of building the Mesonet and employed operationally in May 1999.

The Oklahoma Mesonet operates 115 stations on a continuous basis (Fig. 1). Thirteen atmospheric and subsurface variables (hereafter, parameters) are recorded every 5 min at each site, producing 288 observations of each parameter per station per day (Elliott et al. 1994; Brock et al. 1995). Several other parameters are observed every 15 or 30 min. From its commissioning in March 1994 through May 1999, the Oklahoma Mesonet has successfully collected and archived 99.9% of over 75 million possible observations. Because of this continuous observation cycle, a need existed to ensure the quality of data coming from over 2500 instruments.


FIG. 1. Map of Mesonet site locations. Land ownership for each of the sites is according to the symbols defined in the legend. There are 42 sites located in the ARS Micronet in southwest Oklahoma.

A comprehensive QA system was developed to complement the network's efficient collection and transmission of environmental observations. The system utilizes feedback from an instrumentation laboratory, field comparisons, automated tests, and visual analyses to recognize and catalog suspect and/or erroneous observations. Efficient communication between all components in the QA system is essential to quickly replace questionable instruments.

2. Network design considerations

The Mesonet Steering Committee established 11 subcommittees, each drawing upon the experience of experts within their respective fields. Seven of these subcommittees (see appendix) offered recommendations pertaining to the quality of data recorded within the network. The subcommittees represented three focus areas. The Site Standards and Site Selection committees developed a consistent set of criteria for site selection and provided guidance in locating sites. The Parameter Selection and Sensor Specification committees evaluated many diverse requests for instrumentation to be installed on the Mesonet towers and developed recommendations for instrument purchases. The Station Maintenance, Quality Assurance, and Data Management committees developed guidelines to maintain the quality of and access to data once the network was established.

The Site Standards Committee recommended that sites should be located in rural areas, representative of as large an area as possible, and flat, with all obstacles being at a distance of more than 300 m away from the wind sensors (Shafer et al. 1993). These guidelines were followed closely; however, in a few cases, some guidelines could not be met. These cases resulted from 1) the nature of the terrain; 2) a lack of suitable sites offered; or 3) the a priori decision to locate on particular parcels of land, such as existing agricultural research stations. Site photographs and documentation are available online at http://okmesonet.ocs.ou.edu/ so data users can "visit" a site.

The Site Standards Committee also provided guidance for site layout and parameter selection. Wind speed and direction are measured at a height of 10 m to match World Meteorological Organization (WMO) standards, and air temperature and relative humidity are measured at 1.5 m for consistency with existing National Oceanic and Atmospheric Administration cooperative observations and airport stations. The characteristics of Mesonet sites are also generally consistent with standards recommended by the American Association of State Climatologists (AASC) for automated weather station networks. The layout for all Mesonet sites is depicted in Fig. 2. The tower stands nearly in the center of a 10 m × 10 m enclosure. It is surrounded by a cattle-panel fence, 1.3 m high, to secure the area from animals and nearby human activity.

One difference between Mesonet and AASC recommendations is in the height of the wind monitor. Meyer and Hubbard (1992) note that the AASC's recommended height of 3 m for wind measurements is a compromise between the expense of installing 10-m towers and problems with exposure affecting wind measurements at lower heights. Because the Site Standards Committee recommended installation of 10-m towers for wind measurements, the AASC's concerns for exposure are mitigated. A second difference concerns the height at which the rain gauge is mounted. The AASC recommends a height of 1.0 m to reduce splash effects, while the WMO recommends a height of 0.3 m to reduce wind effects. The AASC's concerns of splash effects were resolved by modifications to the rain gauge made by Mesonet staff (discussed in section 4b). Installation of wind screens around the rain gauges to reduce turbulence in the vicinity of the gauge orifice addresses the WMO concerns.

In addition to general guidelines for the network, the Site Selection Committee recommended that several sites be provided to allow comparison with other instruments and other networks. Two of the original 108 sites, representing different climate regimes, were installed within 100 m of Automated Surface Observing System sites (located at Hobart and McAlester). These collocated sites provide an opportunity to compare observations from two networks in different climatic zones (Crawford et al. 1995). Eleven additional sites were located within 2 km of a National Weather Service Cooperative Observer site.

An intercomparison site in Norman, located 100 m from an operational Mesonet site, allows in-field comparison of data from different instruments. Higher quality instruments, whose cost prohibits their deployment throughout the network, may be temporarily deployed at the intercomparison site as part of various research projects. The intercomparison site also allows evaluation of proposed instrument changes, the addition of new instruments, and changes in network configuration, without affecting data collected from operational sites.

FIG. 2. Schematic of Mesonet instrument site layout. The perspective is a side view of the station looking northward.

The Sensor Specification Committee developed performance specifications for each instrument individually, designing the specifications to meet criteria for both research and operational purposes. The make and model of each instrument were selected separately for each parameter, allowing uniformity among those sensors deployed in the field. By equipping operational sites with similar sensors, the potential of measurement bias, when comparing measurements between sites, is reduced. This strategy also allows technicians to draw from a common stock of spare sensors when sensor replacement is required. By using similar instruments at all field sites, a user does not need to be concerned by different instrument error characteristics when comparing data between sites.

The remaining committees provided guidance relating to collection and archival of data. In particular, the Quality Assurance Committee recommended the following steps to develop a quality data stream:

• laboratory calibrations to test sensors at delivery and at routine intervals thereafter;
• field intercomparisons, both during technician visits and through site intercomparisons;
• real-time, automated data-monitoring software;
• documentation of sites and processes;
• independent review; and
• publication of data quality assessments.

While not all recommendations have been fully implemented, the following sections document steps taken toward meeting these recommendations.

3. Overview of Mesonet QA processes

The need for thorough and efficient data archival and retrieval, combined with the need to optimize daily performance of the network, dictated a dual purpose for the Mesonet's QA system:

1) augment data archives with a trustworthy assessment of the confidence in each datum, and
2) assess the ongoing performance of the network to keep current and future data quality at the highest level that is operationally possible.

Fortunately, the techniques that apply to these goals overlap, and the same QA tools can be used to assess both past and present data quality. The real-time nature of the Oklahoma Mesonet requires more than post facto analysis of data collected during field projects (e.g., see Wade 1987). The techniques must apply to ongoing data collection to identify problems before they become serious.

TABLE 1. Timescales of quality-assurance procedures used by the Oklahoma Mesonet.

Time interval | Analysis technique
Seconds       | Data checked at the datalogger
Minutes       | Collection of missing observations; first-pass automated QA, primarily the range test
Hours         | General visual inspection via Web and kiosks
Days          | Automated QA with daily reports; trouble tickets issued and technicians respond
Weeks         | QA manager's visual inspection (aggregate data)
Months        | QA manager's report; routine site visits
Year          | Sensor calibration; sensor rotation; instrument design modifications

The Oklahoma Mesonet's QA system compiles information from four distinct analysis classes: laboratory calibration and testing, on-site intercomparison, automated routines, and manual inspection. Each produces valuable information about the network's performance. The results from any one component are shared throughout the system to establish an accurate assessment of data quality.

Because of the volume of QA information and the need for coordinated QA decision making, a QA manager is employed to act as a "traffic cop" for instrument and data-quality issues. The QA manager maintains communication among all facets of the QA system, issues calls-to-action regarding problem sensors, tracks and archives specific instrument/data problems, and guides further development of the QA system.

These quality-assurance procedures integrate into a seamless system that incorporates both time and system components. Table 1 depicts the quality-assurance process on timescales that range from when the data are recorded to a long-term investigation of data quality. Quality-assurance procedures begin at the site. Laboratory-verified calibrations and sensor rotations ensure that quality instruments are deployed. Codes may also be transmitted from a site to indicate potential datalogger problems or the presence of a technician at a site.

Computers at the base station in Norman monitor system operations and verify that all possible data have been collected. If data are missing, the computers automatically and repeatedly attempt to contact the station until data are retrieved. As data are collected, they are sent to an archiving computer that verifies that observations are within acceptable bounds. The archiving computer also rechecks that there are no gaps in the data records.

Throughout the day, data users, Mesonet staff, and the QA manager monitor the data to make sure they are collected and that system performance is acceptable. Obviously erroneous values reported by a sensor may be noted in "snapshot" images of data from the network or in time series graphs of reported data from a single station.

Each night, automated QA procedures check the data for more subtle errors and produce a report for the QA manager, which is waiting in the manager's e-mail "in basket" the next morning. The QA manager scans the report, makes determinations of potential problems, and issues trouble tickets as necessary. Trouble tickets may be issued at any time, but the majority of trouble tickets result from the morning QA reports and from monthly assessments of network performance.

Every month, visual inspection of the data archives yields clues to more subtle effects, such as sensor bias and drift. At the end of the month, these analyses are assembled into the QA manager's report, which often spurs more requests for technicians to investigate sensors at certain sites. Several times every year, technicians visit each Mesonet site, perform analyses and intercomparisons, and repair or replace defective sensors.

Testing and deploying quality instruments are crucial parts of ensuring quality data. The efforts of technicians and instrument specialists in the Mesonet's calibration laboratory provide a long-term foundation for network performance. Ongoing calibration of all sensors and field rotation represents one portion of instrument quality. A second part is an examination of instrument performance and a comparison of data from similar instruments. Recommendations for instrument design changes, which may result from these efforts, feed back into the system to enhance the long-term quality of data from the Oklahoma Mesonet.

Data are never altered. Instead, all archived Mesonet data are coupled with QA "flags" ["good," "suspect," "warning," or "failure" (QA flags 0, 1, 2, and 3, respectively; see Table 2)] that indicate the level of confidence that Mesonet personnel place upon each observation.

TABLE 2. Mesonet QA flags with brief descriptions. These flags are available with all archived Mesonet datasets.

QA flag value | QA status          | Brief description
0             | Good               | Datum has passed all QA tests.
1             | Suspect            | There is concern about the accuracy of the datum.
2             | Warning            | Datum is very questionable, but information can be extracted.
3             | Failure            | Datum is unusable.
4             | Not installed yet  | Station awaiting installation of sensor.
5             | Likely good        | Reduce automated QA flag by one level.
6             | Known good         | Set automated QA flag to 0.
8             | Never installed    | This station is not intended to measure this parameter.
9             | Missing data       | Datum is missing for this station and parameter.
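To make the role of these flags concrete, the short sketch below shows one way a data user might apply archived QA flags at retrieval time, masking values whose flags exceed a chosen tolerance while leaving the data themselves unaltered. The flag values follow Table 2; the function name, data layout, and tolerance are hypothetical.

```python
# Minimal sketch (not the Mesonet's retrieval software) of applying archived
# QA flags at retrieval time, using the flag values defined in Table 2.
GOOD, SUSPECT, WARNING, FAILURE, MISSING = 0, 1, 2, 3, 9

def apply_qa_flags(observations, flags, max_flag=SUSPECT):
    """Return observations with values masked (None) wherever the
    accompanying QA flag exceeds the user's tolerance.

    observations : list of floats (one parameter, one station)
    flags        : list of ints of the same length, values from Table 2
    max_flag     : highest flag value the user is willing to accept
    """
    cleaned = []
    for value, flag in zip(observations, flags):
        if flag == MISSING or flag > max_flag:
            cleaned.append(None)     # drop questionable or missing data
        else:
            cleaned.append(value)    # keep the datum; it is never altered
    return cleaned

# Example: accept "good" and "suspect" but reject "warning" and "failure".
temps = [21.3, 21.4, 35.0, 21.5]
qa    = [GOOD, SUSPECT, FAILURE, GOOD]
print(apply_qa_flags(temps, qa, max_flag=SUSPECT))
# -> [21.3, 21.4, None, 21.5]
```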

FIG. 3. A portion of the Mesonet qualparm table, detailing events at the El Reno (ELRE) site. On the afternoon of 21 Nov 1995, a grass fire damaged or destroyed several instruments; this event is captured on the fourth line of the ELRE section. The eighth line shows a failure that occurred in Aug 1997. The column headers are, in order, station number (STNM), station identifier (STID), a series of flags for 5-min resolution parameters, a series of flags for 15-min resolution parameters, and the date/time of the change. Comments may be retained to the right of the date/time.

The use of four values to characterize the relative quality of data provides more information to the data user regarding confidence in the observation than would a dichotomous flag (0, 1). Based on user preference, the QA flags can be applied during data retrieval (eliminating varying degrees of suspicious data), or the flags can be written separately. The former option offers convenience, while the latter allows the user to make the ultimate decisions about data quality. Using this information, data users can make informed decisions about the application of Mesonet data to their projects without the limitations of data that have been altered before being seen by the users.

The use of quality-assurance flags that accompany the data is analogous to the procedures described by Meek and Hatfield (1994) and Snyder and Pruitt (1992). Meek and Hatfield employ QA flags, which supplement but do not alter the data, to describe which test the data failed. Also, consistent with Meek and Hatfield, QA flags are used as a supplement rather than a substitute for regular field maintenance and calibration. Snyder and Pruitt use flags slightly differently, with QA flags being grouped into two general categories: informative and severe. Their severe flags correspond to missing data or inoperative sensors (our flags 9 and 8, respectively) or data out of range (our 3). The informative flags include data derived from other variables or aggregated data that contain one or more flagged hourly observations. They also include flags for data slightly out of range, corresponding to our suspect category, or data that could not be tested. The Mesonet does not include the not-tested category because all data undergo at least a range check.

Several components of the QA system refer to a qualparm table. This table is the Mesonet's living history of network performance and data confidence. Each column represents a specific Mesonet parameter; each row signifies a change in the QA status of one or more parameters at a specific time (see example in Fig. 3). The numerical values are synonymous with those of the Mesonet QA flags given in Table 2.

Because the qualparm table contains both historical and current information, it acts as a database of past network performance and a baseline for current QA processes. The table is manually produced and edited, which provides the opportunity to incorporate human judgment within automated QA processes. In addition to specifying errors in data, the qualparm table can be used to override decisions made by the automated QA (see flags 5 and 6 in Table 2). It has a strong influence on, but does not always dictate, the final QA flags assigned to each Mesonet observation.

4. Instrument calibration and comparison

The Oklahoma Mesonet operates a full-time laboratory calibration and field maintenance facility, staffed by an instrumentation specialist, a meteorological engineering technician, and five full-time electronics technicians. The laboratory provides a facility where Mesonet personnel can verify the calibration of sensors, compare instrument performance to other reference instruments, and test instrument and network design improvements.

The instrument laboratory is designed for two purposes. First, every sensor installed in the Mesonet must pass various calibration processes. The Mesonet's instrument laboratory provides an opportunity for independent verification of the manufacturer's calibration. No sensor is deployed to a field site without this verification. Second, the instrument laboratory provides feedback on the performance of instruments and suggests improvements for the network. The controlled environment of the laboratory and field tests from the Norman reference station provide side-by-side comparisons of reference and operational instrumentation. Technicians also test the effects of solar radiation, wind, temperature, and other atmospheric conditions on instruments, with the goal of improving instrument design.

a. Instrument calibration

Each Mesonet sensor undergoes a verification of its calibration coefficients at several points along its operational life cycle. The cycle consists of 1) a predeployment laboratory check of the sensor calibration ("pre-cal"), 2) an on-site instrument intercomparison when the sensor is installed, 3) periodic on-site intercomparisons during normal operation, and 4) a postfield laboratory check of the sensor calibration ("post-cal"). Sensors may be removed from the field because either a problem is detected or a sensor approaches a recommended "site-residence time."

Tests in the laboratory utilize high quality reference instruments (Richardson 1995). These are used to verify that all sensors are within the established inaccuracy limits specified for each instrument deployed in the Mesonet (Brock et al. 1995). The pre-cal verifies that each new sensor performs within these limits before it is deployed to a field site. Calibration coefficients, which are determined in the laboratory for rain gauges, pyranometers, and soil moisture sensors, are applied to these data as they are retrieved from the Mesonet archives. Because the coefficients are not programmed into the dataloggers, the datalogger code does not need to be revised with new coefficients when a sensor is replaced. In addition, this technique permits the adjustment of postprocessed data on a case-by-case basis, such as may be required to correct errors in calibration coefficients or adjust for sensor drift, without altering data stored in the archives.

Sensors remain in the field until problems are encountered, design upgrades are needed, or the sensor approaches its recommended site-residence time. Design modifications are needed, for example, if a design flaw is discovered, or if a manufacturer recommends an upgrade to its instruments. The sensor's residence time is the period beyond which a sensor typically begins to show symptoms of excessive drift. The limit is established based upon network experience with that type of instrument and manufacturer recommendations. Upon reaching the limit, the sensor is "rotated" back to the calibration laboratory for recertification and recalibration as opportunity and spare units become available.

Before any adjustments are made to a sensor on its return to the laboratory, the post-cal check attempts to determine how well that sensor was performing at the time of its removal from a site. The sensor is then cleaned and any repairs and modifications are made. At this point, a series of pre-cal tests and adjustments are employed until the sensor's performance falls within established inaccuracy limits. Results of pre-cal checks, instrument upgrades, and post-cal checks are recorded in a database for use by technicians and by the QA manager if questions arise about a specific sensor's performance. Ultimately, the sensor is reinstalled at a Mesonet site at the next opportunity and the residence time starts again.

b. Instrument design tests

The knowledge gained from the calibration laboratory and from tests using the reference station is used to shape instrument research and design upgrades to improve performance. Aside from direct instrument evaluations, field studies, such as examining heat flux, have also contributed new insights into the performance of the Mesonet. The following descriptions show how laboratory research on instrument and system designs has contributed to an improvement in instrument performance, and thus, to the quality of data recorded from the Mesonet.

1) RAIN GAUGE RETROFIT

An early "problem instrument" was the rain gauge (MetOne 099M tipping bucket with 0.25-mm resolution). By early 1994, serious shortcomings in the rain gauge's performance became evident, necessitating modifications to the gauge design. Technicians diagnosed significant splash effects and recommended several alterations, including replacing the perforated plate screen inside the catch funnel with a wire mesh screen and increasing the height of the catch funnel sides by 7.6 cm. These alteration criteria were shared with the manufacturer, and modifications were made to the gauge design.

Experience in the field showed additional problems with the tipping bucket and its bearings. Heating and cooling appeared to crack the bearings, dislocating buckets on some gauges. In addition, separation of the mercury in the tip switches caused problems of either apparent undertipping or double tipping. Technicians recommended redesigning the bearings, resoldering weak bucket seams, drilling "weep holes" on the underside of the bucket dividers, reshaping the bucket pivot, and changing from mercury switches to magnetic reed switches. These modifications were performed by Mesonet staff in the calibration laboratory. Many of these modifications were also adopted by the manufacturer.

2) RADIATION SHIELD DESIGN STUDIES

Richardson et al. (1999) studied the limitations of naturally ventilated solar radiation temperature shields. They showed that the characteristics of the temperature sensor were important in determining radiation heating effects (e.g., heating from solar radiation). Gill nonaspirated, multiplate radiation temperature shields (Gill 1979) are designed to protect air temperature sensors from solar radiation, but this may not be an optimal design for all temperature sensors. If the temperature sensor is highly reflective and has a small diameter, then a radiation shield that maximizes flow past the sensor is preferable. Such findings result in improved measurements when new instruments are chosen for system upgrades.

3) TEMPERATURE AND RELATIVE HUMIDITY SENSING PROBLEM

The Mesonet routinely measures temperature at 1.5 m using a Campbell/Vaisala HMP35C temperature and relative humidity sensor and at 9 m using a Thermometrics thermistor (TMM). For a study of heat flux, Brotzge and Crawford (2000) installed a TMM sensor at 1.5 m to minimize the influence of sensor characteristics on measurements between the two levels. During this study, the data from the TMM at 1.5 m were also compared with data from the existing HMP35C (Fredrickson et al. 1998). Temperature differences that ranged from 0.5° to nearly 2.0°C were discovered at eight of the nine test sites.

These temperature differences were consistent with those that would be expected from errors in sensor calibration. However, precalibration records, on-site intercomparisons, and other QA efforts indicated that all sensors were performing within expectations. Because the TMM sensors had recently been calibrated and installed, the HMP35C sensors from several sites were returned to the laboratory for additional calibration checks. These units again indicated no significant calibration problems. Reference sensors and calibration records were also reviewed to eliminate these as possible sources of error.

Because no errors could be found in the instrumentation or calibration records, technicians then conducted tests on the datalogger program code. They discovered that the errors in the HMP35C sensors resulted from measuring relative humidity before measuring temperature, for this particular combination of probe and logger code. Residual voltage from the relative humidity sensor circuit superimposed an extraneous signal upon the voltage from the temperature sensor. When temperature was sampled before relative humidity, no such effects were observed. The problem was not discovered earlier in the calibration process because the laboratory calibration program sampled temperature first, whereas all field and comparison test kits sampled humidity first. Thus, a minor modification to the datalogger's program code solved a problem that appeared to be attributable to sensor calibration errors.

5. Site visits: Intercomparison and maintenance

Field tests are performed by Mesonet technicians, each of whom is responsible for specific Mesonet sites. Each site is visited at least three times annually for general maintenance and on-site sensor intercomparisons. Although this maintenance schedule is less frequent than that of most automated networks (Meyer and Hubbard 1992), daily monitoring of data from the network makes this routine maintenance interval adequate to maintain the quality of the sensors at the field sites. Because of this daily monitoring, problems are detected quickly, and technicians may be dispatched for additional site visits as necessary.

General maintenance consists of evaluating the overall integrity of the site (e.g., structures, guy wires, adverse weathering) and trimming vegetation. Stanhill (1992) demonstrated the importance of frequently cleaning the outer hemisphere of the pyranometer. Extended periods between cleaning can result in as much as an 8.5% reduction in net solar radiation due to the deposition of dust and particles on the dome. Because of concerns about site security and integrity, only Mesonet technicians are allowed within the site enclosure. More frequent maintenance of general site characteristics, such as trimming grass and cleaning the pyranometer dome, raised concerns that inexperienced local personnel might disrupt soil temperature plots, accidentally loosen or cut wires, or dent the pyranometer dome during cleaning. In addition, with more than half the sites located on privately owned land, weekly visits to sites could strain relations with the site owners. Consequently, detection of pyranometer underreporting between technician site visits is left up to the Mesonet QA manager.

On-site sensor intercomparisons use a set of reference instruments identical to those at the site. These are typically performed for air temperature, relative humidity, barometric pressure, and solar radiation. The reference instruments are frequently recalibrated in the laboratory. The intercomparisons are performed using software that records data from both the operational and reference instruments in 1-min intervals. The technician subjectively selects a quiescent 5-min interval for calculation of the average sample error (Fig. 4). This minimizes the potential for meteorological transients to cause the technician to misdiagnose instrument performance, which could be more likely if only a single pair of observations were used. By using on-site intercomparisons each time a technician visits a site, the need for duplicate humidity sensors as recommended by Allen (1996) is mitigated.

The technician physically checks other sensors for obvious problems, either visually or audibly (e.g., listening for noisy bearings in the wind monitor). To test the rain gauge, the technician drips a known quantity of water from a specially designed reservoir into the gauge at a known constant rate. The gauge's response is compared to its calibration data from the laboratory records. The technician then partially disassembles the gauge; visually inspects, cleans, and reassembles it; and repeats the calibration test.

These on-site intercomparisons are not used to correct data. Instead, they provide performance checkpoints during the sensor's operational life cycle.
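As a rough illustration of the intercomparison arithmetic just described, the sketch below averages the operational-minus-reference error over a 5-min window of 1-min samples and compares it with an acceptable error bound. The window choice, bound, and data are hypothetical; this is not the Mesonet's intercomparison software.

```python
# Hedged sketch of an on-site intercomparison check: average the error
# between operational and reference sensors over a quiescent 5-min window
# of 1-min samples, then compare against an allowable bound.

def mean_intercomparison_error(operational, reference, start, window=5):
    """Mean (operational - reference) error over `window` 1-min samples
    beginning at index `start`."""
    diffs = [o - r for o, r in zip(operational[start:start + window],
                                   reference[start:start + window])]
    return sum(diffs) / len(diffs)

# Hypothetical 1-min relative humidity samples (%) recorded during a visit.
op_rh  = [78.2, 78.0, 77.9, 78.1, 78.3, 78.4, 78.2]
ref_rh = [75.1, 75.0, 74.9, 75.0, 75.2, 75.3, 75.1]

ALLOWED_RH_ERROR = 3.0   # assumed acceptance bound, in percent RH
err = mean_intercomparison_error(op_rh, ref_rh, start=1)
if abs(err) > ALLOWED_RH_ERROR:
    print(f"Average error {err:+.1f}% exceeds bounds; consider replacing the sensor.")
else:
    print(f"Average error {err:+.1f}% is within bounds.")
```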

FIG. 4. Display from on-site intercomparison software. Bottom panel: a time series of relative humidity observations from the operational instrument (dotted line) and the reference instrument (solid line). Top panel: the absolute error between the two sets of observations. Acceptable error bounds are indicated by the horizontal dotted lines. As a result of errors exceeding the allowable bounds, the technician replaced the sensor.

As a result of these visits, technicians may detect sensor problems that are difficult to ascertain by other components of the QA system. On the other hand, a site visit may reveal that a perceived QA problem is attributable to a real local effect.

During the time that a technician is present at a site, a special data flag is activated in the site datalogger. The flag is reported with the regular observations and is archived for future reference. Should questions arise concerning any aspect of data quality, it is possible to check the archives of QA flags to see if the presence of a technician could be a proximate cause of the suspicious data.

6. Automated quality assurance

During each overnight period, a series of automated QA techniques are applied to the previous day's data. Each observation is checked for validity by five separate test routines: range, step, persistence, spatial, and like-instrument comparison. The first three of these tests are similar to Meek and Hatfield's (1994) bounded-values, rate-of-change, and continual no-observed-change tests. Results from each test are then passed to a decision-making algorithm that incorporates the results from all tests into a single QA flag, similar to the Complex Quality Control method described by Gandin (1988). The resultant flag is then compared with the corresponding flag from the qualparm table (see section 3). The larger of these two values is retained and written to the daily QA files.

a. Automated QA routines

The range test is based upon a combination of performance specifications for each sensor and the annual climate extremes across Oklahoma. Each parameter has predetermined limits (Table 3). Any observation that occurs outside of the maximum or minimum allowable value is flagged as a "failure." The range test is dichotomous and is the only test capable of indicating a failure by itself.

The step test compares the change between successive observations. If the difference exceeds an allowed value, distinct for each parameter, the observation is flagged as a "warning." If either one of the data points used in the comparison is missing, the test indicates a null result for that pair; other tests determine which data point is missing. The step test has proven useful for detecting erroneous readings due to loose wires or datalogger problems.
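The range and step tests lend themselves to a compact illustration. The sketch below applies both to a single day's series for one parameter, using limits in the spirit of Table 3; the flag values follow Table 2, but the function names and data are hypothetical.

```python
# Hedged sketch of the range and step tests (flag values follow Table 2).
GOOD, SUSPECT, WARNING, FAILURE, MISSING = 0, 1, 2, 3, 9

def range_test(values, vmin, vmax):
    """Failure for any observation outside the allowable range."""
    flags = []
    for v in values:
        if v is None:
            flags.append(MISSING)
        elif v < vmin or v > vmax:
            flags.append(FAILURE)
        else:
            flags.append(GOOD)
    return flags

def step_test(values, max_step):
    """Warning when successive observations differ by more than max_step.
    A missing neighbor yields a null result for that pair."""
    flags = [GOOD] * len(values)
    for i in range(1, len(values)):
        prev, cur = values[i - 1], values[i]
        if prev is None or cur is None:
            continue
        if abs(cur - prev) > max_step:
            flags[i] = WARNING
    return flags

# Example: 1.5-m air temperature with the limits and step value from Table 3.
temps = [21.3, 21.4, 55.0, 21.5, None, 35.0]
print(range_test(temps, vmin=-30.0, vmax=50.0))  # [0, 0, 3, 0, 9, 0]
print(step_test(temps, max_step=10.0))           # [0, 0, 2, 2, 0, 0]
```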

TABLE 3. List of Mesonet parameters and threshold values for the various QA tests. A value of −999 indicates that a parameter is not checked by that test. Thresholds for wind and soil temperature for the like-instrument comparison test have not been determined; these tests are not operational.

Parameter              | Units  | Range (min/max) | Step | Persistence delta | Spatial std dev | Like-instrument threshold | Companion parameter
Relative humidity      | %      | 0/103    | 20  | 0.1  | 20.0  | −999 |
1.5-m air temperature  | °C     | −30/50   | 10  | 0.1  | 3.0   | 10   | TA9M
10-m wind speed        | m s⁻¹  | 0/60     | 40  | 0.0  | 5.0   | −999 | WS2M
10-m wind vector       | m s⁻¹  | 0/60     | 40  | 0.0  | 5.0   | −999 | WSPD
10-m wind direction    | deg    | 0/360    | 360 | 0.0  | 45.0  | −999 |
Direction std dev      | deg    | 0/180    | 90  | 0.1  | 60.0  | −999 |
Speed std dev          | m s⁻¹  | 0/20     | 10  | 0.1  | 5.0   | −999 |
Maximum wind speed     | m s⁻¹  | 0/100    | 80  | 0.1  | 10.0  | −999 |
Rainfall               | mm     | 0/508    | 25  | −999 | 50.0  | −999 |
Pressure               | mb     | 800/1050 | 10  | 0.1  | 1.5   | −999 |
Solar radiation        | W m⁻²  | −1/1500  | 800 | 0.1  | 400.0 | −999 |
9-m air temperature    | °C     | −30/50   | 10  | 0.1  | 3.0   | −999 |
2-m wind speed         | m s⁻¹  | 0/60     | 40  | 0.1  | 5.0   | −999 |
10-cm soil temp, sod   | °C     | −30/50   | 3   | 0.0  | 5.0   | −999 | TB10
10-cm soil temp, bare  | °C     | −30/50   | 3   | 0.0  | 5.0   | −999 |
5-cm soil temp, sod    | °C     | −30/55   | 5   | 0.0  | 5.0   | −999 | TS10
5-cm soil temp, bare   | °C     | −30/55   | 5   | 0.0  | 5.0   | −999 | TB10
30-cm soil temp, sod   | °C     | −30/50   | 2   | 0.0  | 5.0   | −999 |
Leaf wetness           | %      | 0/100    | 50  | −999 | −999  | −999 |
Battery voltage        | V      | 10/16    | 3   | 0.0  | 1.0   | −999 |
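In software, the entries of Table 3 naturally become a small configuration table keyed by parameter. The sketch below shows one possible encoding of a few rows; the short parameter identifiers, field names, and helper are illustrative, while the −999 sentinel for "not checked" mirrors the table.

```python
# Hedged sketch: a few rows of Table 3 encoded as a lookup table that the
# automated tests could consult. -999 mirrors the table's "not checked" value.
NOT_CHECKED = -999

QA_THRESHOLDS = {
    "TAIR": {"range": (-30.0, 50.0), "step": 10.0, "pers_delta": 0.1,
             "spatial_sd": 3.0, "like_thresh": 10.0, "companion": "TA9M"},
    "RELH": {"range": (0.0, 103.0),  "step": 20.0, "pers_delta": 0.1,
             "spatial_sd": 20.0, "like_thresh": NOT_CHECKED, "companion": None},
    "RAIN": {"range": (0.0, 508.0),  "step": 25.0, "pers_delta": NOT_CHECKED,
             "spatial_sd": 50.0, "like_thresh": NOT_CHECKED, "companion": None},
}

def is_checked(parameter, test):
    """True if a usable threshold is defined for this parameter and test."""
    value = QA_THRESHOLDS[parameter][test]
    return value is not None and value != NOT_CHECKED

print(is_checked("RAIN", "pers_delta"))      # False: rainfall has no delta check
print(QA_THRESHOLDS["TAIR"]["spatial_sd"])   # 3.0 deg C minimum standard deviation
```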

The persistence test checks an entire day's data from a single station, one parameter at a time. The mean and standard deviation for that parameter are calculated. If the standard deviation is below an acceptable minimum, the corresponding data values are flagged as either "suspect" or "warning," depending upon the departure of the standard deviation from the minimum threshold. The persistence test, because it uses aggregate statistics, cannot discern which observations within the time period are responsible for the offense. Consequently, all data values for that particular site and parameter receive the same suspect or warning flag.

The persistence routine also uses a "delta test" to check the largest difference between any pair of observations within a selected time range, usually 24 h. If the difference is less than a minimum acceptable change, all data values for that parameter for the station under consideration are flagged as suspect or warning, depending upon the departure of the delta from the threshold.

The more serious of the two flags from the persistence tests is retained and reported as the resultant flag by the persistence routine. The persistence test is useful for detecting damaged instruments or those "stuck" at a particular reading (e.g., due to icing conditions or faulty communications between an instrument and the datalogger).

The spatial test utilizes a one-pass Barnes objective analysis routine (Barnes 1964) to estimate a value for each observation. The test is designed to detect gross errors in individual observations; more subtle errors are identified through manual inspection. Thus a one-pass Barnes analysis provides reasonable estimates without placing too many demands upon computing resources. Observations are weighted according to their distance from the station being evaluated:

    Z_e = [ Σ_i w(r_i) z_i ] / [ Σ_i w(r_i) ],

where Z_e is the estimated value of a parameter at a particular station, z_i is each observation, and w(r_i) is the weight applied to the observed value, based on the distance r_i between the observation and the point being estimated. The weight decreases exponentially with distance from the station:

    w(r_i) = exp(−r_i² / k_o).

The weight parameter k_o is determined by the Barnes routine, based upon the mean station spacing within the network. The radius of influence is approximately 100 km for the Oklahoma Mesonet.

All stations within the radius of influence, except the station being evaluated and those stations identified as failures from the range test or as warnings or failures in the qualparm table, are used to calculate an estimated value. The mean, median, and standard deviation of the observations within the radius of influence are also calculated. In the central part of the Mesonet, 20–25 stations are typically used to determine the estimates. The difference between the observed value z_o and the estimated value is compared to the standard deviation σ of those observations:

    Δ = |Z_e − z_o| / σ.

Any observation whose difference exceeds twice the standard deviation (Δ > 2) is flagged as suspect; any difference that exceeds three times the standard deviation is flagged as warning. If fewer than six observations were used to determine the estimated value, no flag is set. Thus, stations in the Oklahoma panhandle are not included in the spatial analysis.

Standard deviations are used rather than absolute thresholds to allow increased variability during situations of large contrasts across the network (K. Brewster 1995, personal communication). This technique tolerates larger departures from estimated values for stations along a frontal boundary than for those stations in a nearly uniform air mass. To avoid flagging stations when the calculated standard deviation is small, a predetermined threshold for the standard deviation is used (Table 3). For example, if the standard deviation of air temperature for a subset of sites is 0.5°C, stations departing by more than 1.0°C from the estimated value would be flagged; however, a minimum standard deviation of 3.0°C ensures that only stations differing by more than 6.0°C are flagged. If a standard deviation of 5.0°C is noted across a frontal boundary, stations would have to depart from their estimated values by more than 10.0°C before being flagged. Although this procedure reduces the number of false alarms, small errors are difficult to detect under such circumstances.
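A compact sketch of the spatial test follows: it forms the Barnes-weighted estimate Z_e, normalizes the departure by the neighbors' standard deviation (with the Table 3 floor), and assigns suspect or warning flags. The distances, the value of k_o, and the data are hypothetical, and the neighbors are assumed to have already been screened against the range test and the qualparm table.

```python
# Hedged sketch of the spatial test: Barnes-weighted estimate, then a
# standard-deviation-normalized departure (flag values follow Table 2).
import math

GOOD, SUSPECT, WARNING = 0, 1, 2

def spatial_test(obs_value, neighbors, k_o, sigma_floor, min_neighbors=6):
    """neighbors: list of (distance_km, value) pairs already screened against
    the range test and the qualparm table. Returns a QA flag."""
    if len(neighbors) < min_neighbors:
        return GOOD                      # too few neighbors: no flag is set
    weights = [math.exp(-(r * r) / k_o) for r, _ in neighbors]
    values  = [z for _, z in neighbors]
    z_est = sum(w * z for w, z in zip(weights, values)) / sum(weights)
    mean  = sum(values) / len(values)
    sigma = math.sqrt(sum((z - mean) ** 2 for z in values) / len(values))
    sigma = max(sigma, sigma_floor)      # minimum standard deviation (Table 3)
    delta = abs(obs_value - z_est) / sigma
    if delta > 3.0:
        return WARNING
    if delta > 2.0:
        return SUSPECT
    return GOOD

# Example: 1.5-m air temperature, sigma floor of 3.0 deg C (Table 3), with a
# hypothetical k_o and neighbor observations within roughly 100 km.
neighbors = [(20.0, 21.0), (35.0, 21.4), (50.0, 20.8),
             (60.0, 21.6), (75.0, 21.2), (90.0, 20.9)]
print(spatial_test(30.5, neighbors, k_o=2500.0, sigma_floor=3.0))  # -> 2 (warning)
```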
The like-instrument test compares pairs of similar parameters, for example air temperature at 1.5 and 9.0 m. Any differences exceeding a specified threshold are flagged as suspect; differences more than twice the threshold are flagged as warnings. If one parameter in the like-instrument pair is bad, the like-instrument test cannot discern which observation is the culprit; both observations receive the flag. The final determination as to which one is bad is made in the decision-making algorithm. The like-instrument test also is used to identify good data that are flagged by other routines, such as instances of localized temperature inversions. Thresholds were determined via a climatological study of temperature inversions across Oklahoma (Fiebrich and Crawford 1998).

The determinations of each of these routines are sent to a decision-making algorithm. The algorithm first flags all missing observations as "missing data" in the final QA archive. The algorithm also notes those stations at which certain sensors are not installed and marks those observations with a flag of "never installed" in the QA archive.

Corrections are made to flags from the step test by combining them with the results from the spatial test. The step test sets a flag if the data series marks an abrupt change; it cannot discern whether this change is toward or away from "background" values. By combining the results with the spatial test, the step away from background can be increased in severity, while the one returning toward background can be decreased. The step and spatial tests combined yield a failure if an observation is flagged by both tests, or a suspect flag if it is flagged only by the step test. If the datum is flagged only by the spatial test, the spatial test flag is retained.

Flags from the like-instrument test are compared with the corresponding spatial test to determine which observation of the pair is questionable. If an observation is flagged by both the like-instrument and spatial tests, the final determination will be either a warning, if both routines indicate the observation is suspect, or a failure if one of the two routines indicates a more serious error. If an observation flagged by the like-instrument test exhibits spatial consistency with neighboring sites, it is considered good.

The like-instrument test is also used to compensate for shortcomings in the spatial test. Observations that pass the like-instrument test exhibit a consistency between the two parameters checked. If one of these observations is flagged by the spatial test, the cause may be attributable to a real, but localized, effect, such as a temperature inversion. To compensate for such effects, the spatial flag is downgraded to the next lower level in such instances.

After these adjustments have been made, the results from the individual tests are added together and then compared to the qualparm table value. If the qualparm value is five or six, the corresponding QA flag is either downgraded by one level or reset to zero (flags 5 and 6, respectively). For all other qualparm values, the greater of the QA flag or the qualparm value is retained in the QA archive.
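The flag-combination rules above can be summarized in a few lines of code. The sketch below is a simplified rendering of the decision logic as described, not the Mesonet's actual algorithm; the flag values follow Table 2 and the qualparm conventions.

```python
# Hedged, simplified sketch of the decision-making step that merges individual
# test results with the qualparm table (flag values follow Table 2).
GOOD, SUSPECT, WARNING, FAILURE = 0, 1, 2, 3
LIKELY_GOOD, KNOWN_GOOD = 5, 6      # qualparm override values

def combine_flags(range_f, step_f, spatial_f, like_f, qualparm):
    """Simplified rendering of the decision-making algorithm."""
    # A spatial flag on a datum that passed the like-instrument test may be a
    # real local effect (e.g., an inversion): downgrade it one level.
    if spatial_f and not like_f:
        spatial_f = max(spatial_f - 1, GOOD)

    # Step + spatial: both flagged -> failure; step alone -> suspect;
    # spatial alone -> keep the spatial flag.
    if step_f and spatial_f:
        step_spatial = FAILURE
    elif step_f:
        step_spatial = SUSPECT
    else:
        step_spatial = spatial_f

    # Like-instrument + spatial agreement: warning if both are merely suspect,
    # failure if either indicates something more serious.
    if like_f and spatial_f:
        like_spatial = WARNING if max(like_f, spatial_f) == SUSPECT else FAILURE
    else:
        like_spatial = GOOD     # spatially consistent data are kept as good

    flag = max(range_f, step_spatial, like_spatial)

    # The qualparm table can override or floor the automated result.
    if qualparm == KNOWN_GOOD:
        return GOOD
    if qualparm == LIKELY_GOOD:
        return max(flag - 1, GOOD)
    return max(flag, qualparm)

# Example: like-instrument and spatial tests both call the datum suspect.
print(combine_flags(GOOD, GOOD, SUSPECT, SUSPECT, qualparm=GOOD))  # -> 2 (warning)
```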
b. Using automated QA summary reports

Along with individual flags, the automated QA produces a summary report each morning (Fig. 5) that details data irregularities from the previous day. The report is automatically e-mailed to the QA manager, who scans it for potential new instrument problems and checks suspicious data to determine whether further action is warranted. An entry is included in a list of "flag counts" if more than 10% of a parameter's observations from a station for that day are indicated as suspect, or if one or more observations are listed as a warning or failure. The 10% suspect threshold prevents the QA manager from unnecessarily examining spurious data, allowing concentration on more serious problems.

The summary report also includes a listing of the station parameters that have the highest average departure from their estimated values. The ratios of the difference between the observed and estimated values to the standard deviation (Δ, from the spatial test) are averaged over the entire 24-h period. To make these results easier to interpret, the resulting number is scaled to 100, forming an Aggregate Score Index (ASI). An ASI score of 100 means that the average reading departs from its estimated value by twice the standard deviation of the sample. This value can be used to detect a parameter that does not immediately violate the Mesonet's automated QA standards but may be developing a bias that merits attention. ASI scores above 50 generally warrant close attention; those over 100 are usually associated with more serious problems that will likely appear in the flag counts in the top portion of the summary report. The top 10 ASIs are listed in the report in descending order.
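A short sketch of the ASI arithmetic, as described above: average the day's Δ values and scale them so that an average departure of two standard deviations maps to a score of 100. The scaling constant and the sample data are assumptions consistent with that description.

```python
# Hedged sketch of the Aggregate Score Index (ASI): the day's spatial-test
# ratios (delta = |observation - estimate| / sigma) are averaged and scaled so
# that an average departure of two standard deviations scores 100.

def aggregate_score_index(deltas):
    """deltas: the day's spatial-test ratios for one station and parameter."""
    valid = [d for d in deltas if d is not None]
    if not valid:
        return None
    return 50.0 * sum(valid) / len(valid)   # mean delta of 2.0 -> ASI of 100

# Example: a parameter drifting slowly away from its neighbors.
day_of_deltas = [1.1, 1.2, 1.0, 1.3, 1.4, 1.2]   # hypothetical 24-h sample
asi = aggregate_score_index(day_of_deltas)
print(f"ASI = {asi:.0f}")   # ~60: above 50, so it merits a closer look
```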

The high degree of variability in rainfall patterns poses a special problem for automated quality-assurance techniques. Because localized events may be detected by a single gauge, or areas of enhanced precipitation may fall between gauges, the Mesonet's automated quality assurance is not permitted to apply flags to rainfall. Instead, the 24-h accumulated precipitation value is compared to an estimated value for the site, using the same techniques as previously described. If a site reports less than 25% of the estimated value, provided that the estimate is at least 12.7 mm (0.5 in.), a report is included in the daily e-mail to the QA manager. Also, if the estimated value for the site is near zero, but the gauge indicates the occurrence of at least two bucket tips, a report is included to the QA manager. In the first instance, a low rainfall report may indicate a clogged rain gauge. In the second instance, water may accumulate and drip slowly through the obstruction over subsequent days, indicating a problem that is otherwise difficult to detect. When rainfall totals appear in the QA report, the QA manager compares them to WSR-88D reflectivity and precipitation estimates to make a subjective decision as to whether the rain gauge needs to be investigated by a technician. Suspect rainfall amounts are listed for a one-month period, making it easier to identify patterns of under- or overreporting rain gauges.
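The two rainfall screening rules described above reduce to a few comparisons. The sketch below encodes them with the thresholds given in the text (25% of the estimate, a 12.7-mm minimum estimate, and a two-tip minimum when the estimate is near zero); the working definition of "near zero" is an assumption.

```python
# Hedged sketch of the daily rainfall screening rules described in the text.
TIP_MM = 0.25            # gauge resolution per bucket tip (MetOne 099M)
NEAR_ZERO_MM = 0.5       # assumed working definition of a "near zero" estimate

def rainfall_reports(observed_mm, estimated_mm):
    """Return reasons, if any, to list this gauge in the QA manager's e-mail."""
    reasons = []
    # Rule 1: substantial estimated rainfall but the gauge caught very little.
    if estimated_mm >= 12.7 and observed_mm < 0.25 * estimated_mm:
        reasons.append("possible clogged gauge (observed < 25% of estimate)")
    # Rule 2: essentially no rain estimated, yet the bucket tipped at least twice.
    if estimated_mm < NEAR_ZERO_MM and observed_mm >= 2 * TIP_MM:
        reasons.append("tips with no estimated rain (possible slow dripping)")
    return reasons

print(rainfall_reports(observed_mm=2.0, estimated_mm=20.0))
print(rainfall_reports(observed_mm=0.5, estimated_mm=0.0))
print(rainfall_reports(observed_mm=18.0, estimated_mm=20.0))   # no report
```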

FIG. 5. An excerpt from the automated QA summary report for 21 Sep 1997. The letters in parentheses represent the current sensor status in the qualparm table (S, suspect; W, warning; F, failure), if not "good" (indicated by a dash). Pressure observations from HUGO and WYNO and 5-cm soil temperature at GOOD appear as new problems. Daily aggregates of departures from estimated values are listed in the ASI.

The automated QA is reapplied each day to each of the preceding seven days of data in order to incorporate data that may have been collected after the regular reporting time and to include the QA manager's revisions to the qualparm table. The automated QA procedure is also repeated on a monthly, three-monthly, and annual basis. This delay allows sufficient time to secure missing data and for additional revisions of qualparm values. For example, a problem detected via a QA analysis method will have the appropriate flag(s) updated in the qualparm table. By reapplying the automated QA after a one-month lag, these updated qualparm values yield more accurate QA flags, which become the permanent QA files.

7. Manual inspection

Subtle problems, such as instrument drift, are sometimes not recognizable through automated analysis. They may be obscured by a small bias in the observations. A thorough visual analysis of Mesonet data by a well-trained meteorologist normally helps identify these subtle problems. In addition, some mesoscale features (e.g., drylines entering the network) or localized phenomena (e.g., heatbursts) may be erroneously flagged by the automated analysis. Thus, manual inspection can also ensure that good data are not improperly flagged.

a. Real-time data monitoring

A number of techniques are used to monitor the quality of the data as they are collected. These include data-collection software, kiosk displays of color-filled contours of various parameters, meteograms, and Web pages.

Data-collection software keeps track of each observation from a station. If a station fails to respond to a request for data, the system queues a request for later collection, a process that is known as "hole collection." Although this process does not ensure the quality of the actual observations, it does a remarkably effective job at ensuring a complete dataset. The Mesonet maintains an archival rate of 99.9%. Thus, a researcher can be confident that requested data will have been collected and archived.

The QA manager may also monitor incoming Mesonet data during the day. A suite of real-time Web products is typically used to view current data. These include maps of raw Mesonet parameters, maps of derived parameters, 24-h meteogram time-series plots of various Mesonet parameters, and maps of 3-h trends for several parameters. The QA manager may also use real-time kiosks and, of course, the raw data files available upon data ingest. Using various Web pages, Mesonet operators also monitor the data as they arrive and report suspicious events to the QA manager.
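The "hole collection" process described earlier in this subsection is essentially a retry queue for missed observations. The sketch below is a deliberately simplified illustration of that bookkeeping, not the Mesonet's collection software; the polling function, station identifier, and retry limit are placeholders.

```python
# Hedged sketch of "hole collection": missed observation times are queued and
# re-requested until the record is complete. poll_station() is a placeholder.
from collections import deque

def collect_day(station, expected_times, poll_station):
    """Request every expected observation; re-queue any that are missed."""
    archive = {}
    holes = deque(expected_times)
    attempts = 0
    while holes and attempts < 10 * len(expected_times):   # avoid looping forever
        t = holes.popleft()
        value = poll_station(station, t)     # returns None if the call fails
        if value is None:
            holes.append(t)                  # try again later ("hole collection")
        else:
            archive[t] = value
        attempts += 1
    return archive

# Example with a flaky fake station that answers every other request.
calls = {"n": 0}
def flaky_poll(station, t):
    calls["n"] += 1
    return 20.0 + t if calls["n"] % 2 == 0 else None

print(collect_day("NRMN", expected_times=[0, 5, 10, 15], poll_station=flaky_poll))
```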

FIG. 6. Average dew point temperature at 1200 UTC for Aug 1997. The anomaly in southeast Oklahoma is caused by microscale effects. The anomaly in west-central Oklahoma prompted the QA manager to issue a trouble ticket.

b. Long-term analysis

Aside from inspection of raw data and various time-series plots, manual inspection also involves contoured maps that represent long-term (several weeks to several months) averages of selected parameters. Contours are determined by simple objective analysis techniques (typically a Barnes analysis), then interpreted through the eyes of the QA manager.

Subtle problems with parameters such as air temperature, dewpoint temperature, and wind speed can best be identified by an analysis of the average conditions at a specific time of day during the period of interest (Fig. 6). Other parameters, such as rainfall and solar radiation, lend themselves well to an analysis of cumulative totals during the period of interest. In either case, subtle problems typically appear as data irregularities that cannot be explained by the QA manager's meteorological knowledge and experience with local anomalies.

At a single Mesonet site where similar parameters are measured by several instruments (e.g., wind speed at two levels, or soil temperatures at a number of depths), comparison between instruments can help identify problem sensors. For instance, an objectively analyzed plot of the average difference between two sensors, in association with other maps, can be used to help pinpoint a sensor problem. This method is particularly useful with soil temperature probes because a traditional spatial analysis of soil temperature is limited by the localized nature of soil conditions. A full list of parameters that are evaluated via these visual inspections is given in Table 4.

Allen (1996) recommends several long-term tests to detect problems with data. Some of these tests, such as the maximum relative humidity and examination of the spatial consistency of wind speeds, are used routinely in the QA manager's monthly analyses. Several other techniques, such as comparing actual to estimated solar radiation, are presently under development for inclusion in both the automated QA routines and the monthly analyses. Other longer-term analyses, such as "double mass analysis" techniques, are only now becoming possible as the Mesonet develops a historical record long enough for these techniques to be successful.
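As a simple illustration of the long-term products described here, the sketch below averages a parameter at one observation time (e.g., 1200 UTC) over a month for each station, producing the kind of field that is then contoured and inspected by eye. The data structure and station names are hypothetical.

```python
# Hedged sketch of a long-term product: the monthly average of a parameter
# at one observation time for each station, later objectively analyzed and
# inspected by the QA manager.

def monthly_average_at_hour(records, hour_utc):
    """records: list of (station, hour_utc, value) tuples for one month.
    Returns {station: average value at the requested hour}."""
    sums, counts = {}, {}
    for station, hour, value in records:
        if hour != hour_utc or value is None:
            continue
        sums[station] = sums.get(station, 0.0) + value
        counts[station] = counts.get(station, 0) + 1
    return {stn: sums[stn] / counts[stn] for stn in sums}

# Hypothetical August dewpoint observations (deg C) at 1200 UTC.
august = [("ALTU", 12, 21.0), ("ALTU", 12, 20.6), ("TIPT", 12, 20.8),
          ("TIPT", 12, 20.4), ("WEAT", 12, 16.9), ("WEAT", 12, 17.1)]
print(monthly_average_at_hour(august, hour_utc=12))
# A station standing several degrees apart from its neighbors on such a map
# (as in Fig. 6) becomes a candidate for a trouble ticket.
```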

TABLE 4. List of parameters evaluated via long-term visual analyses. All products are monthly averages of the parameter at the indicated times. "Monthly" indicates averages, accumulations, or extrema of all data for that parameter during an entire month.

Parameter                                   | Time(s) of observation
Air temperature (1.5 m)                     | 0000, 0600, 1200, 1800 UTC
Dewpoint                                    | 0000, 0600, 1200, 1800 UTC
Air temperature (9 m)                       | 0000, 0600, 1200, 1800 UTC
Maximum relative humidity                   | Monthly maximum
Rainfall                                    | Monthly cumulative
Pressure (reduced to standard elevation)    | 0000, 0600, 1200, 1800 UTC
Solar radiation                             | 0000, 0600, 1200, 1800 UTC
Daily average solar radiation               | Monthly
Wind speed (10 m)                           | 0000, 0600, 1200, 1800 UTC
Wind direction, vector average (10 m)       | 0000, 0600, 1200, 1800 UTC
Wind speed (2 m)                            | 0000, 0600, 1200, 1800 UTC
Soil temperature, 10-cm sod                 | 0000, 0600, 1200, 1800 UTC
Soil temperature, 10-cm bare                | 0000, 0600, 1200, 1800 UTC
Soil temperature, 5-cm sod                  | 0000, 0600, 1200, 1800 UTC
Soil temperature, 5-cm bare                 | 0000, 0600, 1200, 1800 UTC
Soil temperature, 30-cm sod                 | 0000, 0600, 1200, 1800 UTC

Difference fields
Soil temperature, 10-cm sod minus 10-cm bare | Monthly average
Soil temperature, 5-cm sod minus 5-cm bare   | Monthly average
Soil temperature, 10-cm minus 5-cm, sod      | Monthly average
Soil temperature, 10-cm minus 5-cm, bare     | Monthly average

c. Case-by-case investigation

Not all suspect observations lend themselves to a quick conclusion as to whether or not the data should be flagged. Site-specific characteristics are often discovered where the automated QA and manual inspection indicate potential concern with the data. The QA manager thoroughly investigates these potential problems before sending a technician for a field comparison.

The QA manager's investigation includes creating time-series graphs depicting data from the site and parameter in question, along with data from neighboring sites. In some cases, when a parameter is highly variable across an area (e.g., soil temperature at 5 cm), the automated spatial test may not properly discern which among several sites has the suspect data. In these situations, flags may be erroneously placed on the wrong station. These techniques enable the QA manager to correctly identify questionable sensors.

The QA manager keeps a daily log of less serious problems so that sites producing short-lived suspicious data can be monitored for several days before notifying a technician. This strategy is especially useful for determining whether a pyranometer is reporting data spikes due to a loose wire or whether the spikes may be attributable to a bird using the sensor as a temporary perch. Using data from more than one rain event is also preferred when making a determination as to whether a flag is required on a seemingly over- or underreporting rain gauge.

A number of local effects also raise concerns about the quality of meteorological data. For instance, an agricultural field near a station can sometimes affect observations at a Mesonet site. Figure 7 displays the dependence of afternoon temperatures at the Altus site on wind direction. When winds had a northeasterly component, the temperatures at Altus and nearby Tipton were very similar. However, when winds traversed an irrigated farm located south and southwest of the Altus site, the temperature at Altus was as much as 3°C cooler than that observed at Tipton.

d. Keeping the good data

Daily inspection of the results tabulated by the automated QA analysis also allows the QA manager to track erroneously flagged data. The range test was thought to be least susceptible to producing erroneous QA flags. Nevertheless, the hot, dry summer of 1998 resulted in numerous 5-cm soil temperatures exceeding 50°C. With air temperatures approaching the mid-40°C range, the data were believed to be suspect at worst. After further investigation, the range test threshold for both of the 5-cm soil temperatures (under native sod and under bare soil) was increased to 55°C.

Thunderstorms also have caused numerous accurate observations to be flagged erroneously by the automated QA. Cold-air outflow boundaries and heatbursts oftentimes cause a station to be spatially inconsistent with respect to neighboring sites. Associated wind gusts and pressure dips create data failures during the step test. Visual inspection (such as that illustrated in Fig. 8) allows the QA manager to determine whether such inconsistencies may actually be real events.

The highly variable nature of rainfall necessitates that rainfall observations flagged as suspicious receive close scrutiny from the QA manager. Figure 9 illustrates how a lone shower can create havoc for an automated spatial QA routine. The 1-mm rainfall total observed at Broken Bow (southeast corner of Oklahoma) appeared as an overestimate because all neighboring sites reported no rain. By comparing with WSR-88D data for the area, the QA manager determined that this report was valid.

Logs of data failures detected by the automated QA system provide additional sources of feedback to the QA manager. First, one is able to identify situations in which QA test thresholds may be too stringent, such as with the soil temperatures. Second, such logs may indicate that the automated QA routines do not perform well under certain weather scenarios and need to be modified. Third, the QA manager may recognize a localized or unique weather event that was erroneously flagged by the automated QA. The qualparm table may be used to downgrade the severity of flags determined by the automated QA processes for the duration of the event (flag 5 or 6; see Table 2). For example, a thunderstorm outflow event may be indicated in the qualparm table such that the results of the other QA tests will be ignored and the final flag for the corresponding data will be reset to zero. The ability to downgrade QA flags was not yet operational at the time of this writing.

FIG. 7. Dependency of air temperature on wind direction for the Altus and Tipton Mesonet sites during 6-9 Aug 1998.

8. Communication within the QA system

Evolution of the Mesonet QA system includes a refinement of communication between all components of the QA system (Arndt et al. 1998). More efficient communication yields more accurate data flagging and a quicker response by Mesonet personnel to developing or existing data quality problems.

The QA manager is responsible for coordinating incoming QA information from a variety of sources (e.g., automated reports, visual inspection, technician input, data user reports), for determining the reality of an apparent problem, and for issuing a call-to-action in the form of a trouble ticket. Use of the Internet, cellular telephones, and paging technology speeds the response time to developing problems.

To keep Mesonet personnel informed about recent and current QA-related events, the QA manager prepares a monthly report, which summarizes instrument transactions and documents a subjective assessment of the network's performance (Fig. 10). The report provides a synopsis of QA concerns and actions during the previous month, and notes the overall status of data quality from the Mesonet. Significant observations of meteorological events are included in the discussion. The report is sent to interested parties via e-mail and a paper copy is retained for future reference.

a. The trouble ticket

The trouble ticket is the fundamental method used to report and record sensor problems, and to ensure and log their resolution. When a data problem is identified and determined to be legitimate, a trouble ticket is issued. The outgoing ticket contains information (station, parameter, description of problem, date/time of problem onset, urgency), which assists the Mesonet technician with assessment of the problem. The technician works the trouble ticket into the maintenance agenda based on the problem's urgency, solvability, and the availability of replacement sensors.

Using information from the trouble ticket and on-site analysis, the technician takes action to resolve the problem. The sensor may be serviced or replaced, or the technician may discover that the data problem is not attributable to the sensor. The trouble ticket carries the technician's in-field observations and decisions (method of fix, fix date/time, instrument transaction) to the QA manager. If necessary, the QA manager updates metadata files (containing information regarding instrument transactions and location ledgers) and the qualparm table.
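Because each ticket carries a fixed set of fields out to the technician and back to the QA manager, it maps naturally onto a simple record. The following sketch is only an illustration of that structure; the field and station/parameter identifiers are assumed for this example, and the actual Mesonet tickets are paper and e-mail forms rather than software objects.

    # Illustrative trouble-ticket record; field names are assumptions based on the
    # information listed in the text, not the Mesonet's actual ticket format.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class TroubleTicket:
        station: str                        # affected site (identifier assumed)
        parameter: str                      # affected parameter (identifier assumed)
        description: str                    # QA manager's description of the problem
        onset: datetime                     # date/time of problem onset
        urgency: str                        # e.g., "routine" or "urgent"
        # Completed by the technician upon resolution:
        fix_method: Optional[str] = None
        fix_time: Optional[datetime] = None
        instrument_transaction: Optional[str] = None

        def resolve(self, method, when, transaction):
            """Record the in-field action so metadata and qualparm files can be updated."""
            self.fix_method = method
            self.fix_time = when
            self.instrument_transaction = transaction

    ticket = TroubleTicket("EXAMPLE_SITE", "rainfall", "suspected overreporting gauge",
                           datetime(1998, 7, 14, 18, 0), "routine")
    ticket.resolve("gauge cleaned and recalibrated", datetime(1998, 7, 20, 15, 30),
                   "gauge swapped with laboratory spare")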

FIG. 8. Pressure spike and corresponding wind gust at the Seiling Mesonet site on 4 Jun 1998.

The trouble ticket is archived (electronically and to a paper file) because it contains valuable historical information (metadata) about a data problem from a specific sensor.

b. Internet communication in the QA system

When the QA manager enters the problem description for a trouble ticket, there is an option to send an electronic form of the trouble ticket via e-mail to the appropriate personnel. An electronic trouble ticket serves two purposes: 1) it alerts technicians who are based away from the central facility, and 2) maintenance actions can begin before a paper ticket reaches the technician.

Three World Wide Web (WWW) templates are available to Mesonet personnel. The first allows a Mesonet employee to report apparent problems to the QA manager and the technician with maintenance responsibilities at the affected station. Each category listed on a trouble ticket is duplicated on the WWW page (Fig. 11a). When the completed template is submitted, an e-mail message is sent to the QA manager, giving notice of the potential problem. The QA manager then decides whether to issue a formal trouble ticket.

Another template is used by technicians to report information stemming from the resolution of a trouble ticket (Fig. 11b). Through this template, a technician may relay the results of a station visit to the QA manager, allowing the QA manager to adjust qualparm settings to appropriate values long before the paper trouble ticket is processed.

A third template involves sensors that require instrument-specific calibration coefficients (rain gauges and pyranometers; not shown). When these instruments are relocated, calibration tables must be updated quickly so the proper coefficients can be immediately applied to the outgoing data. The WWW interface produces a formatted e-mail message, which details the instrument change.

An instrument "residence-time" database is another tool available via the WWW to assist Mesonet personnel (Fig. 12). A plot shows the time, in months, that each sensor of a given type has been on location. The QA manager often considers residence time when making decisions regarding questionable sensors and symptoms of instrument "drift." In addition, Mesonet technicians use these maps to plan site visitation itineraries, which improves the rotation schedule of Mesonet sensors.

Similarly, a list of unresolved trouble tickets is kept online for review by Mesonet personnel. This file helps technicians integrate lower-priority trouble tickets (which do not require an urgent trip) into their visitation schedules.
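Residence time itself is a simple quantity: the elapsed time, in months, since an instrument was installed at its current site. The sketch below shows one way such a summary could be produced; the installation-date records, site identifiers, and the 30.44-day average month are assumptions for illustration rather than details of the Mesonet database.

    # Sketch of a residence-time summary like the one plotted in Fig. 12.
    # Installation records and the average-month length are assumptions.
    from datetime import datetime

    AVG_DAYS_PER_MONTH = 30.44

    def residence_months(installed, today):
        """Elapsed time at the current site, expressed in average-length months."""
        return (today - installed).days / AVG_DAYS_PER_MONTH

    barometer_installs = {                  # hypothetical example records
        "SITE_A": datetime(1997, 3, 12),
        "SITE_B": datetime(1998, 11, 2),
    }

    today = datetime(1999, 6, 1)
    for site, installed in sorted(barometer_installs.items()):
        print(f"{site}: {residence_months(installed, today):5.1f} months on location")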
c. The use of cellular and paging technology

Oftentimes, new concerns about data quality arise while Mesonet technicians are away from the central facility or are otherwise unable to access an Internet connection. The use of cellular and alphanumeric paging technology maintains a reliable line of communication with these technicians. Frequently, a Mesonet operator will submit one of the WWW templates for a technician based on details given in a cellular phone conversation. Alphanumeric pagers are also used to send urgent messages to Mesonet personnel.

Each morning, an operator dispatches a brief report to the technicians via an alphanumeric pager. This report contains information about the status of observed problems with data from particular stations and includes documentation of any major power failures or communication breakdowns within the network during the previous 24 h. Technicians and the QA manager also periodically receive observations of severe wind speeds, which may cause damage at a Mesonet site, via their alphanumeric pagers.

9. Five years of quality-assurance experiences

As with any operational network, some factors cause frequent problems within the system. Lightning, spiders, insects, grass fires, gophers, and vandalism are several examples of the nemeses with which Mesonet personnel must contend. Experience over five years has shown that remote weather stations provide ideal habitats for a number of insects and animals. Spiders like to nest in the rain gauge funnels and insects find that pressure tubes provide nice housing. Burrowing animals chew through cables connected to the soil temperature probes, and, on occasion, have even been found to remove the probes themselves. Sadly, humans also have plagued a few of our sites, with vandalism and theft of equipment. Some of the notable events causing problems with the Mesonet include

- a horse using a temperature sensor radiation shield as a scratching post;
- tumbleweeds accumulating within the Mesonet enclosure, insulating soil temperatures;
- grass fires melting instruments and housings (but providing interesting data during the event);
- airborne sand and dust during a drought clogging rain gauges; and
- cattle getting inside the Mesonet enclosure, disturbing soil probes and knocking over a solar radiation tripod.

Quality assurance also has identified subtle effects that result from changes in the environment, and not a fault with the instruments. Soil temperature biases in monthly analyses have led technicians to discover problems with erosion and sterilant leaching from the bare soil plots. Technicians now routinely check the depth of the soil temperature sensors when they visit a site.

Another example of how the QA process feeds back into network design occurred in March 1995 when six Mesonet stations in southwest Oklahoma simultaneously reported barometric pressures that the QA manager suspected were too low. This anomaly was detected via visual inspection of a plot containing a 7-day average of sea level pressure at 1800 UTC. The simultaneous but slight underreporting by Mesonet stations in close proximity to each other revealed an unexpected design condition. Rain had frozen over the barometer port, sealing it from the outside atmosphere. As the air cooled inside the sealed box, the pressure decreased. Project staff decided that because this event represented a rare occurrence, a technical solution could introduce more serious problems; thus no alterations were made to the station design. This decision was, in part, based upon an earlier experience, in which a porous filter had been added to the barometer ports to prevent debris and insects from nesting in the tubes. The filters tended to absorb water and temporarily clog the tubes. Shortly after their addition to the network, the filters had to be removed.

Results from QA processes, combined with human experience and meteorological knowledge, can lead to the discovery of previously unknown microscale phenomena. One example is the relatively frequent indication of data problems, as detected by the automated spatial analysis routine, at the Medicine Park station in southwest Oklahoma during overnight periods. The station is located within an isolated mountain range, bordered by flat grasslands. Quite often, this station, because of its elevation, extends through the base of a strong temperature inversion, which otherwise goes unobserved at nearby Mesonet sites. During these events, the higher temperatures and lower moisture values at the Medicine Park site appear to be anomalous but in fact they are real. A consistency between temperature data from 1.5 and 9.0 m, as indicated by the like-instrument comparison test, is an effective method of identifying such "inversion pokers."

Anomalies in wind direction at sites along the Arkansas River in eastern Oklahoma (Sallisaw and Webbers Falls; Fig. 1) caught the attention of Mesonet staff. When winds were light across the region, these sites developed an east wind anomaly. Initially, Mesonet staff suspected a problem with the alignment of the wind sensors, but upon closer investigation, they discovered that the anomaly was real. A similar, but weaker, anomaly was discovered along the Red River in southeast Oklahoma.

The Ouachita Mountains in southeast Oklahoma have a pronounced effect on mesoscale weather patterns; they are large enough to sustain a mountain/valley circulation, as detected by the Mesonet site at Wister. Boundary layer flow during the afternoon is uniform with surrounding sites, but in the absence of large-scale forcing, the wind at night becomes light northerly. Obstacle flow also can be seen in the neighboring counties of Leflore, Pushmataha, and McCurtain.
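The like-instrument comparison mentioned above for the Medicine Park "inversion pokers" amounts to asking whether the independent 1.5-m and 9-m temperature sensors at a flagged station agree with each other; if they do, a single-instrument fault is unlikely and the spatial anomaly may be a real local phenomenon. A minimal sketch of that idea follows; the 1.0°C agreement tolerance is an assumed value chosen for illustration, not a documented Mesonet setting.

    # Minimal sketch of a like-instrument consistency check.  If the 1.5-m and
    # 9-m temperatures at a spatially flagged station agree within a tolerance,
    # an instrument fault is unlikely and the anomaly may be a real local event.
    # The tolerance value is an assumption made for this illustration.

    def flag_probably_real(t_1p5m, t_9m, spatially_flagged, tolerance_c=1.0):
        """Return True when a spatial-analysis flag is plausibly a real event."""
        levels_consistent = abs(t_9m - t_1p5m) <= tolerance_c
        return spatially_flagged and levels_consistent

    # Overnight case resembling Medicine Park: the site looks too warm and too dry
    # relative to its neighbors, yet its own two temperature sensors agree closely.
    print(flag_probably_real(t_1p5m=18.2, t_9m=18.6, spatially_flagged=True))   # True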

FIG. 9. A comparison of Mesonet rainfall reports with WSR-88D data for 14 Jul 1998. (a) Rainfall totals (in mm) for southeast Oklahoma for a 24-h period ending at 0000 UTC 15 Jul 1998. (b) WSR-88D base reflectivity from KSRX (Fort Smith, AR) on 14 Jul 1998 at 0044 UTC. The QA manager used the WSR-88D image to confirm that the report of precipitation at Broken Bow was accurate.

During 1998, Mesonet personnel discovered that heatbursts are a relatively common event. Heatbursts often affect only one or two stations, and appear as anomalously warm and dry conditions at night. During the 5-year period of 1994-98, more than 50 of these "rare" events were detected within Oklahoma (e.g., see MacKeen et al. 1998). Heatbursts invariably appear as data problems in the automated QA, but human judgment is able to override these indications and preserve a record of these events for future study.

FIG. 10. An excerpt from the QA manager's monthly report for Feb 1998. The QA manager includes a brief synopsis of the performance of the network followed by the status of trouble tickets, sorted by parameter.

10. Conclusions

The Oklahoma Mesonet's QA system represents a compilation of processes that range from simple testing to sophisticated analysis. Automated processes prompt human attention, while human experience shapes the design and interpretation of the automated processes. The system utilizes multiple techniques of data and instrument analysis. When applied individually, these components have unique strengths and weaknesses. When a concerted effort is made to compile their results and integrate that information via efficient communication, a comprehensive QA system emerges that accurately assesses the quality of both past and present data from the Oklahoma Mesonet.

Constant improvement of the QA system is necessary to better realize its goals of high quality data in both real time and in its archives. Because the Mesonet uses a complex QA scheme of instrument calibration, field comparisons, automated routines, and manual inspection, it is possible to use feedback from one component of the system to improve another component. For example, knowledge gained from postcalibration of sensors helps to optimize the desired residence time of certain instruments so that data quality is maximized and unnecessary sensor rotation is minimized. Postcalibration of sensors has proven to be beneficial when determining preinstallation calibration strategies. On-site instrument intercomparisons help validate or refute the QA manager's visual interpretations. This feedback, in turn, is important to the continual refinement of the automated QA routines.

It is important that the QA system be scrutinized as rigorously as the data to ensure continual improvements in the data quality of the archives and to achieve the optimum quality of real-time data. Thoughtful adjustments and additions to the QA system are sought to improve the accuracy and response time of the QA system. Ideally, improvements in the QA system should take advantage of the latest available technology and techniques in data analysis. However, it should be remembered that the purpose of these QA flags is to provide guidance to those who use data from the Oklahoma Mesonet. Because no data are altered, the user can make the final judgment as to the quality of the data.

FIG. 11. WWW pages used to streamline the communication of trouble tickets and instrument decisions: (a) trouble ticket submission page and (b) trouble ticket resolution submission page.

FIG. 12. Residence time (in months) of Mesonet barometers. The values in the panhandle represent two three-station clusters that are too closely spaced to be displayed at their proper geographic location.

The role of the experienced human is essential to the QA process. Many actions and decisions rely upon the knowledge and experience of the technicians, laboratory personnel, the QA manager, and external users. Human experience is critical when assessing the reality of a questionable event. Human interaction with the Mesonet QA system has resulted in the discovery of many subtle, but real, meteorological events, and has provided a capability to distinguish these phenomena from data problems.

Acknowledgments. The number of authors on this paper is an indication of the complexity of the QA process. Many people have contributed to producing quality data from the Oklahoma Mesonet. In particular, we would like to acknowledge Dr. Fred Brock, architect of the Mesonet QA procedures; Dale Morris, Putnam Reiter, Jared Bostic, and the Mesonet operators; our laboratory and field technicians, Ken Meyers, Gary Reimer, Bill Wyatt, David Grimsley, James Kilby, and Leslie Cain; and last but not least, our former QA managers: David Shellberg, Curtis Marshall, Jeff Basara, and Jerry Brotzge. We also appreciate the guidance provided by the Mesonet Steering Committee: Ken Crawford, Ron Elliott, Steve Stadler, Howard Johnson, Mike Eilts, Al Sutherland, and former members Chuck Doswell, Gerritt Cuperus, and Jim Duthie. The Oklahoma Mesonet was financed by funds from the Exxon Oil Overcharge Settlement Fund as administered by the Oklahoma Department of Commerce. We gratefully acknowledge the State of Oklahoma's continuing financial contribution to annual operational costs of the Mesonet.

APPENDIX

Data Quality Subcommittees Established by the Mesonet Steering Committee

a. Site Standards

Recommend technical standards by which potential sites are evaluated.

Membership:
Claude Duchon (Chair), University of Oklahoma, Norman
Ron Hanson, USGS, Oklahoma City
John F. Stone, Oklahoma State University, Stillwater
Steve Stadler, Oklahoma State University, Stillwater

b. Site Selection

Recommend appropriate specific sites and alternative sites, based on recommendations from the Site Standards Committee.

Membership:
Fred Brock (Chair), University of Oklahoma, Norman
Bill Bumgarner, WSR-88D Operational Support Facility, Norman
Les Showell, National Severe Storms Laboratory, Norman
John Lambert, National Weather Service, Norman
John Damicone, Oklahoma State University, Stillwater
Ron Elliott, Oklahoma State University, Stillwater

c. Parameter Selection

Recommend a prioritized list of parameters to be measured at each site consistent with overall project goals and resources.

Membership:
Ron Elliott (Chair), Oklahoma State University, Stillwater
Ron Alberty, WSR-88D Operational Support Facility, Norman
Dennis McCarthy, National Weather Service Forecast Office, Norman
Kathy Peter, USGS, Oklahoma City

d. Sensor Specification

Using input from the Parameter Selection Subcommittee, recommend the standards for sensor accuracy and reliability.

Membership:
Marv Stone (Chair), Oklahoma State University, Stillwater
Sam Harp, Oklahoma State University, Stillwater
Ken Brown, National Weather Service Forecast Office, Norman
Fred Brock, University of Oklahoma, Norman

e. Station Maintenance

Recommend the standards and strategy for meeting station maintenance requirements.

Membership:
Sam Harp (Chair), Oklahoma State University, Stillwater
Jerry Hunt, National Weather Service Forecast Office, Norman
Sherman Fredrickson, National Severe Storms Laboratory, Norman

f. Quality Assurance

Recommend the activities necessary to maintain data quality.

Membership:
Carroll Scoggins (Chair), Army Corps of Engineers, Tulsa
Sherman Fredrickson, National Severe Storms Laboratory, Norman
Claude Duchon, University of Oklahoma, Norman

g. Data Management

Develop and recommend data manipulation strategies to ensure quality data are available on demand in an efficient manner and format.

Membership:
Howard Johnson (Chair), Oklahoma Climatological Survey, Norman
W. S. Fargo, Oklahoma State University, Stillwater
Steve Stadler, Oklahoma State University, Stillwater
Wes Roberts, University of Oklahoma, Norman

REFERENCES

Allen, R. G., 1996: Assessing integrity of weather data for reference evapotranspiration estimation. J. Irrig. Drain. Eng., 122 (2), 97-106.
Arndt, D. S., M. A. Shafer, S. E. Fredrickson, and J. P. Bostic, 1998: Quality assurance at the Oklahoma Mesonet: A systems viewpoint. Preprints, 10th Symp. on Meteorological Observations and Instrumentation, Phoenix, AZ, Amer. Meteor. Soc., 349-354.
Barnes, S. L., 1964: A technique for maximizing details in numerical analysis. J. Appl. Meteor., 3, 396-409.
Brock, F. V., K. C. Crawford, R. L. Elliott, G. W. Cuperus, S. J. Stadler, H. L. Johnson, and M. D. Eilts, 1995: The Oklahoma Mesonet, a technical overview. J. Atmos. Oceanic Technol., 12, 5-19.
Brotzge, J. A., and K. C. Crawford, 2000: Estimating sensible heat fluxes from the Oklahoma Mesonet. J. Appl. Meteor., 39, 102-116.
Crawford, K. C., D. S. Arndt, D. J. Shellberg, and B. M. Brus, 1995: A comparison of differences in meteorological measurements made by the Oklahoma Mesonetwork at two co-located ASOS sites. Preprints, 11th Int. Conf. on Interactive Information and Processing Systems for Meteorology, Oceanography, and Hydrology, Dallas, TX, Amer. Meteor. Soc., 299-303.
Elliott, R. L., F. V. Brock, M. L. Stone, and S. L. Harp, 1994: Configuration decisions for an automated weather station network. Appl. Eng. Agric., 10 (1), 45-51.
Fiebrich, C. A., and K. C. Crawford, 1998: An investigation of significant low-level temperature inversions as measured by the Oklahoma Mesonet. Preprints, 10th Symp. on Meteorological Observations and Instrumentation, Phoenix, AZ, Amer. Meteor. Soc., 337-342.
Fredrickson, S. E., J. A. Brotzge, D. Grimsley, and F. V. Brock, 1998: An unusual Oklahoma Mesonet temperature sensor problem (nearly) slips by in-situ, real-time, short-term and long-term QA efforts . . . and why this could happen to you. Preprints, 10th Symp. on Meteorological Observations and Instrumentation, Phoenix, AZ, Amer. Meteor. Soc., 343-348.
Gandin, L. S., 1988: Complex quality control of meteorological observations. Mon. Wea. Rev., 116, 1137-1156.
Gill, G. C., 1979: Development of a small radiation shield for air temperature measurements on drifting buoys. NOAA Data Buoy Office Contract 01-7-038-827(11). [Available from NOAA Data Buoy Office, Bay St. Louis, MS 39520.]
MacKeen, P., D. L. Andra, and D. A. Morris, 1998: The 22-23 May 1996 heatburst: A severe wind event. Preprints, 19th Conf. on Severe Local Storms, Minneapolis, MN, Amer. Meteor. Soc., 510-513.
Meek, D. W., and J. L. Hatfield, 1994: Data quality checking for single station meteorological databases. Agric. Forest Meteor., 69, 85-109.
Meyer, S. J., and K. G. Hubbard, 1992: Nonfederal automated weather stations and networks in the United States and Canada: A preliminary survey. Bull. Amer. Meteor. Soc., 73, 449-457.
Richardson, S. J., 1995: Automated temperature and relative humidity calibrations for the Oklahoma Mesonetwork. J. Atmos. Oceanic Technol., 12, 951-959.
Richardson, S. J., F. V. Brock, S. R. Semmer, and C. Jirak, 1999: Minimizing errors associated with multiplate radiation shields. J. Atmos. Oceanic Technol., 16, 1862-1872.
Shafer, M. A., T. Hughes, and J. D. Carlson, 1993: The Oklahoma Mesonet: Site selection and layout. Preprints, Eighth Symp. on Meteorological Observations and Instrumentation, Anaheim, CA, Amer. Meteor. Soc., 231-236.
Snyder, R. L., and W. O. Pruitt, 1992: Evapotranspiration data management in California. Irrigation and Drainage, Saving a Threatened Resource: In Search of Solutions, T. E. Engman, Ed., American Society of Civil Engineers, 128-133.
Stanhill, G., 1992: Accuracy of global radiation measurements at unattended, automatic weather stations. Agric. Forest Meteor., 61, 151-156.
Wade, C. G., 1987: A quality control program for surface mesometeorological data. J. Atmos. Oceanic Technol., 4, 435-453.