The Use of Paradata to Monitor and Manage Survey Data Collection


Section on Survey Research Methods – JSM 2010

The Use of Paradata to Monitor and Manage Survey Data Collection

Frauke Kreuter, Mick Couper, Lars Lyberg

Abstract: Paradata are data about the survey data collection process captured automatically during computer-assisted data collection; they include call records, interviewer observations, time stamps, keystroke data, travel and expense information, and other data captured during the process. Increasingly, such data are being used in real time to both monitor and manage large-scale data collection processes. In this paper we use a statistical process control perspective to describe how such data can be used to monitor the survey process. Process control charts and statistical models can identify areas of concern during data collection and can lead to further investigation of the problem and (if necessary) intervention. We describe the data and analyses that are available and present several case studies of paradata use in different types of surveys and organizations.

Key Words: Paradata, Process Control, Survey Management

1. Introduction

Researchers around the world nowadays use computer-assisted methods in one form or another to collect survey data. Such computer assistance is obvious in web surveys, but it is equally present in telephone surveys supported by automated call schedulers and in mail surveys that take advantage of logs provided by postal services. All of these systems produce auxiliary data about the survey process as by-products. While discussing the use of keystroke data, Couper (1998) originally coined the term paradata as a general notion for such by-product process data produced by a computer-assisted data collection system.
Since then survey methodologists have broadened the paradata concept to other aspects of the survey process and other modes of collection (Couper and Lyberg, 2005; Scheuren, 2005; O'Reilly, 2009), to include call records, interviewer observations, time stamps, travel and expense information, and other data captured during the process. In fact, paradata are a subset of the broader class of process data that in some agencies have been used for decades to control various survey processes, including, for example, listing, data collection, data capture, editing, coding, and printing (Mudryk et al., 2001; Minton, 1969; Lyberg, 1981). Data about processes allow statistically informed evaluation, monitoring, and management of those processes. And as the survey world moves towards the implementation of quality metrics and measures for quality improvements and cost savings (Biemer and Caspar, 1994; Lyberg et al., 1997; Aitken et al., 2004; Couper and Lyberg, 2005), such data become valuable tools. Both data users and data producers are aware of the potential benefits of paradata, and this has been reflected in several sessions not only at the Joint Statistical Meetings (JSM), but also at the biennial conferences of the European Survey Research Association (ESRA) as well as the Quality conferences co-organized by Eurostat.

Author affiliations: Frauke Kreuter, JPSM University of Maryland, USA & IAB, Germany; Mick Couper, University of Michigan, USA & JPSM University of Maryland, USA; Lars Lyberg, University of Stockholm, Sweden.

In this paper we will first review a set of increasingly common uses of paradata following survey data collection, and then discuss the possibilities of a more concurrent use of paradata in the spirit of statistical process control.
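To make the statistical process control idea concrete, the sketch below applies a Shewhart-style p-chart to a daily survey metric. This is an illustrative assumption, not an analysis from the paper: the choice of daily cooperation rate as the metric, the fixed number of contacts per day, the 3-sigma limits, and all data are hypothetical.

```python
# A minimal p-chart sketch: flag field days whose cooperation rate falls
# outside 3-sigma control limits computed from the overall mean rate.
import math

def p_chart_limits(rates, n_per_day):
    """Center line and 3-sigma control limits for daily proportions."""
    p_bar = sum(rates) / len(rates)                    # overall mean rate
    sigma = math.sqrt(p_bar * (1 - p_bar) / n_per_day) # binomial std. error
    return p_bar, max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)

def out_of_control(rates, n_per_day):
    """Indices of days whose rate lies outside the control limits."""
    _, lcl, ucl = p_chart_limits(rates, n_per_day)
    return [i for i, r in enumerate(rates) if r < lcl or r > ucl]

# Hypothetical daily cooperation rates for 10 field days, 200 contacts/day.
daily_rates = [0.52, 0.55, 0.50, 0.53, 0.54, 0.51, 0.49, 0.56, 0.31, 0.52]
print(out_of_control(daily_rates, 200))  # → [8]: day 9 (index 8) is below the lower limit
```

A day flagged this way would not itself identify the cause; as the paper describes, it would trigger further investigation and, if necessary, intervention.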
Examples, primarily from the National Survey of Family Growth, will be given to show how process control charts and statistical models can be used to identify areas of concern during data collection and can lead to further investigation of the problem and (if necessary) intervention. Web and telephone survey examples for monitoring and interventions will also be discussed.

2. Paradata and their Post-Survey Use

To date, most examples in which paradata have been used can be characterized as post-survey assessments or post-survey corrections of errors common in the survey process. Kreuter and Casas-Cordero (2010) modified the display of the total survey error framework (Groves et al., 2004) to visualize the various paradata used to address each of the error sources (see Figure 1). Paradata from call records, which typically include the timing of contact (day and time) as well as the outcome of a call (non-contact, refusal, ineligible, interview, appointment, etc.), are, for example, often used to evaluate and reduce nonresponse error (Heerwegh et al., 2007; Blom et al., 2010). Call record data are available for both respondents and nonrespondents in a survey and thus are prime candidates for studying nonresponse bias. Using such call record data in post-survey level-of-effort analyses, early respondents are compared to late respondents under the assumption that late respondents are more similar to nonrespondents than early respondents (Stinchcombe et al., 1981; Smith, 1984; Schnell, 1998; Kennickell, 1999; Chiu et al., 2001; Duhart et al., 2001; Lynn et al., 2002; Lynn, 2003; Wang et al., 2005; Stoop, 2005; Voogt and Saris, 2005; Billiet et al., 2007); for a meta-analysis of the results see Olson (2010). With the goal of assessing net quality gains, researchers use call record data to examine the relationship between nonresponse and measurement error (Green, 1991; Yan et al., 2004; Olson, 2006; Peytchev and Peytcheva, 2007; Yu and Yan, 2007).
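A level-of-effort analysis of the kind cited above can be sketched in a few lines. Everything here is a hedged illustration: the record layout, the cutoff of three call attempts separating "early" from "late" respondents, and the survey variable `y` are assumed for the example and are not taken from any of the cited studies.

```python
# Compare early vs. late respondents on a survey variable, under the
# assumption that late respondents resemble nonrespondents.

def mean(xs):
    return sum(xs) / len(xs)

def level_of_effort_split(records, cutoff=3):
    """Split respondent records by number of call attempts and
    return the mean of a survey variable in each group."""
    early = [r["y"] for r in records if r["calls"] <= cutoff]
    late = [r["y"] for r in records if r["calls"] > cutoff]
    return mean(early), mean(late)

# Hypothetical respondent records: call attempts and a survey outcome y.
respondents = [
    {"calls": 1, "y": 0.9}, {"calls": 2, "y": 0.8}, {"calls": 2, "y": 1.0},
    {"calls": 5, "y": 0.4}, {"calls": 7, "y": 0.5}, {"calls": 6, "y": 0.3},
]
early_mean, late_mean = level_of_effort_split(respondents)
print(round(early_mean, 2), round(late_mean, 2))  # → 0.9 0.4
```

A large gap between the two means, as in this toy example, would suggest that nonrespondents may also differ on `y`, which is precisely the inference the cited level-of-effort studies probe.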
In other applications, call record data from one survey are used to determine optimal call schedules for subsequent surveys or survey waves (Weeks et al., 1987; Greenberg and Stokes, 1990; Stokes and Greenberg, 1990; Matsuo et al., 2006). The possibility of using call record data for nonresponse adjustment has been discussed for quite some time (Drew and Fuller, 1980; Potthoff et al., 1993; Groves and Couper, 1998), and current papers demonstrate the relationship between detailed information in call records and the probability of responding to a survey request (Beaumont, 2005; Biemer and Wang, 2007; Blom, 2009; Kreuter and Kohler, 2009). A second set of data subsumed under the concept of paradata is collected during the initial phase of establishing contact and convincing sample units to participate in the survey. These paradata are observations made by the interviewer. Interviewers are charged with collecting observations of neighborhoods and housing unit characteristics in a number of surveys, usually along the lines suggested by Campanelli et al. (1997), Groves and Couper (1998), and Lynn (2003). Other sets of interviewer observations made at the doorstep are those capturing the interaction between interviewer and respondent and respondents' reasons for refusal (Campanelli et al., 1997; Bates and Piani, 2005; Bates et al., 2008).
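The call-scheduling use of call records can be illustrated with a simple tabulation: estimate contact rates per calling window from prior-wave records, then prefer the best windows in the next wave. The record fields (`day`, `time`, `contact`) and the data are assumptions made for this sketch, not the design of any cited study, which typically use far richer models.

```python
# Estimate per-window contact rates from prior call records, in the
# spirit of using one wave's call records to schedule the next.
from collections import defaultdict

def contact_rates_by_window(call_records):
    """Proportion of calls yielding contact, per (day-type, time-of-day) window."""
    attempts = defaultdict(int)
    contacts = defaultdict(int)
    for rec in call_records:
        window = (rec["day"], rec["time"])
        attempts[window] += 1
        contacts[window] += rec["contact"]   # 1 = contact, 0 = no contact
    return {w: contacts[w] / attempts[w] for w in attempts}

# Hypothetical prior-wave call records.
records = [
    {"day": "weekday", "time": "morning", "contact": 0},
    {"day": "weekday", "time": "morning", "contact": 0},
    {"day": "weekday", "time": "evening", "contact": 1},
    {"day": "weekday", "time": "evening", "contact": 1},
    {"day": "weekend", "time": "morning", "contact": 1},
    {"day": "weekend", "time": "morning", "contact": 0},
]
rates = contact_rates_by_window(records)
best = max(rates, key=rates.get)  # window with the highest contact rate
print(best, rates[best])
```

In production, a scheduler would weight these rates by cost and interviewer availability rather than simply picking the maximum.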
Interviewer observations of variables close to the survey content, such as the presence of children in a fertility survey (Lepkowski et al., 2010) or indicators of age or disability in a health survey (Couper et al., 1994; Groves et al., 1995), can complement call record data in response propensity models due to their likely stronger relationship to survey variables of interest (Kreuter et al., 2010), although difficult issues in modeling may arise when strong predictors of response are combined with strong predictors of survey outcome variables (Kreuter and Olson, 2010). Audio recordings of the interaction between interviewer and respondent, and in particular the vocal characteristics of the respondent and interviewer, are a third set of paradata used to examine survey nonresponse (Groves et al., 2007; Best et al., 2009) as well as measurement error (Jans, 2010). Recordings are becoming more common as digital storage becomes less expensive (Couper, 2005; Herget et al., 2005; Thissen et al., 2007). However, the post-processing of such recordings into usable paradata is a large task and has been undertaken in only a few methodological studies (Jans, 2010; Conrad et al., 2010).

[Figure 1: Total Survey Error components and paradata for their assessment (Kreuter and Casas-Cordero, 2010). The diagram pairs each error source with paradata for its assessment: listing information (time stamps, driving data) for coverage error; keystrokes (response times, back-ups, edits) and vocal characteristics (pitch, disfluencies) for measurement error; contact data and interviewer observations (day/time, proxy measures, housing unit characteristics) for nonresponse error; and keystroke edit failures for processing error.]
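A response propensity model combining a call-record feature with an interviewer observation, as discussed above, might be sketched as follows. This is a toy logistic regression fit by plain gradient descent; the predictors (number of prior call attempts, an observed presence-of-children indicator), the data, and the fitting choices are all assumptions for illustration, not the specification used in the cited propensity studies.

```python
# Toy response propensity model: logistic regression on one call-record
# feature and one interviewer observation, fit by per-record gradient ascent.
import math

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Fit intercept + slope weights by maximizing the Bernoulli likelihood."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(steps):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            w[0] += lr * (yi - p)                    # intercept update
            for j, xj in enumerate(xi):
                w[j + 1] += lr * (yi - p) * xj       # slope updates
    return w

def propensity(w, xi):
    """Predicted probability of responding for one case."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 / (1 + math.exp(-z))

# Columns: prior call attempts, children observed (1/0); outcome: responded.
X = [[1, 1], [2, 1], [6, 0], [7, 0], [2, 1], [5, 0]]
y = [1, 1, 0, 0, 1, 0]
w = fit_logistic(X, y)
# Few attempts + children observed should yield a higher propensity.
print(propensity(w, [1, 1]) > propensity(w, [7, 0]))  # → True
```

The modeling caveat in the text applies directly here: if both predictors are also strong predictors of the survey outcome, using the resulting propensities for adjustment raises the difficulties discussed by Kreuter and Olson (2010).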
Most data collection software records the time needed to complete a question, a set of questions, or the whole interview (response times), and captures keystrokes, with which researchers can, for example, measure how often a respondent backed up and changed an answer and whether supplementary definitions were used (Couper, 1998). Response times and keystroke measures have been used to study aspects of the response process (Bassili and Fletcher, 1991; Kreuter, 2002; Heerwegh, 2003; Kaminska and Billiet, 2007; Yan and Tourangeau, 2008; Couper et al., 2009; Lenzner et al., 2009; Peytchev, 2009), to evaluate interviewers (Couper et al., 1997; Mockovak and Powers, 2008), and to review the performance of questions in pretests (Couper, 2000; Stern, 2008; Hicks et al., 2009). The examples given here are just a subset of what can be found in the literature. However,