Examples of Survey Methodological Reporting That Are Consistent with AAPOR Transparency Initiative Standards


Transparency Initiative Coordination Committee
American Association for Public Opinion Research
Revised August 1, 2015

INTRODUCTION

Participation in AAPOR's Transparency Initiative involves willingness to provide the items listed below in any reports of survey research results that are publicly released, or to make them publicly available, preferably on the organization's web site, immediately upon release of that report. These items correspond to the first part of Section III.A of AAPOR's Code of Professional Ethics and Practices, which was revised in May 2015. This document provides several annotated examples of methodological reporting that includes each of these elements:

Item 1: Who sponsored the research study and who conducted it. If different from the sponsor, the original sources of funding will also be disclosed.

Item 2: The exact wording and presentation of questions and response options whose results are reported. This includes preceding interviewer or respondent instructions and any preceding questions that might reasonably be expected to influence responses to the reported results.

Item 3: A definition of the population under study and its geographic location.

Item 4: Dates of data collection.

Item 5: A description of the sampling frame(s) and its coverage of the target population, including mention of any segment of the target population that is not covered by the design. This may include, for example, exclusion of Alaska and Hawaii in U.S. surveys; exclusion of specific provinces or rural areas in international surveys; and exclusion of non-panel members in panel surveys. If possible, the estimated size of non-covered segments will be provided. If a size estimate cannot be provided, this will be explained. If no frame or list was utilized, this will be indicated.

Item 6: The name of the sample supplier, if the sampling frame and/or the sample itself was provided by a third party.

Item 7: The methods used to recruit the panel or participants, if the sample was drawn from a pre-recruited panel or pool of respondents.

Item 8: A description of the sample design, giving a clear indication of the method by which the respondents were selected, recruited, intercepted or otherwise contacted or encountered, along with any eligibility requirements and/or oversampling. If quotas were used, the variables defining the quotas will be reported. If a within-household selection procedure was used, this will be described. The description of the sampling frame and sample design will include sufficient detail to determine whether the respondents were selected using probability or non-probability methods.

Item 9: Method(s) and mode(s) used to administer the survey (e.g., CATI, CAPI, ACASI, IVR, mail survey, web survey) and the language(s) offered.

Item 10: Sample sizes (by sampling frame if more than one was used) and a discussion of the precision of the findings. For probability samples, the estimates of sampling error will be reported, and the discussion will state whether or not the reported margins of sampling error or statistical analyses have been adjusted for the design effect due to weighting, clustering, or other factors. Disclosure requirements for non-probability samples are different because the precision of estimates from such samples is a model-based measure (rather than the average deviation from the population value over all possible samples). Reports of non-probability samples will only provide measures of precision if they are accompanied by a detailed description of how the underlying model was specified, its assumptions validated and the measure(s) calculated. To avoid confusion, it is best to avoid using the term "margin of error" or "margin of sampling error" in conjunction with non-probability samples. (A brief worked illustration of the design-effect adjustment appears at the end of this section.)

Item 11: A description of how the weights were calculated, including the variables used and the sources of weighting parameters, if weighted estimates are reported.

Item 12: Contact for obtaining more information about the study.

Section III.D of the AAPOR Code is also applicable:

Item 13: If the results reported are based on multiple samples or multiple modes, the preceding items will be disclosed for each.

In addition, other items included in the second part of Section III.A of the AAPOR Code are to be released within 30 days of any request for those materials, and Sections III.E-F are also to be observed. Finally, the revised AAPOR Code also includes disclosure standards for qualitative research and for content analysis, in Sections III.B and III.C, respectively. These are considered in separate documents.
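As a rough illustration of the adjustment contemplated in Item 10 (not part of the AAPOR item text itself): for a proportion p estimated from n interviews, one common formulation of the weighting-adjusted 95% margin of sampling error, using Kish's approximation for the design effect, is

```latex
\mathrm{MOE} = z_{0.975}\,\sqrt{\mathit{deff}\cdot\frac{p(1-p)}{n}},
\qquad
\mathit{deff} \approx 1 + \mathrm{cv}^{2}(w) = 1 + \frac{s_{w}^{2}}{\bar{w}^{2}}
```

where cv(w) is the coefficient of variation of the weights. For example, with n = 1,000, p = 0.5, and deff = 1.3, the margin of error grows from about ±3.1 percentage points unadjusted to 1.96 × √(1.3 × 0.25/1000) ≈ ±3.5 points; a report quoting ±3.1 without disclosing the design effect would understate its uncertainty.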
The examples provided below are intended to emphasize that a wide variety of formats can be used to report these disclosure elements and comply with Transparency Initiative requirements.

Example 1: Dual Frame Landline/Cell Phone Sample Design
Example 2: Dual Frame Landline/Cell Phone Sample Design
Example 3: Dual Frame Landline/Cell Phone Sample Design
Example 4: Mixed Mode Survey for a Dual Frame Address Based Sample Combined with a Landline RDD Sample
Example 5: Web Survey
Example 6: Area Probability Face-to-Face Survey

EXAMPLE 1
DUAL FRAME LANDLINE/CELL PHONE SAMPLE DESIGN

ABOUT THE SURVEY

The analysis in this report is based on telephone interviews [9] conducted (INCLUSIVE DATES) [4] among a national sample of adults, 18 years of age or older, living in all 50 U.S. states and the District of Columbia [3] (NNNN respondents were interviewed on a landline telephone, and NNNN were interviewed on a cell phone, including NN who had no landline telephone) [10]. The survey was conducted by interviewers at (ORGANIZATION NAME) [1]. Interviews were conducted in English and Spanish [9]. The questionnaire used in this survey is available at: www.xxxxx.com/qs. [2]

A combination of landline and cell phone random digit dial samples was used [5]. As the proportion of Americans who rely solely or mostly on cell phones for their telephone service continues to grow, sampling both landline and cell phone numbers helps to ensure that our surveys represent all adults who have access to either (only about 2% of households in the U.S. do not have access to any phone).

Both the landline and cell phone samples were provided by (NAME OF VENDOR) [6]. The design of the landline sample ensures representation of both listed and unlisted numbers (including those not yet listed) by using random digit dialing [8]. This method uses random generation of the last two digits of telephone numbers selected on the basis of the area code, telephone exchange, and bank number. A bank is defined as 100 contiguous telephone numbers, for example 800-555-1200 to 800-555-1299. The telephone exchanges are selected to be proportionally stratified by county and by telephone exchange within the county. Only banks of telephone numbers containing three or more listed residential numbers are selected.

The cell phone sample is drawn through systematic sampling from dedicated wireless banks of 100 contiguous numbers and from shared-service banks with no directory-listed landline numbers (to ensure that the cell phone sample does not include banks that are also included in the landline sample). The sample is designed to be representative both geographically and by large and small wireless carriers.
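The list-assisted procedure just described, screening 100-banks on their count of listed residential numbers and then generating the final two digits at random, can be sketched in a few lines of Python. This is a toy illustration, not any vendor's actual procedure; the seed numbers and function name are hypothetical, and a real implementation would add the exchange-level stratification described above.

```python
import random
from collections import Counter

def rdd_sample(listed_numbers, n_draws, min_listed=3, seed=42):
    """Draw a toy list-assisted RDD sample.

    A 'bank' is the first 8 digits of a 10-digit number, i.e. 100
    contiguous numbers (e.g., 800-555-1200 through 800-555-1299).
    Banks with at least `min_listed` directory-listed residential
    numbers are eligible; within an eligible bank the last two digits
    are generated at random, so unlisted and newly listed numbers
    can still be reached.
    """
    rng = random.Random(seed)
    bank_counts = Counter(num[:8] for num in listed_numbers)
    eligible_banks = [b for b, k in bank_counts.items() if k >= min_listed]
    return [rng.choice(eligible_banks) + f"{rng.randrange(100):02d}"
            for _ in range(n_draws)]

# Hypothetical directory-listed numbers: three in one bank, one in another.
listed = ["8005551201", "8005551234", "8005551267", "8005559901"]
print(rdd_sample(listed, n_draws=5))
# Only the 80055512xx bank qualifies, so all five draws come from it.
```

Reaching unlisted numbers is the point of the method: the directory listings only identify working residential banks, while the random final digits cover every number inside them.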
Both the landline and cell samples are released for interviewing in replicates, which are small random samples of each larger sample. Using replicates to control the release of telephone numbers ensures that the complete call procedures are followed for all numbers dialed. The use of replicates also improves the overall representativeness of the survey by helping to ensure that the regional distribution of numbers called is appropriate.

Respondents in the landline sample were selected by asking at random for the youngest adult male or female who is now at home. If there is no eligible person of the requested gender at home, interviewers ask to speak with the youngest adult of the opposite gender who is now at home. Interviews in the cell sample were conducted with the person who answered the phone, if that person was an adult 18 years of age or older [8].

At least 7 attempts were made to complete an interview at every sampled telephone number. The calls were staggered over times of day and days of the week (including at least one daytime call) to maximize the chances of making contact with a potential respondent. Interviewing was also spread as evenly as possible across the field period. An effort was made to recontact most interview breakoffs and refusals to attempt to convert them to completed interviews.

The response rate for this survey was (RR) in the landline sample and (RR) in the cell sample, as computed using the American Association for Public Opinion Research's Response Rate 3. The response rate is the percentage of known or assumed residential households for which a completed interview was obtained.

Survey participation tends to vary for different subgroups of the population, and these subgroups are likely to also vary on questions of substantive interest. To compensate for these known biases, the sample data are weighted for analysis. The sample is first weighted by household size for those with landline phones, to account for the fact that people in larger households have a lower probability of being selected.
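A minimal sketch of that first weighting step, assuming one adult is interviewed per landline household; the records and field names are hypothetical, and a full weighting scheme would also adjust for the number of phone lines and dual-frame overlap before raking to demographic benchmarks (Item 11).

```python
# Hypothetical respondent records; a real study would continue with
# adjustments for phone lines, frame overlap, and demographic raking.
respondents = [
    {"frame": "landline", "adults_in_hh": 1},
    {"frame": "landline", "adults_in_hh": 3},
    {"frame": "cell", "adults_in_hh": 2},
    {"frame": "cell", "adults_in_hh": 1},
]

for r in respondents:
    # Landline: one adult is selected per household, so the selection
    # probability is 1 / adults_in_hh and the base weight is its inverse.
    # Cell: the adult who answers is interviewed, so no adjustment here.
    r["base_weight"] = r["adults_in_hh"] if r["frame"] == "landline" else 1.0

# Normalize so the base weights average to 1 across the sample.
mean_w = sum(r["base_weight"] for r in respondents) / len(respondents)
for r in respondents:
    r["base_weight"] /= mean_w
    print(r)
```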