3MC Program with Abstract

* Invited Papers

Tuesday, July 26th, 2:00 p.m. - 3:30 p.m.
Session: Adapting Interviewer Training Across Cultures
Chair: Esther Ullman, University of Michigan
Location: Michigan Ballroom I

Perspectives on international training
Katherine Mason, RTI International; Steven Litavecz, RTI International; David Plotner, RTI International
In this presentation, Ms. Mason will discuss experiences with international trainings conducted in Nigeria, Kenya, Mexico, Indonesia, Ghana, Zambia, Thailand, India, and China. Topics will include cultural perspectives on time management, reading items verbatim, challenges with multiple translations, dealing with proxies, respondent privacy, challenges with subcontractors, and data transmission issues.

Challenges and lessons learned from conducting computer assisted personal interviewing (CAPI) training and providing capacity building support for a national household panel survey in Ghana
Yu-chieh (Jay) Lin, University of Michigan
Sharing the experience of moving from paper to computer assisted personal interviewing (CAPI) and launching a national household panel survey in Ghana among global survey research and operations team members provides unique insights into adapting interviewer training to local contexts and overcoming both infrastructure and cultural challenges. Lessons learned include using the onsite train-the-trainer experience to finalize interviewer training components, customizing training activities based on trainees' immediate feedback and learning progress, being flexible about different working styles and communication approaches, recognizing differences and reacting quickly, and developing innovative technical solutions for field data collection, data sharing and analysis, and quality control. This presentation provides management and technical examples for audiences planning to conduct survey research and data collection in developing countries or areas that need capacity building support.

Interviewer training for a pre-school evaluation in Chad
Nathan Jones, University of Wisconsin Survey Center
Many researchers and staff providing technical support for data collection in developing countries face challenges when hiring and training local interviewers. To apply best practices for data collection, assure cultural adaptation of survey measures, and work effectively with multi-cultural data collection teams, interviewer training needs to be adapted to the different cultures where it occurs. The panel, Training in Developing Countries, will feature presentations by representatives from several survey organizations. This presentation will highlight lessons learned while training interviewers and conducting surveys of parents and children attending a preschool program for Darfuri refugees living in the Goz Amer refugee camp in eastern Chad. I will focus on strategies for training inexperienced interviewers, maintaining data quality, and fieldwork management in harsh conditions.

Tuesday, July 26th, 2:00 p.m. - 3:30 p.m.
Session: Approaches to Test for Measurement Invariance
Chair: Eldad Davidov, University of Zurich
Location: Michigan Ballroom II

Measurement invariance of different dimensions of nationalism in the ISSP: Comparisons over two time points and four countries
Peter Schmidt, University of Giessen; Jan Cieciuch, University of Zurich; Eldad Davidov, University of Zurich
Recently there has been a controversy in the journal Comparative Politics (2015 and 2016) about the use of techniques for establishing measurement invariance, dealing with scales in the World Values Study. In our study we evaluate two new techniques for establishing measurement invariance: Bayesian approximate invariance, proposed by Muthén and Asparouhov (2012) and van de Schoot (2013), and the alignment procedure, proposed by Asparouhov and Muthén (2014). We analyze data from two waves of the ISSP identity module over several countries and evaluate the results based on the classical invariance tests compared with the Bayesian approximate invariance test and the solution given by the alignment procedure.

The cross-country comparability of the immigration module in the European Social Survey 2014-15
Jan Cieciuch, University of Zurich; Eldad Davidov, University of Zurich; Peter Schmidt, University of Giessen; Rene Algesheimer, University of Zurich
A special module on attitudes toward immigration and threat due to immigration was implemented in the 7th round of the European Social Survey (ESS). Our project set two goals. The first was to establish, in a theory-driven way, latent variables based on items included in the module; these latent variables can be used by researchers in their substantive work on immigration using the ESS. The second was to test for measurement invariance of these scales across 15 ESS countries. We proposed the following four latent variables: allowing immigrants belonging to ethnic groups different from the majority population into the country; qualification for entry; and two types of threat due to immigrants, realistic and symbolic. First, we tested each latent variable in each country separately in single confirmatory factor analyses (CFAs). Next, we tested for measurement invariance of each latent variable using multigroup confirmatory factor analysis (MGCFA). We differentiated between three levels of measurement invariance (configural, metric, and scalar) and applied two approaches: an exact and an approximate measurement invariance approach. If full or partial exact measurement invariance could not be established, we tested whether approximate invariance held. Configural and metric invariance was supported for all constructs across most countries. Unfortunately, scalar invariance was supported for the latent variables only across a subset of countries. The subset of countries where approximate scalar invariance was established was larger than the subset for which exact measurement invariance could be established.

Comparing groups that are only partially and approximately comparable: an adaptive Bayesian approach
Daniel L. Oberski, Tilburg University
Muthén & Asparouhov (2012) introduced the idea of Bayesian approximate measurement invariance: rather than assuming that all measurement parameters are equal across groups, a fudge factor is introduced that allows for relatively small random differences. The fudge factor must be specified in advance, and results can be rather sensitive to this choice (Rudnev 2015). Moreover, when subsets of items are not approximately invariant, the approximate invariance procedure does not do well at detecting these violating (partially noninvariant) items and can lead to serious bias in the estimates of interest (van de Schoot et al. 2014). Muthén & Asparouhov (2013) called this the alignment problem and suggested a solution based on factor rotation methods. This talk discusses a different possible solution to the alignment problem that follows naturally from the Bayesian approach and connects directly with the regularization literature (Tutz 2012; Hastie et al. 2015). Our approach is to adaptively learn from the data which items are approximately invariant and which are not. We discuss simulation results that compare the performance of this procedure with that of the standard approximate MI model. We also apply our new approach to empirical data, demonstrating how it may be useful for comparing groups that are only partially, and approximately, comparable.

* Measurement invariance in international large-scale assessments: Integrating theory and method
Fons van de Vijver, Tilburg University, The Netherlands; Ralph Carstens, The IEA Data Processing and Research Center, Germany; Wolfram Schulz, The Australian Council for Educational Research, Australia
One of the aims of international large-scale assessments (ILSAs) of educational achievement is to collect standardized data that allow cross-national comparisons of student achievement, behaviors or attitudes, and the influences of school and classroom factors or family background on those outcomes. Complex modeling of cross-national data requires that the items designed to measure a latent factor have invariant psychometric characteristics, which means that they measure the same trait across countries, the measured latent constructs have the same meaning in all participating countries, and survey respondents interpret the items in a similar way. Non-invariant measures might be due to systematic biases in the measurement instrument or differences in the way specific items are responded to. Lack of measurement invariance can introduce bias and limit comparisons across countries. The present study contributes to invariance evaluation over and beyond the existing investigations through the modeling of differences in measurement. We start with an overview of the basic concept of measurement invariance and its challenges with regard to ILSAs. We evaluate measurement invariance with the exact invariance assumptions and integrate a relatively new prototype of the assumptions with substantive insights about measurement bias from different factors, namely at the context, item, and person levels. We present a stepwise strategy for evaluating measurement invariance with a productive way of dealing with the unattainable ideal of strict invariance assumptions. With an empirical example, we argue that a generic latent structural and measurement modeling or simple structure of a common-factor model is unlikely to yield fully comparable
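The three invariance levels referred to throughout this session can be summarized with a standard multigroup common-factor model; the formulation below is a textbook summary added for orientation, not taken from any of the presentations. For an item vector measured on person i in group g,

    x_{ig} = \tau_g + \Lambda_g \xi_{ig} + \varepsilon_{ig}

configural invariance requires only the same pattern of zero and nonzero loadings in every group; metric invariance additionally requires \Lambda_g = \Lambda for all g; and scalar invariance requires \Lambda_g = \Lambda and \tau_g = \tau for all g, which is what licenses comparisons of latent means across groups. Bayesian approximate invariance relaxes the exact equalities into informative priors, e.g. \tau_{jg} \sim N(\tau_j, \sigma^2) with a small prior variance \sigma^2, the "fudge factor" discussed in the Oberski abstract.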
Recommended publications
  • Hybrid Online Audience Measurement Calculations Are Only As Good As the Input Data Quality
    People First: A User-Centric Hybrid Online Audience Measurement Model
    by John Brauer, Manager, Product Leadership, Nielsen
    Overview: As with any media measurement methodology, hybrid online audience measurement calculations are only as good as the input data quality. Hybrid online audience measurement goes beyond a single metric to forecast online audiences more accurately by drawing on multiple data sources. The open question for online publishers and advertisers is, "Which hybrid audience measurement model provides the most accurate measure?"
    • User-centric online hybrid models rely exclusively on high quality panel data for audience measurement calculations, then project activity to all sites informed by consumer behavior patterns on tagged sites.
    • Site-centric online hybrid models depend on cookies for audience calculations, subject to the inherent weaknesses of cookie data that cannot tie directly to unique individuals and their valuable demographic information.
    • Both hybrid models assume that panel members are representative of the universe being measured and comprise a base large enough to provide statistical reliability, measured in a way that accurately captures behavior.
    • The site-centric hybrid model suffers from the issue of inaccurately capturing unique users as a result of cookie deletion and related activities that can result in audience measurement estimates different from the actual number.
    • Rapid uptake of new online access devices further exacerbates site-centric cookie data tracking issues (in some cases overstating unique browsers by a factor of five) by increasing the likelihood of one individual representing multiple cookies; a rough numerical sketch of this correction follows this summary.
    Only the user-centric hybrid model enables a fair comparison of audiences across all web sites, avoids the pitfall of counting multiple cookies versus unique users, and employs a framework that anticipates future changes such as additional data sets and access devices.
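    To make the cookie-inflation point concrete, here is a minimal Python sketch (added for illustration; the function name and the numbers are hypothetical, not from the Nielsen paper) that deflates a raw cookie count by a panel-derived cookies-per-person ratio:

        # Minimal sketch, assuming a panel tells us how many cookies an average
        # person accumulates; site-centric tools count cookies, not people.
        def estimate_unique_users(cookie_count: int, cookies_per_person: float) -> int:
            """Deflate a raw cookie count into an estimated count of unique people."""
            if cookies_per_person <= 0:
                raise ValueError("cookies_per_person must be positive")
            return round(cookie_count / cookies_per_person)

        raw_cookies = 5_000_000   # cookies observed on a tagged site (hypothetical)
        ratio = 5.0               # the five-fold overstatement cited above
        print(estimate_unique_users(raw_cookies, ratio))  # -> 1000000

    In practice the ratio itself must come from panel measurement, which is exactly the user-centric model's argument for anchoring hybrid calculations in panel data.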
  • Combining Probability and Nonprobability Samples to Form Efficient Hybrid Estimates: An Evaluation of the Common Support Assumption
    Combining Probability and Nonprobability Samples to form Efficient Hybrid Estimates: An Evaluation of the Common Support Assumption
    Jill A. Dever, RTI International, Washington, DC. Proceedings of the 2018 Federal Committee on Statistical Methodology (FCSM) Research Conference.
    Abstract: Nonprobability surveys, those without a defined random sampling scheme, are becoming more prevalent. These studies can offer faster results at less cost than many probability surveys, especially for targeting important subpopulations. This can be an attractive option given the continual challenge of doing more with less, as survey costs continue to rise and response rates plummet. Nonprobability surveys alone, however, may not fit the needs (purpose) of Federal statistical agencies, where population inference is critical. Nonprobability samples may best serve to enhance the representativeness of certain domains within probability samples. For example, if locating and interviewing a required number of subpopulation members is resource prohibitive, data from a targeted nonprobability survey may lower coverage bias exhibited in a probability survey. In this situation, the question is how best to combine information from both sources. Our research searches for an answer to this question through an evaluation of hybrid estimation methods currently in use that combine probability and nonprobability data. Methods that employ generalized analysis weights (i.e., one set of weights for all analyses) are the focus because they enable other survey researchers and policy makers to analyze the data. The goal is to identify procedures that maximize the strength of each data source to produce hybrid estimates with low mean squared error. The details presented here focus on the propensity score adjusted (PSA) nonprobability weights needed prior to combining the data sources, and the common support assumption critical to hybrid estimation and PSA weighting.
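    As a rough illustration of the PSA step described above, the following Python sketch (a generic propensity-score pseudo-weighting recipe under simplifying assumptions, not Dever's specific procedure; all names are made up) fits a membership model on a stacked file and converts fitted propensities into inverse-odds pseudo-weights:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def psa_pseudo_weights(X_prob, w_prob, X_nonprob):
            """Propensity-score-adjusted pseudo-weights for a nonprobability sample.

            X_prob:    covariate matrix for the reference probability sample
            w_prob:    its survey weights
            X_nonprob: covariate matrix for the nonprobability sample
            """
            X = np.vstack([X_prob, X_nonprob])
            # z = 1 flags nonprobability cases in the stacked file
            z = np.concatenate([np.zeros(len(X_prob)), np.ones(len(X_nonprob))])
            # weight probability cases by their survey weights when fitting
            fit_w = np.concatenate([w_prob, np.ones(len(X_nonprob))])
            model = LogisticRegression(max_iter=1000).fit(X, z, sample_weight=fit_w)
            p = model.predict_proba(X_nonprob)[:, 1]
            return (1.0 - p) / p   # inverse-odds pseudo-weight

    The common support assumption highlighted in the abstract corresponds to requiring that the estimated propensities of the two samples overlap; in practice one inspects their ranges or histograms before combining the files.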
  • A Critical Review of Studies Investigating the Quality of Data Obtained with Online Panels Based on Probability and Nonprobability Samples
    A critical review of studies investigating the quality of data obtained with online panels based on probability and nonprobability samples
    Mario Callegaro (Google, UK), Ana Villar (City University, London, UK), David Yeager (University of Texas at Austin, USA), and Jon A. Krosnick (Stanford University, USA)
    2.1 Introduction
    Online panels have been used in survey research as data collection tools since the late 1990s (Postoaca, 2006). The potentially great cost and time reductions of using these tools have made research companies enthusiastically pursue this new mode of data collection. However, the vast majority of these online panels were built by sampling and recruiting respondents through nonprobability methods such as snowball sampling, banner ads, direct enrollment, and other strategies to obtain large enough samples at a lower cost (see Chapter 1). Only a few companies and research teams chose to build online panels based on probability samples of the general population. During the 1990s, two probability-based online panels were documented: the CentERdata Panel in the Netherlands and the Knowledge Networks Panel in the United States.
    (The authors thank Reg Baker and Anja Göritz, Part editors, for their useful comments on preliminary versions of this chapter. From: Online Panel Research, First Edition, edited by Mario Callegaro, Reg Baker, Jelke Bethlehem, Anja S. Göritz, Jon A. Krosnick and Paul J. Lavrakas. © 2014 John Wiley & Sons, Ltd.)
  • Options for Conducting Web Surveys, Matthias Schonlau and Mick P. Couper
    Statistical Science, 2017, Vol. 32, No. 2, 279-292. DOI: 10.1214/16-STS597. © Institute of Mathematical Statistics, 2017.
    Options for Conducting Web Surveys
    Matthias Schonlau and Mick P. Couper
    Abstract: Web surveys can be conducted relatively fast and at relatively low cost. However, Web surveys are often conducted with nonprobability samples and, therefore, a major concern is generalizability. There are two main approaches to address this concern: One, find a way to conduct Web surveys on probability samples without losing most of the cost and speed advantages (e.g., by using mixed-mode approaches or probability-based panel surveys). Two, make adjustments (e.g., propensity scoring, post-stratification, GREG) to nonprobability samples using auxiliary variables. We review both of these approaches as well as lesser-known ones such as respondent-driven sampling. There are many different ways Web surveys can solve the challenge of generalizability. Rather than adopting a one-size-fits-all approach, we conclude that the choice of approach should be commensurate with the purpose of the study. Key words and phrases: Convenience sample, Internet survey.
    1. INTRODUCTION
    Web or Internet surveys have come to dominate the survey world in a very short time (see Couper, 2000; Couper and Miller, 2008). The attraction of Web surveys lies in the speed with which large numbers of people can be surveyed at relatively low cost, using complex instruments that extend measurement beyond what can be done in other modes (especially paper). […]tion and invitation of sample persons to a Web survey. No complete list of e-mail addresses of the general population exists from which one can select a sample and send e-mailed invitations to a Web survey. However, for many other important populations of interest (e.g., college students, members of professional associations, registered users of Web services, etc.), such […]
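    Of the adjustments named in the abstract, post-stratification is the simplest to sketch. The following Python example (illustrative only; the strata and population counts are invented, and real applications use finer cells or raking) reweights a nonprobability sample to known population totals:

        import pandas as pd

        # Hypothetical respondent file from a nonprobability web survey.
        respondents = pd.DataFrame({
            "age_group": ["18-34", "18-34", "35-54", "55+", "55+", "35-54"],
            "y":         [1, 0, 1, 1, 0, 0],   # some survey outcome
        })

        # Known population counts per cell (assumption: from a census or register).
        population = {"18-34": 30_000, "35-54": 40_000, "55+": 30_000}

        # Post-stratification weight: population count / respondent count in the cell.
        cell_sizes = respondents["age_group"].value_counts()
        respondents["weight"] = respondents["age_group"].map(
            lambda g: population[g] / cell_sizes[g]
        )

        # Weighted estimate of the population mean of y.
        est = (respondents["y"] * respondents["weight"]).sum() / respondents["weight"].sum()
        print(est)

    Propensity scoring and GREG follow the same logic, with model-based weights taking the place of simple cell ratios.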
  • EFAMRO / ESOMAR Position Statement on the Proposal for an ePrivacy Regulation
    EFAMRO/ESOMAR Position Statement on the Proposal for an ePrivacy Regulation, April 2017
    Table of contents: 1. About EFAMRO and ESOMAR; 2. Key recommendations; 3. Overview; 4. Audience measurement research; 5. Telephone and online research; 6. GDPR framework for research purposes; 7. List of proposed amendments (a. Recitals; b. Articles).
    01. About EFAMRO and ESOMAR
    This position statement is submitted on behalf of EFAMRO, the European Research Federation, and ESOMAR, the World Association for Data, Research and Insights. In Europe, we represent the market, opinion and social research and data analytics sectors, accounting for an annual turnover of €15.51 billion (ESOMAR Global Market Research 2016). In particular, our sector produces research outcomes that guide decisions of public authorities (e.g. the Eurobarometer), the non-profit sector including charities (e.g. political opinion polling), and business (e.g. satisfaction surveys, product improvement research). In a society increasingly driven by data, our profession ensures the application of appropriate methodologies, rigour and provenance controls, thus safeguarding access to quality, relevant, reliable, and aggregated data sets. These data sets lead to better decision making, inform targeted and cost-effective public policy, and support economic development, leading to growth and jobs.
    02. Key Recommendations
    We support the proposal for an ePrivacy Regulation to replace the ePrivacy Directive, as this will help to create a level playing field in a true European Digital Single Market whilst increasing the legal certainty for organisations operating in different EU member states.
    • Amendment of Article 8 and Recital 21 to enable research organisations that comply with Article 89 of the General Data Protection Regulation (GDPR) to continue conducting independent audience measurement research activities.
  • Lesson 3: Sampling Plan
    Quantitative approaches. Lesson 3: Sampling
    Plan: 1. Introduction to quantitative sampling; 2. Sampling error and sampling bias; 3. Response rate; 4. Types of "probability samples"; 5. The size of the sample; 6. Types of "non-probability samples".
    1. Introduction to quantitative sampling
    Sampling: definition. Sampling = choosing the units (e.g. individuals, families, countries, texts, activities) to be investigated.
    Sampling: quantitative and qualitative. "First, the term 'sampling' is problematic for qualitative research, because it implies the purpose of 'representing' the population sampled. Quantitative methods texts typically recognize only two main types of sampling: probability sampling (such as random sampling) and convenience sampling. (...) Any nonprobability sampling strategy is seen as 'convenience sampling' and is strongly discouraged. This view ignores the fact that, in qualitative research, the typical way of selecting settings and individuals is neither probability sampling nor convenience sampling. It falls into a third category, which I will call purposeful selection; other terms are purposeful sampling and criterion-based selection. This is a strategy in which particular settings, persons, or activities are selected deliberately in order to provide information that can't be gotten as well from other choices." (Maxwell, Joseph A., Qualitative Research Design..., 2005, p. 88)
    Population, sample, sampling frame. Population = the set of units from which the sample is taken. Sample = the part of the population that is chosen for investigation.
    Representative sample, probability sample. Representative sample = a sample that reflects the population in a reliable way: the sample is a "miniature population".
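    To make the distinction between the lesson's two sample types concrete, here is a small Python sketch (added for illustration; the population and sample sizes are invented) contrasting a simple random sample with a convenience sample:

        import random

        population = list(range(10_000))   # IDs for a toy population of 10,000 units

        random.seed(42)
        # Probability sample: every unit has a known, equal chance of selection,
        # so sampling error can be quantified.
        srs = random.sample(population, k=100)

        # Convenience sample: the first 100 units encountered; selection
        # probabilities are unknown, so representativeness is not guaranteed.
        convenience = population[:100]

        print(len(srs), len(convenience))  # 100 100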
  • Rating the Audience: the Business of Media
    Balnaves, Mark, Tom O'Regan, and Ben Goldsmith. "Bibliography." Rating the Audience: The Business of Media. London: Bloomsbury Academic, 2011. 256-267. © Mark Balnaves, Tom O'Regan and Ben Goldsmith 2011.
    Bibliography (excerpt): Adams, W.J. (1994), 'Changes in ratings patterns for prime time before, during, and after the introduction of the people meter', Journal of Media Economics, 7: 15-28. Advertising Research Foundation (2009), 'On the road to a new effectiveness model: Measuring emotional responses to television advertising', Advertising Research Foundation, http://www.thearf.org [accessed 5 July 2011]. Andrejevic, M. (2002), 'The work of being watched: Interactive media and the exploitation of self-disclosure', Critical Studies in Media Communication, 19(2): 230-48. Andrejevic, M. (2003), 'Tracing space: Monitored mobility in the era of mass customization', Space and Culture, 6(2): 132-50. Andrejevic, M. (2005), 'The work of watching one another: Lateral surveillance, risk, and governance', Surveillance and Society, 2(4): 479-97. Andrejevic, M. (2006), 'The discipline of watching: Detection, risk, and lateral surveillance', Critical Studies in Media Communication, 23(5): 392-407. Andrejevic, M. (2007), iSpy: Surveillance and Power in the Interactive Era, Lawrence, KS: University Press of Kansas. Andrews, K. and Napoli, P.M. (2006), 'Changing market information regimes: a case study of the transition to the BookScan audience measurement system in the US book publishing industry', Journal of Media Economics, 19(1): 33-54.
  • Big Data and Audience Measurement: A Marriage of Convenience? Lorie Dudoignon, Fabienne Le Sager and Aurélie Vanheuverzwyn
    Big Data and Audience Measurement: A Marriage of Convenience?
    Lorie Dudoignon, Fabienne Le Sager and Aurélie Vanheuverzwyn (Médiamétrie)
    Abstract: Digital convergence has gradually altered both the data and media worlds. The lines that separated media have become blurred, a phenomenon that is being amplified daily by the spread of new devices and new usages. At the same time, digital convergence has highlighted the power of big data, which is defined in terms of two connected parameters: volume and frequency of acquisition. Big Data can be voluminous to the point of exhaustiveness, and its acquisition can be frequent to the point of occurring in real time. Even though Big Data may be seen as risking a return to the census paradigm that prevailed until the end of the 19th century, whereas the 20th century belonged to sampling and surveys, Médiamétrie has chosen to consider this digital revolution as a tremendous opportunity to advance its audience measurement systems.
    JEL Classification: C18, C32, C33, C55, C80. Keywords: hybrid methods, Big Data, sample surveys, hidden Markov model.
    The opinions and analyses in this article are those of the author(s) and do not necessarily reflect their institution's or Insee's views. Received on 10 July 2017, accepted on 10 February 2019. Translated by the authors from their original version: "Big Data et mesure d'audience : un mariage de raison ?" To cite this article: Dudoignon, L., Le Sager, F. & Vanheuverzwyn, A. (2018). Big Data and Audience Measurement: A Marriage of Convenience? Economie et Statistique / Economics and Statistics, 505-506, 133-146.
  • Independent Study on Indicators for Media Pluralism in the Member States – Towards a Risk-Based
    Independent Study on Indicators for Media Pluralism in the Member States – Towards a Risk-Based Approach
    Prepared for the European Commission, Directorate-General Information Society and Media, SMART 007A 2007-0002, by K.U.Leuven – ICRI (lead contractor), Jönköping International Business School – MMTC, Central European University – CMCS, and Ernst & Young Consultancy Belgium. Preliminary Final Report, Contract No. 30-CE-0154276/00-76, Leuven, April 2009.
    Editorial note: The findings reported here provide the basis for a public hearing to be held in Brussels in the spring of 2009. The Final Report will take account of comments and suggestions made by stakeholders at that meeting, or subsequently submitted in writing. The deadline for the submission of written comments will be announced at the hearing as well as on the website of the European Commission, from where this preliminary report is available. Prof. Dr. Peggy Valcke, Project leader.
    Authors of the report: This study is carried out by a consortium of three academic institutes (K.U.Leuven – ICRI, Central European University – CMCS and Jönköping International Business School – MMTC) and a consultancy firm (Ernst & Young Consultancy Belgium). The consortium is supported by three categories of subcontractors: non-affiliated members of the research team, members of the Quality Control Team, and members of the network of local media experts ('Country Correspondents'). The following persons have contributed to the Second Interim Report: For K.U.Leuven – ICRI: Prof. Dr. Peggy Valcke (project leader; senior legal expert), Katrien Lefever, Robin Kerremans, Aleksandra Kuczerawy, Michael Dunstan and Jago Chanter (linguistic revision). For Central European University – CMCS: Prof.
  • Final Abstracts in Order of Presentation
    Final Abstracts in Order of Presentation
    Sunday, September 20, 2015, 9:30-11:30 a.m. Paper Session I: Interactions of Survey Error and Ethnicity I. Session Chair: Sunghee Lee
    Invited Presentation: Ethnic Minorities in Surveys: Applying the TSE Paradigm to Surveys Among Ethnic Minority Groups to Assess the Relationship Between Survey Design, Sample Frame and Survey Data Quality
    Joost Kappelhof, Institute for Social Research/SCP
    Minority ethnic groups are difficult to survey mainly because of cultural differences, language barriers, socio-demographic characteristics and high mobility (Feskens, 2009). As a result, ethnic minorities are often underrepresented in surveys (Groves & Couper, 1998; Stoop, 2005). At the same time, national and international policy makers need specific information about these groups, especially on issues such as socio-economic and cultural integration. Using the TSE framework, we will integrate the existing international empirical literature on survey research among ethnic minorities. In particular, this paper will discuss four key topics in designing and evaluating survey research among ethnic minorities for policy makers. First, it discusses the reasons why ethnic minorities are underrepresented in surveys, placing an overview of the international empirical literature on why it is difficult to conduct survey research among ethnic minorities within the TSE framework. Second, it reviews measures that can be undertaken to increase the representation of minorities in surveys and discusses the consequences of these measures; in particular, the relationship with survey design, sample frame and trade-off decisions in the TSE paradigm is discussed in combination with budget and time considerations.
  • Workshop on Probability-Based and Nonprobability Survey Research
    Workshop on Probability-Based and Nonprobability Survey Research
    Collaborative Research Center SFB 884, University of Mannheim, June 25-26, 2018
    Keynote: Jon A. Krosnick (Stanford University)
    Scientific Committee: Carina Cornesse, Alexander Wenz, Annelies Blom
    Location: SFB 884 – Political Economy of Reforms, B6, 30-32, 68131 Mannheim, Room 008 (Ground Floor)
    Schedule, Monday, June 25:
    08:30 – 09:10 Registration and coffee
    09:10 – 09:30 Conference opening
    09:30 – 10:30 Session 1: Professional Respondents and Response Quality
    • Professional respondents: are they a threat to probability-based online panels as well? (Edith D. de Leeuw)
    • Response quality in nonprobability and probability-based online panels (Carina Cornesse and Annelies Blom)
    10:30 – 11:00 Coffee break
    11:00 – 12:30 Session 2: Sample Accuracy
    • Comparing complex measurement instruments across probabilistic and non-probabilistic online surveys (Stefan Zins, Henning Silber, Tobias Gummer, Clemens Lechner, and Alexander Murray-Watters)
    • Comparing web nonprobability based surveys and telephone probability-based surveys with registers data: the case of Global Entrepreneurship Monitor in Luxembourg (Cesare A. F. Riillo)
    • Does sampling matter? Evidence from personality and politics (Mahsa H. Kashani and Annelies Blom)
    12:30 – 13:30 Lunch
    13:30 – 15:00 Session 3: Conceptual Issues in Probability-Based and Nonprobability Survey Research
    • The association between population representation and response quality in probability-based and nonprobability online panels (Alexander Wenz, Carina Cornesse, and Annelies Blom)
    • Probability vs. nonprobability or high-information vs. low-information? (Andrew Mercer)
    • Non-probability based online panels: market research practitioners perspective (Wojciech Jablonski)
    15:00 – 15:30 Coffee break
    15:30 – 17:00 Session 4: Practical Considerations in Online Panel Research
    • Replenishment of the Life in Australia Panel (Benjamin Phillips and Darren W.
  • Measuring the Audience: Why It Matters to Independent News Media and How It Can Contribute to Media Development
    Measuring the Audience: Why It Matters to Independent News Media and How It Can Contribute to Media Development
    By Michelle J. Foster, October 2014
    Contents: Introduction; The Exposure Model of Audience Research; Audience Data Move Directly into the Ad Buying Process; Business Development; Audience Development; Informing and Evaluating Media Development Efforts; Limits to Audience Research Tools for Media Development; Conclusions and Recommendations; Endnotes.
    About CIMA: The Center for International Media Assistance (CIMA), at the National Endowment for Democracy, works to strengthen the support, raise the visibility, and improve the effectiveness of independent media development throughout the world. The center provides information, builds networks, conducts research, and highlights the indispensable role independent media play in the creation and development of sustainable democracies. An important aspect of CIMA's work is to research ways to attract additional U.S. private sector interest in and support for international media development. CIMA convenes working groups, discussions, and panels on a variety of topics in the field of media development and assistance. The center also issues reports and recommendations based on working group discussions and other investigations. These reports aim to provide policymakers, as well as donors and practitioners, with ideas for bolstering the effectiveness of media assistance.
    About the author: Michelle Foster consults and works with media companies to develop strong management practices and business models.
    Center for International Media Assistance, National Endowment for Democracy, 1025 F Street, N.W., 8th Floor, Washington, DC 20004