Workshop on Probability-Based and Nonprobability Survey Research
Collaborative Research Center SFB 884, University of Mannheim
June 25-26, 2018

Keynote: Jon A. Krosnick (Stanford University)

Scientific Committee: Carina Cornesse, Alexander Wenz, Annelies Blom

Location:
SFB 884 – Political Economy of Reforms
B6, 30-32
68131 Mannheim
Room 008 (Ground Floor)

Schedule

Monday, June 25

08:30 – 09:10 Registration and coffee
09:10 – 09:30 Conference opening
09:30 – 10:30 Session 1: Professional Respondents and Response Quality
o Professional respondents: are they a threat to probability-based online panels as well? (Edith D. de Leeuw)
o Response quality in nonprobability and probability-based online panels (Carina Cornesse and Annelies Blom)
10:30 – 11:00 Coffee break
11:00 – 12:30 Session 2: Sample Accuracy
o Comparing complex measurement instruments across probabilistic and non-probabilistic online surveys (Stefan Zins, Henning Silber, Tobias Gummer, Clemens Lechner, and Alexander Murray-Watters)
o Comparing web nonprobability-based surveys and telephone probability-based surveys with register data: the case of the Global Entrepreneurship Monitor in Luxembourg (Cesare A. F. Riillo)
o Does sampling matter? Evidence from personality and politics (Mahsa H. Kashani and Annelies Blom)
12:30 – 13:30 Lunch
13:30 – 15:00 Session 3: Conceptual Issues in Probability-Based and Nonprobability Survey Research
o The association between population representation and response quality in probability-based and nonprobability online panels (Alexander Wenz, Carina Cornesse, and Annelies Blom)
o Probability vs. nonprobability or high-information vs. low-information? (Andrew Mercer)
o Non-probability based online panels: market research practitioners' perspective (Wojciech Jablonski)
15:00 – 15:30 Coffee break
15:30 – 17:00 Session 4: Practical Considerations in Online Panel Research
o Replenishment of the Life in Australia Panel (Benjamin Phillips and Darren W. Pennay)
o Terms of agreement: the inclusion of Muslim minorities (Elisabeth Ivarsflaten and Paul Sniderman)
o Do you get what you asked for? On the implementation of a survey in nonprobability online panels (Daniela Ackermann-Piek and Annelies Blom)

Tuesday, June 26

09:30 – 10:30 Keynote: An update on the accuracy of probability sample surveys and non-probability sample surveys (Jon A. Krosnick)
10:30 – 11:00 Coffee break
11:00 – 12:30 Session 5: Variance Estimation and Weighting Adjustments
o Precision of estimates based on non-probability online panels (Marek Fuchs and Tobias Baier)
o Improving estimates from non-probability online surveys (Dina Neiger, Andrew C. Ward, Darren W. Pennay, Paul J. Lavrakas, and Benjamin Phillips)
o Weighting and estimation strategies for probability-based and nonprobability panel research (Christian Bruch, Barbara Felderer, and Annelies Blom)
12:30 – 13:30 Lunch
13:30 – 15:00 Session 6: Combining Probability-Based and Nonprobability Samples
o Rationale for conducting and methods for calibrating hybrid probability/non-probability surveys (David Dutwin)
o Estimating the size of the LGB population with a random telephone/Internet survey and a joint Internet volunteer survey and taking advantage of the two to increase the analytical LGB subsample (Stéphane Legleye and Géraldine Charrance)
o Blending probability and nonprobability samples for survey inference under a Bayesian framework (Joseph W. Sakshaug, Arkadiusz Wisniowski, Diego Perez-Ruiz, and Annelies Blom)
15:00 – 15:30 Coffee break
15:30 – 17:00 Session 7: Nonprobability Survey Research and Big Data
o How research on probability and nonprobability panels can inform passive data collection studies (Bella Struminskaya)
o Probability, nonprobability sampling, and Big Data (Andreas Quatember)
o Digital trace data: just another nonprobability sample? (Josh Pasek)

Abstracts

Session 1: Professional Respondents and Response Quality

Professional respondents: are they a threat to probability-based online panels as well?
Edith D. de Leeuw

A major concern about the quality of non-probability online panels centers on the presence of 'professional' respondents. In reaction to criticism of non-probability panels, probability-based online panels have been established. However, probability-based panels suffer from initial nonresponse during panel formation, with the danger of selective nonresponse. Are probability-based panels a safeguard against professional respondents? In the Netherlands, a large study (NOPVO) of 19 opt-in online panels reported on professional respondents. We partly replicated this study in two Dutch probability-based online panels: a probability sample of the general population of the Netherlands (LISS panel) and a probability sample of the four largest ethnic minority groups in the Netherlands (LISS immigrant panel). In the probability-based Dutch online panels, the number of panel memberships was lower than in the NOPVO panels: 84.5% of the LISS members and 80.3% of the immigrant panel members did not belong to other panels, while in the NOPVO study only 38% were not members of multiple panels. In the NOPVO study, on average more than 80% of the respondents reported having completed more than one questionnaire in the past 4 weeks, while in the two probability-based panels this was less than 40%.

Response quality in nonprobability and probability-based online panels
Carina Cornesse and Annelies Blom

The ongoing debate about the quality of nonprobability online panels predominantly discusses whether or not these panels have representative sets of respondents. While the number of publications on nonprobability panel representativeness is increasing, less attention has so far been paid to potential measurement errors in nonprobability as compared to probability panels. In our paper, we investigate whether there are differences in satisficing across probability and nonprobability online panels, using three indicators to operationalize survey satisficing: item nonresponse and non-substantive answers, straight-lining in grids, and mid-point selection in a visual design experiment. These indicators are included in a questionnaire module that was implemented across nine online panels in Germany: one academic probability online panel that includes the offline population, one commercial probability online panel, and seven nonprobability online panels, all differing with respect to their sampling and recruitment methods. Our analyses show significantly less straight-lining in probability than in nonprobability online panels, but no significant differences regarding mid-point selection. With respect to non-substantive answers, we find that significantly more respondents in the probability than in the nonprobability panels say that they don't know who they voted for in the last general election or refuse to report their height and body weight.
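Of the three satisficing indicators, straight-lining lends itself to a compact illustration. The following Python sketch (hypothetical column names and data, not the authors' actual operationalization) flags respondents who choose the identical response category for every item in a grid battery:

```python
# Minimal sketch of a straight-lining indicator for grid questions.
# Column names ("grid_q1" ... "grid_q5") and the data are invented.
import pandas as pd

def straightlining_rate(df: pd.DataFrame, grid_items: list) -> float:
    """Share of respondents giving the same answer on all grid items."""
    answers = df[grid_items]
    # A respondent straight-lines if their row has only one distinct value.
    is_straightliner = answers.nunique(axis=1) == 1
    return is_straightliner.mean()

# Hypothetical example: three respondents, five-point grid items.
df = pd.DataFrame({
    "grid_q1": [3, 1, 5],
    "grid_q2": [3, 2, 5],
    "grid_q3": [3, 1, 5],
    "grid_q4": [3, 4, 5],
    "grid_q5": [3, 2, 5],
})
items = ["grid_q1", "grid_q2", "grid_q3", "grid_q4", "grid_q5"]
print(straightlining_rate(df, items))
# Respondents 1 and 3 answer identically across all items -> 0.667
```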
Session 2: Sample Accuracy

Comparing complex measurement instruments across probabilistic and non-probabilistic online surveys
Stefan Zins, Henning Silber, Tobias Gummer, Clemens Lechner, and Alexander Murray-Watters

The quality of non-probabilistic samples is often determined only by comparing the estimated frequency distributions of items with benchmark distributions, e.g. those obtained from official statistics or probabilistic samples (Yeager et al., 2011). However, simple frequencies are often not of primary interest; rather, dependencies between the measured variables are at the core of theory building and testing in the behavioural sciences. In order to compare possible method effects of different non-probabilistic sampling designs with respect to the joint distribution of the measured variables, an established measurement instrument (the BFI-2-S; see Soto and John, 2017) is evaluated with measurements from two non-probabilistic samples and one probabilistic sample. The probabilistic mixed-mode sample serves as a reference. Using a study that has already been carried out, we examine the sample type of a non-probabilistic online access panel. As a second non-probabilistic sample type, we plan to analyse a so-called river sample. River sampling is a relatively uncontrolled recruitment of respondents via ads on various websites. This type of sampling has not yet been taken up in any comparative study of this kind in Germany, and only rarely internationally. The main goal of this method comparison study is to compare the ability to use the different types of samples to model the latent variables of the measurement instrument. For this, we will evaluate two alternative methods of modelling the latent variables: the standard factor-analytic approach and a causal search algorithm (FOFC).

Comparing web nonprobability-based surveys and telephone probability-based surveys with register data: the case of the Global Entrepreneurship Monitor in Luxembourg
Cesare A. F. Riillo

The failure to predict Brexit and the US election outcome has called poll and survey methodology into question. This study contributes to this debate by assessing the Total Survey Error of two surveys on entrepreneurship. One survey is probability-based and is conducted by fixed-line phone. The other survey is based on an opt-in web panel and is nonprobability-based. The same questions are administered in both surveys. The study assesses which survey better resembles official register data in terms of the distribution of socio-demographic characteristics of respondents and in terms of the variable of interest (entrepreneurial activity). Research is based on the Global Entrepreneurship Monitor (GEM)
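Both abstracts in this session rest on comparing a survey's distribution of some variable against a benchmark distribution. As a concrete illustration only (all categories and numbers are invented, and this is not code from either study), the sketch below computes the average absolute error between survey and benchmark category proportions, a summary in the spirit of the accuracy metric used by Yeager et al. (2011):

```python
# Minimal sketch: average absolute deviation of a survey's category
# proportions from a register/benchmark distribution. Data are invented.
import numpy as np

def avg_abs_error(survey_props: np.ndarray, benchmark_props: np.ndarray) -> float:
    """Mean absolute difference across categories, in percentage points."""
    return float(np.mean(np.abs(survey_props - benchmark_props)) * 100)

# Hypothetical age-group distributions (each sums to 1).
benchmark = np.array([0.20, 0.35, 0.30, 0.15])  # e.g., official register data
web_panel = np.array([0.10, 0.30, 0.35, 0.25])  # e.g., opt-in web survey
phone     = np.array([0.18, 0.34, 0.31, 0.17])  # e.g., fixed-line phone survey

print(avg_abs_error(web_panel, benchmark))  # 7.5 percentage points
print(avg_abs_error(phone, benchmark))      # 1.5 percentage points
```

On these invented numbers, the phone survey tracks the benchmark far more closely, which is the kind of conclusion such comparisons are designed to support.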