Questionnaire Design

David Ashley
• Human Capital Data Analytics Division Manager for the U.S. Department of Homeland Security
• Past President of the MRII (2009) and the author of the upcoming questionnaire design course
• [email protected]

Jeffrey Henning
• President of Researchscape International
• Current President of the MRII and the editor of the upcoming questionnaire design course
• [email protected]
• @jhenning on Twitter

Agenda

1. How respondents think
2. Questionnaire design overview
3. Addressing common mistakes

Is a Survey the Right Arrow to Hit the Target?

• Sometimes the best survey is no survey at all
• Talk to stakeholders who will use the data to understand their wants and needs
• Is someone elsewhere in the organization doing a survey on this topic or researching this issue?
• Are customers (or employees or …) the only source of this information?
• Do your CRM, web analytics or other systems hold data that would address this issue?

JH Asking a Lot of the Respondent, Literally and Figuratively

1. Interpret the meaning of a question
2. Recall all relevant facts related to the question
3. Internally summarize those facts
4. Report the summary judgment accurately

JH Respondent Behaviors

Cognitive Behaviors: Satisficing, Response substitution, Halo error
Social Behaviors: Acquiescence, Social desirability bias, Economic behavior
Survey Behaviors: Response styles, Mode effects, Practice effects, Panel conditioning

JH

Weak Satisficing
• Selecting the first choice that appears reasonable
• Agreeing with assertions (“acquiescence response bias”)

Strong Satisficing
• Endorsing the status quo instead of change
• Failing to differentiate in ratings
• Selecting “Don’t know” rather than giving an opinion
• Randomly choosing

Source: Krosnick, J. A. (1991). “Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys.” Applied Cognitive Psychology, 5, 213-236.

JH Memory Biases

Change bias, Childhood amnesia, Choice-supportive bias, Consistency bias, Context effect, Cross-race effect, Generation effect, Humor effect, Illusion-of-truth effect, Lag effect, Leveling and sharpening, Levels-of-processing effect, List-length effect, Misattribution, Modality effect, Mood-congruent memory bias, Next-in-line effect, Osborn effect, Part-list cueing effect, Peak-end effect, Persistence, Picture superiority effect, Positivity effect, Primacy effect, Processing difficulty effect, Reminiscence bump, Rosy retrospection, Self-relevance effect, Source confusion, Spacing effect, Stereotypical bias, Suffix effect, Suggestibility, Telescoping effect, Testing effect, Verbatim effect, Zeigarnik effect

Source: http://en.wikipedia.org/wiki/Memory_bias

JH Acquiescence

• Some respondents are simply agreeable, and indicate agreement out of politeness
• Other respondents expect that the researchers agree with the listed items and defer to their judgment
• Most respondents find agreeing takes less effort than carefully weighing each level of disagreement and agreement

Source: Saris, Krosnick and Shaeffer, 2005

JH Mode Effects

• Face-to-face: a “guest” script
• Phone interviews: a “solicitor” script
• IVR interviews: a “voice mail” script
• Internet surveys: a “web form” script
• Mail surveys: a “form” script

Social desirability bias highest:
1. Telephone surveys
2. Face-to-face surveys
3. IVR surveys
4. Mail surveys
5. Web surveys

JH Answer Patterns for Common Response Styles

Scale used: Completely disagree, Disagree, Somewhat disagree, Neither agree nor disagree, Somewhat agree, Agree, Completely agree

Response styles:
• Optimal Responding
• Extreme Response Style (ERS)
• Response Range (RR)
• Truncated scales: Mild Response Style, Midpoint Response (MPR)
• Social/anti-social styles: Acquiescence Response Style (ARS), Disacquiescence Response Style (DARS), Socially Desirable Responding (SDR)
• Noncontingent Responding (NCR)
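These styles leave fingerprints in collected data. As an illustration only (the deck prescribes no formulas, so the function name and output labels below are hypothetical), here is a minimal Python sketch of per-respondent indices an analyst might compute on a 7-point scale:

    # Sketch: simple response-style indices for one respondent's 1..7 ratings.
    def response_style_flags(answers, scale_max=7, midpoint=4):
        n = len(answers)
        return {
            "ERS": sum(a in (1, scale_max) for a in answers) / n,  # extreme responding
            "MPR": sum(a == midpoint for a in answers) / n,        # midpoint responding
            "ARS": sum(a > midpoint for a in answers) / n,         # acquiescence
            "DARS": sum(a < midpoint for a in answers) / n,        # disacquiescence
            "RR": max(answers) - min(answers),                     # response range
        }

    print(response_style_flags([7, 7, 6, 7, 7, 6]))  # an extreme/acquiescent pattern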

JH Response Styles by Country: Informed by Culture

Source: Johnson, Kulesa, Cho, Shavitt, 2003; Vovici

JH Agenda

1. How respondents think
2. Questionnaire design overview
3. Addressing common mistakes

Sample Question Sequence

DA Scale Types

DA Unidimensional vs. Multidimensional

DA Data Levels

DA Question Balance and Symmetry

DA Skip Patterns and Branching

DA Avoid Biases

DA Agenda

1. How respondents think
2. Questionnaire design overview
3. Addressing common mistakes

Asking Objective Questions

• Respondents should not be able to determine where you stand on any topic
  o Use nonjudgmental wording
  o Choose neutral terms
• Don’t ask leading questions
  o Not “What do you like about your service?”
  o But “What, if anything, do you like…?”
• Write from the respondent’s perspective, not your perspective

JH Asking Objective Questions

• Remove ambiguity: “What is your favorite drink?” (drink = beverage or drink = alcoholic beverage)
• Ask one item at a time
  o not: “How would you rate our price and service?”
  o not: “How easy was it to reach someone to help?”
• Avoid industry jargon
• Specify how you use general terms
• Don’t make subtle distinctions
• Have others proofread your questions for clarity
• Pre-test the survey with a segment of your audience

JH To Label or Not Label Each Point of a Scale

Many Variations Possible

JH To Label or Not Label Each Point of a Scale

Many Variations Possible

Best Practices
• Respondents prefer fully labeled scales
• Fully labeled scales have greater reliability and validity
• Numeric values alter the meaning of labels and should be avoided
• 5-point unipolar and 7-point bipolar scales have the greatest reliability and validity
• Where possible, use standard scales rather than writing your own (a sketch of a reusable scale library follows below)

Source: Krosnick, J. A., & Fabrigar, L. R. (1997). “Designing rating scales for effective measurement in surveys.”
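One way to make “use standard scales” operational is to keep vetted, fully labeled scales in a single reusable library. A minimal Python sketch, with scale names chosen for illustration and labels built from the unipolar patterns shown in this deck:

    # Sketch: a small library of standard, fully labeled unipolar scales.
    STANDARD_SCALES = {
        "satisfaction": ["Not at all satisfied", "Slightly satisfied",
                         "Moderately satisfied", "Very satisfied", "Extremely satisfied"],
        "importance":   ["Not at all important", "Slightly important",
                         "Moderately important", "Very important", "Extremely important"],
        "frequency":    ["Never", "Rarely", "Sometimes", "Often", "Always"],
    }

    def labeled_scale(name):
        # Fail loudly rather than improvising a new, unvetted scale.
        if name not in STANDARD_SCALES:
            raise KeyError(f"No standard scale named {name!r}; add one deliberately.")
        return STANDARD_SCALES[name]

    print(labeled_scale("satisfaction"))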

JH

Patterns to Use for Scales

“He’s embiggened that role with his cromulent performance.”

JH

Patterns to Use for Scales

Unipolar Scale (0..100)
• Completely* cromulent
• Very cromulent
• Moderately cromulent
• Slightly cromulent
• Not at all cromulent
*or “Extremely” where appropriate

Bipolar Scale (-1..0..+1)
• Completely* disembiggened
• Mostly disembiggened
• Somewhat disembiggened
• Neither embiggened nor disembiggened
• Somewhat embiggened
• Mostly embiggened
• Completely* embiggened

JH Patterns to Use for Scales

Unipolar Scale (0..100)
• Completely* cromulent
• Very cromulent
• Moderately cromulent
• Slightly cromulent
• Not at all cromulent
*or “Extremely” where appropriate

Examples with Completely: ___ acceptable, ___ likely, ___ probable, ___ satisfied, ___ true of me, ___ true of what I believe

Examples with Extremely: ___ aware, ___ concerned, ___ easy, ___ familiar, ___ important, ___ influential

JH Other Common Unipolar Scales

• Frequency: Always, Often, Sometimes, Rarely, Never
• Grade: A, B, C, D, F
• Liking: Like extremely well, Like quite well, Like moderately, Like slightly, Not like at all
• Priority: Essential, High priority, Medium priority, Low priority, Not a priority
• Quality (traditional): Excellent, Good, Fair, Poor, Very Poor
• Quality (contemporary): Excellent, Good, Average, Poor, Terrible
• Quality (relative): Excellent given the price, Good given the price, Average given the price, Poor given the price, Terrible given the price
• Quantity: All, Most, Half, Some, None

Developing Custom Scales

Labels Reflect Equal Intervals

[Chart: scale labels positioned at equal intervals from 0 to 100]

Recent Client Example: “How do you feel about advertisements on television?”
• Love
• Like
• Neutral
• Dislike

Other scale ideas:
• Love, Like, Neutral, Dislike, Hate
• Terrible, Poor, Average, Good, Excellent

JH Bipolar Scales are Outdated

Case Against Bipolar Scales
• Because bipolar scales contrast two opposites, they require more cognitive effort for respondents to evaluate than unipolar scales. Respondents must decide which of two extremes to select (or the midpoint), then to what degree they tend toward that extreme. For a unipolar scale, respondents are just assessing the extent.
• The midpoints of bipolar scales are a point of confusion, open to different interpretations by different respondents. Sometimes a choice like “Neither boring nor interesting” is selected as a “don’t know” response, while to other respondents it might mean “neither” (“neutral”) or it might mean “boring in some ways and interesting in others.” (This example is from “The Science of Asking Questions” by Nora Schaeffer and Stanley Presser.)
• Survey authors can go astray by presenting bipolar questions where the endpoints are not pure opposites (e.g., Cheap to Expensive, with cheap having connotations of low quality in addition to low price).
• Better-formed bipolar scales are longer and require more reading than unipolar scales. While bipolar scales often are presented using 5-point scales (e.g., Completely unimportant, Somewhat unimportant, Neither unimportant nor important, Somewhat important, Completely important), 7-point scales are more reliable (and minimize context effects from earlier questions). Yet it turns out that asking bipolar scales as three questions is more reliable still (see below).
• Bipolar scales can also complicate factor analysis, according to Wijbrandt van Schuur and Henk Kiers.

Alternatives
• Bipolar satisfaction, importance, and likelihood scales can easily be replaced by unipolar scales that will be more reliable and easier on the respondent.
• Bipolar scales work well for changes in quantity and attitudes about changes in quantity. A bipolar scale works well for predicting positive vs. negative recommendations of brands (Schneider, Berent, Thomas, and Krosnick).
• Malhotra, Krosnick, and Thomas show that breaking bipolar scales into multiple questions improves criterion validity over using single questions. For instance, instead of using a 7-point bipolar scale, use these three questions (see the branching sketch below):
  1. “Do you think that the amount of money the federal government spends on the U.S. military should be increased, decreased, or neither increased nor decreased?”
  2. [if answered increased] “Do you think that the amount of money the federal government spends on the U.S. military should be increased a lot, a moderate amount, or a little?”
  3. [if answered decreased] “Do you think that the amount of money the federal government spends on the U.S. military should be decreased a lot, a moderate amount, or a little?”

http://www.researchscape.com/blog/when-and-how-to-use-bipolar-scales
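The three-question decomposition above is straightforward to implement as branching logic. A minimal sketch; the ask() helper and console I/O are hypothetical stand-ins for a survey platform’s rendering:

    # Sketch: the Malhotra/Krosnick/Thomas decomposition as branching logic.
    def ask(question, choices):
        print(question)
        for i, choice in enumerate(choices, 1):
            print(f"  {i}. {choice}")
        while True:
            raw = input("> ")
            if raw.isdigit() and 1 <= int(raw) <= len(choices):
                return choices[int(raw) - 1]

    def bipolar_as_three_questions():
        direction = ask("Should federal military spending be increased, decreased, "
                        "or neither increased nor decreased?",
                        ["Increased", "Decreased", "Neither"])
        if direction == "Neither":
            return ("Neither", None)
        degree = ask(f"Should it be {direction.lower()} a lot, a moderate amount, or a little?",
                     ["A lot", "A moderate amount", "A little"])
        return (direction, degree)

“Don’t Know” or “No Opinion” Choices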

• When “satisficing,” respondents will select a no-opinion choice if presented with one, sometimes even if they have an opinion.
• When no such choice is presented, most respondents will choose from the other choices.
• Omit a no-opinion choice when asking for attitudes. Provide a “Don’t know” choice when prompting respondents to recall specifics.

Source: Krosnick, J. A., & Fabrigar, L. R. (1997). “Designing rating scales for effective measurement in surveys.”

JH Matrix/Table/Grid Questions

• Concise technique for combining questions with common topics
• Can be 50% faster for respondents to complete online, but speed may lead to mistakes

Source: SSI, “Grid Test Summary”, 2009

JH Reimagine Matrix Questions when You Can

"We know respondents don't like grids... Now, we're beginning to learn that not only are grids frustrating for respondents - they actually produce inferior data." - Jackie Lorch, SSI VP • Break each row of the matrix into a separate question or group of questions on its own page • Rewrite each row of the matrix into separate questions, replacing checkboxes with fully labeled scales • Refactor importance matrixes into choose-many questions • Refactor Yes/No Matrix questions into choose-many questions (checkbox lists) instead • At least limit to one grid per study • Be creative

JH Are Respondents Too Agreeable?

Likert Scale
• Completely disagree
• Disagree
• Somewhat disagree
• Neither agree nor disagree
• Somewhat agree
• Agree
• Completely agree

Best Practices
• The traditional Likert scale is obsolete
• Over 100 studies have demonstrated acquiescence bias
• Use “construct-specific response options” instead – common rating scales and custom scales

- Saris, Krosnick & Shaeffer (2005)

JH Before...

JH ...and After

How interested are you in buying sponsor products or brands when you can?
• Not at all interested, Slightly interested, Moderately interested, Very interested, Extremely interested

How interested are you in finding out more about what sponsor brands offer? • Not at all interested, Slightly interested, Moderately interested, Very interested, Extremely interested

How do sponsor brands compare to non-sponsor brands? • Much better, Slightly better, The same, Slightly worse, Much worse

How likely are you to watch commercial breaks when the commercials are sponsored compared to when they are not sponsored?
• Much more likely, Slightly more likely, As likely, Slightly less likely, Much less likely

How do you identify with sponsor brands compared to non-sponsor brands? Brands sponsoring the games are...
• Much more for people like me, Slightly more for people like me, As much for me as non-sponsor brands, Slightly less for people like me, Much less for people like me

How appreciative are you that brands help give you access to the games? • Not at all appreciative, Slightly appreciative, Moderately appreciative, Very appreciative, Extremely appreciative

How appreciative are you that sponsor brands help fund student athletes’ ability to attend college? • Not at all appreciative, Slightly appreciative, Moderately appreciative, Very appreciative, Extremely appreciative

JH Constant Sum & Allocation Questions

• Limit the number of items
• If you have too many items, break them into categories and then ask follow-on allocation questions
• If appropriate, include an Other category as a safety valve for respondents
• Use matrix questions with ranges instead if respondents cannot easily recollect and quantify their past behavior
• Provide visual feedback of the current working sum (see the validation sketch below)
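A minimal sketch of that running-sum feedback, assuming a 100-point budget (the item names are illustrative):

    # Sketch: constant-sum validation with feedback on the working sum.
    def validate_allocation(allocations, total=100):
        working_sum = sum(allocations.values())
        if working_sum == total:
            return True, f"{working_sum} of {total} points allocated."
        return False, f"{working_sum} of {total} allocated; {total - working_sum} remaining."

    answers = {"Price": 40, "Quality": 35, "Service": 15, "Other": 5}
    print(validate_allocation(answers))  # (False, '95 of 100 allocated; 5 remaining.')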

JH Source: Ocucom

JH Impact of Showing Logos

• May increase brand awareness recall
• May be more accurate for assessing print, web and video campaigns
• May be less accurate for assessing radio campaigns

JH Tips for Shortening the Survey

• Keep Your Focus – Remove questions that don’t directly address the goal of the survey
• Ask Only the Most Important Questions – A common research tactic is to have three similar questions on a similar topic: use one
• Don’t Ask Esoteric Questions – Cut questions that make distinctions only apparent to those within your organization
• Don’t Set False Expectations – Remove questions that raise issues that can’t be addressed (for customers, free services; for employees, extended vacation time)

JH Shorten the Survey from the Respondent’s Perspective

• Skip Respondents Past Inapplicable Sections – Don’t subject respondents to a survey about products or services they don’t have or can’t have
• Import Answers – Use CRM data to pipe in answers to “hidden questions”
• Randomize Displayed Sections – For less important sections, randomly display only one section to each respondent (see the sketch below)
• Break into Multiple Questionnaires – Maybe the questions for different target groups are so different that they are best served by different questionnaires
• Use Fewer Pages – Page submits add a burden, so the fewer pages the better, for the most part
• Keep the Questionnaire Interesting – Respondents perceive interesting surveys as shorter!
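A minimal sketch of section randomization, assuming one low-priority module is rotated in per respondent (the section names are hypothetical):

    # Sketch: show all core sections plus one randomly rotated optional section.
    import random

    CORE_SECTIONS = ["Screener", "Usage", "Satisfaction", "Demographics"]
    OPTIONAL_SECTIONS = ["Brand perceptions", "Media habits", "Future features"]

    def sections_for_respondent(rng=random):
        rotated = rng.choice(OPTIONAL_SECTIONS)
        # Keep demographics last, per the sequencing advice elsewhere in this deck.
        return CORE_SECTIONS[:-1] + [rotated] + CORE_SECTIONS[-1:]

    print(sections_for_respondent())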

JH Test, Test, Test

Self-Test → Pre-test → Pilot Test → Publish

Self-test checklist: question flow, question wording, question types, scale consistency, answer validation, required answers, skip patterns, errors of omission. (A sketch of automated skip-pattern testing follows below.)
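Skip patterns in particular lend themselves to automated self-testing. A minimal sketch (the routing table and question IDs are hypothetical) that walks simulated respondents through the logic and asserts no one reaches an inapplicable question:

    # Sketch: self-test a skip pattern with simulated respondents.
    ROUTES = {
        "Q1_own_product": {"Yes": "Q2_satisfaction", "No": "Q9_demographics"},
        "Q2_satisfaction": {"*": "Q9_demographics"},
        "Q9_demographics": {"*": "END"},
    }

    def walk(answers):
        path, question = [], "Q1_own_product"
        while question != "END":
            path.append(question)
            branches = ROUTES[question]
            answer = answers.get(question, "*")
            question = branches.get(answer, branches.get("*", "END"))
        return path

    # Non-owners must never see the satisfaction question.
    assert "Q2_satisfaction" not in walk({"Q1_own_product": "No"})
    print(walk({"Q1_own_product": "Yes", "Q2_satisfaction": "Very satisfied"}))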

JH Key Takeaways

1. Keep in mind respondent behaviors and memory biases
2. Use respondent-facing language rather than corporate jargon
3. Use alternatives to scales for multilingual research
4. Replace agreement scales with other scales where possible
5. Re-factor grids; try to limit a study to no more than one grid
6. Use 5-point fully labeled unipolar scales where possible for English studies
7. Re-factor bipolar questions
8. Shorten as much as you can

Questionnaire Design

David Ashley
• Human Capital Data Analytics Division Manager for the U.S. Department of Homeland Security
• Past President of the MRII (2009) and the author of the upcoming questionnaire design course
• [email protected]

Jeffrey Henning
• President of Researchscape International
• Current President of the MRII and the editor of the upcoming questionnaire design course
• [email protected]
• @jhenning on Twitter

APPENDIX

Focus on a Goal

• Be precise about what information you need to gather and what you plan on doing with it
• If your sponsor or client hasn’t done a survey in a while, the tendency is for every department to chime in with questions they want you to ask
• A narrow goal will help you to relentlessly simplify the survey

JH Examples of Good Research Objectives

Good Objectives
• Determine how often callers into a help desk check the customer-service web site before calling
• Prioritize the feature ideas for a new product with prospects
• Find out how satisfied customers are with tech support
• Determine if employees feel that the organization is living up to its core values

Bad Objectives
• It’s been a while; we should do a survey.
• What are our customers thinking right now?
• Maybe customers would like it if we provided support after 6 pm.
• Let’s create a committee and see what they want to find out from a survey.

JH Sources of Secondary Data

Internal
• Sales receipts, employee data, and financial records
• Process documentation and other official internal documents

External
• Data from external sources on issues relevant to the issue under study
• Sources include professional journals and articles, books, magazines, and newspapers

DA Asking a Lot of the Respondent, Literally and Figuratively

1. Interpret the meaning of a question
2. Recall all relevant facts related to the question
3. Internally summarize those facts
4. Report the summary judgment accurately

JH Respondent Coping Strategies: Different Types of Processing

1. Interpret question meaning

2. Recall relevant facts

3. Internally summarize facts

4. Report summary judgment

JH Respondent Behavior Degrades as Survey Lengthens

Optimizing

Weak Satisficing

Strong Satisficing

Cheating

JH Social Desirability Bias, by Topic

• Health initiatives - Respondents exaggerate frequency of exercise and compliance with medical regimens
• Voting behavior - Respondents exaggerate intent to vote
• Illegal behavior - Respondents underreport drug usage and criminal history
• Sexual behavior - Respondents deny, sanitize or mainstream aspects of their sexual lives
• Bigotry - Respondents downplay any prejudices
• Salary - Poor respondents overstate income; rich respondents understate it

JH Halo Error & Response Substitution

• What makes cheesesteaks and Tastykakes® taste even better?
  o Fans like the food 11% more when the Eagles win. Victory tastes delicious.
  o Halo error confuses the true strengths and weaknesses of products and services.
  o Makes benchmarking attributes across competing brands and products unreliable (brands are well documented as introducing halo effects).
  o Leads to misinterpretation of satisfaction attributes.
• Response substitution is when respondents’ answers to questions reflect attitudes that they want to convey but that the researcher has not asked about

JH What Respondents Like About Taking Surveys

• Nothing - 34%
• Dislike surveys - 24%
• Chance to voice their opinion - 13%
• Earn an incentive - 11%
• Like to be helpful - 10%
• Interesting topics - 9%
• Answers make a difference - 7%

Source: Vovici Survey Nation study, N = 100 RDD-sampled U.S. adults

JH The Third Factor: Questions’ Proper Sequence

Sequence: Screener → Open-Ended Questions → General Questions → Specific Questions → Demographics → Follow-up

• Use the inverted pyramid approach, drilling down
• Ask harder questions first, before respondents grow tired of the survey

JH The Third Factor: Questions’ Proper Sequence

Screener
• Conventionally, screeners route people out of the survey depending on answers to the initial questions
• Some recruitment surveys route people out throughout the survey

JH The Third Factor: Questions’ Proper Sequence

Open-Ended Questions
• Capture respondents’ views in their own words before biasing them with your later questions
  o “What, if anything, do you like about…?”
  o “What, if anything, do you dislike…?”
• Great for brand recall and awareness
• Answers provide color commentary to later closed-ended questions
• Answers validate choice lists

JH Trade Off Increased Abandonment for More Verbatim Responses

Survey concluding with open-ends vs. survey beginning with open-ends:
• Abandonment rate: 1% vs. 6%
• Responses with at least one verbatim answered: 61% vs. 90%
• Average length of verbatim answer: 13 words vs. 13 words
• Sample size: 70 vs. 79

Source: Vovici research on research, 9-24-09

JH The Fourth Habit: Order Questions Logically

Sequence: Screener → Open-Ended Questions → General Questions → Specific Questions → Demographics → Follow-up

• Use skip patterns
• Use branching

JH Sequential Filtering Provides Better Conversational Flow

[Diagram: sequential filtering vs. grouped filtering of follow-up questions]

Source: Lisa Carley-Baxter, Andy Peytchev and Michele C. Black, 2010

JH The Third Factor: Questions’ Proper Sequence

Demographics
• Place near the end: tedious and intrusive, but can be answered on “autopilot”
• Use demographic and firmographic questions to profile respondents and their organizations
• Enables you to cross-tabulate and compare subgroups
• Pre-populate from CRM systems or other sources where possible

JH The Third Factor: Questions’ Proper Sequence

Follow-up
• Ask for any final comments about any aspect of the survey or topic
• Ask for permission to follow up with them about their answers
• Prompt if they have an issue they want to be contacted about

JH The Third Factor: Questions’ Proper Sequence

• Only at the very end, ask respondent satisfaction questions evaluating the survey
• Use them to drive continual improvement of your research process itself
• Key measures: how interesting was the survey, how long was it, open comments

JH Four Basic Question Types

Open-Ended Questions: Essay Question, Fill in the Blank
Closed-Ended Questions: Choose One, Choose Many

JH Force vs. Unforced Scales

DA The “Shoulds” of Question Wording

• Q. should be focused on a single issue or topic
• Q. should be interpreted the same way by all respondents
• Q. should use the respondent’s core vocabulary
• Q. should be a grammatically simple sentence if possible
• Q. should be brief

JH The “Should Nots” of Question Wording

• Q. should not assume criteria that are not obvious
• Q. should not be beyond the respondent’s ability or experience
• Q. should not use a specific example to represent a general case
• Q. should not request recall of specifics when only generalities will be remembered
• Q. should not require the respondent to guess a generalization
• Q. should not ask for details that cannot be related
• Q. should not contain words that overstate the condition
• Q. should not have ambiguous wording
• Q. should not be “double-barreled”
• Q. should not lead the respondent to a particular answer
• Q. should not have “loaded” wording or phrasing

JH Open-Ended Questions vs. Closed-Ended Questions

Example verbatim answers to an open-ended color question (spelling as typed): beige, blue, Bluw, green, purpel, red, fire engine, Black, blue red, green, purple, red sea, black, blue, Gray, grey, red green, sky, black, blue, Green, orange, red, blue, blue, Blue r0x!!, green, orange, red yellow

JH Best Practices

Open-Ended Questions
• Great for hearing from the respondent in their own words
• Provide unbiased, unfiltered answers
• Good for catching anything you missed, at the end of a survey
• Limit use, though, because they are:
  o Time consuming and taxing to answer
  o Difficult to analyze

Closed-Ended Questions
• Make sure the list contains all common choices
• Better to have too many items rather than too few, but try not to clutter the list
• Provide the respondent with an “Other – please specify” choice
• Arrange the choices in logical order
• If no logical order, then randomize the order

JH Yes/No Questions: Common Pitfalls

• Force-fitting a question into a yes/no format by overriding what “Yes” or “No” means
• Providing caveats to the Yes/No choices
• Asking for a single Yes/No to multiple items
• Letting the respondent select both Yes and No
• Asking questions that can’t be answered Yes/No
• Listing a bunch of similar Yes/No questions in a matrix (paper, web)
• Asking questions that no one wants to say “no” to (“acquiescence bias”)

JH Choose-Many Questions

• Use whenever more than one choice is applicable
• Always include a “None of the above” as an exclusive choice; otherwise you can’t tell if the respondent answered the question
• Aim for mutually exclusive choices: avoid providing choices that can be synonymous or subsets/supersets of one another
• Don’t use a list box to show the choices
• If the choice list has no natural order, randomize the order, anchoring “None of the above” to the bottom (see the sketch below)
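A minimal sketch of that ordering rule (the choice text is illustrative):

    # Sketch: shuffle unordered choices, anchoring the exclusive escape choice last.
    import random

    def randomized_choices(choices, anchored="None of the above", rng=random):
        body = [c for c in choices if c != anchored]
        rng.shuffle(body)
        return body + [anchored]

    print(randomized_choices(["Email", "Phone", "Chat", "In person", "None of the above"]))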

JH When in Doubt, Use Rating Questions

Ranking Questions
• Taxing for respondents, requiring them to compare multiple items against one another
• Difficulty increases disproportionately as choices are added
• Take three times longer to answer than rating questions (Munson and McIntyre, 1979)
• Limit the range of statistical analysis

Rating Questions
• Lead to less differentiation among choices
• Ratings often fall into a narrow upper band
• Personal variations in rating styles
• Possible spurious positive correlations due to individuals’ personal variations

Source: Alwin, D. F., & Krosnick, J. A. (1985). “The measurement of values in surveys: A comparison of ratings and rankings.”

JH Mental Effort of Rating vs. Ranking Questions: Ranking Questions are More Taxing for Respondents

[Chart: comparisons required (y-axis, 0 to 70) vs. number of choices, 2 to 12 (x-axis), for rating and ranking questions; ranking comparisons grow far faster. A worked sketch follows.]
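The gap in the chart follows directly from the arithmetic: fully ranking n items implies up to n(n-1)/2 pairwise comparisons, while rating n items takes only n judgments. A quick sketch:

    # Sketch: judgments needed to rate vs. fully rank n choices.
    for n in range(2, 13):
        ranking = n * (n - 1) // 2   # pairwise comparisons implied by a full ranking
        print(f"{n:2d} choices: rating {n:2d} judgments vs. ranking {ranking:2d} comparisons")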

JH What’s Your Favorite Color?

Poll A – 6 Choices
• Green
• Blue
• Purple
• Orange
• Black
• Red
• None of those!

Poll H – 15 Choices
• Green
• Blue
• Black
• Red
• Pink
• White
• Maroon
• Magenta
• Yellow
• Brown
• Gray
• Lime Green
• Sky Blue
• Hot Pink
• Whatever Hayley's favorite color is ;)

JH Provide Exhaustive Lists of Choices

[Chart: top 3 favorite colors as determined by polls with different numbers of choices (A through H), compared against the correct ranking]

JH Use Dropdowns Only for Known Lists

• Great for letting respondents pick from a long list of known choices (e.g., states, provinces)
• Even casual users will use the keyboard effectively (e.g., pressing M three times to cycle from Maine to Maryland to Massachusetts)
• Make sure not to make any choice the default (e.g., having Alabama selected)
• Make the default choice an instruction like “Click here to choose”
• Don’t use when the choices have to be read to be understood, as in lists of industries or job titles

JH Juxtaposing Next & Previous Buttons

[Chart: rate of Previous button use (0% to 1.2%) vs. completion time, 9.4 to 10.3 minutes]

Source: Couper, Baker, Mechling

JH Juxtaposing Next & Previous Buttons

[Chart: rate of Previous button use (0% to 1.2%) vs. completion time, 0 to 12 minutes]

Source: Couper, Baker, Mechling

JH Right Length: Not Too Short, Not Too Long

2-4 questions: Transactional Survey

5-10 questions: Event Evaluation

10-20 questions: Customer Satisfaction

20-30 questions: Planning

50-70 questions: Major Account Review

70-90 questions: Employee Satisfaction

JH Shorten the Survey

[Chart: abandonment rate of a 180-question survey across 20 question blocks, averaging 6 questions per block, with annotations:]
• Decrease the number of matrix/grid questions
• Reduce the number of open-ended questions
• Put demographic questions at the end
• Shorten the survey!

Source: “Dropouts on the Web”, Galesic, 2006

JH Causes of Survey Incompletes

Primary Reason Respondent Abandoned Survey
• Subject matter - 35%
• Media downloads - 20%
• Survey length - 20%
• Grids - 15%
• Too many open-ends - 5%

Source: Lightspeed Research

Interesting Questionnaires are Perceived as Shorter

[Chart: perceived questionnaire length, from “Optimal length” through “Somewhat too long” to “Absolutely too long”, vs. number of questions (fewer, average, more), for respondents with higher vs. lower interest in the questionnaire; lower interest widens the perception gap]

Source: “Effects of length of questionnaire on response rates”, Mirta Galešić, 2002

JH