Chapter 7: Survey Research

A. LEARNING OUTCOMES. After studying this chapter students should be able to:

 Explain basic concepts about populations and samples.

 Describe the benefits and limitations of survey research.

 Describe different types of probability and nonprobability sampling, and explain why probability sampling generally is preferred.

 Explain the concepts of sampling error and confidence level.

 Discuss major factors that affect decisions about sample size.

 Describe the general steps in developing a questionnaire.

 Identify different types (i.e., formats) of questions.

 Explain major issues that arise in the wording and placement of questions.

 Describe face-to-face and telephone interviewing and the advantages and disadvantages of each.

 Describe mail and Internet surveys and the advantages and disadvantages of each.

 Explain nonresponse bias and discuss its relation to response rates.

 Discuss several critical thinking questions that people should ask when reading about survey results.

 Explain “sugging” and “frugging” and why they are unethical.

B. KEYWORDS. Closed-ended question, Cluster sampling, Confidence level, Convenience sampling, Inter-rater reliability, Interviewer effects, Likert scale, Margin of sampling error, Nonprobability sampling, Nonrepresentative sample, Nonresponse bias, Open-ended question, Population, Probability sampling, Purposive sampling, Quota sampling, Representative sampling, Response rate, Sample, Sampling frame, Sampling variability, Self-selection, Simple random sampling, Social desirability bias, Stratified random sampling, Survey.


C. BRIEF CHAPTER OUTLINE

I. Basic Characteristics of Surveys

A. Populations and Samples

B. Why Conduct Surveys?

C. Limitations of Surveys

II. Selecting the Sample

A. Probability Sampling

B. Nonprobability Sampling

C. Margin of Sampling Error and Confidence Level

D. Why Not Aim for More Precise Estimates?

III. Constructing the Questionnaire

A. Steps in Developing a Questionnaire

B. Types of Questions

C. Wording the Questions: It’s Harder than You Think

D. Placing the Questions in Order

IV. Administering the Survey

A. Face-to-Face Interviews

B. Telephone Interviews

C. Mail Surveys

D. Online Surveys

E. Response Rate and Nonresponse Bias

V. Being a Smart Survey Consumer

A. Thinking Critically About Survey Results

B. Being Aware of Bogus Surveys

D. EXTENDED CHAPTER OUTLINE *Much of this summary is taken verbatim from the text.

Introduction. This chapter discusses survey research and sampling. It begins by describing the basic characteristics of survey research and how random and nonrandom samples are obtained. It then examines how one might approach the construction and administration of a survey. The chapter ends with a discussion of the importance of being a smart consumer of survey research.

Part I: Basic Characteristics of Surveys

A. Populations and samples. A population includes all cases or observations of interest to a researcher. Because obtaining information from an entire population is usually not possible, researchers instead examine a sample, a subset of cases or observations from the population. A sampling frame is an operational definition of the population of interest. For example, the complete list of names and phone numbers in a telephone directory could serve as a sampling frame from which X number of names are selected to be sampled. In the age of cell phones, many people opt not to have a landline, making the telephone directory an imperfect sampling frame. The researcher's job, then, is to identify a sampling frame that best represents the actual population, knowing that none is absolutely perfect. Inferences about a population can be made from sample data if the sample is a representative sample: one that reflects the important characteristics of the population. In contrast, a nonrepresentative sample does not reflect the population well and cannot be used to make inferences about the population from which it was drawn. The percentage of the sample that participates in a survey is the survey's response rate.

B. Why conduct surveys? Surveys are important to both basic and applied research endeavors. They are often conducted because they are efficient; they enable a researcher to collect data from a large number of cases, and any single survey can measure any number of variables. Specific uses of surveys in research include:

a. describing population characteristics,

b. describing and comparing the characteristics of different populations or of different demographic groups within a population,

c. describing potential time trends,

d. describing relations among psychological variables, and

e. testing hypotheses, theories, and models.

C. Limitations of surveys. First, because surveys are nonexperimental, researchers cannot use them to examine causal relations among variables. Second, if the sampling method used to collect survey data was a poor one, the results may not represent the population. Finally, survey data are based on participants' self-reports, and as such, their validity can be compromised.

Part II: Selecting the Sample

This section describes the two general methods for selecting a survey sample: probability sampling and nonprobability sampling.

A. In probability sampling, each member of a population has a chance of being selected into the sample, and the probability of being selected can be specified statistically. Probability samples are the only samples that enable one to make inferences about populations. Probability samples can be obtained in several ways.

a. In simple random sampling every member of the sampling frame has an equal chance of being selected to participate in the survey.

b. In stratified random sampling a sampling frame is divided into groups, called strata, and then from within each group random sampling is used to select the members of a sample.

c. In cluster sampling, units called clusters that contain members of the population are identified. A number of clusters are then randomly selected, and either all people within each selected cluster are surveyed or a random sample within the cluster is surveyed.
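The three probability sampling schemes above can be sketched in a few lines of code. The sampling frame below is entirely invented (a hypothetical roster of 100 students, with class year as the stratum and dorm as the cluster); Python's standard random module supplies the random selection.

```python
import random

random.seed(42)  # reproducible illustration

# A toy sampling frame of 100 people. The "year" strata and
# "dorm" clusters are invented for illustration.
frame = [{"id": i,
          "year": "freshman" if i < 60 else "senior",
          "dorm": i % 10}
         for i in range(100)]

# Simple random sampling: every member of the frame has an
# equal chance of selection.
simple = random.sample(frame, k=20)

# Stratified random sampling: divide the frame into strata,
# then sample randomly within each stratum.
stratified = []
for year in ("freshman", "senior"):
    stratum = [p for p in frame if p["year"] == year]
    stratified.extend(random.sample(stratum, k=10))

# Cluster sampling: randomly pick whole clusters (dorms), then
# survey everyone within the chosen clusters.
chosen_dorms = random.sample(range(10), k=2)
cluster = [p for p in frame if p["dorm"] in chosen_dorms]

print(len(simple), len(stratified), len(cluster))  # 20 20 20
```

All three samples end up the same size (20) but are assembled very differently, which is exactly the distinction among the three methods.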

B. With nonprobability sampling each member of the population either does not have a chance of being selected into the sample or the probability of being selected cannot be determined, or both. Unlike probability sampling, nonprobability sampling techniques are not based on statistical theory to select a sample from a population. Yet, nonprobability samples are the most common in research because they are easy to use and cost-efficient. Types of nonprobability sampling techniques include:

a. Convenience sampling occurs when members of a population are selected nonrandomly on the basis of convenience. This method of obtaining whoever a researcher can get can introduce bias into the study.

b. In quota sampling a sample is nonrandomly selected to match the proportion of one or more key characteristics of the population. For example, if a population is predominantly female the participants selected would ensure the sample is also predominantly female.

c. Self-selected samples are those in which the participants place themselves in a sample rather than being selected for inclusion by a researcher. Although self-selected samples are usually very large, they are often unrepresentative of the population. Despite this, self-selected samples are the most common in psychological research. The validity of the results obtained from a self-selected sample is measured by their consistency with the existing body of literature.

d. In purposive sampling researchers select a sample according to a specific goal or purpose of the study, rather than at random. There are two types of purposive sampling. In expert sampling the researcher identifies experts on a topic and asks them to participate. In snowball sampling people contacted to participate in a survey are asked to help recruit additional participants by providing names and contact information of other qualified participants.

C. Margin of sampling error. Using probability sampling methods enables a researcher to make an inference about a population from a sample, but some degree of error remains unless the sample includes the entire population. This is due to sampling variability: chance fluctuations in the characteristics of samples that occur when samples are randomly selected from a population. For example, if a researcher took five random samples of n = 10 from a population of N = 100, it is highly unlikely that the means of the five samples would all equal one another, let alone the population mean. These fluctuations represent sampling error. The margin of sampling error is a range of values within which the true population value is

presumed to reside. The level of certainty one has that the true population value lies within the margin of sampling error is expressed by the confidence level, which is obtained statistically.
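For a sample proportion, the margin of sampling error at a 95% confidence level is commonly approximated as z times the square root of p(1 - p)/n, with z = 1.96. A minimal sketch (the poll numbers below are invented):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of sampling error for a sample proportion.

    p: observed sample proportion, n: sample size,
    z: critical value (1.96 corresponds to a 95% confidence level).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 respondents in which 52% favor a candidate:
moe = margin_of_error(0.52, 1000)
print(round(moe * 100, 1))  # 3.1 (percentage points)
```

So the poll would be reported as 52% plus or minus about 3 percentage points, meaning the true population value is presumed to lie between roughly 49% and 55%.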

D. Why not aim for more precise estimates? Confidence levels are conventionally set at a minimum of 95%. This means that there is a 5% chance that the true population value does not lie within the reported margin of error. The confidence level can be raised, but doing so requires a substantially larger sample. Because of the many costs associated with a large sample (i.e., time and money), the confidence level is typically held at the minimally accepted level.

Part III: Constructing the Questionnaire

A questionnaire is a scientific measuring device that requires great care to develop. This section describes the key elements to be considered when creating an effective questionnaire.

A. The text describes six steps in developing a questionnaire.

a. Reflect on your research goals and then convert them into a list of more specific topics that you want to learn about.

b. Identify variables of interest within each topic.

c. Consider the practical limitations of the survey.

d. Develop your questions, decide on their order, and get feedback from mentors or colleagues.

e. Pretest your questionnaire.

f. Revise the questionnaire as needed after feedback and pretesting, and, if possible, pretest the revised version before conducting your survey.

B. Types of questions. Questions can be formatted in a variety of ways.

a. In an open-ended question, people respond in their own words or terms. Although this type of question allows respondents to provide what they feel is their true answer, such answers tend to be difficult and time-consuming for researchers to analyze. A closed-ended question, in contrast, provides specific response options from which the respondent must choose. These are much easier to analyze than open-ended questions but limit the way in which participants may respond.

b. There are several types of closed-ended questions. Multiple-choice questions are used to measure a variety of variables and often require respondents to select a single answer. A ranking scale asks respondents to order a list of items along some dimension, and forced-choice questions “force” people to choose between two options.

c. Rating scales are often used to enable respondents to report their attitudes, beliefs, and behaviors along a quantitative dimension. The Likert scale, for example, is a rating scale in

which response items are balanced around a neutral point (i.e., respondents neither agree nor disagree).

C. Wording the questions is harder than you think. In general, questions should be worded clearly, concisely, and free of jargon. Types of questions to avoid include leading questions, loaded questions, and double-barreled questions. It is also recommended that researchers avoid the use of double negatives.

D. Placing the questions in order. There are several guidelines for ordering the items on a questionnaire. First, items addressing a similar topic should be grouped together. Second, within a particular group of items, open-ended questions should be placed before closed-ended questions. Similarly, general items should be placed before specific items. These guidelines prevent people from answering subsequent questions based upon previous ones. Lastly, sensitive questions should not begin an instrument.

Part IV: Administering the Survey

Surveys can be administered face-to-face, over the telephone, through postal mail, or online. This section also addresses the issue of response rate in survey research.

A. In a face-to-face interview a researcher asks a participant to respond to survey items and the researcher records the participant’s answers. Face-to-face interviews tend to yield a higher response rate, and they also enable the researcher to establish a rapport with the respondent. Interviews ensure that the survey is completed in its intended order and also enable the researcher to clarify questions for the respondent if need be. Disadvantages of interviews include their high cost, the potential to tax the participant’s memory capabilities, and the possibility of producing interviewer effects. For example, interviewer bias occurs when the interviewer unintentionally behaves in a way that affects the participant’s response. Similarly, interviewer characteristics, such as age and gender, can alter participant behavior. To reduce these effects interviewers should be extensively trained.

B. Telephone interviews became popular in the 1970s, when almost all households had landline telephones. Telephone interviews share many of the advantages of face-to-face interviews. Random digit dialing (RDD), which randomly selects telephone numbers to dial, has the advantage of potentially reaching people with both listed and unlisted numbers. Most telephone interviews use computer-assisted telephone interviewing (CATI) systems, in which the interviewer asks questions as they appear on a computer monitor and records the answers. The rapport established in face-to-face interviews does not necessarily translate to telephone surveys, and people tend to have a shorter attention span on the telephone. The fact that many people have cell phones (and not landlines) poses another issue: some data suggest that cell phone users respond differently than landline users.
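The core idea of RDD can be illustrated with a short sketch: fix the area code and exchange prefix to blocks believed to contain residential lines, then randomize the remaining digits so that listed and unlisted numbers are equally likely to be dialed. The codes below are placeholders, not real calling blocks.

```python
import random

random.seed(7)

def random_digit_dial(area_code="412", prefixes=("555",)):
    """Generate one number to call via random digit dialing (RDD).

    area_code and prefixes are hypothetical; a real design would
    draw them from known residential exchanges. Randomizing the
    last four digits reaches unlisted numbers a directory-based
    sampling frame would miss.
    """
    prefix = random.choice(prefixes)
    line = f"{random.randint(0, 9999):04d}"
    return f"({area_code}) {prefix}-{line}"

call_sheet = [random_digit_dial() for _ in range(5)]
print(call_sheet)
```

In practice RDD designs are more elaborate (e.g., screening out unassigned number banks), but the sketch captures why RDD sidesteps the incomplete-directory problem described above.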

C. Mail surveys are very efficient (in terms of both cost and time), allow respondents to look at each item for as long as they wish, and make it more likely that respondents will answer highly personal questions. However, mail surveys tend to get a low response rate and run the risk of participants not completing the survey correctly.

D. Online surveys are becoming increasingly popular. Although they rely heavily on convenience sampling, they are even more cost and time effective than mail surveys.

E. Nonresponse bias occurs when people who were selected but did not participate in a survey would have provided significantly different answers from those who did participate. Research suggests that the relation between response rate and nonresponse bias is not as strong as once thought. Still, a high response rate is preferred in survey research. Survey response rates have declined significantly over the past several years, and attempts to improve them have produced mixed results.

Part V: Being a Smart Survey Consumer

This section reiterates the importance of critically evaluating research.

A. Thinking critically about survey results. Surveys are everywhere. As such, consumers of survey research need to ask questions that will enable them to evaluate the validity of a survey’s results. Questions include:

a. Who conducted the survey and for what purpose?

b. Was probability sampling used?

c. What method was used to administer the survey?

d. Are the survey questions appropriate?

e. Are the results interpreted properly?

It is important to note that while most scholarly articles will provide answers to the above questions, other sources (e.g., news articles, magazines) may not.

B. Being aware of bogus surveys. Not all surveys are legitimate. Sugging is a deceptive marketing practice in which the caller intends to sell a product, initially disguises that intent, and then asks leading questions under the pretext of conducting a survey. Frugging is a similar practice that involves fund-raising under the guise of research. Both tactics are frowned on by researchers because they violate respondents’ trust, making them less likely to participate in future legitimate surveys.

E. LECTURE AND CLASSROOM ENHANCEMENTS

PART I: Basic Characteristics of Surveys

A. Lecture/Discussion Topics  Response rates. In survey research the response rate of the sample is an important element in ensuring that the results are representative of the population. So what must a response rate be in order for it to be “good”? Fincham (2008) argues that the percent of nonresponders is equal to the amount of nonresponse bias such that if 60% of the sample participates in the survey, the nonresponse bias is 40%. With a potential 40% error rate, how accurate is the data generated from this sample? What constitutes an adequate response rate varies depending upon who you ask, but, given that the average survey response rate in psychology is around 50% (VanHorn, Green, & Martinussen, 2009) it could be set as the minimum desired rate. According to Nulty (2008) the mode of administration also affects response rate. He reports that although paper-based surveys have a 23% increase in response rate compared to online surveys, online response rates can be increased by sending out email reminders to nonresponding sample members as well as to the survey administrator, and by incentivizing participation. 10 CHAPTER 7: Survey Research CHAPTER 7: Survey Research 11

 The largest survey ever. If you google “largest survey study ever” a plethora of results appear. Not surprisingly, there are a number of hits that claim to be the largest survey of ______(fill in the blank). For example, The National Epidemiologic Survey on Alcohol and Related Conditions claims to be “…the largest and most ambitious comorbid study ever conducted,” with a sample of over 43,000 participants. Likewise, the Centers for Disease Control claims to have the “…largest survey ever on drowsy driving,” with over 147,000 respondents. Finally, the BBC reports to have collected data on social class from over 160,000 participants.

B. Classroom Exercise  How honestly would you complete a survey? While psychology students are quick to suggest using a survey to study the relations among variables, they sometimes disregard the accuracy of self-reported data. To help them appreciate the issue of response bias in survey research, ask them to answer, on a scale of 1 (highly unlikely) to 5 (highly likely), how likely they would be to respond truthfully to items asked in surveys about one’s:

o sexual activity

o illegal drug use

o academic performance

o attitude toward statistics

o movie preferences

Compile the results and then discuss with the class that more sensitive topics (e.g., sex and drugs) are likely to yield less accurate responses than benign topics (e.g., movie preferences and attitudes toward stats). By giving your students a survey, you might also generate inaccurate data of your own. If the results seem questionable, use this opportunity to further illustrate the point!

 Construct a survey. Ask students to construct a five-item survey on a topic of their choosing. Do not give them any more detailed instructions than that they have to come up with a set of questions that they think will provide self-reported data on the topic. Have students refer to this preliminary set of questions over the course of your discussion of Chapter 7. The students’ items can be used to help them understand how good items are constructed and how instruments are formatted. By the end of your discussion of survey research, students may have revised their items into a workable survey.

C. Web Resources  The reliability of self-report measures.

o http://www.intropsych.com/ch01_psychology_and_science/self-report_measures.html

o http://www.sciencebrainwaves.com/uncategorized/the-dangers-of-self-report/

 Survey research: http://www.youtube.com/watch?v=G0nl40k0jvU

D. Additional References

Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301–314.

Rea, L. M., & Parker, R. A. (2012). Designing and conducting survey research: A comprehensive guide. Jossey-Bass.

VanHorn, P. S., Green, K. E., & Martinussen, M. (2009). Survey response rates and survey administration in counseling and clinical psychology. Educational and Psychological Measurement, 69, 389–403.

PART II: Selecting the Sample

A. Lecture/Discussion Topics  Convenience Samples. In statistics (and research methods) classes students are taught that because of the way in which they are obtained, convenience samples cannot make inferences about the population from which they were drawn. Does the fact that the majority of psychological research is based on convenience samples make us hypocrites? It’s very important that students understand that the principle difference between convenience samples and true random samples is solely based on probability. The way in which we obtain a random sample makes it more probabilistic that the sample is representative of the population. In turn, the way in which we obtain a convenience sample makes it less probabilistic that the sample is representative of the population. Because both types of samples are based on probability, there is still the possibility that each sample will typify a population very well, or not well at all. In other words, a convenience sample CAN produce a good sample and a random sample CAN produce a bad one. In research we cannot rely on a single study to make inferences about a population, but instead we must evaluate it in the context of the existing literature in order to draw conclusions.

 How big should a sample be? In theory, the larger the sample, the more representative of the population it is. However, if a population has five billion members, how many participants must one measure to obtain a reasonable estimate of the population? There appears to be no hard-and-fast rule specifying that the sample size must be a certain proportion (say, 10%) of the population from which it is drawn. (See: http://www.stat.berkeley.edu/~census/sample.pdf.) Instead, researchers may want to focus on other factors when deciding on sample size, including population size and the margin of error desired (http://www.gillresearch.com/pdfs.gr/samplesize.pdf), or conduct a power analysis.
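The back-of-the-envelope calculation behind typical sample size calculators for proportions is n = z²p(1 - p)/e², where e is the desired margin of error. A sketch (it ignores the finite-population correction, which is why the answer barely depends on population size for large populations):

```python
import math

def required_sample_size(moe, p=0.5, z=1.96):
    """Sample size needed for a given margin of error (moe).

    Uses p = 0.5, the most conservative proportion, and z = 1.96
    for a 95% confidence level. No finite-population correction,
    so the result is the same for a city or for five billion people.
    """
    return math.ceil(z**2 * p * (1 - p) / moe**2)

print(required_sample_size(0.03))  # 1068 (plus or minus 3 points)
print(required_sample_size(0.01))  # 9604 (plus or minus 1 point)
```

Note how tripling the precision requires roughly nine times the sample, which is the point made earlier about why researchers do not simply aim for more precise estimates.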

B. Classroom Exercise  Sampling error. Help students understand the concept of sampling error and how sample size affects it. Put a population of 100 scores into a paper cup (see score distribution below). Have each group of students randomly select a sample of n = 2 and to calculate the mean. Have them put the sample back into the cup and repeat the exercise two more times, first with a sample of = 10 and then with a sample of n = 25. Collect the sample means from each group and display them for the entire class to see. In all probability, each group should have obtained sample means that are comparable to one another and the population mean, but not exactly the same. 14 CHAPTER 7: Survey Research

C. Web Resource  This brief YouTube clip describes confidence intervals and margins of error clearly and concisely. http://www.youtube.com/watch?v=a0AFG7CZ7DQ

 Cluster sampling and stratified random sampling.

o http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Cluster_sampling.html

o http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Stratified_sampling.html

 Students can use this sample size calculator to understand how population size, confidence interval, and margin of error affect the recommended sample size. http://www.surveysystem.com/sscalc.htm

D. Additional References  On random sampling. Waksberg, J. (1978). Sampling methods for random digit dialing. Journal of the American Statistical Association, 73(361), 40–46. Lusinchi, D. (2012). “President” Landon and the 1936 Literary Digest Poll: Were automobile and telephone owners to blame? Social Science History, 36(1), 23–54.

PART III: Constructing the Questionnaire

A. Lecture/Discussion Topics  Analyzing Likert scales. Likert scales are commonly used in the behavioral sciences. That said, it is important for students to understand how Likert-style surveys are analyzed. As with all data analyses, one must first examine what level of measurement (i.e., nominal, ordinal, interval, or ratio) the instrument obtains. Since the response items (e.g., totally disagree, disagree, neither disagree/agree, agree, totally agree) represent different levels of agreement, the data are at least ordinal. Whether they are interval depends upon whether one believes the respondent perceives the various responses to a Likert item as being equally spaced. Those who believe in the perception of equal spacing tend to use parametric statistics, which is appropriate when data is at the interval level. In contrast, those who do not buy into the equal spacing perception of response options tend to use nonparametric statistics.

B. Classroom Exercises  Creating response alternatives to survey items. Have students refer to the survey items they created at the beginning of your discussion of survey research. Ask them to create a close-ended response set (multiple-choice, forced choice, or Likert) for each item and then justify the specific type of response set chosen. Next, have the students complete their neighbor’s survey. In pairs, ask the groups to discuss which items might need clarification or revision and why. This exercise can be used to demonstrate how the responses we anticipate can be very different from what we obtained. CHAPTER 7: Survey Research 15

C. Web Resources  Tips on constructing questionnaires.

o http://vault.hanover.edu/~altermattw/methods/assets/Readings/Questionnaire_Design.pdf

o http://ericae.net/ft/tamu/vpiques3.htm

 Using Excel to organize and analyze survey data.

o https://learningstore.uwex.edu/assets/pdfs/G3658-14.pdf

D. Additional References

Fowler, F. J. (1992). How unclear terms affect survey data. Public Opinion Quarterly, 56(2), 218–231.

PART IV: Administering the Survey

A. Lecture/Discussion Topics  Questionnaires and interviews. Survey research is a method used to obtain self-reported information about an individual. Survey data can be obtained via questionnaire or interview. Sometimes people use (erroneously) the three terms—survey, questionnaire, interview— interchangeably. Questionnaires and interviews are both types of surveys. When the respondent reads the survey and records her or his answers it is a questionnaire. In contrast, in an interview the researcher asks the questions and records the respondent’s answers. To say, “We performed a survey” when writing about research results requires further discussion to make your meaning clear. Questionnaire surveys and interview surveys each have advantages and disadvantages. As such, using such a generic term as “survey” in relation to research could leave a consumer of research with a distorted perception of exactly what research methodology was employed.

 A case of nonresponse bias in Hungary. In 1936, the Literary Digest erroneously predicted that Alf Landon would win the presidential election over Franklin Roosevelt because it had used a poor sampling frame. In 2002, another election prediction error, also due to sampling issues, occurred in Hungary. Data collected from voters by pollsters predicted that the Fidesz party would win the election and overturn the socialist government. The results of the poll were broadcast internationally through a number of media outlets. However, when the actual votes were tallied, the incumbent Socialist party had won. The error in predicting the outcome of this election appears to be due to nonresponse bias: according to Bodor (2012), Socialist party voters refused to tell pollsters how they had voted, making it appear as if the Fidesz party would win when in fact the Socialists prevailed.

B. Classroom Exercise  Selecting an appropriate method of survey administration. Referring back to the survey that your students created earlier, ask them to take a few minutes to identify who the population is they intend to study. Based on this population what mode of administration would bestyield the greatest response rate, and why? 16 CHAPTER 7: Survey Research

C. Web Resource  Using random digit dialing in the age of cell phones.http://www.ucsur.pitt.edu/files/misc/cell_phone_sampling.pdf

D. Additional References  Effects of nonresponse bias in research:

Armstrong, J. S., & Overton, T. S. (1977). Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14(3), 396–402.

Bodor, T. (2012). Hungary's "Black Sunday" of public opinion research: The anatomy of a failed election forecast. International Journal of Public Opinion Research, 24(4), 450–471.

PART V: Being a Smart Survey Consumer

A. Lecture/Discussion Topics  I was sugged! A few years ago I received a phone call from someone who wanted to include a bio about me in a particular desk reference. Curious, I spoke with the caller and answered her questions. After the interview the caller told me I could purchase the desk reference for somewhere in the ballpark of $500. I politely declined. For several weeks thereafter I received numerous emails continuing to solicit my business. Although it didn’t cause me any harm, this sugging attempt ended up to be a small nuisance for some time. In thinking about the act of sugging I recalled a similar incident that happened in the late 1990s to a group of friends and me while we were in Miami for a professional meeting. One day while walking to a restaurant we were approached by a man holding two boa constrictors. He asked if any of us wanted to hold one and one of my friends took him up on the offer. The man then proceeded to take a Polaroid photograph of my friend with the snake and then pressured him into buying the picture for $5. Although this story does not involve a survey, what it has in common with the first is the use of a gimmick in hopes of making a buck. Real surveys are never used as a gimmick. They are legitimate tools used for one thing, and one thing only: to collect self-report data.

B. Classroom Exercise  The business of paying people to participate in survey research. There are many companies that claim to pay people to take surveys. Are these surveys legitimate or just a scam? Have your students put their critical thinking skills to use by searching the web for sites that claim to pay survey participants and evaluate the sites’ claims. Have the students see if they can determine:

 who conducts the survey and for what purpose,

 whether respondents are selected based on probability sampling,

 what the method of survey administration is,

 if the survey questions are appropriate, and

 whether the results of the survey are interpreted correctly.

Ask students to share their findings with the class.

C. Web Resource  On sugging and frugging:

o http://ruthlessresearch.wordpress.com/2011/04/27/sugging-frugging-and-other-ways-to-write-a-bad-questionnaire/

o http://blogs.calgaryherald.com/2012/04/03/sugging-frugging-and-pugging/