Quantitative Research Design

Chapter 9

Statement of Intent

Chapter 9 is an important chapter because it introduces students to principles that are critical to evaluating the basic architecture of a quantitative study. The chapter includes material on various dimensions of research design and techniques of research control.

The first section of the chapter presents an overview of some fundamental features of research design in quantitative studies. The major dimensions of research design (e.g., whether there is an intervention, whether the study is cross-sectional or longitudinal) are described. An important point to reinforce here is that researchers usually have considerable latitude in designing studies, but their design decisions have implications for the quality of evidence their research yields. An early section also focuses on causality and criteria for making causal inferences. Cause-probing research plays an important role in evidence-based practice, and so it is important for students to think about how one can reasonably make inferences about phenomena that are causally linked to health outcomes. New to this edition, there is a section that describes evidence hierarchies (design options) for three types of cause-probing questions: Therapy, Prognosis, and Etiology.

The next section of the chapter explains differences between research designs that do or do not involve an intervention or treatment—the differences between experimental/quasi-experimental and nonexperimental designs. In this section, we introduce students to several important research design issues—most importantly, randomization and control groups. Although randomization is an important tool for causal inference, we note that many important research problems simply are not amenable to an experimental design. Moreover, RCTs (experimental studies) can be flawed, even with randomization.
The Chapter Supplement for this chapter that appears on thePoint website presents the classical Cook and Campbell diagrams for selected experimental and quasi-experimental designs.

The chapter also describes how quantitative studies can be designed to maximize the interpretability of study results by using strategies to control confounding variables. We discuss control over external and situational factors (including the topic of intervention fidelity) and control over intrinsic participant characteristics. This discussion is especially important because consumers must consider what design alternatives might have strengthened a study and how believable the study findings are, given the limitations of the design.

Chapter 9 introduces the critical concept of internal validity. The importance of considering whether the various threats to internal validity are relevant (in cause-probing research) cannot be overemphasized. Each time students read an article about a cause-probing question, they should be trained to ask themselves a series of questions relating to the various threats: Is it plausible that the outcome (the effect) could have preceded the independent variable (the cause)? Is it plausible that group differences on outcomes at the end of the study reflect preexisting differences in the groups? Is it plausible that different rates of attrition in the groups affected the outcomes? Is it plausible that something else was going on at the same time as the study that could have affected the outcomes? Each "no" answer to these questions strengthens confidence in the internal validity of the study. Statistical conclusion validity and external validity are also briefly discussed in the chapter. The textbook notes that researchers often have to balance design considerations for the potentially competing goals of internal and external validity.
External validity is often best addressed through replications, the findings from which can be integrated in a systematic review. Students should learn that rigid adherence to the evidence hierarchy as a means of evaluating evidence is not fruitful. As a way of illustrating this, we selected two studies as the end-of-chapter research examples in which there is actually more ambiguity about what the results mean for the RCT than for the quasi-experimental study. (Moreover, the quasi-experimental study involved a design that is often quite weak—a one-group pretest–posttest design.) By going through the questions in Box 9.1—with particular attention to issues of internal validity—the students will hopefully learn important lessons about making causal inferences.

Special Class Projects

1. The best way for students to learn about random assignment is to experience it firsthand. You can do this by randomly assigning your students to groups—and students are likely to enjoy the interactive aspect of this activity. Depending on your school's technological capabilities, you can get the random numbers during the class itself and display them on a screen, by connecting to the Internet and going to www.random.org. This site offers a lot of random number services for free, and you can explore which option is best for you. (If your classroom environment does not support a real-time Internet demonstration, this would have to be prepared ahead of time.) We will walk through two possibilities, noting that the site may change its presentations and the required steps.

   a. Randomizing using a list of names. To randomize in this manner, make a list of all your students' names in a word processing file, one name per line, that you can copy and paste into the randomizer. (Use first names and last initials to avoid any issue with privacy.) On the home page of Random.org, scroll down to a panel with the heading "Lists and Strings and Maps," and click on the option "List Randomizer." Copy and paste your list of student names into the designated field, and then click the "Randomize" button. The screen will show the list of names in random order, and the list will be numbered. If there are 40 students (for example) in your class, students 1 to 20 could be assigned to Group 1 and students 21 to 40 could be assigned to Group 2, or you could use odd/even numbers to make the assignments.

   b. Randomizing by number. Number students in some fashion, from 1 to the number of students (e.g., have students count off, and make sure they remember their number; or hand them a slip of paper with a number from 1 to "X" on it as they enter class). Then go to the home page of Random.org, scroll down to the panel with the heading "Numbers," and click on the option "Integer set generator." (This routine will generate unique random numbers; the routine "Integer generator" produces a sequence of random numbers in which duplicates are possible.) The next screen will look like this (or something similar):

Step 1: The Sets

Generate 1 set(s) with 50 unique random integer(s) in each.

Each integer should have a value between 1 and 100 (both inclusive; limits ± 1,000,000,000).

Assuming you had 100 students and you wanted to randomize half to an “intervention” group (Group 1), you would instruct the randomizer to generate 1 set, with 50 unique random integers, with integers selected from 1 to 100. When you click on “Get Sets,” you would get a list of 50 numbers such as the following:

Here are your sets:

 Set 1: 4, 5, 6, 7, 11, 12, 14, 16, 19, 20, 24, 25, 27, 31, 34, 37, 39, 40, 41, 42, 44, 45, 47, 48, 49, 52, 53, 55, 56, 63, 64, 66, 70, 71, 72, 75, 76, 79, 80, 81, 82, 83, 88, 90, 91, 93, 95, 97, 98, 100
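For instructors who prefer to prepare the assignments before class, the "Integer set generator" step can be reproduced offline with Python's standard library. This is only a sketch of the same idea: `random.sample` draws values without duplicates, just as the site's unique-integer routine does.

```python
import random

# Draw one set of 50 unique integers between 1 and 100 (both inclusive),
# mirroring the "Integer set generator" settings described above.
# random.sample never repeats a value, unlike repeated calls to randint.
group_1_numbers = sorted(random.sample(range(1, 101), 50))

print("Set 1:", ", ".join(str(n) for n in group_1_numbers))
```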

The 50 students with these numbers would be in Group 1, and the remaining 50 students would be in Group 2.

An important part of this classroom activity is showing students that randomization creates comparable groups (of course, this may backfire if your class size is very small!). Have the students gather together in their respective groups, and then demonstrate the similarity of the groups in terms of a few things that are easy to discern. For example, you could look at the groups' comparability in terms of number of males. Then, have students raise their hands if (for example) they were born in the months of January through June, if they have a brother, or if their last name begins with a letter between M and Z. With a large enough group of students, the number of students who raise their hands in the two groups should be similar for each characteristic. If your class size is small and the groups turn out to be dissimilar on one or more characteristics, you can explain that randomization works "in the long run" and that one reason for researchers to select a large sample is to give randomization the chance to do its job.

2. HDYF. If you did the "How Do You Feel" icebreaker with your class (Chapter 1), there are a few activities you could pursue relating to the content of this chapter.

   a. If you would like to do the randomization exercise described as Activity 1, you could use data from the HDYF questionnaire to examine the comparability of the two groups that are formed at random. For example, you could have students (divided into their groups) raise their hands if they selected a positive word in response to Question 1 and then compare how many students in each group had a positive word. Or, you could compute the average rating for responses to Question 2 in the two groups (i.e., how positive toward research, on average, were students in the two groups?).

   b. Have a classroom discussion about the design for your initial study of student attitudes toward research. Was the design experimental/quasi-experimental or nonexperimental? Is the study cause probing? Is the study longitudinal or cross-sectional? What, if any, comparisons were made or could be made?

   c. What if you collected posttest data about students' feelings toward research on the last day of class—then what type of design would this study be? What would be the "intervention"? If there are changes to students' attitudes over time, what might be some of the plausible alternative explanations for the changes? In other words, what are the threats to the internal validity of inferring that the course (the independent variable) affected students' attitudes, given the design that was used? How might the study be designed to strengthen causal inferences?

3. GCE. If you did the "Great Cookie Experiment" icebreaker with your class (Chapter 3), there are several related activities you could pursue, depending on how you structured the activity. The main issue is whether you (a) gave your students both cookies to eat, in a fixed order, as in the original GCE, (b) randomized students to two different groups, with each group eating a different cookie, or (c) randomized students to different orderings of eating cookies, with everyone eating both cookies. In this Activity 3, we offer suggestions for those who pursued option (a) and had all students eat both cookies in a fixed order. Activities 4 and 5 below are for those who opted for a different design, options (b) and (c), respectively.

   a. If you did the randomization exercise described in Activity 1, you could use data from the GCE questionnaire to examine the comparability of the two groups that are formed at random. For example, you could have students raise their hands if their rating for Question 1 (degree of hunger) was 50 or greater and then compare how many students in each group had their hands raised.
Or, you could compute the average rating for responses to Question 10 in the two groups (i.e., how important, on average, was eating healthy food in the two groups?).

   b. The Cookie Experiment was designed to assess the effect of using eggs versus egg substitute as an ingredient in cookies on students' perceptions about, and preference for, the cookie. Ask students to identify the design that was used in your version of the GCE. Is it experimental or quasi-experimental? Is the research cause probing? What type of question was being asked (Therapy, etc.), and where does the design for this type of question fall on the evidence hierarchy shown in Table 9.2 in the text (also available in the Forms section of thePoint website)? Have a discussion about the problem of carryover effects in this study design. Also discuss what effect students' hunger at the outset (Q1) might have on their perceptions of cookie qualities and their cookie preferences. Ask students to consider what other factors relating to cookies could confound the results (e.g., cookie size, baking time for the cookies) as a way of illustrating the importance of constancy of conditions.

   c. Have a discussion about blinding in this study. Ask students to consider how knowing which cookie had eggs or which had egg substitute might have affected their responses.

   d. Given the study design, what are the possible threats to the internal validity of the study? Have students discuss ways to redesign the study to address these threats.

4. GCE. If you did the "Great Cookie Experiment" icebreaker with your class (Chapter 3), Activity 4 offers suggestions for those who randomized students to two groups, with each group eating a different cookie—a Cookie A group and a Cookie B group.

   a. You have already randomized your class into two groups, so explore ways to assess the comparability of your two groups of students—those who got Cookie A and those who got Cookie B. (Also consider demonstrating how you achieved the randomization.) Divide the students into their two groups, and then use data from the GCE questionnaire to examine the groups' comparability. For example, you could have students raise their hands if their rating for Question 1 (degree of hunger) was 50 or greater and then compare how many students in each group had their hands raised. Or, you could compute the average rating for responses to Question 10 in the two groups (i.e., how important, on average, was eating healthy food in the two groups?).

   b. The Cookie Experiment was designed to assess the effect of using eggs versus egg substitute as an ingredient in cookies on students' perceptions about, and preference for, the cookie. Ask students to identify which design was used in your version of the GCE. Is it experimental or quasi-experimental? Is the research cause probing? What type of question was being asked (Therapy, etc.), and where does the design for this type of question fall on the evidence hierarchy shown in Table 9.2? Also discuss what effect students' hunger at the outset (Q1) might have on their perceptions of cookie qualities and their cookie preferences. Ask students to consider what other factors relating to cookies could confound the results (e.g., cookie size, baking time for the cookies) as a way of illustrating the importance of constancy of conditions.

   c. Have a discussion about blinding in this study. Ask students to consider how knowing which cookie had eggs or which had egg substitute might have affected their responses.

   d. Have a discussion about whether a crossover design would have been appropriate in this study. Would there likely be carryover effects?

   e. Given the study design, what are the possible threats to the internal validity of the study?

5. GCE. If you did the "Great Cookie Experiment" icebreaker with your class (Chapter 3), Activity 5 offers suggestions for those who randomized students to different orderings of eating the two cookies—a Cookie A First group and a Cookie B First group.

   a. You have already randomized your class into two groups, so explore ways to assess the comparability of your two groups of students—those who got Cookie A first and those who got Cookie B first. (Also consider demonstrating how you achieved the randomization.) Divide the students into their two groups, and then use data from the GCE questionnaire to examine the groups' comparability. For example, you could have students raise their hands if their rating for Question 1 (degree of hunger) was 50 or greater and then compare how many students in each group had their hands raised. Or, you could compute the average rating for responses to Question 10 in the two groups (i.e., how important, on average, was eating healthy food in the two groups?).

   b. The Cookie Experiment was designed to assess the effect of using eggs versus egg substitute as an ingredient in cookies on students' perceptions about, and preference for, the cookie. Ask students to identify which design was used in your version of the GCE. Is it experimental or quasi-experimental? Is the research cause probing? What type of question was being asked (Therapy, etc.), and where does the design for this type of question fall on the evidence hierarchy shown in Table 9.2? Also discuss what effect students' hunger at the outset (Q1) might have on their perceptions of cookie qualities and their cookie preferences. Ask students to consider what other factors relating to cookies could confound the results (e.g., cookie size, baking time for the cookies) as a way of illustrating the importance of constancy of conditions.

   c. Have a discussion about blinding in this study.
Ask students to consider how knowing which cookie had eggs or which had egg substitute might have affected their responses.

   d. Have a discussion about whether a crossover design was appropriate in this study. Is there a possibility of carryover effects? Have them consider how they might assess for the possibility of such an effect. Ask students to consider what the possible effects on power (statistical conclusion validity) might be if you had randomized students to different cookies, rather than to different orderings of the cookies.

   e. Given the study design, what are the possible threats to the internal validity of the study?

6. A nurse researcher found a relationship between teenagers' level of knowledge about birth control and their level of sexual activity: Teenagers with higher levels of sexual activity knew more about birth control than teenagers with less sexual activity. Have three teams of students make an argument for three different interpretations: sexual activity as the independent variable, knowledge level as the independent variable, and a third variable (determined by the team) that is the "cause" of the other two variables. Have each team suggest design elements that could be used in a new study to better test their hypothesized causal sequence.

7. Suppose the class wanted to study the coping strategies of AIDS patients at different points in the progress of the disease. Have one team of students design a cross-sectional study to research this question, describing how subjects would be selected. Have another team design a longitudinal study to research the same problem. Have each team identify the strengths and weaknesses of the other team's approach.

8. Suppose the class was interested in testing the effect of a special high-fiber diet on cardiovascular risk factors (e.g., cholesterol level) in adults with a family history of cardiovascular disease. Have the students, as a class, recommend a research design for this problem, indicating what confounding variables they would need to control and how they would control them. Ask them to identify the major threats to the internal validity of the design.

9. Use the "Self-Test" PowerPoint slides for Chapter 9 as a class activity, to give students an opportunity to apply what they have learned about quantitative research design to some research situations. These slides are not intended to be used to evaluate student performance, but rather to reinforce learning and provide an opportunity for discussion if there is any confusion.
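The comparability demonstration in Activity 1 can also be simulated outside of class. The sketch below uses only invented values for illustration: it gives each hypothetical student a random birth month, shuffles the roster, splits it in half, and counts how many students in each half were born in January through June, the kind of balance check described above.

```python
import random

random.seed(1)  # fixed seed so the demonstration is reproducible in class

N = 100  # hypothetical class size (invented for illustration)

# Give each hypothetical student a random birth month (1-12).
birth_months = [random.randint(1, 12) for _ in range(N)]

# Random assignment: shuffle the roster and split it in half.
roster = list(range(N))
random.shuffle(roster)
group_1, group_2 = roster[: N // 2], roster[N // 2 :]

# Count January-June birthdays in each group; with a class this large,
# the two counts should usually be similar.
jan_june_1 = sum(1 for i in group_1 if birth_months[i] <= 6)
jan_june_2 = sum(1 for i in group_2 if birth_months[i] <= 6)
print(f"Group 1: {jan_june_1} of {len(group_1)}; "
      f"Group 2: {jan_june_2} of {len(group_2)}")
```

Re-running the sketch with a small N (say, 10) and several different seeds is an easy way to show students that randomization balances groups only "in the long run."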

TEST QUESTIONS AND ANSWERS FOR EVALUATING STUDENT LEARNING

True/False Questions

1. Most causes are not deterministic but rather affect the probability that an effect will occur. (True)
2. The counterfactual concept involves what would have happened to the same people simultaneously exposed and not exposed to a causal factor. (True)
3. One criterion for causality is randomization. (False)
4. In an experiment, the researcher manipulates (deliberately varies) the independent variable. (True)
5. A control group in an experiment is always the group that gets "usual care." (False)
6. A potential limitation of a crossover design is the risk of carryover effects from one condition to the next. (True)
7. Randomization is not necessary with a crossover design because there is only one group of participants. (False)
8. Matching is the most effective method for equalizing groups of participants who are being compared in a study. (False)
9. The one-group pretest–posttest design is the strongest possible quasi-experimental design. (False)
10. In a pretest–posttest design, a researcher collects data from subjects at least twice. (True)
11. Experiments are always prospective. (True)
12. Quasi-experimental and experimental designs have a common feature: intervention. (True)
13. The purpose of both experimental and correlational research is to study relationships between variables of interest. (True)
14. A researcher would choose a nonexperimental design if ethical constraints prevented manipulation of the independent variable. (True)
15. The term cohort design is another term for a retrospective design. (False)
16. A case–control design is typically used in prospective studies. (False)
17. A major weakness of correlational research is the risk of making faulty inferences from the results. (True)
18. A potential problem in an experimental study is the high risk of self-selection bias. (False)
19. A study that focused on development among preterm infants would ideally use a cross-sectional design. (False)
20. Retrospective designs are stronger in elucidating causal relationships than are prospective designs. (False)
21. A heterogeneous sample may negatively affect a study's internal validity but may enhance its external validity. (True)
22. The threat of mortality can arise if there is differential attrition from groups. (True)
23. The threat of maturation is one that applies primarily to studies involving children. (False)
24. Intervention fidelity is a particularly important concept in correlational research. (False)
25. One way to weaken the statistical conclusion validity of a study is to use too large a sample. (False)

Application Questions

1. Below is a description of a fictitious study. Read the description and then answer the factual questions that follow:

Brusser and Janosy wanted to test the effectiveness of a new relaxation/biofeedback intervention on menopausal symptoms. They invited women who presented themselves in an outpatient clinic with complaints of severe hot flashes to participate in the study of the experimental treatment. These 30 women were asked to record, every day for 2 weeks before their treatment, the frequency and duration of their hot flashes. The intervention involved five 1-hour sessions over a period of a week. Then, for the 2 weeks after the treatment, the women were asked to record their hot flashes again every day. At the end of the study, Brusser and Janosy found that, on average, both the frequency and average duration of the hot flashes had been significantly reduced in their sample of 30 women. They concluded that their intervention was an effective alternative to estrogen replacement therapy in treating menopausal hot flashes.

Short Answer Questions

a. What is the independent variable in this study? (Exposure vs. nonexposure to the relaxation/biofeedback intervention)
b. What is the dependent variable in this study? (Frequency and duration of hot flashes)
c. Was random assignment used in this study? (No)
d. Is the design experimental, quasi-experimental, or nonexperimental? (Quasi-experimental)
e. What is the specific name of the design used in this study? (Time series)
f. Was blinding used in this study? (No, both participants and research personnel knew about the women's exposure to the treatment.)
g. Was selection a possible threat to the internal validity of this study? (No, there was no self-selection; women were compared to themselves.)
h. Was history a possible threat to the internal validity of this study? (Yes, it is possible that some or all of the women had an experience during the course of the study that was the "real" cause of reduced hot flashes.)
i. Was mortality a possible threat to the internal validity of the study? (No, the description implies that 30 women remained in the study throughout the 5-week study period—i.e., no attrition.)

Essay Questions

a. Discuss ways in which this study achieved or failed to achieve the criteria for making causal inferences.
b. Propose a more powerful research design to address the same research question. Describe various components of the design in detail.

2. Below is a description of another fictitious study. Read the description and then answer the factual questions that follow:

Brady and Dakes hypothesized that the absence of socioemotional supports among the elderly results in a high level of chronic health problems and low morale. They tested this hypothesis by interviewing a sample of 250 residents of one community who were aged 65 years and older. The participants were randomly selected from a list of town residents. The researchers asked a series of questions about the availability of socioemotional supports (e.g., whether the participants lived with any kin, whether they had any living children who resided within 30 minutes, etc.). Based on responses to the various questions on social support, participants were classified into one of three groups: low social support, moderate social support, and high social support. In a 6-month follow-up interview, Brady and Dakes collected information from 214 participants about the frequency and intensity of the respondents' illnesses in the preceding 6 months, their hospitalization record, and their overall satisfaction with life. The data analysis revealed that participants in the low-support group had significantly more health problems and hospitalizations and lower life satisfaction ratings than those in the other two groups. The researchers concluded that the availability of social supports caused better physical and mental adjustment to old age.

Short Answer Questions

a. What is the independent variable in this study? (Amount of social support)
b. What is the dependent variable in this study? (Health problems, hospitalizations, satisfaction with life)
c. Is the design experimental, quasi-experimental, or nonexperimental? (Nonexperimental)
d. Is the study design cross-sectional or longitudinal? (Longitudinal)
e. Is the study design prospective or retrospective? (Prospective; social supports were assessed at one point in time, and then health outcomes were assessed 6 months later.)
f. Was random assignment used to control confounding variables? (No, groups were not formed at random.)
g. Was matching used to control confounding variables? (No, participants in the three social support groups were not matched.)
h. Was there attrition in this study? (Yes, 36 participants dropped out of the study in the 6 months between data collection points.)
i. Was selection a possible threat to the internal validity of this study? (Yes, the three groups being compared were not formed at random and could have been different from each other in many respects other than just amount of social support.)
j. Is the researcher justified in concluding that lower levels of social support cause more health problems and lower life satisfaction? (No, the design does not justify a causal inference; there are other explanations for the results.)

Essay Questions

a. Discuss ways in which this study achieved or failed to achieve specific criteria for making causal inferences.
b. Critique Brady and Dakes' decisions regarding the scheduling of data collection in this study.
c. Discuss possible alternative interpretations of the study findings.
d. Discuss, to the extent possible based on this brief summary, the strengths and weaknesses of this study design with respect to internal validity and external validity. What are some of the key threats to the validity of this study?
