Research in Informal Settings: Some Reflections on Designs and Methodology
John J. Koran, Jr., Professor and Curator, Science Education and Museum Studies; Associate Dean, The Graduate School
Jim Ellis, Research Assistant, Museum Studies
Florida Museum of Natural History, University of Florida, Gainesville, FL 32611

Research in informal settings can be experimental or naturalistic or a combination of methods, all of which can yield rich data. Research designs are selected to explore specific problems; however, even the most carefully thought-out designs and procedures have shortcomings. Frequently, research studies contribute to summative or formative evaluation objectives. Not all evaluation studies, however, adhere to the rigorous criteria required for research studies, and consequently they may or may not be strictly defined as "research." This paper focuses on experimental designs and the review of four experimental studies, considering threats to internal and external validity. Internal validity deals with whether the experimental treatments actually produce the observed effects. External validity concerns generalizability of study findings to other settings, exhibits, and subjects. Threats to validity include history, maturation, testing, instrumentation, regression, selection, mortality, and interactions. The studies selected for this analysis were all variations of experimental designs. In each case, the study could have profited from stronger, more focused research designs and methodology, as well as from exit interviews of the subjects and from the collection of other types of naturalistic data. It is the authors' opinion that both experimental and naturalistic methods can often be used together to enrich the data base from which to make inferences and contribute to knowledge about learning in informal settings.

INTRODUCTION

Research in informal settings can be experimental or naturalistic or a combination of methods, all of which can yield rich data. Regardless of the methods used, research designs are selected to explore specific issues and problems. However, even the most carefully thought-out designs and procedures can have some shortcomings. This is especially true of field research, where control of variables is often difficult to achieve. While research studies can contribute to evaluation objectives, not all evaluation studies necessarily adhere to the rigorous criteria required for research studies. Consequently, such studies may or may not be strictly defined as "research." This paper will focus on experimental research designs and will critique four published studies.

Experimental Designs: Some Basics

Cook and Campbell (1979) suggest that the word "experiment" implies testing, causal relationships, deliberate manipulation, and inference. For clarity, we will use the term to signify a treatment, an outcome measure, some form of randomization of subjects, and a method for comparison to determine the effects of the treatment. A treatment can involve any number of possible manipulations; however, for museums and other informal learning settings,¹ treatments frequently involve having subjects (visitors, students) view or participate in some exhibit or program that includes a planned and/or sequenced set of informational or instructional materials.

¹ Informal learning settings can include zoological parks, botanical gardens, nature centers, national and state parks, aquaria, school-based nature trails, most field trip locations, and even many school laboratory activities.

Outcome indicators or instruments generally take the form of questionnaires, interviews, or other ways of measuring knowledge, comprehension, interests, attitudes, etc. These instruments often measure combined outcomes (e.g., knowledge and attitude), although interactions between cognitive and affective questions can occur, making the results difficult to interpret. Other forms of measurement are also used, such as unobtrusive observation of visitors and coding of pertinent visitor behavior.
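As a small, purely illustrative sketch (ours, not the authors'; the item names, answer key, and responses are hypothetical), a post-visit questionnaire can be scored so that cognitive and affective items yield separate subscale scores rather than one combined score, which keeps knowledge and attitude outcomes from being confounded:

    # Illustrative sketch only: scoring a short post-visit questionnaire so that
    # knowledge (cognitive) and attitude (affective) items form separate subscales.
    # The item names, answer key, and responses below are hypothetical.
    KNOWLEDGE_KEY = {"q1": "b", "q2": "d", "q3": "a"}   # correct multiple-choice options
    ATTITUDE_ITEMS = ["q4", "q5"]                       # 1-5 agreement (Likert-type) items

    def score_visitor(responses):
        """Return separate knowledge and attitude scores for one visitor."""
        knowledge = sum(responses.get(item) == key for item, key in KNOWLEDGE_KEY.items())
        attitude = sum(responses.get(item, 0) for item in ATTITUDE_ITEMS) / len(ATTITUDE_ITEMS)
        return {"knowledge": knowledge, "attitude": attitude}

    # One hypothetical visitor's answers after viewing an exhibit:
    print(score_visitor({"q1": "b", "q2": "a", "q3": "a", "q4": 4, "q5": 5}))
    # -> {'knowledge': 2, 'attitude': 4.5}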
Depending on the problem they are studying, researchers often assign their subjects to several treatment groups and/or a control group. In order to enhance the ability of the researcher to make an inference about the change observed as a result of a particular treatment, a randomization technique for subject selection and assignment should be utilized. Random selection from a given population assures the researcher that the subjects are representative of that population. Random assignment of subjects from a selected population to the treatment and control groups ensures that the study groups are equivalent in all factors other than those associated with the treatments (Cook and Campbell, 1979; Smith and Glass, 1987).

Researchers should be aware that the unit of analysis is dependent on the method by which subjects are selected and assigned. If individual visitors are randomly assigned to treatments and control, the statistical unit to be analyzed is based on the number of visitors in the sample; however, if one assigns groups of visitors (e.g., adult education classes), then the unit of analysis is based on the number of groups and not individuals. The latter would require a considerably larger sample size, since small numbers of intact groups generally must show robust treatment effects before the effects are measurable. Yet in some situations, class assignment may be easier than splitting up classes.

Three Basic Research Designs

Research designs vary according to researcher preferences and the types of questions asked. Campbell and Stanley (1973) have written one of the best introductory texts on experimental research designs. Table 1 outlines three of the most basic designs.

Design 1 is best suited when the researcher has a large or relatively homogeneous sample and has reason to believe that randomization will be effective in equating the treatment and control groups; hence a pretest to determine whether randomization was effective is not necessary. This type of design would be suitable for many of the museum and informal settings that include the casual visitor as well as students in planned group visits. This design can be used with class (field trip) groups by randomly assigning each student in a particular class to a treatment or control group. The treatment group views an exhibit or otherwise participates in the treatment and then receives the post-test. The control group is first tested with the same post-test and then allowed to view the exhibit or receive the treatment; through this modified design, the control group can benefit from the exhibit or treatment while still serving as a control.

Designs 2 and 3 are most suitable when the researcher has greater control of the subjects or groups, because at least two testing periods are needed for each group. Laboratory-type settings, classrooms, school groups, and other planned groupings associated with museums and informal settings might lend themselves to this type of design.
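To make the mechanics concrete, the following minimal sketch (ours, not the authors'; the class size, random seed, and all scores are hypothetical placeholders) shows how individual students from a field-trip class might be randomly assigned to treatment and control groups, and how pretest means and standard deviations could be compared to check that randomization was effective, as in Design 2:

    # Illustrative sketch only: random assignment of individual visitors/students
    # to treatment and control groups, with a simple check of pretest equivalence.
    import random
    import statistics

    def assign_randomly(subjects, seed=None):
        """Shuffle the subjects and split them into treatment and control halves."""
        rng = random.Random(seed)
        pool = list(subjects)
        rng.shuffle(pool)
        half = len(pool) // 2
        return pool[:half], pool[half:]                 # (treatment, control)

    def summarize(scores):
        """Mean and standard deviation of one group's test scores."""
        return statistics.mean(scores), statistics.stdev(scores)

    # A hypothetical field-trip class of 30 students; each student (not the class
    # as a whole) is the unit of assignment and analysis.
    students = [f"student_{i}" for i in range(30)]
    treatment_group, control_group = assign_randomly(students, seed=42)

    # In Design 2, both groups take a pretest before the treatment group views the
    # exhibit; similar group means and standard deviations suggest that
    # randomization produced equivalent groups. (Placeholder scores below.)
    pretest = {s: random.gauss(50, 10) for s in students}
    t_mean, t_sd = summarize([pretest[s] for s in treatment_group])
    c_mean, c_sd = summarize([pretest[s] for s in control_group])
    print(f"pretest  treatment {t_mean:.1f} (sd {t_sd:.1f})   control {c_mean:.1f} (sd {c_sd:.1f})")

    # After the treatment group participates, the same post-test is given to both
    # groups and the group means are compared (e.g., with an independent-samples
    # t-test) to estimate the treatment effect.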
Again, as these are experimental designs, random selection and assignment are used to offset both design threats and sources of bias in selecting subjects (Table 2). A pretest, given to ascertain if randomization was indeed effective, provides data on whether the group means and standard deviations were equal at the outset. Design 3 provides the researcher with the opportunity to study the effects of pretesting on the treatment (interactions of the treatment and test) and the effects of the pretest on the post-test when no treatment intervenes.

Table 1. Experimental Research Designs for Informal Settings

Design                          Group            Subject Assignment   Pretest   Treatment   Post-test
Design 1: Post-test Only        treatment group  random               NO        YES         YES
                                control group    random               NO        NO          YES
Design 2: Pretest/Post-test     treatment group  random               YES       YES         YES
                                control group    random               YES       NO          YES
Design 3: Solomon four-group*   treatment group  random               YES       YES         YES
                                control group    random               YES       NO          YES
                                treatment group  random               NO        YES         YES
                                control group    random               NO        NO          YES

*This design is particularly strong because it controls for most effects that may be caused by the test itself.
Note: YES indicates that a test or treatment is done in the sequence presented; NO indicates that the particular aspect is not active in the design. (Adapted from Campbell and Stanley, 1973.)

Table 2. Design Threats to Validity

Design                                           History   Maturation   Testing   Instrumentation   Regression   Mortality   Interaction
Design 1: Post-test Only with control group      +         +            +         +                 +            +           +
Design 2: Pretest/Post-test with control group   +         +            +         +                 +            +           +
Design 3: Solomon four-group                     +         +            +         +                 +            +           +

Note: A plus (+) sign indicates that the threat is accounted for through random assignment in the design. A question mark (?) indicates that a threat to validity may not be fully accounted for by random assignment. (Adapted from Campbell and Stanley, 1973.)

As illustrated in Table 2, a number of possible design threats or factors can hinder the interpretation of the results. Campbell and Stanley (1973), in their discussion of experimental and quasi-experimental designs for research, refer to these threats to validity as history (specific events occurring during the study), maturation (growth or change in the subjects during the study), testing (effects of the test on the subjects of the study), instrumentation (changes in rating scales or instruments during the study), regression (scores moving toward the mean regardless of treatment because of the groups' extreme or unique nature), selection (biases based on placement of subjects