Florida State University Libraries

Electronic Theses, Treatises and Dissertations
The Graduate School

2006

Effects of Practice Sequence Variations on the Transfer of Complex Cognitive Skills Practiced in Computer-Based Instruction

David W. Nelson


THE FLORIDA STATE UNIVERSITY

COLLEGE OF EDUCATION

EFFECTS OF PRACTICE SEQUENCE VARIATIONS ON THE TRANSFER OF

COMPLEX COGNITIVE SKILLS PRACTICED IN COMPUTER-BASED

INSTRUCTION

By

DAVID W. NELSON

A Dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Degree Awarded: Spring Semester, 2006

The members of the committee approved the dissertation of David W. Nelson defended on March 3, 2006.

Robert K. Branson, Professor Directing Dissertation

Dale W. Lick, Outside Committee Member

A. Aubteen Darabi, Committee Member

Gary Peterson, Committee Member

Approved:

Frances Prevatt, Chair, Department of Educational Psychology and Learning Systems

The Office of Graduate Studies has verified and approved the above-named committee members.


To my mother, Anna Mae Nelson, and my father, Willard Harry Nelson, who taught, showed, encouraged, and empowered me to learn and perform, who allowed me to find my own way in the world and continually encouraged me to pursue the things that fascinated me. To my sons, Erik Josef Nelson and Garrett Christopher Nelson, who inspired me to rise above my own expectations. To my brother Ben Albert Nelson, who modeled superior performance and academic excellence.


TABLE OF CONTENTS

List of Tables ...... vi

List of Figures...... viii

Abstract...... ix

CHAPTER 1 INTRODUCTION ...... 1

Statement of the Problem ...... 1
Context of the Problem ...... 2
Significance of the Study ...... 8
Theory and Rationale ...... 9
Purpose of the Study ...... 10
Summary ...... 12

CHAPTER 2 REVIEW OF LITERATURE ...... 13

Conflation of Terms ...... 15
Contextual Interference Effect and the Transfer Paradox ...... 16
Contextual Interference and Complex Cognitive Skills ...... 20
Cognitive Load Theory: Schema Development and Rule Automation ...... 22
Summary ...... 24

CHAPTER 3 METHOD ...... 26

Participants ...... 29
Instructional Materials ...... 29
Variables ...... 30
Instruments ...... 32
Procedure ...... 34
Research Design and Analysis ...... 40


CHAPTER 4 RESULTS...... 42

Tests of Assumptions ...... 42
Test Effect ...... 49
Item and Test Analyses ...... 49
Descriptions of Groups ...... 51
Tests of Hypotheses ...... 52
Secondary Analyses ...... 52
Tertiary Analyses ...... 56

CHAPTER 5 DISCUSSION...... 60

Summary of Results in Context ...... 60
Interpretation of Results ...... 62
Outcomes of Gainers ...... 67
Implications for Instructional Design and Learning Theory ...... 69
Implications for Further Research ...... 72

APPENDIX A INSTITUTIONAL REVIEW BOARD DOCUMENTATION...... 75

APPENDIX B PRETEST ...... 80

APPENDIX C TRANSFER TEST ...... 82

APPENDIX D SCALE AND ITEM ANALYSIS RESULTS...... 85

REFERENCES ...... 93

BIOGRAPHICAL SKETCH ...... 97


LIST OF TABLES

Table 2.1 Practice Trials, Transfer Measures, and Effect Sizes in Two Studies of Complex Cognitive Skills...... 25

Table 3.1 Relations among Experimental Treatments and Expected Transfer Performance ...... 28

Table 4.1 Kolmogorov-Smirnov Test of Normality Within Experimental Groups...... 43

Table 4.2 Levene’s Test of Homogeneity of Variance Across Experimental Groups .... 44

Table 4.3 Means, Standard Deviations, and Numbers of Scores Analyzed for Experimental and Control Groups on the Pretest, Posttest, Transfer Test, and the Number of Practice Trials...... 51

Table 4.4 One-Way Analysis of Variance Summary for Transfer Test Scores...... 52

Table 4.5 Mean Mental Effort Invested During Practice...... 53

Table 4.6 Descriptive Statistics of Time Spent in Initial Instruction by Experimental Treatment ...... 56

Table 4.7 Intercorrelations and Probabilities of Data for All Dependent Measures ...... 57

Table 4.8 Adjusted Means and Standard Error for Gainers by Experimental Group on Performance and Transfer Controlling for Prior Knowledge ...... 59

Table D.1 Pretest Statistics for Items Sorted by Task Class...... 86

Table D.2 Pretest Subscale 1 Items: Words Between Subject and Verb ...... 87

Table D.3 Pretest Subscale 2 Items: Verb Before Subject ...... 87

Table D.4 Pretest Subscale 3 Items: Compound Subject...... 88

Table D.5 Pretest Subscale 4 Items – Indefinite Pronouns...... 88


Table D.6 Posttest Statistics for Items Sorted by Task Class ...... 89

Table D.7 Posttest Subscale 1 Items – Words Between Subject and Verb ...... 90

Table D.8 Posttest Subscale 2 Items – Verb Before Subject...... 90

Table D.9 Posttest Subscale 3 Items – Compound Subjects...... 91

Table D.10 Posttest Subscale 4 Items – Indefinite Pronouns ...... 91

Table D.11 Transfer Test Items Sorted by Task Class ...... 92


LIST OF FIGURES

Figure 1.1. Proposed causal factors associated with the effects of practice sequence on transfer task performance...... 14

Figure 3.1. Graphic representation of sequence of components of the CBI by treatment group...... 31

Figure 3.2. Hypothetical screen display after a learner has selected a correct choice in the first practice session...... 37

Figure 4.1. Linearity of covariance of posttest with pretest for experimental and control groups...... 45

Figure 4.2. Linearity of covariance of transfer test with pretest for experimental and control groups...... 46

Figure 4.3. Linearity of covariance of posttest with practice trials for experimental groups...... 47

Figure 4.4. Linearity of covariance of transfer test with practice trials for experimental groups...... 48

Figure 4.5. Adjusted mean performance of gainers by group on posttest (A) and transfer test (B) controlling for prior knowledge...... 59

Figure 5.1. Decision process for the simple application of the compound subject sub-rule...... 70

Figure 5.2. Decision process for the conditional reasoning of the compound subject sub-rule as instructed...... 71


ABSTRACT

The sequence of instructional practice in computer-based instruction for the acquisition and transfer of complex cognitive skills has been the focus of several studies. The present study continued that line of research. One factor of practice sequence involves the contextual interference effect, in which a random practice sequence results in greater time and effort during instruction and equal or lower criterion performance, but greater transfer performance than blocked practice composed of contiguous subtasks. The contextual interference effect has been studied extensively with motor skills and has been less thoroughly studied with complex cognitive skills. Researchers have posited two rationales for the effect. One rationale is that random order practice elicits deeper processing of information related to the task that results in greater distinctiveness of the task classes from one another. The other rationale is that random order practice causes the participant to repeatedly forget the solution path for a given task class and necessarily reactivate the rule schema to establish a new memory trace. By either explanation, over a large number of trials, learners accumulate numerous retrieval cues for transfer to novel variants of the skill. For complex cognitive skills, both rationales involve the paired executive cognitive processes of interpreting cues from the problem context and deciding which rule to apply, which are favored by the random practice condition. Studies that have found the effect with complex cognitive skills are limited in at least two respects: they have not controlled for time spent in practice and they have examined variations in practice in strictly block-ordered or random-ordered schedules. Regarding the former limitation, researchers have controlled for the number of trials performed during practice but not for the time spent in practice, which was consistently greater for the randomly ordered practice condition. 
Regarding the latter limitation, empirical studies on complex cognitive skills have examined practice sequence in the limited sense of a dichotomy: block sequence or random sequence. They have not examined sequences that adjust practice task variability during instruction in a manner designed to respond to the individual’s dynamic skill acquisition process. The experimental design of the present study controlled for time in practice rather than the number of practice trials and focused on the effect of a shifted sequence of sub-skill practice, beginning in block order and shifting to random order (the block-random condition), on practice at making the subjects and verbs of a sentence agree. In addition to the block, random, and block-random conditions, the design also included a random-block condition to experimentally control for the possibility that practice involving both block and random conditions, regardless of order, could account for improved performance. It also included a control group that performed unrelated learning tasks between the pretest and posttest to statistically control for a potential test effect. Based on the results of previous studies, the block group in the current study was expected to perform equally to or better than the random group on the posttest, and the random group better than the block group on the transfer test. The more novel hypothesis predicted that the block-random condition would result in greater transfer than any of the other conditions because the earlier block-order practice would limit cognitive load when learners began to automate recurrent aspects of the whole skill and of sub-skills, whereas the later random practice would provide sufficient practice at deciding which of the rules to apply.
The random-block group was expected to underperform all other groups on both the posttest and the transfer test because the group’s members would experience excessive cognitive load and interference from the random-order sequence in the early stage of practice, which would inhibit the development of their schemas for subject-verb principles. Ninety-two high school seniors completed all of the activities and tests. The activities consisted of computer-based training and practice on the grammar principle of subject-verb agreement that trained four sub-rules of the principle. The tests included a 40-item pretest to assess prior knowledge of the skill, a posttest, and a transfer test consisting of a document that the participants edited to make the subjects and verbs of sentences agree. No test effect was found between the pretest and posttest, and no significant differences were found among treatment groups on either the posttest or the transfer test, whether analyzed with or without pretest scores used as a covariate. The number of practice trials performed in the fixed period was not significantly different among groups, nor did it correlate significantly with scores on either the posttest or the transfer test. Subtests of the four sub-rules showed no significant differences by group. However, in a post hoc analysis that examined the scores of 23 “gainers” in the experimental groups, those who made substantial learning gains from the pretest to the posttest, gainers in the block-random condition significantly outperformed gainers in the random condition on the posttest and significantly outperformed gainers in the block condition on the transfer test, as the hypotheses predicted. The results have implications for the design of computer-based practice and feedback and for future research on practice sequence. Engaging the learner in cognitively processing feedback has implications for designing practice and feedback protocols in computer-based instruction. For example, a feedback processing protocol should be employed that requires the learner to read the feedback and, using its information, perform some action correctly before continuing to the next practice problem. Otherwise, some learners attend only to the knowledge of results, that is, whether or not they have chosen the right answer, before they continue to the next problem. Implications for future research on the contextual interference effect with regard to the acquisition and transfer of complex cognitive skills include variations of the current study with modified practice and feedback protocols.
In a different research direction, the results of this study point to interactions between learners’ motivational attributes and the adaptive qualities of computer-based instruction as a way to account for motivational factors that mediate acquisition and transfer.


CHAPTER 1

INTRODUCTION

This chapter presents the general problem of designing and developing computer-based instruction (CBI) to improve the transfer of complex cognitive skills, acquired through practice, to the performance of novel related skills. The sequence of practice items is the specific design feature of interest. In Chapter 1 the problem is stated, the context of the problem is described, and the significance of the present study is asserted. The chapter provides a brief theory-based rationale for examining variations in the sequence of practice that affect learners’ transfer of the knowledge and skills acquired from instruction to novel problems. The specific purpose of the present study is then described, concluding with statements of the research questions and the hypotheses of the study.

Statement of the Problem

The purpose of instruction is to prepare learners to perform later in a different environment, either on a job or in more advanced education or training. This study addresses specific instructional strategies intended to improve the transfer of complex cognitive skills to subsequent performance in a learning environment or on the job. In the design and development of instruction for complex cognitive skills, the designer must consider the multiplicity of variations of the skills the learners will apply in their performance environments. Van Merriënboer (1997) describes complex cognitive skills as those comprising “constituent skills,” at least some of which “involve conscious processing,” and the majority of which are “in the cognitive domain” (p. 312). The complexity of interactions among the constituent skills challenges an instructional designer’s ability to design instruction to train for every variant of each constituent skill. From a practical standpoint, the multiplicity of potential applications of the trained skill generally precludes training on each variation of the problems likely to occur in the performance environment (Merrill, 1987; Van Merriënboer, 1997). Instead of training for every variation, the designer must plan practice activities to optimize the transfer of skills to the novel problems that learners will solve in the diverse situations they encounter in the workplace or in subsequent instruction. Thus, the efficient design and development of instruction requires instructional strategies that promote the flexible application of those skills to the diverse novel situations in which they are to be performed.

One such strategic consideration is the focus of this study: variation in the sequence of practice activities, particularly in CBI. Several theories and findings of empirical research informed the investigation of the question “How should practice be sequenced in CBI to result in the greatest transfer of learning?” Later in this chapter, the section Theory and Rationale describes some of the salient theories and briefly illustrates research findings that suggest instructional strategies for the sequence of practice. Chapter 2 presents some of those research findings in greater detail. The present study examined the problem of how to sequence practice in CBI for the complex cognitive skills of grammar rule application so as to better promote transfer of the skills acquired during practice to the performance of a novel related skill.

Context of the Problem

This study examined a specific problem that involves rule automation and declarative knowledge in the application of an English grammar principle. As such, it is one study in the growing body of literature about the effects of practice sequence on the acquisition and transfer of complex cognitive skills.

Automatic Processing

With sufficient repetition, consistent practice of rule application leads to automatic processing in the retrieval and application of the rule, requiring little if any controlled processing or mental effort (Schneider & Shiffrin, 1977). The automatization of rules (e.g., if x, then y, else z) or procedures (first do a, then do b, etc.), gained from repeated practice, results in relatively stable and reliable, although relatively fixed and inflexible, cognitive “productions” (Anderson, 1999). These productions are retrieved from long-term memory (LTM) and activated as single chunks or as unitary elements in working memory (WM). For the purposes of the present study, rule automation and procedure automation are the cognitive processes of compiling declarative or procedural knowledge, the products of which are denoted collectively as productions. Rule automation is defined by van Merriënboer (1997) as “A category of learning processes … which lead[s] to the acquisition of highly domain specific rules or productions that directly control the performance of recurrent aspects of a complex cognitive skill” (p. 320). Automation is accomplished by repeated practice and feedback (Schneider & Shiffrin, 1977).

The application of a rule or procedure, after it has become automated, is relatively inflexible (Shiffrin & Schneider, 1977). After repeated practice of rules or procedures and their subsequent automation as productions, their execution is typically quicker than the execution of algorithmic structures retrieved from long-term memory as abstract rules (e.g., Strayer & Kramer, 1994). However, the trade-off for such speed is the relative inflexibility of the productions in adapting to the non-recurrent aspects of the skill. Inflexible productions seem to be applied often, and sometimes mistakenly, when applying grammar rules, when superficial retrieval cues activate inappropriate responses. As an example, in the sentence “John and I lifted the table,” the pronoun “I” is used correctly; whereas in the sentence “Jean handed the chairs to John and I,” the pronoun “I” is used incorrectly because the pronoun is part of the object of the sentence, rather than part of the subject of the sentence. Because the grammatical construction “x and I” occurs as the subject of a sentence more commonly than as the object of a sentence, and is thus represented by many instances in LTM, there is a strong tendency to retrieve “I” rather than the correct pronoun “me” when the pronoun is part of the object. In this example of faulty grammar usage, the algorithmic rules that govern pronoun use are superseded by productions stored in LTM that refer directly to instances of the application of the rules. The constituent part of the skill practiced in early training, to use the pronoun “I” for a first-person singular subject, may have been effective in extinguishing the use of “me”; but in performance, the pronoun “I” becomes routinely used without regard to whether it is part of the subject or the object of the sentence.

Rule automation is thus a double-edged sword: it facilitates and speeds responses, but it can result in errors, and in the application of an inappropriate rule, when surface characteristics of the presenting stimuli become retrieval cues instead of more cognitively demanding structural characteristics. As an example, certain indefinite pronouns, such as “either,” when used as subjects, always take a singular verb (e.g., “Either is…”). In contrast, the words “either” and “or” situated around subject nouns can signal activation of a compound subject sub-rule (e.g., “Either a paragraph or two sentences are…”) that sometimes takes a plural verb. The similarity of stimuli can result in the activation of inappropriate rules or sub-rules. Habitual errors of this kind can be remediated with sufficient practice and feedback that focuses on structural cues for retrieval, although the original production is never completely eradicated by the new one (Bouton, 2002). One of the many applications of CBI is to present such practice and feedback for frequently applied skills. The sequence of computer-based practice provided the intervention that was manipulated in the present study.
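The contrast between a controlled, algorithmic rule and an automated production can be sketched in code. This is an illustrative toy model only, not part of the study's materials; the function names and the cue-response table are hypothetical, chosen to mirror the "John and I" example above.

```python
# Toy sketch (hypothetical names): a deliberate "algorithmic rule" versus an
# automated "production" for first-person pronoun case.

def rule_based_pronoun(grammatical_role):
    """Controlled processing: inspect the structural cue (the pronoun's
    grammatical role) before selecting a response."""
    return "I" if grammatical_role == "subject" else "me"

# An automated production behaves like a direct cue-response lookup keyed on a
# surface cue. The frequent frame "<noun> and ___" retrieves "I" regardless of
# the pronoun's actual grammatical role, modeling the error described above.
AUTOMATED_PRODUCTION = {"<noun> and ___": "I"}

def production_based_pronoun(surface_cue):
    """Automatic processing: fast retrieval from the surface cue alone."""
    return AUTOMATED_PRODUCTION.get(surface_cue, "I")
```

The rule yields "me" in object position ("Jean handed the chairs to John and me"), while the production returns "I" either way; the production is faster in the sense that it skips the role check entirely, which is the speed-for-flexibility trade-off the text describes.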

Practice Sequence Design Considerations

The sequence of practice has shown transfer effects in perceptual motor skills (e.g., Shea & Morgan, 1979) and in complex cognitive skills (e.g., de Croock, van Merriënboer, & Paas, 1998). Studies that fail to show transfer effects are often plagued by insufficient practice in the random-order practice condition (Schmidt & Bjork, 1992) or by failure to control for the number of trials in the random-order practice condition (Shea & Morgan, 1979), because random-order practice often proceeds more slowly than block-order practice. Thus, the researcher is faced with the decision of whether to control for practice trials or for time-on-task during practice. The present study controlled for time-on-task during practice. An additional analysis examined the data using the number of trials during practice as a covariate to account for transfer performance, as suggested by J. J. G. van Merriënboer (personal communication, May 18, 2003).


The Contextual Interference Effect and the Transfer Paradox

The sequence of practice seems to affect performance in a paradoxical way. Block-order practice, in which tasks A, B, and C, as variant forms of a skill, are practiced in a sequence such as A-A-A, B-B-B, C-C-C, results in quicker, more efficient learning of the recurrent aspects of the skill than random-order practice such as B-C-A-C-A-B-C-A-B; yet random-order practice results in greater transfer to novel but related tasks. Battig (1972, 1979) found this phenomenon in studies of intra-task interference and termed it the contextual interference effect (Battig, 1972). Subsequently, numerous researchers in the field of kinesthetics found that the phenomenon applies as well to perceptual motor tasks (e.g., Shea & Morgan, 1979). More recent research has found the effect in the learning domains of problem solving and complex cognitive skills (de Croock, van Merriënboer, & Paas, 1998; Jelsma & Pieters, 1989; van Merriënboer, Schuurman, de Croock, & Paas, 2002), with results generally concurrent with those of studies in the other domains. Hiew (1997) found the effect in rule learning. Jelsma and Pieters (1989) referred to the effect in the problem-solving domain as the transfer paradox. The transfer paradox, characterized as interference with acquisition that results in improved transfer, is especially important as it applies to the design of practice in CBI for complex cognitive skills. If the phenomenon is universal across learning domains for which transfer is the goal, the main implication for designing instruction is that random-order practice of constituent skills is more effective than block-order practice of constituent skills.
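The two schedule types contrasted above can be sketched as a small generator. This is a hedged illustration of the block/random distinction only; the function name and labels are hypothetical and do not reproduce the study's actual practice materials.

```python
import random

def practice_schedule(task_classes, reps, order="block", seed=None):
    """Build a practice sequence of task-class labels.

    order="block"  -> low contextual interference, e.g., A-A-A, B-B-B, C-C-C
    order="random" -> high contextual interference, the same trials interleaved
    """
    # Block order: each task class practiced in a contiguous run.
    trials = [t for t in task_classes for _ in range(reps)]
    if order == "random":
        # Random order: shuffle the identical set of trials, so both
        # conditions present the same number of trials per task class.
        random.Random(seed).shuffle(trials)
    return trials

block_schedule = practice_schedule(["A", "B", "C"], reps=3, order="block")
random_schedule = practice_schedule(["A", "B", "C"], reps=3, order="random", seed=7)
```

Note that both schedules contain exactly the same trials; only the ordering differs, which is why the paradox is attributed to sequence rather than to amount of practice.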

Effective Writing and English Grammar Usage

Incorrect grammar usage by American college students sometimes impedes the effectiveness of their writing. In the present study, grammar rule usage is a medium for examining the effects of practice sequence on the transfer of a skill to a novel related problem-solving skill, rather than an end in itself.


Adherence to rules of grammar is not considered a learning objective of high priority among college writing instructors because (a) college students are expected to have mastered basic grammar skills (D. Coxwell-Teague, personal communication, June 24, 2003) and (b) so many other objectives must be instructed during a limited time (J. Pekins, personal communication, July 24, 2003). However, college students who are diagnosed with deficiencies in grammar skills are often referred to auxiliary resources, such as a writing lab. H. W. Fowler’s (1985) entry on “grammar” in the second edition of A Dictionary of Modern English Usage explains underlying reasons for disdain of grammar instruction and calls for mindful adherence to, and instruction of, established grammar rules:

It has become fashionable to speak disrespectfully of grammar – a natural reaction from the excessive reverence formerly paid to it. The name Grammar School remains to remind us that the study of grammar was once thought to be the only path to culture…. We have developed our own ‘noiseless’ grammar, as Bradley called it; what are generally recognized for the time being as its conventions must be followed by those who would write clearly and agreeably, and its elements must be taught in the schools, if only as a code of good manners. (pp. 230-231)

Students’ writing effectiveness sometimes suffers from faulty grammar. Repeated misuse of a grammatical construction, in the absence of corrective or informative feedback, results in strengthening of that misuse and renders it an automatic process (Schmidt, 1992). “Once learned, an automatic process is difficult to suppress, to modify, or to ignore” (Schneider & Shiffrin, 1977, p. 2). Evidently, responses to prior conditioned stimuli (CSs) are not merely “overwritten” by a new CS; instead, they must compete with responses encoded prior to the new CS (e.g., Bouton, 2002).

Problem-Solving and the Complex Cognitive Skill of Grammar Usage

Complex problems are those that have multiple paths to a solution, or that can be solved by multiple means. By contrast, ill-structured complex problems have no specific goal state (Mayer & Wittrock, 1996). In the present study, there was a singular goal state, in which the subject and verb of a sentence agree in number, measured as the frequency of errors in subject-verb agreement. This study comprised four main elements: a pretest, instruction with practice, a performance test administered at the end of practice, and a transfer test administered after a delay. In the practice and the performance test, the goal was to select the correct choice (singular or plural) between two alternatives for either the subject or the main verb. The participant’s task in the practice and the performance test was thereby reduced to a discrimination task, although choosing correct alternatives depended on rule application. (For examples of problems, see the pretest in Appendix B.) In the transfer test, however, the goal was to edit sentences in a set of paragraphs to make subjects and verbs agree in number, a problem-solving task. For each faulty sentence in the transfer test, there were at least two correct solutions: replacing the subject or replacing the verb. Occasionally a third option was available, namely to reconstruct the sentence in a way that resulted in subject-verb agreement. The transfer test thus presented a problem-solving task in which sub-rules, as variants of the main rule, were identified from the sentence structure and selectively applied. Although the English grammar rule “subject and verb must agree in number” seems to elicit rule-based behavior, correct performance is not always a rule-based behavior, but more often a complex problem-solving task. Most English grammar rules do not follow the strict logical structure (if, then, else) that rules are expected to follow. As grammar teachers have often expressed, there are exceptions to every rule. The rule of subject-verb agreement is complicated by the hurdle of identifying the subject and main verb of a complex sentence and applying several conditional aspects of the rule.
English grammar rules do not apply in all circumstances; many rules have exceptions, and the exceptions are not always clearly definable. “A rule in grammar is a generalization. It is a formula that one makes to account for how a given grammatical construction usually behaves” (Pyle & Muñoz, 1982, p. 39, italics in original). Despite attempts by scholars to define rules for English grammar, some rules have been stated as conditional rules without firm conditions (e.g., in Smith & Stapleford, 1963, p. 13, some rules were conditioned with the modifier “normally”). The grammar rules of the English language are so rife with incidental exceptions that they can hardly be termed rules in the conventional sense. To further complicate the problem of adherence to English grammar rules, the language is malleable in that it changes over time, and it is situational in that it is used differently under different circumstances. In short, grammar rules are conditioned by numerous exceptions and are sometimes affected by temporal and situational factors. Adherence to English grammar rules can therefore be characterized as complex problem-solving rather than as strictly rule-based behavior. The problems presented in the transfer test were nonetheless well-structured complex problems: although complex, they were well defined, because the given state and the goal state were provided to the problem solver.

Significance of the Study

Educational psychologists have placed a premium on the transfer of skills as a goal of training. Among them, Schmidt and Bjork (1992) argued that the ultimate goals of training practical skills are, or should be, “(a) the level of performance in the long term and (b) the capability to transfer that training to related tasks and altered contexts” (p. 207). They cite contextual interference studies in psychomotor and verbal domains as evidence that training strategies designed to promote rapid acquisition of skills and increased speed in performance sometimes impede retention and transfer. They further argued that training strategies should focus more frequently on improving the transfer of target skills than on criterion performance of skills divorced from the context of performance. The contextual interference effect, in which the repeated practice of simple physical movements is followed by delayed performance and transfer tests, has been widely studied in the perceptual motor domain. Many researchers have replicated the contextual interference effect with psychomotor skills, most frequently with artificial tasks in laboratory settings (see Magill & Hall, 1990, for an extensive review of contextual interference studies in the psychomotor domain). Some studies have examined the effect in other domains (e.g., de Croock et al., 1998; van Merriënboer et al., 2002).


The development of grammar skills by high school students whose responses are faulty requires that the faulty responses to stimuli they have acquired be extinguished and replaced with newly acquired responses. The cognitive restructuring required to substitute new skills for habitual behavior demands more effort from the learner than new learning that does not compete with prior learning. Extensive retraining is necessary to replace faulty habits with new skills because the initially conditioned response to a given conditioned stimulus apparently remains indefinitely in LTM, presenting a recurring competitor to the newly learned response (Bouton, 2002). Many if not all of the performance environments in which college graduates apply the skills they acquire in college require effective written communication, and editing documents is a frequent task required of performers of intellectual skills in the workplace. The present study examined practice interventions intended to effect the replacement of faulty habitual responses to conditioned stimuli (CSs) with new skills, that is, responses that comply with the rule that subjects and verbs in a sentence must agree in number. The goal of the current study was to replace faulty responses with correct applications of the concepts, rules, and principles governing subject-verb agreement, by an intervention of practice and feedback. More specifically, the present study examined variations in sequences of practice as they affected the transfer of this complex cognitive skill to a novel related task of editing a document. The identification of an optimally effective practice sequence could, along with other effective strategies, be manifested in the design of instruction that would improve writing skill and communication. If such a sequence were identified and subsequently replicated and tested in other domains, a more generalizable application of the sequence strategy could inform instructional theory.

Theory and Rationale

The theories on which the present study was based are those of automatic human information processing (Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977), contextual interference founded on Battig’s (1979) inter-task interference studies, and cognitive load theory (CLT; see Sweller, 1990, for foundations of the theory, and
Bannert, 2003, for updates on the theory). These theories, applications, and associated empirical evidence are described in more detail in Chapter II. A brief overview of the main principles follows, along with rationales for the design of the present study.

Contextual Interference

Contextual interference was operationalized in this study as the effect of variability in the sequence of practice tasks that inhibits immediate acquisition of skills. Studies on contextual interference have traditionally operationalized contextual interference as the sequence of practice tasks, particular characteristics of which are categorized as A, B, or C, for example, and scheduled for practice in either block order, for low contextual interference, or random order, for high contextual interference. That is, low contextual interference was operationalized as repeated consistent practice of a constituent skill. Low contextual interference (LCI) was characterized as block order practice of constituent skills required for the task in which constituent skills A, B, and C were practiced in the order A-A-A-A-B-B-B-B-C-C-C-C. High contextual interference (HCI) was characterized as random order practice, as when the skills were practiced in an order such as C-A-B-A-B-C-A-C-B-C-A-B.
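The blocked (LCI) and random (HCI) schedules described above can be illustrated with a short sketch. This is a hypothetical illustration only, not part of the study's materials; the function names and the choice of Python are assumptions for the example.

```python
import random

def blocked_sequence(skills, trials_per_skill):
    """Low contextual interference (LCI): each constituent skill is
    practiced in one contiguous block, e.g., A-A-A-A-B-B-B-B-C-C-C-C."""
    return [skill for skill in skills for _ in range(trials_per_skill)]

def random_sequence(skills, trials_per_skill, seed=None):
    """High contextual interference (HCI): the same trials, shuffled so
    that skill variants are interleaved rather than blocked."""
    sequence = blocked_sequence(skills, trials_per_skill)
    random.Random(seed).shuffle(sequence)
    return sequence

print(blocked_sequence(["A", "B", "C"], 4))
# ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C']
```

Note that a plain shuffle can occasionally leave short runs of the same variant; published HCI schedules sometimes add the further constraint that no variant repeats on consecutive trials.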

Purpose of the Study

An important purpose of any educational or instructional system is to facilitate the transfer of skills acquired in instruction and practice to performance of some kind (Ferguson, 1956; Mayer & Wittrock, 1996). The purpose of the present study was to investigate the effects of specific variations in the sequence of computer-based practice tasks of a specific verbal skill on the number of practice trials a learner completes, the learner’s acquisition of skill, and the learner’s ability to transfer skills practiced to a novel, related skill, when time spent at instructional practice was held constant. The study examined the effects of sequence on transfer performance.


Transfer

Whereas positive transfer of skills is the desired outcome of instruction and practice, transfer is a rather vague term that can be characterized or interpreted variously. In this study the term “transfer” is assumed to mean the successful application of knowledge and skill acquired in practice tasks to a task of a type to which the learner has not practiced applying the target skill and knowledge. The transfer of skills across subject domains or across domains of learning does not occur without substantial cueing of the transfer task to the learner to make associations between practice tasks and transfer tasks (Gick & Holyoak, 1980, 1983; Tulving & Thomson, 1973). Van Merriënboer (1997) described transfer as “The ability to perform an acquired skill in new, unfamiliar situations” (p. 322). Mayer and Wittrock (1996) stated that “Transfer occurs when a person’s prior experience and knowledge affect learning or problem solving in a new situation” (p. 48). Transfer was operationalized in the present study as the use of skills acquired in instruction and practice as they affected the editing of a document. As such, the transfer task is the successful application of knowledge and skill acquired in practice tasks to a task of a type the learner has not yet practiced.

Research Questions

This research investigated the effects of specific variations in the sequence of computer-based practice tasks on a specific verbal skill on the number of practice trials a learner completes, the learner’s acquisition of skill, and the learner’s ability to transfer knowledge acquired in the practice of a skill to a novel, related skill, when time spent at instructional practice is held constant. The general research questions of the present study, assuming computer-based practice of complex skills with practice time held constant, were:

1. Does practice sequence affect the number of practice trials completed?
2. Does practice sequence affect acquisition of the practiced skill?
3. Does practice sequence affect transfer?


Hypotheses

The hypotheses were that, when practice time is held constant across groups:

1. Random-order practice results in fewer practice trials than block-order practice.
2. Block-order practice results in greater or equal acquisition of the practiced skill compared to random-order practice.
3. Random-order practice results in greater transfer performance than block-order practice.
4. Shifted-order practice (SCI-I: a period of block-order practice followed by a period of random-order practice) results in greater transfer than block-order practice or random-order practice.
5. Reversed shifted-order practice (SCI-II: a period of random-order practice followed by a period of block-order practice) results in lower transfer than shifted-order practice, block-order practice, or random-order practice.
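The two shifted sequences named in hypotheses 4 and 5 can be sketched in code. The function below is hypothetical (not from the study), and for simplicity it splits the trial counts in half, whereas the study split the practice time in half.

```python
import random

def shifted_sequence(skills, trials_per_skill, reversed_order=False, seed=0):
    """Sketch of the shifted-order treatments: SCI-I practices a blocked
    half first, then a randomized half; reversed_order=True yields SCI-II
    (randomized half first, blocked half second)."""
    half = trials_per_skill // 2
    blocked_half = [s for s in skills for _ in range(half)]  # e.g., A-A-B-B-C-C
    random_half = blocked_half.copy()
    random.Random(seed).shuffle(random_half)
    if reversed_order:
        return random_half + blocked_half  # SCI-II
    return blocked_half + random_half      # SCI-I
```

With `skills=["A", "B", "C"]` and four trials per skill, SCI-I begins with the blocked run A-A-B-B-C-C and ends with the same six trials in shuffled order.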

Summary

The transfer of skills acquired in CBI practice to subsequent performance is evidently dependent partly on the sequence in which practice is presented to the learner (e.g., De Croock, van Merriënboer, & Paas, 1998; Jelsma & Pieters, 1989; Paas & van Merriënboer, 1994; Schmidt & Bjork, 1992; Shea & Morgan, 1979; Van Merriënboer, de Croock, & Jelsma, 1997; Van Merriënboer, Schuurman, de Croock, & Paas, 2002). The present study tested and elaborated the effects of block-order and random-order practice on the transfer of skills acquired in practice to the performance of a related, unpracticed task. The results of the present study are intended to inform the design of instruction for improving transfer of skills acquired in practice with CBI.


CHAPTER 2

REVIEW OF LITERATURE

Variations of practice sequence for rule learning and problem solving tend to improve the learner’s ability to transfer skills to new contexts and applications (Gagné, 1985; Cormier & Hagman, 1987). Practice sequence variations are the various temporal orders in which practice tasks are presented to the learner. Although the sequence of practice items can be varied in numerous ways, the specific way of varying practice in the present study was through variations on block-order practice and random-order practice. Here, practice items were blocked by rule variation. A secondary aim of this research was to provide data for inferences about the cognitive processes, including controlled processes and rule automation, by which positive transfer results from the instruction of complex cognitive skills. Mechanisms that account for positive transfer include the theory of identical elements (Thorndike & Woodworth, 1901) and schema-based transfer (Gick & Holyoak, 1987). Van Merriënboer (1997) holds that controlled processes “require focused attention, are easily overloaded, and are prone to errors” (p. 313) and that rule automation, which enables processing with decreased mental effort, “is mainly a function of the amount and quality of practice” (p. 320). The effect of practice sequence on transfer is expected to be mediated by the degree to which elements of the transfer tasks are similar to practice tasks (from identical elements theory; Thorndike & Woodworth, 1901) and the degree to which cognitive processes are automated through repeated practice (Gick & Holyoak, 1987). Depending on the familiarity of the learning task to learners, prior knowledge or skill may also mediate the effect of practice sequence on transfer (Gagné, 1985). Figure 2.1 illustrates in
simplified form the proposed causal factors associated with the effects of practice sequence on transfer task performance.

[Figure 2.1 is a diagram in which practice sequence influences transfer task performance through controlled processing and automatic processing, with prior knowledge or skill as an additional contributing factor.]

Figure 2.1. Proposed causal factors associated with the effects of practice sequence on transfer task performance.

Empirical evidence across learning domains has quite consistently indicated that practice sequence affects mental effort exerted during practice (e.g., de Croock et al., 1998; van Merriënboer et al., 2002), acquisition (e.g., Shea & Morgan, 1979), delayed retention (e.g., de Croock et al., 1998; Shea & Morgan, 1979), and transfer performance (e.g., de Croock et al., 1998; Schmidt & Bjork, 1992). In the domain of complex cognitive skills, randomly ordered practice, when compared to block-order practice, results in: (a) increased mental effort during practice, (b) decreased speed to problem solution on performance tests of practiced skills, (c) increased delayed retention, and (d) increased performance on transfer tasks (e.g., de Croock et al., 1998; van Merriënboer et al., 2002). Those phenomena characterize the contextual interference effect. The contextual interference effect can be explained in the context of controlled and automatic human information processing (Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977) and in the context of cognitive load theory (Bannert, 2002; Sweller, 1994; Valcke, 2003; van Merriënboer et al., 2002). The principles of these theories and the findings of associated research are presented in this chapter, followed by a rationale for investigating the effect of beginning a practice session with block order practice and then
shifting to random order practice. Before reviewing the literature, the uses and meanings of terms require some specification.

Conflation of Terms

For this review, several terms derived from diverse theories serve a common nomenclature. Terms within the following sets of terms are used synonymously to convey the definitions that follow:

• working memory (WM), temporary stores, short-term memory (STM), and short-term stores: information held in consciousness, limited to seven items, plus or minus about two (Miller, 1956) – as long as those elements are not interactive – and quickly subject to loss unless rehearsed or encoded into long-term memory

• long-term memory (LTM) and long-term store: virtually unlimited memory that is the compilation of knowledge and skill from a lifetime of learning, subject to reconstruction as new related information is recursively reintegrated into knowledge structures

• cognitive processing and human information processing (HIP): the processes by which humans perceive stimuli and respond, characterized by Gagné (1985) as the following stages: reception of patterns of neural impulses, selective perception of features, short-term storage, rehearsal, semantic encoding, long-term storage, retrieval, response organization, and performance

• block order practice, blocked practice, blocked order practice, and consistent practice: a practice sequence in which variants of a skill or constituent skills of a whole skill are practiced in consistent, contiguous segments or blocks of practice

• random order practice, randomly ordered practice, and varied practice: a practice sequence in which variants of a skill or constituent skills of a whole skill are practiced in varied order, with practice items randomly selected from all of the variants or constituent skills

• time-in-practice, time-on-task, time spent at practice, time at practice, practice time: the total amount of time a learner or participant spends practicing learning tasks prior to tests of acquisition, delayed retention, or transfer. In contrast to time in instruction, these terms refer to the part of instruction that includes strictly practice and feedback. Time in instruction includes time in practice in addition to the time spent in pre-instruction (gaining attention, presenting objectives, eliciting prior knowledge) and information presentation.

Contextual Interference Effect and the Transfer Paradox

The main phenomenon under investigation, the contextual interference effect, which is consistent with the transfer paradox described by van Merriënboer, de Croock, and Jelsma (1997), has been observed in the verbal domain with paired associates (Schild & Battig, 1966), in the motor domain (Shea & Morgan, 1979), and in the domain of complex cognitive skills with troubleshooting faults in a complex system (de Croock et al., 1998; van Merriënboer et al., 2002). High contextual interference results in greater transfer and often greater delayed retention, although it interferes with quick, smooth acquisition of a skill during practice trials (de Croock et al., 1998).

Contextual Interference Effect

Contextual interference is significant in its differential effects on skill acquisition, retention, and transfer. In a review of studies on the contextual interference effect in motor skill acquisition, Magill and Hall (1990) defined the phenomenon broadly as “the effect on learning of the degree of functional interference found in a practice situation when several tasks must be learned and are practiced together” (p. 264). The term “functional interference,” which Magill and Hall included in their definition, refers to the difficulty learners encountered with skill acquisition when multiple tasks were practiced together rather than separately – that is, when they practiced the various tasks in randomized order rather than in blocks of repeated practice of each task. The important aspects of the observed phenomenon with simple tasks are the effects on retention and transfer. Although learners who practiced random-ordered tasks, compared to learners who practiced the same tasks in blocked order, experienced greater difficulty and performed relatively poorly during practice (and often on performance
tests), they usually demonstrated greater delayed retention or transfer than those who practiced block-ordered tasks. The origin of the contextual interference effect was the finding that paired-associate learning – the memory task that required recall of matched words – although impaired by random-order rehearsal compared to block-order rehearsal, resulted in increased performance on delayed retention tests (Schild & Battig, 1966; Battig, 1972). The initial finding was relatively incidental to Schild’s research on the properties of human memory (Battig, 1972), but later became a focus of Battig’s research (e.g., Battig, 1979). However, the findings drew little interest until Shea and Morgan (1979) applied the effect to motor skills (Magill & Hall, 1990). After the publication of Shea and Morgan’s (1979) study, which involved timed sequential knocking down of barriers on a table as responses to visual stimuli, numerous studies of rapid motor skill acquisition followed, most of which replicated the effect with regard to delayed retention and transfer.

The Transfer Paradox

The contextual interference effect and the transfer paradox share the common assumption that superior transfer follows from practice-task variability. The constructs vary in two ways: (a) the transfer paradox applies to all types of task variability, of which contextual interference is one (van Merriënboer et al., 1997), and (b) the transfer paradox posits that task variability during practice may inhibit delayed transfer. Van Merriënboer et al. (1997), in describing the transfer paradox, claimed that: Whereas high variability typically improves transfer performance for new variants of a task that were not practiced before, it may also impair performance during practice or require more training time to reach a prespecified performance without positive effects on performance at retention, that is, performance on variants of the task already practiced. (p. 784) In a similar vein, Battig (1979) wrote that “increased contextual variability and variety can and often do lead not only to more effective original processing and learning but more importantly, to better subsequent retention and transferability of this information” (p. 36).


However, Battig’s earlier memory experiments indicated that delayed retention was improved by high variability in practice, whereas van Merriënboer et al. (1997) predicted, and found, no differences between treatment groups in delayed retention. Because the van Merriënboer et al. (1997) study investigated the acquisition of complex cognitive skills under high- and low-contextual interference, rather than the paired-associate learning of Battig’s experiments, the reasons for the discrepancy remain unclear. One possible explanation is that the increase in delayed retention characteristic of high contextual interference treatments in simple verbal or motor skills (e.g., Battig, 1979; Shea & Morgan, 1979) does not apply to complex skills because the complexity of tasks interferes with memory structures that are useful for simple tasks. Another potential explanation is that instruments used to assess delayed retention in experiments that involved complex tasks (e.g., de Croock et al., 1998; van Merriënboer et al., 1997; van Merriënboer et al., 2002) were not sufficiently sensitive to measure delayed retention effects that may have occurred. The contextual interference effect, as it has been studied in the acquisition of relatively simple psychomotor skills (e.g., Lee & Magill, 1983; Shea & Morgan, 1979), is, with some exceptions, largely incidental to the study of the effect as it pertains to the instruction of complex cognitive skills. The training of rapid muscle movement is a different domain of learning from those of rule use and problem solving typical of the training of complex cognitive skills. Although the effect might generalize across such diverse learning domains, its application to complex cognitive skills deserves further empirical investigation.
Two observations by Magill and Hall (1990) in their review of motor skill studies are worthy of note because of their implications for complex cognitive skills: (a) the transfer aspect of the effect was more consistent when the parameters of transfer tasks exceeded those of the practice tasks than when the parameters of the transfer tasks were between poles of the practice tasks, and (b) retention and transfer effects were relatively more reliable when the number of trials for each group was controlled. The former observation is important because it suggests that the benefit of random-order practice may be more consequential to relatively far transfer tasks (e.g., when practice tasks and
transfer tasks are performed in quite different contexts) than relatively near transfer tasks (e.g., when practice and transfer tasks vary in the timing or sequential order of task elements, but are performed in the same context). Accordingly, one might surmise from Magill and Hall’s (1990) conclusions that – to some degree – the transfer aspect of the contextual interference effect is more robust when transfer tasks are more varied from practice tasks. The latter observation by Magill and Hall (1990) is crucially important. Whereas controlling both the number of trials and time-on-task is relatively easy to accomplish in trials of motor skills that occur almost instantaneously (in which the duration of practice trials is measured in milliseconds), control of practice time becomes problematic with complex cognitive skills such as troubleshooting a system fault, in which the trials may range from several seconds to several minutes. For complex cognitive skills, in which time to solution may vary widely across subjects, it is nearly impossible to control for both the number of practice trials and time-on-task during practice by experimental design (J. J. G. van Merriënboer, personal communication, May 18, 2003). Van Merriënboer suggested that if time-on-task were controlled experimentally, the number of practice tasks should be controlled statistically as a covariate. The studies of motor skills are also of interest to the acquisition and transfer of complex cognitive skills because, prior to Schild and Battig’s findings and the experiments of Shea and Morgan (1979), interference was commonly viewed as leading to exclusively negative transfer (Magill & Hall, 1990). Subsequently, interference had to be considered differentially with respect to the desired learning outcome: acquisition, retention, or transfer, among other variables.
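Van Merriënboer's suggestion (control time-on-task experimentally and treat the number of completed trials as a statistical covariate) amounts to an analysis-of-covariance-style model. The sketch below, with entirely hypothetical numbers, illustrates the idea using an ordinary least-squares fit: the group coefficient estimates the treatment difference in transfer scores after adjusting for trials completed.

```python
import numpy as np

# Hypothetical data: transfer scores for two groups (0 = LCI, 1 = HCI),
# with the number of completed practice trials as a covariate.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
trials = np.array([40, 42, 44, 46, 30, 32, 34, 36], dtype=float)
transfer = np.array([10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0])

# Design matrix: intercept, group indicator, covariate (an ANCOVA-style model).
X = np.column_stack([np.ones_like(trials), group, trials])
coef, *_ = np.linalg.lstsq(X, transfer, rcond=None)
intercept, group_effect, trials_effect = coef
# group_effect is the treatment effect adjusted for trials completed;
# with these illustrative numbers the fit is exact: group_effect = 9.0,
# trials_effect = 0.5.
```

This sketch omits the inferential machinery (standard errors, F tests) that a real ANCOVA would require; it shows only how the covariate enters the model.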
The studies of motor skills laid the groundwork for researchers to investigate the contextual interference effect in domains of complex cognitive skills. Another contextual interference study in the psychomotor domain, by Shea, Kohl, and Indermill (1990), suggested that initial block order practice followed by random order practice might result in greater transfer than practice in exclusively block order or exclusively random order sequences. They found that participants who practiced the
greatest number of trials in block order, or the low contextual interference (LCI) treatment (400 practice trials), exhibited a degree of rigidity in responses on subsequent performance tests that was not present among participants who practiced fewer trials in the LCI treatment. Flexibility in adapting responses to nuances of stimuli was viewed as a critical factor for transfer ability. The approach to practice sequence of beginning with block order practice and shifting to random order practice was tested in the present study in the domain of complex cognitive skills. Because the retrieval of information from LTM can occur simultaneously by both automatic and controlled processes (Shiffrin & Schneider, 1977), efficient processing in the limited-capacity cognitive system can be aided by the development of automatic processes for recurrent aspects of tasks enabled by early block order practice, allowing working memory resources to remain available for the processing of new tasks (Shiffrin & Schneider, 1977). Following automation of the recurrent aspects of tasks during initial block-order practice, subsequent random order practice should promote more controlled processing of the non-recurrent aspects of the tasks than would block order practice, while recurrent aspects of the tasks require little from limited working memory resources. Therefore, the condition of the current study in which participants began with block order practice and later shifted to random order practice was expected to result in moderated interference from random order practice and greater transfer than either the random or the blocked condition, with the flexibility that Shea et al. (1990) suggested would result from shifting order in the psychomotor domain.

Contextual Interference and Complex Cognitive Skills

From a study in the domain of complex cognitive skills, using a simulation of a chemical processing plant, de Croock, van Merriënboer, and Paas (1998) found that the random-order practice (high contextual interference; HCI) of troubleshooting faults resulted in greater transfer of skills than block-order practice (low contextual interference; LCI). When subjects applied the skills they had practiced in a computer-simulated plant to a problem they had not yet practiced, the random-order-practice (HCI)
group significantly outperformed the block-order-practice (LCI) group at troubleshooting faults with a post-hoc effect size exceeding one pooled standard deviation on two of the four dependent measures used to assess transfer. Those two measures were “number of incorrect diagnoses” (ES = 1.44 SD) and “cases solved without incorrect diagnoses” (ES = 1.09 SD) (pp. 262-263). The measure “cases correctly diagnosed” suffered a ceiling effect, and the “mean diagnosis time,” though less for the HCI group, did not reach the level of statistical significance. The de Croock, van Merriënboer, and Paas (1998) study was limited in power by a small sample size (N = 16, 8 in each group) in addition to the ceiling effect on the dependent measure “cases correctly diagnosed.” Furthermore, time required to complete the controlled number of practice trials, as expected, was significantly greater for the random-order practice group (p < .05) by about 19 minutes on average, a post-hoc effect size of 0.53 pooled standard deviation (pp. 260-261). Given the low statistical power of the study and the lack of control for time in practice, the results were inconclusive regarding the transfer aspect of the contextual interference effect with regard to acquisition and transfer of complex cognitive skills. Yet, their results provided impetus for further study. In a later study (van Merriënboer, Schuurman, de Croock, & Paas, 2002, experiment 2), using a different computer simulation of a similar chemical processing plant, a larger sample size (N = 69, 35 in the HCI group and 34 in the LCI group), and a transfer task that required troubleshooting of a different simulated plant than the plant in which practice occurred, results were somewhat more conclusive. Although the researchers did not state the alpha level by which they judged their results, they reported a “trend in the expected direction” (p.
24), that the random-order practice group outperformed the block-order practice group (p < 0.10) on the transfer test with a post-hoc effect size of 0.46 pooled standard deviation. However, by controlling the number of practice trials, the researchers observed that subjects in the random-order practice group required significantly more practice time than the block-order practice group, by 14.5 minutes or 0.72 pooled standard deviation (p < 0.01). Because the number of trials was controlled rather than practice time, the
possibility remains that time in practice accounted for more of the variance between groups than the random-order practice sequence. Van Merriënboer et al. (2002, experiment 2) also examined cognitive load during practice and the transfer test with the simulation. The following section discusses the van Merriënboer et al. (2002) study and others in the framework of cognitive load theory. In summary, de Croock et al. (1998) and van Merriënboer et al. (2002) found evidence from among two practice sequence treatments, random-order practice (HCI) and block-order practice (LCI), that random-order practice resulted in greater positive transfer than block-order practice in troubleshooting simulation studies. Both studies controlled for the number of practice trials rather than the time spent in practice. In both studies, random-order practice required more time at practice to complete the controlled number of practice trials than block-order practice. Therefore, time-on-task may have accounted for the improved transfer performance of the HCI groups in both studies. Contextual interference may have had a greater or lesser effect than time-on-task, or may have had no effect on transfer performance.
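The pooled-standard-deviation effect sizes quoted above (e.g., 1.44 SD, 0.72 SD) are differences between group means expressed in units of the pooled standard deviation (Cohen's d). A minimal sketch with hypothetical numbers, not data from the studies cited:

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two independent groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def effect_size(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: the mean difference in pooled-SD units."""
    return (mean1 - mean2) / pooled_sd(sd1, n1, sd2, n2)

# Hypothetical group statistics (8 subjects per group, as in de Croock et al.):
d = effect_size(mean1=12.0, sd1=4.0, n1=8, mean2=8.0, sd2=4.0, n2=8)
print(round(d, 2))  # 1.0
```

With equal standard deviations of 4.0 in both groups, the pooled SD is 4.0, so a 4-point mean difference yields an effect size of exactly one pooled standard deviation.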

Cognitive Load Theory: Schema Development and Rule Automation

Cognitive load theory (CLT) assumes a limited working memory and an essentially unlimited long-term memory (Sweller, 1990). According to CLT, cognitive load during training can derive from three sources: intrinsic cognitive load, extraneous cognitive load, and germane cognitive load (Bannert, 2003). Intrinsic cognitive load is the consumption of working memory resources caused by the inherent properties of the learning task, often attributable to interactivity among inextricable task elements. Extraneous cognitive load is the consumption of working memory resources caused by instructional materials that require attention to be diverted from the learning task during the learning process, and is attributable to poor instructional design. Germane cognitive load is the consumption of working memory that induces cognitive processes directly related to the learning task. Assuming that working memory is not overloaded, germane cognitive load is assumed to encourage mindful abstraction and more complex,
interrelated cognitive schemata, and thus increase the ability to transfer acquired skills to novel performance tasks (Paas & van Merriënboer, 1994; van Merriënboer et al., 2002). CLT researchers have suggested that instruction for training complex cognitive skills should promote schema development and automation (e.g., Paas, Tuovinen, Tabbers, & van Gerven, 2003; Paas, Renkl, & Sweller, 2003; van Merriënboer, Kirschner, & Kester, 2003). Schema development should improve the learner’s ability to apply skills in diverse situations, and rule automation should reduce mental workload during problem-solving so that inherently limited working memory resources can be devoted to decision making. Van Merriënboer et al. (2002, experiment 2) measured cognitive load during practice and transfer using the subjective Mental Effort Scale (Paas, 1992; Paas & van Merriënboer, 1994b), in which subjects rated their mental effort on a 9-point scale from extremely low to extremely high. The mental effort a learner reported investing in a learning task or a transfer task was a proxy for the cognitive load induced by training or performance. Unfortunately, although cognitive load can be measured by subjective measures such as the Mental Effort Scale, neither this scale nor any known scale to date can differentiate intrinsic, extraneous, or germane cognitive load (van Merriënboer et al., 2002). However, when instruction, learning tasks, and performance tasks have been held constant, and only practice tasks have been varied, as in the van Merriënboer et al. (2002, experiment 2) study, the Mental Effort Scale has provided an internally reliable measure to assess cognitive load during practice.
As predicted, consistent with the contextual interference effect and cognitive load theory, subjects in the random-order practice (HCI) group perceived that they exerted significantly greater mental effort during practice than subjects in the block-order practice (LCI) group (p < 0.01). Although the HCI group outperformed the LCI group on the measure of diagnoses of faults in the transfer test, the difference was not statistically significant. The implication from a CLT perspective is that HCI slowed acquisition but facilitated transfer because, compared to LCI, HCI increased germane cognitive load during practice which resulted in the increased development of “cognitive schemata
relevant for transfer of troubleshooting skill” (p. 27). Conversely, LCI accelerated acquisition but inhibited transfer because LCI, though it served to automate recurrent aspects of troubleshooting skill, resulted in less schema development than HCI. In other words, HCI presumably facilitated transfer because it induced greater germane cognitive load than LCI, which resulted in more cognitive processing, and facilitated schema development relevant for transfer of the skill. Whereas the block-order practice of the LCI condition might facilitate automatic processing of recurrent skills, the random order practice of the HCI condition “gives inductive processes the opportunity to extend or restrict the range of applicability of acquired cognitive schemata” (van Merriënboer, 1997, p. 189), while increasing germane cognitive load.

Summary

HCI practice required greater perceived mental effort than LCI practice in both the de Croock et al. (1998) and van Merriënboer et al. (2002) studies, consistent with the contextual interference effect and transfer paradox. However, doubt remains regarding whether the random order of the practice tasks or the increased time required by random-order practice accounted for the superior transfer performance. As did de Croock et al., van Merriënboer et al. controlled for the number of practice tasks performed by the two groups rather than time in practice. The effect sizes for time in practice were variable in their relations to effect sizes for transfer performance. The accounts of both studies reported greater time spent at practice for the HCI group than the LCI group. In the de Croock et al. study, the effect size of the transfer score was greater than that of time in practice. In the van Merriënboer et al. study, however, the effect size of time in practice was greater. Table 2.1 summarizes the data reported in those studies on practice and transfer. The general supposition is that time in practice cannot be excluded as the cause of the increased transfer performance in these studies. To test the tenability of this supposition, similar experiments that control for time in practice should be performed; the current study is one such experiment.


Table 2.1 Practice Trials, Transfer Measures, and Effect Sizes in Two Studies of Complex Cognitive Skills

Source: De Croock et al., 1998
  Number of practice trials: 48
  Measures of transfer and effect sizes (HCI > LCI):
    Number of incorrect diagnoses, 1.44
    Percent of cases solved without incorrect diagnoses, 1.09
  Effect size of time in practice (LCI > HCI): 0.53

Source: Van Merriënboer et al., 2003, Exp. 2
  Number of practice trials: 20
  Measure of transfer and effect size (HCI > LCI):
    Number of incorrect diagnoses, 0.46
  Effect size of time in practice (LCI > HCI): 0.72

Note. HCI = High Contextual Interference; LCI = Low Contextual Interference.
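The effect sizes in Table 2.1 are standardized mean differences (Cohen's d). For readers who wish to verify such values, the following is a minimal sketch of the pooled-standard-deviation computation; the group statistics shown are hypothetical and are not the data of the cited studies.

```python
import math

def cohens_d(m1, m2, s1, s2, n1, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical group statistics (NOT the values from the studies cited):
# HCI group: mean transfer score 7.0, SD 2.0, n = 19
# LCI group: mean transfer score 5.0, SD 2.0, n = 16
d = cohens_d(7.0, 5.0, 2.0, 2.0, 19, 16)
print(round(d, 2))  # 1.0
```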

When the number of practice trials was controlled in earlier studies, the HCI practice sequence required more time and resulted in greater transfer performance. When time in practice was controlled in the current study, however, no significant differences were found among groups. Several uncontrolled factors might account for the different results between this and previous studies; those factors are discussed in Chapter 5.


CHAPTER 3

METHOD

The present study investigated the effects of practice sequence variations on the transfer of a complex cognitive skill, editing text, in four practice sequence treatments that controlled experimentally for time spent in practice but not for the number of practice trials. This chapter describes an experiment that assesses the effects of practice sequence on the transfer of a complex cognitive skill while controlling for time-on-task, and that examines the variance attributable to the number of trials completed during practice.

The current study hypothesized that contextual interference accounts for some of the variance in transfer performance because random-order practice was expected to increase controlled processing (Shiffrin & Schneider, 1977). In the CLT paradigm, because HCI was expected to increase germane cognitive load, HCI was expected to aid schema development (van Merriënboer et al., 2002). The present study was designed to test the contextual interference effect while controlling for time spent in practice.

Furthermore, the present study investigated an alternative to practice sequences that are entirely block-ordered or entirely random-ordered, consistent with the observation by Shea et al. (1990). To operationalize the practice sequence proposed by Shea et al., the present study included, in addition to HCI and LCI, a third practice sequence treatment, and a fourth as an experimental control treatment (see Table 3.1). The third and fourth treatments required the division of the practice stage of instruction into two temporally equal segments; that is, half of the practice time was devoted to one practice sequence and half to the other. The third treatment was the shifted-order practice sequence treatment (SCI-I), which began with block-order practice as in LCI and, after half of the practice period expired, shifted to random-order practice as in HCI. The fourth treatment was proposed to


experimentally control for the possibility that transfer performance resulting from the hypothesized shifted treatment (SCI-I) derived merely from practice under both LCI and HCI sequences, rather than from the specific sequence of LCI followed by HCI. The fourth treatment, SCI-II, served as an experimental control that reversed the sequence of SCI-I: it presented random-order practice in the first practice segment, followed by block-order practice in the second practice segment. The inclusion of the fourth treatment followed from a recommendation by J. J. G. van Merriënboer (personal communication, May 18, 2003) regarding a different, related experiment:

…from an experimental viewpoint I think it is important to include a “reverse-shifted” condition that starts with randomized practice and then continues with blocked practice. This would allow you to conclude that it is really the sequencing of randomized and blocked practice that counts – and not just doing it both.

Although van Merriënboer’s comments referred to a different experiment that involved different learning tasks, they apply equally well to the present study because the purpose and design of the two studies are otherwise identical.

Explanations of Hypotheses

Among the four experimental treatments, SCI-I was hypothesized to result in the greatest transfer performance, followed in order by HCI, LCI, and SCI-II (see Table 3.1). The SCI-I treatment was expected to result in better transfer performance than either HCI or LCI because it was expected to produce automatic processing of recurrent aspects of the whole skill (e.g., identifying the subject and verb of the sentence and applying procedures prescribed by sub-rules) in the initial part of practice, allowing more cognitive resources to be devoted to controlled processing (e.g., interpreting cues and retrieving associated sub-rules) later in practice. The HCI treatment was expected to result in greater transfer than LCI, as demonstrated previously (e.g., de Croock et al., 1998; Shea & Morgan, 1979; van Merriënboer et al., 2002), because HCI was expected to provide more practice at controlled processing of the interactive elements of the problem that required executive decisions. The reversal of the shifted-order practice sequence in the fourth treatment, SCI-II, was expected to result in the poorest transfer performance (see Table 3.1). Detriments to


transfer performance in the SCI-II treatment were anticipated because (a) the cognitive load associated with learning both recurrent and non-recurrent aspects of the skill in early practice was expected to inhibit quick, smooth acquisition of recurrent skills (de Croock et al., 1998; van Merriënboer, 1997) and (b) the controlled processing (Shiffrin & Schneider, 1977), or germane cognitive load (van Merriënboer et al., 2002), required for the non-recurrent aspects of the skill and associated with HCI would not be afforded in later practice, resulting in limited schema development.

Table 3.1 Experimental Treatments, Practice Sequences, and Expected Transfer Performance

Treatment               Practice sequence
LCI                     Block-order practice
HCI                     Random-order practice
SCI-I                   Block-order practice, followed by random-order practice
SCI-II                  Random-order practice, followed by block-order practice
Experimental controls   Exercises in capitalization and punctuation (no hypothesis)

Hypothesized transfer results by experimental treatment: SCI-I > HCI > LCI > SCI-II

Note. LCI = Low Contextual Interference; HCI = High Contextual Interference; SCI-I = Shifted Contextual Interference; SCI-II = Shifted Contextual Interference (experimental control with the reverse of the SCI-I sequence).

A control group was also included to examine the test effect of the 40-item test that was used to measure prior knowledge and post-treatment knowledge of the target skill. Instead of reading the initial instruction on subject-verb agreement as the experimental groups did, members of the control group visited a Web site that provided a wide range of exercises, from which participants could choose, in grammar skills such as capitalization and punctuation. The types of exercises included crossword puzzles, matching, and fill-in-the-blank. The students continued with these exercises until the end of the class period. In the next session, they were directed to a similar Web site with


another set of exercises. They did not receive any training or practice on subject-verb agreement, so no hypotheses were made regarding their posttest or transfer performance.

Participants

The participants were 114 high school seniors enrolled in five senior English classes at a research laboratory high school in the southeastern United States. They were randomly assigned to five groups (four experimental groups and one control group). Because of absenteeism and other reasons, 92 of the original 114 students completed the pretest, posttest, and transfer test. Of these 92, 21 were in the control group, leaving 71 who were included in most of the final analyses of the four experimental groups. Of these 71 participants, 16 were in the LCI group, 19 in the HCI group, 20 in the SCI-I group, and 16 in the SCI-II group; 40 were female and 31 were male. None of the participants were advanced placement English students. Participants took part in the research as a required, non-graded activity of the English course. Permission by students to include their data in the research was voluntary. Each student granted permission by signing an informed consent form if 18 years old or older, or by submitting a signed parental consent form and a signed student assent form if under 18 years of age (see Appendix A). The consent and assent forms were given to students after an explanation that the confidentiality of their data would be protected, that scores would be reported only in aggregate, and that their performance would not affect their grades. As an incentive to participate, students were offered a “pizza party” at which the researcher provided pizza and the teacher of the five classes provided drinks.

Instructional Materials

The instructional materials were presented entirely as CBI, consisting of self-paced information presentation followed by practice and feedback. The CBI comprised two sections: one called “Introduction” and another, on the four task classes, labeled “The Basics.” These two sections constituted the entire information presentation.


The section called “Introduction” provided participants (a) an overview of the program, which elicited attention by explaining why subject-verb agreement in writing is important for future educational and job opportunities, (b) objectives of the instruction relating the four task classes to the importance of improving writing and editing skills, (c) the structure of the project, including the information presentation, practice with feedback, posttest, and transfer test, and (d) a review of the concepts “subject” and “main verb” with examples. The section called “The Basics” presented four sub-rules of subject-verb agreement that cause difficulties for competent writers, each of which constituted a task class. The sub-rules, or task classes, involved situations in which (a) words come between the subject and the main verb, (b) the verb comes before the subject, (c) subjects are compound nouns, and (d) certain indefinite pronouns are subjects of the sentence. Two of the sub-rules required further discrimination. Compound subjects, when the latter subject noun is singular, require a singular main verb form. Indefinite pronouns as subjects require singular main verbs only when they denote singularity (e.g., “either” as a subject denotes singularity of the subject, whereas “both” denotes multiplicity). “The Basics” provided rules, sub-rules, examples, and non-examples; non-examples presented cases in which a sub-rule does not apply.

Variables

Independent Variable

The independent variable was the sequence of practice in four treatment conditions: block-order practice (low contextual interference [LCI]), random-order practice (high contextual interference [HCI]), block-order practice followed by random-order practice (shifted contextual interference [SCI-I]), and random-order practice followed by block-order practice (experimental control [SCI-II]). Figure 3.1 presents a graphic representation of the four treatments in the context of the entire experiment.

LCI. Practice in the LCI condition consisted of four 15-minute segments in which practice items were randomly drawn from the item pools of each of the four task


classes consecutively. Within each block of practice, items were drawn randomly from the pool of the particular task class.

[Figure 3.1 appears here: a timeline showing, for each treatment group (LCI, HCI, SCI-I, SCI-II), the sequence of pretest, information presentation, practice (0 to 60 minutes), posttest, and transfer test; time to complete the components before and after practice was not controlled.]

Figure 3.1. Graphic representation of sequence of components of the CBI by treatment group.

Although the number of items (practice trials) completed in each task class was recorded for statistical analysis, the number of items completed was not controlled experimentally. Consequently, in all conditions the number of items practiced in each task class was a function of the time allotted to practice rather than of a set number of trials in a given task class.

HCI. Practice in the HCI condition consisted of 60 minutes of practice in 15-minute segments, during which practice items were randomly drawn from among all four practice item pools.

SCI-I and SCI-II. Practice in the SCI-I and SCI-II conditions consisted of two 30-minute segments (see Figure 3.1). In both treatments, practice items in two practice segments were randomly drawn from among all four item pools, as in the HCI treatment; in the other two practice segments, practice items were presented in block order, as in the LCI treatment. Instead of the 15-minute duration for each task class


provided in the LCI condition, 7.5 minutes were allocated to each of the four task classes to accommodate the abbreviated time allowed for block-order practice. The SCI-I treatment first presented a block-order practice segment of 30 minutes, followed by 30 minutes of random-order practice. The SCI-II treatment first presented a random-order practice segment of 30 minutes, followed by 30 minutes of block-order practice.
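The four practice schedules described above can be summarized as ordered lists of timed segments, each segment drawing from either a single task-class pool (block order) or all four pools (random order). The sketch below is a hypothetical reimplementation of that scheduling logic for illustration only; the function names, pool labels, and data structures are this sketch's assumptions, not part of the original CBI program.

```python
import random

TASK_CLASSES = ["TC1", "TC2", "TC3", "TC4"]

def segments(condition):
    """Return the ordered list of (minutes, pools) practice segments.

    A block segment draws from a single task-class pool; a random
    segment draws from all four pools. Durations follow the design:
    LCI = four 15-min blocks; HCI = 60 min of random-order practice;
    SCI-I = four 7.5-min blocks then 30 min random; SCI-II reversed.
    """
    half_blocks = [(7.5, [tc]) for tc in TASK_CLASSES]  # abbreviated blocks
    if condition == "LCI":
        return [(15.0, [tc]) for tc in TASK_CLASSES]
    if condition == "HCI":
        return [(15.0, TASK_CLASSES)] * 4
    if condition == "SCI-I":
        return half_blocks + [(15.0, TASK_CLASSES)] * 2
    if condition == "SCI-II":
        return [(15.0, TASK_CLASSES)] * 2 + half_blocks
    raise ValueError(condition)

def draw_item(pools, item_pools):
    """Pick a task class from the segment's pools, then an item from it."""
    tc = random.choice(pools)
    return tc, random.choice(item_pools[tc])
```

Every condition totals 60 minutes of practice; only the interleaving of task classes, and hence the contextual interference, differs.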

Dependent Measures

The two main dependent measures, acquisition and transfer, were measured by the posttest and the transfer test, respectively. To assess prior knowledge, a pretest was administered as the first activity following a description of the research activity and collection of consent and assent forms. Additional dependent measures for secondary analysis included time spent in the initial instruction and the number of trials executed during practice sessions. The instruments used to collect these data are described in the Instruments section below.

Instruments

The instruments that were used in the present study were the pretest, the posttest, the transfer test, and the computer program that collected data regarding time in instruction and the number of practice trials. Each of these instruments is described in the following sub-sections.

Pretest

Prior knowledge of the learners about the four task classes was assessed with a pretest presented as a one-page paper instrument. The pretest consisted of 40 items. Items were designed to test proficiency in each of the four task classes. Items were complete sentences. For each sentence the learner was asked “Do the subjects and main verbs of these sentences agree?” and prompted “Mark Yes or No for each sentence.” In addition, a parenthetical note appeared under those directions: “Reminder: The subject of the sentence is the person(s), place(s), or thing(s) that the sentence is about. The main verb is


the word that tells what the subject is or does.” An example item was supplied (“Students who learn grammar rules are better at editing sentences.”), with the “Yes” box marked to show how learners should indicate that the subject and main verb of a sentence agree. The internal consistency of the pretest, calculated from the present data, was a Cronbach alpha coefficient of .76. To develop the test, the author generated 10 items for each of the four task classes and submitted them to an expert editor for analysis and revision. After several revisions recommended by the editor, the resulting items included 12 items for the first task class, 11 for the second, eight for the third, and nine for the fourth. The order of the items was randomized so that the task classes of adjacent items would be heterogeneous. The revised test was tried with three high school graduates, who were asked to think aloud as they answered the items. The only resulting adjustment was to change the order of two items so that the first item would not be especially difficult to answer. The resulting version of the pretest was presented to the teacher of the English classes to obtain additional feedback; the main revision that resulted was the inclusion of the example item following the instructions about how to answer test items. The final version of the pretest, as it was presented to learners, appears in Appendix B.
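The Cronbach alpha coefficients reported for the tests follow directly from the item-score matrix. The following is a minimal sketch of the standard formula, using invented data rather than the study's scores.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a test, given one list of scores per item.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[p] for item in items) for p in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Two perfectly consistent dichotomous items over four examinees:
print(cronbach_alpha([[0, 1, 0, 1], [0, 1, 0, 1]]))  # 1.0
```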

Posttest

Acquisition, defined as the immediate ability of participants to apply learning from instruction, practice, and feedback, was measured by the posttest, a Web-based instrument administered after practice. The posttest was identical for all groups and consisted of the same 40 items as the pretest, presented in the same sequence. For each item, participants answered whether the subject and main verb agreed, and additionally “Which of the following words is the subject of the sentence?” and “Which of the following words is the main verb of the sentence?” For each of the 40 items, participants were also asked which of the four sub-rules applied to the sentence at hand. A sample item of the posttest appears in Figure 3.3. The internal consistency of the posttest was calculated as a Cronbach alpha coefficient of .86.


Transfer Test

Transfer, defined as the ability of participants to apply skills learned from instruction, practice, and feedback to a novel task, was measured by a paper instrument in which participants edited a two-page document about guitars comprising 33 sentences: 16 included errors of subject-verb agreement, 12 were correct applications of sub-rules associated with the task classes, and five were transitional sentences that made the passage coherent. The transfer task was identical for all groups. The internal consistency of the transfer test was calculated as a Cronbach alpha coefficient of .78. See Appendix C for the complete transfer test. The transfer instrument was developed by consulting several resources that described the history and mechanics of guitars and employing the services of the editor to construct a cohesive text. The researcher altered text from several documents to embed errors in subject-verb agreement and to include sentences that were correct but might seem incorrect to participants who had not been trained in the four task classes. The editor offered additional sentence constructions that characterized the types of errors associated with the task classes. The resulting transfer test contained 28 items: 11 that tested transfer performance on task class 1, five for task class 2, six for task class 3, and five for task class 4.

Time in Initial Instruction and Number of Practice Trials

Each computer running the instructional program recorded time in seconds participants spent with the initial instruction, which consisted of the Introduction and The Basics, as well as the number of trials participants executed during practice sessions.

Procedure

The procedure involved initial meetings with participants and collection of permission forms, administration of the pretest and study of initial computer-based instruction, practice and feedback in four sessions as the experimental treatment, administration of the posttest, and administration of the transfer test. These phases of the procedure are described below.


Initial Meetings

Two weeks prior to the experiment, the teacher of the course introduced the researcher to the students in each of the five classes and briefly described the research procedure. The teacher explained that, for the week of the experiment, the classes would be held in a different classroom that had enough computers to accommodate all students. The researcher explained to students (a) the general research objectives, (b) the procedures taken to protect the confidentiality of participants and the freedom of students not to allow their data to be included in the study, and (c) the fact that neither their performance nor their participation would affect their grades. The teacher explained that the researcher would provide a pizza party as a reward to all participants who returned signed forms allowing their participation in the study. Following this explanation, the researcher collected signed informed consent forms, parental consent forms, and student assent forms. The researcher attended classes during the next week to obtain forms and answer questions about the research; students asked no questions about the research. Due to the vigilance of the teacher and the researcher’s accounting of forms received, the researcher obtained the required permission from every student in all five classes before the end of the experiment. During the week leading up to the experiment, the school’s system administrator installed the instructional program on the 27 computers in the classroom in which the experiment took place and tested the retrieval of user data.

Pretest Administration

In the first session, the teacher and the researcher directed participants to the computers to which they had been assigned, indicated by their names attached to the computer monitors, and gave them the pretest to complete. Participants completed and returned their pretests and immediately began the initial instruction by logging on to the program.

Pre-Instruction and Information Presentation

Participants proceeded through the self-paced instruction, including the section called “Introduction” and the next section called “The Basics,” both of which are


described above, and then they logged out of the program, concluding the first computer-based session.

Practice Administration

On the next day, participants logged in to the computers they had used previously and immediately began practice exercises. Each computer was designated for one of the four treatment conditions or the control condition. Participants in the control condition were directed to websites at which they engaged in grammar learning activities unrelated to subject-verb agreement, including capitalization and punctuation. They chose from among exercises that involved crossword puzzle completion, matching, true-false, and fill-in-the-blank problems, each of which provided feedback on the answers students gave. The exercises were chosen to provide participants in the control group learning activities that were unrelated to the subject-verb agreement exercises performed by the experimental groups and to advance other grammar skills appropriate to grade level. Participants in treatment conditions were provided a constant set of directions that appeared at the top of the screen for every practice item. Underneath the “Directions” frame, the “Sentence” frame provided one sentence at a time, randomly selected from the database of items appropriate to the treatment condition, each of which offered a choice of two options for either the subject or the main verb. Participants were directed to choose one of the two options in the “Choices” frame. After they made the choice by clicking on the word they selected, feedback appeared in the “Feedback” frame, indicating whether the choice was correct and describing the reason the choice was correct or incorrect. A frame on the right side of the screen presented the time elapsed in the current 15-minute session and the time remaining. In addition, that frame tracked the cumulative numbers of correct and incorrect choices, along with the percentage of items answered correctly, for both the current session and all sessions accumulated.
Figure 3.2 illustrates a screen a learner could encounter after answering the 26th item in the first of the four practice sessions correctly.


Figure 3.2. Hypothetical screen display after a learner has selected a correct choice in the first practice session.

After learners received and presumably read the feedback, they clicked the “Continue” button to advance to the next practice item. After participants completed the first 15-minute practice session, the program opened an “intermission” activity designed to provide relief from the repetitive task of responding to questions about subject-verb agreement by presenting a task that involved visual discrimination. Following the intermission, the subject-verb practice resumed in the next 15-minute practice session. At the end of the second practice session, participants were provided summary results of their performance, identical to those presented in the final performance frame. When they clicked the “Exit” button, the program closed to conclude the session.


On the next day, participants repeated the procedure of the preceding day by logging on to the computers they had used previously. Those assigned to the control condition were provided new websites to explore that did not involve subject-verb agreement. Those assigned to experimental groups were given a similar regimen of practice tasks; the order of practice tasks and the database from which items were drawn were governed by the treatment group to which participants were assigned. Again, the two 15-minute practice sessions for experimental groups were separated by a novel activity that involved visual discrimination. After completion of the third and fourth practice sessions, the final report of participants’ practice results concluded the practice portion of the instructional program with a click of the “Exit” button.

Posttest Administration

On the day following completion of the practice, all participants took the posttest, a web-based assessment of their skill in making the subjects and main verbs of sentences agree. The posttest presented the same 40 items as the pretest and asked additional questions for each item. Following the questions of (a) which word in the sentence was the subject, (b) which word was the main verb, and (c) whether the subject and main verb of the given sentence agreed, the posttest asked (d) which of the four sub-rules (as task classes) applied to the problem of determining subject-verb agreement. Summary statements of each sub-rule were provided for each item. As an example, the first item of the posttest is displayed in Figure 3.3.


Figure 3.3. Sample item from the posttest.

Transfer Test Administration

As the final participant task of the experiment, learners completed the transfer test, which required participants to use the knowledge gained from instruction and practice, along with their prior knowledge, to edit a document, making subjects and


main verbs of each sentence agree. Participants were prompted that their editing should address only the subject-verb agreement of the sentences. The teacher and the researcher handed out the paper-based transfer test described in the Instruments section, and participants completed and returned it.

Research Design and Analysis

Test and Item Analyses

Prior to the hypothesis tests, three initial analyses were conducted. First, the pretest and posttest scores of the control group were analyzed to evaluate the potential of a test effect, which would indicate that completion of the pretest had affected scores on the posttest, which administered the same items. To test for the test effect, a t-test for related scores was conducted on the pretest and posttest scores of the control group. Second, item analyses were conducted on the pretest and posttest to evaluate the internal reliability of the tests and the predictive power of each item on total test performance. Internal reliability was analyzed by computing a coefficient alpha for the pretest and posttest. The contributions of specific items to the whole test score were analyzed primarily with point-biserial correlations. The third analysis involved the transfer test, for which the expert editor and the researcher scored transfer items independently. The analysis began with computation of a coefficient alpha for transfer test scores. After differences between the two scorers’ ratings were isolated, each discrepancy was discussed to arrive at mutual agreement on how each discrepant item should finally be scored.
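The point-biserial correlation used in the item analyses relates a dichotomous item score (correct/incorrect) to the total test score. The sketch below implements the standard formula; the example data are invented for illustration and are not the study's.

```python
import math

def point_biserial(binary_item, totals):
    """Correlation between a dichotomous (0/1) item and total test scores.

    r_pb = (M1 - M0) / SD_total * sqrt(p * (1 - p)),
    where M1/M0 are mean totals for examinees scoring 1/0 on the item
    and p is the proportion scoring 1.
    """
    n = len(binary_item)
    ones = [t for b, t in zip(binary_item, totals) if b == 1]
    zeros = [t for b, t in zip(binary_item, totals) if b == 0]
    m1, m0 = sum(ones) / len(ones), sum(zeros) / len(zeros)
    mean_t = sum(totals) / n
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    p = len(ones) / n
    return (m1 - m0) / sd_t * math.sqrt(p * (1 - p))

# An item that perfectly separates high from low total scorers:
print(point_biserial([1, 1, 0, 0], [3, 3, 1, 1]))  # 1.0
```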

Hypothesis Tests

For all hypothesis tests of significance, α was set at .05. Three primary analyses were conducted to test the five hypotheses of the study. To test Hypothesis 1, that random-order practice would result in fewer practice trials than block-order practice, a t-test of independent groups was conducted on the number of trials completed by the HCI and LCI groups. For Hypothesis 2, that block-order practice would result in greater or equal acquisition, a t-test was conducted with the LCI and HCI groups on


posttest performance. For Hypotheses 3, 4, and 5, a one-way analysis of variance (ANOVA) was conducted with the four experimental groups on transfer test scores to investigate whether random-order practice would surpass block-order practice, whether shifted-order practice would surpass random-order practice, and whether reversed shifted-order practice would result in the poorest performance of all experimental groups. In secondary analyses, chosen to control for prior knowledge and the number of practice trials participants completed, analyses of covariance (ANCOVAs) were conducted. The first ANCOVA used pretest scores as the covariate, to account for prior knowledge, in investigating the differences among groups on acquisition and transfer. The second employed both prior knowledge and the number of practice trials completed to test the possibility that the number of practice trials, in conjunction with prior knowledge, accounted for some variance in transfer test performance among treatment groups. Tertiary analyses were conducted after examining the results of the primary and secondary analyses to further explore phenomena found, or not found, in those analyses. These analyses included correlations among all variables (excluding sub-tests of each construct) and an analysis of selected participants whose learning gains were evident.
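The one-way ANOVA described above partitions variance between and within groups. The following minimal sketch computes the F statistic directly from group score lists; the data in the example are illustrative only, not the study's scores.

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over k independent groups."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    # Between-groups sum of squares: weighted squared deviations of means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-groups sum of squares: deviations from each group's own mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, N - k
    return (ss_between / df_between) / (ss_within / df_within)

# Two small illustrative groups with clearly different means:
print(one_way_anova_F([[1, 2, 3], [4, 5, 6]]))  # 13.5
```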


CHAPTER 4

RESULTS

The results are presented in six sections. The first section reports the tests of the assumptions required for the statistical tests used to test the hypotheses. The second section presents the analysis of pretest and posttest scores within the control group to assess a potential test effect. The third section reports the results of item and test analyses for each of the dependent measures. The fourth section presents the results of the primary analyses, in which all five hypothesis tests are reported. The fifth section reports the results of secondary, exploratory analyses that were planned before the initial analyses. The final section presents the results of the tertiary analyses, which were inspired by the findings, or lack thereof, of the primary and secondary analyses.

Tests of Assumptions

Prior to conducting item analyses, test analyses, and statistical tests of mean differences, a series of tests of the assumptions associated with the various statistical tests was conducted. The sequence of the assumption tests was as follows: (a) assumptions of normality and homogeneity of error variance for the experimental groups on the pretest, posttest, and transfer test; (b) assumptions of normality and homogeneity of error variance for the control group on the pretest and posttest; and (c) the special assumptions of the analysis of covariance. Tests of assumptions conducted for t-tests and analyses of variance (ANOVAs) with the experimental groups indicated that the assumptions were not violated beyond the degree to which the statistical tests are sufficiently robust to withstand minor violations.


Tests of the assumptions of normality and homogeneity of error variance of the distributions of scores on the pretest and posttest were conducted for the sample as a whole, excluding the control group (n = 71). As one indication that the assumption of normality was not violated, Kolmogorov-Smirnov tests of the conformity of the sample distributions of the pretest and posttest with a normal distribution yielded maximum deviation statistics D of .11 for the pretest (p = .20) and .08 for the posttest (p = .20). The assumption of homogeneity of error variance was satisfied by Levene’s test statistics for the pretest, F (3, 67) = 2.31, p = .08, the posttest, F (3, 67) = 1.00, p = .40, and the transfer test, F (3, 67) = 0.89, p = .45. These tests fulfilled the general assumptions with regard to the four experimental groups combined. Additional assumptions had to be tested for the test effect with the control group and for the ANCOVAs with the experimental groups. Next, each group’s scores were analyzed independently for the assumptions of normality and homogeneity of variance. Kolmogorov-Smirnov tests of the conformity of the sample distributions for each experimental group with normal distributions on the pretest, posttest, and transfer test indicated that the assumption of normality within experimental groups was not violated beyond the degree to which the t-test and ANOVA are robust to minor violations of the normality assumption (see Table 4.1). Levene’s tests for homogeneity of variance across groups indicated that the assumption of equal variance was not seriously violated for any of the scales (see Table 4.2). In summary, the tests of normality and homogeneity of variance, along with the independence of group observations, satisfied the general assumptions for comparing means with asymptotic tests. Additional assumptions had to be tested for the analyses of covariance (ANCOVAs) with the experimental groups and the two covariates on the posttest and transfer test.
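Levene's test, used above for homogeneity of variance, is essentially a one-way ANOVA computed on the absolute deviations of scores from each group's center. The following is a mean-centered sketch with invented data, offered only to make the statistic concrete.

```python
from statistics import mean

def levene_W(groups):
    """Levene's W statistic: ANOVA F computed on |x - group mean|."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    z = [[abs(x - mean(g)) for x in g] for g in groups]  # absolute deviations
    zbar_i = [mean(zi) for zi in z]                      # per-group mean of z
    zbar = sum(sum(zi) for zi in z) / N                  # grand mean of z
    between = sum(len(zi) * (zbi - zbar) ** 2 for zi, zbi in zip(z, zbar_i))
    within = sum(sum((zij - zbi) ** 2 for zij in zi)
                 for zi, zbi in zip(z, zbar_i))
    return (N - k) / (k - 1) * between / within

# Groups with identical spread give W = 0 (perfect homogeneity of variance):
print(levene_W([[1, 2, 3, 4], [11, 12, 13, 14]]))  # 0.0
```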
Although the two covariates identified for secondary analyses, pretest scores and the number of practice trials, were both controlled experimentally by random assignment of participants to groups, both were retained for the analyses of covariance. Because the number of practice trials did not correlate with either the posttest scores, r = .017, p = .90, or the transfer test scores, r = -.001, p = .99, that variable was not expected to account for any of the variance. Pretest scores, however, correlated significantly with both the posttest scores, r = .63, p < .0005, and the transfer test scores, r = .63, p < .0005.

Table 4.1 Kolmogorov-Smirnov Test of Normality within Experimental Groups

Scale      Group                    Statistic   df    p

Pretest    Block (LCI)                .14       16   .20
           Random (HCI)               .14       19   .20
           Block-Random (SCI-I)       .16       20   .18
           Random-Block (SCI-II)      .17       16   .20

Posttest   Block (LCI)                .10       16   .20
           Random (HCI)               .18       19   .10
           Block-Random (SCI-I)       .17       20   .14
           Random-Block (SCI-II)      .11       16   .20

Transfer   Block (LCI)                .13       16   .20
           Random (HCI)               .09       19   .20
           Block-Random (SCI-I)       .14       20   .20
           Random-Block (SCI-II)      .18       16   .17

Table 4.2 Levene’s Test of Homogeneity of Variance across Experimental Groups

Scale F df1 df2 p

Pretest 2.31 3 67 .08

Posttest 1.00 3 67 .40

Transfer .89 3 67 .45

The five special assumptions of the two-way ANCOVA were tested with regard to the pretest scores and the number of practice trials. The number of practice trials did not meet the assumption that a covariate is measured before the experimental treatment; instead, it was measured during the treatment. The assumption of reliability of the covariate was met for the pretest, coefficient α = .76; the number of practice trials was a direct measure rather than a sample of items representing a construct. The assumption that multiple covariates not be strongly correlated was met, r = .06, p = .56. The assumption of linearity of the covariates did not appear to be violated, as illustrated by Figures 4.1 through 4.4, which show that the slopes representing the relations between scores are similar.

Figure 4.1. Linearity of covariance of posttest with pretest for experimental and control groups.


Figure 4.2. Linearity of covariance of transfer test with pretest for experimental and control groups.


Figure 4.3. Linearity of covariance of posttest with practice trials for experimental groups.


Figure 4.4. Linearity of covariance of transfer test with practice trials for experimental groups.

Finally, the assumption of homogeneity of regression slopes was satisfied (that is, no significant interaction was found between treatments and covariates) for pretest scores with posttest scores, F (3, 71) = .24, p = .87, for pretest scores with transfer test scores, F (3, 73) = 1.06, p = .37, for the number of practice trials with posttest scores, F (3, 71) = .13, p = .94, and for the number of practice trials with transfer test scores, F (3, 73) = .29, p = .83. In summary, the only assumption violated was that the number of practice trials was measured during the treatment rather than before it. This violation was not viewed as problematic because no interaction was found between treatments and practice trials on the posttest or transfer test, and because the number of trials was a true measure rather than an estimate based on a sample of items.


Test Effect

Examining the potential for a test effect attributable to the pretest-posttest design required that participants in the control group take both tests in the same way as the experimental groups and that the difference between their pretest and posttest scores be analyzed for evidence of a test effect. Under the assumptions of normality and equal variance, a paired-samples t-test comparing the mean scores of the 21 participants in the control group supported the conclusion that no test effect occurred, t = 0.69, p = .50. In fact, the participants in the control group performed slightly worse on the posttest, M = 26.48, SD = 6.86, than on the pretest, M = 27.19, SD = 5.96, though the difference was not statistically significant. Therefore it was assumed that no test effect had occurred, and the analysis proceeded to an examination of the items and tests as measures of their respective constructs.

Item and Test Analyses

The use of the pretest, posttest, and transfer test for hypothesis testing called for some preliminary analyses. The construct validity and internal reliability of the scales to measure the constructs of interest were analyzed. Prior to the experiment, the expert editor and the teacher of the classes provided critiques and recommendations for revisions of test items and instrument format, and a formative evaluation with three participants also informed the design of the instruments. After the experiment, the instruments and items were further assessed.

Given the satisfaction of the normality and homoscedasticity assumptions, Kuder-Richardson 20 (KR20) procedures, with all items included, provided measures of internal reliability for each instrument; the procedures included all participants who took the given test. The KR20 coefficients were .76 for the pretest, .86 for the posttest, and .82 for the transfer test. The standard errors of measurement for the 40-item pretest and posttest, the items of which asked whether the subjects and main verbs of sentences agreed, were 2.67 and 2.65, respectively. The standard error of measurement for the entire 33-sentence transfer test, which asked participants to edit the sentences to make subjects and main verbs agree, was 2.48.

To assess the reliability of each item as a measure of the whole construct of subject-verb agreement on each test, point-biserial correlations and the calculated "KR20 of the test if the item were deleted" were used to assess the impact of each item on the whole test. Moreover, difficulty, standard deviation, and several other measures were used to examine each item's behavior with respect to the whole test (see Appendix D). From these analyses, the pretest and posttest were accepted without modifications (omission of items) as measures of prior knowledge and post-instruction knowledge.

The analysis of the transfer test items as a measure of knowledge transfer, operationalized as skill in editing text, called for a somewhat different process. The sentences to be edited were situated in a coherent passage of text that included sentences constructed to test specific sub-rules of the main rule and sentences constructed to join the test sentences together. The latter type of sentence contained no incorrect construction and was not intended to test the skill of making subjects and main verbs agree. The analysis of the transfer test and its items began with the scoring of all 33 sentences by two raters, followed by resolution of discrepancies in the ratings, and concluded with the analysis of items. Because the sentences could be edited by participants in various ways, the expert editor and the researcher scored all of the items independently, and the researcher examined the correspondence of the scorings. The Pearson correlation of inter-rater reliability was .96. Subsequently, the editor and the researcher examined the 45 scores, out of 3,012, that differed between raters. The expert editor had made 12 scoring errors, and the researcher had made 23 scoring errors and 10 data-entry errors of either the researcher's scores or the editor's.
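The standard errors of measurement reported above follow from the conventional relation between a scale's reliability and the spread of observed scores. A worked check (a sketch only: it uses the whole-sample standard deviations from Table 4.3, which are not exactly the values the study would have used, so the results agree only approximately):

```latex
\begin{align*}
\mathrm{SEM} &= SD\,\sqrt{1 - r_{\mathrm{KR20}}}\\
\mathrm{SEM}_{\text{pretest}} &\approx 5.34\sqrt{1 - .76} \approx 2.62
  \quad (\text{2.67 reported})\\
\mathrm{SEM}_{\text{posttest}} &\approx 7.00\sqrt{1 - .86} \approx 2.62
  \quad (\text{2.65 reported})
\end{align*}
```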
They debated two of the 45 items and eventually agreed on how to score both of them. The resulting KR20, including all 33 items, was .78, but several of those items were not pertinent to the construct of the test. Five of the 33 sentences in the transfer test were not designed to test any of the four sub-rules, or task classes, and those five items were excluded from further analysis. Furthermore, three of the transfer test items designed to test specific sub-rules had negative point-biserial correlations and were excluded from the analyses of transfer performance by experimental groups because they did not discriminate among participants' skill levels on the transfer task. Consequently, 24 of the 33 original transfer test items were included in the analysis of transfer performance by the experimental groups; the KR20 for these 24 items was .82. In summary, all 40 items of the pretest and posttest were used in subsequent analyses, and 24 of the original 33 sentences were used as transfer test items.
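The two item statistics relied on in this section, the KR20 reliability coefficient and the point-biserial item-total correlation, can be computed directly from a matrix of dichotomous item scores. The sketch below uses a small hypothetical 0/1 response matrix, not the study's data:

```python
# Item-analysis statistics: KR-20 and the point-biserial item-total
# correlation, computed from a hypothetical 0/1 score matrix.
from math import sqrt

def kr20(scores):
    """Kuder-Richardson formula 20 for dichotomous (0/1) item scores.
    scores: list of per-person lists, one 0/1 entry per item."""
    k = len(scores[0])                     # number of items
    n = len(scores)                        # number of persons
    totals = [sum(person) for person in scores]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / (n - 1)
    pq = 0.0
    for i in range(k):
        p = sum(person[i] for person in scores) / n  # item difficulty
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

def point_biserial(scores, item):
    """Correlation of one 0/1 item with the total test score."""
    n = len(scores)
    totals = [sum(person) for person in scores]
    xs = [person[item] for person in scores]
    mx = sum(xs) / n
    mt = sum(totals) / n
    cov = sum((x - mx) * (t - mt) for x, t in zip(xs, totals))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    st = sqrt(sum((t - mt) ** 2 for t in totals))
    return cov / (sx * st)

# Hypothetical responses: 6 persons x 4 items.
data = [[1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0]]
print(round(kr20(data), 3))              # -> 0.769
print(round(point_biserial(data, 0), 3)) # -> 0.877
```

Note that this point-biserial includes the item in the total score; item analyses often also report the "corrected" version with the item removed from the total, which runs slightly lower and is what makes a negative value a strong sign of a non-discriminating item.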

Descriptions of Groups

The means and standard deviations for all groups on the pretest, posttest, transfer test, and number of practice trials completed are presented in Table 4.3.

Table 4.3 Descriptive Statistics on the Pretest, Posttest, Transfer Test, and the Number of Practice Trials

                           Pretest             Posttest            Transfer Test       Practice Trials
Group                      M      SD     n     M      SD     n     M      SD     n     M       SD      n

Block (LCI)                28.06  5.05   16    27.75  6.93   16    14.75  5.07   16    480.38  248.34  16
Random (HCI)               26.16  6.50   19    25.21  7.72   19    12.68  4.68   19    474.79  295.27  19
Block-Random (SCI-I)       25.65  4.75   20    25.25  7.53   20    13.15  5.94   20    408.95  237.58  20
Random-Block (SCI-II)      25.13  4.21   16    26.19  6.18   16    12.63  4.29   16    361.00  159.76  16
Control (no practice)      27.19  5.96   21    26.48  6.86   21    13.10  6.00   21    -       -       -

Total                      26.48  5.34   92    26.12  7.00   92    13.18  5.19   92    431.86  237.91  71


Tests of Hypotheses

The first hypothesis, that learners in the LCI treatment group would perform more practice trials than the HCI group, was not supported by the data; no significant difference was found, t = .13, p = .90. The LCI group performed an average of 505 practice trials, whereas the HCI group performed an average of 493 practice trials, a difference of 12. The second hypothesis, that the HCI group would equal or exceed the acquisition of the LCI group as measured by the posttest, was supported by the data in that no significant difference was found between the HCI and LCI groups on posttest performance, t = 1.02, p = .32. Posttest performance by the HCI group (M = 25.21, SD = 7.72) was slightly lower than that of the LCI group (M = 27.75, SD = 6.93), d = .34. The data did not support the third, fourth, or fifth hypotheses, which predicted different transfer results among the treatment groups. The omnibus test for a one-way analysis of variance of the transfer test scores revealed no significant differences among the treatment groups, F (3, 69) = .57, p = .64 (see Table 4.4).
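The effect size reported for the LCI-HCI posttest comparison is consistent with a pooled-standard-deviation Cohen's d (a sketch assuming the pooled-variance formula and the group statistics above; the study does not state which variant was computed):

```latex
\begin{align*}
d &= \frac{M_{\mathrm{LCI}} - M_{\mathrm{HCI}}}
          {\sqrt{\dfrac{(n_1 - 1)SD_1^2 + (n_2 - 1)SD_2^2}{n_1 + n_2 - 2}}}
   = \frac{27.75 - 25.21}{\sqrt{\dfrac{15(6.93^2) + 18(7.72^2)}{33}}}
   \approx \frac{2.54}{7.37} \approx .34
\end{align*}
```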

Table 4.4 One-Way Analysis of Variance Summary for Transfer Test Scores

Source df SS MS F p

Between Groups 3 42.792 14.26 .57 .64

Within Groups 69 1741.126 25.23

Total 72 1783.918

Secondary Analyses

Prior to testing the hypotheses, several other analyses were planned to examine the data for evidence that would further elucidate the findings; these were conducted after testing the hypotheses. First, the amount of mental effort participants reported investing in the practice trials was analyzed with a one-way ANOVA. Second, each of the four subscales, corresponding to the four sub-rules (task classes), was examined separately with one-way ANOVAs on the posttest and transfer test. Third, two one-way analyses of covariance (ANCOVAs) employed the pretest as a covariate to examine treatment group means adjusted for prior knowledge, as measured by pretest scores; one ANCOVA examined posttest performance and the other examined transfer test performance. Finally, two two-way analyses of covariance were conducted to examine (a) posttest and (b) transfer test performance with two covariates, pretest scores and number of practice trials. All of these analyses included the experimental groups but not the control group. None of them revealed statistically significant differences, but some provided data that provoked further analysis.

Mental Effort

The Mental Effort Scale was administered after each of the four practice sessions. The mean of the four practice sessions was calculated for each participant and the group means were analyzed. A one-way ANOVA was conducted to compare mean mental effort reported during practice trials among the experimental groups. No statistically significant differences were found among the groups, F (3, 68) = 0.82, p = .49. The means and standard deviations of mental effort for each group are presented in Table 4.5.

Table 4.5 Mean Mental Effort Invested During Practice

Group                     M      SD     n

Block (LCI)               5.46   2.02   17
Random (HCI)              5.78   1.83   19
Block-Random (SCI-I)      5.41   1.52   20
Random-Block (SCI-II)     6.20   1.16   16

Total                     5.70   1.66   72


Sub-Scale Analyses

The items of the pretest, posttest, and transfer test were divided into subscales according to the sub-rules the individual items were constructed to test. The purpose of decomposing the scale was to investigate whether some of the four subscales conformed to the hypotheses while others did not; such evidence would implicate either the assumed hierarchy of sub-rules under the general rule or the internal reliability of the instrument as a measure of one or more of the sub-rule constructs. The resulting ANOVAs revealed no statistically significant differences among sub-scales by group on either the posttest or the transfer test. Detailed analyses of the sub-scales are presented in Appendix D.

Pretest as Covariate

Given the satisfaction of assumptions described above (in the section titled “Tests of Assumptions”), a one-way ANCOVA was conducted to examine the hypotheses with statistical control of prior knowledge of the domain. The domain was defined here as the agreement of subjects and main verbs of sentences, constrained by the four sub-rules, which was measured with the pretest before instruction began. No significant differences were found among the experimental groups on either the posttest, F (3, 66) = 0.31, p = .82, or the transfer test, F (3, 68) = 0.10, p = .96, with prior knowledge controlled.

Pretest and Number of Practice Trials as Covariates

A two-way ANCOVA was conducted to examine the effects of the experimental treatments on posttest and transfer test scores, controlling for both prior knowledge (pretest scores) and the number of practice trials. No significant differences were found on the posttest, F (3, 65) = 0.34, p = .80, or the transfer test, F (3, 67) = 0.15, p = .93.

Re-Analysis Omitting Participants Who Performed More Practice Trials

Another secondary analysis examined the results with regard to the number of practice trials participants performed during practice. Some participants performed a large number of practice trials (maximum = 1567). The sample was filtered to exclude participants who performed more than the median number of practice trials (377). So filtered, the resulting sample was analyzed with all of the previously conducted tests: (a) an independent-samples t-test of the Block (LCI) and Random (HCI) groups on the number of practice trials, t = 0.14, df = 15, p = .89; (b) an independent-samples t-test of the LCI and HCI groups on posttest performance, t = .82, df = 14, p = .46; (c) two one-way ANOVAs examining the effects of group membership on acquisition, F (3, 31) = 0.80, p = .50, and transfer, F (3, 32) = 0.84, p = .48, respectively; (d) two one-way ANCOVAs with prior knowledge as the covariate on acquisition, F (3, 30) = 0.53, p = .66, and transfer, F (3, 31) = 0.45, p = .70; and (e) two two-way ANCOVAs with prior knowledge and number of practice trials as covariates on acquisition, F (3, 29) = 0.49, p = .69, and transfer, F (3, 30) = 0.48, p = .70. In brief, no significant differences were found in any of these secondary analyses.

Time in the Initial Instruction

Although time in practice (the treatment) was experimentally controlled in the current study, the time participants spent in the initial instruction (the sections called Introduction and The Basics) was learner controlled, or voluntary. That is, participants could spend as much time as they wanted reading the initial instructional materials or could speed through them without reading at all. From pilot trials, participants were expected to spend approximately 20 minutes. Instead, the time in initial instruction varied from 25 seconds to 40 minutes and 20 seconds, roughly a 100:1 ratio of longest to shortest time spent. This variable was intended to be equally distributed across groups through random assignment. However, an analysis of variance revealed a statistically significant difference among experimental groups in time spent in initial instruction, F (3, 69) = 2.82, p = .05 (see Table 4.6 for descriptive statistics for all experimental groups). Tukey HSD post-hoc tests indicated a significant difference between the Random (HCI) and Random-Block (SCI-II) groups, p = .04. The SCI-II group (M = 13:19, SD = 9:26) spent 5 minutes, 41 seconds (74%) more time in the initial instruction than the HCI group (M = 7:38, SD = 4:07). Because the participants were randomly assigned to groups, were unaware of their group assignment, and spent their time in instruction before the experimental treatment began (and thus all events and conditions before and during their time in instruction were the same for all experimental groups), the differences among groups must be attributed to sampling error.

Table 4.6 Descriptive Statistics of Time Spent in Initial Instruction by Experimental Treatment

                                              Standard      95% CI*       95% CI*
                                              Error of      Lower         Upper
Treatment              n     M      SD        Mean          Bound         Bound      Minimum   Maximum

Block (LCI)            17    11:46  5:30      1:20          8:57          14:36      5:11      23:50
Random (HCI)           19    7:38   4:07      0:56          5:39          9:38       1:48      14:39
Block-Random (SCI-I)   20    9:33   5:13      1:10          7:06          11:59      2:37      19:40
Random-Block (SCI-II)  17    13:19  9:26      2:17          8:28          18:10      0:25      40:20

Total                  73    10:27  6:31      0:45          8:55          11:58      0:25      40:20

Note. * CI = Confidence interval

Tertiary Analyses

Following the primary analyses that tested the hypotheses and the planned secondary analyses that examined relations among the secondary variables (number of practice trials and mental effort), a tertiary analysis was conducted to explore the data further for meaning. Two approaches were used in the tertiary analyses. One approach was to examine all correlations among independent and dependent variables and to infer potentially meaningful relations. The other approach was to examine the data from the limited set of participants who exhibited learning gains, evidenced by increased scores from pretest to posttest.

Correlations among Dependent Measures

The intercorrelations of all of the dependent measures were analyzed in the interest of inferring relations among them. The results are presented in Table 4.7.


Table 4.7 Intercorrelations and Probabilities of Data for All Dependent Measures

Measure                             1      2      3      4      5      6      7      8      9

1. Time in instruction        r     —
                              p
                              n     73

2. Pretest                    r    .04     —
                              p    .75
                              n     73     73

3. Posttest                   r    .23   .60**    —
                              p    .06    .00
                              n     71     71     71

4. Transfer test              r    .05   .60**  .77**    —
                              p    .67    .00    .00
                              n     73     73     71     73

5. Number of practice trials  r   -.26*  -.02    .02   -.00     —
                              p    .03    .85    .89    .99
                              n     73     73     71     73     73

6. Mental effort              r    .11   -.26*  -.32** -.26*   .03     —
                              p    .34    .03    .01    .03    .82
                              n     73     73     71     73     73     73

7. Subject identification†    r    .04   .45**  .81**  .64**   .10   -.32**    —
                              p    .75    .00    .00    .00    .43    .01
                              n     71     71     71     71     71     71     71

8. Main verb identification†  r    .18   .49**  .73**  .58**   .09   -.28*   .68**    —
                              p    .13    .00    .00    .00    .48    .02    .00
                              n     71     71     71     71     71     71     71     71

9. Rule identification†       r    .26*  .44**  .68**  .55**   .07   -.33**  .60**  .62**    —
                              p    .03    .00    .00    .00    .54    .01    .00    .00
                              n     71     71     71     71     71     71     71     71     71

Note. * p < .05; ** p < .01. †The measures Subject identification, Main verb identification, and Rule identification refer to the frequencies (out of 40 possible) with which participants correctly identified on the posttest the subject, the main verb, and the sub-rule that applied to the sentence, respectively.


Analysis of Data from Participants with Learning Gains

Under the assumption that participants who engaged in the initial instruction and studied the feedback during practice would demonstrate positive learning gains by scoring appreciably higher on the posttest than on the pretest, cases with posttest scores at least three points higher than pretest scores were selected for analysis. This selection yielded 23 participants: four in the LCI group, six in the HCI group, six in the SCI-I group, and seven in the SCI-II group. For the purpose of this analysis, these participants are called "gainers." One-way ANCOVAs were conducted on the posttest and transfer test scores, both with the pretest score as covariate to control statistically for prior knowledge. Preliminary tests with the gainer sample were conducted to ensure that there was no violation of the assumptions of normality, linearity, homogeneity of variance, homogeneity of regression slopes, and reliable measurement of the covariate. The one-way ANCOVA on posttest scores with pretest score as covariate did not reveal a significant overall difference, F (3, 18) = 1.95, p = .16. The pretest score accounted for more of the variance, η2 = .81, than the treatment, η2 = .25. In a comparison of adjusted means, the SCI-I group significantly outperformed the HCI group, p = .03, with a difference in adjusted means of 2.90 points against a theoretical maximum of 40 (see Table 4.8 and Figure 4.5). The one-way ANCOVA on the transfer test with pretest as covariate also did not reveal a significant overall difference, F (3, 18) = 2.10, p = .14. The pretest score accounted for more of the variance, η2 = .60, than the treatment, η2 = .26. In comparisons of adjusted means, the SCI-I group significantly outperformed the LCI group, p = .03, with a difference in adjusted means of 2.90 (see Table 4.8 and Figure 4.5).

Analyses of variance for gainers by experimental group revealed no significant differences in the number of practice trials performed, F (3, 22) = 0.35, p = .79, or the mental effort invested in the practice trials, F (3, 22) = 0.59, p = .63. Gainers reported investing slightly less mental effort than non-gainers in both the initial instruction and the practice, though the differences were not significant (t = -0.86, p = .37 for the initial instruction and t = -0.72, p = .47 for practice).


Table 4.8 Adjusted Means and Standard Error for Gainers by Experimental Group on Performance and Transfer Controlling for Prior Knowledge

                              Posttest                 Transfer Test
                              Adjusted    Standard     Adjusted    Standard
Treatment Group          n    Mean        Error        Mean        Error

Block (LCI)              4    32.93       1.11         13.83       1.72
Random (HCI)             6    30.77 a     0.89         15.37 b     1.37
Block-Random (SCI-I)     6    33.66 a     0.89         18.90 b     1.37
Random-Block (SCI-II)    7    31.67       0.83         16.30       1.28

Note: Superscripts indicate pairs of means significantly different at the .05 alpha level.

Figure 4.5. Adjusted mean performance of gainers by group on posttest (A) and transfer test (B), controlling for prior knowledge.


CHAPTER 5

DISCUSSION

This chapter (a) summarizes the results of the current study in the context of existing research and the current research questions, (b) presents an interpretation of the results that examines possible reasons for the findings, (c) suggests possible implications for instructional practice, and (d) proposes questions for further research in practice sequence variation. The final summary addresses the implications of the tertiary analysis with respect to instructional theory and practice.

Summary of Results in Context

The contextual interference effect that has been observed in previous studies was not observed in the current study, except when differences emerged from the scores of the few participants who demonstrated learning gains. The analyses that included all participants, especially the comparison of the pretest and posttest scores, which showed no learning gain, revealed that the instructional system did not elicit the intended performance. One likely reason was that consequences for learners were contingent not on the level of performance but merely on participation. Furthermore, the system did not require learners to process the initial instruction or, most importantly, the informative feedback. However, it was not unreasonable to expect that some learners who were intrinsically motivated to acquire the skills would attend to the feedback and, despite the lack of extrinsic reward, acquire the skills to some degree. An analysis of the subset of participants who scored three or more points higher on the posttest than on the pretest (gainers) provided a tentative indication that the instructional system was beneficial for some students, perhaps those who were intrinsically motivated to learn the skill. The findings about this subset are addressed further in the final summary, which is preceded by a discussion of the findings with regard to previous research on the contextual interference effect in the practice of complex cognitive skills and the variable of time in practice.

The Contextual Interference Effect

A major question addressed in this study is whether practice sequence affects the acquisition and transfer of complex cognitive skills and, specifically, whether the contextual interference effect applies to such skills. This question remains unanswered. Although the effect has been observed in some studies of complex cognitive skills (de Croock et al., 1998; van Merriënboer et al., 2002), those studies controlled for the number of practice trials rather than for time spent in practice, as the current study did. Controlling for time in practice, the current study found no significant differences among treatment groups. For skills of this kind, time in practice might explain more of the variance than practice sequence, and thus that rival explanation cannot be ruled out; but the current study was inconclusive because time in practice was not manipulated as an independent variable, and for the various reasons interpreted below.

The current study varied from the previous ones in four main ways. It controlled for time in practice rather than the number of practice trials. It compared not only block and random practice sequences but also block followed by random order and random followed by block order. It involved principle and rule application with verbal skills rather than the ill-structured task of troubleshooting a simulated chemical plant. Finally, it resulted in no statistically significant differences among groups that practiced in different sequences on measures of mental effort during practice, posttest performance, or transfer performance, whether on the whole scales or their subscales, even when pretest scores and the number of practice trials were statistically controlled. From a simplistic perspective, one might conclude that these results call into question the validity of the contextual interference effect for complex cognitive skills when time in practice is controlled. The following interpretation explains why such a conclusion would be unfounded.


Interpretation of Results

To suggest that time in practice accounts for the transfer of complex learning to the exclusion of practice sequence, or that time in practice, rather than the contextual interference effect, was responsible for the results of the de Croock et al. (1998) and van Merriënboer et al. (2002) studies, would overextend inferences from these results. Several reasons other than the control of time in practice might account for findings so starkly different from those of previous studies, including (a) the nature of the verbal skill being trained, (b) the brevity of the intervention, (c) the lack of consequences contingent on participant performance, (d) the failure of the instructional design to teach those participants who were not motivated, or to activate sufficient controlled processing, and thus sufficient germane cognitive load, to effect acquisition, and (e) extraneous interference associated with participant attendance and the classroom environment. All of these potential reasons are interrelated in that redressing one might resolve some aspects of others as well. In the remainder of this section each of these possible reasons is discussed, and the suggestion is made that all of them might account, in part, for the failure of this experiment to find the expected results. Finally, the outcomes of gainers, those who evidently learned from the instruction, are discussed.

Nature of the Skill

First, the subject-verb agreement skill being trained, which is typically manifest as a relatively stable, resilient verbal habit for which participants in this study already had a somewhat automated and partially flawed schema, posed a formidable challenge for the instructional intervention. In this discussion, resilience of verbal behavior denotes the elasticity of word choice combined with a tendency to return to the original configuration of subject and verb form choices rather than permanently change faulty habits. In fact, none of the groups showed noteworthy improvement from pretest to posttest (see Table 4.3). Verbal schemas for learners of first languages are developed very early in life, when speech and language skills are initially acquired (McDonald, 1997). They become automated through repeated use and are especially difficult to alter later in life because of their automation and their continual use or misuse. That automation serves an important purpose in one's intellectual development through the communication of thought. It allows one to rely partly on automatic processing for retrieving the form of the subject or verb to use, consuming negligible short-term memory (STM) resources, while applying controlled processing to the manipulation of a novel combination of words, which consumes far more of the limited STM resources. When complex cognitive tasks involve speaking or writing, the selection of corresponding subject and verb forms is one among many aspects of language construction that is often consigned to automatic processing while controlled processing is allocated to the novel interactions of concepts, ideas, and the ultimate choice of words to communicate shades of meaning. The resulting subject-verb expression reflects the automated schema from early language learning because cognitive resources are largely consumed by the controlled processing required for novel sentence construction. Perhaps because the existing schema was eminently accessible to learners, because modifying faults in the existing schema would have been very difficult, and because no consequences were contingent on modification of the existing schema, many participants may have defaulted to their existing schemata rather than invest the effort to change.

Brevity of the Intervention

The brief intervention was intended to correct grammatical errors that the participants had heard, read, and practiced in their writing and speech for over 15 years. It was probably insufficient in duration to make an appreciable impact on their schemas, regardless of their dedication to the learning tasks. Based on pilot tests of the instruction, participants were expected to spend about 30 minutes reading and reviewing the initial instruction. Instead, the mean time they spent was 10 minutes, 27 seconds. The practice time consisted of four 15-minute sessions, two on one day and two on the next. Considering how long some participants had spent learning incorrect approaches to applying the rule, an hour and ten minutes of instruction, however effective or efficient it might be, was probably insufficient for a durable, voluntary change in individual behavior to take place, and likewise insufficient to change a schema as resilient as subject-verb agreement.


Lack of Consequences

Tasks that carry no consequences contingent on the student’s level of performance are not likely to elicit effort from most students at the secondary level (Michaels, 1977; Slavin, 1977). The modest amount of mental effort participants invested in performing the practice tasks, and hence the low levels of performance and transfer, can be explained in part by the lack of consequences contingent on the level of their performance. The average mental effort score was 5.7 on the scale of 1 to 9, for which “5” indicated “neither high nor low mental effort” and “6” indicated “rather high mental effort.” All participants were required to complete the practice exercises, and their participation was rewarded with a pizza party. However, in accord with standards for treatment of human subjects, participants were offered no incentives to perform well; all participants were rewarded equally. An appeal to their extrinsic motivation in the introduction to the CBI suggested that their ability to obtain or retain a desirable job might depend on their writing ability. That appeal seemed to be ineffective for eliciting the relatively high mental effort that probably would have been required to acquire the intended skills. The lack of consequences might also be responsible for the meager amount of time learners spent on each practice item. The mean number of practice trials completed in 60 minutes of practice, 432, indicates that the average time spent on a practice task was 8.33 seconds, including time to read the sentence, make and select the choice, read the informative feedback, and click the “continue” button.
Assuming constant attention to the CBI throughout the timed period, the average duration of 8.33 seconds would seem sufficient to read the sentence, make a choice, obtain the knowledge of results, and go on to the next item, but not sufficient to read the feedback, which named the subject and main verb and identified them as singular or plural, nor to re-examine the sentence for better understanding of the rule. The lack of consequences for performance allowed students, if they so chose, to click casually through the practice items rather than challenge themselves to excel in the learning tasks. Given the resilience of the skill and the brevity of the instructional intervention, the appeal intended to stimulate motivation was apparently ineffective in the absence of externally imposed consequences.
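The per-item timing figures cited above follow directly from the 60 minutes of practice; a quick check (a sketch, using only the trial counts reported in this chapter):

```python
# Seconds available per practice item for a given number of trials
# completed in the four 15-minute practice sessions.
PRACTICE_SECONDS = 4 * 15 * 60  # 3,600 seconds of practice

def seconds_per_item(trials: int) -> float:
    return PRACTICE_SECONDS / trials

print(round(seconds_per_item(432), 2))  # observed mean: 8.33
print(round(seconds_per_item(240), 2))  # pilot expectation: 15.0
```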

Failure of the Instructional Design to Accommodate Unmotivated Learners

The CBI was designed with an implicit assumption that learners would read the instruction carefully and read the feedback during practice with the aim of mastering the target skills. That assumption was unrealistic. The participants in the experiment did not behave as the participants in the pilot trials had. Whereas pilot trial participants read the instruction and practice feedback carefully, spending about 30 minutes in the initial instruction and accessing about 240 practice items during the 60 minutes of practice, participants in the experiment, on average, did not: they spent about 10.5 minutes in the initial instruction and accessed about 430 practice items. Apparently, only a small percentage of the experimental sample acted similarly to the pilot participants. The pilot trial participants were volunteers who willingly assisted with the tryout, whereas the participants in the test sample were required to participate as an ungraded class assignment. Differences in motivation might have accounted for the differences in time spent by pilot trial participants and test subjects. The volunteers for the pilot test wanted to learn well and perform commensurately to provide valuable data as part of their voluntary effort. Many of the participants in the experiment, who were required to complete the learning tasks, apparently had little motivation to learn the target skills or to perform well on the practice, posttest, or transfer test. This apparent lack of intrinsic motivation is not the subject of this study. Nevertheless, the design of computer-based presentation of instruction must take motivation and self-regulation, or the lack thereof, into account, and the design of this CBI might not have accommodated the motivational attributes of the sample. The CBI allowed participants to progress through the initial instruction at will without necessarily reading or engaging with the instruction.
One participant completed the initial instruction in 28 seconds. That participant obviously did not comprehend the instruction that pilot participants took 30 minutes to process and that the average test participant took 10.5 minutes to complete.


Similarly, the number of practice trials participants completed was highly variable. Three participants completed 1,000–1,600 practice trials during the 60 minutes of practice, an average of 2.3–3.6 seconds per practice item, in contrast to the expected 240 practice trials at 15 seconds per item. Those three participants clearly did not read the feedback, and perhaps not even the sentences themselves. The observed mean across all experimental groups of 430 practice trials, an average of 8.37 seconds per practice trial, suggests that many participants did not read the feedback. Several students were observed spending just one to two seconds on the feedback screen after answering a practice item – long enough to obtain knowledge of results, but probably not long enough to read and process the feedback. (For a sample screen with feedback, see Figure 3.2.) The feedback presentation required no cognitive processing of the feedback before the participant clicked the “Continue” button to proceed to the next practice item. In neither the initial instruction nor the practice trials were participants required to process information before moving on to the next screen. As a result, some participants were apparently engaged in the instruction, but the majority seemed not to be. Some of those who were apparently more engaged in outsmarting the instructional system than in learning sought to accumulate the greatest number of practice trials they could, while maintaining some level of accuracy, by rapidly interpreting presentation cues and responding as quickly as possible rather than studying the item or the feedback. One such participant was observed comparing his number of practice trials and percent-correct scores with another student in the room, who was apparently competing with him simultaneously, and, via instant messaging, with another student who had performed the practice tasks earlier.
Thus at least one participant invented self-interesting tasks to consume available cognitive resources rather than approach the apparently less desirable goal of mastering the target skill, which might otherwise have consumed those resources. If the CBI had been designed in such a way that it (a) required participants to process the initial instruction with an activity, (b) required them to process the feedback before proceeding to the next practice item, and (c) rewarded them for their controlled processing, perhaps participants would have been more engaged in the learning tasks and less inclined to substitute a distraction from the learning task as an alternative outlet for their cognitive activity.

Extraneous Interference Factors

Several factors inherent to classroom experiments in general, and some related to the particular situation, introduced irregularities that might have interfered with the interventions in approximately equal proportion. Some students did not follow directions for logging in and thus required special attention and repeated views of introductory screens. Because many of the seniors were involved in activities (e.g., sports, theater, and debate club) that required modifications of their schedules, several students missed classes and had to make up the activities during later class meetings. Some students arrived at class late and had to be accommodated in ways that may have varied somewhat from those who arrived on time. The end of the semester was near, and some students’ prior performance had presumably already assured their graduation; some simply did not attend classes. None of these factors, however, should have affected any one group more than another. They might have contributed error to the experimental conditions, but that error should have been distributed randomly across groups. All of these factors may have contributed to the failure of the experiment to find the hypothesized results; the effect might yet be found with the same learning objectives if some or all of the factors other than the nature of the skill were redressed in a follow-up experiment. The prospect of finding the expected results in a follow-up study is strengthened by the findings of the tertiary analysis involving only the gainers.

Outcomes of Gainers

The sample subset called gainers, those who scored three or more points higher on the posttest than on the pretest, was selected on the assumption that those with intrinsic motivation to learn from the instruction, however well or poorly it accommodated their preferences, would show some learning gains (cf. Elton, 1998). For learners who were highly motivated to acquire skills (learning motivation), to meet learning objectives (fulfillment motivation), or to obtain scores based on norms (performance motivation), the strengths of the instruction to enhance motivation or to compensate for its lack were of relatively little consequence, because highly motivated learners find a way to learn from instructional materials (Berliner, 1999). Given that the general population is likely to contain a subset that is so motivated, and that, in the current study, the initial instruction and the feedback for practice items contained sufficient information to improve learners’ skills, it was not unreasonable to expect that such a subset of this sample would seek out the information made available by the instruction and learn from it. Moreover, it might be reasonable to expect the analysis of the subset’s scores to provide a glimpse of how a larger sample of similarly motivated participants might perform or, more to the point, how a random sample using instruction that compelled controlled processing of the initial instruction and of feedback on practice items might perform on the posttest and transfer test. Such processing might be compelled by requiring participants to identify the subjects and main verbs of sentences or to identify the rules that apply to a given sentence. Although the ANCOVAs of the tertiary analysis were inadequate for drawing conclusions because of the small extracted sample and the unequal group sizes, group means in the gainer sample conformed quite well to the hypotheses on both the posttest and the transfer test, and the key group (SCI-I) showed significant differences from key contrast groups (HCI on the posttest and LCI on the transfer test) despite the small sample (see Figure 4.5). On the posttest, as hypothesized, the LCI group performed better than or equal to the HCI group, yet the SCI-I group performed better than either group.
On the transfer test, as hypothesized, the HCI group performed better than the LCI group and the SCI-I group again performed better than either of them. Despite the small number of cases analyzed, the data from the subset were consistent with the prediction of the key hypothesis that the shifted block-random sequence (SCI-I) group would outperform the block (LCI) and random (HCI) groups on the transfer test, although the latter difference was not statistically significant.


Implications for Instructional Design and Learning Theory

An experimental study that finds no significant differences among instructional treatments has no direct implications for instructional design. It can, however, have implications for further research, the results of which might inform design principles.

A possible, as yet unsubstantiated, implication for learning theory is the conjecture that, when time in practice is controlled, the contextual interference effect is valid for practice of some types of skills but not others. The data collected in the current study would be consistent with the conjecture that the effect does not apply to complex cognitive skills and that, instead, the increased time in practice required by participants in the random condition to complete trials has accounted for the improved transfer scores of previous studies; but the data are insufficient to draw such a conclusion. If the differences among groups had been greater and in the expected direction, one might speculate that a better design with more participants might yield results consistent with the effect and the hypotheses. But the probabilities of finding these data, given the null hypotheses, strongly suggest that there are no differences among the experimental treatments for acquisition or transfer of grammar skills with graduating high school seniors who perceive no consequences for learning or performance. If the population is constrained to gainers, however, some differences emerge, as addressed above.

Not only are the types of skills that comply with the effect an object of interest, but so are the underlying cognitive mechanisms that explain which types of skills comply with it. Perhaps the effect is not applicable to the acquisition and transfer of any type of skill that requires the application of some number of operators – decision points – to arrive at the solution of a problem.
Such a conjecture is plausible because the complexity of a task could inhibit the automatization of aspects of a skill, thus nullifying any effect of the sequence of practice tasks. For example, the simple application of the sub-rule about compound subjects, illustrated in Figure 5.1, elicits a logical process that can be compiled into a single production and largely automated. Assuming automation of (a) correct identification of the subject of the sentence, (b) recognition that the subject is compound, and (c) recognition of the singular and plural forms of the main verb in context, only one decision point is required: Is the main verb plural? With sufficient practice, this application can be compiled as a single production, as it apparently was by most of the high school seniors who participated in the current study. But this simple application of the compound subject sub-rule was not the target skill of the instruction because it is not a type of mistake writers often make.

[Flowchart: the subject is compound; one question is asked, “Is the main verb plural?” If yes, subject and main verb agree; if no, they do not agree.]

Figure 5.1. Decision process for the simple application of the compound subject sub-rule.
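The single decision point of Figure 5.1 can be expressed as a one-branch function. This is a hypothetical illustration of the compiled production, not code from the study’s CBI:

```python
def simple_compound_rule(main_verb_is_plural: bool) -> bool:
    """Simple application of the compound subject sub-rule (Figure 5.1).

    Assumes the writer has already (a) identified the subject,
    (b) recognized it as compound, and (c) classified the main verb
    as singular or plural, leaving a single decision point.
    """
    # A compound subject ("X and Y") takes a plural main verb.
    return main_verb_is_plural  # True -> subject and main verb agree

# "Two cats and a dog are the only pets I have." -> plural verb, agreement
print(simple_compound_rule(True))   # True
print(simple_compound_rule(False))  # False
```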

The mistake regarding compound subjects that writers often make entails a far more complex decision-making process (illustrated in Figure 5.2) and requires three to five decision points. The familiarity of the simple application of the compound subject sub-rule illustrated in Figure 5.1 might have accounted for participants mistakenly using the simple application when a complex application was called for. The number of times they attempted solutions, failed, and received corrective feedback on this process evidently had no effect on acquisition or transfer because many participants’ learning opportunities were limited by a repetitive process in which they answered practice items incorrectly and apparently did not read the informative feedback that might have directed them toward greater achievement.


[Flowchart: the subject is compound. If the subject nouns are joined by “either...or,” “neither...nor,” or “not only...but also,” the special compound subject condition applies: the main verb must agree in number with the subject noun closest to it, so the writer asks whether the nearer (former or latter) subject noun agrees in number with the main verb. If none of those conjunctions is present, the special condition does not apply, and the single question remains: Is the main verb plural? If yes, subject and main verb agree; if no, they do not.]

Figure 5.2. Decision process for the conditional reasoning of the compound subject sub-rule as instructed.
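The conditional reasoning of Figure 5.2 can likewise be sketched as code. The function and parameter names are hypothetical; the sketch assumes, as the instruction did, that correlative conjunctions (“either...or,” “neither...nor,” “not only...but also”) trigger the special condition under which the main verb agrees with the subject noun closest to it:

```python
# Hypothetical sketch (not the study's CBI code) of the conditional
# reasoning for compound subjects illustrated in Figure 5.2.

CORRELATIVES = ("either...or", "neither...nor", "not only...but also")

def compound_subject_agrees(conjunction: str,
                            nearest_noun_is_plural: bool,
                            main_verb_is_plural: bool) -> bool:
    if conjunction in CORRELATIVES:
        # Special condition: the main verb agrees in number with the
        # subject noun closest to it.
        return nearest_noun_is_plural == main_verb_is_plural
    # Otherwise ("X and Y"): a compound subject takes a plural verb.
    return main_verb_is_plural

# "Either fashion magazines or a book is my reading for the trip."
# Nearest noun "a book" is singular; verb "is" is singular -> agreement.
print(compound_subject_agrees("either...or", False, False))  # True
```

Counting the conjunction checks and the number comparisons gives the three to five decision points described in the text, in contrast to the single decision point of the simple application.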

Regarding the design of practice activities in computer-based instruction, the current study illustrated the importance of engaging the learner in the processing of feedback. Whereas participants were expected to read the feedback carefully and to apply that information thoughtfully to subsequent choices, as the cooperative tryout subjects did, participants instead apparently read the feedback only for the knowledge of results and ignored its informative part. Thus they missed the most critical part of the instruction and subverted a major part of the difference between treatments. A more effective design would have required them to interact with the feedback information in a way that induced controlled processing and reflection on the practice item before moving on to the next one. For example, on the same page as the specific feedback, they could be required to use one or more drop-down menus to complete a sentence that explains the general principle involved in the practice item, for which the feedback would serve as an example.
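A design of this kind might gate the “Continue” control on processing of the feedback. The following is a hypothetical sketch of such a practice loop; the item fields and the subject-identification step are illustrative assumptions, not features of the CBI used in the study:

```python
# Hypothetical sketch of a practice loop that gates advancement on
# processing the feedback. Names and fields are illustrative.

def run_practice_item(item, answer, identify_subject):
    """Return True when the learner may advance to the next item."""
    correct = (answer == item["key"])
    if correct:
        # Confirming feedback only; advance immediately.
        return True
    # Incorrect: require the learner to process the informative
    # feedback, e.g., by naming the subject of the sentence, before
    # the "Continue" control is enabled.
    return identify_subject(item) == item["subject"]

item = {"sentence": "Each of us have a valid driver license.",
        "key": "No", "subject": "Each"}
print(run_practice_item(item, "No", lambda i: None))   # True
print(run_practice_item(item, "Yes", lambda i: "us"))  # False
```

An analogous gate could require selecting the governing rule from a menu rather than typing the subject; either tactic forces controlled processing of the feedback before the next trial.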

Implications for Further Research

The findings of the current study suggest several directions for further research on the contextual interference phenomenon with respect to the acquisition and transfer of complex cognitive skills. The various research directions have implications for instructional research and development, if not for basic research in human cognition. A pressing need is resolution of the discrepancy between research that has found the contextual interference effect with complex cognitive skills while not controlling for time in practice and research that has not found the effect when controlling for time in practice. Resolution of the methodological questions would likely spawn two further questions: (a) whether coercing self-regulatory behavior would result in behavioral change or in modification of attributions, and (b) to what types of skills the contextual interference phenomenon is pertinent and applicable. From extensive investigation of these and related questions, an instructional design heuristic might be developed that would guide strategy choices about practice sequence and might be incorporated into instructional design models for learning complex cognitive skills. If the data supported the hypotheses, questions would remain about whether computer-based drill and practice instruction that requires self-observation, self-recording, self-evaluation, and self-reaction would result in a measurable degree of voluntary, reflective, self-regulatory use of the skills over the long term. If the data did not support the hypotheses, further investigation could examine whether the effect is contingent upon the complexity of the learning task and whether it is found in the domain of grammar rule learning. The answers to these questions might reveal information about the reflexiveness of human cognition with regard to acquisition, retention, and transfer of complex cognitive skills for which the learner approaches the learning task with a preexisting schema.


Several observations resulting from this research might be of value to education researchers interested in pursuing similar research questions in naturalistic learning environments:

1. A native language skill such as subject-verb agreement is very unlikely to change as a result of a brief treatment in which the participant practices the skill approximately 240 times over two days, when the participant has already practiced the skill (estimated conservatively) more than 100,000 times over about 15 years.

2. Skills that are initially learned incorrectly very early in life, that have been modeled and practiced incorrectly many thousands of times without corrective feedback, and particularly verbal skills, seem to be highly resistant to response modification. Initial experiments might be conducted with skills that are not so essential to every learner’s daily cognitive activity, ideally with completely novel skills.

3. Modification of native language skills requires an instructional approach that is different from providing repeated practice trials with knowledge of results as feedback, after which the participant advances to the next problem. Learners must be influenced to process feedback more deeply on items answered incorrectly before advancing to the next problem.

4. When corrective feedback is provided in computer-based drill and practice instruction, a tactic that requires the learner to process the feedback is necessary, or the drill and practice will have limited effectiveness. For instance, if learners answer a subject-verb item correctly, the program would provide confirming feedback and advance to the next item. If the item were answered incorrectly, however, learners would be required to name the subject of the sentence with text entry or to identify the rule that applies to the item. This aversive stimulus would discourage impulsive responses and encourage learners to acquire the knowledge required for avoiding the aversive stimulus on successive trials. Furthermore, the accumulating displayed number and percentage of correct responses might positively reinforce correct responses to future items, which can be achieved by careful reading of the corrective feedback.


5. Although independent computer-based instruction allows greater control of some variables than instructor-led group learning tasks, such as reliance of group members on the most admired or most vocal member of the group, it does not prevent students from engaging in conversations with peers, thus interrupting the treatment. Several other confounding factors can occur, such as students finding a way to use instant messaging to communicate or merely staring into space when they no longer choose to engage with the task.

In summary, the effect of practice sequence on the transfer of complex cognitive skills remains a fertile area for research, especially with respect to computer-based instruction that is adaptive to each individual’s dynamic level of competency. Continued research on the topic might provide designers with methods by which they can not only identify areas in which a learner requires additional practice but also predict the sequence of practice that is likely to result in the greatest level of transfer performance, based on empirical data gathered from a given learner’s activity and that of other learners. The results of the current study, though inconclusive because of participants’ apparent failure to process the information provided to them, leave tenable the hypothesis that an effective design approach might be one that forces the learner to process the feedback, begins with block-order practice, and shifts to random-order practice when the learner demonstrates some criterion of competence. But this speculation about practice sequence obscures the question of whether, and to what degree, learner motivation to acquire or use the skill is a factor in transfer performance. Perhaps different sequence protocols are more effective for learners with intrinsic learning motivation, extrinsic or performance motivation, or no motivation to learn the skill. All of these questions might be approached with a research design that uses a modification of the instruction and additional instruments to measure motivational characteristics.


APPENDIX A

INSTITUTIONAL REVIEW BOARD DOCUMENTATION

Approval Memorandum
Informed Consent Form
Parental Consent Form
Minor Assent Form


INFORMED CONSENT FORM

I freely and voluntarily and without element of force or coercion, consent to be a participant in the research project entitled “Effects of Practice Sequence Variations on the Transfer of Complex Cognitive Skills Practiced in Computer-Based Instruction.”

This research is being conducted by David Nelson, who is a doctoral student in Instructional Systems at Florida State University. I understand the purpose of his research project is to better understand computer-based instructional practices. I understand that if I participate in the project I will be asked questions about my gender, race/ethnicity, and age. I understand that my answers to these questions will be used only to report the numbers of participants in each class or group, and that my answers will not be identified with my name in any report of the results.

The total time commitment will be about 2 hours in four sessions. Each session will take about 30 minutes. I will receive extra credit (if applicable) only if I complete all four sessions.

I understand my participation is totally voluntary and I may stop participation at anytime. Information collected for the study will remain confidential to the extent allowed by law. My name will not appear on any of the results. No individual responses will be reported. Only group findings will be reported. The only penalty for stopping participation is that I will not receive the extra credit for completing this exercise.

I understand there is no risk involved if I agree to participate in this study. I am able to stop my participation at any time I wish.

I understand there are benefits for participating in this research project. First, I may be able to improve my writing or editing skills. Second, I will be providing information that might improve the way educators design instruction.

I understand that this consent may be withdrawn at any time without prejudice, penalty or loss of benefits to which I am otherwise entitled. I have been given the right to ask and have answered any of my questions concerning the study. Questions, if any, have been answered to my satisfaction.

I understand that I may contact David Nelson, Florida State University, College of Education, (850) 222-4363, (email [email protected]) or Julie Haltiwanger, Legal Counsel (850) 644-7900 ([email protected]) for answers to questions about this research or my rights.

I have read and understand this consent form.

______(Participant) (Date)

______(Witness)


Parental Consent Letter for Minors

Dear Parent:

I am a graduate student under the direction of Professor Robert K. Branson in the Department of Educational Psychology and Learning Systems in the College of Education at Florida State University. I am conducting a research study for my dissertation to examine the effectiveness of different practice sequences in the learning of grammar rules (making subjects and main verbs of a sentence agree in number). The title of the project is “Effects of Practice Sequence Variations on the Transfer of Complex Cognitive Skills Practiced in Computer-Based Instruction.”

Your child's participation will involve the use of a computer-based program I have developed to improve students’ skills in writing sentences. I am working with Mr. Strazulla and Mr. Bailey to make this a valuable and enjoyable activity for your child. I have designed the program to improve your child’s skills in English writing. Although you have consented to your child’s participation in such research, FSU’s Human Subjects Committee now requires that I obtain your consent to the specific study for your child to participate. Your agreement to allow your child to participate, indicated by signing this form, is necessary for your child to participate.

Your agreement to allow your child’s participation in this study is voluntary. There will be minimal risk to your child. If you or your child chooses not to participate or to withdraw from the study at any time, there will be no penalty (it will not affect your child's grade). The results of the research study may be published, but your child's name will not be used. The information collected during the study will remain confidential to the extent allowed by law.

Although there may be no direct benefit to your child, the possible benefit of your child's participation is improved skills at writing and editing English text.

If you have any questions concerning this research study or your child's participation in the study, please contact me at 222-4363 (email: [email protected]) or Dr. Branson at 644-1559 (email: [email protected]). If you have any questions about your or your child’s rights as a participant in this research, or if you feel you or your child has been placed at risk, you can contact the Chair of the Human Subjects Committee, Institutional Review Board, through the Office of the Vice President for Research, at (850) 644-8633. Please complete the form below.

Thanks in advance for your agreement,

David W. Nelson

*************

I give consent for my child to participate in the above study.

Parent's Name: ______

Parent's Signature: ______Date: ______


Assent

I have been informed that my parent(s) have given permission for me to participate, if I want to, in David Nelson’s study concerning the agreement of subjects and main verbs. My participation in this project is voluntary and I have been told that I may stop my participation in this study at any time. If I choose not to participate, it will not affect my grade in any way.

Name (please print): ______

Signature: ______Date: ______


APPENDIX B

PRETEST


Do the subjects and main verbs of these sentences agree? Mark Yes or No for each sentence. (Reminder: The subject is the person(s), place(s), or thing(s) that the sentence is about. The main verb is the word that tells what the subject is or does.)

Example: Students who learn grammar rules are better at editing sentences. Yes☑ No□

1. Anything that has wings is capable of flight. Yes□ No□
2. Either the scissors or the knife are sharp enough to cut the string. Yes□ No□
3. There is interesting new movies at the theatre. Yes□ No□
4. History and Geography is my favorite coursework. Yes□ No□
5. Flag Day, when many people display their flags, are lots of fun for all of us. Yes□ No□
6. On the football field stands Florida State’s eleven tough defensive players. Yes□ No□
7. Here is the hospital in which my brother and I were born. Yes□ No□
8. Each of us have a valid driver license. Yes□ No□
9. Not only the earrings but also a necklace is my mother’s birthday present. Yes□ No□
10. The best solution for her problems with weight and fatigue are more exercises in her routine. Yes□ No□
11. Our lifeguards, after taking a course in first aid, understands the respiratory system. Yes□ No□
12. The bushes around the house grow very fast. Yes□ No□
13. Everyone in all of the classrooms has the same access to the computer lab. Yes□ No□
14. Two cats and a dog are the only pets I have. Yes□ No□
15. Nobody in federal agencies have authority over public schools. Yes□ No□
16. What are the player’s favorite colors? Yes□ No□
17. The states west of the Mississippi River, on average, are larger than states to the east. Yes□ No□
18. There is visible erasure marks on my paper. Yes□ No□
19. At the door was two police officers. Yes□ No□
20. When does the college games begin? Yes□ No□
21. Neither do the repair job properly. Yes□ No□
22. Either fashion magazines or a book is my reading for the trip. Yes□ No□
23. There is stained and dirty shirts in the laundry basket. Yes□ No□
24. Basketball and tennis is my main means of exercise. Yes□ No□
25. Independence Day, when celebrations occur in many cities and towns, is appropriate for fireworks displays. Yes□ No□
26. Out on the street wait the little ballerina’s parents as she shouts “Trick or treat!” Yes□ No□
27. There is the elementary school my sister and I attended. Yes□ No□
28. Something in old books makes me sneeze. Yes□ No□
29. Not only the keys but also the registration was in the glove compartment. Yes□ No□
30. The most effective activity for improving skills in math and science are additional practice problems. Yes□ No□
31. Military people, instead of providing a social security number, provides a serial number. Yes□ No□
32. The cars inside the garage are protected from the sun. Yes□ No□
33. Someone with strong arms lifts the bails of hay up to the loft as they come off the wagon. Yes□ No□
34. Two cars and a truck park in the same place every day. Yes□ No□
35. Anybody with serious problems have the right to assistance from disaster relief. Yes□ No□
36. Who is the first airplane’s inventors? Yes□ No□
37. People without identification are not allowed in the club. Yes□ No□
38. There are elaborate stained glass window in the National Cathedral. Yes□ No□
39. In the blue convertible was two blonde women. Yes□ No□
40. Who among the students get the last piece of cake? Yes□ No□

81

APPENDIX C

TRANSFER TEST

82

Name: ______

Directions: Make the subjects and main verbs in the following sentences agree in number by striking through words that should be changed and writing the correct words above them. You may change either the subject or the main verb to correct the sentence.

Example:
    stay
The children stays in the car.

Or:
    child
The children stays in the car.

Guitars

Instruments much like the guitar were popular five thousand years ago. An Egyptian mural from the time of the Pharaohs show women playing instruments that look very much like the guitar. But the origins of the word “guitar” appear in Spain in the 13th century. There was probably some derivations of the word “qitara” that led to the word “guitar.” Not only the characteristics but also the name of the qitara were brought into Spain by the Moors after the 10th century. Nobody who has written about the history and derivation of the word “guitar” doubt that it also may have derived from the sitar, an ancient instrument of India.

Neither Egyptian qitaras nor the Indian sitar were the likely direct ancestor of the guitar. The Spanish instrument called the vihuela, among others, appear to be an intermediate form. The vihuela, with its lute-style tuning and a small body, resembles the guitar. But there is not enough details in the literature to determine if the vihuela represents a unique transitional form. It could simply represent a design that combined features from the Egyptian and Indian families of instruments.

The guitar was transformed into an electronic instrument in 1931. Adolf Rickenbacker, George Beauchamp, and Paul Berth is mainly responsible for the invention of the electric guitar.

83 Continued on the reverse side.

The effort of Rickenbacker and his colleagues were fundamental to the electrification of musical instruments. There was, with the introduction of electronics, major departures from the acoustic methods of making music. The solid or semi-solid body and steel-cored strings of the electric guitar utilizes electromagnetic pickups to convert the vibration of the strings into electrical current. The current and its amplification is sometimes electronically distorted to achieve various tonal effects. Rickenbacker, with his background in electromagnetism and the manufacture of guitars, is responsible for the invention of the horseshoe-magnet pickup. But neither Rickenbacker nor his colleagues were responsible for bringing electric guitars to the wider public. Instead, a small company named Danelectro, by producing the Silvertone and Danelectro lines of solid body electric guitars, were accountable for the explosion of the electric guitar’s popularity.

Anyone knowledgeable about both electric and acoustical guitars recognize that both types of guitars have common characteristics. Despite their disparity, there is a few similarities. At the extreme end of all guitars are the headstock. It often contains the tuners, the nut, and some kind of decoration. This decoration usually indicates the maker or model of the guitar. Anything that augments the appearance of the headstock, body, fretboard, or tuners are considered decorative.

The headstock is the part of the guitar where tuning is adjusted. Something that affects the fidelity of the guitar’s tunings are the steadfastness with which tuners maintain tension on the strings. Some tuners are the gear driven type and some are the type that are held in place by friction. Either are subject to creep as the guitar is played. The function of the tuners is to adjust the tension on each individual string.

The nut, a small strip of ivory, plastic, graphite, or other medium-hard material, supports the strings at the joint where the headstock meets the fretboard. The nut, which maintains the spacing of the strings at the headstock end of the fretboard, determine the positions of one end of the strings that results in the tune of the strings.

84 End of selection.

APPENDIX D

SCALE AND ITEM ANALYSIS RESULTS

Table D.1: Pretest Items Sorted by Task Class
Tables D.2–D.5: Sub-Tests of Pretest
Table D.6: Posttest Items Sorted by Task Class
Tables D.7–D.10: Sub-Tests of Posttest
Table D.11: Transfer Test Items Sorted by Task Class

85

Table D.1 Pretest Statistics for Items Sorted by Task Class.

Item | Task Class | Mean Correct (Difficulty) | Point Biserial Correlation | KR20 if Item Deleted | Scale Mean if Item Deleted | Item Standard Deviation | Scale Variance if Item Deleted
5 | 1 | .85 | .167 | .755 | 25.53 | .360 | 28.882
6* | 1 | .33 | -.006 | .763 | 26.04 | .472 | 29.466
10 | 1 | .42 | .258 | .751 | 25.96 | .496 | 28.061
11 | 1 | .60 | .409 | .743 | 25.78 | .492 | 27.310
12 | 1 | .82 | .219 | .753 | 25.55 | .385 | 28.610
17 | 1 | .86 | .304 | .750 | 25.52 | .351 | 28.396
25 | 1 | .77 | .242 | .752 | 25.61 | .424 | 28.385
30 | 1 | .19 | .361 | .747 | 26.19 | .392 | 28.010
31 | 1 | .63 | .312 | .748 | 25.74 | .484 | 27.833
32 | 1 | .95 | .338 | .751 | 25.43 | .226 | 28.788
37 | 1 | .92 | .259 | .752 | 25.46 | .273 | 28.827
3 | 2 | .77 | .382 | .746 | 25.61 | .424 | 27.772
7 | 2 | .88 | .279 | .751 | 25.50 | .332 | 28.559
16* | 2 | .83 | .077 | .758 | 25.54 | .377 | 29.205
18 | 2 | .74 | .181 | .754 | 25.63 | .440 | 28.613
19 | 2 | .71 | .281 | .750 | 25.66 | .454 | 28.100
20 | 2 | .63 | .296 | .749 | 25.75 | .486 | 27.901
23 | 2 | .64 | .394 | .744 | 25.73 | .481 | 27.441
26 | 2 | .61 | .154 | .756 | 25.77 | .491 | 28.612
27 | 2 | .89 | .248 | .752 | 25.48 | .311 | 28.738
36 | 2 | .80 | .335 | .748 | 25.57 | .399 | 28.085
38 | 2 | .62 | .375 | .745 | 25.76 | .489 | 27.500
39 | 2 | .71 | .286 | .750 | 25.67 | .458 | 28.061
2 | 3 | .41 | .356 | .746 | 25.96 | .494 | 27.566
4 | 3 | .80 | .265 | .751 | 25.57 | .399 | 28.373
9* | 3 | .36 | .015 | .762 | 26.02 | .481 | 29.351
14 | 3 | .83 | .293 | .750 | 25.54 | .377 | 28.340
22* | 3 | .48 | -.011 | .764 | 25.89 | .502 | 29.466
24 | 3 | .69 | .530 | .738 | 25.69 | .466 | 26.883
29* | 3 | .56 | .003 | .763 | 25.81 | .498 | 29.397
34 | 3 | .86 | .295 | .750 | 25.52 | .351 | 28.432
1 | 4 | .71 | .271 | .750 | 25.67 | .458 | 28.133
8 | 4 | .41 | .194 | .754 | 25.96 | .494 | 28.395
13 | 4 | .57 | .103 | .759 | 25.80 | .497 | 28.862
15 | 4 | .52 | .179 | .755 | 25.86 | .502 | 28.448
21* | 4 | .61 | -.111 | .769 | 25.77 | .491 | 30.018
28 | 4 | .74 | .331 | .748 | 25.63 | .440 | 27.928
33 | 4 | .69 | .196 | .754 | 25.69 | .466 | 28.469
35 | 4 | .48 | .302 | .749 | 25.89 | .502 | 27.808
40 | 4 | .51 | .298 | .749 | 25.87 | .502 | 27.829

Note: * Point Biserial Correlation < .1. Task Classes: 1 = Words Between; 2 = Verb Before Subject; 3 = Compound Subject; 4 = Indefinite Pronoun. Scale statistics: N = 112; M = 26.38; σ2 = 29.66; σ = 5.45; KR20 = .76; Standard Error of Mean = 2.67.
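The item statistics reported in Tables D.1 through D.11 are standard classical-test-theory quantities: item difficulty is the proportion of examinees answering correctly, the point-biserial is the Pearson correlation between the 0/1 item score and the total score, and KR-20 is (k/(k-1)) * (1 - sum(p*q) / variance of total scores). The sketch below is illustrative only; the data and function names are hypothetical, not the study's response data or analysis code.

```python
# Illustrative classical-test-theory statistics for a 0/1 item-response
# matrix: item difficulty, KR-20, and point-biserial correlation.
# Hypothetical data; not the data or analysis code from the study.

def item_difficulty(responses, item):
    # Proportion of examinees answering the item correctly.
    scores = [r[item] for r in responses]
    return sum(scores) / len(scores)

def kr20(responses):
    # KR-20 = (k/(k-1)) * (1 - sum(p*q) / total-score variance).
    n = len(responses)
    k = len(responses[0])
    totals = [sum(r) for r in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    pq = sum(p * (1 - p) for p in
             (item_difficulty(responses, i) for i in range(k)))
    return (k / (k - 1)) * (1 - pq / var_t)

def point_biserial(responses, item):
    # Pearson correlation between the 0/1 item score and the total score.
    n = len(responses)
    x = [r[item] for r in responses]
    y = [sum(r) for r in responses]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

# Tiny hypothetical data set: 5 examinees x 4 items.
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
]
print(item_difficulty(data, 0))  # 0.8
print(round(kr20(data), 3))
print(round(point_biserial(data, 0), 3))
```

Note that computing the point-biserial against the full total score, as this sketch does, inflates the values slightly relative to a corrected total with the item removed; the tables here do not state which variant was used.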

86

Table D.2 Pretest Subscale 1 Items: Words Between Subject and Verb

Item | Item Difficulty | Standard Error | Point Biserial Correlation | KR20 if Deleted | Scale Mean if Deleted | Scale Variance if Deleted
5 | .85 | .034 | .148 | .484 | 6.48 | 2.973
6 | .33 | .045 | -.085 | .563 | 7.00 | 3.207
10 | .42 | .047 | .248 | .454 | 6.91 | 2.641
11 | .60 | .047 | .333 | .422 | 6.73 | 2.522
12 | .82 | .036 | .269 | .451 | 6.51 | 2.793
17 | .86 | .033 | .235 | .462 | 6.47 | 2.882
25 | .77 | .040 | .132 | .491 | 6.56 | 2.915
30 | .19 | .037 | .321 | .436 | 7.14 | 2.718
31 | .63 | .046 | .283 | .442 | 6.70 | 2.610
32 | .95 | .021 | .188 | .479 | 6.38 | 3.086
37 | .92 | .026 | .184 | .478 | 6.41 | 3.037

Note: N = 112; KR20 = .50; M = 7.33 of 11; SD = 1.81; σ2 = 3.29; Difficulty = .67; Standard Error of Mean = .85

Table D.3 Pretest Subscale 2 Items: Verb Before Subject

Item | Item Difficulty | Standard Error | Point Biserial Correlation | KR20 if Deleted | Scale Mean if Deleted | Scale Variance if Deleted
3 | .77 | .040 | .388 | .527 | 8.05 | 3.961
7 | .88 | .031 | .157 | .574 | 7.95 | 4.466
16 | .83 | .036 | .077 | .590 | 7.99 | 4.531
18 | .74 | .042 | .223 | .563 | 8.08 | 4.201
19 | .71 | .043 | .238 | .560 | 8.11 | 4.151
20 | .63 | .046 | .330 | .537 | 8.20 | 3.925
23 | .64 | .045 | .352 | .532 | 8.18 | 3.896
26 | .61 | .046 | .119 | .589 | 8.21 | 4.314
27 | .89 | .029 | .266 | .557 | 7.93 | 4.355
36 | .80 | .038 | .234 | .560 | 8.02 | 4.252
38 | .62 | .046 | .324 | .539 | 8.21 | 3.930
39 | .71 | .043 | .190 | .571 | 8.12 | 4.230

Note: N = 112; KR20 = .58; M = 8.82 of 12; SD = 2.19; σ2 = 4.80; Difficulty = .74; Standard Error of Mean = 1.42

87

Table D.4 Pretest Subscale 3 Items: Compound Subject

Item | Item Difficulty | Standard Error | Point Biserial Correlation | KR20 if Deleted | Scale Mean if Deleted | Scale Variance if Deleted
2 | .41 | .047 | .154 | .411 | 4.58 | 2.120
4 | .80 | .038 | .200 | .390 | 4.19 | 2.190
9 | .36 | .045 | .070 | .449 | 4.63 | 2.252
14 | .83 | .036 | .296 | .354 | 4.16 | 2.118
22 | .48 | .047 | .208 | .384 | 4.51 | 2.036
24 | .69 | .044 | .195 | .390 | 4.30 | 2.105
29 | .56 | .047 | .175 | .400 | 4.43 | 2.085
34 | .86 | .033 | .226 | .383 | 4.13 | 2.225

Note: N = 112; KR20 = .43; M = 4.99 of 8; SD = 1.61; σ2 = 2.59; Difficulty = .62; Standard Error of Mean = 1.22

Table D.5 Pretest Subscale 4 Items – Indefinite Pronouns

Item | Item Difficulty | Standard Error | Point Biserial Correlation | KR20 if Deleted | Scale Mean if Deleted | Scale Variance if Deleted
1 | .71 | .043 | .120 | .390 | 4.53 | 2.882
8 | .41 | .047 | .274 | .323 | 4.82 | 2.598
13 | .57 | .047 | .160 | .374 | 4.66 | 2.767
15 | .52 | .047 | .169 | .370 | 4.71 | 2.746
21 | .61 | .046 | .037 | .426 | 4.63 | 2.975
28 | .74 | .042 | .123 | .389 | 4.49 | 2.901
33 | .69 | .044 | -.021 | .445 | 4.54 | 3.097
35 | .48 | .047 | .323 | .299 | 4.75 | 2.514
40 | .51 | .047 | .277 | .321 | 4.72 | 2.580

Note: N = 112; KR20 = .40; M = 5.23 of 9; SD = 1.81; σ2 = 3.28; Difficulty = .58; Standard Error of Mean = 1.40

88

Table D.6 Posttest Statistics for Items Sorted by Task Class

Item | Task Class | Mean Correct (Difficulty) | Point Biserial Correlation | KR20 if Item Deleted | Scale Mean if Item Deleted | Item Standard Deviation | Scale Variance if Item Deleted
5 | 1 | .63 | .645 | .844 | 25.37 | .485 | 43.895
6 | 1 | .29 | .167 | .855 | 25.71 | .458 | 47.019
10 | 1 | .40 | .232 | .854 | 25.60 | .492 | 46.477
11 | 1 | .52 | .584 | .845 | 25.48 | .502 | 44.125
12 | 1 | .85 | .213 | .854 | 25.15 | .356 | 47.106
17 | 1 | .83 | .301 | .852 | 25.17 | .376 | 46.588
25 | 1 | .80 | .302 | .852 | 25.20 | .402 | 46.460
30 | 1 | .33 | .454 | .849 | 25.67 | .471 | 45.180
31 | 1 | .61 | .531 | .847 | 25.39 | .490 | 44.559
32* | 1 | .86 | .061 | .856 | 25.14 | .346 | 47.864
37 | 1 | .86 | .269 | .853 | 25.14 | .346 | 46.885
3 | 2 | .66 | .425 | .849 | 25.34 | .475 | 45.332
7 | 2 | .86 | .292 | .852 | 25.14 | .346 | 46.779
16* | 2 | .79 | .087 | .856 | 25.21 | .410 | 47.615
18 | 2 | .61 | .421 | .849 | 25.39 | .490 | 45.262
19 | 2 | .61 | .477 | .848 | 25.39 | .490 | 44.900
20 | 2 | .55 | .455 | .848 | 25.45 | .500 | 44.974
23 | 2 | .60 | .545 | .846 | 25.40 | .492 | 44.455
26 | 2 | .68 | .101 | .857 | 25.32 | .467 | 47.410
27 | 2 | .81 | .151 | .855 | 25.19 | .394 | 47.304
36 | 2 | .49 | .381 | .850 | 25.51 | .503 | 45.444
38 | 2 | .55 | .474 | .848 | 25.45 | .500 | 44.846
39 | 2 | .56 | .492 | .847 | 25.44 | .499 | 44.739
2 | 3 | .31 | .422 | .849 | 25.69 | .463 | 45.427
4 | 3 | .66 | .421 | .849 | 25.34 | .475 | 45.353
9* | 3 | .60 | .001 | .860 | 25.40 | .492 | 48.030
14 | 3 | .77 | .299 | .852 | 25.23 | .424 | 46.371
22* | 3 | .75 | -.003 | .859 | 25.25 | .437 | 48.106
24 | 3 | .67 | .548 | .846 | 25.33 | .471 | 44.605
29 | 3 | .66 | .113 | .857 | 25.34 | .475 | 47.311
34* | 3 | .75 | .032 | .858 | 25.25 | .437 | 47.893
1 | 4 | .91 | .252 | .853 | 25.09 | .294 | 47.172
8 | 4 | .47 | .473 | .848 | 25.53 | .502 | 44.848
13 | 4 | .79 | .182 | .855 | 25.21 | .410 | 47.083
15 | 4 | .65 | .492 | .848 | 25.35 | .479 | 44.889
21 | 4 | .25 | .235 | .854 | 25.75 | .437 | 46.680
28 | 4 | .81 | .214 | .854 | 25.19 | .394 | 46.964
33 | 4 | .82 | .161 | .855 | 25.18 | .385 | 47.276
35 | 4 | .66 | .556 | .846 | 25.34 | .475 | 44.524
40 | 4 | .69 | .525 | .847 | 25.31 | .463 | 44.810

Note: * Point Biserial Correlation < .1. N = 95; M = 26.00 of 40; σ2 = 48.28; σ = 6.95; KR20 = .86; Standard Error of Mean = 2.65

89

Table D.7 Posttest Subscale 1 Items – Words Between Subject and Verb

Item | Item Difficulty | Standard Error | Point Biserial Correlation | KR20 if Deleted | Scale Mean if Deleted | Scale Variance if Deleted
5 | .63 | .050 | .615 | .530 | 6.36 | 3.509
6 | .29 | .047 | .162 | .635 | 6.69 | 4.342
10 | .40 | .051 | .240 | .620 | 6.59 | 4.138
11 | .52 | .052 | .534 | .548 | 6.47 | 3.592
12 | .85 | .037 | .183 | .627 | 6.14 | 4.460
17 | .83 | .039 | .140 | .634 | 6.16 | 4.496
25 | .80 | .041 | .211 | .623 | 6.19 | 4.347
30 | .33 | .048 | .368 | .591 | 6.66 | 3.949
31 | .61 | .050 | .447 | .571 | 6.38 | 3.770
32 | .86 | .035 | .023 | .650 | 6.13 | 4.707
37 | .86 | .035 | .154 | .631 | 6.13 | 4.516

Note: N = 95; KR20 = .63; M = 6.99 of 11; SD = 2.21; σ2 = 4.86; Difficulty = .63; Standard Error of Mean = 1.34

Table D.8 Posttest Subscale 2 Items – Verb Before Subject

Item | Item Difficulty | Standard Error | Point Biserial Correlation | KR20 if Deleted | Scale Mean if Deleted | Scale Variance if Deleted
3 | .66 | .049 | .390 | .664 | 7.12 | 6.018
7 | .86 | .035 | .177 | .692 | 6.92 | 6.716
16 | .79 | .042 | -.031 | .719 | 6.99 | 7.053
18 | .61 | .050 | .441 | .655 | 7.17 | 5.865
19 | .61 | .050 | .510 | .643 | 7.17 | 5.716
20 | .55 | .051 | .527 | .640 | 7.23 | 5.648
23 | .60 | .051 | .554 | .636 | 7.18 | 5.617
26 | .68 | .048 | .016 | .718 | 7.09 | 6.895
27 | .81 | .040 | .046 | .708 | 6.97 | 6.903
36 | .49 | .052 | .369 | .667 | 7.28 | 5.993
38 | .55 | .051 | .392 | .663 | 7.23 | 5.946
39 | .56 | .051 | .473 | .649 | 7.22 | 5.770

Note: N = 95; KR20 = .69; M = 7.78 of 12; SD = 2.65; σ2 = 7.78; Difficulty = .65; Standard Error of Mean = 1.47

90

Table D.9 Posttest Subscale 3 Items – Compound Subjects

Item | Item Difficulty | Standard Error | Point Biserial Correlation | KR20 if Deleted | Scale Mean if Deleted | Scale Variance if Deleted
2 | .31 | .047 | .258 | .198 | 4.86 | 1.779
4 | .66 | .049 | -.013 | .353 | 4.51 | 2.104
9 | .60 | .051 | .024 | .336 | 4.57 | 2.035
14 | .77 | .044 | .235 | .218 | 4.40 | 1.860
22 | .75 | .045 | .036 | .323 | 4.42 | 2.076
24 | .67 | .048 | .279 | .182 | 4.49 | 1.742
29 | .66 | .049 | .149 | .262 | 4.51 | 1.891
34 | .75 | .045 | .036 | .323 | 4.42 | 2.076

Note: N = 95; KR20 = .31; M = 5.17 of 8; SD = 1.52; σ2 = 2.31; Difficulty = .65; Standard Error of Mean = 1.27

Table D.10 Posttest Subscale 4 Items – Indefinite Pronouns

Item | Item Difficulty | Standard Error | Point Biserial Correlation | KR20 if Deleted | Scale Mean if Deleted | Scale Variance if Deleted
1 | .91 | .030 | .189 | .584 | 5.16 | 3.241
8 | .47 | .051 | .369 | .536 | 5.59 | 2.670
13 | .79 | .042 | .094 | .610 | 5.27 | 3.222
15 | .65 | .049 | .327 | .550 | 5.41 | 2.776
21 | .25 | .045 | .233 | .576 | 5.81 | 2.985
28 | .81 | .040 | .176 | .589 | 5.25 | 3.127
33 | .82 | .040 | .125 | .600 | 5.24 | 3.207
35 | .66 | .049 | .535 | .482 | 5.40 | 2.498
40 | .69 | .030 | .448 | .512 | 5.37 | 2.639

Note: N = 95; KR20 = .59; M = 6.06 of 9; SD = 1.88; σ2 = 3.53; Difficulty = .67; Standard Error of Mean = 1.20

91

Table D.11 Transfer Test Items Sorted by Task Class

Item | Subscale | Mean Correct (Difficulty) | Point Biserial Correlation | KR20 if Item Deleted | Scale Mean if Item Deleted | Item Standard Deviation | Scale Variance if Item Deleted
1 | 1 | .90 | .212 | .822 | 19.90 | .296 | 26.711
2 | 1 | .47 | .532 | .810 | 20.34 | .502 | 24.550
3 | 1 | .79 | .231 | .822 | 20.02 | .411 | 26.301
8 | 1 | .62 | .326 | .819 | 20.19 | .489 | 25.597
9 | 1 | .96 | .136 | .823 | 19.85 | .203 | 27.117
14 | 1 | .48 | .406 | .815 | 20.33 | .502 | 25.148
18 | 1 | .93 | .127 | .824 | 19.88 | .264 | 27.029
20 | 1 | .51 | .525 | .810 | 20.30 | .503 | 24.577
31* | 1 | .93 | -.030 | .827 | 19.88 | .264 | 27.497
32* | 1 | .87 | -.005 | .828 | 19.44 | .335 | 27.351
33 | 1 | .41 | .584 | .808 | 20.39 | .495 | 24.349
4 | 2 | .66 | .445 | .814 | 20.15 | .476 | 24.096
10 | 2 | .37 | .459 | .813 | 20.44 | .486 | 24.980
15 | 2 | .65 | .375 | .817 | 20.16 | .480 | 25.404
22 | 2 | .62 | .425 | .815 | 20.19 | .489 | 25.124
23 | 2 | .45 | .570 | .808 | 20.36 | .500 | 24.384
5 | 3 | .53 | .292 | .820 | 20.28 | .502 | 25.708
7 | 3 | .41 | .146 | .826 | 20.39 | .495 | 26.456
13 | 3 | .72 | .404 | .816 | 20.09 | .450 | 25.412
16 | 3 | .29 | .331 | .818 | 20.52 | .455 | 25.715
17 | 3 | .45 | .501 | .811 | 20.36 | .500 | 24.706
19* | 3 | .86 | -.110 | .831 | 19.95 | .347 | 27.728
6 | 4 | .40 | .441 | .815 | 20.40 | .493 | 25.168
21 | 4 | .36 | .490 | .812 | 20.45 | .483 | 24.852
26 | 4 | .49 | .474 | .813 | 20.32 | .503 | 24.822
28 | 4 | .50 | .579 | .808 | 20.31 | .503 | 24.323
30 | 4 | .30 | .298 | .820 | 20.51 | .460 | 25.844
11* | 0 | .94 | .019 | .826 | 19.87 | .246 | 27.338
12 | 0 | 1.00 | N/A | N/A | N/A | .000 | N/A
24* | 0 | .99 | -.063 | .825 | 19.82 | .103 | 27.505
25* | 0 | .99 | -.083 | .825 | 19.82 | .103 | 27.526
27 | 0 | .99 | .136 | .823 | 19.82 | .103 | 27.290

Note: * Point Biserial Correlation < .1. Transfer scale statistics exclude item 12, for which difficulty = 1.0. Task class 0 indicates sentence not used as discriminatory item. Scale statistics: N = 94; M = 20.81 of 32; σ = 5.24; σ2 = 27.45; KR20 = .78; Standard Error of Mean = 2.48.
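The exclusion of item 12 follows directly from the formulas: an item that every examinee answers correctly (difficulty = 1.0) has zero variance, so its p(1-p) term contributes nothing to KR-20 and its point-biserial correlation is undefined, since the item standard deviation in the denominator is zero. A minimal sketch with hypothetical data:

```python
# Why a difficulty-1.0 item is dropped from the scale statistics:
# its standard deviation is zero, so the point-biserial denominator
# (item SD x total-score SD) is zero. Hypothetical data.
n = 5
x = [1, 1, 1, 1, 1]  # every examinee answered the item correctly
p = sum(x) / n       # item difficulty = 1.0
sx = (sum((a - p) ** 2 for a in x) / n) ** 0.5
print(p)   # 1.0
print(sx)  # 0.0 -> point-biserial r = cov / (sx * sy) is undefined
```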

92

REFERENCES

Bannert, M. (2003). Managing cognitive load – recent trends in cognitive load theory. Learning and Instruction, 12, 139-146.

Battig, W. F. (1972). Intratask interference as a source of facilitation in transfer and retention. In R.F. Thompson & J. F. Voss (Eds.), Topics in learning and performance. New York: Academic Press.

Battig, W. F. (1979). The flexibility of human memory. In L.S. Cermak & F. I. M. Craik (Eds.), Levels of processing in human memory. Hillsdale, NJ: Erlbaum.

Cormier, S. M., & Hagman, J. D. (Eds.). (1987). Transfer of learning: Contemporary research and applications. San Diego: Academic Press.

De Croock, M. B. M., van Merriënboer, J. J. G., & Paas, F. G. W. C. (1998). High versus low contextual interference in simulation-based training of troubleshooting skills: Effects on transfer performance and invested mental effort. Computers in Human Behavior, 14(2), 249-267.

Ellis, N. C., & Schmidt, R. (1998). Rules or associations in the acquisition of morphology? The frequency by regularity interaction in human and PDP learning of morphosyntax. Language and Cognitive Processes, 13, 307-336.

Elton, L. (1996). Strategies to enhance student motivation: A conceptual analysis. Studies in Higher Education, 21(1), 57-68.

Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological Review, 102(2), 211-245.

Ferguson, G. A. (1956). On transfer and the abilities of man. Canadian Journal of Psychology, 10, 121-131.

Fowler, H. W. (1985). A dictionary of modern English usage, 2nd ed. Oxford: Oxford University Press.

Gagné, R. M. (1985). The conditions of learning and theory of instruction, 4th ed. New York: Holt, Rinehart, and Winston.

93

Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-355.

Gick, M. L., & Holyoak, K. J. (1983). Schema induction and analogical reasoning. Cognitive Psychology, 15, 1-38.

Harrington, M., & Dennis, S. (2002). Input-driven language learning. Studies in Second Language Acquisition, 24, 261-268.

Hiew, C. C. (1977). Sequence effects in rule learning and conceptual generalization. American Journal of Psychology, 90, 207-218.

Jelsma, O., & Pieters, J. M. (1989). Instructional strategy effects on the retention and transfer of procedures of different difficulty level. Acta Psychologica, 70, 219-234.

Keller, J. M. (1987). Strategies for stimulating the motivation to learn. Performance and Instruction, 26(8), 1-7.

Lee, T. D., & Magill, R. A. (1983). The locus of contextual interference in motor-skill acquisition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9, 730-746.

Magill, R. A., & Hall, K. G. (1990). A review of the contextual interference effect in motor skill acquisition. Human Movement Science, 9, 241-289.

McDonald, J. L. (1997). Language acquisition: The acquisition of linguistic structure in normal and special populations. Annual Review of Psychology, 48, 215-241.

Mayer, R. E., & Wittrock, M. C. (1996). Problem-solving transfer. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 47-62). New York: Macmillan.

Merrill, M. D. (1987). The new Component Design Theory: Instructional design for courseware authoring. Instructional Science, 16(1), 19-34.

Michaels, J. W. (1977). Classroom reward structures and academic performance. Review of Educational Research, 47(1), 87-98.

Newell, A., & Rosenbloom, P. S. (1981). Mechanisms of skill acquisition and the law of practice. In J. R. Anderson (Ed.), Cognitive skills and their acquisition (pp. 1-55). Hillsdale, NJ: Erlbaum.

Paas, F. G. W. C. (1992). Training strategies for attaining transfer of problem-solving skills in statistics: A cognitive load approach. Journal of Educational Psychology, 84, 429-434.

94

Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1-4.

Paas, F., Tuovinen, J. E., Tabbers, H., & Van Gerven, P. W. M. (2003). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38(1), 63-71.

Paas, F. G. W. C., & van Merriënboer, J. J. G. (1994a). Variability of worked examples and transfer of geometrical problem solving skills: A cognitive load approach. Journal of Educational Psychology, 86, 122-133.

Paas, F., & van Merriënboer, J. J. G. (1994b). Measurement of cognitive load in instructional research. Perceptual and Motor Skills, 79(1, part 2), 419-430.

Pyle, M. A., & Muñoz, M. E. (1982). Cliffs test of English as a foreign language preparation guide. Lincoln, NE: Cliffs Notes, Inc.

Schild, M. E., & Battig, W. F. (1966). Directionality in paired-associate learning. Journal of Verbal Learning and Verbal Behavior, 5, 42-49.

Schmidt, R. (1992). Psychological mechanisms underlying second language fluency. Studies in Second Language Acquisition, 14, 357-385.

Schmidt, R. A., & Bjork, R. A. (1992). New conceptualizations of practice: Common principles in three paradigms suggest new concepts for training. Psychological Science, 3(4), 207-217.

Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84(1), 1-66.

Shea, J. B., & Morgan, R. L. (1979). Contextual interference effects on the acquisition, retention, and transfer of a motor skill. Journal of Experimental Psychology: Human Learning and Memory, 5, 179-187.

Shea, J. B., Kohl, R., & Indermill, C. (1990). Contextual interference: Contributions of practice. Acta Psychologica, 73, 145-157.

Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84(2), 127-190.

Slavin, R. E. (1977). Classroom reward structure: An analytical and practical review. Review of Educational Research, 47(4), 633-650.

95

Smith, K., & Stapleford, J. (1963). Effective writing. New York: Doubleday.

Strayer, D. L., & Kramer, A. F. (1994). Strategies and automaticity: I. Basic findings and conceptual framework. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(2), 318-341.

Sweller, J. (1990). Cognitive load as a factor in the structuring of technical material. Journal of Experimental Psychology: General, 119(2), 176-192.

Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295-312.

Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 352-373.

Valcke, M. (2002). Cognitive load: Updating the theory? Learning and Instruction, 12(1), 147-154.

Van Merriënboer, J. J. G. (1997). Training complex cognitive skills: A four-component instructional model for technical training. Englewood Cliffs, NJ: Educational Technology Publications, Inc.

Van Merriënboer, J. J. G., de Croock, M. B. M., & Jelsma, O. (1997). The transfer paradox: Effects of contextual interference on retention and transfer performance of a complex cognitive skill. Perceptual and Motor Skills, 84, 784-786.

Van Merriënboer, J. J. G., Kirschner, P. A., & Kester, L. (2003). Taking the load off a learner’s mind: Instructional design for complex learning. Educational Psychologist, 38(1), 5-13.

Van Merriënboer, J. J. G., Schuurman, J. G., de Croock, M. B. M., & Paas, F. G. W. C. (2002). Redirecting learners’ attention during training: Effects on cognitive load, transfer test performance, and training efficiency. Learning and Instruction, 12, 11-37.

Wang, W. (2000). The relative effectiveness of structured questions and summarizing on near and far transfer tasks. Dissertation Abstracts International, 61(10), 3884. (UMI No. 9990406)

Zimmerman, B. J., & Kitsantas, A. (1999). Acquiring writing revision skill: Shifting from process to outcome self-regulatory goals. Journal of Educational Psychology, 91(2), 241-250.

96

BIOGRAPHICAL SKETCH

SUMMARY

• Doctoral Candidate in Instructional Systems Program, Department of Educational Psychology and Learning Systems, Florida State University
• Master’s Degree in Instructional Systems, Florida State University, 1997
• Teaching Assistant for graduate level Instructional Systems courses, online and face-to-face, Florida State University, 2001-2006
• Performance System Technologist and Research Assistant, Learning Systems Institute, Florida State University, 2001-2006
• Lead Instructional Designer of Web-based graduate level Instructional Systems courses, Florida State University, 1999-2001
• Project Manager responsible for producing, directing, and developing telecourses, College of Education, Florida State University, 1997-1998
• Cinematographer/Producer of motion pictures, videotapes, and CD-ROMs, 1976-present
• Owner of Media Production Small Business, managing instructional designers, scriptwriters, video and audio production personnel, satellite uplink engineers, post-production agencies, and editors, 1990-present
• Research Interests: Cognitive processes, instructional strategies, cognitive strategies, computer-based instruction, electronic performance support systems, and simulations, to promote learning, retention, and transfer in e-learning applications

PROFESSIONAL PREPARATION

M.S. in Instructional Systems, Department of Educational Psychology and Learning Systems, Florida State University – 1997
• Award: Nominated for the Gagné-Briggs Award for Outstanding Master’s Degree Student, 1997
• Accomplishment: Designed, developed, implemented, formatively evaluated, and revised a unit of interactive multimedia computer-based instruction for beginning photography students
• Accomplishment: Designed, developed, and produced, as part of a team, a commercial website (1995)
• Service: Participated in the first Annual Instructional Systems Design Forum at Florida State, documenting sessions on videotape (1996)

B.A. in Photography and Cinematography, Department of Fine Arts, Florida State University – 1974
• Major: Fine Arts, Cinematography
• Specialization: Photography and Cinematography, under direction of filmmaker Victor Nuñez and photographer Robert Fichter
• Minor: English, with concentration in creative writing, under direction of Douglas R. Fowler

97

PROFESSIONAL HISTORY

August 23, 2002 – Present
Learning Systems Institute, Florida State University
Instructional Systems Designer, Human Performance Technologist
• Assisted the redesign of a computer-based simulation for researching cognitive processes in the acquisition, retention, and transfer of complex cognitive skills, under direction of A. Aubteen Darabi.
• Researched acquisition, performance, and transfer of complex cognitive skills with computer-based simulations under direction of A. Aubteen Darabi.
• Designed and conducted experimental and naturalistic research; collected and analyzed data in experimental and naturalistic research settings in collaboration with Aubteen Darabi, Robert Branson, Mohammed Kahlil, Tristan Johnson, Zane Olina, Robert Reiser, Eric Sikorski, and Norbert Seel.
• Researched self-efficacy of post-graduate students in the use of an Electronic Performance Support System (EPSS) for performance system analysis; trained students on the use of the EPSS under direction of Aubteen Darabi.
• Assisted teaching of an online graduate level course, A Systems Approach to the Management of Change, under the direction of Robert K. Branson, including daily online contact with course participants.
• Assisted teaching of the face-to-face graduate level seminar, Alternative Views of Teaching and Learning, under the direction of Robert K. Branson.
• Designed a post-graduate, face-to-face seminar, Alternative Views of Teaching and Learning, under the direction of Robert K. Branson.

February 11, 2002 – August 23, 2002
Office for Distributed and Distance Learning, Florida State University
Instructional Systems Designer, Human Performance Technologist
• As a member of a team, designed and developed online orientation training for students in a new online Master of Social Work program.
• Consulted with School of Social Work faculty to develop instruments to assess the online program.

February 19, 2001 – February 8, 2002
Center for Performance Technology, Learning Systems Institute, Florida State University
Instructional Systems Designer, Human Performance Technologist
• As member of a team performing a statewide training system analysis for the Florida Department of Children and Families, developed data collection instruments, inspected Training Coordinating Agencies’ facilities and records, and interviewed Training Coordinators and Contract Managers throughout the State of Florida.
• Analyzed data and developed recommendations to the Florida Department of Children and Families to improve training coordination.
• Developed proposals for instructional research and development contracts under the direction of Judd Butler, Samantha Tackett, and Robert K. Branson, Director.

January 4, 1999 – February 16, 2001
Office for Distributed and Distance Learning, Florida State University
Instructional Systems Designer
• Designed and developed Web-based asynchronous courses using Blackboard© for a Master’s Degree Program in Instructional Systems with a Major in Open and Distance Learning, including the graduate courses Introduction to Distance Learning with lead faculty Donald P. Ely, and A Systems Approach to the Management of Change with lead faculty Robert K. Branson, under the direction of Barbara Gill and Yusra Laila Visser.

98

• As Blackboard© administrator, provided technical support to university faculty, creating course websites, converting courses across Blackboard© versions, and troubleshooting system discrepancies, under the direction of Gerardo Garcia.

August 1999 – December 1999
Department of Educational Research, Florida State University
Instructional Systems Developer
• Assisted in development of computer-based training (CBT) using Authorware to convert the existing Quest CBT to CD-ROM format with interactive video for the Florida Department of Corrections, under the direction of Walter W. Wager.

July 1998 – August 1998
Professional Development Centres, Florida Department of Children and Families
Instructional Systems Designer, Video Production Manager
• Developed video-based training for all child protection personnel in Florida (counselors, supervisors, investigators, and attorneys) on 1998 legislation affecting job performance for all department employees, under the direction of Dalene Miller.

May 1997 – June 1998
College of Education, Florida State University
Instructional Systems Designer, Project Manager, Video Production Manager
• Planned, designed, produced, and edited a graduate level three-credit-hour distance learning course for K-12 teachers, Effective Teaching, in videotape format, under the direction of Professor James Miller, Dean of the College of Education, and Gerry Miller, Project Director.
• Directed video crews for the acquisition of 15 course sessions and stand-up transitions.
• Edited videotape using a Sony Broadcast Editor (BVE 910) and Betacam SP videotape editing machines (BVE 75, BVE 65, and BVE 60).
• Revised the telecourse The American Community College.

April 1998 – September 1999; September 2000 – May 2001
Florida Safe Learning Environment Institute (FSLEI)
Instructional Systems Designer, Video Production Manager
• Planned, designed, and developed training via CD-ROM and videotape, as member of a training team, to increase accuracy and consistency in the reporting of disciplinary incidents by Florida’s school administrators, supporting the Florida Department of Education’s School Environment Safety Incident Reporting System (SESIR), under direction of Professor Nancy Fontaine, Project Director.
• Managed and supervised acquisition of video assets.
• Managed and supervised replication of 3,500 CD-ROMs for distribution to all Florida public schools.
• Edited 3 video programs, using social marketing principles, tailored to potential sponsors, advocates, and targets of the organizational change.

August 1995 – May 1997
Center for Needs Assessment and Planning, Learning Systems Institute, Florida State University
Instructional Systems Designer
• Worked as member of a team to implement a strategic planning process to re-validate instruction for basic recruits for the State of Florida Criminal Justice Standards and Training Commission (CJSTC), including the development of a statewide survey and the conduct of regional focus groups, under the direction of Kathi Watters, Paula McGillis, Philip Grisé, and Roger Kaufman, Director.
• Contributed to the report Use of Computer Assisted Instruction with Incarcerated Youthful Offender Populations for the Florida Department of Corrections, under direction of the lead author, Philip Grisé.

99

March 1989 – March 1990
Office of Communications, Florida Department of Education
Communications Specialist
• Member of extended communication team for Commissioner Betty Castor, serving the Florida Department of Education, under the direction of David R. Voss.
• Designed, co-produced, shot, and edited programs for broadcast on public television, public service announcements for broadcast television, short documentaries for use in videoconferences, and promotional videotapes on organizational initiatives and activities.

March 1990 – June 2004
Moving Images Media Productions
Owner, Producer, Project Manager, Director, Cinematographer, and Editor
• Consulted in strategic, instructional, and communication planning
• Conducted needs assessment, instructional design, and development of performance support tools
• Supervised videoconferencing coordination and production staff
• Developed program treatments, storyboards, and scripts
• Supervised field and studio crews
• Supervised postproduction personnel
• Recorded and edited with:
  - 35mm and 16mm motion picture technologies
  - Betacam recording equipment
  - Betacam editing facilities
Clients included:
• ABC News Interactive
• SERVE (a regional educational laboratory)
• Electronic Arts, Inc.
• Florida Department of Education
• U.S. Department of Education, Office of Educational Research and Improvement (OERI)
• Tiny Baby Productions

March 1984 – March 1989
Public Information Office, Office of the Secretary, Florida Department of Health and Rehabilitative Services
Television Producer/Director, Public Information Specialist
• Developed video communications, including a half-hour program on HIV/AIDS (1985), AIDS: Understanding the Challenge, which the Veterans Administration distributed to all of its hospitals and other parties; in all, the program reached approximately 700 health agencies.
• Developed a quarterly, half-hour, magazine-format television program illustrating programs funded through the Department of Health and Rehabilitative Services
• Developed public service announcements to increase awareness of services provided by the Department of Health and Rehabilitative Services
• Developed and maintained a network of approximately 30 cable television providers who aired the quarterly half-hour program
• Promoted the half-hour program to department employees statewide
• Developed and maintained a network of approximately 60 public affairs directors at broadcast stations that aired the public service announcements the department produced
• Published a bi-weekly tip sheet highlighting the work of health and human service workers, and distributed it to media contacts to enhance news coverage of that work
• Acted as the department's liaison to public service directors and public affairs directors at all broadcast television stations in Florida
• Established a cooperative arrangement with the Florida Cable Television Association, leading to increased cable distribution, better program promotion, and thus increased viewership of the quarterly television program

June 1974 – March 1984
WFSU-TV, Florida State University
Cinematographer/Editor
• Shot and edited award-winning programs for local, state, regional, and national broadcast, earning numerous awards from the National Association of Educational Broadcasters (NAEB)
• Operated and supervised the operation of a 16mm motion picture processing laboratory
• Conceived and photographed national award-winning (NAEB) animated station identifications and program openings

REFEREED PUBLICATIONS

Darabi, A. A., Mackal, M. C., & Nelson, D. W. (2004). Self-regulated learning of performance analysis as a complex cognitive skill: Contributions of an electronic performance support system (EPSS). Journal of Educational Technology Systems, 33(1), 11-27.

Darabi, A. A., Nelson, D. W., Paas, F., & Palanki, S. (submitted). Instructional efficiency of process-oriented and product-oriented worked examples in simulation-based training of a complex cognitive skill. Educational Technology Research and Development (ETR&D).

Darabi, A. A., Nelson, D. W., & Mackal, M. C. (2004). Instructional efficiency of performance analysis training for learners at different levels of competency using a Web-based EPSS. Performance Improvement Quarterly, 17(4), 18-30.

Lee, Y., Baylor, A., & Nelson, D. W. (2005). Supporting problem-solving performance through the construction of knowledge maps. Journal of Interactive Learning Research, 16(2), 117-131.

Lee, Y., Driscoll, M., & Nelson, D. W. (2004). The past, present, and future of research in distance education: Results of a content analysis. American Journal of Distance Education, 18(4), 225-241. (Reprinted [2005] in Journal of Library and Information Services in Distance Learning, 2(3), 45-61.)

Lee, Y., & Nelson, D. W. (2005). Design of a cognitive tool to enhance problem-solving performance. Educational Media International, 42(1), 3-18.

Lee, Y., & Nelson, D. W. (2005). Viewing and visualizing – which concept map strategy works best on problem solving performance? British Journal of Educational Technology, 36(2), 193-204.

Lee, Y., Nelson, D. W., & Lebow, D. (submitted). Introducing the Problem Attribute Typology (PAT): Structuredness, complexity, situatedness, and information richness. Educational Psychology Review.

NON-REFEREED PUBLICATIONS

Darabi, A., & Nelson, D. W. (2004). Effects of process-oriented and product-oriented worked examples, and conventional problem-solving on the transfer of complex cognitive skills with computer-based simulations. Proceedings of the Association for Educational Communications and Technology, 2004.

Darabi, A., & Nelson, D. W. (2004). Comparison of instructional conditions for novice and intermediate learners of a complex cognitive skill: Performance, efficiency, and motivation. Proceedings of the European Association for Research on Learning and Instruction, 2004.

Lee, Y., & Nelson, D. W. (2004). A conceptual framework for external representations of knowledge in teaching and learning environments. Educational Technology, 44(2), 28-36.

Lee, Y., & Nelson, D. W. (submitted). Structuredness and complexity: Two attributes of problems impacting successful problem-solving performance. Educational Technology.

Lee, Y., & Nelson, D. W. (2004). Instructional use of external representations of knowledge. Proceedings of the International Conference of the Society for Information Technology and Teacher Education, 2004.


PROFESSIONAL PRESENTATIONS

Guidelines for Selecting Appropriate Delivery Environments
Presented at the annual conference of the Association for Educational Communications and Technology, Orlando, Florida, October 20, 2005.

Training and Transfer of Complex Cognitive Skills: Effects of Process-Oriented, Product-Oriented Worked Examples, and Conventional Problem-Solving in a Simulated Environment
Co-presented with Abbas Darabi at the annual conference of the Association for Educational Communications and Technology, Chicago, Illinois, October 22, 2004 (paper cited above in Non-Refereed Publications).

Instructional Use of External Representations of Knowledge
Co-presented with Youngmin Lee at the annual conference of the Society for Information Technology and Teacher Education, Atlanta, Georgia, March 1, 2004 (paper cited above in Non-Refereed Publications).

A Taste of Their Own Medicine: Use of an Electronic Performance Support System (EPSS) by Performance Technologists
Co-presented with Abbas Darabi and Melissa Mackal at the annual conference of the Association for Educational Communications and Technology, Anaheim, California, October 24, 2003.

Effects of Process, Outcome, and Shifted Goal Orientations on Transfer of Complex Cognitive Skills
Co-presented with Abbas Darabi at the annual conference of the Association for Educational Communications and Technology, Dallas, Texas, November 14, 2002.

Verbal Protocol Analysis in a Non-Native Language Context
Presented at the annual conference of the Association for Educational Communications and Technology, Atlanta, Georgia, November 9, 2001.

Course Design: Transitioning Face-to-Face Courses to Web-Based Courses
Co-presented in a panel discussion with Shujen Chang, Juanda Beck-Jones, and Josephine Raybon at the annual conference of the Association for Educational Communications and Technology, Atlanta, Georgia, November 9, 2001.

An Ethnographic Study of an Educational Technology Classroom for Future Teachers
Authored by Scott Hampton; edited and presented by David Nelson at the annual conference of the Association for Educational Communications and Technology, Denver, Colorado, October 26, 2000.

What SHOULD Web Templates Do? The Battle Rages On
Participated in a panel discussion led by Marcy Driscoll and Walter Wager at the annual conference of the Association for Educational Communications and Technology, Denver, Colorado, October 27, 2000.

Diverse Learner Groups in Distributed Learning Contexts: A Case Study
Co-presented with Yusra Laila Visser at the annual conference of the Association for Educational Communications and Technology, Houston, Texas, February 13, 1999.

Internet Resources for Post-Secondary Distance Learning: System Interventions
Presented at the annual conference of the Association for Educational Communications and Technology, Houston, Texas, February 12, 1999.

SELECTED FILM AND VIDEO PROGRAMS

Director of Photography
• The Camera
A short narrative film about a photographer who follows his inexplicable vision, resulting in destruction and rebirth. Written, directed, and produced by Anthony Zaccaro, with Anthony Arkin and Jon Mark Fletcher.

• The Painter
A short comedic film about an eccentric artist who takes the dichotomies of essence and existence too seriously. Produced and directed by Jon Mark Fletcher. Improvised by Timothy Johnson.


• An Actor Despairs
A short comedic improvisation based on Samuel Beckett's "Waiting for Godot." For two New York City actors, Godot is not a big hit on their repertory tour of backwoods Florida. Conceived by actors Anthony Arkin and Tony Zaccaro, with director and editor Jon Mark Fletcher. Cameo by Alan Arkin.

Director, Cinematographer, and Editor
• A Nation's Challenge
Produced and supervised the Washington, DC uplink site of the six-site national videoconference. As an integral part of that videoconference, co-produced, shot, and edited a documentary on educating substance-exposed children, which explored the medical, family, social, and intellectual development of children exposed to drugs at birth. The videotape, funded by the Centers for Disease Control and Prevention, was distributed nationally as source material.
• Southern Solutions in Mathematics and Science
Six innovative approaches to teaching mathematics and science, documented in southeastern public schools. Shot and edited the program produced by Jane F. Matheny.
• A New Department of Education
The Florida Department of Education's introduction for new employees, describing the department's functions and communicating its new role. Shot and edited the videotape produced by David Rodriguez and Lorraine Allen of the Florida Department of Education.
• Snapshots of Wellness
Murray Banks' approach to wellness, advocating the drive to thrive that got him to the Ironman Classic in Hawaii and the determination that brought him to the finish line. Produced, directed, shot, and edited the videotape for the Prevention Center, Florida Department of Education.
• Passages
Six pre-school-to-school transition programs from six southeastern states present strategies and features that make the transition to school successful, highlighting Head Start and other pre-school programs that work cooperatively with public schools to share resources and information. Shot and edited the program produced by Jane F. Matheny and guided by Nancy Livesay of SERVE.
• Southern Crossroads
Harold "Bud" Hodgkinson brings demographic data and insight to a futurist's view of education in the southeastern states. Shot and edited the videotape produced by Jane F. Matheny and guided by Jan Putnal of SERVE.
• Journey Toward Change
A guide to changing schools collaboratively. Shot and edited the videotape produced by Jane F. Matheny and guided by Dorothy Routh of SERVE.
• Accountability: 21st Century Schools
Commissioner of Education Betty Castor explains her school improvement and accountability initiatives, which transformed the department from a regulatory agency into a service provider to schools and districts. Shot and edited the program with scriptwriter Connie Ruggles.
• Model Technology Schools
A program to promote Florida's leadership role in the use of technology in public schools. Shot and edited the program produced by Jane F. Matheny and guided by Shirley McCandless of the Florida Department of Education.

AWARDS

Florida State University – Instructional Systems Program
1997 Certificate of Merit for Excellent Performance in the Instructional Systems Master's Program
Received the certificate in recognition of nomination for the Gagné-Briggs Award for Outstanding Master's Degree Student


Florida Public Relations Association (FPRA)
1991 Accountability: 21st Century Schools
Received FPRA's Regional Award of Distinction for a program presenting Commissioner of Education Betty Castor's concept of accountability and school improvement
1990 Children of Triumph - Florida's Hope for the Future
Received FPRA's Golden Image Award for co-producing a documentary on drug prevention programs in Florida's public schools
1986 AIDS: Understanding the Challenge
Received FPRA's Golden Image Award for co-producing a documentary on HIV and the needs of people with AIDS

American Corrections Association (ACA)
1985 Breaking the Cycle
Received ACA's First Place Award for co-producing a documentary on child abuse prevention efforts in Florida

National Association of Educational Broadcasters (NAEB) 1981 WFSU Presents Received NAEB’s Award of Recognition for creating an animated station identification

1980 Vibrations Open Received NAEB’s Award of Recognition for creating an animated program opening for WFSU

FELLOWSHIPS AND GRANTS

Florida State University 2002 Learning Systems Institute – Awarded Dissertation Research Grant

2003 Office of Graduate Studies – Awarded Competitive Dissertation Research Grant

Florida Fine Arts Council, Division of Cultural Affairs, Florida Department of State 1987 Awarded Individual Artist Fellowship in the Media Arts Category

1988 Awarded a grant to the Tallahassee-Krasnodar Sister City Program to send musicians to perform in the Soviet Union and to document the cultural exchange on 8mm videotape and 35mm slide film
