THE FLORIDA STATE UNIVERSITY

COLLEGE OF MUSIC

THE EFFECTS OF MUSIC TRAINING AND SELECTIVE ATTENTION ON WORKING

MEMORY DURING BIMODAL PROCESSING OF AUDITORY AND VISUAL STIMULI

By

JENNIFER D. JONES

A Dissertation submitted to the College of Music in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Degree Awarded: Summer Semester, 2006

The members of the Committee approve the dissertation of Jennifer D. Jones defended on June 15, 2006.

______

Jayne M. Standley Professor Directing Dissertation

______

Jeffrey James Outside Committee Member

______

John M. Geringer Committee Member

______

Clifford K. Madsen Committee Member

The Office of Graduate Studies has verified and approved the above named committee members.

ACKNOWLEDGEMENT

I wish to thank Dr. Jayne Standley for always having answers to my questions and for supporting my research interests. To Dr. Cliff Madsen, I wish to say thank you for ‘getting me.’ It was an honor to teach with you. To Dr. Geringer, you challenged me to think harder than I even thought was possible! Thanks for the stats-induced headaches. I wish to thank my parents for the roots and wings. Thank you, Mother, for always having time to listen to me. Thank you, Daddy, for determination and great math genes. There is no way to adequately thank my husband, Jon Jones. You have had many roles – data-miner, editor-in-chief, financier, chef, computer technician, cat-feeder, and many more. Mostly, I thank you for always saying, “Yes, you can” every time I claimed I couldn’t, and “Yes, you will” when I claimed I wouldn’t. Better than better!

TABLE OF CONTENTS

List of Tables vii
List of Figures viii
Abstract ix

1. INTRODUCTION 1

2. REVIEW OF LITERATURE 5
Theoretical Framework 5
Attention Research 8
Measuring Attention and Attention as a Central Resource 8
Endogenous and Exogenous Attention 11
Attention Research with Infants and Children 12
Attention, Intelligence, and Development 14
Selective Attention Research 16
Auditory and Visual Stimuli 17
Commonalities and Synergy 17
Localization and Cueing Differences with Cross-modal Stimuli 19
Visual Attention During Auditory Distraction – Irrelevant Sound Paradigm 20
Auditory and Visual Dominance Theories 23
Dichotic Listening Paradigm 25
Dual Audio Tasks – Pitch and Duration Judgments (non-dichotic) 29
Music and Memory 30
Memory and Songs – The Influence of Auditory Structure on Serial Verbal Recall 30
Theories of Music Processing and Memory 34
Melody Recognition 35
Developmental and Training Differences 35
Pitch, Rhythm, Contour, and Timbre Discrimination 36
Searching for Melodic Targets 39
Attention During Multi-Voice Music 40
Error Detection and Expectancy as Evidence of Focus of Attention during Multi-voice Music 42
Gestalt in Music – Extracting Parts from Wholes 46
Attention to Music – Focused and Therapeutic Listening 47
Bimodal Experiences with Music – Complementary and Non-complementary Audio-Visual 50
Encoding and Decoding Music Through Visual, Auditory and Tactile Senses 52

Music Training and Memory Research 54
Gender Differences in Music and Memory Research 58
The Present Study 61

3. METHOD 63

Pilot Study Materials 63
Music (Auditory Stimuli) 63
Images (Visual Stimuli) 68
Video Development 70
Posttest Construction 70
Procedure for Pilot Study 72
Results of Pilot Study 73
Changes to Music Stimuli for Main Experiment 80
Changes to Test Construction for Main Experiment 81
Main Experiment 83
Participants 83
Design 86
Procedures 88

4. RESULTS 92

Familiarity – Ratings and Total Correct Scores 92
Perception of Attention Allocation 94
Analyses of Modality of Error Scores 95
Analysis of Question Type under Music Conditions 97
Memory Strategies 100
Analyses for Memory Decay 103
Analyses for Serial Position Effects 104
Analyses of Posttest Questions 107

5. DISCUSSION 109

Summary of Results 109
Music Training Effects 109
Recognition Versus Rejection of Stimuli in Working Memory 111
The Role of Strategy 112
Rhythm, Attention, and Information Processing 113
Contour – Similar or Dissimilar to Target Melody 114
Serial Position Effects – Expectancy Theory 115
Attention States 116
Implications for and 118

Appendix A: Main Experiment – Composition of Audio Distractors for The Bailiff’s Daughter and Pomp and Circumstance 120
Appendix B: Informed Consent Form and Approval Letter from Human Subjects Committee 127
Appendix C: Pilot Study – Pre-Experiment Questionnaire 130
Appendix D: Main Experiment – Posttest Form for Pomp and Circumstance 132
Appendix E: Main Experiment – Posttest Form for The Bailiff’s Daughter 135
Appendix F: Pilot Study – Post-Experiment Questionnaire 138
Appendix G: Pilot Study – Instructions Script 140
Appendix H: Pilot Study – Practice Test Form 143
Appendix I: Main Experiment – Pre-Experiment Questionnaire 145
Appendix J: Main Experiment – Practice Test Form 147
Appendix K: Main Experiment – Post-Experiment Questionnaire 149
Appendix L: Audio Instructions Accompanying Introductory and Instruction Slides in Experiment Videos 152
Appendix M: Introductory, Instruction, and Practice Test Slides 155
Appendix N: Main Experiment – The Bailiff’s Daughter Posttest 159
Appendix O: Main Experiment – Pomp and Circumstance Posttest 163
Appendix P: The Bailiff’s Daughter Video 167
Appendix Q: Pomp and Circumstance Video 169
Appendix R: Raw Data Spreadsheets 171

REFERENCES 226

BIOGRAPHICAL SKETCH 241

LIST OF TABLES

1. Musicianship as Defined by a Sample of Reviewed Literature 59
2. Descriptive Statistics on Pilot Data Group by Music Type 76
3. Question Analysis for The Bailiff’s Daughter – Pilot Study 77
4. Question Analysis for Pomp and Circumstance – Pilot Study 78
5. Academic Majors of the Participants 85
6. Mean Estimated Hours of Music Heard and Performed by Participant Groups 86
7. Study Design – Participant Distribution by Gender, Major, and Instruction Type 87
8. Perception of Attention Allocation to Music and Pictures for Familiar and Unfamiliar Music 94
9. Mean Correct for Each Question Type for The Bailiff’s Daughter 97
10. Mean Correct for Each Question Type for Pomp and Circumstance 99
11. Distribution of Strategies Used by Participants 101
12. Mean Total Score for Familiar and Unfamiliar Music by Strategy Type 102
13. Frequency Distribution of Strategies by Instruction Type for Music Conditions 103
14. Frequency Distribution of Total Correct Responses for Pictures by Serial Position 105
15. The Bailiff’s Daughter Frequency Distribution of Total Correct Responses for Music Measures 106
16. Pomp and Circumstance Frequency Distribution of Total Correct Responses for Music Measures 107
17. Bailiff’s Daughter Distractor Music Items Failing Criterion 108

LIST OF FIGURES

1. Pomp and Circumstance – original 64
2. Pomp and Circumstance – pilot study version 65
3. The Bailiff’s Daughter – original 65
4. The Bailiff’s Daughter – pilot study version 66
5. Hail to the Chief – original notation 67
6. Hail to the Chief – pilot study version 67
7. The Farmer’s Boy – original key and notation 67
8. The Farmer’s Boy – pilot study version 68
9. The Bailiff’s Daughter order of visual training stimuli (black images on white screen) 69
10. Pomp and Circumstance order of visual training stimuli (blue images on white screen) 69
11. Hail to the Chief order of images (green on white screen) for training stimulus, practice test 2 70
12. The Farmer’s Boy order of images (red on white screen) for training stimulus, practice test 1 70
13. Pomp and Circumstance – main experiment 81
14. The Bailiff’s Daughter – main experiment 81
15. Experimental laboratory set-up 89
16. Familiarity ratings by major interaction 92
17. Total correct scores – interaction between major and instruction 93
18. Picture errors – major by instruction interaction 96
19. Music errors – major by instruction interaction 96
20. The Bailiff’s Daughter – question type by gender interaction 98
21. Pomp and Circumstance – question type by gender interaction 99
22. Pomp and Circumstance – question type by major interaction 100
23. Pomp and Circumstance – test half by order interaction 104
24. Question #22 distractor music 107

ABSTRACT

Researchers have investigated participants’ abilities to recall various auditory and visual stimuli presented simultaneously during conditions of divided and selective attention. These investigations have rarely used actual music as the auditory stimulus. Music researchers have thoroughly investigated melodic recognition, but non-complementary visual stimuli and attention conditions have rarely been applied in such studies. The purpose of this study was to examine the effects of music training and selective attention on recall of paired melodic and pictorial stimuli in a recognition memory paradigm. A total of 192 music and non-music majors viewed one of six researcher-prepared training videotapes containing eight images sequenced with a highly familiar music selection and an unfamiliar music selection under one of three attention conditions: divided attention, selective attention to music, and selective attention to pictures. A 24-question posttest presented bimodal test items that were paired during the training, paired distractors, a music trainer with a picture distractor, or a picture trainer with a music distractor. Total correct scores, error scores by modality, and scores by question type were obtained and analyzed.

Results indicated that there were significant differences between music and non-music majors’ recall of the bimodal stimuli under selective attention conditions. Music majors consistently outperformed non-music majors in the divided attention and selective attention to music conditions, while non-music majors outperformed music majors during selective attention to pictures. Music majors were better able to reject distractor music than were non-music majors, and they made fewer music errors. However, an unanticipated effect of gender was found: females were better at recognizing paired trainers, and males were better at rejecting distractors, for both music conditions. Individually selected memory strategies did not significantly impact total scores. Analyses of sample error rates on individual questions revealed memory effects for music due to serial position and rhythmic complexity of stimuli. Participants poorly recalled the final measure of both music conditions, an unusual finding since this position is generally memorable in serial recall tasks. Simple rhythmic contexts were not remembered as well as more complex ones; the measures containing four quarter notes were not well recalled, even when tested twice. This study confirmed that selective attention protocols could be successfully applied to a melodic recognition paradigm with participants possessing various levels of music training. The effect of rhythmic complexity on memory requires further investigation, as does the effect of gender on recognition of melody. A better understanding of what makes a melody memorable would allow music educators and music therapists the opportunity to devise and teach effective strategies.

CHAPTER 1
INTRODUCTION

Can people do two things at once? When asked, most individuals readily answer “yes” or “no” to this question. Each seems to understand his/her own capacity and preference for ‘multitasking.’ Those who answer “yes” perceive that their performance is not compromised and may even be enhanced in highly stimulating environments. Those who answer “no” perceive that they perform best when completing a single task at a time. Is either of these groups correct? Researchers have investigated many aspects of this conundrum – are humans single-, double-, or multi-channel thinkers? What conditions influence performance accuracy? What stimulus properties influence the outcome of dual task events? What roles do attention, memory, and experience/training play? Researchers have discovered partial answers to many of these questions, but as questions are answered, technology advances and people are faced with new sensory environments that pose new challenges for research. Modern environments are saturated with stimuli; one prevalent environmental stimulus is music, made more readily available to listeners than at any other time in history by the iPod and other portable devices. Researchers have confirmed that music is ever-present in today’s society; we are influenced by music everywhere from work (Lesiuk, 2005) to restaurants (Caldwell & Hibbert, 2002). Many times the hearer chooses the music; other times, listeners have little control over sound environments (North, Hargreaves, & Hargreaves, 2004). Teens and young adults have reported listening to music from 2.5 or 3 hours per day to as much as 40 hours per week (Gardstrom, 1999; North, Hargreaves, & O’Neill, 2000; Radvansky, Fleming, & Simmons, 1995; Schwartz & Fouts, 2003; Tarrant, North, & Hargreaves, 2000). While listening to music is a highly valued leisure activity, it is frequently secondary to another media event, such as watching television or reading (Kubey & Larson, 1990). Therefore, listening to music can be researched in the context of dual task experiments. In fact, many aspects of music listening constitute a dual task, even when listening to the music is the primary objective. Most music contains both pitch and rhythm information.

Researchers have investigated the degree to which listeners can attend separately to each of these components (Byo, 1997; Demorest & Serlin, 1997; Sink, 1983). Additionally, music can present different timbres and degrees of intensity for listeners’ attentional foci (Madsen & Geringer, 1990; Radvansky et al., 1995; Wolpert, 1990). Songs introduce yet another stimulus by the presence of text (Bonnel, Faita, Peretz, & Besson, 2001). Performing music also provides a number of dual task opportunities, including the dual auditory tasks of listening to one’s own performance while being aware of the performance of others, and the inclusion of visual tasks when performing from notation.

Investigations of bimodal audio-visual processing have included musical and non-musical stimuli. Research on the effects of soundtracks to movies provides clarity on music’s influence upon mood (Boltz, 2001) in addition to its impact upon memory for mood-related aspects of films (Marshall & Cohen, 1988). Other research involving short tone sequences and common sounds (doorbell, duck quack) paired with visual images has revealed developmental differences in the reliance upon our eyes and ears for information (Napolitano & Sloutsky, 2004; Robinson & Sloutsky, 2004; Sloutsky & Napolitano, 2003). Sloutsky and colleagues (2003, 2004) discovered that young children (4-year-olds) relied more heavily on auditory information when encoding bimodal stimuli, a tendency termed an auditory processing bias. Additionally, children were not able to shift their attention to visual aspects when instructed, nor were they able to use this information successfully during testing. In contrast, the visually-dominant adults could shift their attention to auditory inputs successfully.

Baddeley’s (1986) components of working memory, namely the phonological loop, visuospatial sketchpad, and central executive function, provide a framework for examining recall for visual and auditory events. According to this proposal, visual and auditory events are processed in different memory sub-components, with the central executive function acting as a coordinator for incoming information. Contemporary information processing theory concurs with Baddeley’s ideas of separate stores for different incoming stimuli. Information processing theorists propose that there are filters or buffers that prevent the working memory system from overloading by prioritizing information into sensory traces, data-driven concepts, and process-driven concepts (Klahr & MacWhinney, 1998). Information processing theory also categorizes information as serial, including music, speech, and other events that unfold in time, or parallel, including many visual stimuli. While speech and music are both examples of serial auditory processing, brain studies have discovered that different areas of the brain are used when processing these events.

Through brain scanning technology, researchers have found that verbal, auditory (Mirz et al., 1999), and musical stimuli are processed differently. Generally, the left hemisphere is specialized for speech (Jeffries, Fritz, & Braun, 2003) and words (Samson & Zatorre, 1991) while the right hemisphere processes melody. Rhythm judgment did not appear to be lateralized to the left or right hemisphere (Dennis & Hopyan, 2001; Plenger et al., 1996). Curiously, though some aspects of music are processed in the opposite hemisphere from verbal data, people with musical training demonstrate superior verbal memory (Ho, Cheung, & Chan, 2003; Kilgour, Jakobson, & Cuddy, 2000). No such advantage was found for visual memory (Ho et al., 2003). Seemingly, musicians’ systematic use of their auditory attention and processing yields superior skills in the general auditory domain. However, few studies have examined how musicians compare to others when bimodal audio (musical)-visual tasks are presented to them.

Other research has focused on the differences in the male and female brain. The male brain is characterized as being designed for understanding and building systems (and extracting rules that govern systems) while the female brain is more socially oriented (Baron-Cohen, 2005). Likewise, males and females, both infants and adults, responded to music differently, particularly when under stress (Standley, 1998, 2000), and female infants have more acute hearing (Cassidy & Ditty, 2001). However, studies examining tonal memory (Norris, 2000), attention responses (Richard, Normandau, Brun, & Maillet, 2004), and mental capacity (Johnson, Im-Bolter, & Pascual-Leone, 2003) found no differences between the sexes. Often researchers assign equal numbers of each sex to participant groups without reporting differences. At this point it is not conclusive whether differences in audio-visual memory between men and women exist.

The present study was designed to investigate how musicians versus nonmusicians, and males versus females, remember paired musical (auditory) and visual components following bimodal encoding, examined in a recognition memory paradigm. The effects of attention on the recall of bimodal stimuli among the groups were tested through selective attention instructions. Additionally, this study sought to determine the differences between dual encoding of unfamiliar music with unfamiliar images in comparison to retrieval of familiar music and encoding of unfamiliar images. It was expected that differences in the recall of musical events between musicians and nonmusicians would emerge, though a difference between males and females was not projected.

The natural patterns of attention to visual and audio/musical events were examined in the group receiving no selective attention instructions. The degrees to which participants could manipulate their attention patterns to musical and visual stimuli were also tested. Encoding and memory strategies of the groups were categorized.

CHAPTER 2
REVIEW OF LITERATURE

Theoretical Framework

Researchers have long investigated how humans remember (James, 1902). Numerous forms of memory have been differentiated, including implicit and explicit memory (Schacter, 1993) and autobiographical, episodic, and semantic memory (Nelson, 1993), in addition to the more generally recognized long-term and short-term memory divisions. Few theorists debate the existence and functions of a long-term memory component, though short-term memory theory has undergone and continues to undergo revision. Alterations in short-term memory theory from the late 1950s to the early 1970s included a shift from the dominant view of memory as a “relatively undifferentiated unitary system” (Baddeley, 1976, p. 187) to one of distinct stores: an acoustically based, limited-capacity, short-term store and a more durable long-term store of considerable capacity. Even in the mid 1970s, growing evidence that sensory systems (visual, auditory, and kinesthetic) may each have a unique memory store compelled further differentiation of short-term memory theory.

Baddeley’s (1986) conceptualization of working memory in three major divisions has proven hardy enough to withstand rigorous research and has spawned years of debate. This concept of the central executive with its slave components, the phonological loop and visuospatial sketchpad, provided a framework not only for researching visual and auditory information processing, but also for how information is selected, organized, coordinated, stored, and ultimately remembered (Baddeley & Hitch, 1974).

Broadbent (1971) contributed the concept of information selection. In the initial proposal in 1958, there was a single channel with a limited capacity through which all information funneled. The capacity limit of the single channel was a function of the rate of information flow, meaning that the organism needed time to process the stream of incoming information. In order to accommodate this limit, selective perceptual processing utilized a buffer store and a filtering system (Broadbent, 1971). Information could be held in the store and selected information filtered out for immediate processing. Broadbent also contributed the concepts of vigilance and expectancy to early work in information processing.

Today, vigilance and expectancy are probably best conceived as functions of attention, including selective attention, sustained attention, and inhibiting or ignoring distracting stimuli. Jones (1999) credited Broadbent with the concept of selective attention, particularly for auditory events, which has been exhaustively researched through the dichotic listening paradigm. The role of attention during information processing has continued to be important to working memory theory development.

Treisman and Davies (1973) proposed the concept of parallel attention and processing, thereby expanding the single channel theory. In parallel processing, incoming stimuli of different modalities, or different properties of the same stimuli, can be analyzed at the same time because they do not use the same resources. Two studies conducted by Allport, Antonis, and Reynolds (1972) supported the hypothesis. Allport et al. (1972) proposed a multi-channel hypothesis after research showing that participants could accurately recall photographs presented while they were engaged in speech shadowing. A second study involved music majors who were able to sight-read piano music while speech shadowing easy and difficult prose passages. There were no significant differences on the memory posttest for the prose passages between the conditions of sight-reading at the piano while shadowing and speech shadowing alone. Furthermore, error rates during the piano task did not differ significantly in session 2 under divided or focused attention. This research clarified that when simultaneous tasks are different enough, each can be completed successfully. The allocation of attention to one task, speech shadowing, did not affect visual memory or motor performance.

Cowan (1995) proposed that attention and memory intersected and conceptualized working memory in terms of focus of attention. Cowan (1998) devised a definition of working memory as follows: “Working memory is the collection of mental processes that permit information to be held temporarily in an accessible state, in the service of some mental activity” (Cowan, 1998, p. 77). He further compared his working memory system with Baddeley’s system (central executive, phonological and visuospatial stores and processors), acknowledging the differing sensory memories for visual and auditory events. Cowan’s working memory system is composed of a capacity-limited focus of attention along with temporarily activated information in permanent memory. According to Cowan, attention to events summons long-term memory resources that allow for semantic processing (Cowan, 2005).

The use of long-term resources, such as chunking and schemata, was not a new concept. Miller (1956) proposed chunking as a means of overcoming capacity limits in short-term memory. The basic concept of chunking was that like information was perceptually grouped such that information was encoded in small groups rather than as a single string. One of the most common examples of chunking is remembering telephone numbers in groups of three or four. Researchers have investigated the degree to which chunks are rendered by stimulus features or by the cognitive processes of the observer (Crawley, Acker-Mills, Pastore, & Weil, 2002; Green & McKeown, 2001). Stimulus-driven chunking would provide evidence of a bottom-up or data-driven approach, while scheme-driven chunking would be indicative of a top-down or concept-driven approach, to use information processing terminology (Klahr & MacWhinney, 1998). Chunking obviously occurs; the degree to which it is perceptually (automatic) or conceptually (thought) driven is still under investigation.
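As an added illustration (not part of the original text), the grouping idea behind chunking can be sketched in a few lines of Python; the function name and the 3-3-4 grouping are arbitrary choices that echo the telephone-number example above:

```python
def chunk(digits, sizes=(3, 3, 4)):
    """Re-encode a flat digit string as a few short groups (Miller, 1956).

    The chunked form (e.g., 850 555 1234) carries the same information as
    the raw string but occupies fewer working-memory 'slots'.
    """
    groups, start = [], 0
    for size in sizes:
        groups.append(digits[start:start + size])
        start += size
    return groups

print(chunk("8505551234"))  # ['850', '555', '1234']
```

Whether such groups are imposed by stimulus features (bottom-up) or by the observer’s schemes (top-down) is exactly the question raised by Crawley et al. (2002) and Green and McKeown (2001) above.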

Information processing theory has developed alongside the understanding and development of computer programming. Based upon the storage units, filters, and schemata proposed by psychological theorists, programmers have designed computer models that imitate human information processing. One particular area of interest has been auditory selective attention, a topic intriguing to cognitive psychologists, neurologists, musicians, and computational engineers alike (Wrigley & Brown, 2004). Psychologists have developed and tested theories with behavioral experiments, and computer engineers have integrated information from electrophysiology and neurology. A model was developed that grouped incoming streams of intentional information and allowed for ‘leaks’ representative of the processed unintentional auditory streams. The model represented a culmination of the research on the processing of attended and unattended auditory events. Psychologists have tested this phenomenon by using multiple auditory streams, or auditory streams in addition to other input, and asking participants to divide attention across streams or select a single stream.

Myriad researchers have employed the selective attention paradigm with both auditory and bimodal (often auditory and visual) stimuli. The attention research for bimodal audio-visual and dual/multiple audio events has been framed by Baddeley’s three components of working memory (Baddeley, 1976, 1986; Baddeley & Hitch, 1974) and by contemporary theories of information processing with an elaborate system of sensory-specific filters and stores. Investigations have been designed to better understand the capacity limits for unimodal and bimodal information, particularly the unique differences between auditory and visual events. Investigations on the recall of serial and nonserial information, as well as paired or associated events, have provided understanding of the coordination processes during information processing. Researchers have manipulated encoding methods, rehearsal times and strategies, and output. Measurement of performance has included reaction or response times, accuracy rates (including hit and false alarm rates), and a number of brain scan technologies. Designs have included detection, recognition, and discrimination protocols, each contributing different and, at times, conflicting outcomes. Participants have been instructed to attend to stimuli, ignore stimuli, and divide attention across events. Through these expansive and complex investigations of attention and working memory during dual tasks or multisensory environments, much has been learned about how humans process and retain information. While educators are particularly invested in understanding how learning is differentially achieved through vision and audition, research on learning through different senses has been of interest to a broad audience. The effects of multisensory input are of special interest to music therapists and music educators. While researchers studying attention during dual audio tasks have systematically manipulated frequency, duration, and intensity in short tone sequences, fewer researchers have used actual music. Music as a stimulus provides a number of foci for attention, including rhythm, pitch, melody, and harmony, to name a few. Additionally, cognitive memory strategies can be examined using music (Madsen & Madsen, 2002). The use of music as an agent for facilitating the learning of non-musical information has been investigated in addition to the teaching of music understanding and performance. The effect of music training upon memory for verbal and nonverbal information has provided fertile ground for examining the role of experience in memory and attention.

Attention Research

Measuring Attention and Attention as a Central Resource

Auditory attention has proven to be a difficult construct to measure, though James (1902) claimed that we all know what attention is. Two tests have claimed to measure auditory selective attention, namely the Goldman-Fristoe-Woodcock Auditory Selective Attention Test and the Flowers Auditory Test of Selective Attention. Glass, Franks, and Potter (1986) compared these tests to determine if they indeed measured the same construct; the researchers found the tests to correlate (r = .44). Though the correlation was positive, the relative weakness of the relationship indicated that the construct of auditory selective attention was broad. Auditory selective attention ranges from awareness and localization of sounds, to perceptual processing of relevance, to ignoring distraction and maintaining focus of attention over time.
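As an added note for context (not in the original), the practical weakness of that correlation is easier to see as shared variance, obtained by squaring r:

r = .44  ⇒  $r^{2} = (.44)^{2} \approx .19$

That is, the two tests share only about 19% of their variance, consistent with treating auditory selective attention as a broad construct that each test samples somewhat differently.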

Despite the difficulties presented by empirical measures of attention, a study by Kahneman, Ben-Ishai, and Lotan (1973) demonstrated the validity of attention as a construct. Based upon a high number of accidents – a behavior attributed to poor attention – bus drivers were tested for auditory attention. Kahneman et al. (1973) found moderate, positive correlations between the number of accidents per year by professional bus drivers in Israel and a test of selective auditory attention. Cowan (1995, 2005) has established the importance of attention to working memory; therefore, measures of working memory may provide an estimate of one’s attention. One aspect of attention that has been frequently measured is the ability to resist distraction. Resistance to auditory distraction, referred to as the irrelevant sound paradigm (Jones, 1999), is a common technique. Beaman (2004) gave participants the Operations Span Task (OSPAN) for working memory during conditions of auditory distraction and later examined the relationship between the test scores and behavior. The OSPAN was not predictive of the irrelevant sound effect on serial or free recall of verbal material (Beaman, 2004). Irrelevant speech sounds and words affected both high and low scorers on the OSPAN. Though the relationship was not strong enough to reach significance, low-span individuals were more likely to experience intrusion from previous list trials than high-span individuals. It does appear that these tests are spotlighting the same concept, though they have demonstrated flaws.

Morey and Cowan (2004, 2005) verified attention to be a central resource in working memory. In the 2004 study, participants were asked to recite their 7-digit phone number, a random set of 7 digits, 2 digits, or no digits while examining visual arrays with 4, 6, or 8 colored squares and subsequently making same-different judgments on the visual arrays. The recitation of numbers was designed to prevent verbal rehearsal of the positions and colors of squares in the visual array by directing attention to numerical recitation. There were significant differences in scores by visual array size and recital condition, with no significant interactions between the variables. Performance was best for the smallest array size. The participants’ scores were poorest when reciting 7 random digits in comparison to the other three conditions, which did not differ significantly from one another.

The authors concluded that some shared space in working memory was used for digits and visual information. The 2005 study provided additional support for the central resource for attention. Morey and Cowan (2005) found that participants were able to make visual array judgments of same or different equally well under no digit recall conditions and silent rehearsal of digits conditions, but vocal rehearsal of the digit lists significantly impacted visual array judgments. The recitation of the to-be-remembered digits, whether before the first visual array or after, interfered with visual memory. Attention was presumably drawn away from the visual task during recall of the list, indicative of a central attentional resource (Morey & Cowan, 2005). The distracting effect of speaking was likely the result of tapping into resources from the phonological loop, particularly since digits were not disruptive to visual array judgments when silently rehearsed. The impact of the distracting spoken digits occurred regardless of the location of the distractor in the sequence (before arrays or during).

Further evidence from both studies that a central attention resource was responsible for auditory and visual information was the relationship between accuracy in both modalities. Morey and Cowan (2004) found that visual array comparisons were significantly more accurate when accompanied by correct digit list recall than when the digit recall was incorrect. The same relationship existed for correct digit recall, as correct lists were accompanied by correct visual arrays. The co-occurrence of error (and accuracy) confounded the idea of a simple trade-off between stimulus types under dual-task conditions. The same relationship was found in the 2005 study; when the digits were recalled incorrectly, accuracy on the visual array task was lower than when digits were correctly recalled. Demonstration of successful attention allocation was evinced by correct recall of both digits and visual arrays.

Research in the auditory distraction paradigm also supports attention as a major component of working memory. Berti and Schroger (2003) had listeners identify the duration of tones, the majority of which were 1000 Hz (90%) and a few of which (10%) were 1050 Hz or 950 Hz, either immediately (low-load task) or upon the arrival of the next tone (high-load task). There was a significant interaction between response times and load, as well as main effects for response times. Under the high-load condition, there was less difference in response times between standard and deviant tones. Response times were lower for both standard and deviant tones in the low-load condition, as would be expected. The greater interruption of the irrelevant pitch change in the low-load condition appeared to be triggered by the preattentive detection system; however, the task requiring greater attention reduced the sensitivity of the preattentive system. Attention mediated response times until the task load was too large. An automatic response (faster reaction time) indicated a greater ability to ignore the irrelevant dimension of pitch during duration judgments. Additionally, this research provides evidence that the salience of stimulus features can be dependent upon task load.
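As an illustrative sketch (added here, not taken from the dissertation), the structure of such a tone-duration task can be expressed in a few lines of Python; the trial count and the two duration values are assumptions chosen only for illustration:

```python
import random

def oddball_duration_trials(n_trials=200, standard_hz=1000,
                            deviant_hz=(950, 1050), p_deviant=0.10,
                            durations_ms=(200, 400)):
    """Build a trial list for a duration-judgment task in which pitch is
    task-irrelevant: most tones share the standard frequency, while a small
    proportion are pitch deviants that may capture attention preattentively."""
    trials = []
    for _ in range(n_trials):
        is_deviant = random.random() < p_deviant
        trials.append({
            "frequency_hz": random.choice(deviant_hz) if is_deviant else standard_hz,
            "duration_ms": random.choice(durations_ms),  # the attended, task-relevant dimension
            "is_deviant": is_deviant,
        })
    return trials

# Slower duration judgments on deviant trials than on standard trials would
# index involuntary distraction by the irrelevant pitch change.
trials = oddball_duration_trials()
```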

Endogenous and Exogenous Attention

While attention can be directed toward certain stimuli or specific stimulus attributes, attention can also be ‘captured’ by salient aspects of events without intentional attention shifts. Green and McKeown (2001) discussed the differentiation between endogenous attention, the top-down, voluntarily controlled attention, and exogenous attention, the attention that is largely automatic. Their research results provided evidence for stimulus-driven control of frequency selection during informative and uninformative cue trials despite the participants’ intention to ignore frequency. This processing of unintentional auditory features, or in other cases auditory streams, was precisely what challenged the computer engineers’ computational model of auditory selective attention (Wrigley & Brown, 2004).

Another study in auditory perception confirmed the role of stimulus-driven attention. Crawley et al. (2002) conducted research designed to determine differences between musicians’ and nonmusicians’ ability to use primitive (stimulus-driven, bottom-up) and scheme-driven grouping to detect single errors in 3-voice music with homophonic or polyphonic textures. The authors proposed that musicians would demonstrate better flexibility in selecting schemes due to experience, particularly the ability to attend to single lines in homophonic music despite the likely perceptual grouping of chords. However, the data refuted this hypothesis. Both musicians and nonmusicians were better at identifying subtle melodic changes in a homophonic texture, particularly when the change was chord-unrelated. The performance of both groups suffered when directed to search for melodic changes in a specific voice instead of any change in the overall texture. While musicians were significantly better at the error detection task overall than nonmusicians, Crawley et al. (2002) concluded that music training did not appear to provide musicians with the ability to override perceptual grouping tendencies but did give them better ability to use the information in error detection. While control of attention was the intention, stimulus properties evoked automatic perceptual strategies despite the intention to select a specific cognitive strategy.

Attention Research with Infants and Young Children

Sustained attention to a stimulus and distraction by other stimuli can be investigated in very young participants. Richard et al. (2004) investigated attention-getting (localization) and attention-holding (habituation) patterns of infants (5 months old) exposed to simple and complex auditory stimuli (repeating scale pattern) using a looking paradigm. The data on localization and habituation differentiated the two attention processes in response to these auditory stimuli; a progressive decrease in attention-holding but not attention-getting was observed across trials. By simply turning their heads, the infants indicated awareness of the location of presented stimuli and indicated preferences by the length of time they focused on a location. Infants preferred complex tones, as indicated by longer looking times. These differences were significant. Distinct acoustic properties influenced the sustained attention of the infants.

Infants attend to verbal and musical sounds differently. Kinney and Kagan (1976) tested the orienting responses of 7-month-old boys and girls for auditory stimuli, specifically short verbal (nonsense syllables) and musical phrases (varying in rhythm and timbre) that were presented along a continuum of variability from none to extreme. Using head turns and heart rate deceleration as indicators of an orienting response and captured attention, the researchers found support for the hypothesis that the response would be curvilinear, with moderate stimulus changes being more alerting than no change or extreme change, for both types of stimuli. However, some distinct response differences were noted. More infants vocalized during musical stimulus presentations, with great variability among the boys in the sample. Girls’ fixation responses on variable stimuli were closer to the predicted quadratic trend while boys’ responses fell into an inverted U-shape. Clearly, infants’ responses discriminated among the variable verbal and musical stimuli, with differences between the sexes emerging.

Infant attention to bimodal stimuli can also be tested. Ruff and Capozzoli (2003) investigated the attention-getting properties of audio and visual distractors in children engaged in play with toys. They compared disruption of casual, settled, and focused attention by audio-only, visual-only, or audio-visual distractors in children 10 months, 26 months, and 42 months old. Based upon the number of head turns as an indicator of the attention-getting properties of the distractor, there were significant differences among the age groups by modality of distractor. While the three age groups did not differ in responses to visual-only distractors, 10-month-old infants had more head turns in response to audio-only (2- or 3-tone sequences) and audio-visual (tone sequences plus pictures on screen) distractors.

The differences in complexity of the auditory distractor, 2 tones versus 3 tones, were evident in the 42-month-old group only. In the audio-only condition, children looked longer at the monitor after simple (2-tone) tunes, but longer looking times were documented under audio-visual conditions following complex (3-tone) tunes. These findings seem to indicate that sustained attention by resisting distraction has a developmental sequence that differs based upon the modality of the distractor. The meaningfulness of the distractor (actual tunes versus two tones) made it more salient for the 42-month-old group. Perhaps the longer looking times were a reflection of conceptual processing.

Bahrick and Lickliter (2000) provided convincing evidence that infants’ (5 months old) attention adhered to the intersensory redundancy hypothesis. The hypothesis proposed that information presented synchronously across two sense modalities was processed thoroughly as a result of focused attention to the event, versus lesser attention for unimodal presentation. Infants were presented with a bimodal training stimulus where a red hammer pounded out distinct, synchronized auditory rhythms. When presented with a novel rhythm pattern during the test phase, significantly longer looking times were noted in comparison to repetition of the training rhythm. (Infants look longer at novel presentations.) However, when training was either visual or auditory alone, there were no significant differences in looking times during unimodal test phases. Not only were the infants stimulated by bimodal stimuli, but the encoding of such events also appeared to be more thorough and provided a basis for future decision making in comparison to singular inputs. These data provide further evidence of dual processing capabilities when the encoded stimuli recruit different sensory functions.

Using a protocol similar to Bahrick and Lickliter (2000), Lewkowicz (2003) documented that infants as young as 4 months old detected changes in rhythm following audiovisual encoding of both syllables and sounds (toy hammer taps). However, only 10-month-old infants looked longer at a desynchronized audiovisual rhythm. The author concluded that these older infants were able to process the audiovisual events as a single perceptual stream rather than two separate streams. This developmental milestone would reduce the load and allow for attention to the desynchronized rhythm as a novel, meaningful stimulus. Infants from 4 to 10 months increased looking times to desynchronized, arrhythmic nonsense speech. The ability of infants to process synchronized audiovisual events is particularly important to the development of language and speech.

Preschool children are routinely exposed to media that has been designed to capture their attention. Formal design elements are not always subjected to rigorous research prior to use in media; therefore, Ritterfeld, Klimmt, Vorderer, and Steinhilder (2005) chose this topic. The researchers prepared two audio recordings of a story about an elephant: a literary style, with only introductory and closing music and a narrator reading the story, and a ‘radiophonic’ style, with introductory and closing music, sound effects, individual voices for different characters, and a narrator using vocal dynamics. Seventy-nine German preschoolers (ages 3 and 4) listened to one of the tapes, were rated by an accompanying caregiver on attentiveness and enjoyment, and were observed by the researchers for response to intentional distractors and engagement with the story. Results indicated that there were significant differences in attention, enjoyment, and periods of calm engagement between the conditions. Children were rated as more attentive by caregivers, responded to fewer intentional distractors, were rated as having enjoyed the story more, and exhibited longer periods of calm engagement when listening to the radiophonic version. The data confirmed the researchers’ attention model presuming that formal design elements (e.g., music, sound effects) would automatically engage the child’s attention; the child would in turn become aware of the emotional qualities of the protagonist, enjoy the story, and continue with voluntary attention. The automatic capture of attention began a sequence of events that resulted in endogenous attention, indicated by the length of time on task.

Attention, Intelligence, and Development

Attention has been linked to intelligence in both infants and children. Choudhury and Gorman (2000) found duration of attention to be predictive of cognitive battery scores on the Bayley Scales of Infant Development in 17- to 24-month-old infants. Additionally, infants who diverted their gaze away frequently but briefly from the tasks of stacking cups and sorting shapes completed the tasks more successfully. The infants determined to be the most cognitively mature were able to sustain attention to tasks, take needed brief breaks, and return to the task in order to complete it.

Johnson et al. (2003) tested gifted and typically developing students in grades 1 through 5 on measures of mental capacity, inhibition (effortful and automatic) of attention, and speed of processing. The tests were largely visually based or visual-verbal. With the exception of no differences between gifted and typical students in the length of time to complete highly complex tasks, there were significant differences between the performances of gifted students and typical students, and of older and younger participants, in the expected directions on most tests.

There were no differences between gifted and typical students’ scores on tasks requiring automatic inhibition, but there were significant differences on tasks requiring effortful inhibition of distractors. Gifted children were better at controlling attention and coordinating inhibitory processes, resulting in more accurate performances. On many of the tests, older typical children and younger gifted children had comparable scores.

Attention skills have a developmental pattern. A number of these studies have indicated that attention follows a developmental pattern (Johnson et al., 2003; Lewkowicz, 2003; Ruff & Capozzoli, 2003). The fact that older children in Ruff and Capozzoli (2003) were more distracted by actual tunes than by two-tone stimuli suggests that meaningful stimuli attract attention differently than meaningless ones (tones). The development of attention to auditory and visual stimuli differed, as evidenced by no differences among the ages for visual stimuli but differences noted for auditory stimuli. While no differences in resistance to visual distractors were noted among the age groups (Ruff & Capozzoli, 2003), other researchers have found differences in visual attention among different ages. Young children were able to search for visual targets based on shape or color. Merrill and Lookadoo (2004) found that children as young as 6 years old selectively searched for targets on the basis of subset properties. When searching for a specific shape and color of targets amongst like-colored and like-shaped distractors, young (2nd grade) children tended to recheck small sets, whereas older children (5th grade) and adults did not. Confidence in one’s decisions grew with age. Huang-Pollock, Carr, and Nigg (2002) researched children’s (4th graders) and adults’ (college age) ability to ignore offset distractor letters and identify target letters arranged circularly in sets of varying size (perceptual load) during time-limited exposure. There was a significant interaction between distractor type (congruent or incongruent with target), set size, and age for reaction times but not for errors. Children and adults performed with similar reaction times during target sets of 4 or 6 letters, while children were slower to respond on smaller set sizes. Children were as able as adults to resist visual interference by distractors when the set size was large, forcing an early selection of the target letter. These studies demonstrated the developmental nature of effortful attention for selection.

Research has classified various types of attention. Endogenous and exogenous attention referred to the voluntary direction of attention to a particular event and reflexive shifts of attention to unexpected, yet salient events, respectively (Spence, 2002). While various aspects of stimuli get or capture attention, sustained attention was assumed to be mostly endogenous and a result of scheme, intention, or preference.

Resistance to distraction can be used as a measure of participants’ level of attention. There are different types of inhibition – effortful and automatic. Johnson et al. (2003) found that tests for these two types of inhibition did not correlate, indicating they are distinct processes. The effect of modality on attention has been explored in infants (Bahrick & Lickliter, 2000; Lewkowicz, 2003) and adults (Morey & Cowan, 2004, 2005). From infancy through adulthood, stimuli compete for attention, which can be directed or divided. Spence (2002) summarized the excitement and conundrum of attention research when he wrote, “Understanding the mechanisms by which people simultaneously divide and/or focus their attention between different modalities and locations is one of the most exciting problems facing the emerging discipline of cognitive neuroscience” (p. 57).

Selective Attention Research

The ability to pay attention to a single stimulus among a stream of stimuli (e.g., reading a menu in a noisy restaurant), or to a specific feature of a compound stimulus (such as the letter S that is printed in green), or to several stimuli at once has been extensively researched. A number of terms appear in the research, such as divided attention, selective attention, directed attention, and full attention. Each essentially refers to one of two attention conditions, selective or divided attention. Selective attention refers to the endogenous focus of attention upon a single target stimulus amongst the target and one or more distractor stimuli, or attention to one aspect of a compound stimulus. Divided attention refers to dual attention to more than one stimulus or more than one aspect of a compound stimulus. Researchers have investigated selective attention and divided attention in the verbal, visual, auditory, and kinesthetic senses separately. Historically, costs in performance time and accuracy are found within modalities, while the research between modalities has yielded conflicting results.

For example, visual objects often have two distinct characteristics, such as shape and color. Bonnel and Prinzmetal (1998) conducted divided and selective attention research using two visual characteristics of an object, color and shape. When the color and shape of an object were presented in the same location, a positive correlation between the correct responses to each characteristic was found. Thus, when a participant correctly identified color or shape, s/he identified the other characteristic accurately as well. However, when color information and shape information were separated on the computer screen, a negative correlation was found. Another of the authors’ interesting findings was that 50% of the participants’ responses under selective attention instructions (80% to color-20% to shape; 80% to shape-20% to color) did not reflect the instructions.

The conclusion was that each participant was unable to direct his/her attention to a single aspect when dual characteristics of visual objects shared a spatial location. The viewer processed both, as both were perceived simultaneously.

There is a direct relationship between visual attention and visual perception: one’s eyes must be focused upon the stimuli in order for processing to occur. This is one of many differences between vision and audition that have been explored. Schon and Besson (2002) investigated dual encoding of pitch and rhythm in the visual domain through music notation. Proficient music readers were presented with a probe containing a key and time signature. Under single encoding conditions, either the key or time signature was colored (red or green) to indicate attentional focus and response to a pitch or metric test item. During dual encoding conditions, the probe was blank and the response cue was either green or red, indicating either pitch or meter as the test item. Participants were monitored with an EEG collecting ERP data during the behavioral response task. Regardless of encoding condition, response times on congruent (test matched probe) and incongruent items were significantly different. Congruent items had lower response times, and the ERP data were different for congruent and incongruent items. There were significant interactions between congruency and task (pitch or duration) for both encoding conditions, with pitch judgments occurring faster in congruent conditions in comparison to duration judgments. There were significant differences in error rates by encoding condition, with more errors in the dual encoding condition, but the irrelevant stimuli had no effect on response time or ERP data.

Auditory and Visual Stimuli

Commonalities and Synergy

Some aspects of stimuli are common to both auditory and visual stimuli. For instance, duration judgments can be made of both auditory and visual signals. Goldstone and Goldfarb (1964a, 1964b) found that visual (light) duration was judged shorter than auditory (fixed tone) duration. These differences were found across a number of replications involving within-subjects, between-subjects, and cross-modal judgments of comparison to standards (1964b). Additionally, intensity, a property common to both visual and auditory stimuli, was found to influence visual judgments (dimmer lights judged shorter than brighter lights) but not to influence auditory judgments (no differences in duration judgments for loud and soft tones). The ability to judge auditory duration may appear earlier in development than visual judgments (Goldstone & Lhamon, 1972).

Penney, Gibbon, and Meck (2000) concluded that auditory signals stimulated a central timing mechanism directly, as a number of their research participants indicated that an auditory standard was held in mind and used for comparing audio and visual test items. The aspect of timing was unique to auditory stimuli. Kubovy and van Valkenburg (2001) described auditory objects as pitch-time objects, whereas visual objects are space-time objects.

While the processing systems for vision and audition have common and distinct elements, at times a synergistic relationship has been found between the two. Gopher (1973) found that participants moved their eyes in the same direction as the attended ear during dichotic listening exercises. This seemed to function as a selective attention method, as the same eye movement did not occur when single messages were presented to either the right or left ear. Reisberg, Scheiber, and Potemken (1981) found eye gaze toward the relevant auditory source facilitated recall of verbal material. McDonald, Teder-Salejarvi, and Hillyard (2000) found that participants whose attention was involuntarily directed left or right following a pink noise sound burst had improved visual acuity, detecting targets more quickly. Merikle (1976) found that recall by position of visually encoded letters was better when the serial position was probed aurally rather than visually. Across three experiments that varied the time between input and output, the interference of the visual probe and the superior performance under auditory probing remained. Soto-Faraco and Spence (2002) identified an attentional blink, the phenomenon where the second of two targets is not processed if in close temporal proximity to the first during rapid stream input, for both visual and auditory stimuli. The attentional blink was not observed when the two targets were presented cross-modally. Recall was facilitated if the second target switched to the opposite modality of the first. Olivers and Nieuwenhuis (2005) found that participants who were engaged in free association (thinking about a holiday or shopping with a friend) or listening to up-tempo instrumental music during a 2-digit recall task did not demonstrate the attentional blink (forgetting the second item in a series when presented in close proximity to the first). The authors speculated that this could be due to arousal or an increase in positive affect; however, the cross-modal nature of auditory stimulation (music) could be an alternate explanation.

While Baddeley’s model suggested that auditory and visual stimuli have separate stores and processors, Jones (1999) postulated that events are not represented in separate memory stores for verbal, spatial, visual, and auditory information. Jones (1999) contended that the emerging theory was that representation transcended modality.

18 emerging theory was that representation transcended modality. Later research supported this theory. Studies where participants must switch between modalities routinely find consequences, perhaps a reflection of the consequences of interactions in the brain where multimodal processing converges, such as superior colliculus, parietal lobe, and insula-claustrum region (Spence, 2002). Localization and Cueing Differences with Cross-modal Stimuli Spence and colleagues (2000, 2003) provided further evidence of information processing. Spence, Ranson, and Driver (2000) conducted a series of experiments where visual and auditory information streams were presented simultaneously in the same spatial location or in different locations. Significant differences were found with the same spatial location facilitating verbal (speaking words) and visual (identifying target shape) performance. Additionally, irrelevant auditory sounds were more difficult to ignore when presented in the same spatial location as relevant visual information. Spence and Read (2003) found participants recalled more relevant words during speech shadowing when the auditory source was located in front of their heads rather than to the side. This significant difference was maintained and strengthened when the participants were engaged in a driving simulation task. These data contradicted popular workload models of information processing that indicate there are separate processing resources for auditory and visual information. Spence (2002) summarized the attention shifts between visual and auditory senses by writing, “Behavioral, electrophysical and neuroimaging data now converge on the view that shift of attention (no matter whether it is elicited exogenously or endogenously) in one sensory modality to a particular location results in a concomitant shift of attention in other modalities to the same spatial location” (p. 62). Researchers have documented the amount of time it took observers to respond to spatially and nonspatially located targets that were cued within the modality of the target or in another modality. Turatto, Benso, Galfano, and Umilta (2003) hypothesized that an automatic draw to the modality of a presented stimulus would delay response times for a cross-modal second stimulus. They confirmed this through a series of five experiments where stimulus 1 was either visual (red or green light) or auditory (900 or 1800 Hz tones) and stimulus 2 was either in a congruent or incongruent modality. Consistently, there were significant differences in response times by modality conditions. When stimulus 2 was auditory, response times were quicker than when stimulus 2 was visual. When stimulus one and stimulus two were modality congruent,

19 response times were shorter than under cross-modal conditions. The data did not support previous findings where auditory stimuli were more alerting; in fact, visual cues were more effective for a subsequent visual target than were auditory cues. The cross-modality response times were mediated by short, medium, and long stimulus onset times. Schmitt, Postma, and De Haan (2000) conducted six experiments examining spatial attention with auditory and visual targets and stimuli using detection, localization, and discrimination tasks. Response times for detection were significantly faster when the target was auditory, regardless of the modality of the cue. However, when localization was the response mode, visual targets produced significantly faster response times. Furthermore, auditory cues were as effective as visual cues in shifting attention to location. Discrimination tasks demonstrated that cross-modal cueing existed for auditory cues and visual targets, but not vice versa. Turatto, Mazza, and Umilta (2005) confirmed that two different tones, 1000 Hz on the left speaker and 2000 Hz on the right, separated by 150 ms, are perceived as two separate objects. When participants heard the aforementioned tones, performance on spatially cued visual objects suffered. When the erroneous visual cue for visual targets appeared in the auditory object, the response time was significantly higher than when the erroneous visual cue did not fall in the auditory stream. However, when both speakers emitted a 1000 Hz tone, the differences in response time failed to replicate. Visual cues were more effective and erroneous cues better ignored when the auditory stimuli were a single object instead of two perceptual streams. This reduced the attention load and improved response to visual targets.
Visual Attention During Auditory Distraction – Irrelevant Sound Paradigm
Other research where auditory and visual stimuli are presented simultaneously has been conducted using auditory distraction during visual selective attention, known as the irrelevant sound paradigm (Jones, 1999). The irrelevant sounds have been studied with visual and verbal inputs during serial recall and non-serial recall. Distracting sounds have been presented during encoding, rehearsal, and output with different degrees of memory loss based upon the phase of presentation. Jones (1999) concluded the interruption in memory from irrelevant sounds was the result of automatic processing. The interruption was more damaging to serial recall during rehearsal of newly learned information, likely because the temporal nature of sound and the ordering of verbal material utilize the same processors. The phonological loop was identified as processing seriation and temporally organizing sounds.

20 An early ERP study sought to determine the role of the olivocochlear bundle during focused visual attention in the presence of irrelevant sounds. Lukas (1980) proposed that under conditions of focused visual attention, the olivocochlear bundle would act as a filter for concurrent irrelevant auditory stimuli. As support for his hypothesis, he reported that Wave 5 of the ERP, said to be a measure of the inferior colliculus, was reduced when participants directed their attention to visual stimuli in comparison to attention directed toward an irrelevant sound stream. A second experiment using two different irrelevant tones in a repetitive manner replicated the finding. While his research documented suppression as a result of attention, Lukas (1980) acknowledged the likelihood that the olivocochlear bundle was not the only mechanism involved in attentive behavior. Researchers have found that the construction of the irrelevant sounds, both their patterns and their content, impacted visual memory differentially. Tremblay, Macken, Jones, and colleagues (Hughes, Vachon, & Jones, 2005; Macken, Tremblay, Houghton, Nicholls, & Jones, 2003; Tremblay, Macken, & Jones, 2001) conducted a series of experiments to test the changing-state hypothesis. Once an irrelevant auditory stream established a predictable pattern, the effect on serial recall was reduced. Tremblay et al. (2001) investigated the hypothesis that the changing-state phenomenon applied to noise, and that serial memory for visually presented digits and shape sequences would be disrupted. Participants encoded a series of digits or shapes under quiet, repeated noise pattern, or changing noise pattern conditions and wrote down digits or indicated the order of shapes on a computer. There were significant main effects for auditory condition and serial position without an interaction for both types of stimuli. For both, the quiet and repeated sound conditions were not different from one another, but both differed from the poorest performing condition, changing noise. The serial position effects were typical primacy-recency effects. Since the noise bursts shared no content with either the verbal or the nonverbal sets, the disruption was due to the similar processing required of serial order and variable temporal sequences. In another study, Macken et al. (2003) used high and low pitch streams presented at slow, medium, and fast speeds as the irrelevant sound. They found that the performance on a serial verbal (letters) memory task improved between medium rate presentation and high rate presentation of alternating high-low pitch stream conditions. Macken et al. (2003) concluded that increasing rate changed the variable high-low streams into two unchanging streams - one high

21 and one low in pitch. Thus, the interference effect on participant performance in the high rate condition was not different from the quiet and steady stream interference conditions. The effect of two steady streams generating less interference than a single alternating stream was consistent with the findings of Turatto et al. (2005). Hughes et al. (2005) tested the effect of a temporal shift in an otherwise unchanging auditory stream during encoding and rehearsal of visually presented digits. Their study provided more data that implicated the changing-state effect on serial recall of digits. The study included the addition of a temporal shift to steady state and changing-state irrelevant auditory streams. Recall was affected similarly by the temporal shift in both types of irrelevant auditory stimuli during encoding, but the effect was not observed if the temporal shift occurred during a rehearsal period. The changing-state hypothesis, or greater disruption by a changing auditory stream, was supported during encoding, while the same phenomenon was not observed during rehearsal (the lapse between encoding and writing digits). Tone sequences were a frequent audio distractor set; however, the participants themselves have also been the source of distracting audio. Hughes and Jones (2005) found that spoken to-be-ignored random numbers were more disruptive than spoken consonants or numbers in congruent order (offset by 2 digits) to the serial recall of a visually presented set of 8 numbers. These differences were significant and confirmed an irrelevant sound effect when to-be-ignored stimuli are drawn from the same set during serial recall. Morey and Cowan (2004, 2005) found visual array judgments affected by the participants' recitation of numbers. Specifically, recitation of a random set of 7 numbers was significantly more disruptive to visual array judgments than was recitation of the participant's own phone number, 2 digits, no digits, or silent rehearsal of digits. A secondary auditory tracking task was used as a distraction during a series of associative memory tasks. The continuous response task (CRT) required the participant to push a button when a certain frequency was detected in a stream. This task was undertaken while the visual stimuli were presented. Naveh-Benjamin, Guez, and Marom (2003) conducted a series of experiments testing their hypothesis. They proposed that divided attention due to competition from an auditory CRT during encoding would disrupt associative memory mechanisms binding items of episodic memory together. The carefully crafted experiments tested participants' recall for paired word-nonword combinations, related word-word and unrelated word-word combinations, and word-font-type combinations. Words were presented visually in some studies

22 (and aurally in others) with divided attention participants completing a CRT in the opposite modality from the presented stimuli for encoding. There were significant differences between the scores (hits minus false alarms) of participants in full attention and divided attention conditions as well as scores on item tests versus association tests but with no significant interactions. Participants in full attention outscored divided attention participants on all recognition tests. Associative memory for paired stimuli was not differentially affected under the distracting conditions. Generally, associative memory was poorer than item memory, perhaps indicating prioritization in memory systems. Another relevant finding of the study was that the differences between the scores of divided attention and full attention participants were the least under recognition testing in comparison to free or cued recall conditions. The recognition of encoded words was an easier task than free or cued recall, thus the sensitivity of the method to detect true differences was lessened. Other researchers have found similar results between recognition and identification. Bonnel and Hafter (1998) asked participants to monitor simultaneously a light and a sound and report if there was a change (detection) and the direction of change (identification). Participants were given instructions to focus attention selectively (80% on one stimulus and 20% on the other) and divide attention equally between light and sound. There were significant differences between detection and identification; detection was an easier task, and no apparent cost was demonstrated under selective or divided attention. However, identifying the direction of change was different for participants under divided and selective attention conditions. Auditory and Visual Dominance Theories An early theory was that the visual sense was dominant. It was believed to be the most efficient means of encoding data, though recent discoveries seriously challenge this notion. Klein (1977) tested and refuted the visual dominance phenomenon. In fact, kinesthetic sensory process was more efficient in some ways than vision. When participants were attending to auditory stimuli, attention was switched to kinesthetic stimuli significantly faster than to visual. There were no significant differences in the switch rates from vision or kinesthetic sense to auditory. As with many studies to come, during bimodal processing of visual and kinesthetic inputs error rates and response latencies were higher indicating that while one attempts to direct attention to a single modality, interference by the to-be-ignored modality was documented. Klein (1977)

23 argued that the visual dominance identified by earlier studies likely reflected a habitual bias to selectively attend to vision since it has poorer alerting qualities than other senses. Several decades later, Larsen, McIlhagga, Baert, and Bundesen (2003) found no significant differences in participants' abilities to identify letters presented visually and aurally under selective and divided attention conditions. The rates of accuracy were essentially the same between the modalities; additionally, there was no relationship between correctly identified visually or aurally presented letters under divided attention, unlike other studies that found correlations during dual recognition (Bonnel & Prinzmetal, 1998; Morey & Cowan, 2004, 2005). However, participants in the divided attention condition displayed modality confusion, reporting that visually presented letters were spoken and vice versa. Recently, Sloutsky and colleagues proposed an auditory dominance theory in children (Napolitano & Sloutsky, 2004; Robinson & Sloutsky, 2004; Sloutsky & Napolitano, 2003). Sloutsky and Napolitano (2003) conducted a series of experiments with four-year-old children and adults investigating the processing of visual and auditory stimuli equated for discriminability and salience. Participants were taught to identify a target stimulus (Vis1Aud1) containing a picture (foliage) and a three-tone sequence and reject a distractor (Vis2Aud2). Then, the bimodal target stimulus was split such that the target visual stimulus was presented with a new auditory stimulus and the target auditory stimulus was presented with a new visual stimulus. These two types of stimuli were presented together, and the child pointed to the stimulus (adults typed responses on a computer keyboard) he/she judged as the target. Results indicated that children relied upon the audio component while adults relied upon the visual. Differences in the number of trials where judgments were based upon the visual or auditory modality by children and adults were significant. These findings were replicated in three subsequent experiments where only same-different judgments were made and visual stimuli were changed from photographs of foliage to shapes. Additionally, when only visual components were trained and tested without audio, children demonstrated above chance accuracy during testing. Robinson and Sloutsky (2004) further investigated the auditory dominance of young children found in Sloutsky and Napolitano (2003). Experiment IA replicated the auditory preference of young children (4-year-olds) while Experiment II replicated the auditory dominance among 8-month-, 12-month-, and 26-month-old infants using looking times as the dependent measure. Experiment IB eliminated the auditory dominance in 4-year-olds by

24 replacing the green three-shape geometric patterns used as visual stimuli with a red triangle and a green cross. These stimuli would likely generate a verbal label, whereas the three-shape patterns were chosen because they would likely not. The researchers suggested that these data indicated that the auditory over visual dominance was stimulus dependent and children could shift attention to either component. Two experiments tested this hypothesis: Experiment IC, in which bimodal encoding but single, non-preferred modality testing was used, and Experiment III, in which children were given instructions to selectively attend to non-preferred modality stimuli during the bimodal encoding. Results from both of these experiments indicated that the auditory processing dominance was largely automatic under bimodal (visual and auditory) conditions for children, while adults could shift from visual to auditory modality efficiently. Napolitano and Sloutsky (2004) continued to investigate the effects of stimulus familiarity on modality dominance. They concluded that the children (four-year-olds) studied "exhibited modality dominance: They processed information presented in one modality, but not both. Furthermore, modality dominance was moderated by familiarity" (p. 1865). The research also pointed to automatic processing in children since few could selectively attend to non-dominant modalities, while adults had no difficulties. Children functioned best when their preferred modality (typically auditory) was the directed stimulus for attention. Adults were flexible and could attend selectively to either modality and use the information successfully.
Dichotic Listening Paradigm
Selective and dual attention research has been conducted within the auditory modality using the dichotic listening paradigm. Reaction times are often measured while participants make judgments about to which ear targets were presented or identify a target by feature (frequency, duration). Woods, Alain, Diaz, Rhodes, and Ogawa (2001) found that reaction time to frequency (high or low pitch) was significantly faster than reaction times for location (right or left ear presentation), with conjunctive targets falling in between the two at very short inter-stimulus onsets. Selective attention and response to frequency regardless of location was an interesting finding; perhaps pitch received greater priority over location when the auditory input was complex. The effects of aging on dichotic listening have been investigated. The response times for both young (mean age 22) and older adults (mean age 74) suffered when targets were defined by two features (presented to right or left ear, high or low pitch) as compared to a single feature (presented to right or left ear) (Gaeta, Friedman, & Ritter, 2003). Interestingly, there were no

25 significant differences between the adult groups in response times or hit (accurate answers) rates for single or conjointly defined targets. However, data collected by event-related potentials (ERPs) indicated that the brains of older adults responded differently as a result of the load on working memory. The dichotic listening paradigm has been used with children and in conjunction with event-related potentials (ERPs) collected with electroencephalograms (EEGs). Bartgis, Lilly, and Thomas (2003) engaged 5-, 7-, and 9-year-old children in a selective auditory attention experiment while monitoring ERPs through an EEG. Children were told they were searching for a rabbit that was represented by a target tone and were not concerned about guards represented by a standard tone. Children monitored right and left ears selectively and were to press a button when the rabbit was heard in the attended ear only. The task difficulty was equated among the age groups by varying onset of auditory stimuli. The hits and false alarms to standard tones in the attended ear and targets in the unattended ear were analyzed. ERP data were examined for negative difference (Nd) and P300 (P3) representing attention to relevant stimuli and allocation of attention and processing resources, respectively. Results indicated that there were significant differences among the hit rates (accuracy) among the age groups; the older two groups did not differ but were both superior in hit rate to the 5-year-olds. While no significant differences emerged in false alarms to targets in the unattended ear, 7-year-olds’ false alarm rate to standards in the attended ear emerged as significantly different than older and younger counterparts. Additionally, the amplitude of the P3 component of the ERP data correlated with hit rate indicating increased attention resources co-occurred with higher hit rates. Other studies conducted with a conjunction of dichotic listening and data collected by brain scanning provided support for working memory and attention theories. Benedict et al. (1998) found significant differences in the response times of participants tracking a syllabic target (‘ba’) in a stream of distracting syllables during no competing text, ignore competing text, and divided attention between text and targets. Response times increased across the conditions. However, similar brain structures were activated during focused (ignore text) and divided attention conditions. Citing Posner’s model for visual attention, Benedict et al. (1998) confirm that auditory attention uses the same anterior attention network that visual attention uses. Using dichotic presentation of targets and deviants to 10-year-old children who were typical or diagnosed with attention-deficit/hyperactivity disorder, Kemner et al. (2004)

26 documented through ERP data that the task involved activity in the primary and secondary auditory cortex for both groups. These data appeared to disconfirm hypotheses conjecturing that attention-deficit is a defect of selective attention in the executive functioning located in the frontal lobes, at least for the specific auditory selective attention paradigm of the research. Working with adults, Woldorff et al. (1993) documented that focused auditory attention exerted selection control over sensory processing in the auditory cortex during a neuromagnetic imaging study using the dichotic listening paradigm. They concluded that these results added strong support for psychological theories of attention that indicate that attention acts as a filter and reduces processing of certain incoming stimuli. Some dichotic listening studies were designed to test the ear-advantage hypothesis. The ear-advantage hypothesis proposed that certain auditory stimuli were better recalled if presented to the ear contralateral to the hemisphere where the information was processed. A right-ear advantage was proposed for verbal material, and a left-ear advantage for melodies. Obrzut, Boliek, and Obrzut (1986) conducted research using the dichotic listening protocol with 9- to 12-year-old children and examined words, digits, syllables, and melodies under free recall, directed to left ear, and directed to right ear conditions. The purpose of the research was to delineate the effect of attentional bias on recall and to clarify the mechanisms for the so-called 'ear advantage.' For both the verbal stimuli and the musical stimulus, a number of significant interactions among the variables were found. The significant interactions did not provide a simple answer; in fact, they did indicate that stimulus properties and attention directions strongly influence perception, resulting in myriad results from research in the dichotic listening paradigm. Music has also been used in dichotic listening paradigm research. One use of music in the paradigm was as an arousal agent prior to dichotic presentation of digits to children. Morton, Kershner, and Seigel (1990) had children listen to 5 minutes of music prior to dichotic listening trials. There were significant differences in the total recalled (written) digits in the free report condition following music exposure versus a quiet condition. Additionally, significant differences were found in the rate of intrusions from the non-attended ear on trials following exposure to music. Other research has presented melodic phrases to music majors who had to select the notation that correlated with heard melodies. Cook (1973) dichotically presented two different 4-second melodic phrases to music majors and asked them to indicate which melody was heard among 4 choices of notated melodic phrases. The melodies presented to both ears were among

27 the choices. There were significant differences in the correct melody identification by ear; more melodies presented to the left ear were identified than were the right. Gordon (1978) also found a left ear trend in a dichotic chord listening study in 100% of the male (N=12) and 8 of the 12 female musicians. Nonmusicians have also participated in musically based dichotic research. Hall and Blasko (2005) conducted a dichotic listening experiment where participants heard either an E-flat clarinet or violin timbre (both electronically generated) on a C4 or F-sharp 4 pitch in one ear or both ears. Participants were asked to identify the timbre directed to an assigned ear under single ear conditions or while ignoring the other ear during dichotic tasks. During dichotic tasks, the instrument in the to-be-ignored ear could be the same as the target or different from the target timbre. Participants also took the Operation Span test (OSPAN) and were subsequently divided into high working memory and low working memory sub-groups. Results revealed significant interactions between working memory scores and instrument conditions (single, dichotic – same, and dichotic – different) for both accuracy and response times. For those in the low working memory group, response times were significantly different for single, dichotic-same, and dichotic-different conditions. There were significant differences between response times for the dichotic-different and the other two tasks, single task and dichotic-same were not significantly different from one another, for the high working memory group. Participants with higher working memory scores experienced less interference by a different timbre sent to the ignored ear resulting in faster response times and greater accuracy. Hall and Blasko (2005) documented each participant’s music training experience and compared it with the scores on the working memory test and from the experiment. Participants in the study with instrument playing experience were distributed equally among the low and high working memory scores. There was no relationship between years playing an instrument and working memory scores, r (54) = .032. There were weak, positive correlations between years playing an instrument and accuracy during timbre judgment trials; Pearson correlation coefficients for single instrument, dichotic-same, and dichotic-different were .20, .28, and .38. Persons with instrument playing experience experienced less interference during different timbre conditions. Hall and Blasko (2005) concluded that these data indicate that working memory was an executive control function of auditory attention since those with high working memory stayed on task more effectively (less distracted) than those with less working memory.

28 Dual Audio Tasks – Pitch and Duration Judgments (non-dichotic)
Selective attention research in the auditory realm has been conducted outside of the dichotic listening paradigm as well. Participants typically search for targets among distractors that vary in pitch, duration, or intensity. Dalton and Lavie (2004) found that distractors with a single varying acoustic feature automatically captured attention. Reaction times for targets were significantly longer during target detection (present or absent) and target identification (judgment) trials when a single varying distractor was present. Variations in frequency and intensity caused attentional capture. While louder sounds could arguably have an alerting effect, lower intensity distractors caused the same interference. Novel acoustic features of irrelevant sounds captured auditory attention. Mondor and Terrio (1998) conducted a series of five experiments designed to evaluate the possibility of automatic perceptual processing of auditory events beyond the attention strategies of listeners in the realm of rhythmic pattern detection. Listeners were to indicate their detection of a target stimulus that varied in length within an ascending or descending tone pattern. The target, in addition to duration, could be presented as expected in the sequence, higher than expected, or lower than expected. There were significant differences in duration judgments as a result of target properties. Reaction times were slower when the target was unaltered in comparison to raised or lowered targets. Error rates on duration judgments also varied significantly by pitch height. The researchers' conclusion that listener judgments were based upon general pattern structure and not solely upon the to-be-attended aspect was replicated with alterations in the intensity of targets, presence or absence of target judgments, and under conditions of varying predictability of direction of the auditory stream. Despite the listeners' intention of selectively attending to duration, properties of the total auditory stream were processed and influenced response and error rates. Schroger and Wolff (1998) asked participants to identify tones by duration and press a button when a long tone was presented among short and long probes. Task-irrelevant frequency changes, including small (50 Hz), medium (200 Hz), or large (500 Hz) changes, were inserted as potential distractions in the duration task. There were significant differences in response to target duration under frequency distraction conditions for response time, hit rate, and false alarms. Response times lengthened as the difference in frequency between standards and deviants increased, with the shift from medium to large frequency variation more detrimental than the

29 increase from small to medium. Hit and false alarm rates followed the same pattern. Schroger and Wolff (1998) attributed the distraction effects, even with the smallest deviants, to the change-detection mechanism. An automatically operating system within the auditory sense was cued when changes in the stimulus occurred, despite their irrelevance to the task of duration judgment. However, Schroger and Wolff (1998) found that when frequency changes were presented at random during duration judgments, a benefit in response time was observed in comparison to the standard-deviant experiment. The authors posited that duration and frequency are not processed completely independently of one another. Listeners were capable of finely tuned attention to features of auditory stimuli, particularly once a basis for comparison had been established. Perhaps once the listener knew what to search for, attention was directed toward other aspects.
Music and Memory Research
Memory and Songs - The Influence of Auditory Structure on Serial Verbal Recall
Music is a unique combination of pitch and duration (rhythm). Despite the lags in response time when listeners made dual task judgments of these aspects during dichotic listening studies, the facilitation of recall for text set to music has been attributed to the pitch-rhythm combination of music. This facilitation effect would seem to conflict with the research on the changing-state effect during irrelevant sound (see Morey and Cowan, 2004, 2005). Once irrelevant sounds established a pattern, they were no longer influential in memory loss but did not improve recall of verbal material. Clearly, a song is a unique stimulus that influences memory in a substantially different way than unrelated streams of visual and auditory stimuli do. In fact, researchers have proposed that text and tune act upon one another, creating a unique, inseparable unit (Crowder, Serafine, & Repp, 1990; Serafine, Davidson, Crowder, & Repp, 1986). Serafine, Crowder, Repp, and Davidson (Crowder et al., 1990; Serafine, Crowder, & Repp, 1984; Serafine et al., 1986) conducted a series of experiments that culminated in the integration theory. Serafine et al. (1984) proposed that the melodies and texts of folk songs are "integrated to a considerable degree" (p. 300). Participants recalled songs presented in their original forms as test items better than when test items were mismatches (original tunes and texts in new combinations) or new words or tunes were presented. The differences in recognition rates were significant. These differences appeared after participants heard novel folk songs only one time. The results were replicated with nonsense words (Serafine et al., 1986). There

30 were significant differences in the recognition rates in comparison to melodies that were hummed without words or presented with mismatched, nonsense text. The verbal aspects of songs, even when nonsense, facilitated melody recognition under the originally encoded condition (Crowder et al., 1990; Serafine et al., 1986). There are also data that refute the integration hypothesis and document the separateness of words and melodies. Bonnel et al. (2001) investigated the degree to which semantic and melodic errors could be detected by musicians (music majors) and nonmusicians (not music majors) asked to focus their attention on one or both error types. Brief excerpts (16 seconds) from French operas were recorded by a trained soprano singing without accompaniment. The excerpt was either performed as it was originally composed or changed in one of the following three ways: 1) a distractor word that fit rhythmically and rhymed but was semantically incongruent, sung on the original note; 2) an original word with a new note that violated tonal expectancies; or 3) changes to both the word and the note as described. Participants were assigned to one of three attention conditions: identification of the semantic errors only, identification of the melodic errors only, or a dual task condition where both were identified. Results indicated that there were no significant differences in the responses under single or dual task conditions, and there were no significant differences in the overall identification of melodic and semantic errors. The researchers concluded that words and melodies formed two separate objects that could be divided and attended to separately. Research with participants with right or left temporal lobectomies confirmed their hypothesis from a neurologic standpoint. Samson and Zatorre (1991), using stimulus tapes with unfamiliar folk songs generated by Serafine et al. (1984), found that the left temporal lobe was used for melody recognition decisions when words were present but not when words were absent. Participants with right temporal lobectomies showed significant deficits in recall of tunes regardless of the presence or absence of text. This research confirmed conclusions of Crowder et al. (1986) that words influenced the melody and further supported a dual encoding (integration) hypothesis for song components. Other researchers have used unfamiliar songs and asked participants to recite or write words verbatim. Participants in McElhinney and Annett's (1996) research heard an unfamiliar "pop" song in either its original sung version or in a spoken condition that maintained the phrasing and timing of the sung version and recalled the words across 3 trials. The researchers

31 totaled the number of words per trial and the number of words per "theme" or unit as a measure of information chunking. There were significant differences in the number of words per chunk between the two conditions, with participants in the sung condition recalling more words per chunk. The facilitation of the music appeared to be in the perceptual grouping (chunking) of text. Musicians appear to be uniquely disposed to use schemata for serial verbal recall. Kilgour et al. (2000) investigated the effect of music training on recall of text presented in sung or spoken conditions. The researchers found significant differences between the recall scores of musicians (M = 10.8 years of training) and nonmusicians for both sung and spoken conditions by the second trial. Likewise, on the second trial, participants in the sung conditions recalled more words than those in the spoken condition. In subsequent experiments where the presentation rate of the words was controlled, the music training effect replicated while the advantage of the sung conditions dissipated. Musicians in the study were reported to sing, hum, or tap out rhythms during word recall, while nonmusicians were not reported to have used these strategies. Wallace (1994) found music training to influence the recall of text. She conducted four experiments examining the effects of rhythm and melody upon the recall of the accompanying text. Significant differences were found in the number of words recited verbatim among the melodic/rhythmic conditions. Participants recited more words of the three verses when the text was presented with a single melodic line (which includes rhythm) for all three verses than when it was chanted (rhythm only), sung to a different melody for each verse, or spoken (at the same rate as the music). The melodic line facilitated text recall while rhythm had no advantage over spoken text. Wallace (1994) also found that participants with training in singing (M = 4.22 years) recalled more words than those without in Experiment IV only. These differences were significant, though no such differences were present among instrumentalists with the same number of years of training. The finding that text recall was not facilitated by multiple melodies or by chanting was noteworthy. Unfamiliar melodies have been compared to familiar melodies as structural prompts for the recall of digits. Wolfe and Hom (1993) found that fewer trials were required of young children (5-year-olds) to initially learn a phone number when it was presented in a familiar melody than when spoken or presented in an unfamiliar melody. There were no significant differences in the recall of the learned phone number at delayed recall among the training conditions. Obviously, considerably different memory strategies are available to adult participants than are known to young children, and comparison across studies is ill advised.

32 Researchers have presented information to participants in visual and singing combinations. Shehan (1981) found bimodal (visual/verbal and visual/musical) input of paired word lists to be more effective than single auditory input (verbal only or musical) for children with learning disabilities. Learning was facilitated by the bimodal input, specifically adding the printed word to the encoding, despite the fact that the participants were enrolled in remedial reading. Rainey and Larsen (2002) presented a list of names both visually (on a computer screen) and aurally. Aural presentations were either text sung with piano accompaniment to a familiar tune or spoken at the same rate. There were no significant differences in the number of trials to learn the list of names between the spoken and sung presentations; however, there were differences in the number of trials to relearn the list a week later. The group that heard the list sung required fewer trials to relearn the list. A subsequent experiment presented a list of names visually only, spoken only, and sung unaccompanied only. During this experiment, significant differences were found in the number of trials to initially learn the list, with the visual group requiring the fewest trials; the spoken and sung groups were not different from one another. The sung condition findings were replicated at the retest a week later, with the sung condition requiring the fewest trials to relearn the list and the spoken and visually presented conditions not differing from one another. Researchers have asked participants to recall words, digits, and pitches of sung and spoken input. Jellison and Miller (1982) found music training to affect serial, but not overall, recall for words and digits. There were significant differences between the scores of musicians and nonmusicians on serial recall of words and digits regardless of whether the encoding was verbal or sung and whether the recall was spoken or sung. Musicians were better at pitch recall than nonmusicians, as was expected. Furthermore, musicians' pitch recall was not affected by the presence of verbal information (words or digits sung versus 'la'). Jellison and Miller (1982) refuted the idea that a sung stimulus would be a greater attention load than a spoken one. Collectively, these studies demonstrate a unique memory experience for verbal material embedded within a song. While the facilitation of music was not demonstrated under every condition, the conglomeration of research findings suggests that music contributes uniquely to recall strategies such as chunking. Additionally, the addition of melody to text did not have a detrimental effect upon recall even when no distinct advantage was found. Facilitation effects were found for children and adults with varying degrees of music experience and training.

33 Theories of Music Processing and Memory Theorists have proposed that music has its own processing system separate from other auditory processors. Based upon research with adults with adventitious brain damage, Peretz (2001) concluded, “Music cognition is isolable both functionally and neuroanatomically from the rest of the cognitive system” (p. 519). The separate nature of brain structures for music processing coincided with an earlier psychological theory. Berz (1995) proposed an additional loop to Baddeley’s model for working memory that processes music. The proposed music loop had its own processing and storage capabilities, just like the visuospatial sketchpad and phonological loop. Saito (2001) claimed that an individual’s memory for rhythms and functioning capacity in the phonological loop of working memory are closely related. Significant positive correlations were found between rhythmic memory and auditory digit span (r = .43) and visual digit span (r = .48) with a nonmusician sample. Peretz (2001) acknowledged that musicians and nonmusicians are equipped with the same neurological structures designed to recognize music. However, when tested, musicians recruit more neural tissue. Thus, musically experienced listeners have greater cognitive schemata that in turn speed up processing and result in superior memory for music (Williams, 1982), not a difference due to brain anatomy. Models of music processing have included components driven by the perception of stimulus characteristics and concept-driven processors. Carroll-Phelan and Hampson (1996) produced a multi-component system for the auditory processing of music. The first component beyond input was registration and attention, which recognizes music as sound. The second component was perceptual processing involving music as perception, including melody, rhythm, and harmony over a sequence of tones. In the basic model, Carroll-Phelan and Hampson (1996) assumed that perceptual processing was data-driven but did not rule out the possibility of concept-driven support at higher levels of processing. Williams (1982) proposed a two level model for music cognition including a top-down tier reliant upon organizing schemata and a bottom-up approach reliant upon sensory processing. LaBerge’s (1995) model was framed in attention more than memory. He discussed a number of brain scan studies and concluded that attention was actively summoned when fine discriminations between sensory sources are required in comparison to obvious differences between sources. Whether perceptual groupings are exogenous or endogenous, theories list common grouping strategies.

34 Williams included the following as psychological organizers of music: expectancy, contour, , and rhythm. Expectancy was integrally linked to information processing (Radocy & Boyle, 2003) and memory for melodies (Williams, 1982). Several researchers separate rhythmic and contour or melodic processing (Carroll-Phelan and Hampson, 1996; Peretz, 2001). Peretz (2001) described these analyzers as “parallel and largely independent” (p. 522). The influences of music training and experience on cognitive schemata for music have been mentioned, and they contribute to the discussion of pitch and contour as separate entities. Radocy and Boyle (2003) indicated that rhythmic perception and basic metric organization was influenced little by musical experience and training, but melody cognition was based directly upon expectancies developed through enculturation and individual experience. While the division of melody and rhythm for processing has been supported by brain research (Dennis & Hopyan, 2001), a number of behavioral studies would indicate that the processes are not entirely independent (Demorest & Serlin, 1997; Dowling, 1973a; Kidd, Boltz, & Jones, 1984; Jones, Summerell, & Marshburn, 1987; Sink, 1983). Nonetheless, these theories provide evidence that music and its requisite pitch and rhythmic components are uniquely processed and recalled. Recognition and reproduction paradigms are used in music memory research. Researchers have investigated the influences of rhythm, pitch, contour, timbre, and key-relatedness on melodic memory judgments. Conceptual groupings and schemata for music have been analyzed with both musicians and nonmusicians. Error detection studies have contributed to the understanding of music expectancy and attention. Additionally, brain scans conducted during musical tasks have provided unique understanding of the functions of the auditory cortex during complex acoustic events. Melody Recognition Research Developmental and Training Differences The attention of music listeners must be focused upon two simultaneous auditory elements, namely pitch and rhythm. Melodies are sequential occurrences of changes in one or both of these aspects in a systematic, meaningful manner. Researchers test listeners’ memory for music by presenting an unknown target melody during encoding and asking participants for same-different judgments of subsequent test melodies. Test items typically include the target melody and lures that vary based upon the study’s design. Studies have been conducted in the melodic target recognition paradigm with persons of varied age and music training. While

35 adeptness varied along developmental lines, young children were able to recognize target melodies. Demorest and Serlin (1997) investigated children and adult novice listeners' abilities to judge the degree to which 4-measure test melodies were altered in pitch and rhythm. While there were no significant differences among the age groups in detecting pitch changes, there were significant differences in participants' responses to rhythmic changes. The ability to detect no or small changes in rhythm improved with age. However, data evidenced that participants did use both pitch and rhythm when making judgments about the melodies' similarity to a trained target melody. Young children (1st and 5th graders) were inconsistent in their recognition of the original melody when presented in sequence with other similar distractor melodies, though older participants (ninth graders and college elementary education majors) consistently identified the target melody. Madsen and Madsen (2002) found that most listeners (6th graders, 8th graders, adult musicians, and adult nonmusicians) were able to identify a target melody among eight very similar interpolated (mode, meter, and rhythm) melodies. There were significant differences in the total percentage correct among the four groups: 53% for 6th graders, 61% for 8th graders, 69% for untrained adults, and 74% for musician adults. When queried about the capabilities of young children with limited music training to complete the melodic recognition test, adult musicians believed that the children would be unable to do the task. Halpern, Bartlett, and Dowling (1998) found, generally, that college-aged and senior (over 70) musicians and nonmusicians qualified differences between target and lure melodies based first upon rhythm, next melodic contour, and finally mode. For both groups, including professional musicians, mode judgments were difficult and inconsistent. Younger groups weighted the importance of rhythmic differences more heavily than contour, while older groups weighted the two more equally.
Pitch, Rhythm, Contour, and Timbre Discriminations
Magnitude estimation of differences between targets and lures has been incorporated into the melodic recognition paradigm. Sink (1983) examined the effect of melodic alterations on the perception of rhythm changes. Participants were asked to estimate the magnitude of rhythmic differences between a target rhythm and test items featuring augmentation, diminution, retrograde, and repeating rhythmic figures under three melody conditions: monotony, M-shaped melody, and V-shaped melody. While there was no significant main effect for melody, there was a significant main

36 effect for rhythm alterations and a significant interaction between melody and rhythm changes. Under conditions of no rhythmic alteration, V-shaped melodies were rated significantly different from monotony; V-shaped melodies were perceived as having greater rhythmic alteration. Based upon the results, Sink (1983) concluded that the simultaneous attention to melody and rhythm reduced attention to absolute rhythmic structures. When listeners undertake the task of judging the degree of sameness of various melodies, the task is dual in nature, with pitch changes influencing the perception of rhythmic differences. Kidd et al. (1984) also found interference from rhythmic changes during recognition studies. Kidd et al. (1984) found listeners more likely to judge comparison melodies with the same rhythm as the target to be the same, even when told to ignore the rhythm and focus only on the melodic sequence. Thus, participants made more accurate discriminations between the target and test melodies when they were presented in the same rhythmic context. No differences were found in melodic recognition rates under variable rhythmic presentations for fast, medium, and slow tempos. Kidd et al. (1984) hypothesized that rhythm contributed to expectancy such that a listener's ability to predict upcoming events (notes) improved discrimination of pitch sequences for same-rhythm items. Dowling (1973a) did find differences in melodic memory across slow to fast tempos. He found significant differences in recognition performance of participants between slow and fast tempos. Dowling (1973a) proposed that the fast tempo would provide a more cohesive Gestalt than would the slow; however, the slow tempo provided more rehearsal time for participants to recall tones prior to testing. Participants with music training did better in the fast condition than lesser-trained peers, which further supported the idea that the advantage of music training during melodic memory tests is speed of processing. In addition to rhythm and pitch discrimination in the melodic recognition paradigm, lures can vary based upon contour. Dowling, Tillmann, and Ayers (2002) examined the effect of delay intervals on listeners' ability to discriminate between melodic targets, lures with similar contours, and lures with different contours in piano minuets. Listeners easily identified lures with different contours, and judgments were not impacted by delay intervals when the target was the first two measures. However, there were significant differences in judgments made of similar contour lures over time; listeners' abilities improved over time and false alarm rates to similar lures dropped. When a lapse in time occurred between exposition of the target and subsequent testing, regardless of what filled the delay - the minuet continuing, silence, meaningless rhythmic

37 fill - facilitation of similar lure to target discrimination was seen. Dowling et al. (2002) wrote, "… while listening continues, the processing of already-heard material proceeds automatically" (p. 271). These results appear to indicate that not only does the listener need time to process music, but decisions about targets also become more accurate once processing is completed. The influence of timbre and accompaniment on melodic recognition among musicians and nonmusicians was investigated. Wolpert (1990) discovered that musicians and nonmusicians differed in their abilities to identify a target melody when it was subsequently played by different instruments and/or with various accompaniments. All musicians were 100% accurate in identifying the target melody when test items varied in instrumentation and accompaniment, while only 5% of the nonmusicians were accurate under all conditions. Nonmusicians were the least accurate when the target melody was accompanied by harmony in an opposing key with the melody on the same instrument as the target. Nonmusicians judged the melody in the same timbre to be the target, even when the accompaniment was in a different tonality. The tonality of the harmonic structure did not alter judgments of nonmusicians in the same way that it did those of musicians. In terms of same-different judgments for melodic targets, nonmusicians were lured from the target when it was played on the same instrument even when the harmonic structure was incorrect (different key). The stimulus properties for categorizing aspects of accompanied melodies differed between musicians and nonmusicians. Nonmusicians may have selectively attended to the melody and timbre without paying attention to harmonic structures, while musicians used all of the available information. Radvansky et al. (1995) replicated Wolpert's (1990) research findings that nonmusicians rely heavily on timbre in making memory judgments, even after changing the experimental design and subjecting the data to statistical analyses. Radvansky et al. (1995) used unfamiliar melodies exclusively, inserted a distraction task that delayed the onset of test melodies 30 seconds after the target melody, and tested Wolpert's 'ambiguous' instructions against their 'unambiguous' instructions. The researchers found that there were significant differences in the error rates between the match and mismatch conditions; nonmusicians made more errors in the mismatch condition when the target melody was presented in a different timbre. Using the unambiguous instructions and a comparison musician group in Experiment II, the differences between musicians' and nonmusicians' error rates as influenced by timbre continued to be significant. However, the insertion of the distractor/delay task reduced the musicians' perfect performance in

38 Wolpert's study. While the error rate was small, this distractor task indicated that musicians' memory for melodies was influenced, though to a considerably lesser degree, by timbre.
Searching for Melodic Targets
While the melodic recognition paradigm traditionally introduces a new melody presented to listeners for encoding followed by test items, research participants have also searched for familiar melodies during recognition studies. The musical features that cue recognition for nonmusicians have been investigated. Schulkind, Posner, and Rubin (2003) indicated that melody identification, evidenced by title retrieval for well-known melodies, was a global, holistic process for their nonmusician sample. While contour (consecutive alternations between rising and falling pitches) was a significant predictor in the regression models, temporal (rhythmic) factors contributed more than did pitch factors. Information about the overall temporal and pitch shape of the melody was important for melody identification. Bella, Peretz, and Aronoff (2003) found that musicians reached a point of certainty in familiarity judgments for familiar and unfamiliar melodies in fewer notes than nonmusicians. The differences were significant for the fewer notes needed by musicians to judge unfamiliar melodies as unfamiliar in comparison to nonmusicians. Generally, it took between 3 and 6 notes for a sense of knowing the melody, with an additional 2 notes for certainty for familiar melodies, while it took 8 to 10 notes to determine that a melody was unknown. A second experiment replicated the differences in familiarity judgments between musicians and nonmusicians, though it took 5 to 7 notes for familiar melodies to be sung 3 notes beyond the recognition point. While Peretz's (2001) theory of music processing was predominantly based upon the recognition of familiar melodies, Bella et al. (2003) demonstrated differences in the process of recognizing familiar and unfamiliar melodies, which was influenced by music training. Active search, a part of selective attention, facilitated the recognition of otherwise hidden melodic targets. In his initial research with adult listeners, Dowling (1973b) found that listeners were able to identify two interleaved familiar melodies when each was presented in a separate non-overlapping range. A familiar melody was identified more quickly when interleaved with background melodies when the listener was searching for its presence or absence. Andrews and Dowling (1991) expanded the research to include children and interleaved melodies with distractor pitches that varied by salience. Participants were selectively searching for either Twinkle Twinkle Little Star or Old MacDonald Had a Farm among the distracting

39 notes that were inside or outside of the target melody pitch range and either tonal or atonal. The melodies themselves were presented with interval relationships preserved like the original (straight) or wandering, meaning the contour of the melody was preserved but interval relationships were altered. A number of significant interactions were found among location of target (inside or outside of target melody’s range), tonality of distractors, and type of target (straight or wandering). Performance under each condition improved with age, implicating a developmental process in selective attention for melodic targets. Many of the youngest (ages 5 – 6 years) children’s responses were the same level as chance. By ages 7-8 performance had improved for the most salient condition, and by 9-10 years of age children could discern the hidden targets. Adults with music training experience were more successful than adults without music training experience. The authors concluded that the ability to focus attention to pitch relationships in this study (rhythmic values were the same for the targets) and the ability to recognize important melodic features was a cognitive task reliant upon development. In terms of auditory stream research, this study provided evidence that under certain musical parameters young, relatively musically inexperienced listeners could locate and focus upon specific aspects of a complex, multi-stream auditory event. Attention During Multi-voice Music Melodies are only one example of dual auditory foci available in music. Counterpoint and multi-voice textures provide opportunities for listeners to attend selectively to a particular voice or line, divide attention across lines, or attend to the totality of the music. Crawley et al. (2002) found that musicians and nonmusicians were better at identifying an error in a 3-part polyphonic texture when focusing on the vertical totality. Listeners were facilitated by comparing tones as part of chords rather than individual melodies. Based upon error recognition rates, musicians were no better at focusing attention to single melodic lines than were nonmusicians. This strategy – vertical/tonal comparison versus linear/melodic comparison - likely explains Sloboda and Edworthy’s results with key relationships. Sloboda and Edworthy (1981) found that both musicians and nonmusicians detected errors in 2-part counterpoint best when the two melodies were presented in the same as opposed to different keys. Dowling (1973b) and Andrews and Dowling (1991) found listeners could identify melodies within polyphonic textures best when the melodic ranges did not overlap and distracting/irrelevant tones were outside of the range of the

melody. Apparently, listeners can focus attention on a pitch range during melodic detection. While these melodic recognition tasks are challenging, listeners evidently have perceptual and cognitive strategies available to them. Davidson and Banks (2003) conducted selective attention research with professional musicians who were asked to identify the direction of an interval in an attended timbre (piano or bassoon) while ignoring the other. Intervals were presented simultaneously. The time to respond (RT) and accuracy scores were tabulated. There were no significant differences in RT or accuracy by timbre, indicating that each timbre could be attended to selectively. However, a number of interactions were found among attended interval direction, unattended interval direction, and interval type (a second or a fifth). Generally, the data indicated that RTs were shortest and accuracy was best when both the unattended and attended intervals moved down, followed by both moving up, with contrary motion creating the longest reaction times. While participants could focus on a particular timbre, their responses were influenced by stimulus parameters (interval type, crossing and non-crossing voices) and the unattended, though clearly processed, interval. Though the timbres were in different pitch ranges, the melodic motion of both the attended and unattended instruments affected ascending or descending judgments. Perhaps intervals are not enough information for successful judgments. Cuddy and Cohen (1976) found that 3-note melodies were easier to recognize when transposed and altered than were intervals (2-note melodies). The differences between recognition scores for 2- and 3-note targets were significant for highly musically trained (mean of 10 years), moderately musically trained (mean of 5 years), and untrained listeners. A significant interaction between test and experience level revealed that the facilitation of the 3-note versus the 2-note targets was greater for the musically trained groups. The more notes presented, the clearer the contour of the melodic line. Keller and Burnham (2005) conducted research investigating aspects of divided and selective attention for multipart musical stimuli. The researchers proposed that musicians routinely engage in forms of divided and selective attention when performing in ensembles, since attention is required for both one's own part (integrant) and the aggregate produced by the ensemble. The authors labeled this prioritized integrative attention (PIA), a situation where a high-priority target, one's own part, contributes to the aggregate in a dual attention model. Experienced, actively performing musicians heard a multipart stimulus (target pattern on conga

with a complementary pattern on cowbell) followed by a test item. Test items were either the integrant pattern or a distractor played on the conga, or the aggregate played on the snare drum. Participants rated how confident they were that they recognized the test item. Results indicated that responses to both integrant and aggregate test items were above chance, confirming that PIA was possible in this format. There were significant differences in the rate at which aggregate and integrant items were answered correctly; participants were better at aggregate responses. The authors concluded that participants perceived the aggregate to be the study's intention despite the dual-task instructions. Nonetheless, the finding was novel since the aggregate pattern in the test item was played in a different timbre. Other aspects of the research demonstrated that participants' ability to complete this task successfully was jeopardized if the parts were mismatched rhythmically or were nonmetric. Parts in like meters seemed to fuse together meaningfully and were uniquely encoded. Brain scan studies confirm that different areas of the brain are activated during focused and global listening. Satoh, Takeda, Hatazawa, and Kuzuhara (2001) asked musicians to selectively attend to the alto line (second of four parts) or to the whole piece of an unknown four-voice motet (instrumental performance) while undergoing a PET scan. The musicians demonstrated focused listening by completing a behavioral task (identifying a tone as tonic or dominant in the alto line or a chord as minor in the harmony) with 70% accuracy. A comparison of the differences in PET scan topography between whole-piece listening and selective auditory attention to the alto line revealed specific activation of the superior parietal lobules, premotor areas, and orbital frontal cortex. Each of these areas was reported by the authors to be involved in selective and effortful attention. Interestingly, there was no apparent lateralization of processing between hemispheres during any part of this research.
Error Detection and Expectancy as Evidence of Focus of Attention During Multi-voice Music
Bigand, McAdams, and Foret (2000) concluded that a divided attention model for music listening was not supported by their investigation, which used error detection during highly familiar nursery tunes tested as a dual task. While errors in the tunes were easily detected by musicians and nonmusicians when the tunes were played separately, error detection rates fell when the two tunes were played simultaneously. There were significant differences in the error detection rates of musicians and nonmusicians; musicians detected a mean of 2.08 of the 4 errors while

nonmusicians only detected a mean of 1.09 of the 4 errors. The differences between the groups were more evident when considering the number of errors by high or low voice. Musicians detected a mean of 0.98 of the 2 errors in the low voice while nonmusicians detected only 0.41; in the high voice, musicians detected a mean of 1.48 of the 2 errors and nonmusicians 1.09. An additional finding in the musician group was that more false alarms were generated in the far-key condition (G and A-flat majors); the false alarm rates by key were no different in the nonmusician sample. The researchers concluded that nonmusicians in the experiment selectively attended to the top melody while musicians attended to both. However, musicians may have created a single perceptual stream of the two melodies, as evidenced by the false alarm rate in distant keys that created more vertical dissonance. This false alarm rate was eliminated in a follow-up experiment with musicians who were instructed to selectively attend to the lower voice during error detection. Other researchers have found a propensity for listeners to focus on the upper voice; in fact, the brain may be 'hard-wired' for it. Fujioka, Trainor, Ross, Kakigi, and Pantev (2005) tracked magnetoencephalograms (MEGs) of musicians and nonmusicians for the mismatch negativity component (MMNm) while they listened to polyphonic melodies containing unexpected endings. The MMNm, an index of auditory perception that responds to deviations from expectation, responded to new phrase endings that were in key and out of key, in either the upper or lower voice, and responded with different magnitude in musicians. Additionally, the listeners were asked to behaviorally identify changes in heard melodies after the scan. A number of significant differences between the scores and scans of musicians and nonmusicians emerged. Musicians correctly identified 80% of the melodic changes while nonmusicians' performance was at chance for 8 of the 12 questions. Both groups identified out-of-key changes better than in-key changes; however, the brain scan data showed the opposite pattern. Since out-of-key changes were typically 1 semitone and in-key changes were 2 semitones, MMNm was larger in response to in-key changes, reflecting the actual distance from the expected tone. There were significant differences in MMNm response to upper and lower voices; upper voices created larger MMNm responses. Lastly, musicians' responses were of significantly greater magnitude than nonmusicians'. Fujioka et al. (2005) pointed out the uniqueness of attention during music. While focus of attention to one auditory stream typically means ignoring other streams, with music "we seem to maintain both selective and global listening" (p. 1578). There seems to be the ability to shift

43 from figure to ground without substantial compromise. Furthermore, brain scan data support the Gestalt of processing of polyphonic melodies. There was interference between the two melodies (upper voices producing larger MMNs) implying that the melodies interact to some extent during processing. Janata, Tillmann, and Bharucha (2002) found attentive listening to music recruits brain areas that serve general functions for working memory, attention, semantic processing, and motor imagery. Similar temporal, frontal and parietal areas were engaged during attentive listening to 2 and 3 part musical works under holistic (listen) and selective attention (directed to a specific timbre) conditions. Differences emerged, however, when a target detection paradigm was added to the listening condition. Listeners in these studies were clearly following the directions given to them – listening for an expected error or just listening– and brain scans reflected the differences based upon attention. Conductors are faced with a number of dual tasks when rehearsing a performing ensemble. One dual task is bimodal, as one must read the notation from printed scores while listening to the performers sounding the music. He/she must detect conflicts between what is read and what is heard in order to correct the mistakes and improve the performance. Byo (1997) researched the abilities of undergraduate and graduate music majors to detect pitch and rhythm errors within different textures. There were a number of significant interactions among the variables researched, texture (one, two, or three parts), rhythm versus pitch errors, and level of experience. Graduate students’ scores were consistently higher than undergraduate students’ scores largely due to their superior ability to detect pitch errors. Pitch errors were more difficult to detect overall, particularly in 2 part and 3 part textures. The pattern of rhythm error detection across textures between graduate and undergraduate musicians was highly similar. It is possible that rhythm errors more easily attract the attention of the listener or that a decision has been made a priori that one will ‘listen for’ rhythm errors. Another implication of the research is that pitch error detection is a function of experience more so than rhythm error detection. Brass and woodwind instrumentalists (N=76) participated in error detection research by circling heard performance errors on a musical score across three listenings (Sheldon, 2004). Participants circled the most correctly identified errors and incorrectly identified errors following the first playing of the musical example. It would appear that participants anticipated failing memory and attempted to complete the majority of the task on the first attempt. Errors in pitch, rhythm, and articulation were correctly identified the most often, while attention to tempo,

balance/dynamics, and intonation lagged behind. Furthermore, more errors were identified in the top-most voices, which typically contained melodic features. Again, attention was drawn to the highly salient musical elements. Error detection by comparing a visual score (notation) with an auditory stream is obviously quite challenging. Yumoto et al. (2005) asked participants to study the musical notation of a set of randomly organized melodies. They each listened to the melodies containing errors while undergoing magnetoencephalography (MEG). The participants were perfect in identifying mismatches between the written and sounded notes, and brain scans indicated differences between correctly and incorrectly sounded pitches in the sequence. The authors concluded that the visually presented musical score provoked auditory images that were subsequently compared to sounded tones, resulting in mismatches when errors occurred. Widmann, Kujala, Tervaniemi, Kujala, and Schroger (2004) found similar expectancy violations among musicians when the contour of expected pitches did not match visual symbols (spaced dashes) arranged to portray contour. This documentation of auditory imagery resulting from visual notation was important. Participants experienced expectancy violations because the auditory imagery conflicted with the actual music. A similar conclusion regarding notation and auditory imagery resulted from a behavioral study. Brodsky, Henik, Rubinstein, and Zorman (2003) asked trained musicians to read music notation that contained a known, embedded theme under conditions of no distraction, rhythmic tapping, or humming a familiar folk song. There were significant differences in the reaction times and correct identification of the theme across conditions. The phonatory interference (humming) resulted in the slowest response times, while both distracting conditions lowered accuracy. In a subsequent experiment, rhythmic distraction was replaced with hearing the folk song sung in the phonatory interference condition. Reaction times among the conditions were significantly different and replicated the findings of the first experiment. Though the notation reading task presented without distraction was difficult, as evidenced by significantly better accuracy and lower reaction times when embedded themes were presented aurally, it appeared that humming tapped the same resources as audiating the notation. Auditory images were created for music notation, implicating a cross-modal experience for music reading – from visual to auditory form. In a related study, Halpern and Bower (1982) briefly exposed musical notation to musicians and nonmusicians who subsequently wrote down what they recalled. There were no

significant differences in the accuracy of notated good and bad (but visually similar) melodies for nonmusicians, but there were significant differences in accuracy for musicians. Musicians demonstrated a decrease in accuracy from good to bad melodies, indicating that memory for good melodies was superior. Evidence of chunking was clear as musicians permuted the order of notes from an F chord, seemingly recalling the notes as a group while forgetting their specific order. The use of chunks in memory reduced the memory load for musicians, and melodies that lent themselves to consistent 'chunking' were more easily recalled. Good melodies apparently encoded an auditory image that facilitated recall of notation. Bad melodies may have violated expectancies and taxed the memory load since rules were broken.
Gestalt in Music – Extracting Parts from Wholes
Keller and Burnham (2005) provided evidence that performers are capable of global and local attention during multi-textured music. Three studies have explored the influences of melodic encoding as wholes or in parts; one study used a long-term memory paradigm while the other two were short-term/immediate recall. Klinger, Campbell, and Goolsby (1998) compared the effects of immersion, or repetition of the entire song, versus phrase-by-phrase, or repetition of short segments, on children's (2nd graders) recall of new songs one week after the initial learning. There were significant differences in the errors made by children learning under the different encoding conditions. Children performed songs learned by the immersion method with fewer errors than those learned by the phrase-by-phrase method. This would seem to support a holistic memory for music that was compromised when reduced to smaller components. Williams (1975) investigated the effects of note position (first note, middle note, or final note), stimulus length (3, 5, or 7 tones), and delay of production on musicians' recall through sung responses. He found that the length of the stimulus and delay time equally affected the outcome of final notes, while the first and middle tones were affected the most by the length of the stimulus. The recall of an event in a series, even when auditory and nonverbal, was affected by ordinal position. Since these tone series were chosen at random from the chromatic scale and were not necessarily conceivable as melodies (the author referred to these as pitch sequences), research on actual melodies provided a different perspective. Deliege, Melen, Stammers, and Cross (1996) conducted a series of experiments with nonmusician participants designed to determine how new music was coded into memory. Participants were asked to identify 'landmarks' in a 30-second Schubert Valse for piano

on a first and second hearing. The marked locations were much the same for both hearings, indicating that general schemata for the piece were developed and maintained. However, in subsequent experiments when nonmusicians had to rely on schemata to reconstruct the randomized phrases of the music, their performance was extremely poor. The ordinal position of each segment of music was not encoded. Musicians were substantially better at this task, though not completely accurate, largely because they were able to make construction decisions based upon tonal structure. Musicians were able to accurately order first, middle, and final phrases, notably those initiating or concluding the piece. Music provides a veritable playground for research into a listener's analytic processes during conditions of auditory attention. Both musicians and nonmusicians demonstrate refined skills in attentive listening when asked to do so. The reality of most music listening situations is that music is not subjected to the types of critical listening that have been presented in this literature review. In fact, music is often only one stimulus engaging attention. Further, while music educators are concerned with music listening skills, students are also engaged in performance of music. Multisensory approaches to music education have rendered surprising results. Given the research literature on divided attention, a naïve assumption would be that a single modality for instruction would be the most efficient. Researchers have found otherwise. Music educators have long suspected ancillary benefits to music study; research in memory and attention validates their suspicion. Debates over gender differences are as common as visual or auditory dominance theories. Research in memory and attention typically fails to find differences, whereas music research has identified differences between males and females.
Attention to Music – Focused and Therapeutic Listening
Today's multimedia environment impacts everyone, from young (Hair, 1997) to old. Ritterfeld et al. (2005) found that 69 of the 79 3- and 4-year-olds in their study owned their own cassette tape player. North et al. (2004) documented the uses of music in naturalistic settings by paging participants' mobile phones. They found that listening to music was rarely a main event. Lesiuk (2005) investigated the impact of music listening in the workplace. The primary impact of music was on positive affect. Exposure to music does not equate to listening with focused attention. Rather, music playing while someone is engaged in another task is often ignored or held in the back of one's attention.

Even young children demonstrate the ability to listen attentively to music. Moog (1976) identified a shift in infant response to music from a calm and sleepy response to lullabies during the first few months of life to interest responses in music around 6 months of age. While moving is the typical response of 9-month-old infants to music, children around age 2 demonstrate concentrated, attentive listening to music for several minutes. Sims and Nolker (2002) offered kindergartners the opportunity to listen to lullabies for as long as they chose. Individual listening times were documented. They found no differences in the amount of time spent listening to music between boys and girls, though teachers estimated that boys would be less attentive than girls. While the variability among children's listening rates was considerable (range of 17.5 to 175 seconds), individual children displayed internally consistent patterns in duration of listening. Sims (2005) continued the research with classical music, younger children, and a complementary circling task. Prekindergarten children listened to classical music excerpts (The Lion and The Cuckoo from Saint-Saens' Carnival of the Animals) while circling icons on paper or under free listening (Sims, 2005). There were no significant differences in duration of listening between the directed and free listening conditions. Individual children demonstrated consistent listening patterns, replicating the 2002 (Sims & Nolker) finding. Additionally, one child listened to music for 40 minutes, easily exceeding expectations for an attention limit for music listening. Contrary to teachers' predictions of children's attentiveness and to concepts that a 'busy' task would help focus attention to listening, children did listen to music – some for extended periods of time. Madsen, Geringer, and colleagues (Madsen & Coggiola, 2001; Madsen & Geringer, 1990; Madsen & Geringer, 2000/2001; Madsen, Geringer, & Frederickson, 1997) have conducted extensive research on focus of attention during music listening using the continuous response digital interface (CRDI). Madsen and Geringer (1990) investigated musicians' and nonmusicians' focus of attention to musical elements in orchestral excerpts selected with melody, rhythm, dynamics, timbre, or "everything" as salient features. Based upon the percentage of time spent listening to each element, melody and rhythm were the top ranks for music majors while dynamics and melody were the top ranks for non-music majors. Madsen et al. (1997) found no significant differences in the degree of attention to assigned musical elements between individuals selectively attending or globally attending. Madsen and Coggiola (2001) found the aesthetic response ratings of participants who listened to music and manipulated a CRDI dial were more intense than those of participants who did not use a dial. Unlike Sims (2005), where the

additional task was not facilitative, attention was focused through dial manipulation that resulted in an aesthetic response. Madsen and Geringer (2000/2001) summarized the focus of attention research and commented that listeners' attention often drifts during music, a likely result of a culture saturated with sounds. Their conclusion was that people must be systematically taught how to listen and attend to music. Attention to music has also been measured in terms of the number of reported distractions. Flowers (2001) asked college faculty (both music and non-music) and 6th graders to monitor their distractions by tapping a key while listening to diverse excerpts of unfamiliar music (2:06 – 3:56 minutes). While the college music faculty identified the fewest distractions, there were no significant differences in self-reported distractions among the groups (mean of 1.32 distractions per minute). Flowers and O'Neill (2005) asked 6th, 7th, and 8th graders to monitor their distractions while listening to a 3.5-minute excerpt of Bach or a recording of a narrator reading a story (an excerpt from Kipling's Jungle Book, expected to be unfamiliar). While there were significant differences in the reported rates of distraction by audio media (music M = 1.60 per minute; prose M = 1.11 per minute), a positive correlation (r = .58) was found between the distraction rates for prose and music across 10-second intervals. Students identified fewer distractions at the beginning and end of the excerpts. Music listening as a therapeutic or assistive intervention has been used to focus attention before the appearance of aversive tasks and as an enhancement for memory performance (Borling, 1981; Morton et al., 1990; Standley, 1991). Standley (1991) found that music listening preceding aversive stimuli (the sound of a dentist's drill) diminished the impact of the aversive stimuli. Borling (1981) found significant differences in participants' abilities to complete a focused attention task (maze tracing) after hearing sedative, but not stimulative, music. Participants who heard sedative music performed better on the maze task; additionally, there was a correlation between maze scores and time spent in alpha wave production during the 5-minute listening experience (r = .43). Morton et al. (1990) found differences in memory performance in a dichotic listening task between those who listened to music for 5 minutes before the task and those who did not. Background music has also been found to impact memory performance (Burleson, Center, & Reeves, 1989; Pearsall, 1989; Sloboda, 1976). Sloboda (1976) found that tonal and atonal background music significantly improved nonmusicians' rewriting of briefly exposed music notation over quiet and speech conditions, though there were no significant differences in the

performance of musicians by background condition. Burleson et al. (1989) found that children with autism or schizophrenia benefited from background music during sorting tasks. Each of the four boys (ages 5 – 9) was more accurate in sorting by color when music was playing. The authors proposed that music masked distracting background noises and thereby increased the children's focus of attention. Pearsall (1989) found tonal background music (an excerpt of Bruckner's Symphony No. 7, Adagio) to be more disruptive to the verbal listening comprehension of college non-music majors, as measured by the vocabulary subtest of the Sequential Tests of Educational Progress. There were significant differences between the scores of participants who answered questions about stories read aloud on tape during tonal background music and the scores during no music or atonal music (Schoenberg's Five Pieces for Orchestra), which were not significantly different from one another. Whether due to masking or increased arousal, the presence of music during attention-demanding tasks was helpful for some participants.
Bimodal Experiences with Music – Complementary and Non-complementary Audio-Visual
Complementary audio-visual experiences include movies with soundtracks and ballets with music. The auditory and visual experiences were designed to convey the same message. Researchers have investigated how focus of attention was affected by bimodal input. Misenhelter (2004) had participants track ongoing responses to music alone (Bach Passacaglia) or music paired with a ballet (audio-visual) on a continuum of less/negative to more/positive using the CRDI. When exposed to the audio-visual condition, there was less variability among the participants. Misenhelter (2004) commented that participants' focus of attention was dominated by the visual information; therefore, response to the aural stimulus was lowered (narrower). Children's attention to music under conditions of no visuals and animated videos was examined. Cassidy and Geringer (1999) investigated the impact of audio-visual media on preschoolers' abilities to focus on music. Children were able to listen or watch/listen for as long as they chose to music only or to music with video clips; significant differences in attending times were found. Children attended longer to the music-plus-video condition. During interviews slanted toward the listening experience, children reported more musical features during the music-only condition and more visual features during the music-plus-video condition. Older children (6-year-olds) spoke more about the music (e.g., instruments, dynamics) than the younger children (4- and 5-year-olds). Memory for bimodal events may be influenced by the congruence of these events. Marshall and Cohen's (1988) research on film with accompanying soundtracks provided

50 evidence of this. Participants’ judgments of the film and its characters were different based upon film-only and film with soundtrack conditions. The attention patterns to characters were influenced by the presence and type of music. The nature of the music, composed to be strong or weak, influenced the ratings. Participants evaluating the small triangle character were influenced by the co-occurrence of this character and the full chord, forceful quarter theme in the strong music in comparison to raters hearing no music or weak music (single quarter notes, no chords). The researchers concluded that raters remembered this character as a result of the music accenting events surrounding the character and rated it congruently with the music. The information provided by the music in conjunction with the action of the character contributed to the memory trace. Films, like music, evolve relationships over time and observers create associations of contiguous events (Marshall & Cohen, 1988). Boltz (2001) found significant differences in the recall of positive and negative objects in ambiguous film clips paired with no music, negative music, or positive music by participants queried a week later. Those who saw clips with positive music had a higher hit rate for positive objects, and those who saw clips with negative music recalled more negative objects. Boltz (2001) concluded that music exerted a selective attention function on the cognitive processing of the film such that mood congruent objects were strongly encoded while mood incongruent objects were not encoded to the same degree. Investigation of the effectiveness in using art, pictures, and icons to guide the listening experience has yielded varied results. Shank (2004) found the incorporation of visual art into music teaching resulted in significant differences in posttest music listening skills between the experimental and control groups. Elementary education majors benefited from the connection between visual art and musical concepts. Cassidy (2001) found music education majors were very accurate on both concrete and abstract graphic listening map-types, but elementary education majors became less accurate as the maps presented less one-to-one correspondence (one icon per note) and more abstraction (one icon per theme or section). While the relationships between the icons and ongoing music were obvious to the musically trained, this same was not true of the lesser-trained group. Gromko and Russell (2002) found no significant differences in 2nd and 3rd graders listening map reading accuracy following exposure to listening only, sand tray exploration during listening, or engaging in choreography to music. However, the scores of children with piano playing experience were significantly different from the scores of children

without piano experience. Thus, the pictorial icons were meaningful to musically trained children, regardless of the type of encoding – passive, active, aural, or kinesthetic – but not meaningful to children without such a frame of reference. Johnson and Zatorre (2005) conducted attention research comparing unimodal and bimodal attention to melodies and evolving shapes with behavioral and functional magnetic resonance imaging (fMRI) indicators. Participants' attention to the directed visual or auditory stimuli was confirmed by near-ceiling effects on continuous response tasks undertaken during all encoding phases. Behavioral results of the study found a significant interaction between modality (melody or shape) and presentation (single modality focus or bimodal focus); melodies were better recognized than shapes when attention was directed to the auditory modality and shapes were better recognized than melodies when attention was directed to the shapes. There were no significant differences in memory for stimuli between the unimodal focus condition and focus on the same modality under bimodal conditions. Predictably, memory for attended information was significantly better than for unattended information. Furthermore, brain scan results indicated that the sensory cortices subserving visual and auditory modalities were stimulated under conditions of selective attention to each modality. However, stimulation of visual cortices was modulated by attention instruction, whereas auditory cortical response was largely independent of attention instruction. Johnson and Zatorre (2005) attributed this to the complexity of the auditory information (melodic) that stimulated higher-level sensory areas. Based upon brain scan data, participants appeared to treat melodies and shapes as non-complementary, unrelated events. These researchers supported the top-down theory of attention and memory, meaning that effortful attention to one modality does suppress encoding of information in an ignored modality.
Encoding and Decoding Music through Visual, Auditory, and Tactile Senses
Hair (1997) noted that children receive information through visual, auditory and kinesthetic modes, which appear to develop at different rates and times. Researchers have investigated various sensory modalities as encoders of musical information. Persellin (1992) researched the effects of visual, auditory, kinesthetic, and multi-sensory input modes on the rhythmic performance of 1st, 3rd, and 5th grade students. With the exception of the visual mode for the youngest students, input modes had a similar effect on rhythmic performance by grade level. The scores for the visual mode for the 1st grade were significantly different from all other scores; the short and long lines had little rhythmic meaning for the students, whose performance was the

worst in this condition. Generally, performance following each encoding modality increased between grade levels with the exception of the auditory-only mode (listening to the rhythm played out of sight on a tone bar) between 3rd and 5th grade. Auditory encoding appeared to plateau between these years. Shehan (1987) found that children (2nd graders and 6th graders) accurately performed a rhythm pattern on a woodblock in fewer trials when each learned the rhythm with a visual aid accompanying the sounding of the rhythm. The differences in the number of trials between audio-only encoding and audio-visual encoding were significant for both younger and older children. The combination of visual aid (standard notation) and mnemonic device (spoken syllables referencing each rhythm uniquely) required the fewest trials to accurate performance. Rarely have researchers attempted to match a child's preferred sense with learning methods. Zikmund and Nierman (1992) found significant differences between the scores of the experimental and control groups on a melodic conservation task conducted with 3rd, 4th, 5th, and 6th grade boys and girls. Experimental groups who encoded the melody through a preferred modality, visual or tactile-kinesthetic, scored better than control participants who only heard melody targets and test items. There were no differences by grade or between the genders for melodic conservation; however, the same was not true of rhythmic conservation. A significant three-way interaction among gender, grade level, and experimental or control group occurred. While the authors did not attempt to explain the unexpected interaction, they noted that overall girls scored higher than boys on rhythmic tasks. This study provided further evidence that encoding melodic and rhythmic information requires different processes that develop at different rates and that preferred sensory input facilitates recall. Researchers have investigated the responses of children who are deaf, have emotional disturbances, or have mental retardation to multisensory encoding for rhythm discrimination or performance. Darrow and Goll (1989) found that the addition of tactile input via the SOMATRON to auditory input benefited the rhythmic discrimination of children with hearing loss. Children with limited use of audition supplement their hearing by relying upon their kinesthetic/tactile senses on a more regular basis and did benefit (age 7 was the youngest child). Larson (1981) found that adolescents with emotional disturbances were significantly better at making same-different rhythm judgments when the stimuli were auditory rather than visual. The same auditory preference was found for typical adolescents; thus, the differences between adolescent groups were not significant.

53 Grant and LeCroy (1986) had students (ages 6 – 8, 9 – 12, and 13 – 18) with mild mental retardation play on a drum the rhythms presented to them singly or through a combination of sensory inputs. Children heard the experimenter play rhythms out of sight, felt the rhythm tapped on their shoulder, saw and heard the experimenter play the rhythm, or received visual, auditory and tactile inputs when the experimenter tapped the rhythm on the children’s knee and chanted. A significant interaction between sensory input modality and age groups was found. While tactile input was the poorest input condition for all students, it was significantly so for the youngest children. The audio-visual condition had the best performance for young and intermediate age groups, but when tactile input was added in the audio-visual-tactile input performance went down. Young children with cognitive impairments did the poorest under tactile input while all groups’ performance was enhanced under the audio-visual input, which incidentally was the encoding condition closest to output mode (modeled by experimenter). Music Training and Memory Research Music training influences serial recall of verbal material. Wallace (1994) also found that participants with training in singing (M = 4.22 years) recalled more words than those without in Experiment IV only. These differences were significant, though no such differences were present among instrumentalists with the same number of years training. Kilgour et al. (2000) found significant differences between the text recall scores by musicians (M = 10.8 years of training) and nonmusicians for both sung and spoken conditions by the second of 5 trials. Musically trained participants continued to outscore untrained counterparts after presentation rate for words was controlled. Jakobson, Cuddy, and Kilgour (2003) found correlations between the years of music training and verbal recall (Kaufman Brief Intelligence Scale and Multidimensional Aptitude Battery, r (58) = .44, p < .001), and composite scores for temporal-order tests (Test of Basic Auditory Capabilities, subsets Temporal Order for Tones and Temporal Order for Syllables, r (58) = .59, p < .001). No significant correlations were found between general intelligence and music training. Ho et al. (2003) assembled two groups of males (ages 6-15) in Hong Kong that were matched for age, education level, socioeconomics status, and full scale IQ (WISC); one group had classical music training from 1 to 5 years while the other had no training. Tests for verbal (Hong Kong List Learning Test – Form One) and visual memory (Brief Visuospatial Memory Test – Revised and Rey-Osterrieth Complex Figure Test) were given to the groups. There were

54 significant differences at each trial on the verbal memory test between groups; musically trained boys outscored counterparts on both immediate and delayed recall trials. However, no such differences between groups existed on the visual memory test. There was a correlation (r = .54) between duration of music training and verbal learning scores even when controlling for the effects of age and education level, while there was no relationship (r = .21) between music training and visual memory scores. A follow-up study (Ho et al., 2003) was completed the next year that compared boys who continued their music study, those who discontinued music study, and a group of new music students who had completed the test battery prior to one year of music training. There were significant differences in verbal, but not visual, memory scores between the no music training group and the two trained groups. However, at the year follow-up there were no significant differences between groups as a result of the new music students’ significant increase in score and the discontinued groups’ nonsignificant decrease. The effects of music training were immediate for the new students and sustainable across nearly a year without weekly, formal training for the discontinued group. Differences in sequential memory were attributed to differing rates of music training. Moore and Staum (1987) found that English children scored better in auditory/visual sequential memory with increased age while American children’s progress was less linear. In fact, English 7-year-olds scored significantly better than English 5- and 6-years and American 5-, 6-, and 7- year-olds. Moore and Staum (1987) noted that English children received more music education on a daily basis and started music in schools at an earlier age than their American counterparts. Furthermore, research with children who engage in musical training evidenced immediate differences between these and musically untrained children in tonal judgments of music (Morrongiello, 1992). This would appear to indicate that training directly and quickly effects changes in music processing. The changes include greater speed at processing music and better memory for musical stimuli (Morrongiello, 1992). Brain volume and responses have differed between musicians and nonmusicians. In an all male sample, Gaser and Schlaug (2003) found significant positive correlations between musician status (professional, amateur, and nonmusicians) and increases in gray matter volume in the primary motor and somatosensory areas, premotor areas, anterior superior parietal areas, and in the inferior temporal gyrus bilaterally. Thus, the gray matter volume was highest in professional musicians engaged daily in practice, followed by amateur musicians whose primary careers are

not in music but who are routinely engaged in music performance, and lastly nonmusicians. Gaser and Schlaug (2003) contend that these differences can be attributed to unique structural changes as a result of ongoing music study, since there are differences in several anatomically distinct brain regions, making it unlikely that these were innate differences in the sample. Davidson and Schwartz (1977) examined bilateral EEGs (electroencephalograms) of musicians and musically naïve participants while they sang a familiar song, whistled the tune, or spoke the words in a monotone. There were significant differences in hemispheric activation patterns across the conditions. For both groups, talking, in comparison to singing, produced more left hemisphere activation. Whistling the melody generated more left hemisphere stimulation for musicians, while the right hemispheres of musically naïve participants were stimulated by whistling. The authors explained this as a training effect, attributing it to whistling's similarity to playing an instrument and asserting that the task engaged the musicians' sequential, analytical approach to performance. Besson and Faita (1995) found that ERP data collected during congruence judgments showed that incongruent final notes for familiar melodies elicited larger late positive components (LPCs) with shorter onset latencies for musicians in comparison to nonmusicians. However, when procedures eliminated the expectancy of altered final notes and participants were merely listening, the differences in LPC for familiar and unfamiliar melodies and for diatonic and nondiatonic incongruity in familiar melodies were replicated, while differences between musicians and nonmusicians disappeared. Thus, perceptual processing by both groups was similar when no decisions were required and there was no expectation of violations; but, under conditions of expectancy, musicians made decisions about incongruity faster and their brain response was larger. The experience of music training provided them with a different set of rules against which comparisons were made; this was particularly evident for unfamiliar melodies. It would seem obvious that musicians have superior skills during musically based probes. However, differences in the type of music training have influenced outcomes (see Wallace, 1994). Norris (2000) compared choral music students' (5th through 12th graders) scores on four tests of tonal memory: the Seashore Measures of Musical Talent tonal memory subsection, Wing's Standardised Tests of Musical Intelligence tonal memory subsection, the Norris Pitch Retention and Discrimination Test, and Gordon's Musical Aptitude Profile tonal imagery – melody section. Students who participated in both chorus and band scored higher on each test than chorus-

only peers; these differences were significant. Additionally, the chorus-plus-band students' scores among the test types were more strongly correlated than the chorus-only scores. Bonnel et al. (2001) researched the abilities of instrumentalists and singers to detect melodic and semantic errors in excerpts of French operas (all participants were native speakers of French). Between these two groups of music majors, there were no significant differences in rates of melodic and semantic error detection; therefore, the two musician types were considered comparable in skill. As noted earlier, Bella et al. (2003) found that musicians reached a point of certainty in familiarity judgments for familiar and unfamiliar melodies in fewer notes than nonmusicians; the difference was significant for the fewer notes needed by musicians to judge unfamiliar melodies as unfamiliar. Melodic recall and recognition differed between musicians and nonmusicians. A study examining how 16 trained melodies were grouped at recall found no significant differences, relative to the amount of recall, in the type or number of grouping categories between music majors and non-music majors (Cutietta & Booth, 1996). While music majors recalled significantly more melodies during testing, both groups changed grouping variables from mode and meter in early trials to contour and interval type in the final trial. Bigand et al. (2000) found that musicians were better able to detect melodic errors in the lower voice of 2-part counterpoint. Timbre influenced the judgments of nonmusicians significantly more than it did those of musicians (Radvansky et al., 1995; Wolpert, 1990). Nonmusicians were likely to incorrectly choose lures presented in the same timbre as the target melody. Hall and Blasko (2005) found weak, positive correlations between years playing an instrument and accuracy during timbre judgment trials; Pearson correlation coefficients for the single-instrument, dichotic-same, and dichotic-different conditions were .20, .28, and .38, respectively. Persons with instrument-playing experience showed less interference during different-timbre conditions. The range of music training represented by the literature in this review was quite broad and must be considered in interpreting the body of research. Some researchers label groups as musicians or nonmusicians while others merely report the mean or median years of training

represented in the sample. Music majors' years of training were not always delineated, though their training is assumed to be extensive and to have commenced at an early age. Criteria for musicianship often included current active status as a performer. Table 1 provides a sample of the range of training defining musicians, or the lack thereof defining nonmusicians, from studies in this review.
Gender Differences in Music and Memory Research
Researchers and popular culture have identified differences in males' and females' memory skills that are attributed to innate brain differences. The male or female brain has long been debated (Baron-Cohen, 2005). Understanding of the unique functions of the brain's hemispheres continues to be illuminated through advances in technology (Robertson, 2005). A popular theory that men and women originated from different planets infused humor into the discussion. However, the topic of gender differences is not without controversy, since identifying a true advantage for one sex in terms of health, education, or opportunity has a certain political incorrectness in our age. While most researchers acknowledge differences, the solution generated by some investigators was to balance the genders in the sample or to investigate only one gender. It was assumed that differences would be spread across the sample and, therefore, statistical comparisons were not undertaken. Others have compared males and females statistically and reported their findings. Much of the research from this review reporting analysis by gender reported no significant differences. Gender studies are not without precedent in music. Music researchers have identified conditions where differences between males and females were evident. Standley (2000) reported that men and women responded to music differently, particularly when under stress. Effect sizes for men and women receiving music therapy during medical care revealed the differences. Other research has documented female infants as having more acute hearing than male infants (Cassidy & Ditty, 2001). Researchers in instrumental music have investigated gender roles in preferences for learning a particular instrument (Harrison & O'Neill, 2000; O'Neill & Boulton, 1996; Rife, Shnek, Lauby, & Lapidus, 2001) or biases in how directors assign students to instruments (Johnson & Stewart, 2005). The current review, however, revealed more studies that reported no differences than those that did. No differences were found between male and female college students who study while listening to music or among their music preferences (Crawford & Strapp, 1994). No

Table 1
Musicianship as Defined by a Sample of Reviewed Literature

Authors | Music Training Description | Label
Andrews & Dowling, 1991 | Minimum of 2 years; sample mean of 7 years | Musician
Bella et al., 2003 | At least 4 years | Musician
Crawley et al., 2002 | 10 years with current practice | Musician
Cuddy & Cohen, 1976 | Untrained 0 years; medium 5 years; high 10 years | Musician
Davidson & Schwartz, 1977 | Played at least one instrument, engaged in current performance | Musician
Dowling et al., 2002 | Less experienced < 2 years; moderate > 2 years (mean 6.3 years) | Musician
Gaser & Schlaug, 2003 | Professional, career musicians with 2 hours daily practice | Musician
Jellison & Miller, 1982 | Music majors – performed 4 of last 5 years | Musician
Samson & Zatorre, 1991 | No training; medium training – less than 2 years, no current practice; high training – 2 or more years with current practice | Musician
Balch, 1984 | Median 2 years instrument training | Nonmusicians
Bigand et al., 2000 | No training, listening 1 hour daily | Nonmusicians
Bonnel et al., 2001 | 3 years total, no training in prior 8 years | Nonmusicians
Cassidy, 2001 | Band, choir, orchestra in elementary school | Nonmusicians
Jones et al., 1987 | Less than 10 years, median of 5 | Nonmusicians
Kilgour et al., 2000 | Little or no training – range 0 – 2 years | Nonmusicians
Madsen & Coggiola, 2001 | Less than 3 years of music instruction | Nonmusicians
Madsen & Geringer, 1990 | Less than 3 years of music instruction | Nonmusicians
Oura & Hatano, 1988 | 1/2 year piano, 1/2 year solfeggio | Nonmusicians
Radvansky et al., 1995 | 0 – 5 years, mean 1.6, not currently playing | Nonmusicians
Dowling, 1973a | 3.75 years of lessons on instrument | Not labeled
Johnson & Zatorre, 2005 | 0 – 15 years study sample | Not labeled
Wallace, 1994 | 4 years of instrument play; 4 years of singing | Not labeled

differences were found between boys and girls in attention during music listening (Flowers, 2001; Flowers & O'Neill, 2005; Sims, 2005; Sims & Nolker, 2002), though teachers estimated boys' attention to be less than girls' (Sims & Nolker, 2002). No differences were found on tonal memory tests (Norris, 2000), on tests of sequential memory for audio-visual stimuli (Staum & Moore, 1987), or for text/melody recall (Samson & Zatorre, 1991). Two authors researched an all-male population (Gaser & Schlaug, 2003; Oura & Hatano, 1988). The work of Gaser and Schlaug (2003) was replicated with both males and females, where no differences between the males and females appeared, but the interaction between music training and gender was significant (Hutchinson, Lee, Gaab, & Schlaug, 2003). Female musicians' and nonmusicians' brains were not different, while male musicians and nonmusicians differed. There were three music studies that found differences during attention or memory experiments. Kinney and Kagan (1976) tested the orienting responses of 7-month-old boys and girls to auditory stimuli, specifically short verbal (nonsense syllables) and musical phrases (varying in rhythm and timbre) that were presented along a continuum of variability from none to extreme. The researchers tracked head turns, heart rate deceleration, and vocalization during the stimuli. They found that more infants vocalized during musical stimulus presentations, with great variability among the boys in the sample. Girls' fixation responses on variable stimuli were closer to the predicted quadratic trend while boys' responses fell into an inverted U-shape. Clearly, infants' responses discriminated among the variable verbal and musical stimuli, with differences between the sexes emerging. As described earlier, Zikmund and Nierman (1992) found no differences by grade or gender for melodic conservation among 3rd - 6th grade boys and girls, but for rhythmic conservation a significant three-way interaction occurred among gender, grade level, and experimental or control group, with girls scoring higher than boys overall on rhythmic tasks. Finally, among musicians, Gordon (1978) found that 100% of the males (N=12) and only 8 of the 12 (66%) females demonstrated a left-ear trend in a dichotic chord listening study.

Most studies on attention and memory have reported no differences between males and females. Richard et al. (2004) reported no differences between 5-month-old boys' and girls' attention responses, nor did Ruff and Capozzoli (2003) report differences in distraction from audio-visual material in youngsters (10 months, 26 months, and 42 months). Ritterfeld et al. (2005) reported no differences between boys' and girls' attention during narrative stories with or without music. Obrzut et al. (1986) found no significant differences between boys' and girls' (ages 9 – 12) recall of verbal and auditory stimuli during dichotic listening. Johnson et al. (2003) conducted a number of tests on middle school students who were typical or identified as gifted. They found no significant differences on measures of mental capacity, speed of processing, and inhibition of attention. The exception was a single subtest of speeded spatial location, where a significant interaction between gender and age appeared: only the youngest females had lower response times than the older girls and the boys.

The Present Study

In summary, this literature review showed the ubiquitous nature of listening to music while multi-tasking (Kubey & Larson, 1990; Lesiuk, 2005; North et al., 2004) and that music listening can be a focus of attention for young and old (Flowers, 2001; Flowers & O'Neill, 2005; Sims, 2005; Sims & Nolker, 2002; Madsen & Geringer, 1990, 2000/2001; Madsen et al., 1997). Research on melodic memory has included recognition tasks (Andrews & Dowling, 1991; Bella et al., 2003; Dowling, 1973a, 1973b; Schulkind et al., 2002), judgments on a continuum of same-different discriminations between targets and test items (Dowling et al., 2002; Demorest & Serlin, 1997; Halpern et al., 1998; Kidd et al., 1984; Madsen & Madsen, 2002; Radvansky et al., 1995; Sink, 1983; Wolpert, 1990), and error detection in single and multi-voice textures (Bigand et al., 2000; Byo, 1997; Fujioka et al., 2005; Sheldon, 2004), all within the auditory domain. Audio-visual studies with music have found that visual stimuli (music notation and contour-like dashes) evoke auditory imagery (Brodsky et al., 2003; Halpern & Bower, 1982; Widmann et al., 2004; Yumoto et al., 2005). These are examples of complementary audio-visual stimuli, meaning that both contribute toward the same goal. Performing music, specifically sounding music from notation, provides numerous opportunities to simultaneously coordinate auditory and visual senses toward a common goal. The majority of audio-visual research with bimodal encoding, where both the audio and visual stimuli are to be used for decision-making or recalled, used novel and familiar sounds,

short tone sequences, or noise bursts (Napolitano & Sloutsky, 2004; Robinson & Sloutsky, 2004; Schmitt et al., 2000; Sloutsky & Napolitano, 2003; Tremblay et al., 2001; Turatto et al., 2003; Turatto et al., 2005). Only one used actual music and a non-complementary visual sequence (Johnson & Zatorre, 2005). Therefore, this study used actual melodies, Pomp and Circumstance and The Bailiff's Daughter, paired with novel images that were encoded and tested as pairs. A number of recall studies with music asked participants to recall the words/lyrics, the melodies, or both (Crowder et al., 1990; Jellison & Miller, 1982; Oura & Hatano, 1988; Serafine et al., 1984; Serafine et al., 1986). Other studies have asked participants to selectively attend to words or melodic elements (Bonnel et al., 2001). Generally, musicians are better than nonmusicians at recalling both words and music. The present study compared musicians' and nonmusicians' recall of novel visual images and melodic fragments following encoding under selective attention conditions. While Napolitano and Sloutsky (2004), Robinson and Sloutsky (2004), and Sloutsky and Napolitano (2003) identified an auditory bias among children and a visual bias among adults, their samples did not include musicians. This study evolved from the following questions: Would music majors' attention to the music interfere with the encoding of visual images? Would music majors' memory load for the bimodal task be lower and their overall performance better, since it is theorized that they should have better melodic memory? Can nonmusicians successfully compare melodic fragments as same or different from the intact melody presented during encoding, given that Deliege et al. (1996) found melodic reconstruction by phrase very difficult for nonmusicians? Would there be a difference in the melodic judgments if the music were highly familiar versus novel? The following research questions were addressed:
1) What are the differences between music majors' and non-music majors' ability to recall bimodal audio-visual stimuli following encoding under divided or selective attention?
2) What are the differences between males and females during these conditions?
3) What differences exist in recall when the music is highly familiar versus novel?
4) What strategies do music majors and non-music majors use to accomplish this bimodal task?

CHAPTER 3

METHOD

This research involved a recognition memory paradigm including bimodal encoding of auditory (music) and visual information tested in bimodal form. Participants indicated whether the auditory event, the visual event, or both were different in test items in comparison to training stimuli. The researcher designed both the training stimuli and the test materials. The audiovisual materials, procedures, and generated forms were subjected to pilot testing prior to the main experiment. Some changes to each of these were made following the pilot study; however, the procedures for generating audio and visual materials were not changed substantially from the pilot to the main experiment. Therefore, a detailed description of the development methodology is included in the pilot study section only.

Pilot Study Materials

Music (Auditory Stimuli)

Criteria were established for the music selections, one of which was expected to be familiar and the other unfamiliar to the research participants. Two 8-measure melodies were sought such that each selection was either an intact song or a recognizable, independent excerpt. The melody was played without accompaniment using a piano timbre. Piano was found to be a timbre common to music memory research, and harmonic information was found to be useful only to music majors in making same-different judgments (Besson & Faita, 1995; Cuddy & Cohen, 1976; Cutietta & Booth, 1996; Deliege et al., 1996; Demorest & Serlin, 1997; Halpern et al., 1998; Johnson & Zatorre, 2005; Norris, 2000; Oura & Hatano, 1988; Pembrook, 1986, 1987; Povel & van Egmond, 1993; Radvansky et al., 1995; Schulkind et al., 2003; Wolpert, 1990). The harmony was selected to be idiomatic of Western art and/or folk music, and the selection remained within a single key. Each measure of the selection was to be unique, with no exact duplicate measures appearing in the chosen selection. The selection was to be instrumental with no known text, in order to avoid interaction between text and melodic recall (Crowder et al., 1990; Serafine et al., 1984; Serafine et al., 1986; Samson & Zatorre, 1991).

Rhythmic groupings were to exist within the selection such that four rhythmic motives were each repeated two times within the excerpt. The rhythmic groupings were chosen to facilitate chunking and to reduce reliance on rhythm and increase reliance on pitch relationships during judgment-making (Andrews & Dowling, 1991; Byo, 1997; Demorest & Serlin, 1997; Dowling, 1973a; Halpern et al., 1998; Madsen & Geringer, 1990; Schulkind et al., 2003; Sink, 1983). Each selection had to be coherent when performed at M.M. = 68, as both stimuli and test items would be presented at a constant tempo (Pierce, 1992). An 8-measure excerpt of the English composer Edward Elgar's (1901) Pomp and Circumstance, Military March No. 1 in D, Opus 39 was taken from the trio. This selection was chosen for the familiar music condition because it was expected to be familiar to most high school graduates attending college and has been used in previous research as a familiar melody (Besson & Faita, 1995). The original time signature was two-four; however, the excerpt is reasonably conceived in common time (Figure 1). Therefore, the excerpt was notated and recorded in common time throughout the research (Figure 2). No additional rhythmic alterations were made. The range of the excerpt was one octave, from A3 to A4, and it was recorded in the key of G. A simple analysis of the harmony identified the following chords: I, V7, ii, IV, and V7/V. These chords are consistent with traditional harmonic structure. The excerpt included a raised 4th scale degree that occurred once in the melodic line (measure 5) during the secondary dominant. During the pilot testing, the selection ended on D4 harmonized with a V7 chord, as it appeared in the original excerpt.

Figure 1. Pomp and Circumstance – original.

Figure 2. Pomp and Circumstance – pilot study version.

The folk song The Bailiff’s Daughter of Islington (Silverman, 1975) was selected for the unfamiliar music condition. This 8-measure, 18th century English ballad by an unknown composer was used in its entirety. While the folk song includes a text, it was determined to be unknown to the listeners. In the original, the song was in common time, included an anacrusis, and was in the key of D (Figure 3). A simple analysis of the harmony included the following chords: I, V7, ii, IV, and vi. These chords are consistent with traditional harmonic structure and comparable to the harmony of Pomp and Circumstance. The excerpt remained in the same key throughout the eight measures. In addition to transposing to the key of G, rhythmic structures were altered from the original notation in order to produce four rhythmic motifs that were iterated two times each. The range of the excerpt is one octave and one half step, from F#4 to G5. Melodic contour was preserved. See Figure 4 for the version used in the pilot study.

Figure 3. The Bailiff’s Daughter – original.

Figure 4. The Bailiff's Daughter – pilot study version.

The rhythmic groupings for Pomp and Circumstance are as follows: Rhythm I - measures 1 and 3, Rhythm II – measures 2 and 6, Rhythm III - 5 and 7, and Rhythm IV - 4 and 8. The rhythmic groupings for Bailiff’s Daughter are as follows: Rhythm I - measures 1 and 5, Rhythm II - measures 2 and 7, Rhythm III - measures 3 and 6, and Rhythm IV - measures 4 and 8. The dotted half note and whole notes at the end of each four-measure phrase are considered to serve the same rhythmic function of cessation of motion. Rhythm IV in both selections was a whole note or a dotted half note at the end of a four-measure phrase. Rhythms I, II, and III are discrete to each selection. Two additional melodies were identified in order to create a practice test. The familiar music selection chosen was the first four measures of Hail to the Chief (The book, 1994), an instrumental selection credited to a 19th century English composer named James Sanderson. The selection was expected to be familiar to most participants due to the performance of the music during Presidential events. The four-measure excerpt (Figure 5) was converted from cut-time to common time when transcribed (Figure 6). It was recorded in the key of C and had a range of an octave and a major third (C4 to E5). It contained 3 rhythmic groupings, one of which was iterated two times. A simple analysis of the harmony included the following chords: I, V7, V7/V. The excerpt ended on D5 and a V chord.

Figure 5. Hail to the Chief – original notation.

Figure 6. Hail to the Chief – pilot study version.

The unfamiliar music selection was the first four measures of the 19th century English folk song The Farmer’s Boy (Grover, 1973). The original song was in common time, in the key of B-flat, and contained an anacrusis (Figure 7). The four measure excerpt was transposed to the key of C and the anacrusis was eliminated. It presented four different rhythmic patterns with a range of one octave, D4 to D5 (Figure 8). A simple analysis of the harmony included the following chords: I, V7, and IV. The excerpt ended on D4 on a V chord, similar to the Hail to the Chief excerpt.

Figure 7. The Farmer’s Boy – original key and notation.

Figure 8. The Farmer's Boy – pilot study version.

The music excerpts were transcribed into Finale notation software using piano for the instrumentation. The bass clef staff was eliminated for all music, and no harmony was used. The tempo marking was set at M.M. = 68. Each excerpt was transformed into a sound file with Finale. Editing of the sound files was completed using Audacity. For the music stimuli, the editing consisted of removing silence at the end of each selection. No other alterations were necessary for the sound stimuli used for the training stimulus video.

Images (Visual Stimuli)

Shapes were chosen as the images in the study. Four groupings of shapes were chosen to accompany each music stimulus, thus mimicking the rhythmic groupings established in the music. One set of images contained diamond shapes, swirling or spiral shapes, star-like shapes enclosed or arranged in circular patterns with dots, and lightning bolt-like spiked symbols. The other set of images contained squares, triangles, compass faces, and stars alone. Four of each type of shape were located using Microsoft Clip Art. From each image grouping, two were randomly chosen as trainers and the remaining two as distractors. The sets were randomly assigned to accompany The Bailiff's Daughter or Pomp and Circumstance. The images for Pomp and Circumstance were blue on a white screen; the images for The Bailiff's Daughter were black on a white screen. The variation in color was used to assist the participant in separating each trial. Each of the four images in a set was numbered 1 – 4 and randomized. The first two in the order were used as trainers and the remaining two were used as distractors. The eight training images were then randomized for order. The same procedure was applied to both sets. The ordered training stimuli are indicated in Figures 9 and 10. Each video lasted approximately 30 seconds.
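The random split of each image grouping into trainers and distractors, and the subsequent shuffling of the eight training images, can be expressed compactly in code. The following Python sketch is purely illustrative; the grouping names and image labels are hypothetical and were not part of the study materials.

```python
import random

# Hypothetical labels for the four image groupings paired with one melody.
groupings = {
    "diamonds": ["d1", "d2", "d3", "d4"],
    "spirals":  ["s1", "s2", "s3", "s4"],
    "stars":    ["t1", "t2", "t3", "t4"],
    "bolts":    ["b1", "b2", "b3", "b4"],
}

rng = random.Random(2006)  # fixed seed only so the example is reproducible

trainers, distractors = [], []
for shapes in groupings.values():
    order = rng.sample(shapes, k=len(shapes))  # number 1-4 and randomize
    trainers.extend(order[:2])                 # first two become trainers
    distractors.extend(order[2:])              # remaining two become distractors

rng.shuffle(trainers)  # randomize the presentation order of the eight training images
print(trainers)
```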


Figure 9. The Bailiff’s Daughter order of visual training stimuli (black images on white screen).


Figure 10. Pomp and Circumstance order of visual training stimuli (blue images on white screen).

Two additional sets of four images each with no grouping patterns were selected for the practice videos. One set included green images on a white screen and was randomly assigned to accompany Hail to the Chief (Figure 11). The other set included red images on a white screen and accompanied The Farmer’s Boy (Figure 12).


Figure 11. Hail to the Chief order of images (green on white screen) for training stimulus, practice test 2.


Figure 12. The Farmer’s Boy order of images (red on white screen) for training stimulus, practice test 1.

Video Development

Videos were developed using iMovie (video), Finale (notation and sound software), Audacity (sound editing), Microsoft PowerPoint (display of visual images), Microsoft Clip Art (selection of visual images), and Snapz Pro X (screen snapshots of visual images). The sound files created with Finale and Audacity were imported into iMovie. The visual images were inserted into a PowerPoint presentation and displayed using the full 15-inch computer screen. Snapshots of the screen displaying the images were taken using Snapz Pro X software and saved as image files. For the training stimulus videos, the onset of each image was timed with the music such that each image first appeared on screen with the downbeat of a measure and remained on screen for the duration of that measure (3.16 – 3.24 seconds). Ten seconds of a blank white screen appeared between repetitions of the training video.

Posttest Construction

Composing the audio distractors. Criteria were established for composing the audio distractors. A single distractor was composed based upon each of the eight measures in the familiar and unfamiliar training music. The rhythmic integrity of the measure was not altered, so that the rhythmic groupings continued. The key of the song or excerpt was not altered with accidentals; however, the implied harmonic structure of the measure was changed in some

distractors. The range of each measure was analyzed, and distractors were composed such that the range was similar to the original. The starting pitch was raised or lowered. Each interval change was identified for direction and degree. The contour was changed by altering the direction of the melody such that ascending intervals either descended or no interval change was made, descending intervals either ascended or no interval change was made, and in instances where no interval change occurred between two notes, either an ascending or descending interval was composed. Additionally, interval changes between adjacent tones were analyzed as either stepwise or leaps. Stepwise intervals were expanded to a leap, and intervals of a minor third or greater were either compressed to a step or expanded beyond their original degree. (See Appendix A for a detailed description of distractor composition.) Audio (music) distractors were notated using the same Finale settings as the training music. A sound file of the distractors was generated and edited using Audacity such that each measure of music was saved as a single sound file. The training stimulus was edited in a like manner for the test. When necessary, sound bleed from previous measures was eliminated by notating the measure itself in Finale and generating a sound clip.

Test item construction. The posttest for recognition memory contained the following question types: Pair correct – the music and image were presented as a pair in the training video; Picture new – the music was in the video but the image presented as a test item was not; Music new – the image was in the video but the music presented as a test item was not; and Pair new – neither the music nor the image was presented in the training video. These test item types were common to the memory research literature (Bonnel et al., 2001; Crowder et al., 1990; Napolitano & Sloutsky, 2004; Robinson & Sloutsky, 2004; Serafine, Crowder, et al., 1984; Serafine, Davidson, et al., 1986; Sloutsky & Napolitano, 2003). The posttest consisted of 23 items: six each of pair correct, picture new, and music new; two new pair questions; and three retest items, one each of pair correct, picture new, and music new. Each measure of music and its respective image were numbered 1 – 8. Using a random number generator, the first 6 of the 8 randomly ordered measures were selected for pair correct. This procedure was repeated for the picture new and music new questions, resulting in 6 of the 8 sets of training music and images being tested two times and two sets being tested three times. Two measures were randomly chosen for the new pair questions. For the pair correct questions, both

the audio and visual pair were presented together. For the picture new and music new questions, the paired stimuli were separated and presented with respective distractors. For the new pair questions, an audio distractor based upon the chosen measure was paired with a picture distractor. Visual distractors were assigned randomly to music new and new pair test items. One retest item was randomly selected from each of the generated pair correct, picture new, and music new items. These 23 questions were randomly ordered; the order chosen did not place retest items in close sequence, and no more than two of the same type of question appeared sequentially. This procedure was completed twice, once for The Bailiff's Daughter and once for Pomp and Circumstance. Each test item was presented following a screen with the item number accompanied by a warning sound. This warning sound was two sixteenth notes prepared in Finale using unpitched percussion (maracas, wood block, and castanets). Its duration was approximately 0.5 seconds. The number slide was exposed for 2 seconds (Dowling, 1973a, 1973b), followed by the 3- to 3.5-second test item. Five seconds of a blank white screen followed each item and provided a fixed interval during which responses were expected of the participants (Dowling, 1973a, 1973b). The next test item began with the warning sound and the next sequential number slide.

Posttest practice test. In order to familiarize participants with the posttest procedure, each question type was tested in the practice test. Likewise, participants were exposed to a familiar music condition and an unfamiliar music condition. The question types and music conditions were randomly ordered, resulting in each participant first viewing The Farmer's Boy training video with four red images synchronized to the four-measure excerpt. Following the second viewing of the training video, a two-question test followed, testing the music new and pair correct question types. Participants received immediate feedback on test answers. Then, the Hail to the Chief training video containing four green images synchronized to the four-measure excerpt was viewed twice. A second two-question test followed, testing the pair new and picture new question types. Immediate feedback was provided.

Procedure for Pilot Study

One purpose of the pilot study was to determine the effect of viewing the training video two times versus three times on posttest results. Researchers have found one exposure to be too few for unfamiliar music (Kilgour et al., 2000), significant differences after two exposures of training stimuli (Deliege, Melen, Stammers, & Cross, 1996; Kilgour et al., 2000;

Pembrook, 1986, 1987; Wallace, 1994) and after three exposures of a 30-second piano composition (Deliege et al., 1996), and ceiling effects after five exposures (Kilgour et al., 2000). Another main purpose was to examine patterns in the posttest items. Feedback on the procedure, instructions, and forms was sought from participants. Though the main study would include music and non-music majors and a balanced sample of females and males, it was concluded that data from a convenience sample of predominantly female music majors would answer the posed pilot study questions. Pilot study participants (N = 18; n = 16 female) were experienced musicians enrolled in either graduate or undergraduate study at Florida State University. The convenience sample was haphazardly divided into two groups, one of which viewed the familiar and unfamiliar music training videos two times prior to taking the posttest, while the other viewed each training video three times prior to taking the posttest. Both groups were tested on the same day in the same classroom and viewed the stimuli on the TV/VCR. Each participant received a study packet containing the informed consent form, a pretest questionnaire, a practice test, a posttest for Pomp and Circumstance, a posttest for The Bailiff's Daughter, and a post-experiment questionnaire. (See Appendices B – F for copies of the forms.) Each packet was coded in order to assure anonymity. The researcher read aloud all instructions from a script (Appendix G). The practice test procedures were identical for both groups. The training stimuli were presented to the participants of one group two times prior to the posttest and to the other group three times prior to the posttest. These videos contained only the training stimulus and subsequent tests. Both groups received a dual-task condition, as the instructions did not ask for a specific focus of attention. The order for both groups was Pomp and Circumstance followed by The Bailiff's Daughter. Each participant completed a post-experiment questionnaire. All participants were given the opportunity to provide feedback on the methods and procedures of the research after all forms were collected. Changes in procedures, forms, and instructions resulted. (See Appendices D, E, I, J, and K for the forms used in the main experiment.)

Results of Pilot Study

Comparison of 2 viewing times and 3 viewing times groups. Posttest scores of the two viewing times (2VT) group were compared to the three viewing times (3VT) group using the Mann-Whitney U test. For both Pomp and Circumstance and The Bailiff's Daughter, no significant

differences were found (p > .05) on total score, new pairs, picture new, music new, first half of the test correct, or second half of the test correct. Significant differences were found between groups on the number of pair correct items identified on the Pomp and Circumstance posttest, U = 67.5, p < .02, 2VT M = 4.11 (SD = 1.36) and 3VT M = 5.56 (SD = 0.88). For the 3VT group, the mean for pair correct was nearly a perfect score (5.56 out of 6) with very little spread. No significant differences were found on pair correct between the 2VT and 3VT groups on The Bailiff's Daughter. Posttest scores for the unfamiliar and familiar music conditions were compared using the Wilcoxon signed-rank test. For the 2VT group, no significant differences were found between posttest scores for Pomp and Circumstance (PC) and The Bailiff's Daughter (BD) on total score, pair correct, new pairs, picture new, first half of test, and second half of test. Significant differences were found on the music new question type, W = 31, p < .05, PC M = 5.33 (SD = 1.12) and BD M = 3.44 (SD = 2.30). These differences were expected and were considered a function of the stimulus type. For the 3VT group, no significant differences were found between posttest scores for Pomp and Circumstance and The Bailiff's Daughter on pair correct, new pair, picture new, and second half of test. Significant differences were found on total score (W = 32, p < .05, PC M = 17.8, SD = 3.10 and BD M = 14.0, SD = 4.10), music new (W = 32, p < .05, PC M = 5.44, SD = 1.67 and BD M = 3.67, SD = 1.44), and first half (12 questions) of the test (W = 28, p < .02, PC M = 9.78, SD = 1.12 and BD M = 7.67, SD = 1.94). Again, differences between the familiar and unfamiliar music conditions on total score and music new were expected; however, differences by test half were not. Since significant differences were identified between the familiar and unfamiliar music conditions for the 3VT group on the first half of the test, further investigation was warranted. The effects of test half on total scores for the 2VT and 3VT groups for both music types were examined using the Wilcoxon signed-rank test. No significant differences were found for the 2VT group or the 3VT group for The Bailiff's Daughter, or for the 2VT group for Pomp and Circumstance (p > .05). Significant differences were found in posttest scores by test half in the 3VT group for Pomp and Circumstance, W = 36, p < .01, first half M = 9.78, SD = 1.20 and second half M = 8.0, SD = 2.0. The differences were likely attributable to the length of time the participants were engaged in the task, since the stimulus tape viewed three times took longer than viewing it two times. Decay of the memory trace or fatigue may be responsible for poorer performance on the second half of the test.
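The nonparametric tests reported above (Mann-Whitney U for the between-groups comparison and Wilcoxon signed-rank for the within-participants comparison) can be computed with standard statistical software. The Python/SciPy sketch below uses invented score vectors solely to illustrate the calls; it does not reproduce the pilot data.

```python
from scipy import stats

# Hypothetical pair-correct scores (0-6) for the two pilot groups; not the study data.
scores_2vt = [4, 5, 3, 2, 4, 5, 4, 6, 4]
scores_3vt = [6, 5, 6, 5, 6, 4, 6, 6, 6]

# Between-groups comparison (2VT vs. 3VT), as in the pilot analysis.
u_stat, u_p = stats.mannwhitneyu(scores_2vt, scores_3vt, alternative="two-sided")

# Within-participant comparison (e.g., familiar vs. unfamiliar condition).
familiar   = [5, 6, 4, 5, 6, 5, 4, 6, 5]
unfamiliar = [3, 4, 4, 2, 5, 3, 4, 4, 3]
w_stat, w_p = stats.wilcoxon(familiar, unfamiliar)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")
print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.3f}")
```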

The range, mode, median, mean, and standard deviation of the four data sets (familiar versus unfamiliar, 2VT versus 3VT) were compared. Table 2 displays the descriptive data. For Pomp and Circumstance, the 3VT group produced a ceiling effect, scoring no lower than 5.44 out of 6 points on question types, whereas the 2VT group produced a greater spread in scores as evidenced by the standard deviation and a greater range in individual scores. For The Bailiff's Daughter, the difference in means between the training groups was less than a point in each instance. The range in scores was comparable between these two groups. See Table 2. Based on the statistical analysis and a comparison of the descriptive statistics, it was concluded that viewing the training video only two times was advantageous to data collection. For Pomp and Circumstance, viewing the training video only two times produced a greater spread in scores among this group of predominantly female musicians and eliminated the differences by test half produced by the 3VT condition. There was little difference in means and no significant differences among score types for The Bailiff's Daughter. There were differences among score types between the familiar and unfamiliar music conditions in the 2VT group. Therefore, for the main experiment all participants viewed the training video two times prior to taking the posttest.

Question analysis. The total number of correct responses to each question was calculated by music type and number of training viewings. Tables 3 and 4 display the results. Details on the incorrect responses were noted. Responses to retest items were noted. The rhythmic grouping of the music and the type of question were considered when analyzing the data. Questions that 4 or fewer respondents answered correctly were considered anomalous. For both familiar and unfamiliar music conditions, the retest items for picture new fell below this criterion. During both trial types of Pomp and Circumstance, respondents indicated that the item was a correct pair since the picture had been previously viewed during the test. During both trial types of The Bailiff's Daughter, respondents indicated that the item was a new pair, indicating that they failed to recall the music. However, this item was number 19 of 23 and was rhythm

Table 2
Descriptive Statistics on Pilot Data Group by Music Type

Column order: Total score (23), Pair Correct (6), Music New (6), Picture New (6), New Pair (2), First half (12), Second half (11)

Bailiff 2VT
Range:  10 – 19, 2 – 6, 1 – 7, 2 – 6, 1 – 2, 4 – 10, 5 – 9
Mode:   11, 4, None, 5, 2, None, 6
Median: 13, 4, 3, 5, 2, 8, 6
Mean:   13.56, 4.11, 3.44, 4.44, 1.56, 7.22, 6.33
SD:     3.0, 1.2, 2.3, 1.2, 0.5, 2.0, 1.3

Bailiff 3VT
Range:  10 – 19, 1 – 6, 1 – 5, 1 – 6, 0 – 2, 5 – 10, 2 – 10
Mode:   18, None, None, 5, None, 8, 8
Median: 16, 5, 4, 5, 1, 8, 7
Mean:   14.0, 4.67, 3.67, 4.3, 1.33, 7.67, 6.33
SD:     4.1, 1.6, 1.4, 1.7, 0.71, 1.9, 2.5

Pomp 2VT
Range:  9 – 20, 1 – 6, 4 – 7, 3 – 7, 1 – 2, 7 – 10, 2 – 10
Mode:   14, 4, 5, 6, 2, 10, None
Median: 17, 4, 5, 5, 2, 9, 7
Mean:   15.9, 4.11, 5.33, 5.11, 1.78, 8.67, 7.22
SD:     3.5, 1.4, 1.1, 1.3, 0.4, 1.4, 2.6

Pomp 3VT
Range:  13 – 21, 4 – 7, 3 – 7, 5 – 7, None, 8 – 11, 5 – 10
Mode:   19, 6, 7, 6, 2, None, 10
Median: 19, 6, 6, 6, 2, 10, 9
Mean:   17.8, 5.56, 5.44, 5.78, 2, 9.78, 8.0
SD:     3.1, 0.9, 1.7, 0.7, 0, 1.2, 2.0

Note. Total possible score in parenthesis.

Table 3
Question Analysis for The Bailiff's Daughter - Pilot Study

No.  Type  Rhythm  2x Correct (9)  3x Correct (9)  Wrong answers
1  Pair New  IV  8  8  2 PN
2  Picture New  III  5  7  1 PC, 1 MN, 4 NP
3  Picture New  IV  6  6  6 NP
4  Music New  III  4  4  8 PC, 1 PN, 1 NP
5  Pair Correct  IV  6  5  5 MN, 1 PN, 1 NP
6  Music New  II  6  8  1 PC, 3 NP
7  Picture New  IV  4  4  10 NP
8  Music New  III  3  4  9 PC, 2 PN
9  Picture New  II  7  6  5 NP
10  Pair Correct  II  4  6  1 PN, 3 MN, 4 NP
11  Pair Correct RT#5  IV  7  7  2 MN, 1 NP, 1 no response
12  Music New  I  5  5  6 PC, 2 NP
13  Pair Correct  III  5  4  8 MN, 1 NP
14  Picture New  I  9  8  1 PC
15  Pair Correct  III  7  7  3 PN, 1 MN
16  Music New RT#12  I  3  1  13 PC, 1 NP
17  Pair Correct  I  5  7  4 PN, 1 MN, 1 NP
18  Pair New  IV  6  4  1 PC, 7 PN
19  Picture New RT#3  IV  2  2  3 PC, 2 MN, 8 NP
20  Picture New  II  7  7  1 PN, 3 NP
21  Music New  I  5  4  1 PC, 1 PN, 7 NP
22  Pair Correct  IV  3  6  6 MN, 3 NP
23  Music New  II  5  7  2 PC, 4 NP

Note. PC=pair correct, PN=picture new, MN=music new, NP=new pair, RT=retest.

Table 4
Question Analysis for Pomp and Circumstance - Pilot Study

No.  Type  Rhythm  2x Correct (9)  3x Correct (9)  Wrong answers
1  Music New  IV  9  7  1 PN, 1 NP
2  Music New  I  8  9  1 NP
3  Pair Correct  I  7  8  2 PN, 1 MN
4  Picture New  I  9  9  None
5  Music New  II  7  8  2 PN, 1 MN
6  Picture New  I  9  9  None
7  Pair Correct  IV  1  4  1 PN, 11 MN, 1 NP
8  Picture New  III  5  8  1 PC, 4 NP
9  Picture New  III  8  9  1 MN
10  Music New  II  6  6  5 PC*, 2 NP*
11  Picture New  IV  2  3  13 NP
12  Pair Correct  III  7  7  3 PN, 1 MN
13  Pair Correct  III  5  8  4 PN, 1 NP
14  Pair New  III  8  8  2 MN
15  Music New  IV  6  5  1 PC, 2 PN, 4 NP
16  Picture New RT#9  III  4  4  8 PC, 2 NP
17  Pair Correct  I  6  7  2 PN, 2 MN, 1 NP
18  Music New RT#1  IV  5  6  3 PC, 4 NP
19  Picture New  II  6  8  1 MN, 3 NP
20  Music New  III  6  7  1 PC, 1 PN, 3 NP
21  Pair Correct  II  4  4  1 PN, 6 MN, 3 NP
22  Pair New  I  8  7  1 PN, 2 MN
23  Pair Correct RT#13  III  7  9  1 PN, 1 MN

Note. PC=pair correct, PN=picture new, MN=music new, NP=new pair; RT=retest; * two incorrect responses circled by a participant.

IV (discussed later). There are likely interactions operating among these variables. The music new retest item failed to reach the criterion for both trials of The Bailiff's Daughter only. Respondents indicated that the pair was correct since the music had previously been heard during the test. This was a confound similar to the picture new retest item. Both pair correct retest items scored above the criterion. Given that retest items failed to reach the criterion, all retest items were eliminated from the main experiment. Rhythm group IV (whole notes) represented two of the four questions failing to reach criterion on Pomp and Circumstance. Of the five questions based on Rhythm IV, 40% failed to reach criterion. Rhythm groups II and III were each represented a single time at the 4-or-fewer criterion. Rhythm group IV comprised measures 4 (supertonic scale degree) and 8 (dominant scale degree), neither of which was a tonic pitch. Therefore, when participants judged these pitches from the training video as test items, they concluded that the items were new music on the posttest (questions 7 and 11). New music items that were whole notes were accurately judged (questions 1 and 15) when not included as a retest item (question 18). Rhythm group IV (measure 4 – dotted half and quarter; measure 8 – whole note) represented 3 of the 9 questions failing to reach criterion on The Bailiff's Daughter. Excluding the retest item based on Rhythm IV (question 19; measure 4), in both of the other occurrences (questions 7 and 22) respondents judged long tones to be new music when they were in fact from the training video. However, questions based on Rhythm IV training music exceeded the criterion three times as well (questions 3, 5, and 11). Questions 5 and 11 were based on a tonic pitch. Distractors based upon Rhythm IV (questions 1 and 18) were accurately judged, like the distractors in Pomp and Circumstance. Rhythm III (four quarter notes) failed to reach criterion on three of the nine questions. Two of these three occurrences followed a question based on the Rhythm IV training stimulus. Three of the five occurrences of a question based on Rhythm III followed Rhythm IV and were distractor questions. It was difficult to conclude whether such a question would be answered accurately if it did not follow Rhythm IV, since there was no example of this in the test. One occurrence of Rhythm II (question 10) and two occurrences of Rhythm I (questions 16 [retest] and 21) failed to reach criterion. Rhythmic groups were not evenly distributed across question types. The following distribution was represented by Pomp and Circumstance: Rhythm I – 6, Rhythm II – 4, Rhythm

III – 8, Rhythm IV – 5. The Bailiff's Daughter represented a slightly better distribution among the rhythmic groupings (Rhythm I – 5, Rhythm II – 5, Rhythm III – 5, Rhythm IV – 8); Rhythm III followed Rhythm IV on three of the five occasions. Likewise, during Pomp and Circumstance, Rhythm group III followed IV three times and III followed III three times. None of the new pair questions from either music condition failed to meet criterion; there were only two questions based on new pair test items. The pair correct question type functioned similarly in each music condition; two questions on Pomp and Circumstance and three questions on The Bailiff's Daughter failed to reach criterion. Two picture new questions failed to meet criterion in each music condition. However, considerable differences were noted in the rate at which music new items failed to reach criterion by music type. No music new questions failed to reach criterion on familiar music, while four failed on unfamiliar music. This appeared to indicate that the difficulty differed based upon the familiarity of the music. Since there were only two new pair questions, it was only practical to compare means among three of the question types.

Changes to Music Stimuli for Main Experiment

Based upon the data from the question analysis, changes were made to the training music. The whole note and dotted half note measures contained too little information for participants to recall during the posttest. In order to avoid a lack-of-information detriment (Cuddy & Cohen, 1976), measures 4 and 8 of both music stimuli were altered such that Rhythm IV contained more information than a single long tone. For Pomp and Circumstance, the rhythm was changed to a quarter note, two eighth notes, and a half note. In measure 4, the A3 quarter and half notes were separated with an eighth-note pattern on G3 and B3 while keeping the harmony on a V chord. In measure 8, the harmony was moved from the V to a I chord by walking up stepwise from D4 to G4 (Figure 13). While this did change the harmony of the excerpt, it was deemed a reasonable way to end an excerpt.

Figure 13. Pomp and Circumstance – main experiment.

Rhythm IV for The Bailiff’s Daughter was changed such that the same rhythm appeared in both measures 4 and 8 and it was different from Rhythm IV in the familiar music condition. The new Rhythm IV was a half note, two eighth notes, and a quarter note. In a similar manner to the alterations of Pomp and Circumstance, the starting and final pitch of measures 4 and 8 remained the same with an embellishment of two eighth notes in between. In measure 4, the D5 half note and quarter note were separated with C5 and B4 eighth notes. In measure 8, the G4 half note and quarter note were separated with two D4 eighth notes. Neither alteration changed the harmonic structure of the folk song (Figure 14).

Figure 14. The Bailiff’s Daughter – main experiment.

Changes to the Test Construction for Main Experiment

In addition to eliminating the retest items, each question type was represented six times, resulting in a 24-question test. The addition of a single question balanced the halves of the test. Furthermore, three of each of the four question types appeared between questions 1 and 12 and

the remaining three of each type between questions 13 and 24. This enabled direct comparison of question type by test half. Six of each question type (pair correct, picture new, music new, and new pair) were numbered and randomized. A random order was chosen that met the criterion of question type by half of test. Random integers were assigned, and 6 sets of each rhythmic grouping (I – IV) were established. This resulted in an equal distribution of rhythmic groups by question. Additionally, the sequences of rhythmic groups were randomized such that no more than two occurrences of each rhythmic grouping were in succession and each combination was represented at least one time. Three questions based on each rhythmic grouping were presented in each half of the test, allowing analysis of rhythmic group by test half. In a similar manner, the order of question types was randomized and balanced as much as possible. Each combination of pair correct, music new, picture new, and new pair was represented at least once but no more than two times. This question type sequencing was completed after the rhythmic grouping sequence. It was not possible to create two tests that adhered strictly to these criteria for both the rhythmic sequence and the question type sequence. Both criteria were met by the posttest for Pomp and Circumstance, and the rhythmic sequence criterion was met for The Bailiff's Daughter. There were two violations of the question type sequence: there was no music new – music new combination and there were three pair correct – music new combinations for The Bailiff's Daughter. The randomized test sequencing was undertaken to minimize or distribute any order effect based on rhythm or question type. Adding four new pair questions meant that four new audio distractors and four new visual distractors were needed. One additional image was located for each of the four picture groupings and added to the other two distractors. The original set of training pictures was used. Four additional audio distractors were composed following the same guidelines as the original test set. The question types pair correct and picture new required an audio trainer, and the music new and pair new question types required audio distractors. Each of the eight bimodal pairs in the training set was tested three times in the posttest in a distributed combination of trainer or distractor audio and visual stimuli. For The Bailiff's Daughter, Rhythm I in measures 1 and 5 required composition of an additional distractor based upon each measure. Rhythm III in measure 6 and Rhythm IV in measure 4 required additional distractors. The results for the unfamiliar music condition were as follows: Rhythm I – 2 trainers,

4 distractors; Rhythm II – 4 trainers and 2 distractors; Rhythms III and IV – 3 of each. For Pomp and Circumstance, Rhythm I required an additional distractor based upon measure 3, Rhythm III required an additional distractor based on measure 5, and Rhythm IV required additional distractors based on measures 4 and 8. The results for the familiar music condition were as follows: Rhythm I and Rhythm III – 3 of each; Rhythm II – 4 trainers and 2 distractors; Rhythm IV – 4 distractors and 2 trainers. With the audio trainers and distractors in place based upon rhythmic grouping and question type, visual stimuli were added. An additional distractor image was added to each grouping type. Pair correct and music new required an image from the training sequence. For picture new and new pair, twelve visual distractors were needed. From the pool of visual distractors containing three of each of the four visual groupings, a random order was assigned. Each visual distractor was then assigned to a question in the random order. At times, this resulted in a distractor pairing up with a trainer from the same group; other times, the distractor was not from the same group. No attempt to manipulate or distribute this aspect was made. Based on the pilot testing, few participants had trouble discriminating among the trainer and distractor pictures. No observable interaction was noted among the visual stimuli; therefore the true random assignment of visual images was maintained for the main experiment.

Main Experiment

Participants

Male and female students majoring in music and other academic majors were recruited from the College of Music at Florida State University. Music majors were recruited from core courses for the music, music education, music therapy, and music performance degree tracks. Non-music majors were recruited from the non-auditioning, free enrollment performing ensembles (campus band, women's glee club, men's glee club, and gospel choir), humanities elective courses offered in the College of Music (modern popular music, music of the 19th century, and world music cultures), and beginning guitar classes. Participants were invited to bring friends to participate, resulting in approximately 5 total participants who were not directly recruited by the researcher. Some students received course credit for participation, some were paid ($8), and others volunteered without receiving credit or money. Participants signed up for a single 25-minute appointment at his/her convenience between 8:30 a.m. and 5:00 p.m. Each participant was contacted by email the evening before the

scheduled appointment, reminded of his/her time, and given directions to the research lab. Participants who failed to attend the scheduled appointment were contacted again. Some were rescheduled; others did not respond. Up to four participants could complete the study simultaneously. Experiments were conducted over the course of 19 days (3/16 - 3/17, 3/20 – 3/24, 3/27 – 3/29, 4/3 – 4/6, 4/10 – 4/14). Two hundred and one total participants completed the study. One female music major was dropped due to incomplete data collection on one posttest. Two participants, 1 female music major and 1 male non-music major, failed to meet the criterion of 3 of 5 on the posttest familiarity scale for Pomp and Circumstance. No participant failed the criterion for unfamiliar music, marking 3 or less on a scale of 5 (most familiar). Two men in the non-music majors group were initially miscoded and were actually music majors; they were dropped. The remaining 4 participants who were dropped (1 female music major and 3 male music majors) represented extra data in their cells; each was chosen at random from his or her respective cell. Data from the remaining one hundred and ninety-two (N=192) participants were complete, met the familiar music criterion, and represented the sample for the study. There were forty-eight (n=48) female music majors, forty-eight (n=48) male music majors, forty-eight (n=48) female non-music majors, and forty-eight (n=48) male non-music majors (n=96 music majors, n=96 non-music majors). Table 5 lists the academic majors of the sample. Thirty-five participants were graduate students (n=19 females, n=16 males) and 157 (n=77 females, n=80 males) were undergraduate students. There were no significant differences in the distribution of undergraduate and graduate students by gender, χ2 (1, 192) = 0.14, p > .70. However, 32 of 35 graduate students were music majors. As a result, there were significant differences in the mean ages in the gender by academic major grouping, F (3, 188) = 12.15, p < .001. Tukey HSD was used as a post hoc analysis for the means. The mean ages of females and males within the same academic major groups were not significantly different. There were significant differences in the mean ages between music majors, females M = 22.33 (3.60) and males M = 22.75 (5.06), and non-music majors, females M = 19.54 (2.28) and males M = 19.71 (1.24). The mean age of all females was 20.94 years (SD = 3.31), and the mean age of all males was 21.23 years (SD = 3.97). There were no significant differences in the mean ages of men and women across all combined majors, t (190) = 0.55, p > .59.

Table 5
Academic Majors of the Participants

Accounting (1)                  Engineering (3)                  Music education (55)
Advertising (1)                 English (5)                      Music theory/composition (4)
Art (1)                         Exercise physiology (2)          Music therapy (18)
Biological Science (8)          Graphic design (1)               Music performance (9)
Business (7)                    History (2)                      (2)
Chemistry (1)                   Hospitality (1)                  Nursing (2)
Child development (1)           Humanities (2)                   Political science (3)
Civil engineering (3)           Information technology (3)       Psychology (6)
Communication disorder (2)      International affairs (1)        Religion (1)
Communications (5)              Mass media production (2)        Residential science (1)
Computer science (4)            Mathematics (1)                  Social work (4)
Creative writing (3)            Mechanical engineering (1)       Sociology (1)
Criminology (4)                 Merchandising (1)                Sports journalism (1)
Dietetics (2)                   Music – Bachelor of Arts (7)     Sports management (1)
Education (4)                   Music business (1)               Undeclared (4)

Note. Number in parenthesis indicates number of participants with same academic major.

Non-music majors indicated their music training history by marking a box for no music training, fewer than 3 years of training, or more than 3 years of training. Those with any music training were asked to describe their music study. Sixty-eight of the 96 indicated that they had more than 3 years of training (n=38 females, n=30 males), thirteen indicated that they had some training but less than 3 years (n=6 females, n=7 males), and 15 indicated that they had no music training (n=4 females, n=11 males). There were no significant differences in the distribution of music training level among men and women who were not music majors, χ2 (2, 96) = 4.28, p > .12. Of the thirteen participants who had some music training but fewer than 3 years, 7 had sung in choirs, 4 had played the piano, 1 had played the bass guitar, and 1 played a wind instrument in band. Of the 68 participants who had more than 3 years of music study, 31 participated in band (wind or percussion instruments) from 4 to 11 years, 23 in choirs from 4 to 11 years, 12 played

piano from 4 to 16 years, 1 guitar for 7 years, and 1 in orchestra for 5 years. Overall, this was a highly musically skilled group, likely a result of the location where recruiting took place and the fact that students with music training volunteered because they were interested in the line of research. Each participant was asked to indicate the amount of time s/he spent listening to and performing music during a typical week on a 4-point Likert-type scale. The same scale was used for both listening and performing, such that 1 = 0 to 10 hours, 2 = 11 to 20 hours, 3 = 21 to 30 hours, and 4 = more than 30 hours. There were no significant differences among the gender by academic major groups in the number of hours each estimated listening to music, F (3, 188) = 0.70, p > .55. There were significant differences among the gender by academic major groups in the number of hours each estimated performing music, F (3, 188) = 19.49, p < .001. The estimated hours spent performing music by females and males within the same academic major groups were not significantly different. There were significant differences in the hours spent performing music between music majors and non-music majors. See Table 6 for the means.

Table 6
Mean Estimated Hours of Music Heard and Performed by Participant Groups

                Music Female    Music Male      Non-music Female    Non-music Male
Listening*      2.58 (1.13)     2.85 (1.05)     2.63 (1.02)         2.58 (1.09)
Performing**    2.02 (0.89)     2.02 (0.93)     1.21 (0.41)         1.25 (0.48)

Note. Standard deviations in parentheses; Scale was 1 = 0 to 10 hours, 2 = 11 to 20 hours, 3 = 21 to 30 hours, 4 = more than 30 hours; * NS; ** p < .01; Means with same underscore are not significantly different.
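The participant comparisons reported in this section (a chi-square test of the graduate/undergraduate distribution by gender, and one-way ANOVAs across the gender-by-major groups with Tukey HSD follow-up) can be illustrated with standard statistical libraries. In the Python sketch below, the contingency table uses the counts reported above, while the ages are simulated values for illustration only; the sketch is not a re-analysis of the study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Chi-square test of the graduate/undergraduate distribution by gender,
# using the counts reported in the text (19/77 females, 16/80 males).
table = np.array([[19, 77],
                  [16, 80]])
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2f}")

# One-way ANOVA on age across the four gender-by-major groups, followed by
# Tukey HSD. The ages below are simulated for illustration only.
rng = np.random.default_rng(0)
group_means = {"music female": 22.3, "music male": 22.8,
               "non-music female": 19.5, "non-music male": 19.7}
ages = {g: rng.normal(loc=m, scale=3.0, size=48) for g, m in group_means.items()}

f_stat, f_p = stats.f_oneway(*ages.values())
print(f"F(3, 188) = {f_stat:.2f}, p = {f_p:.3f}")

labels = np.repeat(list(ages.keys()), 48)
values = np.concatenate(list(ages.values()))
print(pairwise_tukeyhsd(values, labels))
```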

Design

The design of this study compared performance on a posttest, following two repetitions of a bimodal encoding task containing visual images and familiar or unfamiliar music, between men and women and between music majors and non-music majors. The 192 participants were divided

into groups with one of three instruction levels presented prior to each viewing of the training stimulus (see Table 7). Participants in the dual-task (DT) group did not receive instructions on how to focus his/her attention beyond "try to remember what music was playing during each picture," stated during the practice tests. Participants in the selective attention to music (SAM) group were verbally (visual and audio) instructed to focus 80% of his/her attention on the music and only 20% on the pictures. Participants in the selective attention to pictures (SAP) group were verbally (visual and audio) instructed to focus 80% of his/her attention on the pictures and only 20% on the music. The protocol for an 80-20 focus of attention has been established in the literature (Bonnel & Prinzmetal, 1998). These instructions were repeated on screen (visual only) prior to the second viewing of the training stimulus. The dual-task group saw a screen indicating it was the second viewing of the stimulus, with no instructions. There were two music conditions: familiar music, represented by Pomp and Circumstance, and unfamiliar music, represented by The Bailiff's Daughter. All participants were encouraged to answer questions as accurately as possible and to guess at answers if they were uncertain (Obrzut et al., 1986).

Table 7
Study Design - Participant Distribution by Gender (2), Major (2), and Instruction Type (3)

                                      Music     Music    Non-music    Non-music
                                      Female    Male     Female       Male         Totals
Dual Task (DT)                        16        16       16           16           64
Selective Attention Music (SAM)       16        16       16           16           64
Selective Attention Pictures (SAP)    16        16       16           16           64
Totals                                48        48       48           48           192

Note. Each group was presented with 2 repetitions of task comparing familiar and unfamiliar music. Order of familiar and unfamiliar music presentation was counterbalanced across participants.
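A compact way to see the structure of Table 7 is to enumerate the 12 gender-by-major-by-instruction cells and split each cell of 16 participants across the two presentation orders. The Python sketch below is illustrative only; the participant labels are hypothetical, and the actual assignment was managed day by day as described in the following paragraph.

```python
import itertools
import random

# Sketch of the target distribution in Table 7: 2 genders x 2 majors x
# 3 instruction types, 16 participants per cell, half of each cell in
# each presentation order. Labels are hypothetical placeholders.
genders      = ["female", "male"]
majors       = ["music major", "non-music major"]
instructions = ["DT", "SAM", "SAP"]

rng = random.Random(0)
tape_assignment = {}
for cell in itertools.product(genders, majors, instructions):
    ids = [f"{'-'.join(cell)} #{i + 1}" for i in range(16)]   # 16 per cell
    rng.shuffle(ids)
    tape_assignment[cell] = {
        "familiar first":   ids[:8],   # Pomp and Circumstance presented first
        "unfamiliar first": ids[8:],   # The Bailiff's Daughter presented first
    }

total = sum(len(v["familiar first"]) + len(v["unfamiliar first"])
            for v in tape_assignment.values())
print(total)  # 192
```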

Each participant completed both familiar and unfamiliar music presentations under a single selective attention condition (DT, SAM, or SAP). Half of the participants completed the familiar music condition first; half completed the unfamiliar music condition first. Given the two orders and three instruction types, six research videotapes were generated. The first subjects were randomly assigned to one of the six tapes (condition by order) and balanced for gender and academic major. The 24 cells were inspected on a daily basis along with the participant pool for the day. Needed tapes were noted and randomly assigned across the experimental runs for the day. Near the end of the study, participants were assigned to tapes in order to provide equal numbers in all cells.

Procedures

Room and equipment. Individuals or groups of up to 4 participants entered the well-lit research lab, which measured approximately 15 feet by 9.5 feet with an approximately 10-foot ceiling. The walls of the lab were irregular such that no two walls were parallel. The carpeted room featured two 30-inch wide floor-to-ceiling cloth-covered panels and curtains over two windows. The room provided an adequate sound environment. Physical materials included two 5-foot tables, 5 chairs, 30-inch by 10-inch foam board dividers, and pens. The audio-visual materials included a 26-inch Sharp television monitor, a Sony VCR, a Pioneer amplifier, and two 10 by 18 inch Paradigm Performance series stereo speakers. The top of the television monitor was 53 inches from the floor, and the center of the monitor was 31.5 inches from the front edge of the tables. Thus, including the width of the table, participants were between 5.5 and 6 feet away from the monitor. The base of each speaker was approximately 26 inches from the floor, and 21 inches separated each speaker from the television monitor. The speakers and monitor were in a straight line (Reisberg, Scheiber, & Potemken, 1981; Spence, Ranson, & Driver, 2000; Spence & Read, 2003; Turatto et al., 2003) such that audio and visual stimuli were emitted from the same direction. The amplifier was between the monitor and the speaker on the left side. The researcher was seated near the door behind the participants during each experiment. See Figure 15 for the floor plan of the research lab.

Routine for experiment. Once seated, each participant was given a packet of forms containing the informed consent letter, the pre-experiment questionnaire (demographic and music training information), a practice test form, a posttest for Pomp and Circumstance or The


Figure 15. Experimental laboratory set-up.

Bailiff's Daughter, a second posttest for the other music condition, and a post-experiment questionnaire. (The main experiment pre-experiment questionnaire, practice test, and posttest questionnaire are included in Appendices I – K.) Once each participant read and signed the informed consent form, s/he was instructed to complete the pre-experiment questionnaire. After all of the participants in an experimental session completed this form, they were instructed that all of the following directions were on the videotape. The video was begun at this point. The practice test procedure was identical for all participants. At the end of the second practice test, the participants were asked if there were any questions on how to use the answer sheet. If asked, the researcher paused the video and answered each question. Though few had questions, the most commonly asked question concerned the possible appearance of a mismatch test item, meaning that an image from the video would be paired with music from the video that had accompanied another image. The researcher confirmed that there were no mismatches; if both the picture and the music were on the video, they were paired in each case, and the correct response was pair correct. When all questions were answered, the video was begun again and no further communications between researcher and participant(s) were made until the end of the videotaped experiment. After the end of the experiment tape, the participants were instructed to complete the post-experiment questionnaire. Upon completion, each participant was given the opportunity to ask questions about the research, thanked for his/her participation, and escorted from the research lab.

Videotaped practice, training, and testing stimuli. Videos for the main experiment were created using the same programs and software as the pilot study videos. The first 56 slides on the video provided the introduction to the research, the explanation of testing procedures, and the two practice tests. Accompanying audio instruction tracks were recorded by the researcher and paired with the respective slides. The introduction and practice tests took 5 minutes and 16 seconds. See Appendices L and M for the introductory and instruction slides and audio scripts. The script and image of the introductory, instruction, and practice test slides are provided in Appendix M. Beginning with slide 57, participants received audio instructions to turn to the answer sheet for either The Bailiff's Daughter or Pomp and Circumstance, depending upon the assigned order. The corresponding slide and audio instructions informed participants of the video's title, its contents of eight pictures and music, and that a 24-question test would follow. For the dual-task participants, no further verbal instructions were given. The 'starter' slide containing the title of the

The 'starter' slide containing the title of the video and the printed instructions to "listen and watch, first time" appeared with no audio. This slide lasted 10 seconds. The next slide began the first exposure to the training stimulus. For the selective attention (SAM and SAP) groups, the 'starter' slide contained the additional text instructing participants to focus 80% of their attention on the music (or pictures) and only 20% on the pictures (or music). An audio track reiterating these instructions verbatim, with the additional statement "concentrate harder on the music/pictures," accompanied the slide. The selective attention instruction exposure was 10 seconds. The next slide began the first exposure to the training stimulus. (The Bailiff's Daughter training stimuli are in Appendix P and the Pomp and Circumstance training stimuli are in Appendix Q.) After this first exposure of the training video, a 5-second blank screen appeared between iterations. For those in the dual task condition, the next screen was a second 'starter' screen notifying participants of the title, the printed instructions to listen and watch, and the second-time notification. No audio track accompanied this 5-second slide. For those in the selective attention groups, a second 'starter' slide with the same 80-20 instructions appeared for 5 seconds without an accompanying audio track. For all groups, the next slide began the training stimulus again. A blank white screen was exposed for 5 seconds between the trainer and the 3-second test (no audio).

Test items. Each test item was presented with a number slide accompanied by the warning sound exposed for 2 seconds. The test stimulus, a single image that was either a trainer from the video or a distractor and a measure of music (four beats at M. M. = 68) that was either a trainer or a distractor, was then presented. Each test item was exposed for between 3.16 and 3.24 seconds. The variations in time were a result of the decay pattern of the final note in the test item music. When the final note was an eighth note or a quarter note, the decay pattern lengthened the time of the stimulus. The image exposure was synchronized with the music. A 5-second blank white screen was presented between test items without an audio track. The next test item was presented with the number slide and warning audio. A blank screen appeared without audio between the end of the first test and the beginning of the second test. From this point, procedures were identical to the first experimental testing with the opposite stimulus. The posttest for The Bailiff's Daughter is located in Appendix N; the posttest for Pomp and Circumstance is located in Appendix O.

CHAPTER 4

RESULTS

Familiarity - Ratings and Total Correct Scores

Analysis of variance techniques were used throughout with an alpha level of .05. A 5-way analysis of variance with between-subjects variables of gender, major, instruction type, and order and one within-subjects variable of familiarity rating for Pomp and Circumstance and The Bailiff's Daughter was conducted. There were significant differences in the familiarity ratings, F (1, 168) = 4367.01, p < .001, partial η2 = .96, and a significant interaction between familiarity ratings and major, F (1, 168) = 38.82, p < .001, partial η2 = .19. The familiar music selection was rated as highly familiar, M = 4.68 out of 5 (SD = 0.55), and most participants indicated that they had never heard the unfamiliar selection, M = 1.31 out of 5 (SD = 0.60). However, the main effect of familiarity was mediated by an interaction with major. Music majors rated Pomp and Circumstance more familiar than did non-music majors, who in turn rated The Bailiff's Daughter more familiar than did music majors (Figure 16).

Figure 16. Familiarity ratings by major interaction (familiar selection: music majors 4.84, non-music majors 4.52; unfamiliar selection: music majors 1.16, non-music majors 1.47).
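The partial eta squared values reported in this chapter follow directly from each F ratio and its degrees of freedom. As an illustrative check (a worked example, not part of the original analysis), the two effects above can be recovered as

$$\eta_p^2 = \frac{F \cdot df_{\text{effect}}}{F \cdot df_{\text{effect}} + df_{\text{error}}}, \qquad \frac{4367.01 \times 1}{4367.01 \times 1 + 168} \approx .96, \qquad \frac{38.82 \times 1}{38.82 \times 1 + 168} \approx .19.$$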

Recall of bimodal stimuli was compared using total correct scores. A 5-way analysis of variance with between-subjects variables of gender, major, instruction type, and order and a within-subjects variable of familiarity was conducted. There were significant differences between familiar and unfamiliar music types, F (1, 168) = 230.91, p < .001, partial η2 = .58. The mean total correct score for familiar music, M = 20.19 (SD = 2.62), was higher than that for unfamiliar music, M = 16.64 (SD = 3.06). There were significant differences between the total scores of music majors and non-music majors, F (1, 168) = 9.20, p < .003, partial η2 = .05. Music majors answered more questions correctly than non-music majors (Music majors - M = 18.92, SD = 3.33; Non-music majors - M = 17.91, SD = 3.31). There were no significant main effects for gender, instruction, or order on total correct scores. The main effect of major was mediated by a significant major by instruction type interaction, F (2, 168) = 5.06, p < .007, partial η2 = .05 (Figure 17). Music majors' posttest scores were higher than non-music majors following the dual task condition (difference of 2.16 points) and the selective attention to music condition (difference of 1.3 points). Non-music majors' scores were higher than music majors following selective attention to pictures, though the difference in means was less than half of one point (0.41). The difference between the highest and lowest mean scores for music majors was 0.74 points while the difference between the highest and lowest mean scores for non-music majors was 1.83 points. The attention condition had a stronger effect upon non-music majors.

Figure 17. Total correct scores - interaction between major and instruction (music majors: dual task 19.19, selective attention to music 19.13, selective attention to pictures 18.45; non-music majors: 17.03, 17.83, and 18.86, respectively).
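For readers who wish to reproduce this kind of mixed-design comparison, the sketch below shows one way the familiarity (within-subjects) by major (between-subjects) analysis could be run in Python with the pingouin library. It is a simplified illustration only: pingouin's mixed_anova accepts a single between-subjects factor, so the full gender by major by instruction by order design reported above is collapsed to major, and the file and column names are hypothetical.

```python
# Minimal sketch (not the original analysis): mixed ANOVA on total correct scores.
# Assumes a long-format DataFrame with one row per participant per music condition,
# using hypothetical columns: 'participant', 'major', 'familiarity', 'total_correct'.
import pandas as pd
import pingouin as pg

df = pd.read_csv("posttest_scores_long.csv")  # hypothetical data file

aov = pg.mixed_anova(
    data=df,
    dv="total_correct",    # dependent variable: total correct out of 24
    within="familiarity",  # familiar (Pomp and Circumstance) vs. unfamiliar (Bailiff's Daughter)
    subject="participant",
    between="major",       # music major vs. non-music major
)
print(aov.round(3))        # output includes F, p, and partial eta squared (np2) per effect
```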

Perception of Attention Allocation

The differences in total scores as a result of instruction provided evidence that participants did allocate attention differently across instruction types. Separate 4-way analyses of variance with between-subjects variables of gender, major, and instruction type and a within-subjects variable of estimated attention allocation to music and pictures on the post-experiment questionnaire were conducted for The Bailiff's Daughter and Pomp and Circumstance. There were significant differences in the perception of the percentage of attention allocated to music and pictures during The Bailiff's Daughter, F (1, 180) = 10.25, p < .002, partial η2 = .05. Participants estimated that 54% of their attention was directed to the music and 46% was directed toward the pictures. There was a significant interaction between attention allocation and instruction type, F (2, 180) = 47.36, p < .001, η2 = .35. Participants in the selective attention conditions approached the 80%/20% instructions while participants in the dual task condition approached a 50%/50% division of attention (Table 8).

Table 8
Perception of Attention Allocation to Music and Pictures for Familiar and Unfamiliar Music

The Bailiff's Daughter            Music    Pictures    SD
Dual Task                         53.4%    46.6%       17.4%
Selective Attention - Music       69.2%    30.8%       15.0%
Selective Attention - Pictures    39.4%    60.6%       19.3%

Pomp and Circumstance             Music    Pictures    SD
Dual Task                         35.8%    64.2%       19.7%
Selective Attention - Music       52.5%    47.5%       21.6%
Selective Attention - Pictures    36.2%    63.8%       24.7%

There were also significant differences in the perception of attention allocated to music and pictures during Pomp and Circumstance, F (1, 180) = 27.88, p < .001, partial η2 = .13. Participants estimated that 42% of their attention was directed to music and 58% was directed toward the pictures. There was a significant interaction between attention allocation and instruction type, F (2, 180) = 11.69, p < .001, η2 = .12. The division of attention for dual task participants was highly similar to the selective attention to pictures condition (Table 8). Selective attention to pictures had a similar division of attention during both music types. There were differences in the division of attention following dual task and selective attention instructions for the music conditions.

Analyses of Modality of Error Scores

Two error scores, one for pictures and one for music, were calculated by adding the total number of errors by modality for the 24 questions. A 4-way analysis of variance with between-subjects factors of gender, major, and instruction type and a within-subjects variable of picture errors for familiar and unfamiliar music was conducted. There were significant differences in the number of picture errors for familiar and unfamiliar music, F (1, 180) = 12.02, p < .001, η2 = .06. A mean of 1.14 (SD = 1.55) picture errors was made during Pomp and Circumstance and a mean of 1.60 (SD = 1.84) picture errors was made during The Bailiff's Daughter. There was a significant interaction between major and instruction, F (2, 180) = 4.41, p < .02, η2 = .05. Figure 18 displays the interaction. The mean picture errors were essentially the same for music and non-music majors during the selective attention to pictures condition. Music majors made 0.67 fewer picture errors during the dual task condition than non-music majors, but made 0.8 more picture errors during the selective attention to pictures condition than non-music majors.

A 4-way analysis of variance with between-subjects factors of gender, major, and instruction type and a within-subjects variable of music errors for familiar and unfamiliar music was conducted. There were significant differences in the number of music errors for familiar and unfamiliar music, F (1, 180) = 318.77, p < .001, η2 = .64. A mean of 2.92 (SD = 2.15) music errors was made during Pomp and Circumstance and a mean of 6.39 (SD = 2.68) music errors was made during The Bailiff's Daughter. There was a significant difference between the music error scores of music and non-music majors, F (1, 180) = 19.17, p < .001, η2 = 0.10. Music majors (M = 4.04, SD = 2.90) made fewer music errors than did non-music majors (M = 5.27, SD = 2.95). There was a significant interaction between major and instruction, F (2, 180) = 3.32, p < .04, η2 = .04. Figure 19 displays the interaction. Music majors made fewer errors in each condition than non-music majors, though the difference in mean error scores varied among the conditions (difference of 1.97 during dual task, difference of 1.46 during selective attention to music, difference of 0.25 during selective attention to pictures). Music majors' best performance on music (lowest error rate) was during the selective attention to music condition while non-music majors' best performance (lowest error rate) was during the selective attention to pictures condition.

Figure 18. Picture errors - major by instruction interaction.

Figure 19. Music errors - major by instruction interaction (music majors: dual task 3.92, selective attention to music 3.67, selective attention to pictures 4.53; non-music majors: 5.89, 5.13, and 4.78, respectively).

96 Analysis of Question Type under Music Conditions Each of the four question types represented a distinct combination of music trainers and distractors paired with picture trainers and distractors. The score for pair correct measured the ability of participants to recognize music trainers with picture trainers, and picture new measured the recognition of training music paired with distractor pictures. Music new and new pair measured the abilities of participants to reject distractor music paired with training pictures or distractor pictures, respectively. Differences between question types among gender, major, and instruction type for familiar and unfamiliar music conditions were compared using multi-factor ANOVAs. Mauchly’s test for sphericity indicated that the null hypothesis of sphericity was rejected for both music conditions; therefore, the Greenhouse-Geisser adjustments were used. There were significant differences in the mean correct among the four question types for The Bailiff’s Daughter, F (2.60, 467.36) = 40.53, p < .001, η2 = .18. Bonferroni pairwise comparisons revealed that the means for picture new and pair correct were significantly different than music new and new pair and were significantly different from one another. Table 9 presents the means. It was easier for participants to recognize music trainers than to reject music distractors regardless of whether the picture was a trainer or distractor. There was a significant interaction between question type and gender for The Bailiff’s Daughter, F (2.60, 467.36) = 4.36, p < .04, partial η2 = .02 (Figure 20). Females scored higher by a third of a point when identifying music trainers paired with a picture trainer (pair correct), and males scored higher by the same margin when rejecting distractor music paired with a trainer picture. Females’ scores were marginally higher on picture new and males’ scores were marginally higher on new pair, though neither difference was over a sixteenth of a point.

Table 9
Mean Correct for Each Question Type for The Bailiff's Daughter

Music New      New Pair       Pair Correct   Picture New
3.68 (1.27)    3.89 (1.31)    4.21 (1.22)    4.84 (1.12)

Note. Underscored means are not significantly different. Standard deviations are in parentheses.
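The Greenhouse-Geisser procedure referenced above corrects for the sphericity violation by rescaling both degrees of freedom of the within-subjects test by the estimate of epsilon. With four question types the unadjusted numerator df is 3, so the reported numerator df of 2.60 implies (an illustration, not a value reported in the original analysis)

$$\hat{\varepsilon} \approx \frac{2.60}{3} \approx .87, \qquad df_{\text{adjusted}} = \hat{\varepsilon} \cdot df_{\text{effect}}, \quad \hat{\varepsilon} \cdot df_{\text{error}}.$$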

Figure 20. The Bailiff's Daughter - question type by gender interaction (females: pair correct 4.37, picture new 4.90, music new 3.51, new pair 3.81; males: 4.06, 4.79, 3.84, and 3.97, respectively).

There were significant differences in the scores by question type for Pomp and Circumstance, F (2.62, 472.20) = 10.53, p < .001, partial η2 = .06. Paired comparisons revealed that the score for new pair was significantly different than the other three question types indicating that it was easier to reject distractor music paired with a distractor picture than any other combination. Table 10 represents the significant means. The same significant gender by question type interaction found for unfamiliar music was found for familiar music, F (2.62, 472.20) = 3.63, p < .02, partial η2 = .02 (Figure 21). Females scored higher on pair correct by 0.34 of a point and scored marginally higher on picture new and music new. Males scored higher on new pair by 0.25 of a point. Again, females were better at recognizing trainer music paired with trainer pictures than were males who were better at rejecting distractor music paired with distractor pictures. The question type by major interaction was also significant, F (2.62, 472.20) = 4.08, p < .01, partial η2 = .03 (Figure 22). Music majors scored higher by two-thirds of a point when rejecting distractor music paired with a trainer picture than non-music majors. While the differences in means between music and non-music majors for picture new and new pair was

marginal, music majors scored higher when identifying trainer music paired with a trainer picture by a quarter of a point.

Table 10
Mean Correct for Each Question Type for Pomp and Circumstance

Pair Correct   Music New      Picture New    New Pair
4.09 (0.97)    4.89 (1.17)    5.04 (0.93)    5.33 (1.0)

Note. Underscored means are not significantly different. Standard deviations are in parentheses.

Figure 21. Pomp and Circumstance - question type by gender interaction.

Figure 22. Pomp and Circumstance - question type by major interaction.

Memory Strategies

Each participant was given the opportunity to indicate if s/he used a particular strategy during the experiment and, if so, to describe the strategy or strategies in his/her own words. These strategies were coded by the researcher and resulted in the six categories represented in Table 11. Fifty-nine individuals (30.7%) reported that they had used no particular strategy, while 133 (69.3%) listed one or more strategies. Some participants indicated they had used a different focus of attention during the familiar and unfamiliar music conditions, while others indicated the strategy applied to the entire experiment. The most frequent strategy (tied with no strategy) was a picture-only strategy; the most common picture strategy was assigning a verbal label to each image. More music-only strategies were cited for The Bailiff's Daughter. For the remaining three categories, ordered attention by trial, integrated, and separate strategies, participants identified strategies for both music and pictures. During ordered attention by trial, participants indicated that their attention during each training trial was focused on a single modality. Strategies involving projecting the contour or rhythm of the music onto each image were categorized as integrated. The use of two separate strategies, one for pictures and one for music with no indication of integration, was coded as separate.

Table 11
Frequency Distribution of Strategies Used by Participants

Strategy Type              Subtotal   Details                                   Subtotals
Picture Strategy Only*     59         Verbal labels                             29
                                      *Picture focus on Pomp and Circ.          15
                                      Picture only focus                         6
                                      Recall serial order                        3
                                      Group like pictures                        2
                                      Identify salient feature                   2
                                      Count aspects of image                     1
                                      General feel for picture                   1
Music Strategy Only*       34         *Music focus on Bailiff's Daughter        15
                                      Audiate                                    4
                                      Focus on individual measures               3
                                      Music focus only                           3
                                      Solfege                                    2
                                      Focus on beat/rhythm                       2
                                      Contour                                    1
                                      Key                                        1
                                      Patterns                                   1
                                      Phrases and cadences                       1
                                      Visualize music notation                   1
Ordered attention          22         1st time music - 2nd time pictures        12
                                      1st time pictures - 2nd time music         7
                                      Separated by training trial                2
                                      Alternated                                 1
Integrated                 21         Contour of melody and shape of image      21
Separate                   12         Verbal label + contour of music            4
                                      Verbal label + beats/rhythm                2
                                      Verbal label + audiate                     1
                                      Verbal label + intervals                   1
                                      Verbal labels + 'feel of music'            1
                                      Ordered pictures + music strategy          1
                                      Salient shape features + rhythm            1
                                      Trace shape w/eyes + tap foot to beat      1
No strategy identified     59                                                   59

Totals                     *192                                                 192

*Contains 15 common to both sets - Picture focus for Pomp and Circumstance and Music focus for The Bailiff's Daughter.

101 The mean total score by strategy for both music conditions is listed in Table 12. There were no significant differences in the mean correct total score by strategy for The Bailiff’s Daughter, F (5, 128) = .72, p > .61; there were no significant differences in the mean correct total score by strategy for Pomp and Circumstance, F (5, 128) = 2.22, p > .055. For both music types, participants who indicated they did not use a strategy produced the lowest mean score and participants who used separate strategies produced the highest mean score. The distribution of strategies by instruction type revealed that the focus of attention directions influenced strategy choices (Table 13). Selective attention to pictures generated the most picture-only strategies. For music-only strategies, the group with the highest frequency was selective attention to music. No strategy identified was equally distributed among the three instruction levels.

Table 12
Mean Total Score for Familiar and Unfamiliar Music By Strategy Type

Strategy                 Number (n), Pomp   Mean total score, Pomp   Number (n), Bailiff   Mean total score, Bailiff
None identified          59                 19.56                    59                    16.32
Integrated               21                 19.57                    21                    16.91
Ordered attention        22                 19.86                    22                    16.41
Picture strategy only    59                 20.71                    44                    16.98
Music strategy only      19                 20.84                    34                    16.32
Separate strategies      12                 21.33                    12                    17.83
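A minimal sketch of the kind of one-way comparison of total scores by strategy summarized above is shown below, assuming a long-format file with hypothetical column names; it illustrates the form of the test rather than reproducing the original computation.

```python
# Minimal sketch (not the original analysis): one-way ANOVA of total correct score
# by reported strategy type for a single music condition.
import pandas as pd
from scipy import stats

df = pd.read_csv("strategies_pomp.csv")  # hypothetical file: columns 'strategy', 'total_correct'

# Collect the total-correct scores for each strategy group and compare the group means.
groups = [g["total_correct"].to_numpy() for _, g in df.groupby("strategy")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```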

Table 13
Frequency Distribution of Strategies by Instruction Type for Music Conditions

Pomp and Circumstance             None   Picture Only   Music Only   Ordered Attention   Integrated   Separate
Dual Task                         19     13             9            10                  8            6
Selective Attention - Music       21     7              17           9                   8            1
Selective Attention - Pictures    19     24             8            3                   5            5

The Bailiff's Daughter            None   Picture Only   Ordered Attention   Integrated   Music Only   Separate
Dual Task                         19     18             19                  8            4            6
Selective Attention - Music       21     13             10                  8            11           1
Selective Attention - Pictures    19     28             9                   5            4            5

Analyses for Memory Decay

Memory for the training stimuli may have decayed over the course of the test. A total correct score by test half (questions 1-12 and 13-24) was calculated for each music condition. A 4-way analysis of variance was conducted with between-subjects factors of gender, major, and instruction type and a within-subjects variable of test half for The Bailiff's Daughter. The main effect of test half was not significant, F (1, 180) = .16, p > .70, nor were any interactions significant, Fs < 2.30, ps > .131. There was no evidence that memory decay affected the latter portion of the test.

A 5-way analysis of variance was conducted with between-subjects factors of gender, major, instruction type, and order and a within-subjects variable of test half for Pomp and Circumstance. There were significant differences between scores by test half for Pomp and Circumstance, F (1, 168) = 87.86, p < .001, partial η2 = .34. The mean for the second half of the test (M = 9.58, SD = 1.81) was lower than the first half (M = 10.64, SD = 1.29). The significant main effect was mediated by a significant interaction between test half and order, F (1, 168) = 4.40, p < .04, partial η2 = .03 (Figure 23). While decay of memory over time cannot be ruled out, the differences were influenced by error rates on questions 13, 21, and 22. Fewer than 68% of the participants answered these three questions correctly. Only one question (question 7) appearing in the first half of the test fell below this level. Perhaps participants who completed the test for Pomp and Circumstance second were more discriminating during the training and/or test. Nonetheless, the interaction accounted for less than 3% of the variance and the difference between means by order was 0.34 or less.

Figure 23. Pomp and Circumstance - test half by order interaction.

Analyses for Serial Position Effects

Though the participants were informed that they would not have to remember the order of the stimuli, music is inherently ordered in time. The effects of serial position for the picture and music (measures) trainers were analyzed. The test was constructed such that 4 of the 8 trainers for music and pictures were tested twice and the remaining four were tested once. In order to determine the effects of testing stimuli twice, the sample's total correct responses for each of the two tests were compared using chi-square analyses for each music condition. There were no significant differences in the frequency of correct responses between the first and second testing for pictures for either The Bailiff's Daughter or Pomp and Circumstance. A mean of the two trials for total correct response was taken for each twice-tested picture. The frequency of correct responses by serial position of each training picture was analyzed. There were no significant differences in the distribution for either The Bailiff's Daughter, χ2 (7, 1402) = 2.35, p > .94, or Pomp and Circumstance, χ2 (7, 1299) = 3.07, p > .88. Thus, there appeared to be no effect of serial position for pictures under these experimental conditions, as primacy-recency effects were not observed. Table 14 presents the frequency distribution by serial position of the trainers.

Table 14
Frequency Distribution of Total Correct Responses for Pictures by Serial Position

The Bailiff's Daughter
Picture 5   Picture 4   Picture 1   Picture 2   Picture 8   Picture 7   Picture 6   Picture 3
162         171         172         174         176         177         181.5       188
Rank 1      Rank 2      Rank 3      Rank 4      Rank 5      Rank 6      Rank 7      Rank 8

Pomp and Circumstance
Picture 3   Picture 7   Picture 1   Picture 2   Picture 4   Picture 6   Picture 5   Picture 8
168         169         177         180         181         188         188.5       192
Rank 1      Rank 2      Rank 3      Rank 4      Rank 5      Rank 6      Rank 7      Rank 8
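The serial-position test described above amounts to a chi-square goodness-of-fit test of the observed correct-response frequencies against a uniform distribution across the eight positions. The small sketch below, using the Bailiff's Daughter picture frequencies from Table 14, approximately reproduces the reported statistic; it is an illustrative reconstruction rather than the original analysis script.

```python
# Minimal sketch (not the original analysis): chi-square goodness-of-fit test of
# correct responses by serial position against a uniform expected distribution.
from scipy.stats import chisquare

# Observed total correct responses for the eight Bailiff's Daughter pictures (Table 14),
# listed in serial-position order 1-8 (picture 6 is the mean of its two testings).
observed = [172, 174, 188, 171, 162, 181.5, 177, 176]

stat, p = chisquare(observed)  # expected frequencies default to a uniform distribution
print(f"chi-square(7) = {stat:.2f}, p = {p:.3f}")  # should approximately reproduce 2.35, p > .94
```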

The frequencies of correct responses to music stimuli that were tested twice for The Bailiff’s Daughter were compared. All four items had a greater frequency of correct responses on the second testing; two of the four twice-tested measures of music were significantly different from the first to the second testing. This was expected since the pilot test revealed encoding of test items influencing results for re-test items. Two separate analyses were conducted – one on the first-time-tested scores and one on the second-time-tested. When the frequency of total correct responses of the sample for each measure of music was analyzed, there were significant differences in distribution of the first-time-tested scores, χ2 (7, 1192) = 34.62, p < .001. Paired comparisons using an adjusted alpha level of .01 for significance at .05 revealed that measures 8

and 6 were significantly different from each other and from measures 7, 1, 5, and 4. Measure 3 was significantly different than measure 4 (Table 15). A second analysis on the second-time-tested scores for the dually tested items also revealed significant differences in the distribution of scores, χ2 (7, 1298) = 25.19, p < .001. Measure 6 was significantly different than measures 1, 7, 5, 4, and 2, and measure 3 was significantly different than measures 4 and 2. Regardless of test time, measures 3, 6, and 8 fell in the lowest three ranks. Measures 3 and 8 did not benefit from a second testing (Table 15). Interestingly, measures 3 and 6 belong to the same rhythm grouping. Measure 8 did not demonstrate a recency effect; in fact, it was poorly remembered.

Table 15
The Bailiff's Daughter Frequency Distribution of Total Correct Responses for Music Measures

                  Rank 1      Rank 2      Rank 3      Rank 4      Rank 5      Rank 6      Rank 7      Rank 8
First testing     Measure 8   Measure 6   Measure 3   Measure 2   Measure 7   Measure 1   Measure 5   Measure 4
Total correct     111         115         133         147         159         171         174         182
Second testing    Measure 6   Measure 3   Measure 8   Measure 1   Measure 7   Measure 5   Measure 4   Measure 2
Total correct     115         135         167         171         172         174         182         182

Note. Bold measures were tested twice.

The frequencies of correct responses for music stimuli that were tested twice for Pomp and Circumstance were compared. Curiously, there was no consistent advantage during the second testing and no comparison of first versus second testing was significant, p > .65. The ordered frequency of music training stimuli for first-time-tested and second-time-tested was the same. A mean correct frequency for twice-tested measures was used in the analysis. There were significant differences in the distribution of frequency for total correct response, χ2 (7, 1299) = 55.70, p < .001. Measure 8 was significantly different than measures 2, 6, 7, 1, 5, and 3, and measure 4 was significantly different than measures 7, 1, 5, and 3 (Table 16).

Table 16
Pomp and Circumstance Frequency Distribution of Total Correct Responses for Music Measures

Rank 1      Rank 2      Rank 3      Rank 4      Rank 5      Rank 6      Rank 7      Rank 8
Measure 8   Measure 4   Measure 2   Measure 6   Measure 7   Measure 1   Measure 5   Measure 3
93          127         152.5       171.5       185.5       188.5       190         191

Note. Bold measures were tested twice.

Analyses of Posttest Questions The total correct score for each question of the two posttests was calculated in order to determine if the scores for each question produced a normal distribution. Questions receiving fewer than 130 (68%) correct responses were noted and the contents of the music in the test item examined. For Pomp and Circumstance, there were 3 questions (#7, 21, and 22) that failed this criterion and one additional question (#13) that marginally exceeded the criterion (139 correct). The only music distractor that failed criterion was question #22, answered correctly by 48% of the sample. This test item was distractor music based upon Rhythm IV paired with a training picture. Seventy participants indicated that the item was a correct pair. The contour of the test item was very similar to training measure 4 presented in question 7 (Figure 24).

Figure 24. Question #22 distractor music.

The other questions that failed criterion on Pomp and Circumstance were all based upon music trainers. Question #21 was training measure 8, which was correctly answered by 51% of the sample. Question #7 was training measure 4, which was correctly answered by 65% of the sample. These were both from the rhythmic group IV, thus 3 of the 4 questions that failed criterion were based on Rhythm IV. Question #13, measure 2 containing two half notes separated by a descending step, just exceeded the criterion with 139 correct responses. This was the second

time this measure was tested, and the total score was lower than the first testing, question #1 (144 correct responses). There were 9 questions (#s 1, 6, 7, 10, 11, 13, 16, 21, 22) that failed criterion and one that was borderline (#2) on the posttest for The Bailiff's Daughter. Of these 10 questions, six were distractor music (Table 17) and four were training music. Measures 3, 6, and 8 were not well remembered. Measure 8 (Rhythm IV) in question 16 was correctly identified as pair correct by only 55% of the respondents. Question #21 was measure 6, which 59% correctly answered. Measure 3 was tested in Question #2 (68%) with a trainer picture and Question #6 (66%) with a distractor picture. Both measures 3 and 6 were Rhythm III (four quarter notes). The lack of facilitation by the second testing of measure 3 would seem to indicate that the four quarter note rhythm strongly contributed to the item being poorly remembered.

Table 17
Bailiff's Daughter Distractor Music Items Failing Criterion

Question 1, Question 7, Question 13, Question 11, Question 10, and Question 22 (notated musical examples).

Among distractor questions failing criterion, Question #1 was the second poorest answered (40% correct). It was a distractor based on Rhythm III and provided further evidence that the four quarter note rhythm was poorly encoded. Distractors based on Rhythm IV contributed to two questions failing criterion, question #7 (65% correct) and question #13 (58% correct). Three distractor test items based upon Rhythm I failed criterion, question #11 (58% correct), question #10 (55% correct), and question #22 (16% correct). Only 30 participants accurately answered question #22. Question #22 was an (unintentional) exact transposition of measure 5, in addition to mimicking question 10. Question #11 mimicked the contour of measure 1, and question #10 mimicked the contour of measure 5. However, instead of the rhythmic grouping contributing to the failed criterion, similar contour seemed to create the confusion.

CHAPTER 5

DISCUSSION

Summary of Results Music majors were better able to recall bimodal audio-visual stimuli under the dual task and selective attention to music conditions than were non-music majors. Non-music majors’ best performance was under the selective attention to pictures condition. Music majors had fewer music errors than non-music majors while the two groups were comparable on picture errors. The recall for pictures was extremely high, but differed between the two music conditions. More picture errors were made during the unfamiliar music. For the familiar music condition, recalling paired training stimuli was the most difficult while rejecting distractor music paired with a distractor picture was the easiest. For unfamiliar music, rejecting distractor music was more difficult than recognizing training stimuli paired with either picture type. For both music types, music majors answered music new correctly with greater frequency than did non-music majors. Females were better at recognizing paired training stimuli for both music conditions than were males who were better at rejecting paired distractors. Participants’ identified strategies were influenced by the attention instructions given to them, though strategies did not have a significant effect on total scores. There was no evidence of serial position effects for pictures, but strong support for serial position effects in music. The last measure was recalled the poorest for both music conditions, and participants failed to reject distractors based upon Rhythm IV. Rhythm III in The Bailiff’s Daughter was poorly encoded as evidenced by failed trainers and distractors. Distractor music with contours similar to training music more easily lured participants than contrasting contours regardless of the familiarity level of the music. Music Training Effects There were significant differences found in recall between music majors and non-music majors, despite the majority of the non-music majors in this study having more than 3 years of music training. Only 15 of the 96 non-music majors indicated that they had no formal music

109 training. The amount of time music majors and non-music majors spent listening to music was not significantly different. The aspect of music training that was significantly different between the groups was the amount of time spent performing. Music majors spent significantly more time performing music than non-music majors. Based upon the Likert-type scale, non-music majors’ mean hours of performance per week was 1.23 (1 = 0 – 10 hours, 2 = 10 – 20 hours) while music majors’ mean hours of performance per week was 2.02 (3 = 20 – 30 hours). Since zero hours was not a choice, the true mean for non-music majors’ hours of performance is likely lower than this rating scale allowed. The active performance of music through playing an instrument or singing impacted the outcome of other memory and music studies (Hall & Blasko, 2005; Wallace, 1994). Researchers have been inconsistent in reporting active performance as an aspect of music training (See Table 1, review of literature), but the present research documents the role of music performance as a variable in memory for music. There is the possibility that some of the significant differences between music majors and non-music majors was the result of a Type I error (Madsen & Moore, 1978). The alpha level of .05 with an N of 192 may have found “significant differences” that are not reliable. However, recognition memory paradigms are less sensitive to differences than other memory measures such as recall and response time (Schmitt et al., 2000). Thus, the low alpha level may have actually avoided a Type II error given the lower sensitivity of the recognition design. Subsequent research should use a true no music training group of similar age to the current sample to clarify the effects of music training on the recognition of melodic fragments paired with images during highly familiar and unfamiliar music conditions. Furthermore, older professional musicians who perform regularly and an older non-performing group could provide additional illumination on the impact of active music performance on music memory. One difference between music majors and non-music majors was the ability to classify unknown music as such. The significant interaction between major and familiarity ratings indicated that music majors were more extreme in rating The Bailiff’s Daughter as unfamiliar. It remains possible that non-music majors have encountered the remote, 19th century folk tune, but that is highly unlikely. Perhaps recognizing music that was an exemplar of folk music influenced their ratings. Music majors also performed substantially better on the music new question type for both music conditions. Their ability to reject distractor music paired with a training picture was another example of the heightened ability to reject unknown music. Cusack and Carlyon

110 (2003) indicated that it was easier to recall a known memory artifact than to reject a distractor as unknown. Bella et al. (2003) confirmed this in music; musicians classified unfamiliar tunes as such in fewer notes than did nonmusicians. In Bella et al.’s study, 3 to 6 notes were needed to identify the titles of familiar tunes while 8 to 10 notes were needed to reject distractors. Distractors in Pomp and Circumstance were either 2 or 4 notes long and were 4 or 6 notes long in The Bailiff’s Daughter. The resources available to music majors in rejecting unfamiliar music were clearly different than those available to non-music majors. Recognition Versus Rejection of Stimuli in Working Memory Previous research in recognition memory shows that responses to paired training stimuli result in faster response times and greater accuracy than to trainers paired with distractors (Bonnel & Prinzmetal, 1998; Cusack & Carlyon, 2003; Morey & Cowan, 2004, 2005; Schon & Besson, 2002). This would seem to indicate that pair correct in the present study would be the most accurate question type. This was essentially true for The Bailiff’s Daughter (no differences between pair correct and picture new). Recognizing trainer music was easier than rejecting distractors. However, the opposite was true for Pomp and Circumstance. New pair was significantly different than the other three question types and pair correct had the lowest overall performance. Since the melody was highly familiar, participants were likely comparing each test item with a recalled representation. New music could be quickly compared to the remembered melody and easily rejected. However, searching for fragments of the recalled melody proved more difficult. One explanation could be that the representation of the melody in memory was more elaborate than the piano timbre unaccompanied stimulus (Kraemer, Macrae, Green, & Kelley, 2005). While searching for a known target facilitated identification in previous studies (Dowling, 1973a, 1973b), the differences between the representation and the test stimuli may have interfered with judgment making. The recognition versus rejection phenomenon was complicated by gender interactions. Females were better at pair correct while males were better at new pair question types for both music conditions. These differences between females and males were greater during familiar music. Gender differences were generally unanticipated in this study. The differences between pair correct and new pair were the only instance that gender was a factor during The Bailiff’s Daughter. Further research is needed to clarify gender differences during melodic recognition.

111 Coupled with the fact that the music was familiar, the presence of an interaction involving order and test half during Pomp and Circumstance may indicate that this music condition did not use working memory. Working memory studies generally do not find learning effects (Morey & Cowan, 2004, 2005; Turatto et al., 2003). The absence of order as an effect for The Bailiff’s Daughter supported unfamiliar music as using working memory for judgment making. This was not an associative memory task either. Since the pair correct question type always contained a correctly paired stimulus set, recall of exact associations was not required. Naveh-Benjamin et al. (2003) found item memory preceded association memory, which was more susceptible to distraction and interference. It appeared that participants used parallel processing for music and pictures (Allport et al., 1972; Bonnel et al., 2001; Treisman & Davies, 1973). Essentially, for each test item participants made judgments about music and pictures separately. The data for The Bailiff’s Daughter on question type support this. There was no difference between music new and new pair question types. The judgment about music was not impacted by the presence of a trainer or distractor picture. The addition of a mismatch test item (incorrectly paired music and picture trainer) could test association memory under this bimodal recognition paradigm. The Role of Strategy Participants who indicated they had used two separate strategies for pictures and music provided additional evidence that two separate judgments were made. The mean total correct scores were highest for participants using separate strategies and lowest for those identifying no strategy was used. While the difference in means by strategy was not significant for either music condition, the difference approached conventional significance for Pomp and Circumstance (p > .055). Since familiar music used long-term memory more than working memory, strategy had more of an effect than it did on unfamiliar music that relied on the working memory system. Despite its positive effect on outcome, few participants (n=12) indicated using separate strategies for music and picture. More indicated that they had attempted to integrate aspects of the picture and either the contour or rhythm of the music. This would seem to indicate that these individuals approached the task as associative memory rather than as separate streams for audio and visual stimuli. However, their scores fell in between no strategy and separate strategies. The conception of musical contour as visual was not new (Davies & Yelland, 1977; Madsen & Madsen, 2002; Vickers & Alty, 2002; Williams, 1982). Keller, Cowan, and Sauls (1995) found participants who

112 created an auditory image of the contour of pitch sequences (not music) were more successful at target recognition than were participants who visualized the contour; there were significant differences by strategy in Keller et al. (1995). Perhaps the attempt to create a conglomerate visual representation of the actual picture stimulus with the musical contour imposed upon it created more interference than benefit. Creating verbal labels for the pictures was a common strategy cited by those participants who indicated only using picture strategies. Seventy-five percent of those who used separate strategies indicated they had labeled the pictures in addition to their music strategy. Robinson and Sloutsky (2004) eliminated the auditory bias in children when abstract visual shapes were replaced by simple shapes that could be labeled (e.g., red cross, green square). Music does not render itself easily to verbal labeling (Cassidy, 1992), nor do graphic maps provide considerable assistance to the musically untrained (Cassidy, 2001; Gromko & Russell, 2002). The effectiveness of verbal labels for the pictures could explain a portion of the ceiling effect for picture recall during both music conditions. Generally, music strategies were poorly articulated (stated ‘focus on the music’), but several participants indicated using audiation (silent rehearsal of melody). Pembrook (1986, 1987) documented the ineffectiveness of singing a newly presented melody prior to melodic dictation. The recall of the melody may have been poor enough, particularly during The Bailiff’s Daughter, that singing the melody silently ‘in one’s head’ interfered with recall. A few participants indicated using solfege, visualizing musical notation, and attention to key or cadence points. The difficulty in verbally labeling music may apply to the poor description of music strategies. The undetailed music strategies may not be indicative of the absence of their use, but of the participants’ inability to describe what strategies they were using. This same thing could be true of those who chose ‘no strategy used’. They may have used a strategy, though they were unable to articulate what it was, or did not recognize their methodology in approaching this task as a strategy. Rhythm, Attention, and Information Processing One common memory strategy, chunking (Miller, 1956), was built into the trainers. Four rhythmic groupings were chosen for the music and four picture categories were chosen for each stimulus. Grouping the pictures into sets likely contributed the ceiling effects in much the same way that verbal labeling did. The effect of individual rhythm groups on recall proved to be a

113 much more complex matter. Rhythm IV was confounded with serial position of trainers for both pieces, and Rhythm I for The Bailiff’s Daughter was confounded with contour-related distractor issues. However, Rhythm II of The Bailiff’s Daughter and Rhythms I and III of Pomp and Circumstance were highly memorable. More than 68% of the participants recalled trainers using these rhythms. One commonality of these rhythms is complexity, particularly Rhythm II for The Bailiff’s Daughter. Infants preferred complex auditory stimuli (Richard et al., 2004; Ruff & Capozzoli, 2003), and participants in high-load conditions with complex auditory streams produced lower response times than low load conditions (Berti & Schroger, 2003). A second commonality was that short notes fell on beat 3 or 4, perhaps driving the rhythm forward. The implication appears to be that encoding was more efficient during more complex stimuli due to greater attentional effort. Rhythm III of The Bailiff’s Daughter was poorly remembered and had no confounding variables (contour or position). The four quarter note trainers in measures 3 and 6 were not well recalled. Question #1 based on this rhythm was highly deceptive; 43 music majors and 66 non- music majors failed to reject this music as unknown. Measure 3 only improved by 2 participants when tested a second time, clearly documenting the forgettable nature of this stimulus. Rhythm groups that contain combinations of short and long durations may provide a more distinct perceptual pattern than a rhythm composed of equal durations. With the actual musical stimuli used here, rhythm and contour were not experimentally manipulated. Thus, even with Rhythm III of The Bailiff’s Daughter, contour could be the reason it was unmemorable. Measure 3 contained a single ascending fourth interval change between repeated unisons, and measure 6 contained two descending steps and a descending minor third. These particular contours in conjunction with equal note durations may have provided too little information to be memorable. This conclusion may explain why measure 2 of Pomp and Circumstance with its two half notes separated by a single descending step was poorly recalled as well. Contour – Similar or Dissimilar to Target Melody The allure of distractor melodies with contours similar to target melodies has been documented (Dowling et al., 2002). Dowling et al. (2002) found that participants had significant difficulty rejecting lures with similar contour to the target melody than dissimilar contours. The false alarm rate to lures with similar contours did decrease over multiple trials. In the present study, Question #22 of The Bailiff’s Daughter had only 30 (16 music majors and 14 non-music

114 majors) of the 192 participants accurately identify it as a distractor. The contour was a direct transposition of measure #5 tested in Question #5 (missed by 48) and remarkably similar to the contour of Question #10 (missed by 86). It cannot be determined whether the contour interference was the result of the transposition, the encoding of test items, or a combination of these events. Question #11 was a similar contour to measure 1, which was not tested until Question #24. Thus, the inability to reject Question #11 as a distractor was not the result of test items encoding but the similar contour to the trainer. Two interval changes in the distractor (Question #11) have been expanded, one from a step to a minor third, one from a major third to a perfect fourth, but 81 participants (n=23 music majors, n=42 non-music majors) were drawn off by the contour. Forty-four percent of the non-music majors failed to reject the expanded intervals in the retained contour as compared to 24% of the music majors. Only one distractor question on Pomp and Circumstance failed criterion, which may be attributed to similar contour. Question #22 had a similar contour to measure 4 and lured 26 (27%) music majors and 55 (57%) non-music majors into identifying it as trainer music. Future study is needed to determine how rhythmic and melodic features are encoded and used in recognition judgments. Serial Position Effects – Expectancy Theory Music is time-ordered and by its very nature a serial task. Researchers have found participants recall first and last notes better than center notes (Williams, 1975, 1982). During this recognition task, the measure (8) in the final or recency position was the most poorly recalled for familiar music and among the poorest for unfamiliar music. This was unanticipated. However, both music examples came to predictable closures, both melodically and harmonically. Musical events that violate expectancy increase our attention (Fujioka et al., 2005; Radocy & Boyle, 2003; Williams, 1982). Since both endings were predictable, participants may have attended to these final measures less. Cowan (1995, 2005) has established the interconnected nature of attention and memory. Attention may have shifted from active listening to the music to rehearsal of previous stimuli (audiating, recalling picture ‘names’), thus encoding was substantially affected. Measure 4, the end of the 4-bar phrase, during Pomp and Circumstance was poorly recalled. Again, a sense of closure was established and attention may have shifted. The most memorable measures of Pomp and Circumstance were measure 5 and measure 3. Measure 3 immediately preceded the resolution and may represent a peak in attention.

115 Attention was once again summoned following the ‘break’. Measure 5 was the only measure in the excerpt that contained an accidental and violated expectancy of key. Attention during this highly familiar tune appeared to fluctuate in a specific manner relative to expectancy. In contrast to Pomp and Circumstance, where measure 4 appeared to be a point of lowered attention, measure 4 in The Bailiff’s Daughter was highly memorable (95% correctly recognized this it as a trainer). Since this was an unfamiliar tune, patterns of attention during the middle of the 8-bar excerpt were different than during the familiar tune. However, measure 8 was among the poorest recalled again. Attention and encoding for the final measure appeared to be similar as the piece came to a predictable close. Much of the music research in serial order has focused on recall through notating music or singing melodies (Pembrook, 1986, 1987). After hearing a short piano excerpt, Deliege et al. (1996) had participants re-order the randomized phrases. Musicians were better than nonmusicians at the task, but both groups struggled. Research using recognition paradigms for music may provide an opportunity to research serial position and expectancy in new ways. The last measure being so poorly recalled was novel since it contradicted the usual recency effects. Attention States Music majors outperformed non-music majors in the dual task (divided attention) and selective attention to music conditions by an overall mean of 2.16 and 1.3 points, respectively, though music majors’ performance was essentially the same under these conditions. Music majors performed the poorest in selective attention to pictures. Non-music majors performed worst under the dual task condition, better under selective attention to music, and best under selective attention to pictures instructions. Thus, when music majors focused their attention on music or divided their attention between pictures and music, their recall exceeded non-music majors. Focusing more on the pictures than the music may be a natural attention state for non- music majors, thereby facilitating their performance under this condition. In fact, non-music majors committed the fewest errors for both music and pictures during selective attention to pictures and the most picture and music errors during dual task. For music majors, recall was affected by actively suppressing attention to music during the selective attention to pictures condition. Music was not encoded to the same degree; in other words there was not automatic processing. Effortful attention to music improved recall for music majors. Music majors’ recall of pictures was also poorer during selective attention to pictures

116 than during the dual task (divided attention) condition. This provides further support that the natural attention state for music majors is one where the focus of attention to music is dominant. The mental effort of suppressing attention to music interfered with the encoding of pictures. Music majors had the fewest music errors during selective attention to music instructions, though picture errors increased, while non-music majors’ performance under this attention condition fell in between the dual task and selective attention to pictures performance. Curiously, there were no significant differences in the perception of attention allocation between the music and non-music majors despite the obvious differences in performance. Another reason non-music majors’ performance was facilitated during the selective attention to pictures may be related to their perception of familiarity. Non-music majors had more difficulty rejecting distractors than recognizing trainers. During selective attention to pictures, the memory trace for music would be fainter and their perception of knowing lower. This may have allowed them to reject distractors better under this attention condition. Focusing on music and naturally dividing attention between music and pictures may have given them a false sense of confidence. Similar to their ratings for The Bailiff’s Daughter being higher than music majors, non-music majors may have been lured by distractors more easily when they believed they ‘knew’ the music following the dual task (divided attention) and selective focus of attention to music conditions. Effortful attention to music was not beneficial to non-music majors, while active ignoring improved scores. Bigand et al. (2000) concluded that nonmusicians engaged in an error detection task reduced the attention load naturally by attending to the errors in only one of the two voices. They deemed this to be the case since nonmusicians recalled significantly fewer errors in the familiar nursery tunes in the lower voice than the upper voice. While selective attention instructions were not given, the group made the task accessible by attending to only one voice. Olivers and Nieuwenhuis (2005) had participants engage in a distraction task while observing rapidly presented visual stimuli. The distraction task (singing a familiar song subvocally or thinking about holidays) eliminated the attentional blink typically found in visual serial recognition studies. With the selective attention to pictures instructions, non-music majors may have reduced the load and the anxiety associated with the task. Positive affect may have improved recognition memory. Regardless of the cause, there were clear differences in performance between music

117 and non-music (though many were musically sophisticated) majors under divided and selective attention instructions. Implications for Music Education and Music Therapy Researchers should continue to investigate what makes melodies memorable. Educators would benefit from this information and be able to formulate strategies to help students increase success. Students may claim to recognize or ‘know’ music that is actually unfamiliar but adhere to known musical forms (e.g., Western art music, folk songs). Students may need more repetition before they can truly use the musical information for recognition judgments or recall the music through performance. Providing attention instructions that are similar to students’ natural attention states, at least for single exposures, would seem beneficial. For some students, this may include a focus tangential to the music itself. Active performance of music may provide students with greater ability to focus upon and recall music than merely listening to it. The unique audio- visual nature of reading visual notation while hearing one’s musical product may rehearse dual attention and parallel processing. Participation in ensembles requires shifts in attention from one’s own part, to other individual parts, and to the totality. These attention shifts may enhance one’s ability to control one’s attentional focus better than persons who are not performing music. This study provides new information on familiar and unfamiliar music, rhythmic complexity, and serial position that may impact the work of music therapists. Particularly, clients who are musically sophisticated may be able to use familiar music in different ways than musically untrained individuals. Furthermore, familiar music seems to provide a distinct opportunity for rehearsing specific memory and recall strategies. The trace for unfamiliar music may be too weak initially for strategies to impact memory performance. The amount of recall a participant has for unfamiliar music may be related to his/her working memory capacity, which can be quickly exceeded by unfamiliar music. Music that is rhythmically complex may be more memorable. The finding that the four quarter note rhythm was forgettable while rhythms with patterns of short and long durations were more easily recalled may be important. Attempts to simplify music in order to accommodate cognitive deficits may be doing more harm than good. Future studies with people who have attentional and cognitive deficits could illuminate the role of rhythmic complexity in memory. The ends of phrases and the ends of tunes that are predictable may lower attention and reduce encoding/memory. When testing memory or composing songs that contain information to be remembered, material at the end may not be well

Arranging the song so that to-be-remembered information appears before cadences would seem helpful. Attention to music is of interest to music educators and music therapists, particularly in today's sound-saturated world. It is encouraging that participants could focus directly on the music when told to do so, despite hours of experience using music as background during other tasks. Today's technology allows listeners to rewind and hear again aspects of music that they missed; teaching listeners how to attend to and remember music the first time through may be what is missing. Future study could address this issue, along with bimodal encoding in a recognition paradigm with musically untrained participants.

119 APPENDIX A

Main Experiment - Composition of Audio Distractors for The Bailiff’s Daughter and Pomp and Circumstance

120 Description of Distractor Audio Generation for The Bailiff’s Daughter

Original song has no leap greater than P5; range is F#4 to G5 (octave +1/2 step); each measure has range not greater than P5.

RHYTHM I: dotted quarter, eighth, quarter, two eighths - Measures 1 and 5

DISTRACTOR 1-1 – Measure 1: Chord is G, starting pitch is B, range is P5; starting pitch CHANGE TO C5
1.2 – descending stepwise interval CHANGE TO ASCENDING LEAP
1.3 – descending stepwise interval CHANGE TO ASCENDING STEP
1.4 – ascending P5 leap CHANGE TO DESCENDING P4

DISTRACTOR 1-2 – Measure 1: Chord is G, starting pitch is B, range is P5; starting pitch CHANGE TO A4
1.2 – descending stepwise interval CHANGE TO ASCENDING STEP
1.3 – descending stepwise interval CHANGE TO ASCENDING STEP
1.4 – ascending P5 leap CHANGE TO DESCENDING m3

DISTRACTOR 5-1 – Measure 5: Starting pitch is E during a C chord, range is m3; starting pitch CHANGE TO A5
5.1 – ascending stepwise interval CHANGE TO DESCENDING LEAP m3
5.2 – ascending stepwise interval CHANGE TO SAME PITCH
5.3 – descending step CHANGE TO ASCENDING m3 LEAP

DISTRACTOR 5-2 – Measure 5: Starting pitch is E during a C chord, range is m3; starting pitch CHANGE TO D5
5.1 – ascending stepwise interval CHANGE TO DESCENDING STEP
5.2 – ascending stepwise interval CHANGE TO DESCENDING P4 LEAP
5.3 – descending step CHANGE TO ASCENDING P4 LEAP

121 RHYTHM II: 4 eighth notes, dotted quarter and one eighth – Measures 2 and 7

DISTRACTOR 2 – Measure 2: Starting pitch is B in a G chord, range M3; starting pitch CHANGE TO F#
2.1 – ascending stepwise CHANGE DESCENDING M3 LEAP
2.2 – descending m3 leap CHANGE ASCENDING M3
2.3 – ascending step CHANGE DESCENDING LEAP M3
2.4 – descending M3 leap CHANGE ASCENDING STEP
2.5 – ascending M3 leap CHANGE DESCENDING STEP

DISTRACTOR 7 – Measure 7: Starting pitch is B4 in an Em chord, range tritone (flat P5); starting pitch CHANGE TO D4
7.1 – descending step CHANGE ASCENDING LEAP M3
7.2 – descending step CHANGE ASCENDING LEAP STEP
7.3 – ascending step CHANGE DESCENDING P4 LEAP
7.4 – descending step CHANGE ASCENDING STEP
7.5 – descending step CHANGE m3 LEAP

RHYTHM III: 4 quarter notes – measures 3 and 6

DISTRACTOR 3 – Measure 3: Starting pitch D5, chord D, G/D, range P4; starting pitch CHANGE TO C
3.1 – same interval, no pitch change CHANGE DESCENDING m3
3.2 – ascending P4 CHANGE ASCENDING STEP
3.3 – same interval, no pitch change CHANGE ASCENDING m3 LEAP

DISTRACTOR 6 – Measure 6: Starting pitch D5, chord is G, range P5; starting pitch CHANGE TO F#5
6.1 – descending step CHANGE NO INTERVAL CHANGE
6.2 – descending step CHANGE ASCENDING STEP
6.3 – descending M3 leap CHANGE ASCENDING STEP

RHYTHM IV: half note, 2 eighths, and quarter – Measures 4 and 8

DISTRACTOR 4-1 – Measure 4: Starting pitch D5, range m3; starting pitch CHANGE TO B4
4.1 – descending step CHANGED TO ASCENDING P4
4.2 – descending step CHANGED TO DESCENDING m3 LEAP
4.3 – ascending m3 CHANGED TO DESCENDING STEP

DISTRACTOR 4-2 – Measure 4: Starting pitch D5, range m3; starting pitch CHANGE TO E4
4.1 – descending step CHANGED TO SAME PITCH
4.2 – descending step CHANGED TO ASCENDING STEP
4.3 – ascending m3 CHANGED TO SAME PITCH

DISTRACTOR 8 – Measure 8: Starting pitch G4, range P4; starting pitch CHANGED TO E4
8.1 – descending P4 leap CHANGED TO ASCENDING STEP
8.2 – same pitch repeated CHANGED TO ASCENDING m3
8.3 – ascending P4 leap CHANGED TO DESCENDING STEP

123 Description of Distractor Audio Generation for Pomp and Circumstance

Range – one octave
Measures – no measure has a range greater than P5
One accidental (raised 4th)

RHYTHM I: Half note, 2 eighths, quarter, Measures 1 and 3

DISTRACTOR 1-1 – Measure 1: starting pitch G4 on G chord, D chord; range M2; starting pitch CHANGE TO C5
1.1 – descending step CHANGE ASCENDING STEP
1.2 – ascending step CHANGE NO INTERVAL CHANGE
1.3 – ascending step CHANGE DESCENDING LEAP m3 (range now m3)

DISTRACTOR 1-2 – Starting pitch A4
1.1 – descending step CHANGE ASCENDING STEP
1.2 – ascending step CHANGE DESCENDING M3 LEAP
1.3 – ascending step CHANGE ASCENDING LEAP

Measure 3 – starting pitch C4, chord C, G/D; range m3; starting pitch CHANGE TO F#4
3.1 – descending step CHANGE ASCENDING LEAP m3
3.2 – ascending step CHANGE DESCENDING LEAP m3
3.3 – ascending step CHANGE NO INTERVAL CHANGE

RHYTHM II: 2 half notes, Measures 2 and 6

Measure 2 – starting pitch E4, chord Em, G, range M2 (CHANGE TO A4)

DISTRACTOR 2-1 – starting pitch CHANGE TO A4
2.1 – descending step CHANGE ASCENDING m3

DISTRACTOR 2-2 – starting pitch CHANGE TO F#4
2.1 – descending step CHANGE TO ASCENDING STEP

Measure 6 – starting pitch A4, on D, D, range P5 (CHANGE TO E4)

DISTRACTOR 6
6.1 – descending P5 CHANGE NO INTERVAL

124 RHYTHM III: half note, eighth, quarter, eighth, Measures 5 and 7

Measure 5 – starting pitch B3, on G, A, range P5 (E4)

DISTRACTOR 5-1
5.1 – ascending step (accidental here) CHANGE DESCENDING M3 LEAP
5.2 – ascending step CHANGE DESCENDING STEP
5.3 – ascending step CHANGE ASCENDING LEAP P5

DISTRACTOR 5-2 – Starting pitch B3; CHANGE TO A4
5.1 – ascending step CHANGE DESCENDING STEP
5.2 – ascending step CHANGE ASCENDING LEAP m3
5.3 – ascending step CHANGE ASCENDING LEAP M3

DISTRACTOR 7 – Measure 7: starting pitch G4, chord C, A7, range m3 (D5)
7.1 – same interval, no pitch change CHANGE ASCENDING STEP
7.2 – descending step CHANGE DESCENDING LEAP m3
7.3 – descending step CHANGE ASCENDING LEAP m3

RHYTHM IV: Quarter, 2 eighths, and a half note - measures 4 and 8 Distractors 4 and 8-1 and 8-2

Measure 4

DISTRACTOR 4-1 – Starting pitch A3 on ii to V chord (D F# A), range 3rd; starting pitch CHANGE TO E4
4.1 – descending step CHANGE ASCENDING STEP
4.2 – minor 3rd ascending leap CHANGE DESCENDING STEP
4.3 – descending step CHANGE DESCENDING LEAP

125 Measure 8

DISTRACTOR 8-1 – Starting pitch is D4, chord is V to I; starting pitch CHANGE TO B4
8.1 – ascending step CHANGE DESCENDING STEP
8.2 – ascending step NO CHANGE; SAME PITCH
8.3 – ascending step CHANGE DESCENDING LEAP TRITONE

DISTRACTOR 8-2 – Starting pitch D4; starting pitch CHANGE TO C5
8.1 – ascending step CHANGE DESCENDING LEAP
8.2 – ascending step CHANGE TO ASCENDING m3 LEAP
8.3 – ascending step CHANGE TO DESCENDING STEP

126 APPENDIX B

Informed Consent Form and Approval Letter from Human Subjects Committee

127 128 129 APPENDIX C

Pilot Study – Pre-Experiment Questionnaire

130 Music Experiment Questionnaire - Pretest

1. Please circle correct response. I am: female male

2. What is your current age? ______

3. Are you an ______undergraduate student or ______graduate student?

4. What is your academic major? ______

5. If you are majoring in music, skip to question #7. All other majors, please mark the box of the correct response.

I have no formal music training. Skip to question #7.

I have less than 3 years of formal music training. Please answer question #6.

I have more than 3 years of formal music training. Please answer question #6.

6. Please briefly describe the extent and type of music study and training.

7. How much time do you spend listening to music and performing music during a typical week? Please approximate using hours.

______Hours spent listening to music each week

______Hours spent performing (playing an instrument or singing) music (include individual practice, rehearsals, lessons, etc.) each week

Do you have any questions?

131 APPENDIX D

Main Experiment - Posttest Form for Pomp and Circumstance

132 Pomp & Cir 1st or 2nd

Answer each question.

Pair Correct Music Correct Picture Correct Music/Picture New

1. Pair Correct Picture New Music New Pair New

2. Pair Correct Picture New Music New Pair New

3. Pair Correct Picture New Music New Pair New

4. Pair Correct Picture New Music New Pair New

5. Pair Correct Picture New Music New Pair New

6. Pair Correct Picture New Music New Pair New

7. Pair Correct Picture New Music New Pair New

8. Pair Correct Picture New Music New Pair New

9. Pair Correct Picture New Music New Pair New

10. Pair Correct Picture New Music New Pair New

11. Pair Correct Picture New Music New Pair New

12. Pair Correct Picture New Music New Pair New

(Turn page over)

133 (Please continue)

Pair Correct Music Correct Picture Correct Music/Picture New

13. Pair Correct Picture New Music New Pair New

14. Pair Correct Picture New Music New Pair New

15. Pair Correct Picture New Music New Pair New

16. Pair Correct Picture New Music New Pair New

17. Pair Correct Picture New Music New Pair New

18. Pair Correct Picture New Music New Pair New

19. Pair Correct Picture New Music New Pair New

20. Pair Correct Picture New Music New Pair New

21. Pair Correct Picture New Music New Pair New

22. Pair Correct Picture New Music New Pair New

23. Pair Correct Picture New Music New Pair New

24. Pair Correct Picture New Music New Pair New

134 APPENDIX E

Main Experiment - Posttest Form for The Bailiff’s Daughter

135 Bailiff 1st or 2nd

Answer each question.

Pair Correct Music Correct Picture Correct Music/Picture New

1. Pair Correct Picture New Music New Pair New

2. Pair Correct Picture New Music New Pair New

3. Pair Correct Picture New Music New Pair New

4. Pair Correct Picture New Music New Pair New

5. Pair Correct Picture New Music New Pair New

6. Pair Correct Picture New Music New Pair New

7. Pair Correct Picture New Music New Pair New

8. Pair Correct Picture New Music New Pair New

9. Pair Correct Picture New Music New Pair New

10. Pair Correct Picture New Music New Pair New

11. Pair Correct Picture New Music New Pair New

12. Pair Correct Picture New Music New Pair New

(Turn page over)

136 Bailiff 1st or 2nd

(Please continue)

Pair Correct Music Correct Picture Correct Music/Picture New

13. Pair Correct Picture New Music New Pair New

14. Pair Correct Picture New Music New Pair New

15. Pair Correct Picture New Music New Pair New

16. Pair Correct Picture New Music New Pair New

17. Pair Correct Picture New Music New Pair New

18. Pair Correct Picture New Music New Pair New

19. Pair Correct Picture New Music New Pair New

20. Pair Correct Picture New Music New Pair New

21. Pair Correct Picture New Music New Pair New

22. Pair Correct Picture New Music New Pair New

23. Pair Correct Picture New Music New Pair New

24. Pair Correct Picture New Music New Pair New

137 APPENDIX F

Pilot Study – Post-Experiment Questionnaire

138 Music Experiment Questionnaire

1. Please guess at what percentage you got correct on the familiar music (Pomp & Circumstance) and symbols posttest. Circle one number.

0 10 20 30 40 50 60 70 80 90 100

2. Please guess at what percentage you got correct on the unfamiliar music (Bailiff’s Daughter) and symbols posttest. Circle one number.

0 10 20 30 40 50 60 70 80 90 100

3. Please rate how familiar the following melodies were to you. Please circle the number on the scale provided.

1 = Never heard it before; 3 = Somewhat Familiar; 5 = Very Familiar / Know it Well

Pomp & Circumstance 1 2 3 4 5

The Bailiff’s Daughter 1 2 3 4 5

4. In which of the following tasks do you engage while listening to music? Mark all that apply.

______Read ______Try to fall asleep ______Study/homework

______Computer ______Video/electronic games ______Have a conversation

Others? ______

Do you have any questions? If so, you may ask me now. Thank you!

139 APPENDIX G

Pilot Study – Instructions Script

140 Pilot test instructions

Thank you for volunteering for this study. Researchers have tried to determine if, and under what conditions, humans can successfully complete two tasks at once.

During the main project, you will listen to a selection of music that is 30 seconds long while watching a set of 8 pictures. You will try to remember what musical phrase is paired with which picture. You will not be asked to remember the order. You will get to see and hear the training video 2 or 3 times before completing a 23-question posttest.

There will be 2 training videos with paired music and pictures and 2 posttests where you will circle a response on an answer sheet. The music will be familiar during one video and unfamiliar during the other video. We will watch a video and take the posttest, have a short break, then complete the second video.

Please take a look at the sheet titled Practice Items for Posttest Procedure. Follow along as I read the instructions.

• Please circle one response per test item indicating how these test items compare to the paired musical phrases and pictures you learned during the training video.

• If the test item contains a musical phrase and picture that were paired during the video, circle PAIR CORRECT under the Pair Correct column next to the number of the test item.

• If the test item contains a musical phrase from the training video paired with a new picture, circle PICTURE NEW under the Music Correct column next to the number of the test item.

• If the test item contains a picture from the training video paired with a new musical phrase, circle MUSIC NEW under the Picture Correct column next to the number of the test item.

• If the test item contains a new picture and new musical phrase that were not presented during the training video, circle PAIR NEW under the Music/Picture New column next to the number of the test item.

• Please answer each question as accurately as possible. Guess if you are uncertain. Leave no questions unanswered.

Here is the first practice example.

Video 1 (Farmer’s Boy) - This music is probably unfamiliar to you.

(Play video tape)

141 You should have marked: MUSIC NEW ON #1 PAIR CORRECT ON #2

Let’s try another. This music is Hail to the Chief and probably familiar.

(Start video again)

You should have marked: PAIR NEW ON #1 PICTURE NEW ON #2

Are there any questions on the answer sheet?

There will be no feedback on the next tests.

Please have your answer sheet ready. There will be no feedback during the posttest. There are approximately 5 seconds during which you will respond to the test item.

Do you have any questions? Are you ready?

You will watch the training video for Pomp and Circumstance 2 (or 3) times and complete the posttest.

Start tape for 2 times (or 3 times)

Stop tape.

Please turn to the answer sheet for the Bailiff’s Daughter – the procedure is the same.

(Start Tape)

142 APPENDIX H

Pilot Study – Practice Test Form

143 Practice Items for Posttest Procedure

Please circle one response per test item indicating how these test items compare to the paired musical phrases and pictures you learned during the training video.

If the test item contains a musical phrase and picture that were paired during the video, circle PAIR CORRECT under the Pair Correct column next to the number of the test item.

If the test item contains a musical phrase from the training video paired with a new picture, circle PICTURE NEW under the Music Correct column next to the number of the test item.

If the test item contains a picture from the training video paired with a new musical phrase, circle MUSIC NEW under the Picture Correct column next to the number of the test item.

If the test item contains a new picture and new musical phrase that were not presented during the training video, circle PAIR NEW under the Music/Picture New column next to the number of the test item.

Please answer each question as accurately as possible. Guess if you are uncertain. Leave no questions unanswered.

Example 1

Pair Correct Music Correct Picture Correct Music/Picture New

1. Pair Correct Picture New Music New Pair New

2. Pair Correct Picture New Music New Pair New

Example 2

Pair Correct Music Correct Picture Correct Both New

1. Pair Correct Picture New Music New Pair New

2. Pair Correct Picture New Music New Pair New

144 APPENDIX I

Main Experiment - Pre-Experiment Questionnaire

145 Music Experiment Questionnaire

1. Please circle correct response. I am: female male

2. What is your current age? ______

3. Are you an ______undergraduate student or ______graduate student?

4. What is your academic major? ______

5. If you are majoring in music, skip to question #7. All academic majors other than music, please mark the box of the correct response.

(Formal music training includes the following types of study: private lessons, performing with school or church chorus, orchestra or band, music theory classes or lessons)

I have no formal music training. Skip to question #7.

I have less than 3 years of formal music training. Please answer question #6.

I have more than 3 years of formal music training. Please answer question #6.

6. Please briefly describe the extent and type of music study and training.

7. Mark one response indicating how much time you spend listening to music during a typical week.

_____ 0 – 10 hours _____ 11-20 hours _____ 21-30 hours _____ more than 30 hours

8. Mark one response indicating how much time you spend performing music (playing an instrument or singing, alone or with others) during a typical week.

_____ 0 – 10 hours _____ 11-20 hours _____ 21-30 hours _____ more than 30 hours

146 APPENDIX J

Main Experiment – Practice Test Form

147 Practice Items for Posttest Procedure

Please circle one response per test item indicating how these test items compare to the paired musical phrases and pictures you learned during the training video.

Circle PAIR CORRECT if the test item contains a musical phrase and picture that were paired during the video.

Circle PICTURE NEW if the test item contains a musical phrase from the training video paired with a new picture.

Circle MUSIC NEW if the test item contains a picture from the training video paired with a new musical phrase.

Circle PAIR NEW if the test item contains a new picture and new musical phrase that were not presented during the training video.

Please answer each question as accurately as possible. GUESS if you are uncertain. Leave no questions unanswered.
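As a compact summary of the response rule spelled out above, the following sketch (added for clarity; the function name and labels are illustrative and were not part of the participant materials) maps the two judgments about a test item onto the four answer-sheet responses. It assumes that when both the music and the picture come from the training video, they are the originally paired elements, as in the answer keys in Appendices N and O.

# Hypothetical sketch of the four-way response rule described above.
def expected_response(music_from_video, picture_from_video):
    """Return the label an accurate participant would circle for one test item."""
    if music_from_video and picture_from_video:
        return "PAIR CORRECT"   # assumes the two elements are the originally paired ones
    if music_from_video:
        return "PICTURE NEW"    # old music paired with a new picture
    if picture_from_video:
        return "MUSIC NEW"      # old picture paired with new music
    return "PAIR NEW"           # neither element appeared in the training video

# Example: trainer music presented with a distractor picture
print(expected_response(music_from_video=True, picture_from_video=False))  # PICTURE NEW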

Example 1 – The Farmer’s Boy

Pair Correct Music Correct Picture Correct Music/Picture New

1. Pair Correct Picture New Music New Pair New

2. Pair Correct Picture New Music New Pair New

Example 2 – Hail to the Chief

Pair Correct Music Correct Picture Correct Both New

1. Pair Correct Picture New Music New Pair New

2. Pair Correct Picture New Music New Pair New

148 APPENDIX K

Main Experiment - Post-Experiment Questionnaire

149 Post Experiment Questionnaire

1. Please guess what percentage you got correct on the Pomp & Circumstance posttest. Circle one number.

0 10 20 30 40 50 60 70 80 90 100

2. Please guess what percentage you got correct on the Bailiff’s Daughter posttest. Circle one number.

0 10 20 30 40 50 60 70 80 90 100

3. Please rate how familiar the following melodies were to you. Please circle the number on the scale provided.

1 = Never heard it before; 3 = Somewhat Familiar; 5 = Very Familiar / Know it Well

Pomp & Circumstance 1 2 3 4 5

The Bailiff’s Daughter 1 2 3 4 5

4. In which of the following tasks do you engage while listening to music? Mark all that apply.

______Reading ______Trying to fall asleep ______Studying/homework

______Computer ______Video/electronic games ______During conversations

______While driving ______Household chores ______Exercise/working out

Others? ______

(Turn page over)

150 5. Please circle how your attention was focused on the music and pictures during the training video. For instance, if you focused your attention equally between the two, you would circle 50/50.

Pomp and Circumstance

Music     0   10   20   30   40   50   60   70   80   90   100
Pictures  100  90   80   70   60   50   40   30   20   10   0

Bailiff’s Daughter

Music     0   10   20   30   40   50   60   70   80   90   100
Pictures  100  90   80   70   60   50   40   30   20   10   0

6. Please write down what strategies you used in order to complete this memory task. If none, mark none.

______None. I used no particular strategy.

Thank you

151 APPENDIX L

Audio Instructions Accompanying Introductory and Instruction Slides in Experiment Videos

152 Audio Instructions for introduction through practice test

Slides 1, 11-12, 17-18, 23, 27, 30, 34, 39-40, 45-46, 49, and 52 – no audio

Slide 2 – “Thank you for participating in this research on focus of attention and memory for sights and sounds. During the main project, you will watch a short video 2 times and take a 24-question test. During one video, the music will be familiar. During the second, it will be unfamiliar.”

Slide 3 – “You will try to remember the pictures and music on a training video. You are trying to remember what music played during each picture. You will not have to remember the order.”

Slide 4 – “Each test item will contain one picture and a short segment of music. You will indicate if the music and/or the picture appeared during the training video.”

Slide 5 – “You will circle pair correct if both the music and the picture in the test item were in the video.”

Slide 6 – “You will circle picture new if the picture in the test item is new, but the music was in the video.”

Slide 7 – “You will circle music new if the music in the test item is new, but the picture was in the video.”

Slide 8 – “You will circle pair new if neither the music nor the picture was in the training video. They are both new.”

Slide 9 – “Please try to answer each question as accurately as possible, but if you are uncertain about an answer, please guess. This is part of the research. Leave no questions unanswered. Again, it is okay to guess.”

Slide 10 – “Let’s practice using the answer sheet. The first practice video is titled The Farmer’s Boy. This is a folk song that is probably unfamiliar to you. It contains 4 pictures and the music. You will see the training video 2 times. Try to learn what music was playing during each picture. A test will follow.”

Slides 13 – 16 – music The Farmer’s Boy

Slide 19 – 22 – music The Farmer’s Boy

Slide 24 – “Please have your answer sheet ready and circle a response after each test item.”

Slide 25 – warning audio

Slide 26 – music – Audio distractor 1

153 Slide 28 – warning audio

Slide 29 – music – Measure 3 from Farmer’s

Slide 31 – “For question #1, you should have circled music new. The picture was on the video but the music was new.”

Slide 32 – “For number 2, you should have circled pair correct. Both the picture and the music were on the video.”

Slide 33 – “Let’s practice again. This practice video is titled Hail to the Chief. It is music that is probably familiar to you. The video contains 4 pictures and music. You will watch the video two times. The test will follow.”

Slides 35 – 38 – music Hail to the Chief

Slides 41 – 44 – music Hail to the Chief

Slide 47 – warning audio

Slide 48 – music – Audio distractor

Slide 50 – warning audio

Slide 51 – music – measure 3

Slide 53 – “Number 1, you should have circled pair new. Neither the picture nor the music was on the video.”

Slide 54 – “Number 2, you should have circled picture new. The music was on the video but the picture is new.”

Slide 55 – “Do you have any questions on how to use the answer sheet? Please ask me now.”

Slide 56 – “Please try to answer each question as accurately as possible, but if you are uncertain about an answer, please guess. This is part of the research. Leave no questions unanswered. Again, it is okay to guess.” (Same as slide 9 audio).

End 5:16

154 APPENDIX M

Introductory, Instruction, and Practice Test Slides

155 Slides 1, 11, 17, 23, 27, 30, 39, 45, 49, and 52 – Blank white screens
Slides 13 – 16 and 18 – 22 – Figure 8 music and Figure 12 images
Slides 35 – 38 and 41 – 44 – Figure 6 music and Figure 11 images

Slide 2 Slide 3 Slide 4

Slide 5 Slide 6 Slide 7

Slide 8 Slide 9 Slide 10

Slide 12 Slide 18 Slide 24

156 Slide 25 Slide 26 (Visual) Slide 26 (audio)

Slide 28 Slide 29 (Visual) Slide 29 (audio)

Slide 31 Slide 32 Slide 33

Slide 34 Slide 46 Slide 47

Slide 48 (original visual green) Slide 48 (audio) Slide 50

157 Slide 51 (original visual green) Slide 51 (audio) Slide 53

Slide 54 Slide 55 Slide 56

Slide 57 (Bailiff First)   Slide 58 (Selective Attention pictures)   Slide 58 (Dual Task)

158 APPENDIX N

Main Experiment – The Bailiff’s Daughter Posttest

159 Distractor Distractor

Trainer Trainer

Trainer Distractor

Distractor Trainer

Trainer Trainer

Distractor Trainer

Distractor Distractor

Distractor Trainer

Trainer Distractor

160 Distractor Distractor

Trainer Distractor

Trainer Trainer

Trainer Distractor

Distractor Distractor

Trainer Trainer

Trainer Trainer

Distractor Trainer

Distractor Distractor

161 Distractor Distractor

Trainer Distractor

Trainer Trainer

Trainer Distractor

Distractor Trainer

Distractor Trainer

162 APPENDIX O

Main Experiment - Pomp and Circumstance Posttest

163 Trainer Trainer

Distractor Trainer

Trainer Distractor

Trainer Distractor

Trainer Trainer

Distractor Trainer

Distractor Trainer

Distractor Distractor

Trainer Trainer

164 Distractor Distractor

Trainer Distractor

Distractor Distractor

Distractor Trainer

Trainer Distractor

Distractor Trainer

Trainer Trainer

Trainer Trainer

Distractor Distractor

165 Distractor Distractor

Trainer Distractor

Trainer Trainer

Trainer Distractor

Distractor Distractor

Distractor Trainer

166 APPENDIX P

The Bailiff’s Daughter Video

167 (1) (2) (3) (4)

(5) (6) (7) (8)

168 APPENDIX Q

Pomp and Circumstance Video

169 (1) (2) (3) (4)

(5) (6) (7) (8)

170 APPENDIX R

Raw Data Spreadsheets

171 Order One – Dual Task Bailiff’s Daughter First – Pomp and Circumstance Second Demographic Data Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years FM 1 1BDT F 22 U Music education Yes 12 FM 2 1BDT F 21 U Music therapy Yes 13 FM 3 1BDT F 18 U Music therapy Yes 14 FM 4 1BDT F 27 G Music education Yes 30 FM 5 1BDT F 25 G Music education Yes 31 FM 6 1BDT F 22 G Music therapy Yes 32 FM 7 1BDT F 23 G Music therapy Yes 33 FM 8 1BDT F 19 U Music therapy Yes 6 FN 9 1BDT F 18 U History X 13 FN 10 1BDT F 24 U Social work X 14 FN Biological 11 1BDT F 18 U X 25 science/pre med FN 12 1BDT F 19 U Psychology X 3 FN 13 1BDT F 19 U Education - English X 31 FN 14 1BDT F 19 U Graphic design X 32 FN Communication/ 15 1BDT F 21 U X 43 creative writing FN 16 1BDT F 19 U Undecided/education X 48

172 Order One – Dual Task Bailiff’s Daughter First – Pomp and Circumstance Second Demographic Data Continued Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years MM Music 17 1BDT M 42 G Yes 10 education MM 18 1BDT M 22 U Music - BA 17 Yes MM Music 19 1BDT M 21 U Yes 19 education MM Music 20 1BDT M 20 U Yes 2 education MM Music 21 1BDT M 20 U Yes 20 education MM 22 1BDT M 19 U Music - BA Yes 45 MM 23 1BDT M 21 U Music - BA Yes 46 MM Music 24 1BDT M 21 U Yes 49 education MN 25 1BDT M 18 U Undecided X 1 MN 26 1BDT M 18 U Chemistry X 10 MN Creative 27 1BDT M 20 U X 30 writing MN Information 28 1BDT M 20 U X 31 technology MN Creative 29 1BDT M 21 U X 40 writing MN 30 1BDT M 20 U Business X 41 MN Biological 31 1BDT M 21 U X 8 science MN Computer 32 1BDT M 23 G X 9 science

173 Order One – Dual Task
Non-music Majors – Description of Music Training

Code – Description of Music Training
FN 13 – Band 9 years
FN 14 – Voice 10 years; chorus
FN 25 – Choir 5 years
FN 3 – Middle school chorus, piano lesson for few months
FN 31 – Choir 10 years; violin 6 years
FN 32 – Piano 1 year
FN 43 – Choir 4 years
FN 48 – Choir 9 years
MN 1 – Band 8 years, 1 year chorus
MN 10 – Band 10 years
MN 30 – Band 6 years
MN 31 – Piano 14 years
MN 40 – Band 4 years; piano as child
MN 41 – Piano 2 years; guitar 2 years; AP theory
MN 8 – Choir 4 years
MN 9 – Choir 8 years

174 Order One – Dual Task Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation P&C P&C Bail Bail P&C Bailiff Code Tape Listening Performing % % % % Rate Rate Music Pics Music Pics FM 1 1BDT 2 2 5 1 70 30 50 50 12 FM 2 1BDT 1 1 4 1 10 90 50 50 13 FM 3 1BDT 3 1 4 1 60 40 80 20 14 FM 4 1BDT 2 2 5 1 20 80 60 40 30 FM 5 1BDT 3 1 5 3 40 60 60 40 31 FM 6 1BDT 4 4 5 1 10 90 70 30 32 FM 7 1BDT 1 1 4 1 50 50 30 70 33 FM 8 1BDT 1 1 4 1 30 70 50 50 6 FN 9 1BDT 3 1 5 3 40 60 50 50 13 FN 10 1BDT 1 1 5 3 50 50 40 60 14 FN 11 1BDT 4 2 5 1 20 80 50 50 25 FN 12 1BDT 2 1 3 3 50 50 50 50 3 FN 13 1BDT 4 1 5 1 60 40 50 50 31 FN 14 1BDT 3 1 5 1 10 90 80 20 32 FN 15 1BDT 2 1 4 1 40 60 60 40 43 FN 16 1BDT 3 2 5 2 10 90 50 50 48

175 Order One – Dual Task Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation Continued P&C Bailiff P&C P&C Bail Bail Code Tape Listening Performing Fam. Fam. % % % % Rate Rate Music Pics Music Pics MM 17 1BDT 4 2 5 1 40 60 70 30 10 MM 18 1BDT 4 1 5 1 80 20 60 40 17 MM 19 1BDT 4 4 5 1 10 90 70 30 19 MM 20 1BDT 1 2 5 1 40 60 60 40 2 MM 21 1BDT 2 2 4 1 40 60 20 80 20 MM 22 1BDT 3 1 5 1 30 70 50 50 45 MM 23 1BDT 2 2 5 1 50 50 50 50 46 MM 24 1BDT 2 1 5 1 30 70 40 60 49 MN 25 1BDT 3 2 5 3 40 60 50 50 1 MN 26 1BDT 4 2 5 2 70 30 40 60 10 MN 27 1BDT 2 1 5 1 80 20 40 60 30 MN 28 1BDT 2 1 5 1 50 50 50 50 31 MN 29 1BDT 4 2 4 1 30 70 70 30 40 MN 30 1BDT 4 1 4 2 20 80 80 20 41 MN 31 1BDT 3 2 5 1 10 90 90 10 8 MN 32 1BDT 1 1 3 1 60 40 90 10 9

176 Order One – Dual Task Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 1 1BDT 21 4 6 5 6 12 9 0 3 12 FM 2 1BDT 14 4 3 3 4 7 7 0 10 13 FM 3 1BDT 13 3 5 3 2 6 7 4 8 14 FM 4 1BDT 19 5 6 3 5 11 8 1 4 30 FM 5 1BDT 20 5 5 5 5 11 9 1 3 31 FM 6 1BDT 18 5 5 3 5 9 9 2 6 32 FM 7 1BDT 19 5 4 5 5 10 9 0 5 33 FM 8 1BDT 17 4 5 4 4 10 7 4 7 6 FN 9 1BDT 20 6 6 5 3 10 10 0 4 13 FN 10 1BDT 12 5 4 1 2 6 6 5 8 14 FN 11 1BDT 17 5 5 3 4 9 8 0 7 25 FN 12 1BDT 12 6 5 1 0 6 6 6 11 3 FN 13 1BDT 10 1 3 3 3 5 5 6 10 31 FN 14 1BDT 17 6 6 2 3 8 9 4 4 32 FN 15 1BDT 18 5 6 2 5 9 9 1 6 43 FN 16 1BDT 18 6 6 3 3 9 9 0 6 48

177 Order One – Dual Task Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 17 1BDT 18 4 5 4 5 11 7 4 6 10 MM 18 1BDT 16 4 4 5 3 7 9 6 5 17 MM 19 1BDT 17 2 6 5 4 9 8 2 5 19 MM 20 1BDT 14 4 5 2 3 7 7 8 5 2 MM 21 1BDT 19 5 6 4 4 11 8 5 5 20 MM 22 1BDT 19 6 5 4 4 9 10 6 5 45 MM 23 1BDT 14 1 4 3 6 7 7 1 7 46 MM 24 1BDT 20 4 6 5 5 9 11 4 2 49 MN 25 1BDT 20 5 6 5 4 10 10 5 4 1 MN 26 1BDT 16 5 3 5 3 8 8 6 8 10 MN 27 1BDT 10 2 3 2 3 4 6 4 12 30 MN 28 1BDT 12 3 4 2 3 6 6 5 12 31 MN 29 1BDT 18 5 5 4 4 9 9 5 4 40 MN 30 1BDT 17 5 4 4 4 7 10 5 6 41 MN 31 1BDT 15 4 4 4 3 8 7 7 8 8 MN 32 1BDT 16 4 6 2 4 8 8 4 7 9
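Because the score tables in this appendix are flattened into running text, the following note and sketch (an editorial aid, not part of the original data) show how one row can be read. The column order is inferred from the table captions: Code, Tape, Total Score, Pair Correct, Picture New, Music New, New Pair, items #1–12, items #13–24, Error Pictures, Error Music. In the flattened text each row begins with the code letters and a row counter, and the code's participant number appears at the end of the row; for example, the row beginning "FM 1 1BDT 21 ..." and ending "... 0 3 12" is participant FM 12.

# Editorial aid: interpreting one flattened row of the score tables (values taken from the
# first FM row of the preceding Bailiff's Daughter table, participant FM 12, tape 1BDT).
row = {
    "Code": "FM 12", "Tape": "1BDT",
    "Total": 21, "PairCorrect": 4, "PictureNew": 6, "MusicNew": 5, "NewPair": 6,
    "Items1to12": 12, "Items13to24": 9,
    "ErrorPictures": 0, "ErrorMusic": 3,
}

# Internal consistency checks implied by the column names:
assert row["PairCorrect"] + row["PictureNew"] + row["MusicNew"] + row["NewPair"] == row["Total"]
assert row["Items1to12"] + row["Items13to24"] == row["Total"]
assert row["ErrorPictures"] + row["ErrorMusic"] == 24 - row["Total"]   # 24 test items in total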

178 Order One – Dual Task Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 1 1BDT 23 5 6 6 6 12 11 0 1 12 FM 2 1BDT 20 6 5 5 4 11 9 0 4 13 FM 3 1BDT 15 3 6 3 3 7 8 3 7 14 FM 4 1BDT 23 5 6 6 6 12 11 0 1 30 FM 5 1BDT 23 5 6 6 6 11 12 1 0 31 FM 6 1BDT 23 6 6 5 6 12 11 1 0 32 FM 7 1BDT 21 6 6 4 5 11 10 0 3 33 FM 8 1BDT 23 6 5 6 6 12 11 1 0 6 FN 9 1BDT 19 6 3 5 5 11 8 3 3 13 FN 10 1BDT 13 5 5 0 3 7 6 3 9 14 FN 11 1BDT 21 5 6 5 5 12 9 1 2 25 FN 12 1BDT 10 4 4 0 2 5 5 5 11 3 FN 13 1BDT 19 5 5 5 4 11 8 2 3 31 FN 14 1BDT 21 6 6 4 5 11 10 0 3 32 FN 15 1BDT 15 4 5 2 4 9 6 3 8 43 FN 16 1BDT 22 6 5 6 5 12 10 1 1 48

179 Order One – Dual Task Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 17 1BDT 22 5 5 6 6 11 11 0 2 10 MM 18 1BDT 17 5 4 5 3 8 9 4 3 17 MM 19 1BDT 22 6 6 5 5 12 10 0 2 19 MM 20 1BDT 23 5 6 6 6 12 11 0 1 2 MM 21 1BDT 23 6 6 5 6 12 11 0 1 20 MM 22 1BDT 21 5 6 5 5 11 10 0 3 45 MM 23 1BDT 18 3 4 5 6 10 8 0 6 46 MM 24 1BDT 23 6 6 5 6 11 12 1 0 49 MN 25 1BDT 21 5 5 6 5 10 11 0 3 1 MN 26 1BDT 21 5 6 5 6 11 10 1 3 10 MN 27 1BDT 15 4 5 1 5 8 7 5 6 30 MN 28 1BDT 21 5 5 5 6 12 9 0 3 31 MN 29 1BDT 22 5 5 6 6 12 10 0 2 40 MN 30 1BDT 20 4 5 5 6 11 9 0 4 41 MN 31 1BDT 20 5 4 6 5 12 8 3 1 8 MN 32 1BDT 17 3 4 5 5 8 9 2 6 9

180 Order One – Selective Attention to Music Bailiff’s Daughter First – Pomp and Circumstance Second Demographic Data Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years FM 1 2BSAM F 31 G Music education Yes 20 FM 2 2BSAM F 25 G Music education Yes 21 FM 3 2BSAM F 19 U Music undecided Yes 28 FM 4 2BSAM F 21 U Music education Yes 37 FM Music 5 2BSAM F 19 U Yes 48 performance FM 6 2BSAM F 20 U Music education Yes 49 FM 7 2BSAM F 19 U Music therapy Yes 7 FM Music 8 2BSAM F 22 U Yes 8 performance FN 9 2BSAM F 19 U Criminology X 1 FN 10 2BSAM F 18 U Business X 11 FN 11 2BSAM F 20 U History X 22 FN 12 2BSAM F 21 U English X 27 FN 13 2BSAM F 18 U Religion X 33 FN 14 2BSAM F 19 U Business X 37 FN 15 2BSAM F 19 U Social work X 41 FN 16 2BSAM F 19 U Social work X 42

181 Order One – Selective Attention to Music Bailiff’s Daughter First – Pomp and Circumstance Second Demographic Data Continued Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years MM Music 17 2BSAM M 32 G Yes 14 education MM Music 18 2BSAM M 19 U Yes 18 performance MM Music 19 2BSAM M 21 U Yes 26 education MM Music 20 2BSAM M 22 U Yes 27 education MM Music 21 2BSAM M 21 U Yes 34 education MM 22 2BSAM M 21 U Music - BA Yes 47 MM Music 23 2BSAM M 19 U Yes 48 education MM Music 24 2BSAM M 20 U Yes 7 education MN 25 2BSAM M 22 U English X 19 MN 26 2BSAM M 18 U Engineering X 20 MN Sports 27 2BSAM M 19 U X 32 management MN 28 2BSAM M 18 U Creative writing X 33 MN Sports 29 2BSAM M 19 U X 34 journalism MN 30 2BSAM M 20 U Criminology X 49 MN 31 2BSAM M 22 U Political science X 50 MN 32 2BSAM M 22 U Mathematics X 6

182 Order One – Selective Attention to Music
Non-music Majors – Description of Music Training

Code – Description of Music Training
FN 1 – Band 5 years
FN 22 – Choir 10 years
FN 27 – Choir 8 years
FN 33 – Piano 8 years; chorus 8 years
FN 37 – Piano 7 years; voice 4 years
FN 42 – Choir 4 years; band 2 years; piano 1 year
MN 19 – Choir 6 years
MN 20 – Band 7 years
MN 32 – Piano 2 years
MN 33 – Band 6 years; guitar 3 years; piano 2 years
MN 34 – Guitar 7 years
MN 49 – Band 6 years
MN 6 – Choir 2 years

183 Order One – Selective Attention to Music Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation P&C P&C Bail Bail P&C Bailiff Code Tape Listening Performing % % % % Rate Rate Music Pics Music Pics FM 2B 1 2 2 5 2 60 40 80 20 20 SAM FM 2B 2 2 1 5 1 60 40 80 20 21 SAM FM 2B 3 4 2 5 2 50 50 70 30 28 SAM FM 2B 4 4 3 4 1 40 60 60 40 37 SAM FM 2B 5 2 1 5 1 10 90 90 10 48 SAM FM 2B 6 2 3 5 1 70 30 80 20 49 SAM FM 2B 7 4 2 5 1 90 10 60 40 7 SAM FM 2B 8 1 2 5 2 70 30 70 30 8 SAM FN 2B 9 4 1 5 2 10 90 50 50 1 SAM FN 2B 10 3 1 4 1 40 60 70 30 11 SAM FN 2B 11 2 1 4 1 30 70 50 50 22 SAM FN 2B 12 3 2 4 2 70 30 60 40 27 SAM FN 2B 13 1 1 4 1 50 50 40 60 33 SAM FN 2B 14 2 2 5 1 70 30 90 10 37 SAM FN 2B 15 4 1 5 1 10 90 80 20 41 SAM FN 2B 16 2 1 5 1 80 20 50 50 42 SAM

184 Order One – Selective Attention to Music Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation Continued P&C Bailiff P&C P&C Bail Bail Code Tape Listening Performing Fam. Fam. % % % % Rate Rate Music Pics Music Pics MM 2B 17 3 3 5 1 40 60 60 40 14 SAM MM 2B 18 2 2 5 1 10 90 60 40 18 SAM MM 2B 19 4 3 5 1 70 30 90 10 26 SAM MM 2B 20 4 2 5 1 80 20 30 70 27 SAM MM 2B 21 3 1 5 2 50 50 70 30 34 SAM MM 2B 22 2 2 5 1 60 40 70 30 47 SAM MM 2B 23 1 1 4 1 60 40 60 40 48 SAM MM 2B 24 3 1 5 1 60 40 80 20 7 SAM MN 2B 25 4 2 4 2 40 60 70 30 19 SAM MN 2B 26 2 1 5 1 70 30 80 20 20 SAM MN 2B 27 4 1 4 1 70 30 80 20 32 SAM MN 2B 28 2 2 5 1 50 50 30 70 33 SAM MN 2B 29 3 2 5 2 50 50 70 30 34 SAM MN 2B 30 2 1 5 2 60 40 80 20 49 SAM MN 2B 31 2 1 3 1 70 30 80 20 50 SAM MN 2B 32 1 1 4 1 40 60 30 70 6 SAM

185 Order One – Selective Attention to Music Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter

Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 2B 1 19 5 6 5 3 9 10 0 5 20 SAM FM 2B 2 19 4 5 6 4 9 10 1 4 21 SAM FM 2B 3 18 6 5 4 3 10 8 0 6 28 SAM FM 2B 4 15 6 4 3 2 6 9 4 5 37 SAM FM 2B 5 14 5 4 4 1 5 9 2 8 48 SAM FM 2B 6 13 4 4 2 3 4 9 3 10 49 SAM FM 2B 7 16 5 4 2 5 8 8 3 5 7 SAM FM 2B 8 17 3 4 4 6 10 7 1 6 8 SAM FN 2B 9 12 3 3 3 3 8 4 4 8 1 SAM FN 2B 10 10 3 4 1 2 6 4 1 12 11 SAM FN 2B 11 16 4 5 4 3 7 9 1 7 22 SAM FN 2B 12 15 3 6 2 4 7 8 4 5 27 SAM FN 2B 13 18 5 6 4 3 9 9 2 6 33 SAM FN 2B 14 21 6 6 5 4 11 10 0 3 37 SAM FN 2B 15 13 3 4 1 5 7 6 2 10 41 SAM FN 2B 16 15 4 3 3 5 6 9 5 5 42 SAM

186 Order One – Selective Attention to Music Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 2B 17 13 3 2 3 5 6 7 7 6 14 SAM MM 2B 18 18 4 5 4 5 8 10 0 6 18 SAM MM 2B 19 14 3 1 4 6 8 6 4 8 26 SAM MM 2B 20 17 3 6 3 5 7 10 5 3 27 SAM MM 2B 21 15 4 4 3 4 9 6 3 6 34 SAM MM 2B 22 18 4 6 4 4 10 8 1 6 47 SAM MM 2B 23 22 5 6 5 6 12 10 0 2 48 SAM MM 2B 24 20 5 6 5 4 11 9 0 4 7 SAM MN 2B 25 15 5 4 4 2 5 10 1 9 19 SAM MN 2B 26 13 3 5 2 3 3 10 3 9 20 SAM MN 2B 27 13 2 6 1 4 7 6 6 6 32 SAM MN 2B 28 16 2 5 5 4 8 8 0 8 33 SAM MN 2B 29 16 3 3 5 5 8 8 3 7 34 SAM MN 2B 30 20 4 6 5 5 10 10 0 4 49 SAM MN 2B 31 17 3 5 5 4 10 7 1 7 50 SAM MN 2B 32 13 4 4 3 2 6 7 0 11 6 SAM

187 Order One – Selective Attention to Music Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 2B 1 23 6 5 6 6 12 11 1 0 20 SAM FM 2B 2 21 4 6 5 6 10 11 2 1 21 SAM FM 2B 3 22 5 5 6 6 11 11 0 2 28 SAM FM 2B 4 19 5 6 4 4 11 8 1 4 37 SAM FM 2B 5 21 5 5 5 6 11 10 0 2 48 SAM FM 2B 6 18 6 3 5 4 11 7 1 5 49 SAM FM 2B 7 19 3 5 5 6 10 9 0 5 7 SAM FM 2B 8 22 5 5 6 6 11 11 0 2 8 SAM FN 2B 9 21 5 5 5 6 11 10 0 3 1 SAM FN 2B 10 21 5 5 5 6 11 10 0 3 11 SAM FN 2B 11 23 6 6 6 5 11 12 0 1 22 SAM FN 2B 12 20 4 5 5 6 10 10 2 2 27 SAM FN 2B 13 19 4 4 6 5 10 9 0 5 33 SAM FN 2B 14 20 6 6 5 3 11 9 0 4 37 SAM FN 2B 15 18 6 4 4 4 10 8 1 5 41 SAM FN 2B 16 18 4 5 4 5 10 8 5 1 42 SAM

188 Order One – Selective Attention to Music Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 2B 17 20 4 5 5 6 10 10 2 2 14 SAM MM 2B 18 22 5 6 5 6 12 19 0 2 18 SAM MM 2B 19 21 5 4 6 6 10 11 0 3 26 SAM MM 2B 20 21 4 6 6 5 11 10 2 1 27 SAM MM 2B 21 20 5 4 5 6 11 9 2 3 34 SAM MM 2B 22 22 6 4 6 6 11 11 0 2 47 SAM MM 2B 23 20 5 4 6 5 10 10 2 2 48 SAM MM 2B 24 21 5 4 6 6 11 10 0 3 7 SAM MN 2B 25 17 4 6 3 4 9 8 3 6 19 SAM MN 2B 26 22 5 6 5 6 11 11 0 2 20 SAM MN 2B 27 20 4 5 5 6 10 10 1 3 32 SAM MN 2B 28 22 6 6 6 4 11 11 2 0 33 SAM MN 2B 29 18 2 5 4 6 10 8 3 3 34 SAM MN 2B 30 18 3 6 3 6 9 9 2 4 49 SAM MN 2B 31 18 3 5 4 6 9 9 2 5 50 SAM MN 2B 32 21 5 6 4 6 12 9 0 3 6 SAM

189 Order One – Selective Attention to Pictures Bailiff’s Daughter First – Pomp and Circumstance Second Demographic Data Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years FM 3B 1 F 30 G Music education Yes 15 SAP FM 3B 2 F 31 G Music education Yes 16 SAP FM 3B 3 F 24 G Music therapy Yes 25 SAP FM 3B 4 F 22 U Music education Yes 26 SAP FM 3B 5 F 30 G Music education Yes 34 SAP FM 3B 6 F 25 G Music education Yes 40 SAP FM 3B 7 F 20 U Music education Yes 50 SAP FM 3B 8 F 20 U BFA music Yes 9 SAP FN 3B 9 F 20 U English X 16 SAP FN 3B 10 F 19 U Undecided X 2 SAP FN 3B 11 F 19 U Psychology X 28 SAP FN 3B 12 F 18 U Humanities X 29 SAP FN 3B Exercise 13 F 21 U X 30 SAP physiology FN 3B Psychology/ 14 F 21 U X 46 SAP Pre-med FN 3B 15 F 20 U English X 7 SAP FN 3B 16 F 21 U Merchandising X 8 SAP

190 Order One – Selective Attention to Pictures Bailiff’s Daughter First – Pomp and Circumstance Second Demographic Data Continued Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years MM 3B 17 M 27 G Music education Yes 11 SAP MM 3B 18 M 23 G Music performance Yes 29 SAP MM 3B 19 M 28 G Music therapy Yes 30 SAP MM 3B 20 M 29 G Music education Yes 37 SAP MM 3B 21 M 19 U Music education Yes 39 SAP MM 3B 22 M 18 U Music education Yes 4 SAP MM 3B 23 M 19 U Music education Yes 40 SAP MM 3B 24 M 24 G Musicology Yes 9 SAP MN 3B 25 M 19 U Criminology/psychology X 11 SAP MN 3B 26 M 19 U Civil engineering X 12 SAP MN 3B Business-risk 27 M 19 U X 13 SAP management/insurance MN 3B 28 M 20 U Biological science X 25 SAP MN 3B 29 M 20 U Engineering X 26 SAP MN 3B 30 M 20 U Criminology X 37 SAP MN 3B 31 M 19 U Hospitality/business X 39 SAP MN 3B 32 M 21 U Mass media studies X 46 SAP

191 Order One – Selective Attention to Pictures
Non-music Majors – Description of Music Training

Code – Description of Music Training
FN 16 – Band 10 years
FN 2 – Band 6 years
FN 28 – Choir 7 years; piano 11 years
FN 29 – Chorus/band 7 years; theatre
FN 30 – Choir 2 years
FN 46 – Band 4 years
FN 7 – Band 7 years, theory, church choir, community orchestra
FN 8 – Band 7 years; chorus
MN 25 – Piano 2 years; guitar 2 years; drums 2 years
MN 39 – Band 7 years

192 Order One – Selective Attention to Pictures Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation P&C P&C Bail Bail P&C Bailiff Code Tape Listening Performing % % % % Rate Rate Music Pics Music Pics FM 3B 1 4 2 5 1 10 90 40 60 15 SAP FM 3B 2 1 2 5 1 10 90 50 50 16 SAP FM 3B 3 3 2 5 1 30 70 30 70 25 SAP FM 3B 4 2 2 4 1 40 60 20 80 26 SAP FM 3B 5 2 1 5 1 40 60 50 50 34 SAP FM 3B 6 3 2 5 1 80 20 40 60 40 SAP FM 3B 7 3 2 4 1 20 80 10 90 50 SAP FM 3B 8 4 3 5 2 40 60 50 50 9 SAP FN 3B 9 2 1 4 1 10 90 40 60 16 SAP FN 3B 10 3 1 5 3 90 10 60 40 2 SAP FN 3B 11 1 1 4 1 30 70 50 50 28 SAP FN 3B 12 4 2 4 1 20 80 20 80 29 SAP FN 3B 13 3 2 4 1 40 60 70 30 30 SAP FN 3B 14 1 1 4 2 10 90 30 70 46 SAP FN 3B 15 3 1 5 2 10 90 20 80 7 SAP FN 3B 16 3 1 5 1 50 50 70 30 8 SAP

193 Order One – Selective Attention to Pictures Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation Continued P&C Bailiff P&C P&C Bail Bail Code Tape Listening Performing Fam. Fam. % % % % Rate Rate Music Pics Music Pics MM 3B 17 3 2 5 2 20 80 50 50 11 SAP MM 3B 18 4 3 4 1 20 80 20 80 29 SAP MM 3B 19 3 1 5 1 10 90 30 70 30 SAP MM 3B 20 1 1 5 1 10 90 40 60 37 SAP MM 3B 21 4 3 5 1 50 50 30 70 39 SAP MM 3B 22 3 2 5 1 10 90 40 60 4 SAP MM 3B 23 2 1 5 1 10 90 30 70 40 SAP MM 3B 24 4 4 5 1 30 70 20 80 9 SAP MN 3B 25 3 1 5 3 70 30 60 40 11 SAP MN 3B 26 1 1 4 1 30 70 50 50 12 SAP MN 3B 27 4 1 5 1 60 40 20 80 13 SAP MN 3B 28 1 1 5 1 90 10 80 20 25 SAP MN 3B 29 3 1 3 1 40 60 20 80 26 SAP MN 3B 30 4 1 4 2 10 90 40 60 37 SAP MN 3B 31 3 1 5 1 20 80 50 50 39 SAP MN 3B 32 3 1 5 1 30 70 40 60 46 SAP

194 Order One – Selective Attention to Pictures Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter

Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 3B 1 12 1 6 1 4 6 6 5 8 15 SAP FM 3B 2 13 6 3 3 1 6 7 1 10 16 SAP FM 3B 3 17 6 6 3 2 8 9 0 7 25 SAP FM 3B 4 19 6 5 3 5 9 10 1 5 26 SAP FM 3B 5 12 4 3 3 2 5 7 3 11 34 SAP FM 3B 6 14 3 5 4 2 8 6 3 8 40 SAP FM 3B 7 17 6 6 2 3 9 8 1 6 50 SAP FM 3B 8 21 5 5 5 6 11 10 0 3 9 SAP FN 3B 9 16 3 6 4 3 9 7 2 7 16 SAP FN 3B 10 17 4 4 4 5 9 8 0 7 2 SAP FN 3B 11 20 4 6 5 5 11 9 0 4 28 SAP FN 3B 12 16 4 4 3 3 6 8 3 10 29 SAP FN 3B 13 17 6 4 4 3 9 8 1 6 30 SAP FN 3B 14 20 5 6 4 6 10 10 0 4 46 SAP FN 3B 15 15 3 6 3 3 8 7 0 9 7 SAP FN 3B 16 18 5 4 5 4 9 9 0 6 8 SAP

195 Order One – Selective Attention to Pictures Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 3B 17 15 4 5 4 2 7 8 4 9 11 SAP MM 3B 18 11 5 3 2 1 6 5 1 12 29 SAP MM 3B 19 19 4 5 4 6 10 9 3 2 30 SAP MM 3B 20 20 4 6 5 5 10 10 0 4 37 SAP MM 3B 21 19 3 5 5 6 12 7 0 5 39 SAP MM 3B 22 19 5 5 4 5 8 11 1 4 4 SAP MM 3B 23 22 5 6 5 6 11 11 0 2 40 SAP MM 3B 24 18 5 4 5 4 9 9 2 5 9 SAP MN 3B 25 16 5 5 3 3 6 10 0 8 11 SAP MN 3B 26 20 5 5 4 6 10 10 0 4 12 SAP MN 3B 27 16 5 5 4 2 9 7 0 8 13 SAP MN 3B 28 20 5 5 5 5 10 10 1 3 25 SAP MN 3B 29 20 5 6 5 5 10 10 0 4 26 SAP MN 3B 30 15 4 5 3 3 6 9 2 8 37 SAP MN 3B 31 18 3 6 4 5 10 8 0 6 39 SAP MN 3B 32 17 4 6 3 4 9 8 0 7 46 SAP

196 Order One – Selective Attention to Pictures Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 3B 1 24 6 6 6 6 12 12 0 0 15 SAP FM 3B 2 19 5 4 4 6 11 8 0 5 16 SAP FM 3B 3 20 6 4 6 4 10 10 1 3 25 SAP FM 3B 4 17 4 4 4 5 8 9 0 7 26 SAP FM 3B 5 21 6 5 5 5 10 11 0 3 34 SAP FM 3B 6 15 4 5 2 4 9 6 5 7 40 SAP FM 3B 7 20 6 6 5 3 11 9 0 4 50 SAP FM 3B 8 22 5 6 6 5 12 10 0 2 9 SAP FN 3B 9 19 4 3 6 6 9 10 1 4 16 SAP FN 3B 10 21 5 5 5 6 11 10 0 3 2 SAP FN 3B 11 22 4 6 6 5 11 11 1 2 28 SAP FN 3B 12 19 4 5 5 5 10 9 1 4 29 SAP FN 3B 13 22 5 5 6 6 11 11 1 2 30 SAP FN 3B 14 23 6 6 5 6 12 11 0 1 46 SAP FN 3B 15 19 4 4 5 6 10 9 0 5 7 SAP FN 3B 16 20 5 5 4 6 10 10 0 4 8 SAP

197 Order One – Selective Attention to Pictures Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 3B 17 22 5 6 5 6 11 11 1 1 11 SAP MM 3B 18 20 5 4 5 6 12 8 2 2 29 SAP MM 3B 19 18 3 4 5 6 9 9 2 4 30 SAP MM 3B 20 23 6 5 6 6 11 12 0 1 37 SAP MM 3B 21 22 6 4 6 6 11 11 0 2 39 SAP MM 3B 22 21 6 4 5 6 12 9 0 3 4 SAP MM 3B 23 23 6 5 6 6 11 12 0 1 40 SAP MM 3B 24 21 6 4 6 5 10 11 0 3 9 SAP MN 3B 25 21 6 5 4 6 11 10 0 3 11 SAP MN 3B 26 23 6 6 5 6 12 11 0 1 12 SAP MN 3B 27 19 5 5 3 5 11 8 2 3 13 SAP MN 3B 28 24 6 6 6 6 12 12 0 0 25 SAP MN 3B 29 21 5 6 3 6 11 9 3 2 26 SAP MN 3B 30 18 4 3 5 6 10 8 3 3 37 SAP MN 3B 31 21 5 5 5 6 10 11 2 1 39 SAP MN 3B 32 19 4 5 4 6 9 10 3 2 46 SAP

198 Order Two – Dual Task Pomp and Circumstance First – Bailiff’s Daughter Second Demographic Data Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years FM 4PC 1 F 20 U Music therapy Yes 17 DT FM 4PC 2 F 23 G Music performance Yes 18 DT FM 4PC 3 F 23 G Music education Yes 2 DT FM 4PC 4 F 20 U Music education Yes 27 DT FM 4PC 5 F 18 U Music education Yes 3 DT FM 4PC 6 F 23 G Music education Yes 38 DT FM 4PC 7 F 23 U Music therapy Yes 42 DT FM 4PC 8 F 21 U Music therapy Yes 43 DT FN 4PC 9 F 21 U English X 20 DT FN 4PC 10 F 22 U Sociology X 21 DT FN 4PC 11 F 18 U Undecided X 23 DT FN 4PC 12 F 18 U Undecided X 34 DT FN 4PC 13 F 19 U Psychology X 38 DT FN 4PC 14 F 21 U Humanities X 45 DT FN 4PC 15 F 19 U Communications X 5 DT FN 4PC 16 F 18 U Psychology X 9 DT

199 Order Two – Dual Task Pomp and Circumstance First – Bailiff’s Daughter Second Demographic Data Continued Un / Music No < 3 Code Tape Gender Age Degree > 3 years grad Major training years MM 4PC 17 M 26 G Music education Yes 12 DT MM 4PC 18 M 20 U Music education Yes 13 DT MM 4PC 19 M 19 U Music - business Yes 22 DT MM 4PC 20 M 25 G Music theory Yes 23 DT MM 4PC 21 M 22 U Music education Yes 3 DT MM 4PC 22 M 20 U Music education Yes 42 DT MM 4PC Music 23 M 21 U Yes 43 DT performance MM 4PC 24 M 19 U Music - BA Yes X2 DT MN 4PC 25 M 19 U Business X 18 DT MN 4PC 26 M 19 U Communications X 21 DT MN 4PC 27 M 20 U Dietetics X 28 DT MN 4PC Biological 28 M 19 U X 3 DT science/pre med MN 4PC Computer 29 M 19 U X 35 DT science MN 4PC International 30 M 19 U X 44 DT affairs MN 4PC 31 M 18 U Psychology X 45 DT MN 4PC Residential 32 M 20 U X 7 DT sciences/housing

200 Order Two – Dual Task
Non-music Majors – Description of Music Training

Code – Description of Music Training
FN 20 – Band 7 years; chorus
FN 21 – Choir 8 years; band 6 years
FN 23 – Band 7 years
FN 34 – Voice 10 years
FN 38 – Choir 2 years
FN 45 – Voice 8 years
FN 5 – Cello/viola 1 year, trumpet 2 years, church choir
FN 9 – Band 7 years; jazz 2 years
MN 18 – Piano 4 years; guitar 9 years; choir 4 years
MN 28 – Band 4 years
MN 3 – Horn 6 years
MN 35 – 2 HS musicals
MN 44 – Band 1 year; piano 1 year
MN 45 – Piano 1 year; guitar 1 year
MN 7 – Band 9 years; piano 3 years

201 Order Two – Dual Task Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation P&C P&C Bail Bail P&C Bailiff Code Tape Listening Performing % % % % Rate Rate Music Pics Music Pics FM 4PC 1 4 4 5 1 40 60 20 80 17 DT FM 4PC 2 2 3 5 1 30 70 50 50 18 DT FM 4PC 3 4 2 5 1 80 20 60 40 2 DT FM 4PC 4 3 3 5 1 40 60 70 30 27 DT FM 4PC 5 1 1 5 1 10 90 60 40 3 DT FM 4PC 6 1 1 5 1 10 90 60 40 38 DT FM 4PC 7 2 1 5 1 10 90 40 60 42 DT FM 4PC 8 4 2 5 1 30 70 60 40 43 DT FN 4PC 9 2 1 3 1 50 50 70 30 20 DT FN 4PC 10 4 1 4 1 20 80 50 50 21 DT FN 4PC 11 3 2 5 2 30 70 80 20 23 DT FN 4PC 12 4 1 5 2 20 80 50 50 34 DT FN 4PC 13 3 1 3 1 10 90 10 90 38 DT FN 4PC 14 1 1 5 1 50 50 60 40 45 DT FN 4PC 15 1 1 5 1 50 50 70 30 5 DT FN 4PC 16 4 1 5 3 30 70 70 30 9 DT

202 Order Two – Dual Task Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation Continued P&C Bailiff P&C P&C Bail Bail Code Tape Listening Performing Fam. Fam. % % % % Rate Rate Music Pics Music Pics MM 4PC 17 2 2 5 1 30 70 50 50 12 DT MM 4PC 18 3 3 5 1 50 50 50 50 13 DT MM 4PC 19 2 1 4 2 60 40 30 70 22 DT MM 4PC 20 1 1 5 1 10 90 30 70 23 DT MM 4PC 21 2 2 5 1 30 70 10 90 3 DT MM 4PC 22 4 1 5 1 10 90 50 50 42 DT MM 4PC 23 4 3 5 1 50 50 70 30 43 DT MM 4PC 24 3 2 4 1 30 70 40 60 X2 DT MN 4PC 25 1 2 5 3 0 100 50 50 18 DT MN 4PC 26 3 1 4 2 30 70 30 70 21 DT MN 4PC 27 2 1 5 1 30 70 80 20 28 DT MN 4PC 28 4 1 3 1 40 60 50 50 3 DT MN 4PC 29 3 1 3 1 30 70 40 60 35 DT MN 4PC 30 1 1 5 1 40 60 60 40 44 DT MN 4PC 31 1 1 4 1 60 40 50 50 45 DT MN 4PC 32 3 1 5 2 30 70 40 60 7 DT

203 Order Two – Dual Task Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 4PC 1 19 6 6 4 3 11 8 0 5 17 DT FM 4PC 2 22 5 6 5 6 11 11 0 2 18 DT FM 4PC 3 14 5 3 4 2 7 7 2 8 2 DT FM 4PC 4 12 4 3 3 2 5 7 1 11 27 DT FM 4PC 5 18 5 6 3 4 7 11 2 4 3 DT FM 4PC 6 17 5 5 3 4 7 10 1 6 38 DT FM 4PC 7 14 2 6 2 4 7 7 5 6 42 DT FM 4PC 8 19 5 3 5 6 10 9 1 4 43 DT FN 4PC 9 17 4 6 4 3 8 9 2 6 20 DT FN 4PC 10 13 3 4 3 3 5 8 5 9 21 DT FN 4PC 11 20 3 6 5 6 10 10 0 4 23 DT FN 4PC 12 8 2 3 1 2 4 4 6 13 34 DT FN 4PC 13 19 3 4 6 6 10 9 1 5 38 DT FN 4PC 14 18 4 6 5 3 10 8 1 6 45 DT FN 4PC 15 18 4 6 3 5 9 9 2 4 5 DT FN 4PC 16 17 5 5 3 4 7 10 0 7 9 DT

204 Order Two – Dual Task Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 4PC 17 15 3 5 4 5 9 6 1 8 12 DT MM 4PC 18 23 6 6 6 5 12 11 0 1 13 DT MM 4PC 19 20 5 6 5 4 11 9 0 4 22 DT MM 4PC 20 16 4 4 5 3 9 7 0 8 23 DT MM 4PC 21 15 3 3 5 4 5 10 0 9 3 DT MM 4PC 22 12 3 3 3 3 3 9 4 8 42 DT MM 4PC 23 20 6 6 4 4 9 11 1 3 43 DT MM 4PC 24 19 4 5 5 5 8 11 1 4 X2 DT MN 4PC 25 16 4 6 2 4 7 9 3 5 18 DT MN 4PC 26 12 2 4 3 3 7 5 3 9 21 DT MN 4PC 27 10 2 3 1 4 4 6 7 10 28 DT MN 4PC 28 18 5 5 2 6 10 8 0 6 3 DT MN 4PC 29 16 4 5 3 4 8 8 0 8 35 DT MN 4PC 30 14 5 4 2 3 6 8 0 10 44 DT MN 4PC 31 14 5 5 2 2 6 8 0 10 45 DT MN 4PC 32 20 5 6 5 4 9 11 0 4 7 DT

205 Order Two – Dual Task Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 4PC 1 21 6 6 5 4 11 10 1 2 17 DT FM 4PC 2 22 5 6 6 5 12 10 0 2 18 DT FM 4PC 3 22 6 6 5 5 12 10 0 2 2 DT FM 4PC 4 22 6 5 6 5 10 12 0 2 27 DT FM 4PC 5 20 4 5 5 6 10 10 2 2 3 DT FM 4PC 6 24 6 6 6 6 12 12 0 0 38 DT FM 4PC 7 21 5 6 5 5 11 10 1 2 42 DT FM 4PC 8 17 4 3 6 4 9 8 2 6 43 DT FN 4PC 9 20 4 5 4 6 11 9 0 4 20 DT FN 4PC 10 18 5 4 4 5 9 9 2 4 21 DT FN 4PC 11 20 5 5 4 6 12 8 0 4 23 DT FN 4PC 12 18 5 5 4 4 11 7 0 6 34 DT FN 4PC 13 19 4 3 6 6 9 10 0 5 38 DT FN 4PC 14 17 3 4 4 6 8 9 1 6 45 DT FN 4PC 15 20 5 6 4 5 12 8 1 3 5 DT FN 4PC 16 19 4 5 6 4 10 9 0 5 9 DT

206 Order Two – Dual Task Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 4PC 17 19 4 4 5 6 10 9 0 5 12 DT MM 4PC 18 16 4 3 5 4 9 7 3 5 13 DT MM 4PC 19 23 5 6 6 6 12 11 0 1 22 DT MM 4PC 20 23 5 6 6 6 12 11 0 1 23 DT MM 4PC 21 23 5 6 6 6 12 11 0 1 3 DT MM 4PC 22 21 5 6 4 6 11 10 2 1 42 DT MM 4PC 23 22 5 5 6 6 11 11 0 2 43 DT MM 4PC 24 19 5 5 4 5 11 8 3 2 X2 DT MN 4PC 25 21 5 5 5 6 10 11 2 1 18 DT MN 4PC 26 11 1 3 2 5 7 4 7 9 21 DT MN 4PC 27 19 4 5 4 6 10 9 1 5 28 DT MN 4PC 28 19 6 4 4 5 10 9 1 5 3 DT MN 4PC 29 20 5 5 4 6 11 9 1 4 35 DT MN 4PC 30 19 6 5 3 5 11 8 0 5 44 DT MN 4PC 31 13 4 5 2 2 7 6 1 10 45 DT MN 4PC 32 22 6 6 5 5 11 11 2 0 7 DT

207 Order Two – Selective Attention to Music Pomp and Circumstance First –Second Bailiff’s Daughter Demographic Data Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years FM 5PC 1 F 22 U Music education Yes 1 SAM FM 5PC 2 F 20 U Music therapy Yes 10 SAM FM 5PC 3 F 21 U Music therapy Yes 11 SAM FM 5PC 4 F 18 U Music therapy Yes 22 SAM FM 5PC 5 F 24 G Music therapy Yes 29 SAM FM 5PC 6 F 21 U Music education Yes 39 SAM FM 5PC 7 F 22 U Music education Yes 44 SAM FM 5PC 8 F 33 G Music therapy Yes 46 SAM FN 5PC Communication 9 F 23 G X 10 SAM disorder FN 5PC 10 F 19 U Social work X 17 SAM FN 5PC Mechanical 11 F 19 U X 18 SAM engineering FN 5PC Education - social 12 F 20 U X 19 SAM science FN 5PC 13 F 21 U Accounting X 35 SAM FN 5PC 14 F 18 U Advertising X 39 SAM FN 5PC Communication 15 F 22 U X 4 SAM disorder FN 5PC 16 F 19 U Political science X 47 SAM

208 Order Two– Selective Attention to Music Bailiff’s Daughter First – Pomp and Circumstance Second Demographic Data Continued Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years MM 5PC Music 17 M 18 U Yes 1 SAM composition MM 5PC 18 M 38 G Music education Yes 16 SAM MM 5PC 19 M 26 U Music education Yes 24 SAM MM 5PC 20 M 25 G Music theory Yes 25 SAM MM 5PC 21 M 19 U Music education Yes 33 SAM MM 5PC 22 M 23 G Musicology Yes 35 SAM MM 5PC Music 23 M 18 U Yes 41 SAM performance MM 5PC Music 24 M 19 U Yes 8 SAM composition MN 5PC Computer 25 M 18 U X 15 SAM science MN 5PC Exercise 26 M 20 U X 16 SAM physiology MN 5PC Computer 27 M 22 U X 2 SAM science MN 5PC 28 M 19 U Business X 22 SAM MN 5PC Civil 29 M 19 U X 24 SAM engineering MN 5PC 30 M 19 U Business X 47 SAM MN 5PC Biological 31 M 19 U X 48 SAM science MN 5PC 32 M 20 U Communications X 51 SAM

209 Order Two – Selective Attention to Music
Non-music Majors – Description of Music Training

Code – Description of Music Training
FN 10 – Band 11 years
FN 17 – Choirs 10 years
FN 18 – Choir 8 years; orchestra 7 years
FN 19 – Piano 16 years; music history & theory
FN 35 – Choir 2 years
FN 39 – Choir 8 years
FN 47 – Choir 2 years; band 2 years
MN 15 – Orchestra 5 years; guitar 4 years
MN 16 – Choir 5 years
MN 2 – Band 10 years
MN 22 – Band 7 years
MN 51 – Band 6 years

210 Order Two – Selective Attention to Music Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation P&C P&C Bail Bail P&C Bailiff Code Tape Listening Performing % % % % Rate Rate Music Pics Music Pics FM 5PC 1 4 4 5 1 60 40 80 20 1 SAM FM 5PC 2 3 2 5 1 80 20 80 20 10 SAM FM 5PC 3 1 2 5 1 50 50 70 30 11 SAM FM 5PC 4 2 1 5 1 10 90 60 40 22 SAM FM 5PC 5 4 4 5 1 60 40 50 50 29 SAM FM 5PC 6 3 3 5 1 80 20 80 20 39 SAM FM 5PC 7 2 3 5 1 40 60 80 20 44 SAM FM 5PC 8 2 2 5 1 50 50 80 20 46 SAM FN 5PC 9 1 1 5 1 70 30 90 10 10 SAM FN 5PC 10 3 2 5 1 30 70 70 30 17 SAM FN 5PC 11 3 1 5 1 70 30 60 40 18 SAM FN 5PC 12 2 2 5 1 70 30 90 10 19 SAM FN 5PC 13 2 1 4 2 80 20 60 40 35 SAM FN 5PC 14 4 1 5 2 30 70 80 20 39 SAM FN 5PC 15 1 1 5 1 20 80 40 60 4 SAM FN 5PC 16 2 1 5 1 80 20 60 40 47 SAM

211 Order Two – Selective Attention to Music Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation Continued P&C Bailiff P&C P&C Bail Bail Code Tape Listening Performing Fam. Fam. % % % % Rate Rate Music Pics Music Pics MM 5PC 17 2 4 5 1 80 20 70 30 1 SAM MM 5PC 18 2 2 5 1 60 40 80 20 16 SAM MM 5PC 19 4 3 5 1 10 90 60 40 24 SAM MM 5PC 20 2 1 4 1 80 20 80 20 25 SAM MM 5PC 21 4 2 5 1 40 60 70 30 33 SAM MM 5PC 22 4 2 5 1 20 80 70 30 35 SAM MM 5PC 23 2 1 5 1 60 40 70 30 41 SAM MM 5PC 24 4 1 5 1 40 60 70 30 8 SAM MN 5PC 25 4 2 5 3 70 30 60 40 15 SAM MN 5PC 26 3 3 4 1 30 70 90 10 16 SAM MN 5PC 27 2 1 5 2 30 70 60 40 2 SAM MN 5PC 28 4 1 5 1 50 50 80 20 22 SAM MN 5PC 29 2 1 4 1 30 70 80 20 24 SAM MN 5PC 30 3 1 4 1 60 40 90 10 47 SAM MN 5PC 31 1 1 5 1 70 30 70 30 48 SAM MN 5PC 32 2 1 5 1 60 40 80 20 51 SAM

Order Two – Selective Attention to Music Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter

Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 5PC 1 19 6 6 5 2 10 9 4 2 1 SAM FM 5PC 2 20 3 6 5 6 11 9 1 3 10 SAM FM 5PC 3 20 5 6 5 4 9 11 0 4 11 SAM FM 5PC 4 21 5 6 4 6 12 9 0 3 22 SAM FM 5PC 5 17 3 3 4 5 8 9 2 5 29 SAM FM 5PC 6 17 4 3 5 5 8 9 0 7 39 SAM FM 5PC 7 21 5 6 5 5 11 10 0 3 44 SAM FM 5PC 8 16 3 5 4 4 9 7 1 7 46 SAM FN 5PC 9 16 1 5 5 5 8 8 1 8 10 SAM FN 5PC 10 21 5 5 5 6 11 10 1 2 17 SAM FN 5PC 11 18 6 5 4 3 9 9 1 6 18 SAM FN 5PC 12 15 5 5 2 3 7 8 4 7 19 SAM FN 5PC 13 12 4 6 0 2 4 8 2 11 35 SAM FN 5PC 14 19 5 6 5 3 10 9 0 5 39 SAM FN 5PC 15 9 3 3 1 2 2 7 4 14 4 SAM FN 5PC 16 17 3 5 4 5 8 9 1 6 47 SAM

213 Order Two – Selective Attention to Music Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 5PC 17 16 4 4 4 4 7 9 3 7 1 SAM MM 5PC 18 19 6 6 3 4 9 10 1 4 16 SAM MM 5PC 19 17 4 4 5 4 9 8 1 5 24 SAM MM 5PC 20 17 5 6 3 3 8 9 5 6 25 SAM MM 5PC 21 18 3 5 5 5 8 10 1 6 33 SAM MM 5PC 22 16 5 3 4 4 7 9 3 5 35 SAM MM 5PC 23 18 3 6 4 5 10 8 1 5 41 SAM MM 5PC 24 18 5 5 5 3 9 9 0 6 8 SAM MN 5PC 25 12 2 4 2 4 7 5 0 12 15 SAM MN 5PC 26 15 3 4 5 3 7 8 2 7 16 SAM MN 5PC 27 16 5 5 4 2 9 7 0 8 2 SAM MN 5PC 28 19 5 6 5 3 10 9 0 5 22 SAM MN 5PC 29 17 5 5 4 3 7 10 2 6 24 SAM MN 5PC 30 12 3 3 1 5 5 7 6 8 47 SAM MN 5PC 31 17 2 6 5 4 10 7 1 6 48 SAM MN 5PC 32 21 6 6 4 5 11 10 0 3 51 SAM

214 Order Two – Selective Attention to Music Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 5PC 1 21 6 4 6 5 12 9 3 2 1 SAM FM 5PC 2 23 6 5 6 6 11 12 0 1 10 SAM FM 5PC 3 23 5 6 6 6 12 11 1 0 11 SAM FM 5PC 4 20 6 5 4 5 11 9 4 1 22 SAM FM 5PC 5 23 6 6 5 6 11 12 1 1 29 SAM FM 5PC 6 20 4 4 6 6 11 9 1 4 39 SAM FM 5PC 7 24 6 6 6 6 12 12 0 0 44 SAM FM 5PC 8 23 5 6 6 6 12 11 0 1 46 SAM FN 5PC 9 21 5 4 6 6 11 10 2 2 10 SAM FN 5PC 10 24 6 6 6 6 12 12 0 0 17 SAM FN 5PC 11 18 5 4 5 4 9 9 4 4 18 SAM FN 5PC 12 19 6 6 3 4 10 9 2 4 19 SAM FN 5PC 13 12 6 3 2 1 7 5 4 8 35 SAM FN 5PC 14 24 6 6 6 6 12 12 0 0 39 SAM FN 5PC 15 20 5 5 4 6 10 10 1 4 4 SAM FN 5PC 16 22 5 5 6 6 11 11 0 2 47 SAM

215 Order Two – Selective Attention to Music Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 5PC 17 17 2 4 5 6 10 7 2 5 1 SAM MM 5PC 18 20 5 6 4 5 11 9 2 2 16 SAM MM 5PC 19 20 5 6 4 5 11 9 2 2 24 SAM MM 5PC 20 21 6 5 4 6 11 10 1 2 25 SAM MM 5PC 21 19 3 5 5 6 9 10 1 4 33 SAM MM 5PC 22 20 6 3 6 5 11 9 4 0 35 SAM MM 5PC 23 21 5 5 5 6 10 11 2 1 41 SAM MM 5PC 24 19 4 6 4 5 11 8 3 2 8 SAM MN 5PC 25 22 5 5 6 6 12 10 0 2 15 SAM MN 5PC 26 23 6 5 6 6 11 12 0 1 16 SAM MN 5PC 27 21 5 5 6 5 10 11 0 3 2 SAM MN 5PC 28 22 5 6 5 6 12 10 0 2 22 SAM MN 5PC 29 20 6 5 3 6 10 10 1 3 24 SAM MN 5PC 30 18 4 5 4 5 9 9 3 4 47 SAM MN 5PC 31 16 2 4 5 5 9 7 2 7 48 SAM MN 5PC 32 21 5 5 6 5 12 9 2 1 51 SAM

216 Order Two – Selective Attention to Pictures Bailiff’s Daughter First – Pomp and Circumstance Second Demographic Data Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years FM 6PC 1 F 18 U Music therapy Yes 19 SAP FM 6PC Music 2 F 20 U Yes 23 SAP performance FM 6PC 3 F 20 U Music education Yes 24 SAP FM 6PC 4 F 22 U Music education Yes 35 SAP FM 6PC 5 F 21 U Music therapy Yes 4 SAP FM 6PC 6 F 21 U Music education Yes 41 SAP FM 6PC 7 F 21 U Music education Yes 47 SAP FM 6PC 8 F 22 G Music education Yes 5 SAP FN 6PC 9 F 27 G Child development X 12 SAP FN 6PC Education - social 10 F 19 U X 15 SAP science FN 6PC 11 F 21 U Biological science X 24 SAP FN 6PC 12 F 20 U Biological science X 26 SAP FN 6PC Media pro. 13 F 18 U X 36 SAP /creative writing FN 6PC Information 14 F 18 U X 40 SAP technology FN 6PC 15 F 18 U Nursing X 44 SAP FN 6PC 16 F 21 U Nursing X 6 SAP

217 Order Two – Selective Attention to Pictures Bailiff’s Daughter First – Pomp and Circumstance Second Demographic Data Continued Un / Music No < 3 > 3 Code Tape Gender Age Degree grad Major training years years MM 6PC 17 M 21 U Music education Yes 15 SAP MM 6PC 18 M 22 U Music education Yes 21 SAP MM 6PC 19 M 27 G Music performance Yes 31 SAP MM 6PC 20 M 21 U Music education Yes 32 SAP MM 6PC 21 M 18 U Music education Yes 36 SAP MM 6PC 22 M 19 U Music education Yes 44 SAP MM 6PC 23 M 28 G Music education Yes 5 SAP MM 6PC 24 M 30 G Music - conducting Yes 6 SAP MN 6PC 25 M 19 U Civil engineering X 14 SAP MN 6PC 26 M 20 U Information technology X 17 SAP MN 6PC 27 M 19 U Art, studio X 27 SAP MN 6PC 28 M 19 U Political science X 29 SAP MN 6PC 29 M 22 U Dietetics/pre-med X 38 SAP MN 6PC 30 M 20 U Biological science X 42 SAP MN 6PC 31 M 19 U Communications X 43 SAP MN 6PC Engineering/electric & 32 M 21 U X 5 SAP computers

218 Order Two – Selective Attention to Pictures Non-music Majors – Description of Music Training Code Description of Music Training FN 15 Band 10 years FN 24 Piano 8 years; band 4 years FN 26 4 years voice; 2 years guitar; theory FN 36 Piano 4; band 4; chorus 6 FN 40 Choir 10 years FN 44 Piano 10 years; choir 4 years FN 6 Piano 5 years MN 14 Band 5 years; self-taught piano and guitar MN 17 Band 10 years MN 27 3 years bass guitar MN 29 Piano 3 years MN 38 Band 8 years MN 42 Band 6 years MN 43 Piano 7 years MN 5 Band 10 years

219 Order Two – Selective Attention to Pictures Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation P&C P&C Bail Bail P&C Bailiff Code Tape Listening Performing % % % % Rate Rate Music Pics Music Pics FM 6PC 1 3 2 5 3 20 80 10 90 19 SAP FM 6PC 2 4 2 4 1 40 60 10 90 23 SAP FM 6PC 3 4 2 5 1 80 20 80 20 24 SAP FM 6PC 4 1 2 5 1 20 80 20 80 35 SAP FM 6PC 5 2 1 4 1 50 50 30 70 4 SAP FM 6PC 6 3 2 5 1 20 80 50 50 41 SAP FM 6PC 7 4 2 5 2 30 70 20 80 47 SAP FM 6PC 8 1 1 5 2 20 80 40 60 5 SAP FN 6PC 9 2 1 4 1 20 80 40 60 12 SAP FN 6PC 10 4 1 4 1 10 90 10 90 15 SAP FN 6PC 11 3 1 5 1 20 80 30 70 24 SAP FN 6PC 12 4 2 4 2 60 40 60 40 26 SAP FN 6PC 13 3 1 4 1 90 10 80 20 36 SAP FN 6PC 14 2 1 4 1 20 80 10 90 40 SAP FN 6PC 15 2 1 5 2 20 80 40 60 44 SAP FN 6PC 16 3 1 5 3 70 30 30 70 6 SAP

220 Order Two – Selective Attention to Pictures Hours Ratings for Listening and Performing, Familiarity Ratings for Pomp and Circumstance (P&C) and The Bailiff’s Daughter (Bail) Perception of Attention Allocation Continued P&C Bailiff P&C P&C Bail Bail Code Tape Listening Performing Fam. Fam. % % % % Rate Rate Music Pics Music Pics MM 6PC 17 3 2 5 1 20 80 40 60 15 SAP MM 6PC 18 2 2 5 1 60 40 50 50 21 SAP MM 6PC 19 4 2 5 2 10 90 50 50 31 SAP MM 6PC 20 4 4 5 1 70 30 40 60 32 SAP MM 6PC 21 4 3 5 1 20 80 60 40 36 SAP MM 6PC 22 2 3 5 2 30 70 40 60 44 SAP MM 6PC 23 4 1 5 1 80 20 20 80 5 SAP MM 6PC 24 1 2 5 1 20 80 40 60 6 SAP MN 6PC 25 4 2 5 1 60 40 20 80 14 SAP MN 6PC 26 3 1 5 1 90 10 60 40 17 SAP MN 6PC 27 3 1 5 3 70 30 20 80 27 SAP MN 6PC 28 1 1 4 1 50 50 80 20 29 SAP MN 6PC 29 2 1 5 2 40 60 20 80 38 SAP MN 6PC 30 2 1 5 3 30 70 40 60 42 SAP MN 6PC 31 4 1 5 1 20 80 80 20 43 SAP MN 6PC 32 1 1 5 1 20 80 30 70 5 SAP

Order Two – Selective Attention to Pictures Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter

Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 6PC 1 14 5 5 1 3 6 8 2 10 19 SAP FM 6PC 2 21 4 6 5 6 12 9 0 3 23 SAP FM 6PC 3 14 3 5 2 4 6 8 3 8 24 SAP FM 6PC 4 15 3 4 3 5 9 6 1 9 35 SAP FM 6PC 5 13 3 3 3 4 9 4 3 10 4 SAP FM 6PC 6 22 5 6 6 5 10 12 0 2 41 SAP FM 6PC 7 16 5 4 4 3 9 7 0 8 47 SAP FM 6PC 8 21 5 6 4 6 12 9 0 3 5 SAP FN 6PC 9 14 3 4 3 4 7 7 1 9 12 SAP FN 6PC 10 17 6 3 4 4 6 11 1 7 15 SAP FN 6PC 11 18 4 5 4 5 10 8 0 6 24 SAP FN 6PC 12 18 5 6 3 4 9 9 0 6 26 SAP FN 6PC 13 17 4 5 3 5 10 7 2 5 36 SAP FN 6PC 14 14 6 6 2 0 7 7 0 10 40 SAP FN 6PC 15 19 6 6 4 3 9 10 1 5 44 SAP FN 6PC 16 19 6 5 4 4 9 10 0 5 6 SAP

222 Order Two – Selective Attention to Pictures Total Score, Question Type Score, Half of Test Score, and Error Score For The Bailiff’s Daughter Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 6PC 17 12 3 3 4 2 8 4 1 11 15 SAP MM 6PC 18 14 3 4 4 3 7 7 1 9 21 SAP MM 6PC 19 19 6 4 5 4 10 9 5 1 31 SAP MM 6PC 20 9 3 4 2 0 6 3 8 9 32 SAP MM 6PC 21 17 4 6 2 5 10 7 3 6 36 SAP MM 6PC 22 23 6 6 6 5 12 11 0 1 44 SAP MM 6PC 23 12 3 3 3 3 4 8 1 11 5 SAP MM 6PC 24 20 6 5 5 4 10 10 0 4 6 SAP MN 6PC 25 17 4 5 4 4 8 9 0 7 14 SAP MN 6PC 26 20 6 6 5 3 10 10 0 4 17 SAP MN 6PC 27 13 3 3 3 4 6 7 2 11 27 SAP MN 6PC 28 16 3 5 4 4 7 9 2 6 29 SAP MN 6PC 29 13 3 3 3 4 7 6 2 11 38 SAP MN 6PC 30 19 4 5 5 5 11 8 1 5 42 SAP MN 6PC 31 18 5 6 3 4 8 10 0 6 43 SAP MN 6PC 32 21 5 6 5 5 10 11 0 3 5 SAP

223 Order Two – Selective Attention to Pictures Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music FM 6PC 1 20 5 4 5 6 11 9 0 4 19 SAP FM 6PC 2 24 6 6 6 6 12 12 0 0 23 SAP FM 6PC 3 18 5 3 4 6 10 8 4 3 24 SAP FM 6PC 4 21 4 6 5 6 12 9 2 1 35 SAP FM 6PC 5 21 5 4 6 6 12 9 0 3 4 SAP FM 6PC 6 23 5 6 6 6 12 11 0 1 41 SAP FM 6PC 7 23 6 5 6 6 11 12 0 1 47 SAP FM 6PC 8 23 6 6 5 6 12 11 0 1 5 SAP FN 6PC 9 20 4 5 5 6 10 10 0 4 12 SAP FN 6PC 10 22 5 6 6 5 11 11 0 2 15 SAP FN 6PC 11 21 4 6 5 6 12 9 1 2 24 SAP FN 6PC 12 21 6 6 4 5 12 9 0 3 26 SAP FN 6PC 13 21 5 6 4 6 12 9 0 3 36 SAP FN 6PC 14 18 6 6 3 3 12 6 0 6 40 SAP FN 6PC 15 20 5 5 4 6 11 9 0 4 44 SAP FN 6PC 16 18 4 4 5 5 10 8 0 6 6 SAP

224 Order Two – Selective Attention to Pictures Total Score, Question Type Score, Half of Test Score, and Error Score For Pomp and Circumstance Continued Total Pair Picture Music New #1- #13- Error Error Code Tape Score Correct New New Pair 12 24 Pictures Music MM 6PC 17 21 5 6 5 5 11 10 1 2 15 SAP MM 6PC 18 18 4 4 5 5 10 8 2 4 21 SAP MM 6PC 19 17 4 5 4 4 10 7 6 0 31 SAP MM 6PC 20 7 3 2 2 0 6 1 11 11 32 SAP MM 6PC 21 20 4 5 5 6 10 10 2 2 36 SAP MM 6PC 22 24 6 6 6 6 12 12 0 0 44 SAP MM 6PC 23 21 4 6 5 6 12 9 1 2 5 SAP MM 6PC 24 22 5 6 6 5 11 11 1 1 6 SAP MN 6PC 25 21 5 4 6 6 10 11 0 3 14 SAP MN 6PC 26 21 5 4 6 6 11 10 0 3 17 SAP MN 6PC 27 20 5 6 4 4 11 8 0 5 27 SAP MN 6PC 28 18 4 5 4 5 11 7 2 4 29 SAP MN 6PC 29 18 5 4 4 5 10 8 1 6 38 SAP MN 6PC 30 19 4 4 5 6 10 9 0 5 42 SAP MN 6PC 31 23 5 6 6 6 12 11 0 1 43 SAP MN 6PC 32 21 5 6 5 5 11 10 1 2 5 SAP

REFERENCES

Allport, D. A., Antonis, B., & Reynolds, P. (1972). On the division of attention: A disproof of the single channel hypothesis. Quarterly Journal of Experimental Psychology, 24, 225-235.

Andrews, M. W., & Dowling, W. J. (1991). The development of perception of interleaved melodies and control of auditory attention. Music Perception, 8, 349-368.

Baddeley, A. D. (1976). The psychology of memory. New York: Basic Books, Inc.

Baddeley, A. D. (1986). Working memory (Oxford psychology series; no. 11). New York: Oxford University Press.

Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory, volume 8. New York: Academic Press.

Bahrick, L. E., & Lickliter, R. (2000). Intersensory redundancy guides attentional selectivity and perceptual learning in infancy. Developmental Psychology, 36, 190-201.

Balch, W. R. (1984). The effects of auditory and visual interference on immediate recall of melody. Memory and Cognition, 12, 581-589.

Baron-Cohen, S. (2005). The essential difference: The male and female brain. Phi Kappa Phi Forum, 85(1), Winter/Spring, 23-26.

Bartgis, J., Lilly, A. R., & Thomas, D. G. (2003). Event-related potential and behavioral measures of attention in 5-, 7-, and 9-year-olds. Journal of General Psychology, 130, 311-335.

Beaman, C. P. (2004). The irrelevant sound phenomenon revisited: What role for working memory capacity? Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 1106-1118.

Bella, S. D., Peretz, I., & Aronoff, N. (2003). Time course of melody recognition: A gating paradigm. Perception and Psychophysics, 65, 1019-1028.

Benedict, R. H. B., Lockwood, A. H., Shucard, J. L., Shucard, D. W., Wack, D., & Murphy, B. W. (1998). Functional neuroimaging of attention in the auditory modality. NeuroReport, 9, 121-126.

Berti, S., & Schroger, E. (2003). Working memory controls involuntary attention switching: Evidence from an auditory distraction paradigm. European Journal of Neuroscience, 17, 1119-1122.

Berz, W. L. (1995). Working memory in music: A theoretical model. Music Perception, 12, 353-364.

Besson, M., & Faita, F. (1995). An event-related potential study of musical expectancy: Comparison of musicians with nonmusicians. Journal of Experimental Psychology: Human Perception and Performance, 21, 1278-1296.

Bigand, E., McAdams, S., & Foret, S. (2000). Divided attention in music. International Journal of Psychology, 35, 270-278.

Boltz, M. G. (2001). Musical soundtracks as a schematic influence on the cognitive processing of filmed events. Music Perception, 18, 427-454.

Bonnel, A. M., Faita, F., Peretz, I., & Besson, M. (2001). Divided attention between lyrics and tunes of operatic songs: Evidence for independent processing. Perception and Psychophysics, 63, 1201-1213.

Bonnel, A. M., & Hafter, E. R. (1998). Divided attention between simultaneous auditory and visual signals. Perception and Psychophysics, 60, 179-190.

Bonnel, A. M., & Prinzmetal, W. (1998). Dividing attention between the color and the shape of objects. Perception and Psychophysics, 60, 113-124.

Borling, J. A. (1981). The effects of sedative music on alpha rhythms and focused attention in high-creative and low-creative students. Journal of Music Therapy, 18, 101-106.

Broadbent, D. E. (1971). Decision and stress. New York: Academic Press.

Brodsky, W., Henik, A., Rubinstein, B.-S., & Zorman, M. (2003). Auditory imagery from musical notation in expert musicians. Perception and Psychophysics, 65, 602-612.

Burleson, S. J., Center, D. B., & Reeves, H. (1989). The effect of background music on task performance in psychotic children. Journal of Music Therapy, 26, 198-205.

Byo, J. L. (1997). The effects of texture and number of parts on the ability of music majors to detect performance errors. Journal of Research in Music Education, 45, 51-66.

Caldwell, C., & Hibbert, S. A. (2002). The influence of music tempo and musical preference on restaurant patrons’ behavior. Psychology and Marketing, 19, 895-917.

Carroll-Phelan, B., & Hampson, P. J. (1996). Multiple components of the perception of musical sequences: A cognitive neuroscience analysis and some implications for auditory imagery. Music Perception, 13, 517-561.

Cassidy, J. W. (1992). Communication disorders: Effect on children’s ability to label music characteristics. Journal of Music Therapy, 29, 113-124.

Cassidy, J. W. (2001). Listening maps: Undergraduate students’ ability to interpret various iconic representations. Update: Applications of Research in Music Education, 19(2), 15-19.

Cassidy, J. W., & Ditty, K. M. (2001). Gender differences among newborns on transient otoacoustic emissions test for hearing. Journal of Music Therapy, 38, 28-35.

Cassidy, J. W., & Geringer, J. M. (1999). Effects of animated videos on preschool children’s music preferences. Update: Applications of Research in Music Education, 17(2), 3-7.

Choudhury, N., & Gorman, K. S. (2000). The relationship between sustained attention and cognitive performance in 17-24-month old toddlers. Infant and Child Development, 9, 127-146.

Cook, R. B. (1973). Left-right differences in the perception of dichotically presented musical stimuli. Journal of Music Therapy, 10, 59-63.

Cowan, N. (1995). Attention and memory: An integrated framework. Oxford Psychology Series, No. 26. New York: Oxford University Press.

Cowan, N. (1998). Visual and auditory working memory capacity. Trends in Cognitive Sciences, 2(3), 77-78.

Cowan, N. (2005). Working memory capacity. New York: Psychology Press.

Crawford, H. J., & Strapp, C. M. (1994). Effects of vocal and instrumental music on visuospatial and verbal performance as moderated by studying preference and personality. Personality and Individual Differences, 16, 237-245.

Crawley, E. J., Acker-Mills, B. E., Pastore, R. E., & Weil, S. (2002). Change detection in multi-voice music: The role of musical structure, musical training, and task demands. Journal of Experimental Psychology: Human Perception and Performance, 28, 367-378.

Crowder, R. G., Serafine, M. L., & Repp, B. (1990). Physical integration and association by contiguity in memory for words and melodies of songs. Memory and Cognition, 18, 469-476.

Cuddy, L. L., & Cohen, A. J. (1976). Recognition of transposed melodic sequences. Quarterly Journal of Experimental Psychology, 28, 255-270.

Cusack, R., & Carlyon, R. P. (2003). Perceptual asymmetries in audition. Journal of Experimental Psychology: Human Perception and Performance, 29, 713-725.

Cutietta, R. A., & Booth, G. D. (1996). The influence of metre, mode, interval type and contour in repeated melodic free-recall. Psychology of Music, 24, 222-236.

Dalton, P., & Lavie, N. (2004). Auditory attentional capture: Effects of singleton distractor sounds. Journal of Experimental Psychology: Human Perception and Performance, 30, 180-193.

Darrow, A. A., & Goll, H. (1989). The effect of vibrotactile stimuli via the SOMATRON on the identification of rhythmic concepts by hearing impaired children. Journal of Music Therapy, 26, 115-124.

Davidson, L. L., & Banks, W. P. (2003). Selective attention in two-part counterpoint. Music Perception, 21, 3-20.

Davidson, R. J., & Schwartz, G. E. (1977). The influence of musical training on patterns of EEG asymmetry during musical and non-musical self-generation tasks. Psychophysiology, 14, 58-63.

Deliege, I., Melen, M., Stammers, D., & Cross, I. (1996). Musical schemata in real-time listening to a piece of music. Music Perception, 14, 117-160.

Demorest, S. M., & Serlin, R. C. (1997). The integration of pitch and rhythm in musical judgment: Testing age-related trends in novice listeners. Journal of Research in Music Education, 45, 67-79.

Dennis, M., & Hopyan, T. (2001). Rhythm and melody in children and adolescents after left or right temporal lobectomy. Brain and Cognition, 47, 461-469.

Davies, J. B., & Yelland, A. (1977). Effects of two training procedures on the production of melodic contour in short-term memory for tonal sequences. Psychology of Music, 5(2), 3-9.

Dowling, W. J. (1973a). Rhythmic groups and subjective chunks in memory for melodies. Perception and Psychophysics, 14, 37-40.

Dowling, W. J. (1973b). The perception of interleaved melodies. Cognitive Psychology, 5, 322-337.

Dowling, W. J., Tillmann, B., & Ayers, D. F. (2002). Memory and the experience of hearing music. Music Perception, 19, 249-276.

Elgar, E. (1901). Pomp and circumstance military march no. 1 in D (arranged for piano solo). New York: G. Schirmer, Inc.

Flowers, P. J. (2001). Patterns of attention in music listening. Bulletin of the Council for Research in Music Education, 148, 48-59.

Flowers, P. J., & O’Neill, A. A. M. (2005). Self-reported distractions of middle school students in listening to music and prose. Journal of Research in Music Education, 53, 308-321.

Fujioka, T., Trainor, L. J., Ross, B., Kakigi, R., & Pantev, C. (2005). Automatic encoding of polyphonic melodies in musicians and nonmusicians. Journal of Cognitive Neuroscience, 17, 1578-1592.

Gaeta, H., Friedman, D., & Ritter, W. (2003). Auditory selective attention in young and elderly adults: The selection of single versus conjoint features. Psychophysiology, 40, 389-406.

Gardstrom, S. C. (1999). Music exposure and criminal behavior: Perceptions of juvenile offenders. Journal of Music Therapy, 36, 207-221.

Gaser, C., & Schlaug, G. (2003). Brain structures differ between musicians and nonmusicians. Journal of Neuroscience, 23, 9240-9245.

Glass, M. R., Franks, J. R., & Potter, R. E. (1986). A comparison of two tests of auditory selective attention. Language, Speech, and Hearing Services in Schools, 17, 300-306.

Goldstone, S., & Goldfarb, J. L. (1964a). Auditory and visual time judgment. Journal of General Psychology, 70, 369-387.

Goldstone, S., & Goldfarb, J. L. (1964b). Direct comparison of auditory and visual durations. Journal of Experimental Psychology, 67, 483-485.

Goldstone, S., & Lhamon, W. T. (1972). Auditory-visual differences in human temporal judgment. Perceptual and Motor Skills, 34, 623-633.

Gopher, D. (1973). Eye-movement patterns in selective listening tasks of focused attention. Perception and Psychophysics, 14, 259-264.

Gordon, H. W. (1978). Hemispheric asymmetry for dichotically-presented chords in musicians and non-musicians, males and females. Acta Psychologica, 42, 383-395.

Grant, R. E., & LeCroy, S. (1986). Effects of sensory mode input on the performance of rhythmic perception tasks by mentally retarded students. Journal of Music Therapy, 23, 2-9.

Green, T. J., & McKeown, J. D. (2001). Capture of attention in selective frequency listening. Journal of Experimental Psychology: Human Perception and Performance, 27, 1197-1210.

Grover, C. B. (1973). A heritage of songs. Norwood, PA: Norwood Editions.

Gromko, J. E., & Russell, C. (2002). Relationships among young children’s aural perception, listening condition, and accurate reading of graphic listening maps. Journal of Research in Music Education, 50, 333-342.

Hair, H. I. (1997). Divergent research in children’s musical development. Psychomusicology, 16, 26-39.

Hall, M. D., & Blasko, D. G. (2005). Attentional interference in judgments of musical timbre: Individual differences in working memory. Journal of General Psychology, 132, 94-112.

Halpern, A. R., Bartlett, J. C., & Dowling, W. J. (1998). Perception of mode, rhythm, and contour in unfamiliar melodies: Effects of age and experience. Music Perception, 15, 335-355.

Halpern, A. R., & Bower, G. H. (1982). Musical expertise and melodic structure in memory for musical notation. American Journal of Psychology, 95, 31-50.

Harrison, A. C., & O’Neill, S. A. (2000). Children's gender-typed preferences for musical instruments: An intervention study. Psychology of Music, 28, 81-97.

Ho, Y. C., Cheung, M. C., & Chan, A. S. (2003). Music training improves verbal but not visual memory: Cross-sectional and longitudinal exploration in children. Neuropsychology, 17, 439-450.

Huang-Pollock, C. L., Carr, T. H., & Nigg, J. T. (2002). Development of selective attention: Perceptual load influences early versus late attentional selection in children and adults. Developmental Psychology, 38, 363-375.

Hughes, R. W., & Jones, D. M. (2005). The impact of order incongruence between a task-irrelevant auditory sequence and a task-relevant visual sequence. Journal of Experimental Psychology: Human Perception and Performance, 31, 316-327.

Hughes, R. W., Vachon, F., & Jones, D. M. (2005). Auditory attentional capture during serial recall: Violations at encoding of an algorithm-based neural model? Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 736-749.

Hutchinson, S., Lee, L. H.-L., Gaab, N., & Schlaug, G. (2003). Cerebellar volume of musicians. Cerebral Cortex, 13, 943-949.

James, W. (1902). The principles of psychology. New York: H. Holt.

Janata, P., Tillmann, B., & Bharucha, J. J. (2002). Listening to polyphonic music recruits domain-general attention and working memory circuits. Cognitive, Affective, and Behavioral Neuroscience, 2, 121-140.

Jakobson, L. S., Cuddy, L. L., & Kilgour, A. R. (2003). Time tagging: A key to musicians’ superior memory. Music Perception, 20, 307-313.

Jeffries, K. J., Fritz, J. B., & Braun, A. R. (2003). Words in melody: An H2(15)O PET study of brain activation during singing and speaking. NeuroReport, 14(5), 749-754.

Jellison, J. A., & Miller, N. I. (1982). Recall of digit and word sequences by musicians and nonmusicians as a function of spoken or sung input and task. Journal of Music Therapy, 19, 194-209.

Johnson, J., Im-Bolter, N., & Pascual-Leone, J. (2003). Development of mental attention in gifted and mainstream children: The role of mental capacity, inhibition, and speed of processing. Child Development, 74, 1594-1614.

Johnson, C. M., & Stewart, E. E. (2005). Effect of sex and race identification on instrument assignment by music educators. Journal of Research in Music Education, 53, 348-357.

Johnson, J. A., & Zatorre, R. J. (2005). Attention to simultaneous unrelated auditory and visual events: Behavioral and neural correlates. Cerebral Cortex, 15, 1609-1620.

Jones, D. M. (1999). The cognitive psychology of auditory distraction: The 1997 BPS Broadbent lecture. British Journal of Psychology, 90, 167-187.

Jones, M. R., Summerell, L., & Marshburn, E. (1987). Recognizing melodies: A dynamic interpretation. Quarterly Journal of Experimental Psychology, 39A, 89-121.

Kahneman, D., Ben-Ishai, R., & Lotan, M. (1973). Relation of a test of attention to road accidents. Journal of Applied Psychology, 58, 113-115.

Keller, P. E., & Burnham, D. K. (2005). Musical meter in attention to multipart rhythm. Music Perception, 22, 629-661.

Keller, T. A., Cowan, N., & Sauls, J. S. (1995). Can auditory memory for tone pitch be rehearsed? Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 635-645.

Kemner, C., Jonkman, L. M., Kenemans, J. L., Bocker, K. B. E., Verbaten, M. N., & van Engeland, H. (2004). Sources of auditory selective attention and the effects of methylphenidate in children with attention-deficit/hyperactivity disorder. Biological Psychiatry, 55, 776-778.

Kidd, G., Boltz, M., & Jones, M. R. (1984). Some effects of rhythmic context on melody recognition. American Journal of Psychology, 97, 153-173.

Kilgour, A. R., Jakobson, L. S., & Cuddy, L. L. (2000). Music training and rate of presentation as mediators of text and song recall. Memory and Cognition, 28, 700-710.

Kinney, D. K., & Kagan, J. (1976). Infant attention to auditory discrepancy. Child Development, 47, 155-164.

Klahr, D., & MacWhinney, B. (1998). Information processing. In D. Kuhn & R. Siegler (Eds.), W. Damon (Series Ed.), Handbook of child psychology: Vol. 2, Cognition, perception and language (5th ed., pp. 631-678). New York: Wiley.

Klein, R. M. (1977). Attention and visual dominance: A chronometric analysis. Journal of Experimental Psychology: Human Perception and Performance, 3, 365-377.

Klinger, R., Campbell, P. S., & Goolsby, T. (1998). Approaches to children’s song acquisition: Immersion and phrase-by-phrase. Journal of Research in Music Education, 46, 24-34.

Kraemer, D. J. M., Macrae, C. N., Green, A. E., & Kelley, W. M. (2005). Musical imagery: Sound of silence activates auditory cortex. Nature, 434(7030), 158.

Kubey, R., & Larson, R. (1990). The use and experience of the new video media among children and young adolescents. Communication Research, 17, 107-130.

Kubovy, M., & van Valkenburg, D. (2001). Auditory and visual objects. Cognition, 80, 97-126.

LaBerge, D. (1995). Attentional processing in music listening: A cognitive neuroscience approach. Psychomusicology, 14, 20-34.

Larsen, A., McIlhagga, W., Baert, J., & Bundesen, C. (2003). Seeing or hearing? Perceptual independence, modality confusions, and crossmodal congruity effects with focused and divided attention. Perception and Psychophysics, 65, 568-574.

Larson, B. A. (1981). Auditory and visual rhythmic pattern recognition by emotionally disturbed and normal adolescents. Journal of Music Therapy, 18, 128-136.

Lesiuk, T. (2005). The effect of music listening on work performance. Psychology of Music, 33, 173-191.

Lewkowicz, D. J. (2003). Learning and discrimination of audiovisual events in human infants: The hierarchical relation between intersensory temporal synchrony and rhythmic pattern cues. Developmental Psychology, 39, 795-804.

Lukas, J. H. (1980). Human auditory attention: The olivocochlear bundle may function as a peripheral filter. Psychophysiology, 17, 444-452.

Macken, W. J., Tremblay, S., Houghton, R. J., Nicholls, A. P., & Jones, D. M. (2003). Does auditory streaming require attention? Evidence from attentional selectivity in short-term memory. Journal of Experimental Psychology: Human Perception and Performance, 29, 43-51.

Madsen, C. K., & Coggiola, J. C. (2001). The effect of manipulating a CRDI dial on the focus of attention of musicians/nonmusicians and perceived aesthetic response. Bulletin of the Council for Research in Music Education, 149, 13-22.

Madsen, C. K., & Geringer, J. M. (1990). Differential patterns of music listening: Focus of attention of musicians and nonmusicians. Bulletin of the Council for Research in Music Education, 105, 45-57.

Madsen, C. K., & Geringer, J. M. (2000/2001). A focus of attention model for meaningful listening. Bulletin of the Council for Research in Music Education, 147, 103-108.

Madsen, C. K., Geringer, J. M., & Frederickson, W. E. (1997). Focus of attention to musical elements in Haydn’s Symphony #104. Bulletin of the Council for Research in Music Education, 133, 57-63.

Madsen, C. K., & Madsen, K. (2002). Perception and cognition in music: Musically trained and untrained adults compared to sixth-grade and eighth-grade children. Journal of Research in Music Education, 50, 111-130.

Madsen, C. K., & Moore, R. S. (1978). Experimental research in music: Workbook in design and statistical tests, revised edition. Raleigh, NC: Contemporary Publishing Company.

Marshall, S. K., & Cohen, A. J. (1988). Effects of musical soundtrack on attitudes toward animated geometric figures. Music Perception, 6, 95-112.

McDonald, J. J., Teder-Salejarvi, W. A., & Hillyard, S. A. (2000). Involuntary orienting to sound improves visual perception. Nature, 407(6806), 906-908.

McElhinney, M., & Annett, J. M. (1996). Pattern of efficacy of a musical mnemonic on recall of familiar words over several presentations. Perceptual and Motor Skills, 82, 395-400.

Merikle, P. (1976). On the disruption of visual memory: Interference produced by visual report cues. Quarterly Journal of Experimental Psychology, 28, 193-202.

Merrill, E. C., & Lookadoo, R. (2004). Selective search for conjunctively defined targets by children and young adults. Journal of Experimental Child Psychology, 89, 72-90.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.

Mirz, F., Ovensen, T., Ishizu, K., Johannsen, P., Madsen, S., Gjedde, A., & Pedersen, C. B. (1999). Stimulus-dependent central processing of auditory stimuli. Scandinavian Audiology, 28, 161-169.

Misenhelter, D. (2004). An examination of responses to audio and audio/visual stimulus: The Bach Passacaglia in C minor with Jean Cocteau’s Ballet Le Jeune Homme et la Mort. Southern Music Education Journal, 1, 25-37.

Mondor, T. A., & Terrio, N. A. (1998). Mechanisms of perceptual organization and auditory selective attention: The role of pattern structure. Journal of Experimental Psychology: Human Perception and Performance, 24, 1628-1641.

Moog, H. (1976). The development of musical experience in children of pre-school age. Psychology of Music, 4(2), 38-45.

Moore, R. S., & Staum, M. (1987). Effects of age and nationality on auditory/visual sequential memory for English and American children. Bulletin of the Council for Research in Music Education, 91, 126-131.

Morey, C. C., & Cowan, N. (2004). When visual and verbal memories compete: Evidence of cross-domain limits in working memory. Psychonomic Bulletin and Review, 11, 296-301.

Morey, C. C., & Cowan, N. (2005). When do visual and verbal memories conflict? The importance of working-memory load and retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 703-713.

Morrongiello, B. A. (1992). Effects of training on children’s perception of music: A review. Psychology of Music, 20, 29-41.

Morton, L. L., Kershner, J. R., & Siegel, L. S. (1990). The potential for therapeutic applications of music on problems related to memory and attention. Journal of Music Therapy, 27, 195-208.

Napolitano, A. C., & Sloutsky, V. M. (2004). Is a picture worth a thousand words? The flexible nature of modality dominance in young children. Child Development, 75, 1850-1870.

Naveh-Benjamin, M., Guez, J., & Marom, M. (2003). The effects of divided attention at encoding on item and associative memory. Memory and Cognition, 31, 1021-1035.

Nelson, K. (1993). Explaining the emergence of autobiographical memory in early childhood. In A. F. Collins, S. E. Gathercole, M. A. Conway, & P. E. Morris (Eds.), Theories of Memory (pp. 355-382). East Sussex, UK: Lawrence Erlbaum Associates, Ltd.

Norris, C. E. (2000). Factors related to the validity of reproduction tonal memory tests. Journal of Research in Music Education, 48, 52-64.

North, A. C., Hargreaves, D. J., & Hargreaves, J. J. (2004). Uses of music in everyday life. Music Perception, 22, 41-77.

North, A. C., Hargreaves, D. J., & O’Neill, S. A. (2000). The importance of music to adolescents. British Journal of Educational Psychology, 70, 255-272.

Obrzut, J. E., Boliek, C. A., & Obrzut, A. (1986). The effect of stimulus type and directed attention on dichotic listening in children. Journal of Experimental Child Psychology, 41, 198-209.

Olivers, C. N. L., & Nieuwenhuis, S. (2005). The beneficial effect of concurrent task-irrelevant mental activity on temporal attention. Psychological Science, 16, 265-269.

O’Neill, S. A., & Boulton, M. J. (1996). Boys' and girls' preferences for musical instruments: A function of gender? Psychology of Music, 24, 171-183.

Oura, Y., & Hatano, G. (1988). Memory for melodies among subjects differing in age and experience. Psychology of Music, 16, 91-109.

Pearsall, E. R. (1989). Differences in listening comprehension with tonal and atonal background music. Journal of Music Therapy, 26, 188-197.

Pembrook, R. G. (1986). Interference of the transcription process and other selected variables on perception and memory during melodic dictation. Journal of Research in Music Education, 34, 238-261.

Pembrook, R. G. (1987). The effect of vocalization on melodic memory conservation. Journal of Research in Music Education, 35, 155-169.

Penney, T. B., Gibbon, J., & Meck, W. H. (2000). Differential effects of auditory and visual signals on clock speed and temporal memory. Journal of Experimental Psychology: Human Perception and Performance, 26, 1770-1787.

Peretz, I. (2001). Music perception and recognition. In B. Rapp (Ed.), The handbook of cognitive psychology: What deficits reveal about the human mind (pp. 519-540).

Persellin, D. C. (1992). Responses to rhythm patterns when presented to children through auditory, visual, and kinesthetic modalities. Journal of Research in Music Education, 40, 306-315.

Plenger, P.M., Breier, J. I., Wheless, J. W., Ridley, T. D., Papanicolaou, A. C., Brookshire, B., et al. (1996). Lateralization of memory for music: Evidence from the intracarotid sodium amobarbital procedure. Neuropsychologia, 34, 1015-1018.

Povel, D. J., & van Egmond, R. (1993). The function of accompanying chords in the recognition of melodic fragments. Music Perception, 11, 101-115.

Radocy, R. E., & Boyle, J. D. (2003). Psychological foundations of musical behavior. Springfield, IL: Charles C Thomas, Publisher.

Radvansky, G. A., Fleming, K. J., & Simmons, J. A. (1995). Timbre reliance in nonmusicians’ and musicians’ memory for melodies. Music Perception, 13, 127-140.

Rainey, D. W., & Larsen, J. D. (2002). The effect of familiar melodies on initial learning and long-term memory for unconnected text. Music Perception, 20, 173-186.

Reisberg, D., Scheiber, R., & Potemken, L. (1981). Eye position and the control of auditory attention. Journal of Experimental Psychology: Human Perception and Performance, 7, 318-323.

Richard, J. F., Normandeau, J., Brun, V., & Maillet, M. (2004). Attracting and maintaining infant attention during habituation: Further evidence of the importance of stimulus complexity. Infant and Child Development, 13, 277-286.

Rife, N. A., Shnek, Z. M., Lauby, J. L., & Lapidus, L. B. (2001). Children’s satisfaction with private music lessons. Journal of Research in Music Education, 49 (1), 21-32.

Ritterfeld, U., Klimmt, C., Vorderer, P., & Steinhilper, L. K. (2005). The effects of a narrative audiotape on preschoolers’ entertainment experience and attention. Media Psychology, 7, 47-72.

Robertson, L. C. (2005). The bilateral brain: Are two better than one? Phi Kappa Phi Forum, 85(1), Winter/Spring, 19-22.

Robinson, C. W., & Sloutsky, V.M. (2004). Auditory dominance and its change in the course of development. Child Development, 75, 1387-1401.

Ruff, H. A., & Capozzoli, M. C. (2003). Development of attention and distractibility in the first 4 years of life. Developmental Psychology, 39, 877-890.

Saito, S. (2001). The phonological loop and memory for rhythms: An individual differences approach. Memory, 9, 313-322.

Samson, S., & Zatorre, R. J. (1991). Recognition memory for text and melody of songs after unilateral temporal lobe lesion: Evidence for dual encoding. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 793-804.

Satoh, M., Takeda, K., Nagata, K., Hatazawa, J., & Kuzuhara, S. (2001). Activated brain regions in musicians during an ensemble: A PET study. Cognitive Brain Research, 12, 101-108.

Schacter, D. (1993). Understanding implicit memory: A cognitive neuroscience approach. In A. F. Collins, S. E. Gathercole, M. A. Conway, & P. E. Morris (Eds.), Theories of Memory (pp. 387-408). East Sussex, UK: Lawrence Erlbaum Associates, Ltd.

Schmitt, M., Postma, A., & De Haan, E. (2000). Interactions between exogenous auditory and visual spatial attention. Quarterly Journal of Experimental Psychology, 53A, 105-130.

Schon, D., & Besson, M. (2002). Processing pitch and duration in music reading: A RT-ERP study. Neuropsychologia, 40, 868-878.

Schroger, E., & Wolff, C. (1998). Behavioral and electrophysiological effects of task-irrelevant sound change: A new distraction paradigm. Cognitive Brain Research, 7, 71-87.

Schulkind, M. D., Posner, R. J., & Rubin, D. C. (2003). Musical features that facilitate melody identification: How do you know it’s “your song” when they finally play it? Music Perception, 21, 217-249.

Schwartz, K. D., & Fouts, G. T. (2003). Music preferences, personality style, and developmental issues of adolescents. Journal of Youth and Adolescence, 32, 205-213.

Serafine, M. L., Crowder, R. G., & Repp, B. H. (1984). Integration of melody and text in memory for songs. Cognition, 16, 285-303.

Serafine, M. L., Davidson, J., Crowder, R. G., & Repp, B. H. (1986). On the nature of melody text integration in memory for songs. Journal of Memory and Language, 25, 123-135.

Shank, J. (2004). The effect of visual art on music listening. Southern Music Education Journal, 1, 38-50.

Shehan, P. K. (1981). A comparison of mediation strategies in paired-associate learning for children with learning disabilities. Journal of Music Therapy, 18, 120-127.

Shehan, P. K. (1987). Effects of rote versus note presentations on rhythm learning and retention. Journal of Research in Music Education, 35, 117-126.

Sheldon, D. A. (2004). Effects of multiple listenings on error-detection acuity in multivoice, multitimbral musical examples. Journal of Research in Music Education, 52, 102-115.

Silverman, J. (1975). Folk song encyclopedia, volume 1. Milwaukee, WI: Chappell/Intersong group.

Sims, W. L. (2005). Effect of free versus directed listening on duration of individual music listening by prekindergarten children. Journal of Research in Music Education, 53, 78-86.

Sims, W. L., & Nolker, D. B. (2002). Individual differences in music listening responses of kindergarten children. Journal of Research in Music Education, 50, 292-300.

Sink, P. E. (1983). Effects of rhythmic and melodic alterations on rhythmic perception. Journal of Research in Music Education, 31, 101-113.

Sloboda, J. A. (1976). Visual perception of musical notation: Registering pitch symbols in memory. Quarterly Journal of Experimental Psychology, 28, 1-16.

Sloboda, J., & Edworthy, J. (1981). Attending to two melodies at once: The effect of key relatedness. Psychology of Music, 9(1), 39-43.

Sloutsky, V. M., & Napolitano, A. C. (2003). Is a picture worth a thousand words? Preferences for auditory modality in young children. Child Development, 74, 822-833.

Soto-Faraco, S., & Spence, C. (2002). Modality-specific auditory and visual temporal processing deficits. Quarterly Journal of Experimental Psychology, 55A, 23-40.

Spence, C. (2002). Multisensory attention and tactile information-processing. Behavioural Brain Research, 135, 57-64.

Spence, C., Ranson, J., & Driver, J. (2000). Cross-modal selective attention: On the difficulty of ignoring sounds at the locus of visual attention. Perception and Psychophysics, 62, 410-424.

Spence, C., & Read, L. (2003). Speech shadowing while driving: On the difficulty of splitting attention between eye and ear. Psychological Science, 14, 251-256.

Standley, J. M. (1991). The effect of vibrotactile and auditory stimuli on the perception of comfort, heart rate, and peripheral finger temperature. Journal of Music Therapy, 28, 120-134.

Standley, J. M. (1998). The effect of music and multimodal stimulation on response of premature infants in neonatal intensive care. Pediatric Nursing, 24(6), 532-538.

Standley, J. M. (2000). Music research in medical treatment. In C. E. Furman (Ed.), Effectiveness of music therapy procedures: Documentation of research and clinical practice (3rd edition). Silver Spring, MD: American Music Therapy Association, Inc.

Tarrant, M., North, A. C., & Hargreaves, D. J. (2000). English and American adolescents’ reasons for listening to music. Psychology of Music, 28, 166-173.

The book: The most amazing colossal ultimate selection of the best songs ever assembled in a real legal fake book – over 1200 songs, C edition. (1994). Milwaukee, WI: Hal Leonard.

Treisman, A. M., & Davies, A. (1973). Divided attention to ear and eye. In S. Kornblum (Ed.), Attention and performance, volume IV. New York: Academic Press.

Tremblay, S., Macken, W. J., & Jones, D. M. (2001). The impact of broadband noise on serial memory: Changes in band-pass frequency increases disruption. Memory, 9, 323-331.

Turatto, M., Benso, F., Galfano, G., & Umilta, C. (2003). Nonspatial attention shifts between audition and vision. Journal of Experimental Psychology: Human Perception and Performance, 28, 628-639.

Turatto, M., Mazza, V., & Umilta, C. (2005). Crossmodal object-based attention: Auditory objects affect visual processing. Cognition, 96, B55-B64.

Wallace, W. T. (1994). Memory for music: Effect of melody on recall of text. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1471-1485.

Widmann, A., Kujala, T., Tervaniemi, M., Kujala, A., & Schroger, E. (2004). From symbols to sounds: Visual symbolic information activates sound representations. Psychophysiology, 41, 709-715.

Williams, D. B. (1975). Short-term retention of pitch sequence. Journal of Research in Music Education, 23, 53-66.

Williams, D. B. (1982). Auditory cognition: A study of the similarities in memory processing for music tones and spoken words. Bulletin of the Council for Research in Music Education, 71, 30-44.

Woldorff, M. G., Gallen, C. C., Hampson, S. A., Hillyard, S. A., Pantev, C., Sobel, D., & Bloom, F. E. (1993). Modulation of early sensory processing in human auditory cortex during auditory selective attention. Proceedings of the National Academy of Sciences, USA, 90, 8722-8726.

Wolfe, D. E., & Hom, C. (1993). Use of melodies as structural prompts for learning and retention of sequential verbal information by preschool students. Journal of Music Therapy, 30, 100-118.

Wolpert, R. S. (1990). Recognition of melody, harmonic accompaniment, and instrumentation: Musicians vs. nonmusicians. Music Perception, 8, 95-106.

Woods, D. L., Alain, C., Diaz, R., Rhodes, D., & Ogawa, K. H. (2001). Location and frequency cues in auditory selective attention. Journal of Experimental Psychology: Human Perception and Performance, 27, 65-74.

Wrigley, S. N., & Brown, G. J. (2004). A computational model of auditory selective attention. IEEE Transactions on Neural Networks, 15, 1151-1163.

Yumoto, M., Matsuda, M., Itoh, K., Uno, A., Karino, S., Saitoh, O., et al. (2005). Auditory imagery mismatch negativity elicited in musicians. NeuroReport, 16, 1175-1178.

Zikmund, A. B., & Nierman, G. E. (1992). The effect of perceptual mode preference and other selected variables on upper elementary school students' responses to conservation-type rhythmic and melodic tasks. Psychology of Music, 20, 57-69.

BIOGRAPHICAL SKETCH

Name: Jennifer Dawn Jones

Birthplace: Nashville, Tennessee

Higher Education: Tennessee Technological University, Cookeville, Tennessee. Major: Music Therapy; Degree: B.S. (1992)

The Florida State University, Tallahassee, Florida. Major: Music Therapy; Degree: M.M. (1998)

The Florida State University, Tallahassee, Florida. Major: Music Education; Degree: Ph.D. (2006)

Experience: Visiting Assistant Professor of Music/Music Therapy, Tennessee Technological University, Fall 2001 – Spring 2003

Clinical Music Therapist (MT I), Clover Bottom Developmental Center, Nashville, Tennessee (1999-2001)

Lead Teacher, First Steps, Inc. Early Intervention, Nashville, TN (1998-1999)

Music Therapist, Apalachee Center for Human Services, Tallahassee, FL (1997-1998)

Clinical Music Therapist, Parthenon Pavilion Psychiatric Hospital, Nashville, TN (1994-1996)

Director of Expressive Therapy, Residential Management Services, Dickson, TN (1993-1994)
