The Subjective Size of Melodic Intervals Over a Two-Octave Range


Psychonomic Bulletin & Review, 2005, 12 (6), 1068–1075

FRANK A. RUSSO and WILLIAM FORDE THOMPSON
University of Toronto, Mississauga, Ontario, Canada

Musically trained and untrained participants provided magnitude estimates of the size of melodic intervals. Each interval was formed by a sequence of two pitches that differed by between 50 cents (one half of a semitone) and 2,400 cents (two octaves) and was presented in a high or a low pitch register and in an ascending or a descending direction. Estimates were larger for intervals in the high pitch register than for those in the low pitch register and for descending intervals than for ascending intervals. Ascending intervals were perceived as larger than descending intervals when presented in a high pitch register, but descending intervals were perceived as larger than ascending intervals when presented in a low pitch register. For intervals up to an octave in size, differentiation of intervals was greater for trained listeners than for untrained listeners. We discuss the implications for psychophysical pitch scales and models of music perception.

Author note: This research was supported by a discovery grant awarded to the second author from the Natural Sciences and Engineering Research Council of Canada. We thank Bruce Schneider for helpful comments and Jane Campbell for research assistance. Correspondence should be sent to F. A. Russo or W. F. Thompson, Department of Psychology, University of Toronto, Mississauga, ON, L5L 1C6 Canada (e-mail: [email protected] or [email protected]).

Sensitivity to relations between pitches, called relative pitch, is fundamental to music experience and is the basis of the concept of a musical interval. A musical interval is created when two tones are sounded simultaneously or sequentially. Intervals are experienced as large or small and as consonant or dissonant, and for musically trained listeners, they are associated with category labels, such as the perfect fourth, the minor sixth, and the octave. Sequential intervals form the basis of melody and may occur in ascending or descending pitch directions.

Relative pitch allows us to perceive, appreciate, and remember melodies. Large melodic intervals, or leaps, form the basis for gap-fill melodies (Meyer, 1973) and are experienced as a point of accent (Boltz & Jones, 1986; Drake, Dowling, & Palmer, 1991; Jones, 1987). Conversely, melodies sound more coherent or cohesive when they consist of a sequence of small intervals (Huron, 2001; Russo, 2002). Interval size may also influence melodic expectancy (Larson & McAdams, 2004; Margulis, 2005; Narmour, 1990) and grouping (Deliège, 1987; Lerdahl, 1989; Lerdahl & Jackendoff, 1983). In short, there is a need for research whose aim is to identify factors that influence the experience of interval size. The goal of this study was to investigate relative pitch by obtaining magnitude estimates of interval size and assessing the influence of three factors: musical training, pitch register, and pitch direction (ascending or descending).

Psychophysical models of pitch can be used to make predictions about judgments of interval size. A logarithmic scale describes the most basic model of pitch. Category labels associated with intervals assume a logarithmic relation between pitch height and frequency; that is, intervals spanning the same log frequency are associated with equivalent category labels, regardless of transposition. Listeners with musical training can explicitly label intervals (Killam, Lorton, & Schubert, 1975; Plomp, Wagenaar, & Mimpen, 1973). More important, judgments of interval size by musicians are resistant to the influence of context (J. A. Siegel & W. Siegel, 1977a) and exhibit steplike functions characteristic of categorical perception (J. A. Siegel & W. Siegel, 1977b; see also Burns & Campbell, 1994; Burns & Ward, 1978). These findings imply that estimates of interval size may depend on musical training.

Drawing from early studies of tone perception by Titchener (1905, pp. 232–248), Stevens, Volkmann, and Newman (1937) questioned whether the psychophysical relation between pitch and frequency was logarithmic and used scaling techniques to derive a new scale, called the mel scale. A pure tone of 1000 Hz at 40 dB above threshold was first defined as 1,000 mels, and the pitch in mels of other frequencies was determined by asking participants to adjust a comparison tone until it was perceived to be one half of the pitch height of a standard tone (method of fractionation). Although the mel scale and the logarithmic scale are roughly equivalent below 500 Hz, above 500 Hz the mel scale increases at a slower rate. That is, perceptually equivalent pitch interval sizes (in mels) span progressively smaller frequency ratios with higher and higher transpositions (Stevens et al., 1937; see also Beck & Shaw, 1961). These results suggest that estimates of interval size may depend on pitch register.
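The logarithmic (cents) model and the mel scale make different predictions about how a fixed frequency ratio is experienced in different registers. The sketch below is a minimal illustration added for this edition, not anything from the original study: it computes interval size in cents (1,200 × log2 of the frequency ratio, so an octave is 1,200 cents in any register) and contrasts it with interval size on a mel-type scale. The mel values use the widely cited analytic approximation 2595·log10(1 + f/700) as a stand-in for the empirically derived Stevens, Volkmann, and Newman (1937) scale; the frequencies are those of the study's stimuli (F2, F3, F4, F5).

```python
import math

def cents(f1: float, f2: float) -> float:
    """Interval size in cents: 1,200 cents per octave, independent of register."""
    return 1200.0 * math.log2(f2 / f1)

def mel(f: float) -> float:
    """Pitch in mels via a common analytic approximation; used here only as a
    stand-in for the empirically derived mel scale."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

# Two-octave ascending spans in the low and high registers of the study:
# F2->F4 (87.3 -> 349.2 Hz) and F3->F5 (174.6 -> 698.5 Hz).
for lo, hi in [(87.3, 349.2), (174.6, 698.5)]:
    print(f"{lo:6.1f} -> {hi:6.1f} Hz : "
          f"{cents(lo, hi):7.1f} cents, {mel(hi) - mel(lo):6.1f} mels")
# Both spans are essentially 2,400 cents, but the higher transposition covers
# more mels; equivalently, equal mel distances correspond to smaller frequency
# ratios at higher transpositions -- the register effect the mel scale predicts.
```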
Another psychophysical scaling method, called the method of equal sense distances, involved presenting listeners with a melodic interval and asking them to divide it into four equal pitch interval sizes (Greenwood, 1997; Stevens & Volkmann, 1940). Results based on this method were similar to those based on the method of fractionation, but an asymmetry was revealed. When the interval was ascending (i.e., low tone first), participants set dividing frequencies higher than when the interval was descending. This asymmetry suggests that estimates of interval size depend on pitch direction.

Some researchers have questioned the relevance of psychophysical scales such as the mel scale (Burns, 1999; Krumhansl, 1990, 2000; Shepard, 1999). First, the tasks used to derive the mel scale have ambiguous interpretations (e.g., half as high), and responses to the tasks are not always reliable (Rasch & Plomp, 1999). It has been argued that the mel scale cannot be derived from other measures of pitch, such as difference limens, critical bands, or equal cochlear distances (Greenwood, 1997, p. 200), and different strategies of deriving the mel scale yield somewhat different functions (Lewis, 1942; but see Schneider, Parker, & Upenieks, 1982). Second, the ease with which listeners recognize and reproduce simple melodies independently of pitch register implies that frequency ratios are experienced as invariant across transposition (Attneave & Olson, 1971). Third, the mel scale was derived from judgments of isolated pure tones, which may have limited relevance to the way tones are perceived in a musical context.

Nonetheless, it is difficult to dismiss the data that were used to derive the mel scale. Specifically, they imply that there is a dimension of melody perception that is somewhat independent of the explicit labels associated with musical intervals (Makeig, 1982). This implication motivated us to obtain direct estimates of the size of melodic intervals and to identify some influences on such judgments. Trained and untrained listeners estimated the size of 48 different intervals ranging from 50 cents (one-half semitone) to 2,400 cents (two octaves), presented in a high or a low pitch register and in an ascending or a descending pitch direction.

We considered the possibility that the ability to discriminate intervals up to an octave might be somewhat different from the ability to discriminate intervals larger than an octave.

METHOD

Participants
Untrained participants had 2 years or less of musical instruction (M = 0.57 years, SE = 0.23) and no continued musical activity. They included 10 females and 2 males ranging in age from 17 to 20 years, with a mean age of 18.5 years. Trained participants had 7 years or more of musical instruction (M = 11.07 years, SE = 0.55) and continued musical activity. They included 10 females and 2 males ranging in age from 17 to 25 years, with a mean age of 18.9 years. All the participants were given course credit. No participant reported having abnormal hearing.

Stimuli
Seventy-three tones were constructed by additive synthesis with Praat software (Boersma & Weenink, 2004). The fundamental frequency of each tone was separated by 50 cents from its nearest neighbor, falling within the frequency range of 87.3 Hz (F2) to 698.5 Hz (F5). All the tones consisted of 12 equal-intensity components (the fundamental and the first 11 overtones). Tones were used to create 48 melodic intervals (two-tone sequences) that varied in size (log frequency distance between tones) from 50 cents (one-half semitone) to 2,400 cents (two octaves). Each of the 48 intervals was presented in ascending and descending pitch directions and in low and high pitch registers. The three factors of interval, pitch direction, and pitch register were counterbalanced. Intervals in the low pitch register were centered on F3 (174.6 Hz), and intervals in the high pitch register were centered on F4 (349.2 Hz). For instance, the largest ascending intervals (2,400 cents) spanned F2 to F4 in the low pitch register and F3 to F5 in the high pitch register.
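The stimulus set is simple enough to reconstruct in outline. The original tones were built in Praat; the sketch below is a rough NumPy equivalent added for illustration, not the authors' script: 73 fundamentals spaced 50 cents apart from 87.3 Hz (F2) upward, each tone the sum of 12 equal-amplitude harmonics. Sample rate, duration, and amplitude normalization are assumptions not reported above.

```python
import numpy as np

SR = 44100          # sample rate in Hz (assumed, not reported in the paper)
DUR = 0.5           # tone duration in seconds (assumed)
N_PARTIALS = 12     # fundamental plus the first 11 overtones, equal intensity

def make_tone(f0: float, sr: int = SR, dur: float = DUR) -> np.ndarray:
    """Additive synthesis: sum of N_PARTIALS equal-amplitude harmonics of f0."""
    t = np.arange(int(sr * dur)) / sr
    tone = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, N_PARTIALS + 1))
    return tone / N_PARTIALS  # keep the summed waveform within [-1, 1]

# 73 fundamentals, 50 cents apart, spanning F2 (87.3 Hz) to F5 (698.5 Hz).
f0s = 87.3 * 2 ** (np.arange(73) * 50 / 1200)
tones = [make_tone(f0) for f0 in f0s]
print(f"{len(tones)} tones, f0 from {f0s[0]:.1f} to {f0s[-1]:.1f} Hz")
```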
Procedure
Each participant completed two blocks each of ascending and descending trials, for a total of 384 trials (48 intervals × 2 pitch registers × 2 directions × 2 blocks). Block presentation adhered to one of two orders: (1) ascending, descending, descending, ascending or (2) descending, ascending, ascending, descending. The two orders were counterbalanced across participants, and the order of presentation of trials within blocks was randomly determined for each participant.

The participants provided magnitude estimates of the size of each interval on a scale from 1 to 100. To discourage explicit mapping of known interval categories, a time limit of 5 sec (with a 3-sec warning) was imposed on each estimate. Before testing, the participants were familiarized with examples of the smallest interval (50 cents, defined as 1) and the largest interval (2,400 cents, defined as 100).
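The factorial design described in the Procedure can be made concrete with a short enumeration. The sketch below is an illustrative reconstruction added for this edition, not the authors' experiment code; it assumes each of the four blocks presents all 48 interval sizes in both registers for a single direction (96 trials per block), which is consistent with the totals reported above.

```python
import random
from itertools import product

INTERVALS = [50 * k for k in range(1, 49)]          # 50 .. 2,400 cents
REGISTERS = ["low", "high"]                         # centered on F3 / F4
BLOCK_ORDERS = {1: ["asc", "desc", "desc", "asc"],  # order (1)
                2: ["desc", "asc", "asc", "desc"]}  # order (2)

def build_session(order_id: int, seed: int = 0) -> list:
    """Return the 384 trials for one participant: four blocks of 96 trials,
    one direction per block, trial order randomized within each block."""
    rng = random.Random(seed)
    trials = []
    for block_num, direction in enumerate(BLOCK_ORDERS[order_id], start=1):
        block = [{"block": block_num, "direction": direction,
                  "register": reg, "cents": size}
                 for reg, size in product(REGISTERS, INTERVALS)]
        rng.shuffle(block)
        trials.extend(block)
    return trials

session = build_session(order_id=1)
print(len(session))  # 384 = 48 intervals x 2 registers x 2 directions x 2 blocks
```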