Analyzing and Classifying Guitarists from Rock Guitar Solo Tablature

Orchisama Das (CCRMA, Stanford University), [email protected]
Blair Kaneshiro (CCRMA, Stanford University), [email protected]
Tom Collins (Department of Psychology, Lehigh University), [email protected]

ABSTRACT

Guitar solos provide a way for guitarists to distinguish themselves. Many rock music enthusiasts would claim to be able to identify performers on the basis of guitar solos, but in the absence of veridical knowledge and/or acoustical (e.g., timbral) cues, the task of identifying transcribed solos is much harder. In this paper we develop methods for automatically classifying guitarists using (1) beat and MIDI note representations, and (2) beat, string, and fret information, enabling us to investigate whether there exist "fretboard choreographies" that are specific to certain artists. We analyze a curated collection of 80 transcribed guitar solos from Eric Clapton, David Gilmour, Jimi Hendrix, and Mark Knopfler. We model the solos as zero- and first-order Markov chains, and predict the performer based on the two representations mentioned above, for a total of four classification models. Our systems produce above-chance classification accuracies, with the first-order fretboard model giving the best results. Misclassifications vary according to model but may implicate stylistic differences among the artists. The current results confirm that performers can be labeled to some extent from symbolic representations. Moreover, performance is improved by a model that takes fretboard choreographies into account.

Copyright: © 2018 Orchisama Das et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

1. INTRODUCTION

Avid listeners of rock music claim they can easily distinguish between a guitar solo by Jimi Hendrix versus Jimmy Page. This raises many questions about the types of features underlying such a task. For example, can artist identification of guitar solos be performed successfully from compositional features alone, or are other performance and timbral cues required?

Artist identification is an established research topic in Music Information Retrieval (MIR). Timbral features extracted from audio representations have been used for artist recognition [1–3] and for singer identification in popular music [4, 5].

Identification of artists/composers from symbolic representations (digital encodings of staff notation) has also been attempted [6–11]. Kaliakatsos-Papakostas et al. used a weighted Markov chain model trained on MIDI files for composer identification [8], as well as feedforward neural networks [12]. Markov models have been used to distinguish between Mozart and Haydn [9]. Existing work on feature extraction from symbolic music is extremely valuable for such a classification task. For example, Pienimaki et al. describe an automatic cluster analysis method for symbolic music analysis [13], while Collins et al. propose computational methods for generating music in the style of various composers [14, 15].

Past studies have modeled the rhythm and lead content of guitar parts. Of particular relevance is work by McVicar et al. [16–18], in which models are trained to emulate the playing styles of various guitarists such as Jimi Hendrix, Jimmy Page, and Keith Richards. The output is a stylistic generation of rhythm and lead guitar tablature based on string and fret rather than staff notation representations. It is unknown, however, whether this choice of representation confers any analytic or compositional advantage. A single MIDI note number (MNN) can be represented by several different (string, fret)-pairs on the fretboard, and it could be that such choices vary systematically from one artist to another. Methods for separating voices in lute tablature seemed to benefit from such a tablature-based representation [19].

In addition, Ferretti has modeled guitar solos as directed graphs and analyzed them with complex network theories to yield valuable information about the playing styles of musicians [20]. Another study, by Cherla et al., automatically generated guitar phrases by directly transcribing pitch and onset information from audio data and then using their symbolic representations for analysis [21].

To our knowledge, the task of identifying artists from guitar solos has not been attempted previously. Furthermore, McVicar et al.'s [18] work raises the question of whether fretboard representations are really more powerful than staff notation representations and associated numeric encodings (e.g., MIDI note numbers). In support of McVicar et al.'s [18] premise, research in musicology alludes to specific songs and artists having distinctive "fretboard choreographies" [22], and the current endeavor enables us to assess such premises and allusions quantitatively.

Widmer [23] is critical of the prevalence of Markov models in music-informatic applications, since such models lack incorporation of the long-term temporal dependencies that most musicologists would highlight in a given piece. Collins et al. [15], however, show that embedding Markov chains in a system that incorporates such long-term dependencies is sufficient for generating material that is in some circumstances indistinguishable from human-composed excerpts. Whether the zero- and first-order Markov models used in the present study are sufficient to identify the provenance of guitar solos is debatable; however, we consider them a reasonable starting point for the task at hand.

The rest of this paper is organized as follows. We describe the dataset and features, Markov models and maximum likelihood interpretations, and our classification procedure in Section 2. In Section 3 we visualize our data and report classification results. We conclude in Section 4 with a discussion of results, insights into stylistic differences among the artists, potential issues, and avenues for future research.

2. METHOD

2.1 Dataset

We collated our own dataset for the present study, since no pre-existing dataset was available. First, we downloaded guitar tabs in GuitarPro format from UltimateGuitar.¹ The quality of the tabs was assessed both by us and via the number of stars they received from UltimateGuitar users; any tab with a rating below four stars was discarded. We then manually extracted the guitar solos from each song's score and converted them to MusicXML format with the free TuxGuitar software.² In total, our final dataset comprised 80 solos: 20 each from Eric Clapton, David Gilmour, Jimi Hendrix, and Mark Knopfler. While the size of this dataset is in no way exhaustive, the number of songs curated was restricted by the availability of accurate tabs.

2.2 Representations

For parsing MusicXML data and symbolic feature extraction, we used a publicly available JavaScript library.³ Using methods in this library, we wrote a script that returns ontime (symbolic onset time), MIDI note number (MNN), morphetic pitch number (MPN), note duration, string number, and fret number for each note in the solo. To obtain the beat of the measure on which each note begins, we took its ontime modulo the time signature of that particular solo. The tonic pitch of each song was identified from the key signature using an algorithm in the JavaScript library that finds the tonic MIDI note closest to the mean of all pitches in a song. We then subtracted this tonic MNN from each raw MNN to give a "centralized MNN", which accounted for solos being in different keys. If the fret number was greater than or equal to 24 (the usual number of frets on an electric guitar), it was wrapped back around to the start of the fretboard by a modulo 24 operation, resulting in the fret range [0, 23]. The resulting dimensions of beat, MNN, pitch class, string, and transposed fret were saved in JSON format for each song in the dataset. Finally, we generated two types of tuples on a per-note basis as our state spaces: the first state space comprises beat and centralized MNN, denoted (beat, MNN) hereafter; the second comprises beat, string, and transposed fret, denoted (beat, string, fret) hereafter. The quarter note is represented as a single beat. For example, an eighth note played on the fifth fret of the second string would be (0.5, 64) in the 2D (beat, MNN) representation and (0.5, 2, 5) in the 3D (beat, string, fret) representation.

2.3 Markov Model

A Markov model is a stochastic model of processes in which the future state depends only on the previous n states [24]. Musical notes can be modeled as random variables that vary over time, with their probability of occurrence depending on the previous n notes.

In the present classification paradigm, let x_i represent a state in a Markov chain at time instant i. In a first-order Markov model, there is a transition matrix P which gives the probability of transition from x_i to x_{i+1} for a set of all possible states. If \{x_1, x_2, \ldots, x_N\} is the set of all possible states, the transition matrix P has dimensions N \times N.

Given a new sequence of states [x_1, x_2, \ldots, x_T], we can represent it as a path with a probability of occurrence P(x_1, \ldots, x_T). According to the product rule, this joint probability distribution can be written as

    P(x_1, \ldots, x_T) = P(x_T \mid x_{T-1}, \ldots, x_1) \, P(x_1, \ldots, x_{T-1})    (1)

Since the conditional probability P(x_T \mid x_{T-1}, \ldots, x_1) in a first-order Markov process reduces to P(x_T \mid x_{T-1}), we can write

    P(x_1, x_2, \ldots, x_T) = P(x_T \mid x_{T-1}) \, P(x_1, x_2, \ldots, x_{T-1})    (2)

Solving this recursively brings us to

    P(x_1, x_2, \ldots, x_T) = P(x_1) \prod_{i=2}^{T} P(x_i \mid x_{i-1})    (3)

Taking the log of P(x_1, x_2, \ldots, x_T) gives us the log likelihood, L_1, defined as

    L_1 = \log P(x_1) + \sum_{i=2}^{T} \log P(x_i \mid x_{i-1})    (4)
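The per-note pipeline of Section 2.2 can be sketched as follows. This is a minimal illustration with hypothetical note data and helper names (the actual extraction used a JavaScript MusicXML library), and "ontime modulo the time signature" is interpreted here as ontime modulo the number of quarter-note beats per bar.

```python
def beat_of(ontime, beats_per_bar):
    """Beat of the measure on which a note begins:
    ontime (in quarter notes) modulo the bar length."""
    return ontime % beats_per_bar

def centralized_mnn(mnn, tonic_mnn):
    """Subtract the tonic MNN so that solos in different keys are comparable."""
    return mnn - tonic_mnn

def transposed_fret(fret):
    """Wrap frets >= 24 back to the start of the fretboard, giving range [0, 23]."""
    return fret % 24

def make_states(notes, tonic_mnn, beats_per_bar):
    """Build the two per-note state spaces: (beat, MNN) and (beat, string, fret)."""
    beat_mnn = []
    beat_string_fret = []
    for ontime, mnn, string, fret in notes:
        b = beat_of(ontime, beats_per_bar)
        beat_mnn.append((b, centralized_mnn(mnn, tonic_mnn)))
        beat_string_fret.append((b, string, transposed_fret(fret)))
    return beat_mnn, beat_string_fret

# Hypothetical three-note fragment in 4/4: (ontime, MNN, string, fret).
notes = [(0.0, 76, 1, 12), (4.5, 69, 2, 5), (5.0, 88, 1, 24)]
two_d, three_d = make_states(notes, tonic_mnn=64, beats_per_bar=4)
```

The second note, an eighth-note onset half a beat into its bar, yields the 2D state (0.5, 5) and the 3D state (0.5, 2, 5); the third note's fret of 24 wraps to 0.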
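Training per-artist models then amounts to counting states and state transitions over each artist's solos. The maximum-likelihood counting below is a sketch that assumes states are hashable tuples; the excerpt does not specify the paper's exact estimation or smoothing procedure.

```python
from collections import Counter, defaultdict

def estimate_model(sequences):
    """Estimate zero-order state probabilities (p0) and first-order
    transition probabilities (p1) from a list of state sequences,
    one sequence per training solo."""
    state_counts = Counter()
    transition_counts = defaultdict(Counter)
    for seq in sequences:
        state_counts.update(seq)
        for a, b in zip(seq, seq[1:]):
            transition_counts[a][b] += 1
    total = sum(state_counts.values())
    p0 = {s: c / total for s, c in state_counts.items()}
    p1 = {a: {b: c / sum(row.values()) for b, c in row.items()}
          for a, row in transition_counts.items()}
    return p0, p1

# Toy sequences standing in for one artist's state streams.
sequences = [["A", "B", "A", "B"], ["A", "A"]]
p0, p1 = estimate_model(sequences)
```

In practice the states would be the (beat, MNN) or (beat, string, fret) tuples of Section 2.2, and one such model would be fit per artist.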
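Given per-artist models, a new solo can be scored with the log form of Eq. (3) and assigned to the artist whose model maximizes the log likelihood. The floor probability for unseen states and transitions is an assumption for illustration; the excerpt does not describe how zero counts are handled.

```python
import math

def log_likelihood(seq, p0, p1, floor=1e-6):
    """Log form of Eq. (3): log P(x1) + sum over i of log P(x_i | x_{i-1}).
    Unseen states/transitions receive a small floor probability
    (an assumed smoothing choice, not taken from the paper)."""
    ll = math.log(p0.get(seq[0], floor))
    for a, b in zip(seq, seq[1:]):
        ll += math.log(p1.get(a, {}).get(b, floor))
    return ll

def classify(seq, models):
    """Maximum-likelihood artist prediction: pick the artist whose
    (p0, p1) model assigns the observed sequence the highest log likelihood."""
    return max(models, key=lambda artist: log_likelihood(seq, *models[artist]))

# Toy two-artist example with hand-set probabilities.
models = {
    "X": ({"A": 0.9, "B": 0.1},
          {"A": {"A": 0.9, "B": 0.1}, "B": {"A": 0.5, "B": 0.5}}),
    "Y": ({"A": 0.1, "B": 0.9},
          {"A": {"A": 0.1, "B": 0.9}, "B": {"A": 0.1, "B": 0.9}}),
}
predicted = classify(["A", "A", "A"], models)
```

Working in log space avoids numerical underflow, since the product in Eq. (3) shrinks geometrically with the length of the solo.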
