Jazz Solo Instrument Classification with Convolutional Neural Networks, Source Separation, and Transfer Learning

Juan S. Gómez, Jakob Abeßer, Estefanía Cano
Semantic Music Technologies Group, Fraunhofer IDMT, Ilmenau, Germany

ABSTRACT

Predominant instrument recognition in ensemble recordings remains a challenging task, particularly if closely related instruments such as alto and tenor saxophone need to be distinguished. In this paper, we build upon a recently proposed instrument recognition algorithm based on a hybrid deep neural network: a combination of convolutional and fully connected layers for learning characteristic spectral-temporal patterns. We systematically evaluate harmonic/percussive and solo/accompaniment source separation algorithms as pre-processing steps to reduce the overlap among multiple instruments prior to the instrument recognition step. For the particular use-case of solo instrument recognition in jazz ensemble recordings, we further apply transfer learning techniques to fine-tune a previously trained instrument recognition model for classifying six jazz solo instruments. Our results indicate that both source separation as a pre-processing step and transfer learning clearly improve recognition performance, especially for smaller subsets of highly similar instruments.

© Juan S. Gómez, Jakob Abeßer, Estefanía Cano. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: Juan S. Gómez, Jakob Abeßer, Estefanía Cano. "Jazz Solo Instrument Classification with Convolutional Neural Networks, Source Separation, and Transfer Learning", 19th International Society for Music Information Retrieval Conference, Paris, France, 2018.

1. INTRODUCTION

Automatic Instrument Recognition (AIR) is a fundamental task in Music Information Retrieval (MIR) which aims at identifying all participating music instruments in a given recording. This information is valuable for a variety of tasks such as automatic music transcription, source separation, music similarity computation, and music recommendation, among others. In general, musical instruments can be categorized based on their underlying sound production mechanisms. However, various aspects of human music performance such as dynamics, intonation, or vibrato create a large timbral variety that complicates the distinction of closely related instruments such as a violin and a cello.

As part of the ISAD (Informed Sound Activity Detection in Music Recordings) research project, we aim at improving existing methods for timbre description and instrument classification in ensemble music recordings. In particular, this paper focuses on the identification of predominant solo instruments in multitimbral music recordings, i.e., the most salient instruments in the audio mixture. This assumes that the spectral-temporal envelopes that describe the instrument's timbre are dominant in the polyphonic mixture [11]. As a particular use-case, we focus on the classification of solo instruments in jazz ensemble recordings. Here, we study the task of instrument recognition both on a class and a sub-class level, e.g., between soprano, alto, and tenor saxophone. Besides the high timbral similarity between different saxophone types, a second challenge lies in the large variety of recording conditions that heavily influence the overall sound of a recording [21, 25]. A system for jazz solo instrument classification could be used for content-based metadata clean-up and enrichment of jazz archives.

As the main contributions of this paper, we systematically evaluate two state-of-the-art source separation algorithms as pre-processing steps to improve instrument recognition (see Section 3). We extend and improve upon a recently proposed hybrid neural network architecture (see Figure 1) that combines convolutional layers for automatic learning of spectral-temporal timbre features and fully connected layers for classification [28]. We further evaluate transfer learning strategies to adapt a given neural network model to more specific classification use-cases such as jazz solo instrument classification, which require a more granular level of detail [13].
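To make the harmonic/percussive pre-processing idea concrete, the following is a minimal sketch using librosa's median-filtering HPSS. This is an assumption for illustration only; it is a stand-in and not necessarily the separation algorithm evaluated in this paper.

```python
# Minimal sketch: harmonic/percussive separation as an AIR pre-processing step.
# librosa's median-filtering HPSS is used as a stand-in; the algorithms actually
# evaluated in the paper may differ.
import librosa

def harmonic_component(path, sr=22050):
    """Load a mono recording and discard the percussive component."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    y_harmonic, y_percussive = librosa.effects.hpss(y)
    return y_harmonic  # passed on to the instrument recognition front-end
```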
2. RELATED WORK

The majority of work towards automatic instrument recognition has focused on instrument classification of isolated note events or monophonic phrases and melodies played by single instruments. Considering classification scenarios with more than 10 instrument classes, the best-performing systems achieve recognition rates above 90%, as shown for instance in [14, 27].

In polyphonic and multitimbral music recordings, however, AIR is a more complicated problem. Traditional approaches rely on hand-crafted audio features designed to capture the most discriminative aspects of instrument timbres. Such features build on different signal representations such as cepstrum [8–10, 29], group delay [5], or line spectral frequencies [18]. A classifier ensemble focusing on note-wise, frame-wise, and envelope-wise features was proposed in [14]. We refer the reader to [11] for an extensive overview of AIR algorithms that include hand-crafted audio features.

Novel deep learning algorithms, particularly convolutional neural networks (CNN), have been widely used for various image recognition tasks [13]. As a consequence, these methods were successfully adopted for MIR tasks such as chord recognition [17] and music transcription [1], where they significantly improved upon previous state-of-the-art results. Similarly, the first successful AIR methods based on deep learning were recently proposed, designed as a combination of convolutional layers for feature learning and fully connected layers for classification [24, 28]. Park et al. use a CNN to recognize instruments in single tone recordings [24]. Han et al. [28] propose a similar architecture and evaluate different late-fusion results to obtain clip-wise instrument labels. The authors aim at classifying predominant instruments in polyphonic and multitimbral recordings, and improve upon previous state-of-the-art systems by around 0.1 in f-score. Li et al. [20] propose to use end-to-end learning with a different network architecture. By these means, they use raw audio data as input without relying on spectral transformations such as mel spectrograms.

A variety of pre-processing strategies have been applied to MIR tasks such as singing voice detection [19] and melody line estimation [26]. Regarding the AIR task, several algorithms include a preceding source separation step. In [2], Bosch et al. evaluate two segregation methods for stereo recordings: a simple LRMS (Left/Right-Mid/Side) separation and FASST (Flexible Audio Source Separation Framework) developed by Ozerov et al. [22]. The authors report improvements of 19% in f-score using a simple panning separation, and up to 32% when the model was trained with previously separated audio, taking into account the typical artifacts produced by source separation techniques. Heittola et al. combine a preceding source separation step with the recognition system; the authors achieved a 59% recognition rate for six polyphonic notes randomly chosen from 19 different instruments.

3. PROCESSING STEPS

3.1 Baseline Instrument Recognition Framework

[Figure 1: Reference model proposed by Han et al. [28]. Time-frequency spectrogram patches are processed by successive pairs of convolutional layers (Conv) with ReLU activation function (R), max pooling (MaxPool), and global max pooling (GlobMaxPool). Dropout (D) is applied for regularization in the feature extractor and classifier. Conv layers have an increasing number of filters (32, 64, 128, and 256), and output shapes are specified for each layer.]

In this section, we briefly summarize the instrument recognition model proposed by Han et al. [28], which we use as the starting point for our experiments. As a first step, monaural audio signals are processed at a sampling rate of 22.05 kHz. A mel spectrogram with a window size of 1024, a hop size of 512, and 128 mel bands is then computed. After applying a logarithmic magnitude compression, spectral patches one second long are used as input to the deep neural network. The resulting time-frequency patches have shape $x_i \in \mathbb{R}^{128 \times 43}$.
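As a rough illustration of this input pipeline, the following sketch derives log-mel patches with librosa under the stated parameters; details the summary above leaves open (power vs. magnitude spectrogram, non-overlapping patching) are assumptions.

```python
# Sketch of the described input pipeline: 22.05 kHz mono audio, mel spectrogram
# (window 1024, hop 512, 128 bands), log-magnitude compression, and one-second
# patches of shape (128, 43). At a hop of 512 samples, 22050 / 512 ≈ 43 frames
# correspond to one second of audio.
import librosa
import numpy as np

def make_patches(path, sr=22050, n_fft=1024, hop=512, n_mels=128, patch_len=43):
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)  # logarithmic magnitude compression
    n = log_mel.shape[1] // patch_len   # non-overlapping patching is assumed
    return np.stack([log_mel[:, i * patch_len:(i + 1) * patch_len]
                     for i in range(n)])
```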
The network architecture is illustrated in Figure 1 and consists of four pairs of convolutional layers with a filter size of 3 × 3 and ReLU activation functions. The input of each convolutional layer is zero-padded with 1 × 1, which is accounted for in the output shape of each layer. The number of filters in the conv layer pairs increases from 32 to 256. Max pooling over both time and frequency is performed between successive layer pairs, and dropout of 0.25 is used for regularization. An intermediate global max pooling layer and a flatten layer (F) connect the feature extractor with the classifier. Finally, a fully connected layer (FC), dropout of 0.5, and a final output layer with sigmoid activation (S) over 11 classes are used. The model was trained with a learning rate of 0.001, a batch size of 128, and the Adam optimizer.
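The description above can be translated into a Keras sketch. The pooling size, the width of the fully connected layer, and the loss function are not fixed by the text and are assumptions here; see [28] for the exact output shapes.

```python
# Hedged Keras sketch of the reference model summarized above. Pooling size,
# dense width, and loss are assumptions where the text is not explicit.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_classes=11, input_shape=(128, 43, 1)):
    x = inputs = layers.Input(shape=input_shape)
    for n_filters in (32, 64, 128, 256):        # four conv-layer pairs
        for _ in range(2):
            x = layers.ZeroPadding2D(1)(x)      # 1 x 1 zero padding
            x = layers.Conv2D(n_filters, 3, activation="relu")(x)
        if n_filters < 256:                     # pool between successive pairs
            x = layers.MaxPooling2D(3)(x)       # over time and frequency (size assumed)
            x = layers.Dropout(0.25)(x)
    x = layers.GlobalMaxPooling2D()(x)          # intermediate global max pooling
    x = layers.Flatten()(x)                     # mirrors the flatten layer (F) in Figure 1
    x = layers.Dense(1024, activation="relu")(x)  # fully connected layer; width assumed
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(n_classes, activation="sigmoid")(x)  # sigmoid output (S)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="binary_crossentropy")   # loss assumed for multi-label output
    return model
```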
