The Multiple Voices of Musical Emotions: Source Separation for Improving Music Emotion Recognition Models and Their Interpretability


Jacopo de Berardinis (1,2), Angelo Cangelosi (1), Eduardo Coutinho (2)
(1) Machine Learning and Robotics Group, University of Manchester
(2) Applied Music Research Lab, University of Liverpool
[email protected]

ABSTRACT

Despite the manifold developments in music emotion recognition and related areas, estimating the emotional impact of music still poses many challenges. These are often associated with the complexity of the acoustic codes to emotion and the lack of large amounts of data with robust gold standards. In this paper, we propose a new computational model (EmoMucs) that considers the role of different musical voices in the prediction of the emotions induced by music. We combine source separation algorithms for breaking up music signals into independent song elements (vocals, bass, drums, other) with end-to-end state-of-the-art machine learning techniques for feature extraction and emotion modelling (valence and arousal regression). Through a series of computational experiments on a benchmark dataset using source-specialised models trained independently and different fusion strategies, we demonstrate that EmoMucs outperforms state-of-the-art approaches with the advantage of providing insights into the relative contribution of different musical elements to the emotions perceived by listeners.

© Jacopo de Berardinis, Angelo Cangelosi, Eduardo Coutinho. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: Jacopo de Berardinis, Angelo Cangelosi, Eduardo Coutinho. "The multiple voices of musical emotions: source separation for improving Music Emotion Recognition models and their interpretability", 21st International Society for Music Information Retrieval Conference, Montréal, Canada, 2020.

1. INTRODUCTION

The ability of music to express and induce emotions [15, 21] and to act as a powerful tool for mood regulation [28] is well known and demonstrable. Indeed, research shows that music listening is a commonly used, efficacious, and adaptable means of achieving regulatory goals [31], including coping with negative experiences by alleviating negative moods and feelings [17].

Crucial to this process is selecting the music that can help the listener achieve a given mood regulation target, which is often not an easy task. To support listeners in this process, emotion-aware music recommendation systems have become popular, as they offer the possibility of exploring large music libraries using affective cues. Indeed, recommending music based on the emotion of the listener at home [10], or background music personalised for the people present in a restaurant, would provide a more personal and enjoyable user experience [14].

At the core of these systems is music emotion recognition (MER), an active field of research in music information retrieval (MIR) for the past twenty years. The automatic prediction of emotions from music is a challenging task due to the subjectivity of the annotations and the lack of sufficient data for effectively training supervised models. Song et al. [29] also argued that MER methods tend to perform well for genres such as classical music and film soundtracks, but not yet for popular music [25]. In addition, it is difficult to interpret emotional predictions in terms of musical content, especially for models based on deep neural networks. Although a few approaches exist for interpretable MER [5], the recognition accuracy of these methods is compromised, with the resulting performance loss often referred to as the "cost of explainability".

As different voices within a composition can have a distinct emotional impact [13], our work leverages state-of-the-art deep learning methods for music source separation (MSS) to reduce the complexity of MER when limited training data is available. The proposed architecture (EmoMucs) combines source separation methods with a parallel block of source-specialised models trained independently, which are subsequently aggregated with a fusion strategy. To benchmark our idea, we evaluated EmoMucs on the Popular Music with Emotional Annotations (PMEmo) dataset [38] and compared its performance with two reference deep learning models for MER. Experimental results demonstrate that our method achieves better performance for valence recognition, and comparable performance for arousal, while providing increased interpretability.

The main technical contributions are threefold: (i) we provide an in-depth evaluation of two reference models for MER; (ii) we propose a computational model that improves on the current baselines under similar experimental conditions; and finally (iii) we show that our model incurs no cost of interpretability.

The rest of the paper is structured as follows: Section 2 gives a primer on MER and an overview of related work on content-based methods, whilst Section 3 outlines the baseline architectures and our novel EmoMucs model. Section 4 details the experimental evaluation carried out, including our results on interpretability. Finally, Section 5 draws conclusions and gives directions for future work.

Figure 1. Overall architecture of our proposed model, EmoMucs: the audio mixdown is separated by Demucs into vocals, bass, drums and other; the log-mel spectrogram of each separated source is fed to a source-specialised model, and the resulting source features or predictions are aggregated by an MLP (fully connected layers with ReLU and dropout) to predict valence and arousal.
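To make the data flow of Figure 1 concrete, the following is a minimal PyTorch-style sketch of the parallel, source-specialised design with a simple fusion MLP. It is an illustrative reconstruction, not the authors' released code: the class names (SourceBranch, EmoMucsSketch), layer sizes, and the choice of a small CNN over log-mel spectrograms for each branch are placeholder assumptions; only the overall structure (four source branches whose outputs are concatenated and passed through fully connected layers with ReLU and dropout to regress valence and arousal) follows the figure.

```python
import torch
import torch.nn as nn


class SourceBranch(nn.Module):
    """Source-specialised feature extractor: a placeholder CNN over a log-mel spectrogram.
    The per-source networks used in the paper may differ; this only stands in for
    'a model trained on one separated source'."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # -> (batch, 32, 1, 1)
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, x):                               # x: (batch, 1, n_mels, time)
        return self.proj(self.conv(x).flatten(1))       # -> (batch, feat_dim)


class EmoMucsSketch(nn.Module):
    """Parallel source branches whose features are fused by an MLP into (valence, arousal)."""

    def __init__(self, sources=("vocals", "bass", "drums", "other"), feat_dim=64):
        super().__init__()
        self.branches = nn.ModuleDict({s: SourceBranch(feat_dim) for s in sources})
        self.fusion = nn.Sequential(                    # FC + ReLU + dropout, as in Figure 1
            nn.Linear(feat_dim * len(sources), 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, 2),                          # -> (valence, arousal)
        )

    def forward(self, specs):                           # specs: {source: (batch, 1, n_mels, time)}
        feats = [self.branches[s](specs[s]) for s in self.branches]
        return self.fusion(torch.cat(feats, dim=1))


# Example: random log-mel inputs for the four separated sources of a batch of 8 clips.
specs = {s: torch.randn(8, 1, 128, 256) for s in ("vocals", "bass", "drums", "other")}
valence_arousal = EmoMucsSketch()(specs)                # shape: (8, 2)
```

Note that in the paper the source-specialised models are trained independently and only then combined through a fusion strategy (operating on either their features or their predictions), so one would pre-train each branch on its own emotion-regression objective before fitting the fusion MLP; the joint module above merely illustrates the forward pass.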
2. BACKGROUND AND RELATED WORK

2.1 A primer on music emotion recognition

Prior to introducing content-based methods for MER, we provide the reader with the fundamental concepts of the task and refer to [3, 16, 34, 35] for a detailed overview.

Induced vs perceived emotions. Perceived emotions refer to the recognition of emotional meaning in music [29]. Induced (or felt) emotions refer to the feelings experienced by the listener whilst listening to music.

Annotation system. The conceptualisation of emotion, with its respective emotion taxonomy, remains a longstanding issue in MER [29]. There exist numerous emotion models, from miscellaneous [19] to domain-specific [36], categorical [11] and dimensional [24], with the latter two being the prevailing ones. Whilst the categorical model focuses on emotions evolving from universal innate emotions such as happiness, sadness, fear and anger [11], the dimensional model typically comprises an affective two-dimensional valence-arousal space: valence represents a pleasure-displeasure continuum, whilst arousal outlines an activation-deactivation continuum [24].

Time scale of predictions. Predictions can be either static or dynamic. In the former case, the representative emotion of a song is given by a single valence-arousal value [16]. Emotion annotations can also be obtained over time (e.g. second-by-second valence-arousal labels), thus resulting in dynamic predictions [27].

Audio features. Musical compositions consist of a rich array of features, such as harmony, tempo, loudness and timbre, and these all have an effect on emotion. Previous work in MIR has revolved around developing informative acoustic features [16]. However, as illustrated in other works [18, 22, 26] and to the best of our knowledge, there …

2.2 Methods for content-based MER

The field of MIR has followed a similar path to other machine learning fields. Prior to the deep learning era, most methods relied on manual audio feature extraction. Huq et al. [12] give an overview of how musical features were traditionally extracted and fed into different architectures such as support vector machines, k-nearest neighbours, random forests, deep belief networks and other regression models. These methods were tested for MER on Russell's well-established arousal and valence emotion model [24].

These were succeeded by deep learning methods. Such techniques, like binarised and bi-directional long short-term memory recurrent neural networks (LSTM-RNN) and deep belief networks, have also been successfully employed for valence and arousal prediction [33]. Other methods again used LSTM-RNNs for dynamic arousal and valence regression on the acoustic and psychoacoustic features obtained from songs [6]. Most of these works stemmed from entries in the MediaEval emotion challenge [2]. Multimodality has also been an interest of this research community, where [9] looked into MER based on both the audio signal and the lyrics of a musical track. Again, deep learning methods such as LSTM-RNNs are at the core of the architectures proposed.
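For concreteness, below is a minimal sketch of the kind of recurrent model referenced above: a bi-directional LSTM regressing per-frame valence and arousal from a sequence of acoustic or psychoacoustic feature vectors. It is not a reimplementation of any specific cited system; the feature dimensionality, hidden size, number of layers and loss are placeholder assumptions.

```python
import torch
import torch.nn as nn


class DynamicVARegressor(nn.Module):
    """Bi-directional LSTM mapping per-frame acoustic features to per-frame (valence, arousal)."""

    def __init__(self, n_features=260, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 2)   # -> (valence, arousal) per frame

    def forward(self, x):                           # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        return self.head(h)                         # -> (batch, time, 2)


# Toy example: 8 clips, 120 feature frames each, 260-dimensional feature vectors
# (all placeholder numbers). Training would minimise a regression loss such as MSE
# against time-aligned (e.g. second-by-second) valence-arousal annotations.
model = DynamicVARegressor()
frames = torch.randn(8, 120, 260)
loss = nn.functional.mse_loss(model(frames), torch.zeros(8, 120, 2))
```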
An important concern in machine learning has been to build interpretable models, so as to make them applicable to a wider array of applications. To the best of our knowledge, only a few works have attempted to build an interpretable model for MER. In [37], different model classes were built over extracted and selected features, chosen with filter and wrapper approaches followed by shrinkage methods. In [5], a deep network based on two-dimensional convolutions is trained to jointly predict "mid-level perceptual features" related to emotional qualities of music [1], together with emotion classes in a categorical annotation space.

Our work focuses on the prediction of induced emotions at a global time scale (static predictions). This is done in a continuous annotation space, as adopting a categorical one would not exhibit the same richness in induced human emotion [35]. The idea …
