Modeling the Compatibility of Stem Tracks to Generate Music Mashups


The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)

Jiawen Huang*,2, Ju-Chiang Wang1, Jordan B. L. Smith1, Xuchen Song1, and Yuxuan Wang1
1 ByteDance
2 Queen Mary University of London
[email protected], {ju-chiang.wang, jordan.smith, xuchen.song, [email protected]}

* The author performed this work as an intern at ByteDance.
Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

A music mashup combines audio elements from two or more songs to create a new work. To reduce the time and effort required to make them, researchers have developed algorithms that predict the compatibility of audio elements. Prior work has focused on mixing unaltered excerpts, but advances in source separation enable the creation of mashups from isolated stems (e.g., vocals, drums, bass, etc.). In this work, we take advantage of separated stems not just for creating mashups, but for training a model that predicts the mutual compatibility of groups of excerpts, using self-supervised and semi-supervised methods. Specifically, we first produce a random mashup creation pipeline that combines stem tracks obtained via source separation, with key and tempo automatically adjusted to match, since these are prerequisites for high-quality mashups. To train a model to predict compatibility, we use stem tracks obtained from the same song as positive examples, and random combinations of stems with key and/or tempo unadjusted as negative examples. To improve the model and use more data, we also train on "average" examples: random combinations with matching key and tempo, which we treat as unlabeled data since their true compatibility is unknown. To determine whether the combined signal or the set of stem signals is more indicative of the quality of the result, we experiment with two model architectures and train them using a semi-supervised learning technique. Finally, we conduct objective and subjective evaluations of the system, comparing it to a standard rule-based system.

Introduction

In Margaret Boden's account of how creativity works, "combinational" creativity (the juxtaposition of unrelated ideas) and "exploratory" creativity (searching within the rules of a style for exciting possibilities) are two essential modes of creative thinking (Boden 2007). Modeling these processes computationally is an important step for developing artificial creativity in the field of AI (Jordanous 2014). The combinatory possibilities and joy of exploration are perhaps two causes for the continued popularity of creating music mashups.

Mashups are a popular genre of music where new songs are created by combining audio excerpts (called samples) of other songs. Typically, the vocal part from one song is juxtaposed with the instrumental part of another, although it is also common for basic mashups to include samples of 3 songs (Boone 2013). However, creating mashups is challenging: it requires an expert's ear to decide whether two samples would make a good mashup, and it is time-consuming to search for pairs of songs that would work well together. Both issues are exacerbated when aiming to combine elements of three or more songs.

Accordingly, efforts to assist users in creating mashups or to automate the process have continued for over a decade. At minimum, two samples being combined should have the same tempo and time signature, and they should not clash harmonically; these criteria informed early systems for assisting mashup creation (Tokui 2008; Griffin, Kim, and Turnbull 2010), but they are also easy to meet using beat-tracking, key estimation, and audio-stretching algorithms. To predict the mashability (i.e., the compatibility) of a candidate group of samples is more challenging, but allows one to automate the creation of mashups.

Previous methods for estimating compatibility have relied on rule-based systems with hand-crafted features (Davies et al. 2014; Lee et al. 2015), even though the criteria used by listeners to judge mashup quality are unknown and undoubtedly complex. We thus propose to use a neural network to learn the criteria. The central challenge in training such a model is that there is no training data: i.e., no datasets of audio samples with annotations defining which combinations of samples are better than others. To address this, we propose to use self-supervised learning (by leveraging existing music as training data) and a semi-supervised approach that maximizes the utility of the other data we create.

We create training data by applying a supervised music source separation (MSS) algorithm (e.g., Jansson et al. 2017; Stöter, Liutkus, and Ito 2018) to extract independent stem tracks from existing music (i.e., separated vocal, bass, drum and other parts). We can then recombine the stems to generate many new mashups, with the original combinations serving as ground-truth examples of 'good' mashups; in this way our model is self-supervised.¹ It is straightforward to use separated signals to help Music Information Retrieval (MIR) tasks such as music transcription (Pedersoli, Tzanetakis, and Yi 2020), singing voice detection (Stoller, Ewert, and Dixon 2018), and modeling vocal features (Lee 2019). However, to the best of our knowledge, no prior work has leveraged supervised MSS for automatic generation of new music pieces.

¹ This may be different from conventional settings such as learning a representation of signals or their temporal coherence (Misra, Zitnick, and Hebert 2016; Huang, Chou, and Yang 2018).

The above explains how to acquire positive examples for training the model. To obtain negative examples, we can use random combinations of stems with different keys and tempo that are almost guaranteed to sound awful. However, the extreme difference in compatibility between these two cases may lead to a highly polarized model that only regards stems as compatible if they were extracted from the same song. To avoid this, we use semi-supervised learning: we create random mashups that meet the minimum requirements for viability (combinations where the tempo and key are automatically matched) and treat them as "unlabeled" instances. This step aims to improve the reliability of the model, and it also means that our model sees more mashups, including many potentially "creative" ones, because the stems are sourced from different genres. This has been described as a key aspect of successful mashups: "the combination of musical congruity and contextual incongruity" (Brøvig-Hanssen and Harkins 2012).

Our contributions can be summarized as follows. First, we propose a novel framework that leverages the power of MSS and machine learning to generate music mashups. Second, we propose techniques to generate data without human labels and develop two deep neural network architectures that are trained in a self- and semi-supervised way. Third, we conduct objective and subjective evaluations, where we propose to use an unstudied dataset, Ayumix, to evaluate the task. The results demonstrate that our AI mashups can achieve good overall quality according to our participants.

Related Work

Early approaches to estimating mashability relied on fixed notions of what makes two sound clips mesh well. AutoMashUpper (AMU) modeled the mashability of two clips as a weighted sum of harmonic compatibility, rhythmic compatibility and spectral balance, each of which is computed as a correlation between two beat-synchronous representations (Davies et al. 2014). AMU is a well-described, well-motivated baseline model that has been re-implemented for open-source use.

In contrast to AMU, our system uses a supervised model where the training data were obtained by running MSS on existing songs. These steps were also taken to train Neural Loop Combiner (NLC), a neural network model that estimates the audio compatibility of one-bar loops (Chen, Smith, and Yang 2020). However, NLC uses an unsupervised MSS algorithm designed to isolate looped content (Smith, Kawasaki, and Goto 2019), resulting in a very different system from ours, which uses supervised MSS to isolate vocal, bass, drum, and other parts.

First, since vocals are looped less often, NLC likely had far fewer instances of vocals in its extracted training set. This is a drawback since vocals are an essential part of mashups. Second, the data acquisition pipeline for NLC involves several heuristics to improve source separation quality, and the outputs are not guaranteed to contain distinct instruments (e.g., a positive training pair could include two drum loops). In contrast, the supervised separation we use leads to highly distinct stems, which is more appropriate for creating mashups. It also enables us to train a model where the input audio clips have a fixed role (namely, vocal, harmonic, and drum parts), which is important, since the features that determine compatibility likely depend strongly on the role. This design is also novel since it allows us to directly estimate the mashability of groups of stems instead of learning a representation space for embedding the stems, since mashability can be non-transitive.

A separate difference is that the authors of NLC chose to focus strictly on hip-hop, whereas our training dataset spans a wide variety of genres. We do this to obtain a more generalizable model (the semi-supervised technique explained later assists here too), and because much of the joy of mashups comes from the surprise of hearing two disparate samples from different genres work well together (Brøvig-Hanssen and Harkins 2012; Boone 2013).

Data Generation Pipeline

The pipeline aims to generate mashup candidates by mixing stems under different conditions. The candidates are then sent to a machine learning model (described in the next section) to predict their compatibility.
