A Prototypical Triplet Loss for Cover Detection
Guillaume Doras
Sacem & Ircam Lab, CNRS, Sorbonne Université
[email protected]

Geoffroy Peeters
LTCI, Télécom Paris, Institut Polytechnique de Paris
[email protected]

ABSTRACT

Automatic cover detection – the task of finding in an audio dataset all covers of a query track – has long been a challenging theoretical problem in the MIR community. It has also become a practical need for music composers' societies, which need to detect automatically whether an audio excerpt embeds musical content belonging to their catalog.

In a recent work, we addressed this problem with a convolutional neural network mapping each track's dominant melody to an embedding vector, trained to minimize the distance between cover pairs in the embedding space, while maximizing it for non-covers. We showed in particular that training this model with enough works having five or more covers yields state-of-the-art results.

This, however, does not reflect the realistic use case, where music catalogs typically contain works with zero or at most one or two covers. We thus introduce here a new test set incorporating these constraints, and propose two contributions to improve our model's accuracy under these stricter conditions: we replace the dominant melody with a multi-pitch representation as input data, and describe a novel prototypical triplet loss designed to improve covers clustering. We show that these changes improve results significantly for two concrete use cases: large dataset lookup and live song identification.

Index Terms— triplet loss, prototypical triplet loss, dominant melody, multi-pitch, cover detection

1. INTRODUCTION

Covers are different interpretations of the same original musical work. They usually share a similar melodic line, but typically differ greatly in one or several other dimensions, such as structure, tempo, key, instrumentation, genre, etc. Automatic cover detection – the task of finding in an audio corpus all covers of one or several query tracks – has long been seen as a challenging theoretical problem in MIR. It has also become an acute practical problem for music composers' societies facing the continuous expansion of user-generated online content including musical excerpts under copyright.

Cover detection is not stricto sensu a classification problem: due to the ever-growing number of musical works (the classes) and the relatively small number of covers per work (the samples), the actual question is not so much "to which work does this track belong?" as "to which other tracks is this track the most similar?". Formally, cover detection therefore requires establishing a similarity relationship between a query track and a reference track: it implies the composition of a feature extraction function preserving the musical facets common to different covers of the same work, followed by a simple pairwise comparison function allowing fast lookup in large music corpora.

In a recent work, we proposed such a solution based on a convolutional neural network designed to map each track's dominant melody representation to a single embedding vector. We trained this network to minimize the Euclidean distance between cover pair embeddings, while maximizing it for non-cover pairs [1]. We showed that such a model trained with works having enough covers can learn a similarity metric between dominant melodies. We showed in particular that training this model with works having at least five to ten covers yields state-of-the-art accuracy on small and large scale datasets.

This does not, however, reflect the realistic use case: in practice, the reference dataset is usually a music catalog containing one original interpretation of each work, along with typically zero or, more rarely, one or two covers. Moreover, query tracks have no particular reason to be covers of any work seen during training.

We thus extend here our earlier work as follows: we first build a new test set reflecting realistic conditions, i.e. disjoint from the original training set and containing only works having very few covers. Our main contribution to improve our model's generalization under these stricter conditions is then twofold: we propose the use of a multi-pitch representation instead of the dominant melody as input data, and we introduce a novel prototypical triplet loss designed to improve covers clustering. We finally show that these changes improve results significantly on two common tasks in the industry: large dataset lookup and live song identification.

In the rest of this paper, we review in §2 the main concepts used in this work. We describe our proposed improvements, namely the multi-pitch representation and the prototypical triplet loss, in §3. We detail our large dataset lookup experiment and its results in §4, and the live song identification experiment and its results in §5. We conclude with future improvements to bring to our method.
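As described above, the model of [1] is trained to bring cover pair embeddings closer together while pushing non-cover pairs apart. A standard way to express this objective is the triplet loss; the sketch below is a minimal PyTorch-style illustration, where the margin value and function names are placeholders for the example rather than the exact configuration used in [1].

    import torch
    import torch.nn.functional as F

    def triplet_loss(anchor, positive, negative, margin=1.0):
        # anchor and positive are embeddings of two covers of the same work,
        # negative is the embedding of a non-cover track; the margin value
        # here is only a placeholder hyper-parameter.
        d_pos = F.pairwise_distance(anchor, positive)  # cover pair distance
        d_neg = F.pairwise_distance(anchor, negative)  # non-cover pair distance
        return F.relu(d_pos - d_neg + margin).mean()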
2. RELATED WORK

2.1. Multi-pitch estimation

Dominant melody and multi-pitch estimation have long been another challenging problem in the MIR community [2, 3]. A major breakthrough came recently with the introduction of a convolutional network that learns to extract the dominant melody from the audio Harmonic CQT [4].

We built upon this idea in a recent work and proposed for this task an adaptation of U-Net – a model originally designed for medical image segmentation [5, 6] – which yields state-of-the-art results.

A multi-pitch representation typically contains the dominant melody, and usually also includes the bass line and some extra isolated melodic lines, as can be seen in Fig. 1, which shows the two representations of the same track obtained by two versions of our U-Net trained for each task.

Fig. 1: Dominant melody (left) and multi-pitch (right) estimated by our U-Net [6] for the first 30 sec. of Ella Fitzgerald's interpretation of Summertime. (Panels: "Summertime (Ella Fitzgerald) – F0" and "Summertime (Ella Fitzgerald) – Multi F0"; axes: notes C0–C6 vs. time in sec.)

In the following, we propose to compare cover detection results obtained when using the dominant melody vs. multi-pitch as input data. Our assumption behind this proposal is that the multi-pitch representation embeds useful information shared by covers that is not present in the dominant melody.

2.2. Cover detection

Successful approaches in cover detection usually first extract an input representation preserving musical facets common to different versions – in particular the dominant melody [7, 8, 9] or the harmonic/tonal progression [10, 11, 12, 13, 14] – and then compute a similarity score between pairs of melodic and/or harmonic sequences, typically using a cross-correlation [10, 15] or an alignment scoring algorithm [16, 11].

To alleviate the cost of the comparison function, various authors proposed to encode each audio representation as a single scalar or vector – its embedding – and to reduce the similarity computation to a simple Euclidean distance between embeddings. This approach shifts the computational burden to the audio feature extraction function – which can be done offline and stored – and allows faster lookup in large datasets. Originally, embeddings were hashing functions of the original representation [17, 8], but as for many other MIR applications, ad-hoc – and somewhat arbitrary – hand-crafted feature extraction was progressively replaced with data-driven automatic metric learning [18, 19, 20, 21].

In the following, we extend in particular our earlier work describing a model learning a similarity metric between dominant melody representations [1].

Prototypical networks compute for each sample query a class probability distribution based on its distance to the centroid – or prototype – of each class, and are trained to maximize the probability of the correct class [30]. In the following, we build upon these previous works and introduce the prototypical triplet loss, which represents a class by its prototype rather than by the set of its samples.

3. PROPOSED METHOD

We now extend our earlier work, replacing the dominant melody with multi-pitch as input data, and adapting the standard triplet loss into a novel training loss designed to improve covers clustering.
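The idea of representing each work by the centroid of its cover embeddings can be combined with the margin formulation above. The following PyTorch-style sketch shows one plausible reading of such a prototypical triplet loss; it is only an illustration of the concept stated above, with the negative selection (nearest other prototype) and the margin chosen arbitrarily, and is not the exact loss defined later in this paper.

    import torch
    import torch.nn.functional as F

    def prototypical_triplet_loss(embeddings, labels, margin=1.0):
        # embeddings: (n_tracks, dim) tensor, labels: (n_tracks,) work ids.
        # Each work is represented by its prototype (centroid of its cover
        # embeddings); every track is pulled towards its own work's prototype
        # and pushed away from the nearest other prototype.
        classes = labels.unique()
        protos = torch.stack([embeddings[labels == c].mean(dim=0) for c in classes])
        d = torch.cdist(embeddings, protos)                # (n_tracks, n_works)
        own = labels.unsqueeze(1) == classes.unsqueeze(0)  # mask of each track's own work
        d_pos = d[own]                                     # distance to own prototype
        d_neg = d.masked_fill(own, float("inf")).min(dim=1).values
        return F.relu(d_pos - d_neg + margin).mean()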
3.1. Realistic test set

We built for our previous work a dataset, SHS5+, of covers provided by the SecondHandSong website API. This initial dataset contains about 7.5k works with at least five covers per work, for a total of 62k tracks, i.e. 8.2 covers per work on average. This high amount was required for efficient training of our model, but does not mimic a realistic music catalog.

We thus built for this paper another set, SHS4–, totally disjoint from the original training set, containing only the works provided by SecondHandSong with exactly two, three or four covers per work. This new set contains about 49k songs for about 20k works, i.e. 2.5 covers per work on average. It better reflects a realistic music catalog, and implies harder testing conditions: having fewer covers per work implies more confusing non-cover tracks for each query. In the following, we have thus used SHS4– as our new test set.

3.2. Multi-pitch as input data

Dominant melody and multi-pitch representations are extracted from the audio HCQT across 6 octaves with a frequency resolution of 5 bins per semitone, as in [4, 6]. A dominant melody typically does not exhibit important variations, and its frequency range can be trimmed to 3 octaves around its mean pitch value with no loss of information, as seen in [1]. Conversely, multi-pitch values are spread across several octaves, with two major modes corresponding to the bass lines
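To make the trimming step described in §3.2 concrete, the following NumPy sketch crops a dominant-melody salience map to 3 octaves (at 5 bins per semitone, i.e. 180 bins) centred on its mean active pitch. The function name, the zero-padding at the edges, and the (frequency, time) layout are illustrative assumptions, not the authors' implementation.

    import numpy as np

    BINS_PER_SEMITONE = 5
    BINS_PER_OCTAVE = 12 * BINS_PER_SEMITONE   # 60 bins per octave
    OUT_BINS = 3 * BINS_PER_OCTAVE             # 3 octaves = 180 bins

    def trim_around_mean_pitch(f0_map):
        # f0_map: (n_bins, n_frames) dominant-melody salience map covering
        # 6 octaves (360 bins). Returns a (180, n_frames) map centred on the
        # mean active pitch, zero-padded where the window exceeds the edges.
        n_bins, n_frames = f0_map.shape
        active_bins, _ = np.nonzero(f0_map)
        center = int(active_bins.mean()) if active_bins.size else n_bins // 2
        lo = center - OUT_BINS // 2
        hi = lo + OUT_BINS
        out = np.zeros((OUT_BINS, n_frames), dtype=f0_map.dtype)
        src_lo, src_hi = max(lo, 0), min(hi, n_bins)
        out[src_lo - lo:src_hi - lo, :] = f0_map[src_lo:src_hi, :]
        return out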