Addressing Tempo Estimation Octave Errors in Electronic Music by Incorporating Style Information Extracted from Wikipedia


Florian Hörschläger, Richard Vogl, Sebastian Böck, Peter Knees
Dept. of Computational Perception, Johannes Kepler University Linz, Austria
[email protected], [email protected], [email protected], [email protected]

ABSTRACT

A frequently occurring problem of state-of-the-art tempo estimation algorithms is that the predicted tempo for a piece of music is a whole-number multiple or fraction of the tempo as perceived by humans (tempo octave errors). While often this is simply caused by shortcomings of the used algorithms, in certain cases this problem can be attributed to the fact that the actual number of beats per minute (BPM) within a piece is not a listener's only criterion for considering it "fast" or "slow". Indeed, it can be argued that the perceived style of music sets an expectation of tempo and therefore influences its perception.

In this paper, we address the issue of tempo octave errors in the context of electronic music styles. We propose to incorporate stylistic information by means of probability density functions that represent tempo expectations for the individual music styles. In combination with a style classifier, those probability density functions are used to choose the most probable BPM estimate for a sample. Our evaluation shows a considerable improvement of tempo estimation accuracy on the test dataset.

1. INTRODUCTION

A well-known problem of tempo estimation algorithms is the so-called tempo octave error, i.e., the tempo as predicted by the algorithm is a whole-number multiple or fraction of the actual tempo as perceived by humans. Since these errors on the metrical level are not always clearly agreed on by humans, evaluations performed in the literature discount octave tempo errors by introducing secondary accuracy values (i.e., accuracy2) which also consider double, triple, half, and third of the ground-truth tempo as a correct prediction. On average, these values exceed the primary accuracy values based only on exact matches by about 20 percentage points, cf. [1, 2].
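To make the two measures concrete, the following is a minimal sketch of how accuracy1 and accuracy2 are typically computed; the ±4% tolerance and all function names are our assumptions for illustration, not a specification taken from the paper or the cited evaluations.

```python
def within_tolerance(estimate, reference, tol=0.04):
    """True if estimate lies within a relative tolerance of reference."""
    return abs(estimate - reference) <= tol * reference

def accuracy1(estimates, references, tol=0.04):
    """Fraction of pieces whose estimated BPM matches the ground truth."""
    hits = sum(within_tolerance(e, r, tol)
               for e, r in zip(estimates, references))
    return hits / len(references)

def accuracy2(estimates, references, tol=0.04):
    """Like accuracy1, but double, triple, half, and third of the
    ground-truth tempo also count as correct (octave errors ignored)."""
    factors = (1.0, 2.0, 3.0, 0.5, 1.0 / 3.0)
    hits = sum(any(within_tolerance(e, r * f, tol) for f in factors)
               for e, r in zip(estimates, references))
    return hits / len(references)

# A track at 174 BPM estimated at 87 BPM is wrong for accuracy1 but
# forgiven by accuracy2 -- exactly the octave error in question.
print(accuracy1([87.0], [174.0]))  # 0.0
print(accuracy2([87.0], [174.0]))  # 1.0
```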
While for tasks such as automatic tempo alignment for DJ mixes this can be a tolerable mistake, for making accurate predictions of the semantic category of musical "speed," i.e., whether a piece of music is considered "fast" or "slow," this discrepancy shows that there is still a need for improvement. To this end, several approaches have directly addressed the octave error problem, either by incorporating a-priori knowledge of tempo distributions [3], spectral and rhythmic similarity [1, 2], source separation [4, 5], or classification into speed categories based on audio [6, 7] and user-generated meta-data [8, 9].

The importance of stylistic context for the task of beat tracking has been stated before [10]. Similarly, in this work, we argue that there is a connection between the style of the music and its perceived tempo (which is strongly related to the perception of the beat). More precisely, we assume that human listeners take not only rhythmic information (onsets, percussive elements) but also stylistic cues (such as instrumentation or loudness) into account when estimating tempo.¹ Therefore, when including knowledge of the style of the music, tempo estimation accuracy should improve. Consider this simple example: if we knew that an audio sample is a drum and bass track, it would be unreasonable to estimate a tempo below 160 BPM. Yet our findings show that uninformed estimators can produce such an output. Therefore, we propose to incorporate stylistic information into the tempo estimation process by means of a music style classifier trained on audio data. In addition to predicting multiple hypotheses on the tempo of a piece of music using a state-of-the-art tempo estimation algorithm, we determine its style using the classifier and choose the tempo hypothesis that is most likely in the context of the determined style. For this, we utilize probability density functions (PDFs) constructed from data extracted from Wikipedia articles (i.e., BPM ranges or values as well as tempo relationships).

The remainder of this paper is organized as follows. Section 2 covers a representative selection of related work. In Section 3 the proposed system is presented. This includes our strategy to extract style information from Wikipedia, in particular information on style-specific tempo ranges. In Section 4 we evaluate our approach using a new data set for tempo estimation in electronic music. The paper concludes with a short discussion and ideas for future work in Section 5.

¹ This assumption is supported by psychological evidence that identification of pieces as well as recognition of styles and emotions can be performed by humans within 400 msecs [11]. This information can therefore prime the assessment of rhythm and tempo, which requires more context.

Copyright: © 2015 Florian Hörschläger, Richard Vogl, Sebastian Böck, Peter Knees. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

2. RELATED WORK

Gouyon et al. compare and discuss 11 tempo estimation algorithms submitted to the ISMIR'04 tempo induction contest [12]. Their paper shows that all submitted algorithms perform much better if tempo octave errors are considered as correctly estimated tempos. By ignoring this kind of error it was already possible to reach accuracies beyond 80%. A more recent comparison of state-of-the-art tempo estimation algorithms is given by Zapata and Gómez [13]. Again, the 11 algorithms compared in [12] are discussed, along with 12 new approaches. In this comparison, again, the algorithm presented by Klapuri et al. [3] performs best if tempo octave errors are ignored.

The tempo estimation algorithm described in [3] uses a bank of comb filters, similar to the approach by Scheirer [14]. One important difference is that while Scheirer uses only five frequency subbands to calculate the input signal for the comb filters, Klapuri et al. use 36 frequency subbands which are combined into 4 so-called "accent bands". This was done with the goal that changes in narrower frequency bands are also detected, while maintaining the ability to detect more global changes, which was already the case in [14].

The two approaches presented by Seyerlehner et al. [1] are based on two periodicity-sensitive features (the autocorrelation function and fluctuation patterns), which are each used to train a k-Nearest-Neighbour classifier. The results obtained with this algorithm are at least comparable to the best results found in [12].

Peeters [2] uses a frequency-domain analysis of an onset-energy function to extract so-called spectral templates. During training, reference spectral templates are created using two different approaches: first, an unsupervised approach, where clustering of similar spectral templates is done via a fuzzy k-means algorithm, and second, a supervised variant, where the 8 genres of the training dataset (ballroom) are used. Viterbi decoding is then used to determine the two hidden variables (tempo and rhythmic pattern) of the spectral templates to estimate the tempo of an audio track.

Gkiokas et al. [7] and Eronen and Klapuri [6] use machine learning approaches to further improve tempo estimation results; music has also been classified into speed categories using user tags of "fast" and "slow" [9]. Independent of a specific method for tempo estimation, Moelants and McKinney investigate the factors of a piece being perceived as fast, slow, or temporally ambiguous [16]. In this work, we focus on predicting the correct beats per minute (BPM) for a music piece rather than directly classifying music into speed categories.

3. METHOD

Our approach consists of a two-stage tempo estimation process (visualized in Figure 1). First, the Tempo Estimator generates n = 10 tempo estimates using a state-of-the-art tempo estimation approach. Second, the Style Estimator classifies the audio file into a style. Finally, the Tempo Ranker chooses the most probable tempo in the context of the classified style, as sketched below. In the following, we describe the used tempo induction approach (which also serves as a reference baseline for our evaluations), the construction of the style classifier, and the strategy for picking the most probable tempo estimate. In the context of this work, we also present an approach for the extraction of music-style-specific tempo information from Wikipedia articles and describe how it is used within the described scheme.
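As an illustration of the ranking stage, here is a minimal sketch of style-conditioned selection among tempo hypotheses. The Gaussian form of the style PDFs, their parameters, and all names are our assumptions for illustration; in the actual system the per-style densities are constructed from BPM ranges and tempo relationships extracted from Wikipedia.

```python
import math

# Hypothetical style tempo expectations as (mean BPM, standard deviation);
# placeholders only -- not the parametrization used in the paper.
STYLE_PDFS = {
    "drum and bass": (172.0, 8.0),
    "house": (126.0, 6.0),
    "dubstep": (140.0, 5.0),
}

def gaussian_pdf(x, mean, std):
    """Evaluate a univariate Gaussian density at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def rank_tempi(hypotheses, style):
    """Order tempo hypotheses by their likelihood under the style's PDF."""
    mean, std = STYLE_PDFS[style]
    return sorted(hypotheses,
                  key=lambda bpm: gaussian_pdf(bpm, mean, std),
                  reverse=True)

# An uninformed estimator may rank 87 BPM first for a drum and bass
# track; conditioned on the style, the 174 BPM octave wins instead.
hypotheses = [87.0, 174.0, 130.5, 58.0]
print(rank_tempi(hypotheses, "drum and bass")[0])  # 174.0
```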
3.1 Baseline Tempo Estimator

In the tempo estimation stage, any state-of-the-art tempo induction algorithm that can provide more than one tempo hypothesis can be used. In this work, we make use of the beat detection method introduced by Böck in [17]. The algorithm is based on bidirectional long short-term memory (BLSTM) recurrent neural networks. As network input, six variations of short-time Fourier transform (STFT) spectrograms transformed to the Mel scale (20 bands) are used. The six variations consist of three spectrograms which are calculated using window sizes of 1024, 2048, and 4096 samples. For this process, audio data with a sampling rate of 44.1 kHz is used, which results in window lengths of 23.2 ms, 46.4 ms, and 92.8 ms, respectively. In addition to these three spectrograms, the positive first-order difference to the median of the last 0.41 s of every spectrogram is used as input. The neural networks are randomly initialized [...]
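To illustrate this input representation, the following sketch computes the six feature variations with librosa. The hop size, the log compression, and the frame-based implementation of the 0.41 s median context are our assumptions, not details given in the paper.

```python
import numpy as np
import librosa

def blstm_input_features(path, hop=441):
    """Compute six input variations: three 20-band Mel spectrograms at
    different STFT window sizes, plus the positive first-order
    difference of each to a median over roughly the last 0.41 s."""
    y, sr = librosa.load(path, sr=44100)
    features = []
    for n_fft in (1024, 2048, 4096):  # 23.2 ms, 46.4 ms, 92.8 ms at 44.1 kHz
        mel = librosa.feature.melspectrogram(
            y=y, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=20)
        mel = np.log1p(mel)  # log compression (our assumption)
        features.append(mel)
        # Median over the preceding ~0.41 s: about 41 frames at a 10 ms hop.
        context = int(round(0.41 * sr / hop))
        med = np.stack([np.median(mel[:, max(0, t - context):t + 1], axis=1)
                        for t in range(mel.shape[1])], axis=1)
        # Keep only the positive part of the difference.
        features.append(np.maximum(mel - med, 0.0))
    return features  # list of six (20, T) arrays
```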