Listening to “Naima”: An Automated Structural Analysis of Music from Recorded Audio

Roger B. Dannenberg
School of Computer Science, Carnegie Mellon University
email: [email protected]

Abstract

A model of music listening has been automated. A program takes digital audio as input, for example from a compact disc, and outputs an explanation of the music in terms of repeated sections and the implied structure. For example, when the program constructs an analysis of John Coltrane’s “Naima,” it generates a description that relates to the AABA form and notices that the initial AA is omitted the second time. The algorithms are presented, and results with two other input songs are also described. This work suggests that music listening is based on the detection of relationships and that relatively simple analyses can successfully recover interesting musical structure.

1. Introduction

This project was motivated by music information retrieval problems. Music information retrieval based on databases of audio requires a significant amount of metadata about the content. Some earlier work on stylistic classification indicated that simple, low-level acoustic features are useful, but not sufficient to determine music style, tonality, rhythm, structure, etc. It seems worthwhile to reexamine music analysis as a listening process and see what can be automated. A good description of music can be used to identify the chorus (useful for music browsing), to locate modulations, to suggest boundaries where solos might begin and end, and for many other retrieval tasks. In addition to music information retrieval, music listening is a key component in the construction of interactive music systems and compositions.
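To make the idea of discovering structure through the “detection of relationships” concrete, the following is a minimal sketch of one common approach: build a self-similarity matrix over chroma features and treat long off-diagonal runs of high similarity as repeated sections. The use of librosa chroma features, the similarity threshold, and the run-length heuristic are illustrative assumptions, not the algorithm this paper presents.

# A minimal sketch, assuming chroma features from the librosa library; the
# threshold and run-length heuristic are illustrative choices, not the
# algorithm described in this paper.
import numpy as np
import librosa

def self_similarity(path):
    y, sr = librosa.load(path)                        # decode audio to mono samples
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)  # 12 x n_frames pitch-class profiles
    # Normalize each frame so dot products become cosine similarities.
    c = chroma / (np.linalg.norm(chroma, axis=0, keepdims=True) + 1e-9)
    return c.T @ c                                    # n_frames x n_frames similarity matrix

def find_repeats(sim, min_len=100, thresh=0.9):
    # A long run of high similarity along diagonal "lag" means the passage
    # starting at frame i is repeated starting at frame i + lag.
    n = sim.shape[0]
    repeats = []
    for lag in range(1, n):
        run = 0
        for i in range(n - lag + 1):
            if i < n - lag and sim[i, i + lag] > thresh:
                run += 1
            else:
                if run >= min_len:
                    repeats.append((i - run, i - run + lag, run))  # (start, repeat start, length)
                run = 0
    return repeats

Applied to a recording like “Naima,” long runs at a fixed lag would mark the repeated A sections; turning such evidence into an AABA-style explanation is the analysis task described in the remainder of the paper.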