Multi-Label Music Genre Classification from Audio, Text, and Images Using Deep Features


Multi-Label Music Genre Classification from Audio, Text, and Images Using Deep Features

Sergio Oramas (1), Oriol Nieto (2), Francesco Barbieri (3), Xavier Serra (1)
(1) Music Technology Group, Universitat Pompeu Fabra  (2) Pandora Media Inc.  (3) TALN Group, Universitat Pompeu Fabra
{sergio.oramas, francesco.barbieri, [email protected], [email protected]

arXiv:1707.04916v1 [cs.IR] 16 Jul 2017

ABSTRACT

Music genres allow us to categorize musical items that share common characteristics. Although these categories are not mutually exclusive, most related research is traditionally focused on classifying tracks into a single class. Furthermore, these categories (e.g., Pop, Rock) tend to be too broad for certain applications. In this work we aim to expand this task by categorizing musical items into multiple and fine-grained labels, using three different data modalities: audio, text, and images. To this end we present MuMu, a new dataset of more than 31k albums classified into 250 genre classes. For every album we have collected the cover image, text reviews, and audio tracks. Additionally, we propose an approach for multi-label genre classification based on the combination of feature embeddings learned with state-of-the-art deep learning methodologies. Experiments show major differences between modalities, which not only introduce new baselines for multi-label genre classification, but also suggest that combining them yields improved results.

1. INTRODUCTION

Music genres are useful labels to classify musical items into broader categories that share similar musical, regional, or temporal characteristics. Dealing with large collections of music poses numerous challenges when retrieving and classifying information [3]. Music streaming services tend to offer catalogs of tens of millions of tracks, for which tasks such as music classification are of utmost importance.

Music genre classification is a widely studied problem in the Music Information Research (MIR) community [40]. However, almost all related work concentrates on multi-class classification of music items into broad genres (e.g., Pop, Rock), assigning a single label per item. This is problematic since there may be hundreds of more specific music genres [33], and these may not be necessarily mutually exclusive (i.e., a song could be Pop, and at the same time have elements from Deep House and a Reggae groove). In this work we aim to advance the field of music classification by framing it as multi-label genre classification of fine-grained genres.

To this end, we present MuMu, a new large-scale multimodal dataset for multi-label music genre classification. MuMu contains information on roughly 31k albums classified into one or more of 250 genre classes. For every album we analyze the cover image, text reviews, and audio tracks, with a total of approximately 147k audio tracks and 447k album reviews. Furthermore, we exploit this dataset with a novel deep learning approach to learn multiple genre labels for every album using different data modalities (i.e., audio, text, and image). In addition, we combine these modalities to study how the different combinations behave.

Results show how feature learning using deep neural networks substantially surpasses traditional approaches based on handcrafted features, reducing the gap between text-based and audio-based classification [29]. Moreover, an extensive comparison of different deep learning architectures for audio classification is provided, including the usage of a dimensionality reduction approach that yields improved results. Finally, we show how the late fusion of feature vectors learned from different modalities achieves better scores than each of them individually.

© Sergio Oramas, Oriol Nieto, Francesco Barbieri, Xavier Serra. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: Sergio Oramas, Oriol Nieto, Francesco Barbieri, Xavier Serra. "Multi-label Music Genre Classification from Audio, Text, and Images Using Deep Features", 18th International Society for Music Information Retrieval Conference, Suzhou, China, 2017.

2. RELATED WORK

Most published music genre classification approaches rely on audio sources [2, 40]. Traditional techniques typically use handcrafted audio features, such as Mel Frequency Cepstral Coefficients (MFCCs) [20], as input of a machine learning classifier (e.g., SVM) [39, 44]. More recent deep learning approaches take advantage of visual representations of the audio signal in the form of spectrograms. These visual representations are used as input to Convolutional Neural Networks (CNNs) [5, 6, 8, 9, 34], following approaches similar to those used for image classification.

Text-based approaches have also been explored for this task. For instance, in [13, 29] album customer reviews are used as input for the classification, whereas in [4, 22] song lyrics are employed. By contrast, there are a limited number of papers dealing with image-based genre classification [18]. Most multimodal approaches for this task found in the literature combine audio and song lyrics as text [16, 27]. Moreover, the combination of audio and video has also been explored [37]. However, the authors are not aware of published multimodal approaches for music genre classification that involve deep learning.

Multi-label classification is a widely studied problem [14, 43]. Despite the scarcity of approaches for multi-label classification of music genres [36, 46], there is a long tradition in MIR of tag classification, which is a multi-label problem [5, 46].

3. MULTIMODAL DATASET

To the best of our knowledge, there are no publicly available large-scale datasets that encompass audio, images, text, and multi-label annotations. Therefore, we present MuMu, a new Multimodal Music dataset with multi-label genre annotations that combines information from the Amazon Reviews dataset [23] and the Million Song Dataset (MSD) [1]. The former contains millions of album customer reviews and album metadata gathered from Amazon.com. The latter is a collection of metadata and precomputed audio features for a million songs.

To map the information from both datasets we use MusicBrainz. For every album in the Amazon dataset, we query MusicBrainz with the album title and artist name to find the best possible match. Matching is performed using the same methodology described in [30], following a pairwise entity resolution approach based on string similarity. Following this approach, we were able to map 60% of the Amazon dataset. For all the matched albums, we obtain the MusicBrainz recording ids of their songs. With these, we use an available mapping from the MSD to MusicBrainz to obtain the subset of recordings present in the MSD. From the mapped recordings, we only keep those associated with a unique album. This process yields the final set of 147,295 songs, which belong to 31,471 albums.

The song features provided by the MSD are not generally suitable for deep learning [45], so we instead use in our experiments audio previews between 15 and 30 seconds retrieved from 7digital.com. For the mapped set of albums, there are 447,583 customer reviews in the Amazon Dataset. In addition, the Amazon Dataset provides further information about each album, such as genre annotations, average rating, selling rank, similar products, cover image url, etc. We employ the provided image url to gather the cover art of all selected albums.

3.1 Genre Labels

Amazon has its own hierarchical taxonomy of music genres, which is up to four levels in depth. In the first level there are 27 genres, and there are almost 500 genres overall. In our dataset, we keep the 250 genres that satisfy the condition of having been annotated in at least 12 albums. Every album in Amazon is annotated with one or more genres from different levels of the taxonomy. The Amazon Dataset contains complete information about the specific branch of the taxonomy used to classify each album. For instance, an album annotated as Traditional Pop comes with the complete branch information Pop / Oldies / Traditional Pop. To exploit both the taxonomic and the co-occurrence information, we provide every item with the labels of all their branches. For example, an album classified as Jazz / Vocal Jazz and Pop / Vocal Pop is annotated in MuMu with the four labels: Jazz, Vocal Jazz, Pop, and Vocal Pop. There are on average 5.97 labels per song (standard deviation 3.13).

Table 1. Top-10 most and least represented genres

Genre                % of albums | Genre              % of albums
Pop                  84.38       | Tributes           0.10
Rock                 55.29       | Harmonica Blues    0.10
Alternative Rock     27.69       | Concertos          0.10
World Music          19.31       | Bass               0.06
Jazz                 14.73       | European Jazz      0.06
Dance & Electronic   12.23       | Piano Blues        0.06
Metal                11.50       | Norway             0.06
Indie & Lo-Fi        10.45       | Slide Guitar       0.06
R&B                  10.10       | East Coast Blues   0.06
Folk                  9.69       | Girl Groups        0.06

The labels in the dataset are highly unbalanced, following a distribution which might align well with those found in real-world scenarios. In Table 1 we see the top 10 most and least represented genres and the percentage of albums annotated with each label. The unbalanced character of the genre annotations poses an interesting challenge for music classification that we also aim to exploit. Among the multiple possibilities that this dataset may offer to the MIR community, we focus our work on the multi-label classification problem, described next.

4. MULTI-LABEL CLASSIFICATION

In multi-label classification, multiple target labels may be assigned to each classifiable instance. More formally: given a set of n labels L = {l_1, l_2, ..., l_n} and a set of m items I = {i_1, i_2, ..., i_m}, we aim to model a function f able to associate a set of c labels to every item in I, where c ∈ [1, n] varies for every item.
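The branch-expansion scheme of Section 3.1 and the multi-label setup above can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's released code; the function names are ours:

```python
# Sketch of MuMu-style label handling: each album carries every label on its
# taxonomy branches (e.g., "Pop / Oldies / Traditional Pop" contributes Pop,
# Oldies, and Traditional Pop), and label sets are binarized into the 0/1
# indicator vectors used for multi-label classification.

def expand_branches(branches):
    """Collect every label along each 'A / B / C' taxonomy branch, in order."""
    labels = []
    for branch in branches:
        for label in branch.split(" / "):
            if label not in labels:
                labels.append(label)
    return labels

def binarize(label_sets, vocabulary):
    """Map each item's label set to an indicator vector over the vocabulary."""
    index = {label: i for i, label in enumerate(vocabulary)}
    matrix = []
    for labels in label_sets:
        row = [0] * len(vocabulary)
        for label in labels:
            row[index[label]] = 1
        matrix.append(row)
    return matrix

# The paper's example: Jazz / Vocal Jazz plus Pop / Vocal Pop -> four labels
album_labels = expand_branches(["Jazz / Vocal Jazz", "Pop / Vocal Pop"])
```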
Recommended publications
  • Multimodal Deep Learning for Music Genre Classification
    Oramas, S., Barbieri, F., Nieto, O., and Serra, X. (2018). Multimodal Deep Learning for Music Genre Classification, Transactions of the International Society for Music Information Retrieval, V(N), pp. xx–xx, DOI: https://doi.org/xx.xxxx/xxxx.xx. Multimodal Deep Learning for Music Genre Classification. Sergio Oramas, Francesco Barbieri, Oriol Nieto, and Xavier Serra. Abstract: Music genre labels are useful to organize songs, albums, and artists into broader groups that share similar musical characteristics. In this work, an approach to learn and combine multimodal data representations for music genre classification is proposed. Intermediate representations of deep neural networks are learned from audio tracks, text reviews, and cover art images, and further combined for classification. Experiments on single and multi-label genre classification are then carried out, evaluating the effect of the different learned representations and their combinations. Results on both experiments show how the aggregation of learned representations from different modalities improves the accuracy of the classification, suggesting that different modalities embed complementary information. In addition, the learning of a multimodal feature space increases the performance of pure audio representations, which may be especially relevant when the other modalities are available for training, but not at prediction time. Moreover, a proposed approach for dimensionality reduction of target labels yields major improvements in multi-label classification, not only in terms of accuracy but also in terms of the diversity of the predicted genres, which implies a more fine-grained categorization. Finally, a qualitative analysis of the results sheds some light on the behavior of the different modalities in the classification task.
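The late-fusion idea in this abstract — combining intermediate representations learned from each modality — can be sketched as follows. This is a hedged illustration with made-up dimensions, not the paper's actual architecture:

```python
import numpy as np

# Sketch of late fusion: feature vectors learned separately from audio, text,
# and cover art are l2-normalized and concatenated before a final classifier.

def l2_normalize(x, eps=1e-9):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def late_fusion(*modality_embeddings):
    """Concatenate per-modality embeddings, normalizing each one first so that
    no single modality dominates the fused representation by sheer scale."""
    return np.concatenate([l2_normalize(e) for e in modality_embeddings], axis=-1)

rng = np.random.default_rng(0)
audio = rng.random((4, 128))   # 4 albums, 128-d audio embedding (illustrative)
text = rng.random((4, 300))    # 300-d review embedding (illustrative)
image = rng.random((4, 256))   # 256-d cover-art embedding (illustrative)
fused = late_fusion(audio, text, image)   # shape (4, 684)
```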
  • Music Genre Classification
    Music Genre Classification. Michael Haggblade, Yang Hong, Kenny Kao. 1 Introduction: Music classification is an interesting problem with many applications, from Drinkify (a program that generates cocktails to match the music) to Pandora to dynamically generating images that complement the music. However, music genre classification has been a challenging task in the field of music information retrieval (MIR). Music genres are hard to systematically and consistently describe due to their inherent subjective nature. In this paper, we investigate various machine learning algorithms, including k-nearest neighbor (k-NN), k-means, multi-class SVM, and neural networks, to classify the following four genres: classical, jazz, metal, and pop. We relied purely on Mel Frequency Cepstral Coefficients (MFCC) to characterize our data, as recommended by previous work in this field [5]. We then applied the machine learning algorithms using the MFCCs as our features. Lastly, we explored an interesting extension by mapping images to music genres. We matched the song genres with clusters of images by using the Fourier-Mellin 2D transform to extract features and clustered the images with k-means. 2 Our Approach. 2.1 Data Retrieval Process: Marsyas (Music Analysis, Retrieval, and Synthesis for Audio Signals) is an open source software framework for audio processing with specific emphasis on Music Information Retrieval applications. Its website also provides access to a database, the GTZAN Genre Collection, of 1000 audio tracks, each 30 seconds long. There are 10 genres represented, each containing 100 tracks. All the tracks are 22050 Hz mono 16-bit audio files in .au format [2].
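The pipeline described above — MFCC features fed to a k-NN classifier — can be sketched as follows. The "MFCC" vectors here are synthetic stand-ins, since extracting real features from the GTZAN audio is out of scope for this sketch:

```python
import numpy as np

# Minimal k-NN genre classifier over MFCC-style summary vectors. Two synthetic
# "genres" are modeled as well-separated Gaussian clusters in a 13-d space
# (13 being a common number of MFCC coefficients).

def knn_predict(train_x, train_y, query, k=3):
    """Majority vote among the k training examples nearest to the query."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)

rng = np.random.default_rng(0)
classical = rng.normal(0.0, 0.3, size=(20, 13))  # cluster for genre "classical"
metal = rng.normal(2.0, 0.3, size=(20, 13))      # cluster for genre "metal"
train_x = np.vstack([classical, metal])
train_y = ["classical"] * 20 + ["metal"] * 20
```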
  • California State University, Northridge Where's The
    CALIFORNIA STATE UNIVERSITY, NORTHRIDGE. WHERE’S THE ROCK? AN EXAMINATION OF ROCK MUSIC’S LONG-TERM SUCCESS THROUGH THE GEOGRAPHY OF ITS ARTISTS’ ORIGINS AND ITS STATUS IN THE MUSIC INDUSTRY. A thesis submitted in partial fulfilment of the requirements for the Degree of Master of Arts in Geography, Geographic Information Science. By Mark T. Ulmer, May 2015. The thesis of Mark Ulmer is approved by Dr. James Craine, Dr. Ronald Davidson, and Dr. Steven Graves (Chair), California State University, Northridge. Acknowledgements: I would like to thank my committee members and all of my professors at California State University, Northridge, for helping me to broaden my geographic horizons. Dr. Boroushaki, Dr. Cox, Dr. Craine, Dr. Davidson, Dr. Graves, Dr. Jackiewicz, Dr. Maas, Dr. Sun, and David Deis, thank you! Table of Contents: Signature Page; Acknowledgements; List of Figures and Tables; Abstract; Chapter 1 – Introduction
  • Ethnomusicology a Very Short Introduction
    ETHNOMUSICOLOGY: A VERY SHORT INTRODUCTION. Timothy Rice. Contents: Chapter 1 – Defining ethnomusicology (Ethnos; Mousikē; Logos). Chapter 2 – A bit of history (Ancient and medieval precursors; Exploration and enlightenment; Nationalism, musical folklore, and ethnology; Early ethnomusicology; “Mature” ethnomusicology). Chapter 3 – Conducting
  • Fabian Holt. 2007. Genre in Popular Music. Chicago and London: University of Chicago Press
    Fabian Holt. 2007. Genre in Popular Music. Chicago and London: University of Chicago Press. Reviewed by Elizabeth K. Keenan. The concept of genre has often presented difficulties for popular music studies. Although discussions of genres and their boundaries frequently emerge from the ethnographic study of popular music, these studies often fail to theorize the category of genre itself, instead focusing on elements of style that demarcate a localized version of a particular genre. In contrast, studies focusing on genre itself often take a top-down, Adorno-inspired, culture-industry perspective, locating the concept within the marketing machinations of the major-label music industry. Both approaches have their limits, leaving open numerous questions about how genres form and change over time; how people employ the concept of genre in their musical practices, whether listening, purchasing, or performing; and how culture shapes our construction of genre categories. In Genre in Popular Music, Fabian Holt deftly integrates his theoretical model of genre into a series of ethnographic and historical case studies from both the center and the boundaries of popular music in the United States. These individual case studies, from reactions to Elvis Presley in Nashville to a portrait of the lesser-known Chicago jazz and rock musician Jeff Parker, enable Holt to develop a multifaceted theory of genre that takes into consideration how discourse and practice, local experience and translocal circulation, and industry and individuals all shape popular music genres in the United States. This book is a much needed addition to studies of genre, as well as a welcome invitation to explore more fully the possibilities for doing ethnography in American popular music studies.
  • Genre Classification Via Album Cover
    Genre Classification via Album Cover. Jonathan Li (Department of Computer Science, Stanford University, [email protected]), Di Sun (Department of Computer Science, Stanford University, [email protected]), Tongxin Cai (Department of Civil and Environmental Engineering, Stanford University, [email protected]). Abstract: Music information retrieval techniques are being used in recommender systems and to automatically categorize music. Research in music information retrieval often focuses on textual or audio features for genre classification. This paper takes a direct look at the ability of album cover artwork to predict album genre and expands upon previous work in this area. We use transfer learning with VGG-16 to improve classifier performance. Our work results in an 80% increase in our topline metric, area under the precision-recall curve: from 0.315 to 0.5671, with precision and recall increasing from 0.694 and 0.253 to 0.7336 and 0.3812, respectively. 1 Introduction: Genre provides information and topic classification in music. Additionally, the vast majority of music is written collaboratively and with several influences, which leads to multi-label (multi-genre) albums, i.e., a song and its album cover may belong to more than one genre. Music genre classification, first introduced by Tzanetakis et al. [18] in 2002 as a pattern recognition task, is one of the many branches of Music Information Retrieval (MIR). From here, other tasks on musical data, such as music generation, recommendation systems, instrument recognition and so on, can be performed. Nowadays, companies use music genre classification, either to offer recommendations to their users (such as iTunes, Spotify) or simply as a product (for example Shazam).
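The topline metric reported above, area under the precision-recall curve, can be computed as average precision over a ranking of predictions. A self-contained sketch (scikit-learn's `average_precision_score` implements the same idea):

```python
import numpy as np

# Average precision: the mean of the precision values measured at the rank
# of each true positive, which equals the area under the (stepwise)
# precision-recall curve for a ranked list of predictions.

def average_precision(y_true, scores):
    order = np.argsort(-np.asarray(scores, dtype=float))  # rank by score, desc
    y = np.asarray(y_true)[order]
    hits = np.cumsum(y)                                   # positives seen so far
    precision_at_k = hits / (np.arange(len(y)) + 1)
    return float(np.sum(precision_at_k * y) / max(int(y.sum()), 1))
```

A perfect ranking (all positives ahead of all negatives) gives an average precision of 1.0; random scores drift toward the positive rate.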
  • Genre, Form, and Medium of Performance Terms in Music Cataloging
    Genre, Form, and Medium of Performance Terms in Music Cataloging. Ann Shaffer, University of Oregon, [email protected]. OLA 2015 Pre-Conference. What We’ll Cover: the new music vocabularies for LC Genre & Form and LC Medium of Performance Terms; why they were developed; how to use them in cataloging; and what it all means. Genre, Form, & MoP in LCSH Music Terms: when describing music materials, LCSH have often blurred the distinction between Subject (what the work is about), Genre (what the work is), Form (the purpose or format of the work), and Medium of performance (what performing forces the work requires). For a book, LCSH describe what the book is about: Symphony, Opera, Bluegrass music, Rap (music), Waltz. For a music score, LCSH describe what the musical work is (Symphonies, Operas, Bluegrass music, Waltzes), what the format is ($v Scores and parts, $v Vocal scores, $v Fake books), and what the medium of performance is (Sonatas (Cello and piano), Banjo with instrumental ensemble, String quartets). For an audio recording, LCSH describe what the musical work is (Popular music, Folk music, Flamenco music, Concertos) and what the medium of performance is (Songs (High voice) with guitar, Jazz ensembles, Choruses (Men’s voices, 4 parts) with piano, Concertos (Tuba with string orchestra)). Why is this problematic? Machines can’t distinguish between topic, form, genre, and medium of performance in the LCSH; all are coded in the same field (650). Elements of headings & subdivisions
  • You Can Judge an Artist by an Album Cover: Using Images for Music Annotation
    You Can Judge an Artist by an Album Cover: Using Images for Music Annotation. Jānis Lībeks, Swarthmore College, Swarthmore, PA 19081, Email: [email protected]. Douglas Turnbull, Department of Computer Science, Ithaca College, Ithaca, NY 14850, Email: [email protected]. Abstract—While the perception of music tends to focus on our acoustic listening experience, the image of an artist can play a role in how we categorize (and thus judge) the artistic work. Based on a user study, we show that both album cover artwork and promotional photographs encode valuable information that helps place an artist into a musical context. We also describe a simple computer vision system that can predict music genre tags based on content-based image analysis. This suggests that we can automatically learn some notion of artist similarity based on visual appearance alone. Such visual information may be helpful for improving music discovery in terms of the quality of recommendations, the efficiency of the search process, and the aesthetics of the multimedia experience. Fig. 1. Illustrative promotional photos (left) and album covers (right) of artists with the tags pop (1st row), hip hop (2nd row), metal (3rd row). See Section VI for attribution. I. INTRODUCTION: Imagine that you are at a large summer music festival. You walk over to one of the side stages and observe a band which is about to begin their sound check. Each member of the band has long unkempt hair and is wearing black t-shirts, [...] appearance. Such a measure is useful, for example, because it allows us to develop a novel music retrieval paradigm in which a user can discover new artists by specifying a query image.
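The query-by-image retrieval paradigm described above can be sketched as nearest-neighbor search over visual feature vectors. The features below are random stand-ins for real content-based image features; the function name is ours:

```python
import numpy as np

# Rank "artists" by cosine similarity between a query image's feature vector
# and each gallery artist's feature vector, most similar first.

def rank_by_cosine(query, gallery):
    """Return gallery row indices sorted from most to least similar to query."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(g @ q))

gallery = np.eye(3)                 # three "artists" with orthogonal features
query = np.array([0.1, 0.9, 0.2])   # query image, closest to artist 1
ranking = rank_by_cosine(query, gallery)
```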
  • Classification of Music Genre Muralidhar Talupur, Suman Nath, Hong Yan
    Project Report for 15781. Classification of Music Genre. Muralidhar Talupur, Suman Nath, Hong Yan. As the demand for multimedia grows, the development of information retrieval systems that include information about music is of increasing concern. Radio stations and music TV channels hold archives of millions of music tapes. Gigabytes of music files are also spread over the web. Automating the searching and organizing of music files based on their genre is a challenging task. In this report, we present an approach to identifying the genre of a music file based on its content. We present experimental results showing that our approach is effective in identifying the genre of a music file with an acceptable level of confidence. 1. Introduction: Vast musical databases are currently accessible over computer networks (e.g., the Web), creating a need for sophisticated methods to search and organize these databases. Because music is a multifaceted, multi-dimensional medium, it demands specialized representations, abstractions and processing techniques for effective search that are fundamentally different from those used for other retrieval tasks. The project “Exploiting Style as Retrieval and Classification Mechanism”, undertaken by Roger Dannenberg and his group, aims to develop hierarchical, stochastic music representations and concomitant storage and retrieval mechanisms that are well suited to music’s unique characteristics by exploiting reductionist theories of musical structure and performance (i.e., musical style). As a part of this whole effort, we decided to explore whether it is possible to make predictions about music genre/type based on the content of the audio file. Work done previously shows that spectro-temporal features are sufficiently rich to allow for coarse classification of music from a brief sample of the music.
  • B-Sides, Undercurrents and Overtones: Peripheries to Popular in Music, 1960 to the Present
    B-Sides, Undercurrents and Overtones: Peripheries to Popular in Music, 1960 to the Present George Plasketes B-SIDES, UNDERCURRENTS AND OVERTONES: PERIPHERIES TO POPULAR IN MUSIC, 1960 TO THE PRESENT for Julie, Anaïs, and Rivers, heArt and soul B-Sides, Undercurrents and Overtones: Peripheries to Popular in Music, 1960 to the Present GEORGE PLASKETES Auburn University, USA © George Plasketes 2009 All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise without the prior permission of the publisher. George Plasketes has asserted his moral right under the Copyright, Designs and Patents Act, 1988, to be identified as the author of this work. Published by Ashgate Publishing Limited Ashgate Publishing Company Wey Court East Suite 420 Union Road 101 Cherry Street Farnham Burlington, VT 05401-4405 Surrey GU9 7PT USA England www.ashgate.com British Library Cataloguing in Publication Data Plasketes, George B-sides, undercurrents and overtones : peripheries to popular in music, 1960 to the present. – (Ashgate popular and folk music series) 1. Popular music – United States – History and criticism I. Title 781.6’4’0973 Library of Congress Cataloging-in-Publication Data Plasketes, George. B-sides, undercurrents and overtones : peripheries to popular in music, 1960 to the present / George Plasketes. p. cm.—(Ashgate popular and folk music series) ISBN 978-0-7546-6561-8 (hardcover : alk. paper) 1. Popular music—United
  • Music Genre/Form Terms in LCGFT Derivative Works
    Music Genre/Form Terms in LCGFT Derivative works … Adaptations Arrangements (Music) Intabulations Piano scores Simplified editions (Music) Vocal scores Excerpts Facsimiles … Illustrated works … Fingering charts … Posters Playbills (Posters) Toy and movable books … Sound books … Informational works … Fingering charts … Posters Playbills (Posters) Press releases Programs (Publications) Concert programs Dance programs Film festival programs Memorial service programs Opera programs Theater programs … Reference works Catalogs … Discographies ... Thematic catalogs (Music) … Reviews Book reviews Dance reviews Motion picture reviews Music reviews Television program reviews Theater reviews Instructional and educational works Teaching pieces (Music) Methods (Music) Studies (Music) Music Accompaniments (Music) Recorded accompaniments Karaoke Arrangements (Music) Intabulations Piano scores Simplified editions (Music) Vocal scores Art music Aʼak Aleatory music Open form music Anthems Ballades (Instrumental music) Barcaroles Cadenzas Canons (Music) Rounds (Music) Cantatas Carnatic music Ālāpa Chamber music Part songs Balletti (Part songs) Cacce (Part songs) Canti carnascialeschi Canzonets (Part songs) Ensaladas Madrigals (Music) Motets Rounds (Music) Villotte Chorale preludes Concert etudes Concertos Concerti grossi Dastgāhs Dialogues (Music) Fanfares Finales (Music) Fugues Gagaku Bugaku (Music) Saibara Hát ả đào Hát bội Heike biwa Hindustani music Dādrās Dhrupad Dhuns Gats (Music) Khayāl Honkyoku Interludes (Music) Entremés (Music) Tonadillas Kacapi-suling
  • Bikerbar Radio Songs March-01
    BikerBar Radio Songs March-01 Song Name Artist Name Album Name 105 Fred Eaglesmith Lipstick Lies & Gasoline 2002 Bob Schneider Lonelyland 10 Six Days On The Road BR5-49 Coast to Coast Live 11 Cracker Jack BR5-49 Coast to Coast Live 29 Ways Marc Cohn Marc Cohn 300 Pounds Of Joy Howlin' Wolf The Chess Box (Disk 2) 59 Tweedle dee Rosie Flores Dance Hall Dreams A Mh ithr n, A' Leigfe ' Cathie Ryan The Music of What Happens A Night To Remember SHeDAISY The Whole SHeBANG A Road Is Just A Road Dixie Chicks Little Ol' Cowgirl A Thousand Miles from Nowhere Dwight Yoakam This Time Across The Great Divide Kate Wolf Gold In California Disc One Adeus Batucada (Synval Silva) Virginia Rodrigues Sol Negro Against The Wind Bob Seger Harley-Davidson Road Songs (Disc 1) Ain't That Always The Way Sawyer Brown The Dirt Road Airline Agent James McMurtry Walk Between the Raindrops Airplane Fell Down in Dixie Ray Wylie Hubbard Crusades of the Restless Knights Alabama Song Allison Moorer Alabama Song Alcohol & Pills Fred Eaglesmith Lipstick Lies & Gasoline All Of My Life Steve Earle Transcendental Blues All Shook Up Ry Cooder Get Rhythm All THese Old Things Were New Haggard, Merle If I Could Only Fly Alternator Fred Eaglesmith 50-Odd Dollars Am I Wrong Keb Mo Keb' Mo' Amor Verdadero Afro Cuban All Stars A Toda Cuba Le Gusta Angeline James McMurtry Too Long in the Wasteland Angels Singing Reverend Gary Davis Reverend Gary Davis Anna Lee Sonia dada A Day At The Beach Another Colorado Jimmie Dale Gillmore Spinning Around the Sun Another Town Steve Earle Transcendental