CHI 2007 Workshop on Vocal Interaction in Assistive Technologies and Games (CHI 2007), San Jose, CA, USA, April 29 – May 3

Teaching a Music Search Engine Through Play

Bryan Pardo
EECS, Northwestern University
Ford Building, Room 3-323, 2133 Sheridan Rd.
Evanston, IL, 60208, USA

David A. Shamma
Yahoo! Research Berkeley
1950 University Ave, Suite 200
Berkeley, CA 94704
[email protected]
+1 (847) 491-7184

[email protected]
+1 (510) 704-2419

ABSTRACT

Systems able to find a song based on a sung, hummed, or whistled melody are called Query-By-Humming (QBH) systems. We propose an approach to improving the search performance of a QBH system based on data collected from an online social music game, Karaoke Callout. Users of Karaoke Callout generate training data for the search engine, allowing both ongoing personalization of query processing and vetting of database keys. Personalization of search-engine user models takes place by using sung examples generated in the course of play to optimize the parameters of those models.

Existing approaches to melody matching include string alignment [1], n-grams [2], Markov models [3], dynamic time warping [4], and earth mover's distance [5]. These methods compare melodies transcribed from sung queries against human-encoded symbolic music keys, such as MIDI files.

Deployed QBH systems [6] have no provision for automatically vetting the search keys in the database for effectiveness. As melodic databases grow from thousands to millions of entries, hand-vetting of keys will become impractical. Current QBH systems also presume that users' singing styles will be similar to those of the users the system was initially designed for.
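To make the melody-matching step concrete, here is a minimal sketch of one of the techniques cited above, dynamic time warping [4], applied to pitch sequences. This is an illustration only, not the paper's implementation: the function name, the use of MIDI note numbers as the melody representation, and the absolute-pitch-difference local cost are all assumptions for the example.

```python
def dtw_distance(query, key):
    """Dynamic time warping distance between two pitch sequences
    (e.g., MIDI note numbers transcribed from a sung query and a
    symbolic database key). Lower values indicate a closer match."""
    n, m = len(query), len(key)
    INF = float("inf")
    # cost[i][j] = best alignment cost of query[:i] against key[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - key[j - 1])   # local pitch difference
            cost[i][j] = d + min(cost[i - 1][j],      # extra query note
                                 cost[i][j - 1],      # extra key note
                                 cost[i - 1][j - 1])  # align the two notes
    return cost[n][m]
```

A QBH search engine in this style would rank every key in the database by its distance to the transcribed query and return the lowest-distance songs; the warping allows a singer to stretch or repeat notes without being heavily penalized.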