<p>Optimal Playlist Generation
Matthew Hivner, Eddie Galloway, & DeVante Pickering
University of North Carolina at Wilmington</p><p>Abstract</p><p>Playlist generation implementations are often either entirely dependent on the user's manual input or rely on a random selection algorithm. Both of these methods have less than desirable results, with the first being cumbersome and the second producing playlists that are not always optimal for the user. The goal of this paper is to present an improved playlist generation method that makes use of a Hidden Markov Model, combined with user feedback, in order to produce a more successful playlist. A successful playlist has a strong correlation between song characteristics, yet avoids repeating songs or including several songs from the same artist. For comparison, we will implement a Fisher-Yates shuffle and a simple random shuffle to provide a typical auto-generated playlist.</p><p>1. Introduction</p><p>A playlist is a common feature of most modern music player applications. Playlists consist of a selection of songs in a shuffled order. Music player applications generally offer two main methods of playlist generation. First, users may manually create a playlist by adding songs to a blank playlist, thereby accumulating a collection. The other predominant method of playlist generation involves performing a random selection from the user's stored music library. This method is less cumbersome for the user, but often results in a playlist that may not be a cohesive or desirable grouping of songs.</p><p>The goal of our implementation is to increase the effectiveness of automated playlist generation. Given a seed song as a starting point, we generate a playlist of desired length that minimizes the difference between the characteristic vectors of the selected songs. We then use these differences in characteristic vectors to establish the transition probabilities for a Hidden Markov Model. These characteristics are retrieved using the Spotify API and will be explained in more detail later in this paper. A playlist of songs with minimal characteristic differences will produce a more cohesive grouping, and therefore a more desirable listening experience.</p><p>As a point of comparison we will also implement a Fisher-Yates shuffle. This common shuffling method ensures a random ordering of songs with no repeats. Fisher-Yates is used by several music applications, including Spotify [1]. Finally, we will also compare a simple random shuffle. This method is similar to Fisher-Yates, but cannot guarantee that there will be no repeats among the songs selected. The overall characteristic differences of a playlist produced by our Hidden Markov Model implementation will be compared to the results from both random shuffle variations, and conclusions will be drawn about the merits and drawbacks of each implementation.</p><p>2. Formal Problem Statement</p><p>Formally, the goal of our algorithm can be stated as follows. For each song x in the available song pool S, retrieve the characteristic vector Cx for that song. Then, generate a playlist that minimizes the total difference between the characteristic vectors of consecutive songs, Σ(i = 1 to n−1) ‖Ci − Ci+1‖, where Ci is the characteristic vector of the song at index i and n is the number of songs in the playlist. The merit of a playlist will be evaluated using this strategy, and comparisons will be based upon these values.
</p><p>3. Context</p><p>All of the popular music companies (Spotify, Pandora, Google Play, etc.) have their own implementation of the shuffle feature. Unfortunately, they do not always publish which algorithms they choose to implement. Most shuffle features simply seek to avoid repeating tracks, but do nothing to tailor the generated playlist to the listener. With a large library of tracks, a user could repeatedly be subjected to songs that do not fit their current mood or listening habits.</p><p>Students at the University of Sao Paulo in Brazil published a study in which they attempted to use MIDI samples to compose Minimum Spanning Trees (MSTs). They then used those MSTs to map the relationships between songs of different genres and produce playlists based on those relationships. Using the percussive track progression as the base for genre differentiation, they further distinguished songs by mapping the note value transitions using Markov chains. The students then created an undirected acyclic graph in which each node represented a song and the weight of the connection between songs represented the difference between the feature vectors of those songs. They then chose an arbitrary starting node and generated a playlist by traversing from that node to the next node with the lowest-weight connection [2].</p><p>Amith Nair, a student at the University of Massachusetts, published a study in which he attempts to design a music recommendation system built around the Hidden Markov Model. His system operates in a similar fashion to our implementation, but differs in the parameters that are applied to the HMM. His implementation takes into account the prevalence of a song in a user's pre-generated playlists (top 25 most played, top rated, custom playlists, etc.) and uses that information to weight the score assigned to that song. The system then makes recommendations based on that score [3].</p><p>4. Song Characteristics</p><p>As mentioned above, the data we will use to identify and compare songs comes from the Spotify API. This API is provided for free to developers who want to make use of Spotify's massive musical information database. Each song in Spotify's collection has been assigned copious amounts of identifying data, of which we will be using a subsample. The song characteristics that we use to populate our characteristic vectors are listed below with a brief explanation:</p><p>Acousticness: A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence that the track is acoustic.</p><p>Danceability: Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.</p><p>Instrumentalness: Predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater the likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.</p><p>Loudness: The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing the relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB.</p><p>Mode: Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor by 0.</p><p>Speechiness: Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.</p><p>Tempo: The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration. [4]
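</p><p>A characteristic vector can be assembled directly from the audio-features object the Spotify Web API returns for a track. A minimal sketch; the field selection and ordering are our own choice, and the sample values are invented for illustration:</p>

```python
# The attributes described above, in the order used for the vector.
FEATURE_KEYS = ["acousticness", "danceability", "instrumentalness",
                "loudness", "mode", "speechiness", "tempo"]

def characteristic_vector(features):
    # 'features' is a dict shaped like a Spotify audio-features
    # object (e.g. as returned by GET /v1/audio-features/{id}).
    return [features[key] for key in FEATURE_KEYS]

# Invented example payload (not real data for any track):
sample = {"acousticness": 0.64, "danceability": 0.73,
          "instrumentalness": 0.0, "loudness": -5.9, "mode": 1,
          "speechiness": 0.04, "tempo": 120.0}
```

<p>Note that attributes such as loudness and tempo live on much larger scales than the 0.0-1.0 confidence measures, so differences between raw vectors are dominated by those fields unless the values are normalized.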
</p><p>5. Hidden Markov Model</p><p>As described by Ghahramani in his introductory paper on the basics of Hidden Markov Models, a HMM "is a tool for representing probability distributions over sequences of observations" [5]. More precisely, a HMM can be described as:</p><p>Q = q1, q2, q3, ..., qN : a list of N states.</p><p>A = a01, a02, ..., an1, an2, ..., anm : a transition probability matrix A, with each ai,j representing the probability of moving from state i to state j.</p><p>Q0, QF : special start and end states that are not associated with observations.</p><p>We implemented our version of a Hidden Markov Model through the following steps:</p><p>1. Using a seed song provided by the user as a starting point, populate a song pool from which to pick potential playlist candidates.</p><p>2. Populate the characteristic vector for every song in the song pool.</p><p>3. Loop through the entire song pool, comparing every song to every other song and storing the difference values in a 2-dimensional array, D. Di,j represents the difference value, or transition probability, from the current song at index i in the song pool to song j in the song pool. A smaller difference value represents a higher transition probability.</p><p>4. Using this probability matrix, select a starting song index at random.</p><p>5. Compare the transition probabilities for that song to a threshold value in order to select the next unplayed song in the playlist.</p><p>6. Transition to the song chosen at step 5 and set that song's played flag to true.</p><p>7. Repeat steps 5 and 6 for as many iterations as the desired number of songs in the final playlist.
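</p><p>Steps 2 and 3 can be sketched as follows. The summed absolute difference used as the metric here is our assumption for illustration; the text does not fix a particular difference measure.</p>

```python
def build_difference_matrix(vectors):
    # D[i][j] holds the difference value from song i to song j; a
    # smaller value corresponds to a higher transition probability.
    n = len(vectors)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d[i][j] = float(sum(abs(a - b)
                                for a, b in zip(vectors[i], vectors[j])))
    return d
```

<p>Filling the full n-by-n matrix is what makes this step the expensive part of the pipeline, a point we return to in the results.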
</p><p>The threshold value described in step 5 above is intentionally left vague, as this is where our algorithm differs from a traditional HMM implementation. Throughout the course of listening to a playlist, we wanted the user to be able to give meaningful feedback that could influence the songs chosen for the playlist.</p><p>To accomplish this, we implemented a set of feedback options that influence the song chosen to follow the one currently playing. These options include a very positive "like", a slightly positive "listen", a slightly negative "skip", and a very negative "dislike". A "like" indicates that the user enjoyed the song and would like to increase the likelihood of similar songs appearing next in the playlist. If the user does not enter any specific feedback and simply lets the current song finish playing, this registers as a "listen"; it is assumed that the listener liked the current song but was not enticed to increase the appearance of similar songs. If the user "skips" a song, they did not like the currently playing song and would like to decrease the likelihood of including similar songs. Finally, if the user inputs a "dislike", they severely disliked the current song and would not like to hear any more songs like it.</p><p>These different levels of feedback manipulate the threshold value used for song selection. It is important to remember that lower difference values represent higher transition probabilities when considering how these interactions affect the threshold. Initially, the threshold value for selection is ⅓ of the average of the difference values for a particular song, and the next song chosen will be the first song with a difference value below the threshold. If the user does not interfere with the song playing, the threshold value stays the same. If the user inputs a "like", the new threshold value is set to ¼ of the average difference value for that song, which further narrows the selection threshold. If the user "skips" the song, the threshold is increased to the average difference value. Finally, if the user "dislikes" the current song, the threshold is set to the average difference value and the next song chosen will be the first song with a difference value greater than the threshold, ensuring that the next song is dissimilar to the current one.
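</p><p>The feedback-adjusted selection in steps 5 and 6 can be sketched as below. Applying the ⅓, ¼, and full-average fractions to the current song's average difference value is our reading of the description above; the names and the stateless treatment of "listen" are simplifications, not the exact implementation.</p>

```python
# Fraction of the current song's average difference value used as
# the selection threshold for each feedback level (our reading).
FEEDBACK_FACTOR = {"like": 0.25, "listen": 1 / 3,
                   "skip": 1.0, "dislike": 1.0}

def next_song(current, d, played, feedback="listen"):
    row = d[current]
    threshold = (sum(row) / len(row)) * FEEDBACK_FACTOR[feedback]
    for j, diff in enumerate(row):
        if j == current or played[j]:
            continue
        # "dislike" selects the first song MORE different than the
        # threshold; every other feedback level selects the first
        # song that is less different (i.e. more similar).
        ok = diff > threshold if feedback == "dislike" else diff < threshold
        if ok:
            played[j] = True
            return j
    return None  # no unplayed candidate met the threshold
```

<p>A full playlist is then produced by repeating this selection until the desired length is reached, as in step 7.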
</p><p>6. Fisher-Yates Shuffle and Generic Random Shuffle</p><p>In order to provide a point of comparison for our HMM, we implemented both a Fisher-Yates shuffle and a basic random shuffle using the built-in random package in Python. These shuffling algorithms were applied to the same song pool that was used in the HMM to ensure an even comparison. The F-Y shuffle is a common shuffling algorithm that produces a randomly ordered list with no repeats allowed. The steps taken in implementing F-Y are:</p><p>1. Let n = 0.</p><p>2. Choose a random k such that 1 ≤ k ≤ N.</p><p>3. If k ≠ n, let An = Ak.</p><p>4. Let Ak = n.</p><p>5. Increase n by one.</p><p>6. If n ≤ N, repeat from step 2.</p><p>where N is the predefined length of the list of songs to choose.</p><p>We also tested a basic random selection using the built-in random package in Python. For this playlist, we simply chose N random song indexes from a range equal to the size of the song pool. This implementation was truly random and did not attempt to ensure there were no repeats.
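</p><p>Both comparison shuffles are straightforward with Python's built-in random package. As a sketch, fisher_yates below uses the modern in-place (Durstenfeld) form of the enumerated steps, and basic_random mirrors the repeat-allowing selection:</p>

```python
import random

def fisher_yates(songs):
    # One pass from the end, swapping each position with a randomly
    # chosen earlier (or same) position; every song appears exactly
    # once, so no repeats are possible.
    shuffled = list(songs)
    for n in range(len(shuffled) - 1, 0, -1):
        k = random.randint(0, n)
        shuffled[n], shuffled[k] = shuffled[k], shuffled[n]
    return shuffled

def basic_random(songs, n):
    # N independent draws from the pool; repeats are possible.
    return [random.choice(songs) for _ in range(n)]
```

<p>Both run in O(n) time, which is the baseline our HMM's runtime is compared against below.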
</p><p>7. Results and Analysis</p><p>We ran all three previously discussed algorithms on 15 different song pools generated from seed songs of varying genre, artist, and style. The results of these tests can be seen in the chart below.</p><p>Figure 1: Generated Playlist Comparison</p><p>As is evident from the chart, our HMM algorithm produced playlists with much lower average difference values than those produced by Fisher-Yates and the basic random shuffle. Across all 15 playlists, our HMM averaged a difference value of 8.58, while F-Y and basic random averaged 36.94 and 36.53 respectively. Further data is displayed in the following table:</p><p>Figure 2: Playlist Avg, Max, Min Difference Values</p><p>The maximum average difference value observed from the HMM was smaller than the minimum observed value for either of the other shuffling algorithms. In fact, in every comparison of difference values, our Hidden Markov Model outperformed both the Fisher-Yates and basic random shuffling algorithms. However, this was not always the case when examining other aspects of the three algorithms' execution.</p><p>Computationally speaking, our HMM was considerably more expensive to implement. Our HMM resulted in a computational complexity of O(nm²), compared to O(n) for both Fisher-Yates and basic random. This translated directly into runtime increases, with our HMM taking substantially longer to run than either of the random shuffles.</p><p>8. Conclusions</p><p>Although more computationally expensive up front, our Hidden Markov Model for playlist generation was able to produce a more successful playlist that minimized the average difference values among the songs selected. Theoretically, these songs are more likely to form a cohesive collection that would be enjoyable to a listener who chose the seed song from which the playlist was built.</p><p>Fisher-Yates provided a simple, efficient, and effective means of producing a shuffled playlist as well. However, the quality of the playlists it generated, according to our constraints, was worse than that of playlists generated using our HMM. Similar observations can be made about the basic random shuffle, although it may be less suitable for real-world use because of the potential for repeated songs to appear in a shuffled playlist.</p><p>In practice, much of the additional runtime required for implementing the HMM was likely a result of latency caused by our use of the Spotify API. When generating the probability matrix there are multiple calls to the Spotify servers to retrieve data, all of which are costly in time. These calls are also entirely dependent on a network connection, so our implementation could not be used locally without modification.</p><p>In the end, music is almost entirely subjective. There are countless variables that go into a person's musical tastes, and they can vary on a case-by-case basis. This makes designing a "perfect" playlist an impossible task. Our goal was to empirically quantify the suitability of a group of songs to form a cohesive and likeable playlist, and in this regard we have succeeded.</p><p>9. Future Work</p><p>Possible extensions abound for the work we have accomplished here. There is room for code optimization, both to work more efficiently with the Spotify API and within the application itself. Another, perhaps more immediate, next step would be to deploy this system to a user base and determine whether an optimal playlist, in the terms we have defined, translates into a "better" real-world listening experience.</p><p>Works Cited</p><p>[1] https://labs.spotify.com/2014/02/28/how-to-shuffle-songs/</p><p>[2] http://www.academia.edu/2692753/A_Graph-Based_Method_for_Playlist_Generation</p><p>[3] http://www.cs.uml.edu/ecg/uploads/AIfall12/nair_Kradio.pdf</p><p>[4] https://developer.spotify.com/web-api/object-model/#audio-features-object</p><p>[5] http://mlg.eng.cam.ac.uk/zoubin/papers/ijprai.pdf</p>