The Role of Emotion and Context in Musical Preference

Yading Song

PhD thesis

Thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy of the University of London

School of Electronic Engineering and Computer Science
Queen Mary, University of London
United Kingdom

January 2016

Abstract

The powerful emotional effects of music increasingly attract the attention of music information retrieval researchers and music psychologists. In past decades, a gap has existed between these two disciplines, as researchers have focused on different aspects of emotion in music. Music information retrieval researchers are concerned with computational tasks such as the classification of music by its emotional content, whereas music psychologists are more interested in understanding emotion in music. Many existing studies have investigated these issues in the context of classical music, but the results may not be applicable to other genres. This thesis focusses on musical emotion in Western popular music, combining knowledge from both disciplines.
I compile a Western popular music emotion dataset based on online social tags, and present a music emotion classification system using audio features corresponding to four different musical dimensions. Listeners' perceived and induced emotional responses to the emotion dataset are compared, and I evaluate the reliability of emotion tags against listeners' ratings of emotion using the two dominant models of emotion, namely the categorical and the dimensional models. In the next experiment, I build a dataset of musical excerpts identified in a questionnaire, and I train my music emotion classification system with these audio recordings. I compare the differences and similarities between the emotional responses of listeners and the results of automatic classification.

Musical emotions arise in complex interactions between the listener, the music, and the situation. In the final experiments, I explore the functional uses of music and musical preference in everyday situations. Specifically, I investigate emotional uses of music in different music-listening situational contexts. Finally, I discuss the use of emotion and context in the future design of subjective music recommendation systems and propose the study of musical preference using musical features.

Declaration

I, Yading Song, confirm that the research included within this thesis is my own work or that, where it has been carried out in collaboration with or supported by others, this is duly acknowledged below and my contribution indicated. Previously published material is also acknowledged below.

I attest that I have exercised reasonable care to ensure that the work is original, and does not, to the best of my knowledge, break any UK law, infringe any third party's copyright or other Intellectual Property Right, or contain any confidential material. I accept that the College has the right to use plagiarism detection software to check the electronic version of the thesis.
I confirm that this thesis has not been previously submitted for the award of a degree by this or any other university. The copyright of this thesis rests with the author, and no quotation from it or information derived from it may be published without the prior written consent of the author.

Signature:
Date:

Details of collaboration and publications: all collaborations and earlier publications that have influenced the work and writing of this thesis are fully detailed in Section 1.4.

Acknowledgements

It has been four years since a PhD position at Queen Mary University of London was offered to me (a massive thank-you to Dawn Black for motivating me). My work on this thesis is now coming to an end, and I would like to take this opportunity to thank all the people who have helped me. While working on this thesis, I was very fortunate to be part of the Centre for Digital Music (C4DM). It has been a wonderful experience, and it is definitely the highlight of my life.

First and foremost, I would like to thank my two supervisors, Simon Dixon and Marcus Pearce, most sincerely, for their patience and guidance, for their firm support and faith in me, for all the delightful and inspiring conversations, for keeping me focused throughout my PhD study, and for allowing me to grow as a research scientist. Each one is unsurpassed as a supervisor, except by their combination. I would also like to thank my independent assessor, Professor Geraint Wiggins, for his invaluable constructive criticism and friendly advice over the past four years.

I want to express my gratitude to both of my external examiners, Professor David J. Hargreaves and Dr. Alinka Greasley. I was very privileged to receive feedback from two experts in music psychology; their advice has been priceless in shaping the final version of my thesis. I would also like to express my warm thanks to George Fazekas and Katerina Kosta for their zealous support on the Greek music project.
I want to thank Professor Andrea Halpern and Professor Tuomas Eerola for their collaborations on the projects related to musical emotion and context respectively. It was a great honour to receive their brilliant comments and suggestions. My sincere thanks go to Birgitta Burger, Martín Hartmann, Markku Pöyhönen, Pasi Saari, Petri Toiviainen, and Anemone Van Zijl at the Finnish Centre of Excellence in Interdisciplinary Music Research at the University of Jyväskylä, for making my stay in Finland so pleasant.

Additionally, thank you to my former colleagues at YouTube, Eric, Meijie, Zack, Dominick, Justin, Sam, Sean, and Umang, for offering me a fabulous and fruitful summer in California. I would like to express my heartfelt gratitude especially to Vivek and Bob for their encouragement and support, and for leading my work on exciting projects. You have also been tremendous managers for me. I am also very grateful to my mentor Luke, for his guidance and advice on my career.

Good friends are hard to find, harder to leave, and impossible to forget. A special thanks to my sweetest "104 gang": Siying, Chunyang, Tian, Shan, and Mi. I will always remember the days and evenings we spent together working, playing, and having amazing dinners. Thank you for the laughs and tears you shared with me and everything in between. Thank you for the absolute privilege of being able to attend special moments with you: wedding days, birthdays, and travels.

Also, I am very grateful to Kelly and Sally from Learning Development for their helpful support during my writing-up period, and to Peta for her writing tips. I appreciate my writing buddies, Pollie and Kavin, for keeping our work on good progress: they have made my writing-up so colourful and fun. I would like to express my appreciation to Emmanouil Benetos for introducing me to IEEE, sharing his truthful views on my research, and for his occasional proofreading. Thank you, Mark Plumbley, for providing all the resources to me.
Many thanks to my amazing music informatics group colleagues: Magdelena Chudy, Pablo Alejandro Alvarado Duran, Sebastian Ewert, Peter Foster, Holger Kirchhoff, Robert Macrae, Matthias Mauch, Lesley Mearns, Julien Osmalskyj, Maria Panteli, and Rob Tubb, as well as my enthusiastic music cognition colleagues: Yvonne Blokland, Ioana Dalca, Léna Delval, Miriam Kirsch, Sarah Sauvé, JP Tauscher, Jordan Smith, and Sonia Wilkie, for their assistance, dedicated involvement, and lively discussions. I also enjoyed lunch and coffee breaks, nights out, and trips together with Dimitrious Giannoulis, Steve Hargreaves, Chris Harte, Antonella Mazzoni, Dave Moffat, Giulio Moro, Madeleine Le Bouteiller, Jose J. Valero-Mas, Elio Quinton, and Bogdan Vera. Further thanks to other members and visitors of C4DM who have made my time at QMUL enjoyable: Mathieu Barthet, Chris Cannam, Alice Clifford, Brecht De Man, Luís Figueira, Shengchen Li, Zheng Ma, Laurel Pardue, Dan Stowell, and Janis Sokolovskis. I must also thank all the people who participated in my listening tests, and the anonymous reviewers. Without them, this thesis would never have been accomplished.

Unquestionably, my deep gratitude goes to my family, especially to my dad, my mom, and my partner Mati. You are the best and most beautiful things that have happened to me in this world. I will always be grateful to you for standing behind me and giving me your biggest support. You are always my inspiration. Thank you for believing in me and giving me your unconditional and selfless love.

This work was supported financially by the China Scholarship Council.

Contents

Acknowledgements  2
List of Figures  8
List of Tables  10
Glossary of Technical Terms  14
List of Abbreviations  16

1 Introduction  18
1.1 Motivation and Aim  18
1.2 Thesis Structure  20
1.3 Contributions  21
1.4 Associated Publications  22

2 Background in Music and Emotion  24
2.1 Definition  24
2.2 Perception and Induction of Musical Emotions  25
2.2.1 Perceived Musical Emotion  26
2.2.2 Induced Musical Emotion  27
2.2.3 Relationship between Emotion Perception and Induction  28
2.3 Musical Emotion Representation  29
2.3.1 Categorical Model  29
2.3.2 Dimensional Model  31
2.3.3 Domain-specific Model