Expression Control in Singing Voice Synthesis: Features, Approaches, Evaluation, and Challenges

Martí Umbert, Jordi Bonada, Masataka Goto, Tomoyasu Nakano, and Johan Sundberg

M. Umbert and J. Bonada are with the Music Technology Group (MTG) of the Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018 Barcelona, Spain, e-mail: {marti.umbert, jordi.bonada}@upf.edu. M. Goto and T. Nakano are with the Media Interaction Group, Information Technology Research Institute (ITRI) at the National Institute of Advanced Industrial Science and Technology (AIST), Japan, e-mail: {m.goto, t.nakano}@aist.go.jp. J. Sundberg is with the Voice Research Group of the Department of Speech, Music and Hearing (TMH) at the Royal Institute of Technology (KTH), Stockholm, Sweden, e-mail: [email protected].

In the context of singing voice synthesis, expression control manipulates a set of voice features related to a particular emotion, style, or singer. Also known as performance modeling, it has been approached from different perspectives and for different purposes, and different projects have shown a wide range of applications. The aim of this article is to provide an overview of approaches to expression control in singing voice synthesis. Section I introduces some musical applications that use singing voice synthesis techniques, to justify the need for accurate control of expression. Expression is then defined and related to speech and instrument performance modeling. Next, Section II presents the commonly studied set of voice parameters that can change perceptual aspects of synthesized voices. Section III provides, as the main topic of this review, an up-to-date classification, comparison, and description of a selection of approaches to expression control.