Expressive Sampling Synthesis. Learning Extended Source-Filter Models from Instrument Sound Databases for Expressive Sample Manipulations Henrik Hahn
To cite this version: Henrik Hahn. Expressive sampling synthesis. Learning extended source-filter models from instrument sound databases for expressive sample manipulations. Signal and Image Processing. Université Pierre et Marie Curie - Paris VI, 2015. English. NNT: 2015PA066564. tel-01331028

HAL Id: tel-01331028
https://tel.archives-ouvertes.fr/tel-01331028
Submitted on 7 Jul 2017

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

THÈSE DE DOCTORAT DE l'UNIVERSITÉ PIERRE ET MARIE CURIE
École doctorale Informatique, Télécommunications et Électronique (EDITE)

Expressive Sampling Synthesis
Learning Extended Source-Filter Models from Instrument Sound Databases for Expressive Sample Manipulations

Presented by Henrik Hahn, September 2015

Submitted in partial fulfillment of the requirements for the degree of DOCTEUR de l'UNIVERSITÉ PIERRE ET MARIE CURIE

Supervisor: Axel Röbel, Analysis/Synthesis group, IRCAM, Paris, France
Reviewers: Xavier Serra, MTG, Universitat Pompeu Fabra, Barcelona, Spain; Vesa Välimäki, Aalto University, Espoo, Finland
Examiners: Sylvain Marchand, Université de La Rochelle, France; Bertrand David, Télécom ParisTech, Paris, France; Jean-Luc Zarader, UPMC Paris VI, Paris, France
Abstract

This thesis addresses imitative digital sound synthesis of acoustically viable instruments with support for expressive, high-level control parameters. A general model is provided for quasi-harmonic instruments that reacts coherently with its acoustical equivalent when control parameters are varied.

The approach builds upon recording-based methods and uses signal transformation techniques to manipulate instrument sound signals in a manner that resembles the behavior of their acoustical equivalents under the fundamental control parameters intensity and pitch. The method preserves the inherent quality of the discrete recordings of an acoustic instrument's sound and introduces a transformation method that keeps the timbral variations coherent when control parameters are modified. It is thus meant to introduce parametric control for sampling sound synthesis.

The objective of this thesis is to introduce a new general model representing the timbre variations of quasi-harmonic music instruments over a parameter space determined by the control parameters pitch as well as global and instantaneous intensity. The model represents the deterministic and non-deterministic components of an instrument's signal independently. For the former, an extended source-filter model is introduced that represents the excitation and resonance characteristics of a music instrument by individual parametric filter functions; the latter component is represented using a classic source-filter approach with similarly parameterized filters. All filter functions are represented using tensor-product B-splines to support multivariate control variables.

An algorithm is presented for the estimation of the model's parameters that allows for the joint estimation of the filter functions of either component in a multivariate surface-fitting approach using a data-driven optimization strategy.
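To make the tensor-product B-spline representation concrete, the sketch below evaluates a bivariate filter surface over two control variables (e.g. pitch and intensity) via the Cox-de Boor basis recursion. This is a minimal illustration of the general technique, not the thesis's actual model; the knot vectors, degree and coefficients are hypothetical.

```python
def bspline_basis(knots, degree, j, x):
    """Cox-de Boor recursion: value of the j-th B-spline basis function at x."""
    if degree == 0:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    left = 0.0
    if knots[j + degree] != knots[j]:
        left = ((x - knots[j]) / (knots[j + degree] - knots[j])
                * bspline_basis(knots, degree - 1, j, x))
    right = 0.0
    if knots[j + degree + 1] != knots[j + 1]:
        right = ((knots[j + degree + 1] - x)
                 / (knots[j + degree + 1] - knots[j + 1])
                 * bspline_basis(knots, degree - 1, j + 1, x))
    return left + right

def tensor_surface(knots_u, knots_v, degree, coeffs, u, v):
    """Tensor-product surface value: sum_{j,k} coeffs[j][k] * B_j(u) * B_k(v)."""
    n_u = len(knots_u) - degree - 1
    n_v = len(knots_v) - degree - 1
    return sum(coeffs[j][k]
               * bspline_basis(knots_u, degree, j, u)
               * bspline_basis(knots_v, degree, k, v)
               for j in range(n_u) for k in range(n_v))
```

In a surface-fitting setting, the coefficients `coeffs` would be the unknowns estimated from data; the basis evaluations are linear in them, which is what makes the joint least-squares estimation tractable.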
This procedure also includes smoothness constraints and solutions for missing or sparse data, and requires suitable data sets of single-note recordings of a particular musical instrument.

Another original contribution of the present thesis is an algorithm for the calibration of a note's intensity by means of an analysis of crescendo and decrescendo signals using the presented instrument model. The method enables adjusting the note intensity of an instrument sound coherently with the relative differences between varied values of its note intensity.

A subjective evaluation procedure is presented to assess the quality of the transformations obtained using a calibrated instrument model and independently varied control parameters pitch and note intensity. Several extents of sound signal manipulation are presented therein.

To support inharmonic sounds such as those produced by the piano, a new algorithm for the joint estimation of a signal's fundamental frequency and inharmonicity coefficient is presented, extending the range of instruments manageable by the system.

The synthesis system is evaluated in various ways for sound signals of a trumpet, a clarinet, a violin and a piano.

Preface

This thesis represents the results of a part of the multidisciplinary research project Sample Orchestrator 2, conducted from November 2010 until May 2013 at IRCAM in Paris under the direction of its scientific director Hugues Vinet. The project involved collaboration with the industrial partner Univers Sons, a Paris-based music software company.

The research project involved contributions from several research teams across IRCAM, including the Analysis/Synthesis, the Sound Music Movement Interaction, and the Acoustics and Cognitive Spaces as well as the Music Representations teams, and additionally the Sound & Software department of Univers Sons.
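The joint f0/inharmonicity estimation mentioned in the abstract targets piano-like strings, whose partials follow the standard stiff-string model f_k = k * f0 * sqrt(1 + B * k^2). As a minimal illustration of that partial model (a closed-form two-partial solution, not the thesis's second-order optimization over all partials; the function names and values are hypothetical):

```python
import math

def partial_freq(k, f0, B):
    """Frequency of partial k under the stiff-string inharmonicity model."""
    return k * f0 * math.sqrt(1.0 + B * k * k)

def estimate_f0_B(k1, fk1, k2, fk2):
    """Recover (f0, B) exactly from two measured partials k1 and k2."""
    # Ratio of squared per-partial frequencies eliminates f0.
    r = (fk2 / k2) ** 2 / (fk1 / k1) ** 2
    B = (r - 1.0) / (k2 ** 2 - r * k1 ** 2)
    f0 = fk1 / (k1 * math.sqrt(1.0 + B * k1 ** 2))
    return f0, B
```

With noisy real-world partial measurements such a two-point solution is fragile, which is why a joint optimization over many partials, as pursued in the thesis, is needed in practice.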
Their respective responsible leads at IRCAM have been Axel Röbel, Norbert Schnell, Markus Noisternig and Gérard Assayag, as well as Alaine from Univers Sons (US; www.uvi.net, last accessed 2015-08-07).

The research program was set up as a multidisciplinary project for the development of new techniques and algorithms to create a next-generation sample-based sound synthesizer. The following main research directions have been targeted within the project:

• the creation of innovative, realistic-sounding virtual solo instruments that allow for expressive control of their sound,
• the creation of realistic ensembles from individual instruments using a parametric convolution engine for sound spatialization, and
• the development of new techniques for on-the-fly arrangement synthesis of musical sequences for the augmentation of musical performances.

The first of these research directions eventually led to this thesis.

The goal for the creation of expressively controllable solo virtual instruments was to design new sound transformation methods that render actual instrument characteristics for phenomena such as transposition, intensity changes, note transitions and modulations, with the target of enhancing the quality/cost ratio of current state-of-the-art methods for instrument sound synthesis.

The Sample Orchestrator 2 project has been financed by the French research agency Agence nationale de la recherche as part of the CONTINT (Digital Content and Interactions) program.

Acknowledgements

First of all I'd like to thank my supervisor Axel Röbel for giving me the opportunity to do my PhD at IRCAM, and with whom I had the privilege to share an office, giving me the chance to discuss my work almost on a daily basis. His comments and suggestions regarding my work have been invaluable throughout the whole project, and the amount I was able to learn from him over these years is truly amazing.
I would further like to thank all members of my PhD committee: Xavier Serra, Vesa Välimäki, Sylvain Marchand, Bertrand David as well as Jean-Luc Zarader, for their commitment and interest in my research.

I also like to thank the various members of the Analysis/Synthesis team whose company I enjoyed a lot while working at IRCAM and at various after-work occasions: Stefan Huber, Nicolas Obin, Marco Liuni, Sean O'Leary, Pierre Lanchantin, Marcelo Caetano, Alessandro Saccoia, Frederic Cornu, Wei-Hiang Lao, Geoffroy Peeters, Maria Gkotzampougiouki, Alex Baker and Reuben Thomas, but also members from other IRCAM research departments with whom I interacted a lot during my three years there: Norbert Schnell, Jean-Philippe Lambert and Markus Noisternig. Working with and next to these people, all sharing a passion for music, technology and research, was a deeply inspiring experience.

I would further like to thank Sean O'Leary and Stefan Huber for providing shelter in Paris when needed, by the time I had moved back to Berlin towards the end of my work.

All the anonymous participants of our subjective online survey deserve words of thanks, as they took part voluntarily without compensation, motivated only by their interest in current research developments.

Publications

[HR12] H. Hahn and A. Roebel. Extended Source-Filter Model of Quasi-Harmonic Instruments for Sound Synthesis, Transformation and Interpolation. In Proceedings of the 9th Sound and Music Computing Conference (SMC), Copenhagen, Denmark, July 2012.

[HR13a] H. Hahn and A. Roebel. Joint f0 and inharmonicity estimation using second order optimization. In Proceedings of the 10th Sound and Music Computing Conference (SMC), Stockholm, Sweden, August 2013.

[HR13b] H. Hahn and