The Hybrid Mobile Instrument: Recoupling the Haptic, the Physical, and the Virtual
A DISSERTATION SUBMITTED TO THE DEPARTMENT OF MUSIC AND THE COMMITTEE ON GRADUATE STUDIES OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

Romain Pierre Denis Michon
June 2018

© 2018 by Romain Pierre Denis Michon. All Rights Reserved. Re-distributed by Stanford University under license with the author. This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License. http://creativecommons.org/licenses/by-nc/3.0/us/

This dissertation is online at: http://purl.stanford.edu/rd318qn0219

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy. Chris Chafe, Co-Adviser

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy. Julius Smith, III, Co-Adviser

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy. Ge Wang

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy. Matthew Wright

Approved for the Stanford University Committee on Graduate Studies. Patricia J. Gumport, Vice Provost for Graduate Education

This signature page was generated electronically upon submission of this dissertation in electronic format. An original signed hard copy of the signature page is on file in University Archives.

Abstract

The decoupling of the “controller” from the “synthesizer” is one of the defining characteristics of digital musical instruments (DMIs). While this allows for much flexibility, this “demutualization” (as Perry Cook termed it) sometimes results in a loss of intimacy between the performer and the instrument. In this thesis, we introduce a framework to craft “mutualized” DMIs by leveraging the concepts of augmented mobile devices, hybrid instrument design, and skill transfer from existing performer technique.

Augmented mobile instruments combine commodity mobile devices with passive and active elements that can take part in the production of sound (e.g., resonators, exciters, etc.), while adding new affordances to the device and changing its form and overall aesthetics. Screen interfaces can be designed to facilitate skill transfer, accelerating the learning and mastery of such instruments. Hybrid instrument design mutualizes physical and “physically-informed” virtual elements, taking advantage of recent progress in physical modeling and digital fabrication. This design ethos allows physical/acoustical elements to be substituted with virtual/digital ones and vice versa (as long as it is physically possible). A set of tools to design hybrid mobile instruments is introduced and evaluated.

Overall, we demonstrate how this approach can help digital luthiers think about DMI design “as a whole” in order to create mutualized instruments. Through a series of case studies, we discuss the aesthetic and design implications of making such hybrid instruments.

Acknowledgments

Where to start? So many people need to be thanked for their various contributions to this work. First, I owe so much to my four advisors, Professors Julius O. Smith, Chris Chafe, Ge Wang, and Matt Wright.
Julius was my advisor when I first came to CCRMA as a visiting researcher in 2011. He’s one of the most enthusiastic and curious people I know. He provided me with constant, invaluable feedback, and I wouldn’t be here today without his help and support.

Chris Chafe has done so much for me since I started my PhD. He got me involved in so many fun and interesting projects. He’s the “benevolent father” of CCRMA, looking after all his kids. His knowledge and understanding of our field, as well as his thoughtful advice, really helped me move forward in my PhD journey.

I collaborated with Ge on various projects, related or unrelated to this thesis, in recent years. His energy, craziness, curiosity, and ideas are an endless source of inspiration to all of us. CCRMA as I know it would not be the same without Ge. He’s also a very good friend.

Matt’s feedback has been priceless, both for this thesis and for the various papers that we co-authored. His attention to detail and his rigor helped me strengthen my work.

I want to thank Yann Orlarey, who has been my mentor and my “spiritual father” for the past eight years. Yann has been a constant source of inspiration for me since we met. His kindness, ideas, and awareness of ongoing research and technologies in our field (and beyond) keep inspiring me. Thanks, Yann!

Thanks to my parents, Gilles and Michèle Michon, to my sister Ninon Michon (Doucette), to my grandmother Marie Michon, and to my uncle, aunt, and cousin Serge, Denise, and Sophie Ferreri for their love and unswerving support, even though I’m not sure if they fully understand what I’m doing.

I would probably not be dedicating my life to music technology if I hadn’t met Laurent Pottier. Laurent was my advisor and instructor during my bachelor’s and master’s at Saint-Etienne University. He passed his passion for our field on to me and introduced me to John Chowning, who helped me become a visiting researcher at CCRMA.

I collaborated with Sara Martin to make mesh2faust. This was such a fun experience, and this work would not have been possible without her mathematical and physical modeling skills.

My dear friend John Granzow and I co-taught most of the workshops that served as the evaluation of the various frameworks presented in this thesis. His work has always been a great source of inspiration for me.

Stéphane Letz provided me with invaluable technical help for the various tools presented in this thesis. He’s just so good! GRAME and the Faust community are so lucky to have him.

Thanks to Sasha Leitman for making the MaxLab such a productive and cool environment. She helped me a lot in the first years of my PhD as I got interested in the NIME field and musical interaction design in general. I really miss our chats and her Bloody Marys!

Thanks to Nette Worthey for being the “CCRMA mum.” We’re all so grateful for her investment in our community and her kindness.

CCRMA and the Stanford Music Department are the most incredible and inspiring places I have had the chance to work at. This is partly thanks to their staff and faculty. In particular, I’d like to thank Fernando Lopez-Lezcano, Jonathan Abel, Jonathan Berger, Takako Fujioka, Chris Jette, Jay Kadis, Dave Kerr, Carr Wilkerson, Debbie Barney, Velda Williams, Mario Champagne, Carlos Sanchez, John Chowning, and Jaroslaw Kapuscinski for their help and support throughout my stay here.

My friends at Stanford played an invaluable role in this work. People here are both so talented and down to earth.
In particular, I’d like to thank Elliot Canfield-Dafilou, Spencer Salazar, Nolan Lem, Madeline Huberth, Tim O’Brien, Constantin Basica, Alex Chechile, François Germain, Emily Graber, Nick Gang, Ethan Geller, Woodrow Herman, Kurt Werner, Chet Gnegy, Myles Borins, Pablo Castellanos Macin, Priyanka Shekar, Eoin Callery, Rahul Agnihotri, Jack Atherton, Hongchan Choi, Esteban Maestre, Ed Berdahl, Luke Dahl, Rob Hamilton, Jay Dunlap, Orchisama Das, Mark Rau, Julie Herndon, Jorge Herrera, Sarah McCarthy, Kara Riopelle, Charlie Sdraulig, Kitty Shi, Jeremy Hsu, Alex Colavin, Dan Schlessinger, Victoria Grace, Blair Kaneshiro, Aury Washburn, Lauri Savioja, Lonce Wyse, Iran Roman, Séverine Ballon, and Victoria Chang.

My friends back home always supported me throughout this adventure and always made me feel like I never left. Thanks to Léo and Flavie Brossy, Clément Terrade, Clément Paulet, Julien Sthème de Jubécourt, Ludovic Chaux, Anthony Paillet, Luisa Kabala, Marine Staron, Angie Harry, and Dimitri Bouche.

Thanks to Stefania Serafin, Juraj Kojs, Dan Overholt, and Cumhur Erkut for giving me the opportunity to teach workshops in their respective institutions. Thanks also to all the workshop participants who served as my guinea pigs! I owe a lot to Stefania for proofreading this thesis and giving me invaluable feedback.

Other people in our field helped me put this work together and have been a source of inspiration since I became interested in music technology. In particular, I’d like to thank some of the members of the faculty and staff at the National University of Ireland in Maynooth (NUIM) for being my brilliant instructors during my year there: Victor Lazzarini, Joe Timoney, and Mathieu Hodgkinson. Thanks to my “Faust colleagues” too: Dominique Fober, Albert Graef, Emilio Gallego Arias, Pierre-Amaury Grumiaux, and Pierre Jouvelot.

Thanks to Pat Scandalis and Nick Porcaro at moForte for our collaborations, for giving me access to their network, and for helping me in various ways during the course of my PhD.

Thanks to Professor Doug James for agreeing to chair my oral defense committee!

Thanks to Stanford University for offering me so many opportunities and for being such an incredible place.

Thanks to GRAME in Lyon for hosting me multiple times for scientific residencies and for funding the development of Faust. I’m so proud to be from a country where artistic creation and scientific research are supported by the state.

Finally, thanks to everyone I forgot!

Contents

Abstract
Acknowledgments
Introduction
  Overview
  Outline
1 Background
  1.1 Physical Interfaces and Virtual Instruments: Remutualizing the Instrument