The Mobile Csound Platform


THE MOBILE CSOUND PLATFORM

Victor Lazzarini, Steven Yi, Joseph Timoney
Sound and Digital Music Technology Group
National University of Ireland, Maynooth, Ireland
[email protected], [email protected], [email protected]

Damian Keller
Núcleo Amazônico de Pesquisa Musical (NAP)
Universidade Federal do Acre, Brazil
[email protected]

Marcelo Pimenta
LCM, Instituto de Informática
Universidade Federal do Rio Grande do Sul, Brazil
[email protected]

ABSTRACT

This article discusses the development of the Mobile Csound Platform (MCP), a group of related projects that aim to provide support for sound synthesis and processing under various new environments. Csound is itself an established computer music system, derived from the MUSIC N paradigm, which allows various uses and applications through its Application Programming Interface (API). In the article, we discuss these uses and introduce the three environments under which the MCP is being run. The projects designed for mobile operating systems, iOS and Android, are discussed from a technical point of view, exploring the development of the CsoundObj toolkit, which is built on top of the Csound host API. In addition to these, we also discuss a web deployment solution, which allows for Csound applications on desktop operating systems without prior installation. The article concludes with some notes on future developments.
1. INTRODUCTION

Csound is a well-established computer music system in the MUSIC N tradition [1], developed originally at MIT and then adopted as a large community project, with its development base at the University of Bath. A major new version, Csound 5, was released in 2006, offering completely re-engineered software in the form of a programming library with its own application programming interface (API). This allowed the system to be embedded and integrated into many applications. Csound can interface with a variety of programming languages and environments (C/C++, Objective-C, Python, Java, Lua, Pure Data, Lisp, etc.). Full control of Csound compilation and performance is provided by the API, as well as software bus access to its control and audio signals, and hooks into various aspects of its internal data representation. Composition systems, signal processing applications and various frontends have been developed to take advantage of these features. The Csound API has been described in a number of articles [4], [6], [7].

New platforms for computer music have been brought to the fore by the increasing availability of mobile devices for computing (in the form of mobile phones, tablets and netbooks). With this, we have an ideal scenario for a variety of deployment possibilities for computer music systems. In fact, Csound has already been present as the sound engine for one of the pioneering portable systems, the XO-based computer used in the One Laptop per Child (OLPC) project [5]. The possibilities allowed by the re-engineered Csound were only partially exploited in that system. Its development sparked the ideas for a Ubiquitous Csound, which is now steadily coming to fruition with a number of parallel projects, collectively named the Mobile Csound Platform (MCP). In this paper, we introduce these projects and discuss the implications and possibilities they provide.

2. THE CSOUND APPLICATION ECOSYSTEM

Csound originated as a command-line application that parsed text files, set up a signal-processing graph, and processed score events to render sound. In this mode, users hand-edited text files to compose, or used a mix of hand-edited text and text generated by external programs. Many applications, whether custom programs for individual use or publicly shared programs, were created that could generate text files for Csound. However, the usage scenarios were limited, as applications could not communicate with Csound except through what they could put into the text files before rendering started.

Csound later developed realtime rendering and event input, with the latter primarily coming from MIDI or standard input, as Csound score statements could also be sent to a realtime-rendering Csound via pipes. These features allowed the development of Csound-based music systems that could accept events in realtime at the note level, such as Cecilia [8]. These developments extended the use cases for Csound to realtime application development.

However, it was not until Csound 5 that a full API was developed and supported, allowing finer-grained interaction with Csound [3]. Applications using the API could now directly access memory within Csound, control rendering frame by frame, and use many other low-level features. It was at this time that desktop application development grew within the Csound community. It is also at this level of development that Csound has been ported to mobile platforms.

Throughout these developments, the usage of the Csound language, as well as its exposure to users, has changed as well. In the beginning, users were required to understand Csound syntax and coding to operate Csound. Today, applications are developed that expose varying degrees of Csound coding, from requiring full knowledge of Csound to requiring none at all. Applications such as those created for the XO platform highlight where Csound was leveraged for its audio capabilities while a task-focused interface was presented to the user. Other applications, such as Cecilia, show where users are primarily presented with a task-focused interface, but the capability to extend the system is available to those who know Csound coding. The Csound language has thus grown from a means of expressing a musical work into a domain-specific language for audio engine programming.

Today, these developments have allowed many classes of applications to be created. With the move from desktop to mobile platforms, the range of use cases that Csound can satisfy has achieved a new dimension.
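To give a feel for the kind of host-API interaction described above (compiling code, rendering control cycle by control cycle, and exchanging values over the software bus), the following Java sketch outlines the typical workflow. The class and method names are illustrative assumptions modelled on the shape of the C API (csoundCompile, csoundPerformKsmps, csoundSetControlChannel); they are not the exact signatures of any particular binding.

```java
// Minimal sketch of driving a Csound-like engine through a host API.
// NOTE: the CsoundEngine interface is a hypothetical stand-in; method names
// are assumptions for illustration, not the official Java binding.
public class HostApiSketch {

    /** Hypothetical wrapper mirroring the general shape of the C host API. */
    interface CsoundEngine {
        int compile(String[] args);                      // parse orchestra/score, build the graph
        int performKsmps();                              // render one control cycle (ksmps frames)
        void setControlChannel(String name, double v);   // software bus: host -> engine
        double getControlChannel(String name);           // software bus: engine -> host
        void cleanup();
    }

    static void run(CsoundEngine cs) {
        // Compile a CSD; a non-zero return would signal a compilation error.
        if (cs.compile(new String[] {"render.csd", "-odac"}) != 0) return;

        // Frame-by-frame rendering loop: the host stays in control and can
        // exchange values over the software bus between control cycles.
        double freq = 220.0;
        while (cs.performKsmps() == 0) {
            freq *= 1.0001;                       // e.g. a slow glissando driven by the host
            cs.setControlChannel("freq", freq);   // read in the orchestra (e.g. via chnget)
        }
        cs.cleanup();
    }
}
```

The key design point is that the host, not the engine, owns the performance loop, which is what makes the mobile ports discussed below possible.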
3. CSOUND FOR IOS

[…] A CsoundObj object controls the Csound performance and provides the audio input and output functionality via the CoreAudio AuHAL mechanism. MIDI input is also handled either by the object, which allows direct pass-through to Csound for standard Csound MIDI handling, or by routing MIDI through a separate MIDIManager class to UI widgets, which in turn send values to Csound. Additionally, a number of the sensors found on iOS devices come pre-wrapped and ready to use with Csound through CsoundObj.

To communicate with Csound, an object-oriented callback system was implemented in the CsoundObj API. Objects that are interested in communicating values, whether control data or audio signals, to and from Csound must implement the CsoundValueCacheable protocol. These CsoundValueCacheables are then added to CsoundObj, and their values are read from and written to on each control cycle of the performance (Fig. 1). The CsoundObj API comes with a number of CsoundValueCacheables that wrap hardware sensors as well as UI widgets, and examples of creating custom CsoundValueCacheables accompany the Csound for iOS Examples project.

[Figure 2: Csound for iOS SDK sample app]

4. CSOUND FOR ANDROID

[…] Android can take their experience and work on standard Java desktop applications. The two versions of Java do differ, however, in some areas, such as the classes for accessing hardware and the user interface libraries. Similarly to iOS, in order to ease development, a CsoundObj class, here written in Java, was developed to provide straightforward solutions for common tasks.

As with iOS, some issues with the Android platform have motivated internal changes to Csound. One such problem was related to difficulties in handling temporary files on the system. As Csound depended on these in the compilation/parsing stage, a modification to use core (memory) files instead of temporary disk files was required.

Two options have been developed for audio IO. The first uses pure Java code through the AudioTrack API provided by the Android SDK. This is, at present, the standard way of accessing the DAC/ADC, as it appears to provide slightly better performance on some devices. It employs the blocking mechanism given by AudioTrack to push audio frames into the Csound input buffer (spin) and to retrieve audio frames from the output buffer (spout), sending them to the system sound device. Although low latency is not available on Android, this mechanism works satisfactorily.

As a future low-latency option, we have also developed a native-code audio interface. It employs the OpenSL API offered by the Android NDK and is built as a replacement for the usual Csound IO modules (portaudio, alsa, jack, etc.), using the provided API hooks.
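To make the CsoundObj value-caching callback mechanism described above more concrete, here is a small Java sketch of the pattern: an object registers with CsoundObj and has its values pushed to and pulled from Csound once per control cycle. The interface and method names (setup, updateValuesToCsound, updateValuesFromCsound, cleanup) are assumptions chosen for illustration; the actual iOS protocol is defined in Objective-C and the Android class names may differ.

```java
// Illustrative sketch of the CsoundObj value-caching callback pattern.
// NOTE: all interface and method names here are assumptions for illustration;
// they do not reproduce the exact iOS (Objective-C) or Android APIs.
interface ValueCacheable {
    void setup(CsoundFacade csound);   // called once, before performance starts
    void updateValuesToCsound();       // called every control cycle: UI/sensor -> Csound
    void updateValuesFromCsound();     // called every control cycle: Csound -> UI
    void cleanup();                    // called when performance stops
}

/** Hypothetical stand-in for the engine-facing side of CsoundObj. */
interface CsoundFacade {
    void setControlChannel(String name, double value);
    double getControlChannel(String name);
}

/** Example cacheable mapping a slider position to a named control channel. */
class SliderCacheable implements ValueCacheable {
    private CsoundFacade csound;
    private volatile double sliderValue;   // written from the UI thread

    public void onSliderChanged(double v) { sliderValue = v; }

    @Override public void setup(CsoundFacade cs) { this.csound = cs; }

    @Override public void updateValuesToCsound() {
        // Pushed once per control cycle, so the orchestra sees one stable value per ksmps block.
        csound.setControlChannel("cutoff", sliderValue);
    }

    @Override public void updateValuesFromCsound() { /* nothing read back in this example */ }

    @Override public void cleanup() { csound = null; }
}
```

Synchronising value exchange to the control cycle, rather than letting widgets write into the engine at arbitrary times, is what keeps the UI, sensor and audio threads decoupled in both the iOS and Android toolkits.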
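The AudioTrack-based IO path described above can be pictured as a simple blocking render loop: each pass fills a buffer from Csound's output (spout) and writes it to an AudioTrack, whose blocking write paces the whole performance. In the sketch below only the android.media.AudioTrack calls are real API; csoundRenderNextBuffer() is a hypothetical placeholder for pulling rendered frames from Csound (e.g. via performKsmps and spout access), and the input side (AudioRecord into spin) would work symmetrically.

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Sketch of a blocking AudioTrack output loop of the kind described in the text.
// Only the android.media.AudioTrack calls are real API; csoundRenderNextBuffer()
// is a hypothetical stand-in for filling a short[] buffer from Csound's spout.
public class AudioTrackLoopSketch {

    public void run(int sampleRate) {
        int minBuf = AudioTrack.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT);

        AudioTrack track = new AudioTrack(
                AudioManager.STREAM_MUSIC,
                sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf,
                AudioTrack.MODE_STREAM);

        short[] buffer = new short[minBuf / 2];   // 16-bit samples
        track.play();

        // Blocking loop: write() blocks when the device buffer is full,
        // which paces Csound's rendering to the hardware rate.
        while (csoundRenderNextBuffer(buffer)) {
            track.write(buffer, 0, buffer.length);
        }

        track.stop();
        track.release();
    }

    /** Hypothetical: fills the buffer from Csound's output buffer (spout);
     *  returns false when the performance has finished. */
    private boolean csoundRenderNextBuffer(short[] buffer) {
        return false;   // placeholder so the sketch compiles
    }
}
```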