Neuron-Modeled Audio Synthesis


Jeff Snyder, Princeton University Music Department, 310 Woolworth Center, Princeton, NJ 08544
Aatish Bhatia, Princeton University Council on Science and Technology, 233 Lewis Library, Princeton, NJ 08544
Mike Mulshine, Princeton University Music Department, 310 Woolworth Center, Princeton, NJ 08544

Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Copyright remains with the author(s). NIME'18, June 3-6, 2018, Blacksburg, Virginia, USA.

ABSTRACT

This paper describes a project to create a software instrument using a biological model of neuron behavior for audio synthesis. The translation of the model to a usable audio synthesis process is described, and a piece for laptop orchestra created using the instrument is discussed.

Author Keywords

neuron, biological model, software, musical instrument

CCS Concepts

• Applied computing → Sound and music computing; Performing arts; Computational biology;

1. INTRODUCTION

While the majority of audio synthesis methods aim for powerful and clear control over sound parameters, some approaches eschew this control in favor of a more experimental and serendipitous outlook. One example is seen in the barely-controllable instrumental systems of David Tudor, the performance of which he described as "an act of discovery. I try to find out what's there and not to make it do what I want but to, you know, release what's there." [6]

The project described here is one path toward this type of musical interaction, in which a biological model is used as a synthesis technique with the intent to discover exciting and unexpected sonic possibilities.

2. MOTIVATION FOR NEURON SYNTHESIS

This idea came about when a member of the Princeton Laptop Orchestra (PLOrk), Mitch Nahmias, was explaining how the lab he works at in the Electrical Engineering department used biological models of neurons to create optical computers. He mentioned that, under certain conditions, these neuron models could be coaxed to oscillate, and he mused that it might sound interesting to use this as a basis for audio synthesis. Together, Jeff Snyder and Nahmias created a quick prototype using the simplest possible neuron model in the programming language ChucK[11], and it made sound! We promptly lost that work in a computer crash, and dejectedly set the project aside for another day.

Two years later, the concept came up again in discussions with physicist Aatish Bhatia, who was also curious about how these models might sound. Bhatia and Snyder researched the existing models to explore, and Bhatia handled the mathematics of turning the differential equations into a real-time solvable algorithm. Once they had a working synthesis algorithm, Snyder brought Mike Mulshine into the project to work on integrating the algorithm into a usable instrument.

The expectation was never that this synthesis technique would sound dramatically different from traditional methods, but rather that it would provide an interesting alternative to synthesis methods that have easily understood control parameters (pitch, amplitude, etc.). We hoped that by introducing control parameters relating to unfamiliar physical processes, such as "sodium channel activation", a more experimental approach to digital synthesis would be opened up to the user.

3. PRIOR ART

This project is in the tradition of physical modeling synthesis, where mathematical models of real-world systems are simulated for sound creation [3][10]. Much of the research in this area so far has focused on modelling physical systems for which sound production or sound transformation are the typical use cases, such as flute acoustics [2], room reverberation [7], or guitar amplifier simulation [12]. This project falls into a much smaller field: repurposing mathematical models that were not originally intended to describe audio systems for audio purposes. Examples include the use of strange attractors [8] and other creative misuses of scientific research tools.

The prior research most closely related to this project is another attempt to use a biological neuron model for synthesis, briefly included as part of Nick Collins' paper Errant Sound Synthesis [1]. Audio implementations of the FitzHugh-Nagumo and Terman-Wang models of neuron behavior are mentioned and included in the library of SuperCollider code that accompanies the paper. Collins' approach is similar to ours in that he also uses the Euler method to discretize the equations and includes these algorithms in a publicly available library. We hope that our paper adds more illuminating detail to the process, and a more in-depth discussion of the usefulness of this concept for introducing non-linear control into instruments.

4. ADAPTING THE BIOLOGICAL MODEL FOR AUDIO SYNTHESIS

While looking for an appropriate neuronal spiking model to implement, we researched various alternatives. After reading Izhikevich's review of neural spiking models [5], we decided to implement the Hodgkin-Huxley (HH) model [4]. This is an important classic model in computational neuroscience, comprising four coupled non-linear differential equations and various biologically inspired input parameters, such as the activation and inactivation rates of ion (i.e. sodium and potassium) channels. While the HH model is more computationally expensive than simplified, single-purpose models of neuron spiking, it is also a very general model that can reproduce many different neuronal spiking patterns. It is therefore well suited to our goal of exploring the rich parameter space of neuronal spiking behaviors.

To implement this model, we followed the standard Euler method to discretize the four differential equations. This resulted in a set of four difference equations that determine the membrane voltage V_m of a neuron. The first three equations are similar in form:

$$
\begin{aligned}
n(t+\Delta t) &= \alpha_n \Delta t + n(t)\left[1 - (\alpha_n + \beta_n)\Delta t\right] \\
m(t+\Delta t) &= \alpha_m \Delta t + m(t)\left[1 - (\alpha_m + \beta_m)\Delta t\right] \qquad (1) \\
h(t+\Delta t) &= \alpha_h \Delta t + h(t)\left[1 - (\alpha_h + \beta_h)\Delta t\right]
\end{aligned}
$$

The variables being updated here are n, m, and h, which are quantities between 0 and 1 representing the rate of potassium channel activation, sodium channel activation, and sodium channel inactivation, respectively. The term Δt is a timestep: a smaller timestep results in a more accurate simulation, but is more computationally expensive. The α and β variables are functions of the membrane voltage V_m(t) provided in the original paper by Hodgkin and Huxley [4].
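The paper does not include source code, so the following is only a minimal sketch of how a forward-Euler step of equation (1) might look in C; the function and variable names are ours, not taken from the authors' ChucK or OOPS implementations.

```c
/* One forward-Euler step of a single Hodgkin-Huxley gating variable
 * (equation 1): x(t + dt) = alpha*dt + x(t) * (1 - (alpha + beta)*dt).
 * alpha and beta are voltage-dependent rates evaluated by the caller. */
static double gate_step(double x, double alpha, double beta, double dt)
{
    return alpha * dt + x * (1.0 - (alpha + beta) * dt);
}

/* Advance n, m and h by one timestep, given precomputed rates. */
static void update_gates(double *n, double *m, double *h,
                         double an, double bn,   /* alpha_n, beta_n */
                         double am, double bm,   /* alpha_m, beta_m */
                         double ah, double bh,   /* alpha_h, beta_h */
                         double dt)
{
    *n = gate_step(*n, an, bn, dt);
    *m = gate_step(*m, am, bm, dt);
    *h = gate_step(*h, ah, bh, dt);
}
```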
To clean things up a bit, we define the following time constants:

$$
\tau_K(t) = \frac{c_m}{g_K\, n(t)^4} \qquad
\tau_{Na}(t) = \frac{c_m}{g_{Na}\, m(t)^3\, h} \qquad
\tau_l = \frac{c_m}{g_l} \qquad (2)
$$

These time constants depend on the activation/inactivation rates (n, m, and h), as well as on other biologically inspired tunable parameters (g_K, g_Na, g_l, c_m). They are used to update the membrane voltage V_m as follows:

$$
V_m(t+\Delta t) = V_m(t)\left[1 - \Delta t\left(\frac{1}{\tau_K} + \frac{1}{\tau_{Na}} + \frac{1}{\tau_l}\right)\right]
+ \Delta t\left(\frac{I(t)}{c_m} + \frac{V_K}{\tau_K} + \frac{V_{Na}}{\tau_{Na}} + \frac{V_l}{\tau_l}\right) \qquad (3)
$$

There are three new tunable parameters here (V_K, V_Na, V_l), as well as an input current I(t) which we are free to vary. Thus, the rate of change of the membrane voltage is governed not only by the input current I(t) but also, via the time constants, by the activation/inactivation rates (n, m, and h), all of which vary over time and, in turn, depend upon the membrane voltage (via the α and β functions). This set of coupled equations thus results in rich feedback loops and non-linear effects, and the dependence of the output voltage on the input parameters is difficult to predict analytically.

To implement this model in code, we execute the following steps:

1. […]

4. Use equation 2 to calculate the time constants.

5. Use equation 3 to update the membrane voltage. This is the output sound signal.

6. Repeat steps 2-4. The tunable parameters and input current can be varied as needed, allowing the user to control the audio synthesis.

Further details of discretizing the HH model (including biologically relevant values of the parameters) can be found in various online references.¹
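To make the per-sample loop concrete, here is a self-contained C sketch of one update (the time-constant and voltage steps above, plus the gating update of equation 1). It is our illustration, not the authors' code: the struct and function names are hypothetical, and the α/β rate functions are the textbook Hodgkin-Huxley forms in the original 1952 convention (voltage in mV relative to rest), since the paper defers those details to outside references.

```c
#include <math.h>

/* State and tunable parameters of one simulated neuron (illustrative names). */
typedef struct {
    double v;            /* membrane voltage V_m (mV, relative to rest) */
    double n, m, h;      /* gating variables in [0, 1]                  */
    double gK, gNa, gl;  /* channel conductances                        */
    double vK, vNa, vl;  /* reversal potentials V_K, V_Na, V_l          */
    double cm;           /* membrane capacitance                        */
    double dt;           /* Euler timestep delta-t                      */
} Neuron;

/* Textbook Hodgkin-Huxley rate functions (note the removable
 * singularities at v = 10 and v = 25). */
static double alpha_n(double v) { return 0.01 * (10.0 - v) / (exp((10.0 - v) / 10.0) - 1.0); }
static double beta_n (double v) { return 0.125 * exp(-v / 80.0); }
static double alpha_m(double v) { return 0.1 * (25.0 - v) / (exp((25.0 - v) / 10.0) - 1.0); }
static double beta_m (double v) { return 4.0 * exp(-v / 18.0); }
static double alpha_h(double v) { return 0.07 * exp(-v / 20.0); }
static double beta_h (double v) { return 1.0 / (exp((30.0 - v) / 10.0) + 1.0); }

/* One per-sample tick: i_in is the input current I(t);
 * the returned membrane voltage is the output sample. */
static double neuron_tick(Neuron *nr, double i_in)
{
    double v = nr->v, dt = nr->dt;

    /* Update the gating variables n, m, h (equation 1). */
    nr->n = alpha_n(v) * dt + nr->n * (1.0 - (alpha_n(v) + beta_n(v)) * dt);
    nr->m = alpha_m(v) * dt + nr->m * (1.0 - (alpha_m(v) + beta_m(v)) * dt);
    nr->h = alpha_h(v) * dt + nr->h * (1.0 - (alpha_h(v) + beta_h(v)) * dt);

    /* Compute the time constants (equation 2). */
    double tauK  = nr->cm / (nr->gK  * pow(nr->n, 4.0));
    double tauNa = nr->cm / (nr->gNa * pow(nr->m, 3.0) * nr->h);
    double tauL  = nr->cm / nr->gl;

    /* Update the membrane voltage (equation 3); this is the output signal. */
    nr->v = v * (1.0 - dt * (1.0 / tauK + 1.0 / tauNa + 1.0 / tauL))
          + dt * (i_in / nr->cm + nr->vK / tauK + nr->vNa / tauNa + nr->vl / tauL);

    return nr->v;
}
```

Calling a function like this once per output sample and taking the scaled, AC-coupled membrane voltage as the signal mirrors the per-sample ChucK class described below; the exact parameter ranges and output scaling are not given in the excerpt, so the forms above should be treated as starting points.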
Once we had a working audio synthesis version of the model in ChucK[11], running as a custom class that generates the signal per-sample using input parameters from a MIDI device, we set about adapting it to be easier to interface with a variety of platforms. First, we adapted the ChucK code to be a pseudo-object in our OOPS Audio Library[9]. Since we had already integrated our OOPS library into the JUCE framework, this made it simple to export a VST plugin, allowing the neuron synthesis to run inside a DAW, or in Max/MSP using the vst~ object. It also allowed us to quickly prototype running the synthesis model on embedded hardware, such as our Genera brain, based on an STM32F7/H7 microcontroller[9].

Figure 1: A waveform of a single neuron spike using the Neuron synthesizer (AC-coupled)

Figure 2: An oscillation waveform created by the Neuron synthesizer (AC-coupled)

5. USE CASE: CONNECTOME, A PIECE USING THE NEURON INSTRUMENT

5.1 Compositional Concept

We wanted to quickly test the Neuron synthesizer's features in a real-world musical setting. In collaboration with the Princeton Laptop Orchestra (PLOrk), we composed a structured improvisation piece exploring the possibilities of the instrument. This piece is called Connectome, named for the map of neural pathways in the brain. Our goal was to create a piece that was inspired by the functioning of the brain, in which the performers represented individual neurons.

Since neurons pairing with or communicating with other neurons seemed to be a basic metaphor for brain function, […]