Neuron-Modeled Audio Synthesis

Jeff Snyder, Princeton University Music Department, 310 Woolworth Center, Princeton, NJ 08544, [email protected]
Aatish Bhatia, Princeton University Council on Science and Technology, 233 Lewis Library, Princeton, NJ 08544, [email protected]
Mike Mulshine, Princeton University Music Department, 310 Woolworth Center, Princeton, NJ 08544, [email protected]

ABSTRACT
This paper describes a project to create a software instrument using a biological model of neuron behavior for audio synthesis. The translation of the model to a usable audio synthesis process is described, and a piece for laptop orchestra created using the instrument is discussed.

Author Keywords
neuron, biological model, software, musical instrument

CCS Concepts
• Applied computing → Sound and music computing; Performing arts; Computational biology;

Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Copyright remains with the author(s). NIME'18, June 3-6, 2018, Blacksburg, Virginia, USA.

1. INTRODUCTION
While the majority of audio synthesis methods aim for powerful and clear control over sound parameters, some approaches eschew this control in favor of a more experimental and serendipitous outlook. One example is seen in the barely-controllable instrumental systems of David Tudor, the performance of which he described as "an act of discovery. I try to find out what's there and not to make it do what I want but to, you know, release what's there." [6]

The project described here is one path toward this type of musical interaction, in which a biological model is used as a synthesis technique with the intent to discover exciting and unexpected sonic possibilities.

2. MOTIVATION FOR NEURON SYNTHESIS
This idea came about when a member of the Princeton Laptop Orchestra (PLOrk), Mitch Nahmias, was explaining how the lab he works at in the Electrical Engineering department used biological models of neurons to create optical computers. He mentioned that, under certain conditions, these neuron models could be coaxed to oscillate, and he mused that it might sound interesting to use this as a basis for audio synthesis. Together, Jeff Snyder and Nahmias created a quick prototype using the simplest possible neuron model in the programming language ChucK [11], and it made sound! We promptly lost that work in a computer crash, and dejectedly set the project aside for another day.

Two years later, the concept came up again in discussions with physicist Aatish Bhatia, who was also curious about how these models might sound. Bhatia and Snyder researched the existing models to explore, and Bhatia handled the mathematics of turning the differential equations into a real-time solvable algorithm. Once they had a working synthesis algorithm, Snyder brought Mike Mulshine into the project to integrate the algorithm into a usable instrument.

The expectation was never that this synthesis technique would sound dramatically different from traditional methods, but rather that it would provide an interesting alternative to synthesis methods that have easily understood control parameters (pitch, amplitude, etc.). We hoped that by introducing control parameters relating to unfamiliar physical processes, such as "sodium channel activation", a more experimental approach to digital synthesis would be opened up to the user.

3. PRIOR ART
This project is in the tradition of physical modeling synthesis, where mathematical models of real-world systems are simulated for sound creation [3][10]. Much of the research in this area so far has focused on modelling physical systems for which sound production or sound transformation are the typical use cases, such as flute acoustics [2], room reverberation [7], or guitar amplifier simulation [12]. This project falls into a much smaller field: repurposing mathematical models that were not originally intended to describe audio systems for audio purposes. Examples include the use of strange attractors [8] and other creative misuses of scientific research tools.

The prior research most closely related to this project is another attempt to use a biological neuron model for synthesis, briefly included as part of Nick Collins' paper Errant Sound Synthesis [1]. Audio implementations of the FitzHugh-Nagumo and Terman-Wang models of neuron behavior are mentioned and included in the library of SuperCollider code that accompanies the paper. Collins' approach is similar to ours in that he also uses the Euler method to discretize the equations, and includes these algorithms in a publicly available library. We hope that our paper adds more illuminating detail to the process, and a more in-depth discussion of the usefulness of this concept in introducing non-linear control into instruments.

4. ADAPTING THE BIOLOGICAL MODEL FOR AUDIO SYNTHESIS
While looking for an appropriate neuronal spiking model to implement, we researched various alternatives. After reading Izhikevich's review of neural spiking models [5], we decided to implement the Hodgkin-Huxley (HH) model [4]. This is an important classic model in computational neuroscience, comprising four coupled non-linear differential equations and various biologically inspired input parameters, such as the activation and inactivation rates of ion (i.e. sodium and potassium) channels. While the HH model is more computationally expensive than simplified, single-purpose models of neuron spiking, it is also a very general model that can reproduce many different neuronal spiking patterns. It is therefore well suited to our goal of exploring the rich parameter space of neuronal spiking behaviors.

To implement this model, we followed the standard Euler method to discretize the four differential equations. This resulted in a set of four difference equations that determine the membrane voltage $V_m$ of a neuron. The first three equations are similar in form:

\[
\begin{aligned}
n(t+\Delta t) &= \alpha_n \Delta t + n(t)\bigl[1 - (\alpha_n + \beta_n)\Delta t\bigr] \\
m(t+\Delta t) &= \alpha_m \Delta t + m(t)\bigl[1 - (\alpha_m + \beta_m)\Delta t\bigr] \\
h(t+\Delta t) &= \alpha_h \Delta t + h(t)\bigl[1 - (\alpha_h + \beta_h)\Delta t\bigr]
\end{aligned}
\tag{1}
\]

The variables being updated here are $n$, $m$, and $h$: quantities between 0 and 1 representing the rate of potassium channel activation, sodium channel activation, and sodium channel inactivation, respectively. The term $\Delta t$ is a timestep; a smaller timestep results in a more accurate simulation, but is more computationally expensive. The $\alpha$ and $\beta$ variables are functions of the membrane voltage $V_m(t)$, provided in the original paper by Hodgkin and Huxley [4].

To clean things up a bit, we define the following time constants:

\[
\tau_K(t) = \frac{C_m}{g_K\,n^4(t)}, \qquad
\tau_{Na}(t) = \frac{C_m}{g_{Na}\,m^3(t)\,h(t)}, \qquad
\tau_l = \frac{C_m}{g_l}
\tag{2}
\]

These time constants depend on the activation/inactivation rates ($n$, $m$, and $h$), as well as on other biologically inspired tunable parameters ($g_K$, $g_{Na}$, $g_l$, $C_m$). They are used to update the membrane voltage $V_m$ as follows:

\[
V_m(t+\Delta t) = V_m(t)\left[1 - \Delta t\left(\frac{1}{\tau_K} + \frac{1}{\tau_{Na}} + \frac{1}{\tau_l}\right)\right]
+ \Delta t\left[\frac{I(t)}{C_m} + \frac{V_K}{\tau_K} + \frac{V_{Na}}{\tau_{Na}} + \frac{V_l}{\tau_l}\right]
\tag{3}
\]

There are three new tunable parameters here ($V_K$, $V_{Na}$, $V_l$), as well as an input current $I(t)$ which we are free to vary. Thus, the rate of change of the membrane voltage is governed not only by the input current $I(t)$ but also, via the time constants, by the activation/inactivation rates ($n$, $m$, and $h$), all of which vary over time and, in turn, depend upon the membrane voltage (via the $\alpha$ and $\beta$ functions). This set of coupled equations thus results in rich feedback loops and non-linear effects, and the dependence of the output voltage on the input parameters is difficult to predict analytically.

To implement this model in code, we execute the following steps:

1. Initialize the membrane voltage $V_m$ and the rates $n$, $m$, and $h$.
2. Evaluate the $\alpha$ and $\beta$ functions at the current membrane voltage.
3. Use equation 1 to update $n$, $m$, and $h$.
4. Use equation 2 to calculate the time constants.
5. Use equation 3 to update the membrane voltage. This is the output sound signal.
6. Repeat steps 2-5.

The tunable parameters and input current can be varied as needed, allowing a user to control the audio synthesis. Further details of discretizing the HH model (including biologically relevant values of the parameters) can be found in various online references.

Once we had a working audio synthesis version of the model in ChucK [11], running as a custom class that generates a signal per-sample using input parameters from a MIDI device, we set about adapting it to be easier to interface with a variety of platforms. First, we adapted the ChucK code to be a pseudo-object in our OOPS Audio Library [9]. Since we had already integrated our OOPS library into the JUCE framework, this made it simple to export a VST plugin, allowing the neuron synthesis to run inside a DAW, or in Max/MSP using the vst~ object. It also allowed us to quickly prototype running the synthesis model on embedded hardware, such as our Genera brain, based on an STM32F7/H7 microcontroller [9].

[Figure 1: A waveform of a single neuron spike using the Neuron synthesizer (AC-coupled)]

[Figure 2: An oscillation waveform created by the Neuron synthesizer (AC-coupled)]

5. USE CASE: CONNECTOME, A PIECE USING THE NEURON INSTRUMENT
5.1 Compositional Concept
We wanted to quickly test the Neuron synthesizer's features in a real-world musical setting. Collaboratively with the Princeton Laptop Orchestra (PLOrk), we composed a structured improvisation piece exploring the possibilities of the instrument. This piece is called Connectome, named for the map of neural pathways in the brain. Our goal was to create a piece that was inspired by the functioning of the brain, in which the performers represented individual neurons. Since neurons pairing with or communicating with other neurons seemed to be a basic metaphor for brain function,
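As a concrete companion to the per-sample procedure described in Section 4, the discretized update can be sketched in Python. This is a minimal, non-real-time sketch, not the authors' ChucK/OOPS implementation: the α and β rate functions and the parameter values used here are assumed to be the classic ones from Hodgkin and Huxley's 1952 paper (the text above defers these details to external references), with voltage measured as displacement from rest in mV and the timestep in ms.

```python
import math

# Assumed classic Hodgkin-Huxley (1952) rate functions; voltage v is the
# displacement from resting potential in mV. Not taken from the paper's code.
def alpha_n(v): return 0.01 * (10.0 - v) / (math.exp((10.0 - v) / 10.0) - 1.0)
def beta_n(v):  return 0.125 * math.exp(-v / 80.0)
def alpha_m(v): return 0.1 * (25.0 - v) / (math.exp((25.0 - v) / 10.0) - 1.0)
def beta_m(v):  return 4.0 * math.exp(-v / 18.0)
def alpha_h(v): return 0.07 * math.exp(-v / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp((30.0 - v) / 10.0))

class NeuronOsc:
    def __init__(self, dt=0.01):
        # Tunable parameters (classic HH values, assumed): conductances in
        # mS/cm^2, reversal voltages in mV, capacitance in uF/cm^2.
        self.g_k, self.g_na, self.g_l = 36.0, 120.0, 0.3
        self.v_k, self.v_na, self.v_l = -12.0, 115.0, 10.6
        self.c_m = 1.0
        self.dt = dt  # timestep in ms
        # State: membrane voltage and the channel activation/inactivation
        # rates n, m, h, started near their resting values.
        self.v, self.n, self.m, self.h = 0.0, 0.3, 0.05, 0.6

    def tick(self, i_in):
        v, dt = self.v, self.dt
        # Equation 1: forward-Euler update of the gating variables.
        # (Algebraically identical to n + dt*(alpha*(1-n) - beta*n).)
        self.n = alpha_n(v) * dt + self.n * (1.0 - (alpha_n(v) + beta_n(v)) * dt)
        self.m = alpha_m(v) * dt + self.m * (1.0 - (alpha_m(v) + beta_m(v)) * dt)
        self.h = alpha_h(v) * dt + self.h * (1.0 - (alpha_h(v) + beta_h(v)) * dt)
        # Equation 2: time constants.
        tau_k = self.c_m / (self.g_k * self.n ** 4)
        tau_na = self.c_m / (self.g_na * self.m ** 3 * self.h)
        tau_l = self.c_m / self.g_l
        # Equation 3: membrane voltage update; this is the output sample.
        self.v = (v * (1.0 - dt * (1.0 / tau_k + 1.0 / tau_na + 1.0 / tau_l))
                  + dt * (i_in / self.c_m
                          + self.v_k / tau_k
                          + self.v_na / tau_na
                          + self.v_l / tau_l))
        return self.v

# A constant input current above the firing threshold makes the model
# spike repetitively, i.e. oscillate.
osc = NeuronOsc()
trace = [osc.tick(10.0) for _ in range(20000)]  # 200 ms of simulated time
```

Note that the output would still need DC removal (the AC coupling mentioned in the figure captions) and amplitude scaling before use as an audio signal, and a real-time version would tune Δt against the audio sample rate.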
