Applications in Heart Rate Variability

Total pages: 16 · File type: PDF · Size: 1020 KB

Data Analysis through Auditory Display: Applications in Heart Rate Variability

Mark Ballora
Faculty of Music, McGill University, Montréal
May 2000

A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements of the degree of Doctor of Philosophy in Music.
© Mark Ballora, May 2000

Table of Contents

Abstract ... v
Résumé ... vi
Acknowledgements ... vii

1. Introduction
   1.1 Purpose of Study ... 1
   1.2 Auditory Display ... 2
   1.3 Types of Auditory Display ... 3
   1.4 Heart Rate Variability ... 4
   1.5 Design of the Thesis ... 6

2. Survey of Related Literature
   2.1 Data in Music
       2.1.1 Data Music—Making Art from Information ... 8
       2.1.2 Biofeedback Music ... 14
       2.1.3 Nonlinear Dynamics in Music ... 16
           2.1.3.1 Fractal Music ... 16
           2.1.3.2 Mapping Chaotic (and other) Data ... 18
       2.1.4 Concluding Thoughts on Data as Music ... 23
   2.2 Auditory Display ... 25
       2.2.1 Elements of Auditory and Visual Displays ... 25
       2.2.2 Background Work in Auditory Display ... 27
       2.2.3 Monitoring Implementations ... 29
       2.2.4 Analysis Implementations ... 30
           2.2.4.1 Rings of Saturn ... 31
           2.2.4.2 Seismology ... 31
           2.2.4.3 Financial Analysis ... 33
           2.2.4.4 Quantum Mechanics ... 34
           2.2.4.5 Fluid Dynamics ... 34
   2.3 Heart Rate Variability ... 35
       2.3.1 Spectral Analyses ... 36
       2.3.2 Statistical Analyses ... 37
       2.3.3 Nonlinear Dynamics
           2.3.3.1 Nonlinear dynamics and biological systems ... 37
           2.3.3.2 Magnitude fluctuation analysis ... 39
           2.3.3.3 Spectrum of first-difference series ... 41
           2.3.3.4 Detrended fluctuation analysis ... 44
           2.3.3.5 Cumulative variation amplitude analysis (CVAA) ... 44

3. Choice of Software
   3.1 Software Synthesis ... 57
   3.2 Method of Illustration: Unit Generators and Signal Flow Charts ... 58
   3.3 Software Synthesis and Real Time Systems ... 59
   3.4 Operational Features of SuperCollider
       3.4.1 A virtual machine that runs at interrupt level ... 60
       3.4.2 Dynamic typing ... 62
       3.4.3 Real time garbage collection ... 62
       3.4.4 Object oriented paradigm ... 65
   3.5 SuperCollider Syntax ... 68
   3.6 Other Features of SuperCollider
       3.6.1 Graphical User Interface ... 69
       3.6.2 Ease of Use ... 70
       3.6.3 Spawning Events ... 70
       3.6.4 Collection Classes ... 70
       3.6.5 Sample Accurate Scheduling of Events ... 71
   3.7 Another Example: Can the Ear Detect Randomized Phases? ... 71

4. Description of HRV Sonification
   4.1 Development of a Heart Rate Variability Sonification Model
       4.1.1 Heart Rhythms in Csound
           4.1.1.1 Description of Csound model ... 74
           4.1.1.2 Flowchart Illustration ... 77
           4.1.1.3 Evaluation of Csound model ... 77
       4.1.2 Unit Generators Used in SuperCollider Sonification ... 80
           4.1.2.1 PSinGrain ... 80
           4.1.2.2 Phase Modulator ... 80
           4.1.2.3 Wavetable ... 81
           4.1.2.4 Band Limited Impulse Oscillator ... 82
           4.1.2.5 Klang ... 82
           4.1.2.6 Envelope Generator ... 82
       4.1.3 SuperCollider Sonification 1: Cumulative Variation Amplitude Analysis
           4.1.3.1 Components of the CVAA Sonification ... 83
               4.1.3.1.1 Beat to Beat ... 85
               4.1.3.1.2 NN/Median Filt ... 85
               4.1.3.1.3 NN50 ... 86
               4.1.3.1.4 Wavelet ... 86
               4.1.3.1.5 Hilbert Transform ... 86
               4.1.3.1.6 Median Filtered ... 87
               4.1.3.1.7 Timbres ... 87
               4.1.3.1.8 Median Running Window ... 87
           4.1.3.2 Flowchart Illustration, Code and Demonstrations ... 89
           4.1.3.3 Evaluation of CVAA Sonification ... 89
       4.1.4 SuperCollider Sonification 2: A General Model
           4.1.4.1 Components of the Sonification ... 90
               4.1.4.1.1 Discrete Events
                   4.1.4.1.1.1 NN Intervals ... 91
                   4.1.4.1.1.2 NN50 Intervals ... 91
               4.1.4.1.2 Continuous Events ... 91
                   4.1.4.1.2.1 Mean Value ... 92
                   4.1.4.1.2.2 Standard Deviation Value ... 92
           4.1.4.2 Flowchart Illustration, Code and Demonstrations ... 92
           4.1.4.3 Evaluation of General Model ... 94
   4.2 Listening Perception Test
       4.2.1 Purpose of the Test ... 96
       4.2.2 Method ... 97
       4.2.3 Results ... 98
       4.2.4 Other Descriptive Statistics ... 102
       4.2.5 Results for Each Diagnosis ... 105
       4.2.6 Discussion ... 109
   4.3 SuperCollider Sonification 3: Diagnosis of Sleep Apnea
       4.3.1 Modifications to General Model ... 111
       4.3.2 Flowchart Illustration, Code, and Demonstration ... 116

5. Summary and Conclusions
   5.1 Method of Sonification ... 119
   5.2 Auditory Display in Cardiology ... 121
   5.3 Future Work ... 121
   5.4 General Guidelines for the Creation of Auditory Displays ... 122
   5.5 Concluding Thoughts ... 123

Appendices
1. Fundamental Auditory Concepts and Terms
   1. Sound and Time ... 125
   2. Pitch ... 126
   3. Timbre ... 129
   4. Volume ... 133
   5. Localization ... 136
   6. Phase ... 138
2. Nonlinear Dynamics
   1. Iterative Functions, Asymptotic States and Chaos ... 141
   2. Fractals ... 145
   3. Scaled Noise ... 148
3. Description of the Poisson Distribution ... 151
4. Csound Code for Encoding Instrument Orchestra File ... 153
5. SuperCollider Code for HRV Sonification Models
   1. CVAA Sonification ... 157
   2. General Model ... 160
   3. Apnea Diagnosis Model ... 162
6. Listening Perception Test Materials
   1. Training Session for Listening Perception Test ... 166
   2. Listening Perception Test Response Forms ... 169
   3. Listening Perception Test Visual Displays ... 171

References ... 195

Accompanying CD Audio:
Track 1: Csound Sonification of Healthy Subject
Tracks 2–29: Sound Files Used for Listening Perception Test
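The thesis sonifies NN (normal-to-normal interbeat) intervals by mapping them onto sound parameters. As a minimal, hypothetical illustration of the idea — not Ballora's actual SuperCollider parameterization — one can scale instantaneous heart rate linearly onto a pitch range:

```python
# Illustrative sketch: map NN (interbeat) intervals to pitches.
# The mapping (heart rate -> MIDI notes 48-84) is an assumption for
# demonstration, not the thesis's actual sonification model.

def nn_to_midi(nn_seconds, lo_bpm=40.0, hi_bpm=180.0, lo_note=48, hi_note=84):
    """Scale instantaneous heart rate (60/NN) linearly onto a MIDI note range."""
    bpm = 60.0 / nn_seconds
    bpm = max(lo_bpm, min(hi_bpm, bpm))            # clamp physiological outliers
    frac = (bpm - lo_bpm) / (hi_bpm - lo_bpm)
    return round(lo_note + frac * (hi_note - lo_note))

def midi_to_hz(note):
    """Equal-tempered conversion, A4 = 440 Hz."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

nn_series = [0.80, 0.82, 0.78, 1.00, 0.60]          # seconds between beats
melody = [nn_to_midi(nn) for nn in nn_series]       # one pitch per heartbeat
```

Listening to such a stream, a steady rhythm yields a near-monotone line, while high beat-to-beat variability produces wide melodic leaps — the perceptual cue the thesis exploits.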
Recommended publications
  • Synchronous Programming in Audio Processing Karim Barkati, Pierre Jouvelot
    Synchronous programming in audio processing. Karim Barkati, Pierre Jouvelot. ACM Computing Surveys, Association for Computing Machinery, 2013, 46 (2), p. 24. doi:10.1145/2543581.2543591. HAL id: hal-01540047, https://hal-mines-paristech.archives-ouvertes.fr/hal-01540047, submitted 15 June 2017. (HAL is a multi-disciplinary open-access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.)
    Synchronous Programming in Audio Processing: A Lookup Table Oscillator Case Study. Karim Barkati and Pierre Jouvelot, CRI, Mathématiques et systèmes, MINES ParisTech, France. The adequacy of a programming language to a given software project or application domain is often considered a key factor of success in software development and engineering, even though little theoretical or practical information is readily available to help make an informed decision. In this paper, we address a particular version of this issue by comparing the adequacy of general-purpose synchronous programming languages to more domain-specific
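The case study named in the entry above is a lookup-table oscillator. A plain-Python sketch of the classic phase-accumulator design it refers to (truncating table lookup, no interpolation — simplifying assumptions for illustration):

```python
import math

def make_sine_table(size=1024):
    """Precompute one cycle of a sine wave into a lookup table."""
    return [math.sin(2 * math.pi * i / size) for i in range(size)]

def table_oscillator(table, freq_hz, sample_rate, n_samples):
    """Phase-accumulator oscillator: advance a fractional table index by
    freq * table_size / sample_rate per sample, wrap, truncate to look up."""
    size = len(table)
    incr = freq_hz * size / sample_rate
    phase = 0.0
    out = []
    for _ in range(n_samples):
        out.append(table[int(phase) % size])    # truncating lookup
        phase = (phase + incr) % size           # wrap the accumulator
    return out

table = make_sine_table()
samples = table_oscillator(table, 441.0, 44100.0, 100)   # exactly one cycle
```

Truncation keeps the inner loop branch-free and cheap at the cost of quantization noise; production oscillators usually add linear interpolation between adjacent table entries.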
  • Wendy Reid Composer
    WENDY REID, COMPOSER. 1326 Shattuck Avenue #2, Berkeley, California 94709. [email protected] · treepieces.net

    EDUCATION
    1982 — Stanford University, CCRMA, post-graduate study. Workshop in computer-generated music with lectures by John Chowning, Max Mathews, John Pierce and Jean-Claude Risset.
    1978–80 — Mills College, M.A. in Music Composition. Composition with Terry Riley, Robert Ashley and Charles Shere; violin and chamber music with the Kronos Quartet.
    1975–77 — Ecoles d'Art Américaines, Palais de Fontainebleau and Paris, France. Composition with Nadia Boulanger; classes in analysis, harmony, counterpoint, composition; solfège with assistant Annette Dieudonné.
    1970–75 — University of Southern California, School of Performing Arts, B.M. in Music Composition, minor in Violin Performance. Composition with James Hopkins, Halsey Stevens and film composer David Raksin.

    AWARDS, GRANTS, AND COMMISSIONS
    Meet The Composer/California; Meet The Composer/New York; Subito Composer Grant; ASMC Grant; Paul Merritt Henry Award; Hellman Award; The Oakland Museum; The Nature Company; Sound/Image Unlimited; Graduate Assistantship; California State Scholarship; Honors at Entrance, USC; National Merit Award Finalist; National Educational Development Award Finalist.
    Commissions: Brassiosaurus (Tomita/Djil/Heglin): Tree Piece #52; Joyce Umamoto: Tree Piece #42; Abel-Steinberg-Winant Trio: Tree Piece #41; Tom Dambly: Tree Piece #31; Mary Oliver: Tree Piece #21; Don Buchla: Tree Piece #17; William Winant: Tree Piece #10.

    DISCOGRAPHY
    LP/Cassette: TREE PIECES (Frog Records, 1988 / Frog Peak).
    CD: TREEPIECES (Frog Records, 2002 / Frog Peak); TREE PIECES volume 2 (Niente, 2004 / Frog Peak); TREE PIECE SINGLE #1: LULU VARIATIONS (Niente, 2009); TREE PIECE SINGLE #2: LU-SHOO FRAGMENTS (Niente, 2010).

    PUBLICATIONS
    Scores: Tree Pieces/Frog On Rock/Game of Tree/Klee Pieces/Glass Walls/Early Works (Frogpeak Music/Sound-Image/W.
  • Peter Blasser CV
    Peter Blasser – [email protected] – 410 362 8364

    EXPERIENCE
    Ten years running a synthesizer business, ciat-lonbarde, with a focus on touch, gesture, and spatial expression into audio; all the while documenting inventions and creations in digital video, audio, and still image, and disseminating this information via HTML web page design and YouTube. Leading workshops at various skill levels, through manual labor, exploring how synthesizers work hand in hand with acoustics, culminating in a montage of participants' pieces. Performance as touring musician, conceptual lecturer, or anything in between. As an undergraduate, served as apprentice to guild pipe organ builders. Experience as racquetball coach. Low brass wind instrumentalist. Fluent in Java, Max/MSP, SuperCollider, Csound, ProTools, C++, Sketchup, Osmond PCB, Dreamweaver, and Javascript.

    EDUCATION/AWARDS
    • 2002 Oberlin College, BA in Chinese, BM in TIMARA (Technology in Music and Related Arts), minors in Computer Science and Classics.
    • 2004 Fondation Daniel Langlois, Art and Technology Grant for the project "Shinths".
    • 2007 Baltimore City Grant for Artists, Craft Category.
    • 2008 Baltimore City Grant for Community Arts Projects, Urban Gardening.

    LIST OF APPEARANCES
    • "Visiting Professor, TIMARA dep't, Environmental Studies dep't", Oberlin College, Oberlin, Ohio, Spring 2011.
    • "Babier, piece for Dancer, Elasticity Transducer, and Max/MSP", High Zero Festival of Experimental Improvised Music, Theatre Project, Baltimore, September 2010.
    • "Sejayno:Cezanno (Opera)", CEZANNE FAST FORWARD, Baltimore Museum of Art, May 21, 2010.
    • "Deerhorn Tapestry Installation", Curators Incubator, 2009, MAP Maryland Art Place, September 15 – October 24, 2009. Curated by Shelly Blake-Pock, teachpaperless.blogspot.com.
    • "Deerhorn Micro-Cottage and Radionic Fish Drier", Electro-Music Gathering, New Jersey, October 28–29, 2009.
  • Real-Time Interpretation over Pre-Recorded Sound Material (Interpretação em Tempo Real sobre Material Sonoro Pré-Gravado)
    Real-time interpretation over pre-recorded sound material. João Pedro Martins Mealha dos Santos. Master's in Multimedia, University of Porto. Dissertation supervised by Professor José Alberto Gomes of the Universidade Católica Portuguesa – Escola das Artes. July 2014.
    Acknowledgements: First of all, I want to thank my parents for all their support and help, always. To my supervisor, José Alberto Gomes, a very special thank-you for all the patience and help with this dissertation. For their support, encouragement and help: Sara Esteves, Inês Santos, Manuel Molarinho, Carlos Casaleiro, Luís Salgado and all the other friends who, though physically absent, are always present. To all of them, thank you very much!
    Abstract: This dissertation focuses on real-time interpretation over pre-recorded sound material in a performative context. In this particular case, sound material is understood as music with a regular, well-defined pulse. The goal of this research is to understand the different models of organization of such material and, consequently, to present a solution in the form of an application oriented toward live performance, entitled Reap. It should be noted that the sound material used in the software presented here consists of entire songs, as opposed to the short samples common in many existing applications. In developing the application, statistical analysis of descriptors was applied to the pre-recorded sound material in order to extract segments that allow a new reorganization of the sequential information originally contained in a song. Using matrix controllers with visual feedback, the arrangement and distribution of these segments are altered and reorganized in a more simplified way.
  • ChucK: A Strongly Timed Computer Music Language
    ChucK: A Strongly Timed Computer Music Language. Ge Wang and Spencer Salazar, Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, 660 Lomita Drive, Stanford, California 94306, USA, {ge, spencer}@ccrma.stanford.edu; Perry R. Cook, Department of Computer Science, Princeton University, 35 Olden Street, Princeton, New Jersey 08540, USA, [email protected].
    Abstract: ChucK is a programming language designed for computer music. It aims to be expressive and straightforward to read and write with respect to time and concurrency, and to provide a platform for precise audio synthesis and analysis and for rapid experimentation in computer music. In particular, ChucK defines the notion of a strongly timed audio programming language, comprising a versatile time-based programming model that allows programmers to flexibly and precisely control the flow of time in code and use the keyword now as a time-aware control construct, and gives programmers the ability to use the timing mechanism to realize sample-accurate concurrent programming. Several case studies are presented that illustrate the workings, properties, and personality of the language. We also discuss applications of ChucK in laptop orchestras, computer music pedagogy, and mobile music instruments. Properties and affordances of the language and its future directions are outlined.
    What Is ChucK? ChucK (Wang 2008) is a computer music programming language. First released in 2003, it is designed to support a wide array of real-time and interactive tasks such as sound synthesis, physical modeling, gesture mapping, algorithmic composition, sonification, audio analysis, and live performance. ... form the notion of a strongly timed computer music programming language.
    Two Observations about Audio Programming. Time is intimately connected with sound and is
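ChucK's strongly timed model advances a logical `now` explicitly in code rather than following the wall clock; a shred that executes `22050::samp => now;` resumes exactly 22,050 samples later. A rough Python analogy of that idea (not ChucK syntax) — coroutines scheduled on a logical sample clock:

```python
import heapq

def metro(period, count):
    """A shred-like coroutine: each yield asks to advance logical time by
    `period` samples, analogous to ChucK's `period::samp => now;`."""
    for _ in range(count):
        yield period

def run(shreds):
    """Cooperative scheduler over a logical sample clock: every shred resumes
    at exactly the sample it asked for, independent of wall-clock time."""
    queue = [(0, i, s) for i, s in enumerate(shreds)]
    heapq.heapify(queue)
    fired = []
    while queue:
        now, i, shred = heapq.heappop(queue)
        fired.append((now, i))                    # record this shred's event
        try:
            wait = next(shred)
            heapq.heappush(queue, (now + wait, i, shred))
        except StopIteration:
            fired.pop()                           # shred ended; no event here
    return fired

# Two concurrent 'shreds' at different periods (44.1 kHz sample counts):
events = run([metro(22050, 3), metro(11025, 5)])
```

Because time is a value the program advances, the interleaving of the two shreds is fully deterministic — the property ChucK calls sample-accurate concurrency.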
  • Implementing Stochastic Synthesis for SuperCollider and iPhone
    Implementing stochastic synthesis for SuperCollider and iPhone Nick Collins Department of Informatics, University of Sussex, UK N [dot] Collins ]at[ sussex [dot] ac [dot] uk - http://www.cogs.susx.ac.uk/users/nc81/index.html Proceedings of the Xenakis International Symposium Southbank Centre, London, 1-3 April 2011 - www.gold.ac.uk/ccmc/xenakis-international-symposium This article reflects on Xenakis' contribution to sound synthesis, and explores practical tools for music making touched by his ideas on stochastic waveform generation. Implementations of the GENDYN algorithm for the SuperCollider audio programming language and in an iPhone app will be discussed. Some technical specifics will be reported without overburdening the exposition, including original directions in computer music research inspired by his ideas. The mass exposure of the iGendyn iPhone app in particular has provided a chance to reach a wider audience. Stochastic construction in music can apply at many timescales, and Xenakis was intrigued by the possibility of compositional unification through simultaneous engagement at multiple levels. In General Dynamic Stochastic Synthesis Xenakis found a potent way to extend stochastic music to the sample level in digital sound synthesis (Xenakis 1992, Serra 1993, Roads 1996, Hoffmann 2000, Harley 2004, Brown 2005, Luque 2006, Collins 2008, Luque 2009). In the central algorithm, samples are specified as a result of breakpoint interpolation synthesis (Roads 1996), where breakpoint positions in time and amplitude are subject to probabilistic perturbation. Random walks (up to second order) are followed with respect to various probability distributions for perturbation size. Figure 1 illustrates this for a single breakpoint; a full GENDYN implementation would allow a set of breakpoints, with each breakpoint in the set updated by individual perturbations each cycle.
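The entry above describes GENDYN's core move: breakpoint positions in time and amplitude follow bounded random walks, re-perturbed once per waveform cycle. A schematic Python sketch for a single breakpoint, matching the single-breakpoint illustration the abstract mentions (step sizes, bounds, and the reflection rule are illustrative assumptions, not Xenakis's exact formulation):

```python
import random

def gendyn_breakpoint(n_cycles, amp_step=0.1, dur_step=2, seed=1):
    """GENDYN-style random walk for ONE breakpoint: each synthesis cycle
    perturbs the breakpoint's amplitude and duration (in samples),
    reflecting values that wander past the allowed boundaries."""
    rng = random.Random(seed)
    amp, dur = 0.0, 20                               # starting state
    path = []
    for _ in range(n_cycles):
        amp += rng.uniform(-amp_step, amp_step)      # amplitude random walk
        dur += rng.choice([-dur_step, 0, dur_step])  # duration random walk
        if amp > 1.0:                                # reflect at the barriers
            amp = 2.0 - amp
        elif amp < -1.0:
            amp = -2.0 - amp
        dur = min(max(dur, 4), 100)                  # clamp duration range
        path.append((dur, amp))
    return path

path = gendyn_breakpoint(200)
```

A full implementation would hold a set of such breakpoints, linearly interpolate samples between them each cycle, and optionally drive the step sizes themselves from second-order walks and non-uniform probability distributions.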
  • Steven T. Kemper Department of Music | Rutgers, the State University of New Jersey 81 George St
    Steven T. Kemper. Department of Music, Rutgers, The State University of New Jersey, 81 George St., New Brunswick, NJ 08901. Phone: 773-677-023. [email protected] · www.stevenkemper.com

    EDUCATION
    Ph.D., Music Composition and Computer Technologies, University of Virginia, December 2012. Dissertation: From Sacred Narrative to Evocations of Ancientness: Mythical meaning in contemporary music.
    M.M., Music Composition, Bowling Green State University, August 2006.
    B.A., Music, Bowdoin College, May 2003. Honors in music.

    TEACHING EXPERIENCE
    Assistant Professor of Music, Rutgers University, 2013–present (reappointed Spring 2016).
    • 07:700:515: Computer Composition (Fall 2014–2017)
    • 07:700:470: Electroacoustic Musical Instrument Design (Fall 2018)
    • 07:700:469: Interactive Computer Music (Spring 2015, 2016, 2018)
    • 07:700:375: Composition Practicum (Spring 2015–2016)
    • 07:700:284: Digital Audio Composition (Spring 2014, 2015, 2018; Fall 2017, 2018)
    • 07:700:127: Introduction to Music Technology (Spring 2014, Fall 2013)
    • 07:700:105: Making Music with Computers: Introduction to Digital Audio (Spring 2016, Fall 2014–2016)
    • 07:701:X76: Composition Lessons, graduate/undergraduate (Spring 2014–present)
    • 07:701:304: Rutgers Interactive Music Ensemble (Spring 2018, Fall 2018)
    • 01:090:101: Handmade Sound: Making sound art and music with electronics, Aresty-Byrne Seminar (Fall 2014–2018)
    Adjunct Faculty, University of Virginia, 2011. MUSI 3370: Songwriting (Spring 2011).
    Instructor, University of Virginia, 2008–2012. MUSI 1310: Basic Musical Skills, Introduction to Music Theory
  • iSuperColliderKit: A Toolkit for iOS Using an Internal SuperCollider Server as a Sound Engine
    iSuperColliderKit: A Toolkit for iOS Using an Internal SuperCollider Server as a Sound Engine. Akinori Ito (Tokyo University of Technology), Kengo Watanabe (Watanabe-DENKI Inc.), Genki Kuroda (Tokyo University of Technology), Ken'ichiro Ito (Tokyo University of Technology). ICMC 2015, Sept. 25 – Oct. 1, 2015, CEMI, University of North Texas.
    Abstract: iSuperColliderKit is a toolkit for iOS using an internal SuperCollider server as a sound engine. Through this research, we have adapted the existing SuperCollider source code for iOS to the latest environment. Further, we attempted to detach the UI from the sound engine so that native iOS visual objects built in Objective-C or Swift send any user-interaction events to the internal SuperCollider server. As a result, iSuperColliderKit makes it possible to utilize the vast resources of dynamic, real-time changing musical elements or algorithmic composition on SuperCollider. The editor client sends OSC code-fragments to its server. By adopting this model, SuperCollider lets a programmer dynamically change musical elements: phrases, rhythms, scales and so on. This real-time interactivity is effectively utilized mainly in the live-coding field. If iOS developers make their applications adopt the "sound-server" model, using SuperCollider seems a reasonable choice. However, the porting situation is not so good. Sonic Pi [5] is one musical programming environment that has a SuperCollider server internally; however, it is only for Raspberry Pi, Windows and OSX.
  • The Viability of the Web Browser As a Computer Music Platform
    The Viability of the Web Browser as a Computer Music Platform. Lonce Wyse and Srikumar Subramanian, Communications and New Media Department, National University of Singapore, Blk AS6, #03-41, 11 Computing Drive, Singapore 117416. [email protected], [email protected].
    Abstract: The computer music community has historically pushed the boundaries of technologies for music-making, using and developing cutting-edge computing, communication, and interfaces in a wide variety of creative practices to meet exacting standards of quality. Several separate systems and protocols have been developed to serve this community, such as Max/MSP and Pd for synthesis and teaching, JackTrip for networked audio, MIDI/OSC for communication, as well as Max/MSP and TouchOSC for interface design, to name a few. With the still-nascent Web Audio API standard and related technologies, we are now, more than ever, seeing an increase in these capabilities and their integration in a single ubiquitous platform: the Web browser. In this article, we examine the suitability of the Web browser as a computer music platform in critical aspects of audio synthesis, timing, I/O, and communication. We focus on the new Web Audio API and situate it in the context of associated technologies to understand how well they together can be expected to meet the musical, computational, and development needs of the computer music community. We identify timing and extensibility as two key areas that still need work in order to meet those needs.
    To date, despite the work of a few intrepid musical explorers, the Web browser platform has not been widely considered as a viable platform for the development of computer music. Why would musicians care about working in the browser, a platform not specifically designed for computer music? Max/MSP is an example of a
  • Talbertronic Festival Workshop I
    ◊◊ THE OBERLIN COLLEGE CONSERVATORY OF MUSIC PRESENTS ◊◊ Talbertronic Festival, March 2-4, 2017, Oberlin, Ohio.

    Dear Friends,

    The writer Bill Bryson observed that "few things last for more than a generation in America." Indeed, even in the slow-to-change world of academic institutions, it is often the case that non-traditional programs or departments come and go in a decade or two. And yet we gather this weekend in honor of John Talbert's retirement to celebrate the sustained energy and success of the TIMARA Department as it approaches the 50th anniversary of its origins. Our longevity has a lot to do with our adaptability, and our adaptability over the past 38 years has a lot to do with John. Even as he walks out the door, John remains a step ahead, always on the lookout for new methods and technologies but also wise in his avoidance of superficial trends.

    Take a moment this weekend to consider the number and variety of original compositions, artworks, performances, installations, recordings, instrument designs, and other projects that John has influenced and helped bring into being during his time at Oberlin. All the while, John has himself designed and built literally rooms full of unique and reliable devices that invite student and faculty artists to express themselves with sonic and visual media. Every bit of the teaching and learning that transpires each day in TIMARA is influenced by John and will continue to be for years to come. Even when he knows better (which by now is just about always), he is willing to trust his colleagues, humor us faculty and our outlandish requests, and let students make personal discoveries through experimentation.
  • Editorial: Alternative Histories of Electroacoustic Music
    This is a repository copy of "Editorial: Alternative histories of electroacoustic music". White Rose Research Online URL: http://eprints.whiterose.ac.uk/119074/. Version: Accepted Version.
    Article: Mooney, J (orcid.org/0000-0002-7925-9634), Schampaert, D and Boon, T (2017) Editorial: Alternative histories of electroacoustic music. Organised Sound, 22 (02), pp. 143-149. ISSN 1355-7718. https://doi.org/10.1017/S135577181700005X. This article has been published in a revised form in Organised Sound; this version is free to view and download for private research and study only, not for re-distribution, re-sale or use in derivative works. © Cambridge University Press.
    EDITORIAL: Alternative Histories of Electroacoustic Music. In the more than twenty years of its existence, Organised Sound has rarely focussed on issues of history and historiography in electroacoustic music research.
  • 62 Years and Counting: MUSIC N and the Modular Revolution
    62 Years and Counting: MUSIC N and the Modular Revolution. By Brian Lindgren. MUSC 7660X – History of Electronic and Computer Music, Fall 2019. 24 December 2019. © Copyright 2020 Brian Lindgren.

    Abstract. MUSIC N by Max Mathews had two profound impacts in the world of music synthesis. The first was the implementation of modularity to ensure flexibility as a tool for the user; with the introduction of the unit generator, the instrument and the compiler, composers had the building blocks to create an unlimited range of sounds. The second was the impact of this implementation on the modular analog synthesizers developed a few years later. While Jean-Claude Risset, a well-known Mathews associate, asserts this, Mathews actually denies it. They both are correct in their perspectives.

    Introduction. Over 76 years have passed since the invention of the first electronic general-purpose computer, the ENIAC. Today, we carry computers in our pockets that can perform millions of times more calculations per second. With the amazing rate of change in computer technology, it's hard to imagine that any development of yesteryear could maintain a semblance of relevance today. However, in the world of music synthesis, the foundations that were laid six decades ago not only spawned a breadth of multifaceted innovation but continue to function as the bedrock of important digital applications used around the world today. Not only did the new modular approach implemented by its creator, Max Mathews, ensure that the MUSIC N lineage would continue to be useful in today's world (in one of its descendants, Csound), but this approach also likely inspired the analog synthesizer engineers of the day, impacting their designs.
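Mathews's unit-generator idea — small signal-processing modules patched together into "instruments" — can be sketched in a few lines of Python, using generators as the modules (an illustrative analogy, not MUSIC N code):

```python
import math

def osc(freq, sample_rate=44100):
    """Unit generator: an endless sine oscillator."""
    phase, incr = 0.0, 2 * math.pi * freq / sample_rate
    while True:
        yield math.sin(phase)
        phase += incr

def line(start, end, n):
    """Unit generator: a linear envelope over n samples."""
    for i in range(n):
        yield start + (end - start) * i / (n - 1)

def mul(a, b):
    """Patch-cord-style combination: multiply two signal streams."""
    for x, y in zip(a, b):
        yield x * y

# An 'instrument' is just a patch of unit generators, MUSIC N style:
# a 440 Hz tone shaped by a 1000-sample decay envelope.
note = list(mul(osc(440.0), line(1.0, 0.0, 1000)))
```

The point of the design is exactly what the essay argues: because each module exposes the same stream interface, any output can feed any input, and the same handful of primitives composes into an unlimited range of instruments — the property the modular analog synthesizers later mirrored with voltage-controlled patch points.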