Music Information Retrieval in Live Coding


Music Information Retrieval in Live Coding: A Theoretical Framework

Anna Xambó,∗ Alexander Lerch,† and Jason Freeman†

∗Music Technology, Department of Music, Norwegian University of Science and Technology, 7491 Trondheim, Norway
†Center for Music Technology, Georgia Institute of Technology, 840 McMillan St NW, Atlanta, Georgia 30332, USA
[email protected], {alexander.lerch, jason.freeman}@gatech.edu

Computer Music Journal, 42:4, pp. 9–25, Winter 2018. doi:10.1162/COMJ_a_00484. © 2019 Massachusetts Institute of Technology.

Abstract: Music information retrieval (MIR) has great potential in musical live coding because it can help the musician–programmer to make musical decisions based on audio content analysis and to explore new sonorities by means of MIR techniques. Real-time MIR techniques can be computationally demanding, and thus they have rarely been used in live coding; when they have been used, it has been with a focus on low-level feature extraction. This article surveys and discusses the potential of MIR applied to live coding at a higher musical level. We propose a conceptual framework of three categories: (1) audio repurposing, (2) audio rewiring, and (3) audio remixing. We explored the three categories in live performance through MIRLC, an application programming interface library written in SuperCollider. We found that it is still a technical challenge to use high-level features in real time, yet using rhythmic and tonal properties (midlevel features) in combination with text-based information (e.g., tags) helps to achieve a closer perceptual level centered on pitch and rhythm when using MIR in live coding. We discuss challenges and future directions for utilizing MIR approaches in the computer music field.

Live coding in music is an improvisation practice based on generating code in real time, either by writing it directly or by using interactive programming (Brown 2006; Collins et al. 2007; Rohrhuber et al. 2007; Freeman and Troyer 2011). Music information retrieval (MIR) is an interdisciplinary research field that targets, generally speaking, the analysis and retrieval of music-related information, with a focus on extracting information from audio recordings (Lerch 2012). Applying MIR techniques to live coding can contribute to the process of music generation, both creatively and computationally. One potential scenario would be to create categorizations of audio streams and extract information on timbre and performance content, as well as to drive semiautomatic audio remixing, enabling the live coder to focus on high-level musical properties and decisions. Another potential scenario is to model a high-level music space with certain algorithmic behaviors and allow the live coder to combine the semiautomatic retrieval of sounds from both crowdsourced and personal databases.

This article explores the challenges and opportunities that MIR techniques can offer to live coding practices. Its main contributions are: a survey of the state of the art of MIR in live coding; a categorization of approaches to live coding using MIR, illustrated with examples; an implementation in SuperCollider of the three approaches, demonstrated in test-bed performances; and a discussion of future directions for real-time computer music generation based on MIR techniques.

Background

In this section, we overview the state of the art of live coding environments and of MIR related to live performance applications.
Live Coding Programming Languages

The terms "live coding language" and "live coding environment," which support the activity of live coding in music, will be used interchangeably in this article. Programming languages that have been used for live coding include ChucK (Wang 2008), Extempore (previously Impromptu; Sorensen 2018), FoxDot (Kirkbride 2016), Overtone (Aaron and Blackwell 2013), Sonic Pi (Aaron and Blackwell 2013), SuperCollider (McCartney 2002), TidalCycles (McLean and Wiggins 2010), Max and Pd (Puckette 2002), and Scratch (Resnick et al. 2009). With the advent and development of JavaScript and Web Audio, a new generation of Web-based live coding languages has emerged, e.g., Gibber (Roberts and Kuchera-Morin 2012). Visual live coding is also an emerging practice, notably with Hydra (https://github.com/ojack/hydra) and Cyril (http://cyrilcode.com/).

Commercial Software Using MIR for Live Performance

Real-time MIR has been investigated and implemented in mainstream software ranging from digital audio workstations (DAWs), such as Ableton Live; karaoke software, such as Karaoki (http://www.pcdj.com/karaoke-software/karaoki/) and My Voice Karaoke (http://www.emediamusic.com/karaoke-software/my-voice-karaoke.html); and DJ software, such as Traktor (https://www.native-instruments.com/en/products/traktor/), Dex 3 (http://www.pcdj.com/dj-software/dex-3/), and Algoriddim's Djay Pro 2 (https://www.algoriddim.com/); to song retrieval and query-by-humming applications, such as Midomi (https://www.midomi.com/) and SoundHound (https://soundhound.com/). Typically, these solutions include specialized MIR tasks, such as pitch tracking for the karaoke software, and beat and key detection for the DJ software. In these applications, the MIR tasks are focused on high-level musical information (as opposed to low-level feature extraction) and thus inspire this research. There are several collaborations and conversations between industry and research looking at real-time MIR applied to composition, editing, and performance (Bernardini et al. 2007; Serra et al. 2013). To our knowledge, however, there has been little research on the synergies between MIR in musical live coding and MIR in commercial software. The present article aims to help fill that gap.
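As a point of reference for the DJ-software tasks just described, beat and key detection are directly available to live coders: SuperCollider (the implementation language used later in this article) ships machine-listening UGens for both. The sketch below is a minimal illustration, assuming a running server and a live audio input; it is not code from the article or from MIRLC.

    // Beat and key tracking on live input with built-in machine-listening UGens.
    (
    {
        var in, beatChain, keyChain, beat, halfbeat, quarterbeat, tempo, key;
        in = SoundIn.ar(0);
        beatChain = FFT(LocalBuf(1024), in);   // BeatTrack expects a 1,024-point FFT
        keyChain = FFT(LocalBuf(4096), in);    // KeyTrack expects a 4,096-point FFT
        # beat, halfbeat, quarterbeat, tempo = BeatTrack.kr(beatChain);
        key = KeyTrack.kr(keyChain);           // running key estimate (0-23: majors, then minors)
        tempo.poll(beat, "tempo");             // print the tempo estimate on each detected beat
        key.poll(1, "key");
        Silent.ar(1)                           // analysis only, no audio output
    }.play;
    )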
Real-Time Feature Extraction Tools for Live Coding Environments

In this section, we present a largely chronological and functional analysis of existing MIR tools for live coding and interactive programming, to understand the characteristics and properties of the features supported, identify the core set of features, and discuss whether the existing tools satisfy the requirements of live coders.

Nearly all MIR systems extract a certain intermediate feature representation from the input audio and use this representation for inference—for example, classification or regression. Although still an open debate, there is some common ground in the MIR literature about the categorization of audio features into low-, mid-, and high-level. In an earlier publication, the second author referred to low-level features as instantaneous features, extracted from small blocks of audio and describing various, mostly technical properties, such as the spectral shape, intensity, or a simple characterization of pitched content (Lerch 2012). Serra et al. (2013) refer to tonal and temporal descriptors as midlevel features. High-level features usually describe semantic characteristics, e.g., emotion, mood, musical form, and style (Lerch 2012), or overall musical, cultural, and psychological characteristics, such as genre, harmony, mood, rhythm, and tonality (Serra et al. 2013). The distinction between low-, mid-, and high-level audio features is used here because it provides a sufficient level of granularity to be useful to the live coder.

Various audio feature extraction tools for live coding have existed for some time. In this section, we present an overview of existing tools for the three environments ChucK (Wang 2008), Max (Puckette 2002), and SuperCollider (McCartney 2002), because of their popularity, extensive functionality, and maturity. The findings and conclusions, however, are generalizable to other live coding environments. An illustrative selection of MIR tools for analysis is listed in Table 1.

Table 1. Selected MIR Tools for Live Coding Environments

Library Type | MIR Tool | Authors | Live Coding Environment
Included | Unit Analyzer (UAna) | Wang, Fiebrink, and Cook (2007) | ChucK
Included | fiddle∼, bonk∼, and sigmund∼ objects | Puckette, Apel, and Zicarelli (1998) | Max
Included | Built-in UGens for machine listening | SuperCollider community | SuperCollider
Third-party | SMIRK | Fiebrink, Wang, and Cook (2008) | ChucK
Third-party | LibXtract | Bullock (2007) | Max, SuperCollider
Third-party | Zsa.Descriptors | Malt and Jourdan (2008) | Max
Third-party | analyzer∼ object | Tristan Jehan | Max
Third-party | MuBu | Schnell et al. (2009) | Max
Third-party | SCMIR | Collins (2011) | SuperCollider
Third-party | Freesound quark | Gerard Roma | SuperCollider

Figure 1 shows a timeline of the analyzed MIR tools from the 1990s to the present, which provides a historical and technological perspective on the evolution of the programming environments and dependent libraries. As shown in Figure 2, the live coding environment that includes the most audio features is SuperCollider, and the third-party libraries with the largest number of features are LibXtract and … Considering these most popular features, it is noticeable that the majority are low-level (instantaneous) features (e.g., RMS or spectral centroid). These features have been widely used in audio content analysis—for example, to classify audio signals. In live coding, they can be helpful for determining audiovisual changes in real time, yet the amount of data can be overwhelming, and most of …
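The low-level features just named are exactly what SuperCollider's built-in UGens expose. A minimal sketch, again assuming a running server and a live input (not code from MIRLC), polling an amplitude envelope, the spectral centroid, and pitched content:

    // Polling three low-level (instantaneous) features in real time.
    (
    {
        var in, chain, centroid, amp, freq, hasFreq;
        in = SoundIn.ar(0);
        chain = FFT(LocalBuf(2048), in);
        centroid = SpecCentroid.kr(chain);   // spectral shape: centroid in Hz
        amp = Amplitude.kr(in);              // intensity: amplitude envelope follower
        # freq, hasFreq = Pitch.kr(in);      // simple characterization of pitched content
        centroid.poll(4, "centroid");
        amp.poll(4, "amp");
        freq.poll(4, "freq");
        Silent.ar(1)                         // analysis only, no audio output
    }.play;
    )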
Recommended publications
  • An AI Collaborator for Gamified Live Coding Music Performances
    Autopia: An AI Collaborator for Gamified Live Coding Music Performances. Norah Lorway, Matthew Jarvis, Arthur Wilson, Edward J. Powley, and John Speakman.

    Abstract. Live coding is "the activity of writing (parts of) a program while it runs" [20]. One significant application of live coding is in algorithmic music, where the performer modifies the code generating the music in a live context. Utopia is a software tool for collaborative live coding performances, allowing several performers (each with their own laptop producing its own sound) to communicate and share code during a performance. We propose an AI bot, Autopia, which can participate in such performances, communicating with human performers through Utopia. This form of human-AI collaboration allows us to explore the implications of computational creativity from the perspective of live coding.

    1 Background. … regards to this dynamic between the performer and audience, where the level of risk involved in the performance is made explicit. However, in the context of the system proposed in this paper, we are more concerned with the effect that this has on the performer themselves. Any performer at an algorave must be prepared to share their code publicly, which inherently encourages a mindset of collaboration and communal learning with live coders.

    1.2 Collaborative live coding. Collaborative live coding takes its roots from laptop orchestras and ensembles such as the Princeton Laptop Orchestra (PLOrk), an ensemble of computer-based instruments formed at Princeton University [19].
  • ToscA: An OSC Communication Plugin for Object-Oriented Spatialization Authoring
    ICMC 2015, Sept. 25–Oct. 1, 2015, CEMI, University of North Texas. ToscA: An OSC Communication Plugin for Object-Oriented Spatialization Authoring. Thibaut Carpentier, UMR 9912 STMS IRCAM-CNRS-UPMC, 1, place Igor Stravinsky, 75004 Paris. [email protected]

    ABSTRACT. The paper presents ToscA, a plugin that uses the OSC protocol to transmit automation parameters between a digital audio workstation and a remote application. A typical use case is the production of an object-oriented spatialized mix independently of the limitations of the host applications.

    1. INTRODUCTION. There is today a growing interest in sound spatialization. The movie industry seems to be shifting towards "3D" sound systems; mobile devices have radically changed our listening habits, and multimedia broadcast companies are evolving accordingly, showing substantial interest in binaural techniques and interactive content; massively multichannel equipment and transmission protocols are available, and they facilitate the installation of ambitious speaker setups in concert halls [1] or exhibition venues. … or consensus on the representation of spatialization metadata (in spite of several initiatives [11, 12]) complicates the inter-exchange between applications. This paper introduces ToscA, a plugin that communicates automation parameters between a digital audio workstation and a remote application. The main use case is the production of massively multichannel object-oriented spatialized mixes.

    2. WORKFLOW. We propose a workflow where the spatialization rendering engine lies outside of the workstation (Figure 1). This architecture requires (bidirectional) communication between the DAW and the remote renderer, and both the audio streams and the automation data shall be conveyed. After being processed by the remote renderer, the spatialized signals could either feed the reproduction setup (loudspeakers or headphones), be…
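    The OSC pattern ToscA describes (automation parameters streamed from a DAW to a remote renderer) can be sketched in SuperCollider, the reference language of the main article above. The address and parameter name below (/toscA/1/azim, azim) are hypothetical placeholders, not ToscA's actual schema:

    // A stand-in "renderer": one panned source whose azimuth is remote-controlled.
    (
    Ndef(\source1, { |azim = 0|
        Pan2.ar(PinkNoise.ar(0.1), azim.clip(-1, 1))
    }).play;

    // Receive automation values over OSC and forward them to the source.
    OSCdef(\azim, { |msg|
        Ndef(\source1).set(\azim, msg[1]);   // msg[1] = first value after the address
    }, '/toscA/1/azim');                     // hypothetical address
    )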
  • ChucK: A Strongly Timed Computer Music Language
    Ge Wang,∗ Perry R. Cook,† and Spencer Salazar.∗ ChucK: A Strongly Timed Computer Music Language. ∗Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, 660 Lomita Drive, Stanford, California 94306, USA, {ge, spencer}@ccrma.stanford.edu. †Department of Computer Science, Princeton University, 35 Olden Street, Princeton, New Jersey 08540, USA, [email protected]

    Abstract: ChucK is a programming language designed for computer music. It aims to be expressive and straightforward to read and write with respect to time and concurrency, and to provide a platform for precise audio synthesis and analysis and for rapid experimentation in computer music. In particular, ChucK defines the notion of a strongly timed audio programming language, comprising a versatile time-based programming model that allows programmers to flexibly and precisely control the flow of time in code and use the keyword now as a time-aware control construct, and gives programmers the ability to use the timing mechanism to realize sample-accurate concurrent programming. Several case studies are presented that illustrate the workings, properties, and personality of the language. We also discuss applications of ChucK in laptop orchestras, computer music pedagogy, and mobile music instruments. Properties and affordances of the language and its future directions are outlined.

    What Is ChucK? ChucK (Wang 2008) is a computer music programming language. First released in 2003, it is designed to support a wide array of real-time and interactive tasks such as sound synthesis, physical modeling, gesture mapping, algorithmic composition, sonification, audio analysis, and live performance. … form the notion of a strongly timed computer music programming language.

    Two Observations about Audio Programming. Time is intimately connected with sound and is…
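    SuperCollider, the reference language elsewhere on this page, has no equivalent of ChucK's strongly timed now; the closest everyday idiom is a clock-driven routine whose logical time advances by explicit waits. The sketch below is offered only as a loose point of comparison, not as a rendering of ChucK's sample-accurate semantics:

    // Scheduling events by advancing logical time (cf. ChucK's "1::second => now").
    (
    SynthDef(\ping, { |freq = 880|
        var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.001, 0.2), doneAction: 2);
        Out.ar(0, sig ! 2 * 0.1);
    }).add;

    fork {
        s.sync;                                   // wait for the SynthDef to reach the server
        8.do { |i|
            Synth(\ping, [\freq, 440 * (i % 4 + 1)]);
            0.25.wait;                            // advance logical time by a quarter second
        };
    };
    )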
  • Confessions of a Live Coder
    Proceedings of the International Computer Music Conference 2011, University of Huddersfield, UK, 31 July–5 August 2011. CONFESSIONS OF A LIVE CODER. Thor Magnusson, ixi audio & Faculty of Arts and Media, University of Brighton, Grand Parade, BN2 0JY, UK.

    ABSTRACT. This paper describes the process involved when a live coder decides to learn a new musical programming language of another paradigm. The paper introduces the problems of running comparative experiments, or user studies, within the field of live coding. It suggests that an autoethnographic account of the process can be helpful for understanding the technological conditioning of contemporary musical tools. The author is conducting a larger research project on this theme: the part presented in this paper describes the adoption of a new musical programming environment, Impromptu [35], and how this affects the author's musical practice.

    1. INTRODUCTION. … upon writing music with programming languages for very specific reasons, and those are rarely comparable. At ICMC 2007, in Copenhagen, I met Andrew Sorensen, the author of Impromptu and member of the aa-cell ensemble that performed at the conference. We discussed how one would explore and analyse the process of learning a new programming environment for music. One of the prominent questions here is how a functional programming language like Impromptu would influence the thinking of a computer musician with a background in an object-orientated programming language, such as SuperCollider. Being an avid user of SuperCollider, I was intrigued by the perplexing code structure and work patterns demonstrated in the aa-cell performances using Impromptu [36]. I subsequently decided to embark upon studying this environment and perform a reflexive study of the process.
  • FAUST Tutorial for Functional Programmers
    FAUST Tutorial for Functional Programmers. Y. Orlarey, S. Letz, D. Fober, R. Michon. ICFP 2017 / FARM 2017.

    What is Faust? A programming language (DSL) to build electronic music instruments.

    Some music programming languages: 4CED, Adagio, AML, AMPLE, Antescofo, Arctic, Autoklang, Bang, Canon, CHANT, ChucK, CLCE, CMIX, Cmusic, CMUSIC, Common Lisp Music, Common Music, Common Music Notation, Csound, CyberBand, DARMS, DCMP, DMIX, Elody, EsAC, Euterpea, Extempore, Faust, Flavors Band, Fluxus, FOIL, FORMES, FORMULA, Fugue, Gibber, GROOVE, GUIDO, HARP, Haskore, HMSL, INV, invokator, KERN, Keynote, Kyma, LOCO, LPC, Mars, MASC, Max, MCL, MidiLisp, MidiLogo, MODE, MOM, Moxc, MSX, MUS10, MUS8, MUSCMP, MuseData, MusES, Music1000, MUSIC 10, MUSIC 11, MUSIC 360, MUSIC 4B, MUSIC 4BF, MUSIC 4F, MUSIC 6, MUSIC7, MUSIC III/IV/V, MusicLogo, Musictex, MusicXML, MUSIGOL, Musixtex, NIFF, NOTELIST, Nyquist, OPAL, OpenMusic, Organum1, Outperform, Overtone, Patchwork, PE, PILE, Pla, PLACOMP, PLAY1, PLAY2, PMX, POCO, POD6, POD7, PROD, Puredata, PWGL, Ravel, SALIERI, SCORE, ScoreFile, SCRIPT, SIREN, SMDL, SMOKE, SSP, SSSP, ST, SuperCollider, Symbolic Composer, Tidal.

    Brief overview of Faust. Faust offers end users a high-level alternative to C for developing audio applications for a large variety of platforms. The role of the Faust compiler is to synthesize the most efficient implementations for the target language (C, C++, LLVM, JavaScript, etc.). Faust is used on stage for concerts and artistic productions, for education and research, and for open-source projects and commercial applications.

    What is Faust used for? Artistic applications: Sonik Cube (Trafik/Orlarey), Smartfaust (Gracia), etc. Open-source projects: Guitarix (Hermann Meyer). WebAudio applications: YC20 emulator; thanks to the HTML5/Web Audio API and asm.js, it is now possible to run synthesizers and audio effects from a simple web page. Sound spatialization: Ambitools (Pierre Lecomte, CNAM; Faust Award 2016), 3-D sound spatialization using Ambisonic techniques.
  • Implementing Stochastic Synthesis for SuperCollider and iPhone
    Implementing stochastic synthesis for SuperCollider and iPhone. Nick Collins, Department of Informatics, University of Sussex, UK. N [dot] Collins ]at[ sussex [dot] ac [dot] uk. http://www.cogs.susx.ac.uk/users/nc81/index.html. Proceedings of the Xenakis International Symposium, Southbank Centre, London, 1–3 April 2011. www.gold.ac.uk/ccmc/xenakis-international-symposium

    This article reflects on Xenakis' contribution to sound synthesis, and explores practical tools for music making touched by his ideas on stochastic waveform generation. Implementations of the GENDYN algorithm for the SuperCollider audio programming language and in an iPhone app will be discussed. Some technical specifics will be reported without overburdening the exposition, including original directions in computer music research inspired by his ideas. The mass exposure of the iGendyn iPhone app in particular has provided a chance to reach a wider audience.

    Stochastic construction in music can apply at many timescales, and Xenakis was intrigued by the possibility of compositional unification through simultaneous engagement at multiple levels. In General Dynamic Stochastic Synthesis, Xenakis found a potent way to extend stochastic music to the sample level in digital sound synthesis (Xenakis 1992, Serra 1993, Roads 1996, Hoffmann 2000, Harley 2004, Brown 2005, Luque 2006, Collins 2008, Luque 2009). In the central algorithm, samples are specified as a result of breakpoint interpolation synthesis (Roads 1996), where breakpoint positions in time and amplitude are subject to probabilistic perturbation. Random walks (up to second order) are followed with respect to various probability distributions for perturbation size. Figure 1 illustrates this for a single breakpoint; a full GENDYN implementation would allow a set of breakpoints, with each breakpoint in the set updated by individual perturbations each cycle.
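    SuperCollider's built-in Gendy UGens are the implementation this abstract refers to. A minimal usage sketch of dynamic stochastic synthesis (assuming a running server; the parameter choices are illustrative, not taken from the paper):

    // Breakpoints perturbed by random walks; distributions select the perturbation law.
    (
    {
        Pan2.ar(
            Gendy1.ar(
                ampdist: 1,     // amplitude-perturbation distribution (1 = Cauchy)
                durdist: 1,     // duration-perturbation distribution
                minfreq: 40,    // bounds on the resulting oscillation frequency
                maxfreq: 660,
                initCPs: 12,    // number of breakpoints in the waveform
                mul: 0.2
            )
        )
    }.play;
    )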
  • iSuperColliderKit: A Toolkit for iOS Using an Internal SuperCollider Server as a Sound Engine
    ICMC 2015, Sept. 25–Oct. 1, 2015, CEMI, University of North Texas. iSuperColliderKit: A Toolkit for iOS Using an Internal SuperCollider Server as a Sound Engine. Akinori Ito (Tokyo University of Technology), Kengo Watanabe (Watanabe-DENKI Inc.), Genki Kuroda (Tokyo University of Technology), and Ken'ichiro Ito (Tokyo University of Technology).

    ABSTRACT. iSuperColliderKit is a toolkit for iOS using an internal SuperCollider server as a sound engine. Through this research, we have adapted the existing SuperCollider source code for iOS to the latest environment. Further, we attempted to detach the UI from the sound engine so that native iOS visual objects, built in Objective-C or Swift, send messages to the internal SuperCollider server on any user-interaction event. As a result, iSuperColliderKit makes it possible to utilize the vast resources of dynamically changing real-time musical elements or algorithmic composition on SuperCollider.

    … The editor client sends OSC code fragments to its server. By adopting this model, SuperCollider has the feature that a programmer can dynamically change musical elements: phrases, rhythms, scales, and so on. This real-time interactivity is effectively utilized mainly in the live coding field. If iOS developers make their applications adopt the "sound server" model, using SuperCollider seems a reasonable choice. However, the porting situation is not so good. Sonic Pi [5] is one musical programming environment that has a SuperCollider server internally; however, it is only available for Raspberry Pi, Windows, and OS X.
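    The "sound server" model the authors build on can be shown in miniature: scsynth is driven entirely by OSC server commands (/s_new, /n_set, /n_free), so any client, including an iOS app, can do what sclang does below. A sketch assuming a booted default server s; this is generic SuperCollider, not iSuperColliderKit's API:

    // Driving the server with raw OSC messages rather than language-side objects.
    (
    var node = s.nextNodeID;
    s.sendMsg("/s_new", "default", node, 0, 1, "freq", 440);   // start a synth node
    fork {
        1.wait;
        s.sendMsg("/n_set", node, "freq", 330);                // change a parameter
        1.wait;
        s.sendMsg("/n_free", node);                            // free the node
    };
    )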
  • Pynchon's Sound of Music
    Pynchon's Sound of Music. Christian Hänggi. Diaphanes. Published with support by the Swiss National Science Foundation. 1st edition. ISBN 978-3-0358-0233-7. 10.4472/9783035802337. This work is licensed under a Creative Commons Attribution 3.0 Switzerland license. Layout and prepress: 2edit, Zurich. www.diaphanes.net

    Contents: Preface (7); Introduction (9); 1 The Job of Sorting It All Out (17): A Brief Biography in Music (17), An Inventory of Pynchon's Musical Techniques and Strategies (26), Pynchon on Record, Vol. 4 (51); 2 Lessons in Organology (53): The Harmonica (56), The Kazoo (79), The Saxophone (93); 3 The Sounds of Societies to Come (121): The Age of Representation (127), The Age of Repetition (149), The Age of Composition (165); 4 Analyzing the Pynchon Playlist (183); Conclusion (227); Appendix (231): Index of Musical Instruments (233), The Pynchon Playlist (239), Bibliography (289), Index of Musicians (309), Acknowledgments (315).

    Preface. When I first read Gravity's Rainbow, back in the days before I started to study literature more systematically, I noticed the novel's many references to saxophones. Having played the instrument for, then, almost two decades, I thought that a novelist would not, could not, feature specialty instruments such as the C-melody sax if he did not play the horn himself. Once the saxophone had caught my attention, I noticed all sorts of uncommon references that seemed to confirm my hunch that Thomas Pynchon himself played the instrument: McClintic Sphere's 4½ reed, the contrabass sax of Against the Day, Gravity's Rainbow's Charlie Parker passage.
  • The Viability of the Web Browser As a Computer Music Platform
    Lonce Wyse and Srikumar Subramanian. The Viability of the Web Browser as a Computer Music Platform. Communications and New Media Department, National University of Singapore, Blk AS6, #03-41, 11 Computing Drive, Singapore 117416. [email protected], [email protected]

    Abstract: The computer music community has historically pushed the boundaries of technologies for music-making, using and developing cutting-edge computing, communication, and interfaces in a wide variety of creative practices to meet exacting standards of quality. Several separate systems and protocols have been developed to serve this community, such as Max/MSP and Pd for synthesis and teaching, JackTrip for networked audio, MIDI/OSC for communication, as well as Max/MSP and TouchOSC for interface design, to name a few. With the still-nascent Web Audio API standard and related technologies, we are now, more than ever, seeing an increase in these capabilities and their integration in a single ubiquitous platform: the Web browser. In this article, we examine the suitability of the Web browser as a computer music platform in critical aspects of audio synthesis, timing, I/O, and communication. We focus on the new Web Audio API and situate it in the context of associated technologies to understand how well they together can be expected to meet the musical, computational, and development needs of the computer music community. We identify timing and extensibility as two key areas that still need work in order to meet those needs.

    To date, despite the work of a few intrepid musical explorers, the Web browser platform has not been widely considered as a viable platform for the development of computer music. Why would musicians care about working in the browser, a platform not specifically designed for computer music? Max/MSP is an example of a…
  • Estuary: Browser-Based Collaborative Projectional Live Coding of Musical Patterns
    Estuary: Browser-based Collaborative Projectional Live Coding of Musical Patterns. David Ogborn (McMaster University), Jamie Beverley (McMaster University), Luis Navarro del Angel (McMaster University), Eldad Tsabary (Concordia University), and Alex McLean (Deutsches Museum).

    ABSTRACT. This paper describes the initial design and development of Estuary, a browser-based collaborative projectional editing environment built on top of the popular TidalCycles language for the live coding of musical pattern. Key features of Estuary include a strict form of structure editing (making syntactical errors impossible), a click-only, border-free approach to interface design, explicit notations to modulate the liveness of different parts of the code, and a server-based network collaboration system that can be used for many simultaneous collaborative live coding performances, as well as to present different views of the same live coding activity. Estuary has been developed using Reflex-DOM, a Haskell-based framework for web development whose strictness promises robustness and security advantages.

    1 Goals. 1.1 Projectional Editing and Live Coding. The interface of the text editor is focused on alphabetic characters rather than on larger tokens that directly represent structures specific to a certain way of notating algorithms. Programmers using text editors, including live coding artists using text-based programming languages as their media, then have to devote cognitive resources to dealing with various kinds of mistakes and delays, including but not limited to accidentally typing the wrong character, not recalling the name of an entity, attempting to use a construct from one language in another, and misrecognition of "where one is" in the hierarchy of structures, among many other possible challenges.
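    Estuary projects TidalCycles patterns through a structure editor; readers coming from SuperCollider (the main article's language) can approximate the underlying idea, a named, running pattern that is redefined while it plays, with Pdef. This is a rough analog only; it reproduces neither Tidal's mini-notation nor Estuary's projectional editing:

    // A named pattern that keeps playing while its definition is swapped.
    (
    Pdef(\beat,
        Pbind(
            \degree, Pseq([0, 2, 4, 7], inf),   // repeating melodic cell
            \dur, 0.25
        )
    ).play;
    )

    // Re-evaluate with new contents to change the music without stopping it, e.g.:
    // Pdef(\beat, Pbind(\degree, Pshuf([0, 2, 4, 7], inf), \dur, 0.125));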
  • DVD Program Notes
    DVD Program Notes. Part One: Thor Magnusson, Alex McLean, Nick Collins, Curators.

    Curators' Note. [Editor's note: The curators attempted to write their Note in a collaborative, improvisatory fashion reminiscent of live coding, and have left the document open for further interaction from readers. See the following URL: https://docs.google.com/document/d/1ESzQyd9vdBuKgzdukFNhfAAnGEgLPgLlCeMw8zf1Uw/edit?hl=en_GB&authkey=CM7zg90L&pli=1.]

    Click Nilson is a Swedish avant garde codisician and code-jockey. He has explored the live coding of human performers since such early self-modifying algorithmic text pieces as An Instructional Game for One to Many Musicians (1975). He is now actively involved with Testing the Oxymoronic Potency of Language Articulation Programmes (TOPLAP), after being in the right bar (in Hamburg) at the right time (2 AM, 15 February 2004). He previously curated for Leonardo Music Journal and the Swedish Journal of Berlin Hot Drink Outlets.

    Alex McLean is a researcher in the area of programming languages for the arts, writing his PhD within the Intelligent Sound and Music Systems group at Goldsmiths College, and also working within the OAK group, University of Sheffield. He is one-third of the live-coding ambient-gabba-skiffle band Slub, who have been making…

    1. Overtone—Sam Aaron. In this video Sam gives a fast-paced introduction to a number of key live-programming techniques such as triggering instruments, scheduling future events, and synthesizer design. … more effectively and efficiently. He has successfully applied these ideas and techniques in both industry and academia. Currently, Sam leads Improcess, a collaborative… (Figure 1. Sam Aaron.)
  • Live Coding Practice
    Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME07), New York, NY, USA. Live Coding Practice. Click Nilson, University of Sussex, Department of Informatics, Falmer, Brighton, BN1 9QH. +44 (0)1273 877837. [email protected]

    ABSTRACT. Live coding is almost the antithesis of immediate physical musicianship, and yet it has attracted the attention of a number of computer-literate musicians, as well as the music-savvy programmers that might be more expected. It is within the context of live coding that I seek to explore the question of practising a contemporary digital musical instrument, which is often raised as an aside but more rarely carried out in research (though see [12]). At what stage of expertise are the members of the live coding movement, and what practice regimes might help them to find their true potential?

    I wish to discuss the practice of live coding, not only as an area of investigation and activity, but also in the sense of actually practising to improve ability. I shall describe the results of a month-long experiment where live coding was undertaken every day in an attempt to cast live coding as akin to more familiar forms of musical practice. Exercises will be discussed that may be found efficacious for improving live coding abilities. In relation to these, literature from developmental and educational psychology (both of music and computing), and theories of expertise and skill acquisition, were consulted. The possibility of expert performance [7] in live coding is discussed.

    Keywords: practice, practising, live coding.

    I should state at the outset that I will be focusing primarily on one particular form of live coding in this paper: that of programming a computer on a concert platform as a performance act, often as an unaccompanied soloist.
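    One concrete shape such daily practice can take, in SuperCollider for continuity with the rest of this page, is the étude of redefining a running node proxy, where the exercise is the edit itself. A sketch assuming a booted server; the exercise design is ours, not Nilson's:

    // Start a sound, then practise re-evaluating variations without stopping it.
    Ndef(\etude, { SinOsc.ar([220, 221], 0, 0.1) }).play;

    // ...later, swap in a variation while it plays:
    Ndef(\etude, { Pulse.ar([110, 110.5], LFNoise2.kr(0.3).range(0.2, 0.8), 0.1) });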