A Live Coding Framework for Gestural Performance and Electronic Music

Proceedings of the 17th Linux Audio Conference (LAC-19), CCRMA, Stanford University, USA, March 23–26, 2019

SC-HACKS: A LIVE CODING FRAMEWORK FOR GESTURAL PERFORMANCE AND ELECTRONIC MUSIC

Iannis Zannos
Department of Audiovisual Arts, Ionian University, Corfu, Greece
[email protected]

ABSTRACT

This paper presents a library for SuperCollider that enables live coding adapted to two domains of performance: telematic dance with wireless sensors and electroacoustic music performance. The library solves some fundamental issues of usability in SuperCollider which have also been addressed by the established live-coding framework JITLib, such as modifying synth and pattern processes while they are running, linking control and audio i/o between synths, and generating GUIs. It offers new implementations which are more compact and easier to use, while emphasizing transparency and scalability of code. It introduces binary operators which, coupled with polymorphism, facilitate live coding. Several foundation classes are introduced whose purpose is to support programming patterns or commonly used practices such as the observer pattern, function callbacks, and system-wide object messaging between the language, server processes and the GUI.

The use of the library is demonstrated in two contexts: a telematic dance project with custom low-cost movement sensors, and digital implementations of early electroacoustic music scores by J. Harvey and K. Stockhausen. The latter involves coding of a complex score and generation of a GUI representation with time tracking and live control.

1. BACKGROUND

1.1. Bridging Live Coding and Gestural Interaction

The performance practice known as live coding emerged from the ability of software to modify state and behavior through the interactive evaluation of code fragments and to synthesize audio at runtime. As a result, several programming environments and technologies supporting live coding have been developed in the past 20 years, such as SuperCollider [1], Impromptu [2], ChucK [3], Extempore [4], Gibber [5], and others. It has been noted, however, that such environments and practices suffer from a lack of immediacy and of the visible gestural elements that are traditionally associated with live performance [6]. Recent research projects attempt to re-introduce gestural aspects or to otherwise support social and interactive elements in musical performance using technologies associated with live coding ([7], [8], [9], [10]). Amongst the various types of gestural interaction, dance is arguably the one least related to textual coding. Few recent studies exist which prepare the field for bridging dance with coding ([11]). The challenges in this domain can be summarized as the problem of bridging the symbolic domains of dance and music notation and the subsymbolic, numerical domain of control data streams input from sensors. This also implies translating between continuous streams of data and individual timed events, possibly tagged with symbolic values. This is a technologically highly demanding task which is the subject of research in various gestural interface applications. The work presented in this paper represents an indirect, bottom-up approach to the topic, based on DIY and open-source components and emphasizing transparency and self-sufficiency at each step. It does not address the task of gesture recognition; rather, it aims at supporting live coding in conjunction with dancers and instrumental performers. Ongoing experiments together with such performers are helping to identify low-level tasks and features which are essential for practical work. This type of work is purely empirical, and tries to identify usability criteria from practice rather than to develop features inferred from known interaction paradigms in other related domains. At this stage of the project it is still too early to formulate conclusions from these experiments. Instead, this paper concentrates on the fundamentals of the implementation framework on which this work is based. These fundamentals are readily identifiable, and their potential impact on further development work as well as on experiments is visible. This paper therefore describes the basic principles and design strategy of the sc-hacks library, and discusses its perceived impact on performances. Finally, it outlines some future perspectives for work involving data analysis and machine learning.

1.2. Live Coding Frameworks in SuperCollider

1.2.1. Types of Live Coding Frameworks

Live coding libraries can be divided into two main categories depending on the level of generality of their implementation and their application scope. First, there are libraries which extend SuperCollider usage in order to simplify the coding of behaviors or features which are very common in performance but are otherwise inconvenient to code in SuperCollider. To this category belongs the JITLib framework. JITLib (Just-In-Time programming Library) has been around since at least August 2006, with an early version since ca. 2000 (see https://swiki.hfbk-hamburg.de/MusicTechnology/566, accessed 20 December 2018), and is very widely used in the community, being the de-facto go-to tool for live coding in SuperCollider. The second category consists of libraries that concentrate on specialized usage scenarios and attempt to create domain-specific mini-languages for those scenarios on top of SuperCollider. Such are: IXI-Lang (a sequencer / sample-playing mini-language by Thor Magnusson [12]), SuperSampler (a polyphonic concatenative sampler with automatic arrangement of sounds on a 2-dimensional plane, by Shu-Cheng Allen Wu [13]), and Colliding (an "environment for synthesis-oriented live coding" that simplifies the coding of Unit Generator graphs, by Gerard Roma [14]). Finally, TidalCycles by Alex McLean [15] should be mentioned, which develops its own live coding language based on Haskell, focuses on the coding of complex layers of synchronized beat cycles with sample playback and synthesis, and uses the SuperCollider synthesis server as its audio engine.

1.2.2. sc-hacks Objectives and Approach

sc-hacks belongs to the first category of frameworks, and its initial motivation was partly to implement some of the solutions of JITLib in more robust, simple, and general ways. In parallel, inspiration from ChucK's => operator led to the development of a minimal extension of the language based on four binary operators (+>, <+, *>, *<), which, coupled with polymorphism, permit simplified and compact coding of several common sound-structure coding patterns. Furthermore, the implementation of some basic programming patterns opened new possibilities for the creation of GUI elements which update their state. This led to a proliferation of GUI building and management facilities and resulted in several interfaces for live coding tasks, such as a code browser based on the concept of code snippets, a browser for editing and controlling the behavior of named players holding synth or pattern items, and shortcuts for building GUI widgets that display values of parameters controlled by OSC, MIDI or algorithmic processes. Finally, ongoing experiments with dancers and instrumentalists are giving rise to new interface and notation ideas. The current focus is on building tools for recording, visualising and playing back data received from wireless sensors via OSC, in order to experiment with the data in performance and to apply machine-learning algorithms to them.
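The operator mechanism described in 1.2.2 can be illustrated with a short sketch: a binary operator added through class extensions and dispatched polymorphically on the type of its receiver. The operator semantics below are assumptions chosen for illustration, not the actual sc-hacks definitions of +>, <+, *> and *<.

    // Hypothetical sketch (not the sc-hacks implementation). Class extensions
    // like these live in a .sc file in the SuperCollider class library.

    + Symbol {
        // \freq <+ 440 : store a value under this key in the current environment
        <+ { | value |
            currentEnvironment.put(this, value);
            ^value
        }
    }

    + Function {
        // { ... } <+ \drone : play the function and register the resulting synth
        // under a name; the same operator does different work depending on the
        // class of its receiver (polymorphism)
        <+ { | key |
            var synth = this.play;
            Library.put(\players, key, synth);
            ^synth
        }
    }

    // Usage (illustrative only):
    // \freq <+ 440;
    // { SinOsc.ar(440, 0, 0.1) ! 2 } <+ \drone;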
2. APPROACH

2.1. Players and Player Environments

JITLib addresses four fundamental problems in coding for concurrent sound processes: (a) use of named placeholders for sound generating processes, (b) managing the control parameters of processes in separate namespaces, (c) modifying event-generating algorithmic processes on the fly, […]

[…] EnvironmentRedirect, a class which holds a Dictionary and ensures that a predefined custom function is executed each time a value is stored under one of the Dictionary's keys. Sc-hacks defines a subclass of EnvironmentRedirect similar to ProxySpace, but with a custom function that provides extra flexibility in setting values, which is useful for accessing control parameters during performance. This enables keeping track of which parameter refers to which process, storing parameter values between subsequent starts of a process belonging to a player, and updating GUI elements to display values as they change. Additionally, sc-hacks makes the environment of the player current after certain operations, in order to make the current context the one normally expected by the performer. This, however, is not always a secure solution. For this reason, the target environment can be provided as an adverb argument in binary operators involving players, which ensures that code works as expected even when the order in which code is executed changes irregularly.
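The store-and-notify behaviour described above can be sketched as a dictionary-like holder that runs a user-supplied function on every write, for example to update a GUI element or a running synth. This is a minimal illustration of the pattern only; the class name and details are assumptions, not the actual sc-hacks subclass of EnvironmentRedirect.

    // Minimal sketch of the pattern (class definitions go in a .sc file).
    NotifyingEnvir {
        var <dict, <>onPut;

        *new { | onPut | ^super.new.init(onPut) }

        init { | argOnPut |
            dict = IdentityDictionary.new;
            onPut = argOnPut;
        }

        put { | key, value |
            dict.put(key, value);
            onPut.value(key, value);   // notify observers (GUI, running processes)
        }

        at { | key | ^dict[key] }
    }

    // Usage (illustrative):
    // e = NotifyingEnvir({ | key, val | ("% set to %".format(key, val)).postln });
    // e.put(\freq, 440);   // stores the value and triggers the callback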
2.1.3. Modifying event generating processes on the fly

Event-generating algorithmic processes are implemented in SuperCollider through the class Pbind. Pbind takes an array of keys and associated streams as argument and creates a Routine that calculates the parameters and event types for each set of keys and values obtained from their associated streams, and schedules them according to the duration obtained from the stream stored under the key dur. The implementation of Pbind allows no access to the values of each event, i.e. it is not possible to read or to modify the value of a key at any moment. Furthermore, it is not possible to modify the structure of the dictionary of keys and streams while its event-generating process is playing. This means that Pbind processes cannot be modified interactively while they are playing. To circumvent this limitation, a number of techniques have been devised which require adding code for any key that one wishes to read or to modify. JITLib uses such techniques and also provides a way to substitute a Pbind process while it is running with a new one, thereby indirectly allowing modification of that process.
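For reference, the indirect substitution attributed to JITLib above can be shown with standard SuperCollider and JITLib classes (plain Pbind and Pdef usage, not sc-hacks):

    // A Pbind pairs keys with streams and schedules events by \dur; once it is
    // playing, its internal key/stream structure cannot be read or changed.
    p = Pbind(\degree, Pseq([0, 2, 4, 7], inf), \dur, 0.25).play;
    p.stop;

    // JITLib's Pdef wraps the pattern in a named proxy, so the definition can be
    // replaced while the stream keeps playing -- indirect modification:
    Pdef(\a, Pbind(\degree, Pseq([0, 2, 4, 7], inf), \dur, 0.25)).play;
    Pdef(\a, Pbind(\degree, Pseq([7, 4, 2, 0], inf), \dur, 0.125)); // swapped on the fly
    Pdef(\a).stop;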