Distributed Musical Decision-making in an Ensemble of Musebots: Dramatic Changes and Endings

Arne Eigenfeldt (School for the Contemporary Arts, Simon Fraser University, Vancouver, Canada), Oliver Bown (Art and Design, University of New South Wales, Sydney, Australia), Andrew R. Brown (Queensland College of Art, Griffith University, Brisbane, Australia), Toby Gifford (Sensilab, Monash University, Melbourne, Australia)

Abstract

A musebot is defined as a piece of software that autonomously creates music and collaborates in real time with other musebots. The specification was released early in 2015, and several developers have contributed musebots to ensembles that have been presented in North America, Australia, and Europe. This paper describes a recent code jam between the authors that resulted in four musebots co-creating a musical structure that included negotiated dynamic changes and a negotiated ending. Outcomes reported here include a demonstration of the protocol's effectiveness across different programming environments, the establishment of a parsimonious set of parameters for effective musical interaction between the musebots, and strategies for coordination of episodic structure and conclusion.

Introduction

Musebots are pieces of software that autonomously create music collaboratively with other musebots. A defining goal of the musebot project (Bown et al. 2015) is to establish a creative platform for experimenting with musical autonomy, open to people developing cutting-edge music intelligence, or simply exploring the creative potential of generative processes in music.

A larger and longer-term goal for the project has been the sharing of ideas about musebot programming, as well as the sharing of code. There already exists substantial research into Musical Metacreation (MuMe) systems, with some impressive results. However, much of the creative work in this field is idiosyncratic, comprising ad hoc standalone systems, and as a result the outcomes can be opaque. In such a diverse environment, it is difficult for artistic researchers to share their ideas or their code, or to work out ways in which their systems might be incorporated into others' creative workflows. Musebots, responding to these challenges, are small modular units designed to be shared and studied by others. By making collaboration central, and by agreeing on communication protocols between computational agents, the musebot project encourages developers to make their systems' operation transparent, while still allowing each musebot to employ different algorithmic strategies.

This paper presents our initial research examining the affordances of a multi-agent decision-making process, developed in a collaboration between four coder-artists. The authors set out to explore strategies by which the musebot ensemble could collectively make decisions about the dramatic structure of the music, including the planning of more or less major changes, the biggest of which is when to end. This is in the context of an initial strategy of working with a distributed decision-making process in which each author's musebot agent makes music and also contributes to the decision-making process. Questions that needed to be addressed, then, included:

• how each musebot should relate its decision-making to its music generation strategy;
• how it should respond to the collective, updating both its future decisions and its musical plan;
• what decision messages should be used;
• whether decision-making agents should share common code;
• and whether decision-making agents should strictly conform to given agreements about the decision-making process.

We describe these explorations and their outcomes below, but first provide a brief overview of the musebot research context to date.

A Brief History of Musebot Ensembles

The premiere musebot ensemble occurred in July 2015 as an installation at the International Conference on Computational Creativity (ICCC) in Park City, and was followed in August 2015 by another at the International Symposium on Electronic Art (ISEA) in Vancouver. Since then, musebot ensembles have been presented at the Generative Art Festival in Venice in December 2015, the New Interfaces for Musical Expression (NIME) conference in Brisbane in July 2016, and the Sound and Music Computing (SMC) conference in Hamburg in August 2016. The first musebot ensembles are more fully described elsewhere (Eigenfeldt et al. 2015), along with issues and questions raised by these performances.

The Chill-out Sessions – so named due to an initial desire to provide musebots as an alternative listening space to the dance rhythms of Algoraves (Collins and McLean 2014) – have consisted of ensembles curated by the first author from a growing pool of publicly shared musebots. The musebot test suite (http://musicalmetacreation.org/musebot-test-suite/) currently has a repository of over sixty shared musebots – including source code – by nine different developers.

Ensembles

Curation of ensembles for installation has consisted of combining musebots based upon their musical function. For example, several ensembles consist of one or more of the following: a beat musebot, a bass musebot, a pad musebot, a melody musebot, and a harmony-generating musebot. A contrasting ensemble might involve combining several noise musebots, or a grouping of only beat musebots (see Table 1). The diversity of musebots is highlighted in their presentation: if listeners don't find the current ensemble particularly interesting, they can wait five minutes and be presented with something completely different.

Table 1. Musebot types available online
  Type                       Number available
  Bass Generators            5
  Beat Generators            13
  Harmony Generators         5
  Keys/Pads Generators       6
  Melody Generators          19
  Noise/Texture Generators   16

Musebot ensembles are launched by a master Conductor, whose function is also to provide a centralised timing source by broadcasting a running clock, as well as to serve as a central network hub through which all messages are broadcast (see below) via OSC (Wright 1997). A goal of the musebot project is to be platform agnostic, so musebots are standalone applications (Mac and PC) that do not need to run in their development environment. As a result, launching musebot ensembles on a single computer does take some time, as the individual audio applications are started up.

Ensembles are organized as text files, in the following format:

  tempo [BPM]
  duration [seconds]
  musebot_name message value…
  musebot_name message value…
  …

Messages currently used in the ensembles include:

• gain (0.0 - 1.0)
• delay (in seconds)
• kill (in seconds)

The gain value is sent to the musebot after all musebots are loaded; this allows for rough mixing of musebot levels. A delay value, in seconds, delays the launching of that musebot by that length of time. Launching begins after the delay, and may take several seconds, so it cannot be assumed that the musebot will begin playing at that specific time. A kill value, in seconds, causes the Conductor to send a kill message to the musebot that long after it has been launched, taking into account any delay. Thus, "delay 20 kill 20" will cause the musebot to launch with a delay of twenty seconds, then be killed twenty seconds later. Combining delay and kill messages allows the ensemble to vary dynamically during presentation time.

Broadcasting Messages

As mentioned, the Conductor also serves as a central hub through which messages are passed by the individual musebots. Broadcast messages are general messages sent from one musebot to the ensemble, and include the ID of the originating musebot, the message type, and the data (see Figure 1).

Fig. 1. Musebot configuration for messaging: musebot_A sends a notepool message with its ID, and this message is passed to the entire ensemble.

An example message, given the musebot configuration in Figure 1, might be:

  broadcast/notepool/musebot_A 60 62 65 67 68

The information shared between musebots has tended to be surface detail, such as a current pool of pitches, density of onsets, and volume. For example, the harmony-generating musebots have been used to provide a generated chord progression, sent as both pitch-class sets (independent of range) and notepool messages (which indicate specific pitches). Musebots have used this information to constrain their choice of pitches, resulting in congruent harmony. Other musebots (usually the beat generators) have messaged their current onset density; musebots that respond to this message can decide to follow, or oppose, this density, resulting in an audible interaction. To reiterate, the broadcast messages passed between musebots are not themselves defined in the specification,

The Musebot Conductor provides synchronisation (timing) and message-passing infrastructure, but beyond this musebot designers need to specify protocols of coordination. Our investigations focused on strategies for minimal effective musical coordination with minimal reliance on 'top-down' intervention. This is modeled on a conductorless small ensemble in which independent agents (musicians) agree on a few musical constraints (e.g., tempo, key, meter, genre) and then improvise around these.

an 'experimental prog rock' performance. Each developer worked in a different software environment (Pure Data, Max, Extempore and Java) and was free to implement any algorithmic processes. Following discussion about possible minimal coordinating parameters, we chose
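The ensemble text-file format and the delay/kill semantics described above can be sketched in a few lines of Python. This is an illustrative reading of the paper's description, not part of the musebot specification: the helper names (parse_ensemble, schedule) and the conversion of all values to floats are assumptions.

```python
def parse_ensemble(text):
    """Parse an ensemble file: a tempo line, a duration line, then one
    line per musebot of alternating message/value pairs.
    Hypothetical helper -- names and float conversion are assumptions."""
    tempo = duration = None
    bots = {}
    for line in text.strip().splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "tempo":
            tempo = float(tokens[1])
        elif tokens[0] == "duration":
            duration = float(tokens[1])
        else:
            name, rest = tokens[0], tokens[1:]
            # pair up message/value tokens, e.g. "gain 0.8 delay 20 kill 20"
            bots[name] = dict(zip(rest[::2], map(float, rest[1::2])))
    return tempo, duration, bots


def schedule(settings):
    """Return (launch_time, kill_time) in seconds for one musebot.
    A kill counts from the launch time, i.e. after any delay, so
    "delay 20 kill 20" ends the musebot 40 seconds into the set."""
    launch = settings.get("delay", 0.0)
    kill = settings.get("kill")
    return launch, None if kill is None else launch + kill
```

For the paper's "delay 20 kill 20" example, `schedule({"delay": 20.0, "kill": 20.0})` yields a launch at 20 s and a kill at 40 s.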
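The broadcast message layout (type and originating ID in an OSC-style address, followed by the data) can likewise be sketched. Only the address layout comes from the paper's broadcast/notepool/musebot_A example; the helper names and the list representation of the arguments are assumptions, and a real musebot would send the result over UDP via an OSC library.

```python
def make_broadcast(msg_type, sender_id, *values):
    """Build an OSC-style broadcast: an address naming the message
    type and originating musebot, plus the data values.
    Layout follows 'broadcast/notepool/musebot_A 60 62 65 67 68'."""
    return "broadcast/%s/%s" % (msg_type, sender_id), list(values)


def handle_broadcast(address, args, my_id):
    """Split a relayed broadcast into (type, sender, data).
    The Conductor relays every broadcast to the entire ensemble,
    so a musebot ignores messages it originated itself."""
    _, msg_type, sender = address.split("/")
    if sender == my_id:
        return None
    return msg_type, sender, args
```

A receiving musebot would route the returned (type, sender, data) triple to whatever generative process consumes that message type.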
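The follow-or-oppose response to a broadcast onset density could take many forms; the paper does not specify the arithmetic, so the simple linear update below (with densities assumed normalised to 0.0-1.0) is purely illustrative.

```python
def respond_to_density(own, reported, mode="follow", rate=0.5):
    """Nudge this musebot's onset density toward ('follow') or away
    from ('oppose') a density another musebot has broadcast.
    Densities are assumed normalised to 0.0-1.0; 'rate' sets how
    strongly the musebot reacts. Illustrative only."""
    target = reported if mode == "follow" else 1.0 - reported
    new = own + rate * (target - own)
    return min(1.0, max(0.0, new))  # clamp to the valid range
```

A following musebot at density 0.2 hearing a reported 0.8 with rate 0.5 would move to 0.5; an opposing one would instead head toward 0.2, producing the audible interaction described above.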