Jam Tomorrow: Collaborative Music Generation in Croquet Using OpenAL

Florian Thalmann ([email protected])
Markus Gaelli ([email protected])
Institute of Computer Science and Applied Mathematics, University of Bern, Neubrückstrasse 10, CH-3012 Bern, Switzerland

Fourth International Conference on Creating, Connecting and Collaborating through Computing (C5'06), pages 73-78.

Abstract

We propose a music generation software that allows large numbers of users to collaborate. In a virtual world, groups of users generate music simultaneously at different places in a room. This can be realized using OpenAL sound sources. The generated musical pieces have to be modifiable while they are playing, and all collaborating users should immediately see and hear the results of such modifications. We are testing these concepts within Croquet by implementing a software called Jam Tomorrow.

1. Introduction

    "It's very good jam," said the Queen.
    "Well, I don't want any today, at any rate."
    "You couldn't have it if you did want it," the Queen said. "The rule is jam tomorrow and jam yesterday, but never jam today."
    "It must come sometimes to 'jam today'," Alice objected.
    "No it can't," said the Queen. "It's jam every other day; today isn't any other day, you know."
    "I don't understand you," said Alice. "It's dreadfully confusing."

    Lewis Carroll, Through the Looking Glass, 1871.

Even though the idea of collaborative music generation on computer networks is almost thirty years old, there are only a few projects in this domain today [15]. Improvisation over the web has always been technically constrained, due to traffic limitations and synchronicity problems, but it is now possible using MIDI [3] and even compressed audio [14]. There are also projects about collaborative composing [15] [13] [1] or collaborative open source recording [12]. People using such compositional or improvisational software are restricted to working on one musical piece at a time. Our idea is to develop a software that lets users collaborate in generating multiple musical pieces simultaneously within a modern virtual world. People using this software will have both compositional and improvisational possibilities.

In this paper, we start by presenting our conceptual ideas. In section 3 we explain the technology we use for our explorative project, Jam Tomorrow. In section 4 we give a more detailed description of the musical concept and the architecture of Jam Tomorrow, and describe the major problems we encountered during implementation. Finally, we conclude by discussing future work.

2. Concept: Multiple Musicians in One Virtual Room

Our idea is to build an application that consists of several similar musical editors (graphical user interfaces) distributed within a virtual room. Each of these editors is linked to a musical piece and can be used as an interface for modifying it. The corresponding musical piece is heard from a sound source located at the editor's position.

The virtual room is visited by multiple users, who can move around freely between the editors. Depending on a user's distance from an editor, the volume and panning of this editor's music change. So a user can find the editor of the music he is interested in simply by following the sound sources.

Every user can use any editor. If somebody modifies a musical piece, this has to be immediately audible to all users near its editor. Of course, this constrains the musical possibilities a lot. For our ideas of such a flexible and collaborative music, see section 4.2.

With our concept, we can have a large number of tunes in one huge musical world. Users in this world can collaborate in groups or work alone, modify an existing tune or start a new one. It is certainly very interesting to walk through this world and hear the editors play music reflecting the common likes of the users.

3. Technology: Croquet Project and OpenAL

OpenAL is an API that "allows a programmer to position audio sources in a three-dimensional space around a listener, producing reasonable fading and panning for each source so that the environment seems three-dimensional" [6]. This is exactly what we need for the placement of our musical pieces within the room. Unfortunately, at the time of writing, OpenAL does not support MIDI, so we are restricted to sampled sound [2].
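This fading and panning comes for free once each editor's piece plays from a positioned OpenAL source, because OpenAL attenuates a source's gain with the listener's distance. As a rough illustration (not taken from the paper), the following Squeak-style snippet computes the gain of OpenAL 1.1's default inverse-distance-clamped model; the variable names mirror the OpenAL source properties AL_REFERENCE_DISTANCE, AL_ROLLOFF_FACTOR and AL_MAX_DISTANCE, and the values Croquet actually uses may differ.

    "A minimal sketch, not from the paper: the gain of one editor's sound source
     under OpenAL 1.1's default inverse-distance-clamped attenuation model."
    | referenceDistance rolloffFactor maxDistance gainAt |
    referenceDistance := 1.0.   "distance at which the source is heard at full volume"
    rolloffFactor := 1.0.       "how quickly the music fades with distance"
    maxDistance := 20.0.        "beyond this distance the gain no longer decreases"
    gainAt := [:distance | | d |
        d := (distance max: referenceDistance) min: maxDistance.
        referenceDistance / (referenceDistance + (rolloffFactor * (d - referenceDistance)))].
    Transcript show: 'gain at 5 units: ', (gainAt value: 5.0) printString; cr.   "prints 0.2"

A user standing five units away from an editor would thus hear it at about a fifth of its full volume, which is exactly the behaviour section 2 relies on: following the loudest source leads to the nearest editor.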
"Croquet is a combination of open source computer software and network architecture that supports deep collaboration and resource sharing among large numbers of users" [16], and it is implemented in Squeak. Although Croquet is still being developed, it seems to be the perfect framework for the integration of our application.

A Croquet environment is divided into spaces, in which users can move around using their avatars. Unlike in client-server applications, each participant in a Croquet network session has an independent image holding the entire application on his computer. At the moment, for establishing a connection between Croquet users, all images have to be in the same state and contain the same program code. To keep the images synchronized, every message sent to a Croquet object in a space is replicated in the other participants' images. To replicate a message manually, the #meta message has to be sent; for instance, 'self update' becomes 'self meta update'. For understanding our architecture, it is important to know these principles.

3.1. Croquet's OpenAL Interface and its Performance

An implementation of OpenAL is already integrated in Croquet, and the sampled sound file formats wav, aif and mp3 are supported. During the development of Jam Tomorrow, we explored the possibilities of Croquet's OpenAL interface. Before talking about the implementation of our project, we would like to discuss the performance limitations we face.

The basic OpenAL Croquet class we use is TSound. A TSound holds a SoundBuffer containing a sampled sound and manages the OpenAL buffers and sources needed for playing. It also understands common control messages like #play, #pause and #stop. One can assign a TFrame (a Croquet object with a position) to a TSound, so that when this TSound is played, its OpenAL source is located at the TFrame's position in the Croquet space.

TSound and similar classes like OpenALStreamingSound are currently the only way to use OpenAL in Croquet. If we want to play a sound, we have to copy its sound buffer to a TSound object's sound buffer and then start streaming the TSound (fill the OpenAL buffers). We only have to do this once if we plan to use the same sound repeatedly, but if we want to play different sounds, we have to copy the buffer and stream the sound for every new sound.
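To make this workflow concrete, here is a minimal hypothetical sketch. TSound, TFrame, SoundBuffer and the control messages #play, #pause and #stop are taken from the description above; the selectors used to create the sound and attach it to the frame (#fromFileNamed: and #frame:) and the file name are our assumptions, not necessarily the actual Croquet API.

    "Hypothetical sketch of the copy-buffer-and-stream workflow described above.
     Only #play, #pause and #stop are documented in the text; the constructor
     and #frame: are assumed selector names."
    | editorFrame pieceSound |
    editorFrame := TFrame new.                         "the positioned Croquet object at the editor (assumed setup)"
    pieceSound := TSound fromFileNamed: 'groove.wav'.  "assumed: copies the sample into the TSound's SoundBuffer"
    pieceSound frame: editorFrame.                     "assumed: the OpenAL source will follow this frame's position"
    pieceSound play.                                   "fills the OpenAL buffers and starts streaming"
    pieceSound pause.
    pieceSound stop.

Because the same TSound is reused here, the buffer only has to be copied and streamed once; playing a different sample would mean repeating the copy-and-stream step, which is where the performance problems discussed next begin.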
If we do this for short sounds that a user has saved or generates locally, performance is fairly good. But performance gets worse as soon as sounds have to be sent over the network. For example, with the current voice chatting implementation (TVoiceRecorder), when somebody records audio, it takes too much time for the sound to reach the other participants' computers. For musicians playing simultaneously, latency needs to be very close to zero.

Due to these problems, we are limited to using locally saved or generated sounds. Improvisation over the network using microphones is therefore not yet possible. Integrating realtime MIDI functions using TSounds would result in even less satisfying performance. We have experimented with Squeak's FMSynth, but the translation to TSounds is too slow for complex MIDI messages. However, it could be possible to build a simple MIDI application where every note is pregenerated as a TSound.

We hope that OpenAL will soon support MIDI, as this would increase our musical flexibility dramatically. There are software products that translate MIDI to audio [17], but not yet in realtime.

At the moment, Croquet cannot yet be used for virtual band playing in realtime. This is why we chose to build a first application that does not need to be strongly synchronous. In our concept it is important that every participant hears exactly the same music, but not necessarily at the same time. For example, it is possible that the user who presses the 'play' button hears the music a bit earlier than his fellow users.

4. Implementation: Jam Tomorrow

For experimentation with Croquet and OpenAL, we have built a simple prototype called Jam Tomorrow. It consists of a number of editors (GUI windows), each of them linked to a sound player. Users can add different types of tracks to a player, modify them and play them. We also integrated a function to record samples and add them to the player, but this function cannot be used yet due to its poor performance.

4.1. A Sample Scenario

As an example, we have a Croquet space with three editors and four user avatars in it (see Fig. 1). Jane and John collaborate in modifying the musical piece in editor 1. We will see later what exactly they are doing. They both hear the music they are producing in editor 1, but they also hear

4.2. Musical Concept

Even though we developed a rather simple musical concept for our explorative prototype, it is just a suggestion that may be replaced or expanded in later development. However, this concept is simple to implement and fulfills the following needs of a flexible musical form. Approaching one of the editors, a user should immediately hear what the music generated there is about.
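The class structure of Jam Tomorrow is not spelled out here, so the following is purely an illustrative Squeak sketch of the architecture described at the beginning of section 4: one editor GUI per player, each player holding the tracks added by the users and the TSound whose OpenAL source sits at the editor's position. Every name in it (JTPlayer and its instance variables) is our own invention and not taken from the Jam Tomorrow code.

    "Illustrative only; not the actual Jam Tomorrow class."
    Object subclass: #JTPlayer
        instanceVariableNames: 'tracks sound frame'   "the users' tracks, the player's TSound, the editor's TFrame"
        classVariableNames: ''
        poolDictionaries: ''
        category: 'JamTomorrow-Sketch'.

An editor window would then forward user actions to such a player and, following the replication principle from section 3, would presumably send modifications via #meta so that they reach every participant's image and become immediately audible near the editor.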