Twenty Years of Ircam Spat: Looking Back, Looking Forward
Recommended publications
-
Playing (With) Sound of the Animation of Digitized Sounds and Their Reenactment by Playful Scenarios in the Design of Interactive Audio Applications
Playing (with) Sound: Of the Animation of Digitized Sounds and their Reenactment by Playful Scenarios in the Design of Interactive Audio Applications. Dissertation by Norbert Schnell, submitted for the degree of Doktor der Philosophie, supervised by Prof. Gerhard Eckel and Prof. Rolf Inge Godøy. Institute of Electronic Music and Acoustics, University of Music and Performing Arts Graz, Austria, October 2013.

Abstract: Investigating sound and interaction, this dissertation has its foundations in over a decade of practice in the design of interactive audio applications and the development of software tools supporting this design practice. The applications concerned are sound installations, digital instruments, games, and simulations. However, the principal contribution of this dissertation lies in the conceptualization of fundamental aspects of sound and interaction design with recorded sound and music. The first part of the dissertation introduces two key concepts, animation and reenactment, that inform the design of interactive audio applications. While the concept of animation allows for laying out a comprehensive cultural background that draws on influences from philosophy, science, and technology, reenactment is investigated as a concept in interaction design based on recorded sound materials. Even if rarely applied in design or engineering, or in the creative work with sound, the notion of animation connects sound and interaction design to a larger context of artistic practices, audio and music technologies, engineering, and philosophy. Starting from Aristotle's idea of the soul, the investigation of animation follows the parallel development of philosophical concepts (i.e. soul, mind, spirit, agency) and technical concepts (i.e. mechanics, automation, cybernetics) over many centuries. -
ToscA: An OSC Communication Plugin for Object-Oriented Spatialization Authoring
ICMC 2015 – Sept. 25 - Oct. 1, 2015 – CEMI, University of North Texas. ToscA: An OSC Communication Plugin for Object-Oriented Spatialization Authoring. Thibaut Carpentier, UMR 9912 STMS IRCAM-CNRS-UPMC, 1, place Igor Stravinsky, 75004 Paris, [email protected]

ABSTRACT: The paper presents ToscA, a plugin that uses the OSC protocol to transmit automation parameters between a digital audio workstation and a remote application. A typical use case is the production of an object-oriented spatialized mix independently of the limitations of the host applications.

1. INTRODUCTION: There is today a growing interest in sound spatialization. The movie industry seems to be shifting towards “3D” sound systems; mobile devices have radically changed our listening habits and multimedia broadcast companies are evolving accordingly, showing substantial interest in binaural techniques and interactive content; massively multichannel equipment and transmission protocols are available, and they facilitate the installation of ambitious speaker setups in concert halls [1] or exhibition venues. … or consensus on the representation of spatialization metadata (in spite of several initiatives [11, 12]) complicates the interexchange between applications. This paper introduces ToscA, a plugin that communicates automation parameters between a digital audio workstation and a remote application. The main use case is the production of massively multichannel object-oriented spatialized mixes.

2. WORKFLOW: We propose a workflow where the spatialization rendering engine lies outside of the workstation (Figure 1). This architecture requires (bi-directional) communication between the DAW and the remote renderer, and both the audio streams and the automation data must be conveyed. After being processed by the remote renderer, the spatialized signals could either feed the reproduction setup (loudspeakers or headphones), be … -
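ToscA's transport layer is plain OSC. As a rough illustration of what travels between the DAW and the remote renderer, the sketch below assembles a single-float OSC message by hand following the OSC 1.0 encoding (null-padded address and type-tag strings, then a big-endian float32). The `/azim` address is an invented example, not ToscA's actual parameter namespace.

```python
import struct

def osc_string(s: str) -> bytes:
    """OSC-string: ASCII bytes, null-terminated, padded to a multiple of 4."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((-len(b)) % 4)

def osc_message(address: str, value: float) -> bytes:
    """A one-argument OSC message: address, type-tag string ',f', big-endian float32."""
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

packet = osc_message("/azim", 45.0)  # e.g. one azimuth automation value
assert len(packet) % 4 == 0          # OSC packets stay 4-byte aligned
```

A real deployment would hand `packet` to a UDP socket; the plugin's job, beyond this, is mapping host automation lanes to such addresses in both directions.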
Developing Sound Spatialization Tools for Musical Applications with Emphasis on Sweet Spot and Off-Center Perception
Sweet [re]production: Developing sound spatialization tools for musical applications with emphasis on sweet spot and off-center perception. Nils Peters, Music Technology Area, Department of Music Research, Schulich School of Music, McGill University, Montreal, QC, Canada, October 2010. A thesis submitted to McGill University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. © 2010 Nils Peters.

Abstract: This dissertation investigates spatial sound production and reproduction technology as a mediator between music creator and listener. Listening experiments investigate the perception of spatialized music as a function of the listening position in surround-sound loudspeaker setups. Over the last 50 years, many spatial sound rendering applications have been developed and proposed to artists. Unfortunately, the literature suggests that artists hardly exploit the possibilities offered by novel spatial sound technologies. Another typical drawback of many sound rendering techniques in the context of larger audiences is that most listeners perceive a degraded sound image: spatial sound reproduction is best at a particular listening position, also known as the sweet spot. Structured in three parts, this dissertation systematically investigates both problems with the objective of making spatial audio technology more applicable for artistic purposes and proposing technical solutions for spatial sound reproduction for larger audiences. The first part investigates the relationship between composers and spatial audio technology through a survey on the compositional use of spatialization, seeking to understand how composers use spatialization, what spatial aspects are essential, and what functionalities spatial audio systems should strive to include. The second part describes the development process of spatialization tools for musical applications and presents a technical concept. -
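The sweet-spot degradation Peters studies can be made concrete with a little geometry: an off-center listener sits at unequal distances from the loudspeakers, so the signals arrive at unequal times (and levels). A minimal sketch with illustrative coordinates (a stereo pair, not one of the thesis's surround setups):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 °C

def arrival_delays(listener, speakers):
    """Propagation delay in seconds from each speaker to the listener position."""
    return [math.dist(listener, spk) / SPEED_OF_SOUND for spk in speakers]

# A stereo pair 2 m apart, listener row 2 m in front of the speaker line.
speakers = [(-1.0, 2.0), (1.0, 2.0)]

sweet = arrival_delays((0.0, 0.0), speakers)
shifted = arrival_delays((0.5, 0.0), speakers)

mismatch_ms = (shifted[0] - shifted[1]) * 1000.0
# At the sweet spot the mismatch is zero; 0.5 m to the side the far speaker
# lags by roughly 1.3 ms, enough for the precedence effect to pull the
# phantom image toward the near speaker.
print(sweet[0] - sweet[1], mismatch_ms)
```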
Libro Publicacion Oficial A4
44º Congreso Español de Acústica; Encuentro Ibérico de Acústica; EAA European Symposium on Environmental Acoustics and Noise Mapping. THE MUSIC FROM THE XX-XXI CENTURY AND ITS IMPLICATIONS AND CONSEQUENCES IN THE ACOUSTICS IN THE AUDITORIO 400 OF JEAN NOUVEL AT THE NATIONAL MUSEUM REINA SOFIA CENTER OF MADRID. PACS: 43.75.Cd. Authors: del Cerro Escobar, Emiliano; Ortiz Benito, Silvia Mª. Institution: Universidad Alfonso X El Sabio. Address: Avda. de la Universidad, 1, 28691 Villanueva de la Cañada, Madrid, Spain. Phone: 918109146; 918109726. Fax: 918109101. E-mail: [email protected]; [email protected]

ABSTRACT: The Reina Sofia Museum in Madrid has an auditorium designed by French architect Jean Nouvel, where the Spanish Culture Ministry holds its season of concerts dedicated to the music of the XX-XXI century. This article will explain the special dedication of the museum and its musical center, analyzing electronic music composed and/or performed at this site, as well as the breakdown of specialized patches used to apply concepts of space, special effects, and digital signal processing to various sound sources. The article will also review the acoustic and architectural implications of this type of music inside a concert hall such as the Auditorio 400 in Madrid. -
Kaija Saariaho's Early Electronic Music at IRCAM, 1982–87
Encoding Post-Spectral Sound: Kaija Saariaho’s Early Electronic Music at IRCAM, 1982–87. Landon Morrison. NOTE: The examples for the (text-only) PDF version of this item are available online at: https://www.mtosmt.org/issues/mto.21.27.3/mto.21.27.3.morrison.php

KEYWORDS: Kaija Saariaho, IRCAM, post-spectralism, synthesis, archival research, media and software studies

ABSTRACT: This article examines computer-based music (ca. 1982–87) created by Finnish composer Kaija Saariaho at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris. A detailed account of archival materials for an early étude in voice synthesis, Vers le blanc (1982), demonstrates the music-theoretical import of software to Saariaho’s development of a robust compositional method that resonated with the emergent aesthetics of a post-spectral milieu. Subsequent analyses of two additional works from this period—Jardin secret II (1984–86) for harpsichord and tape, and IO (1987) for large ensemble and electronics—serve to illustrate Saariaho’s extension of this method into instrumental settings. Specific techniques highlighted include the use of interpolation systems to create continuous processes of transformation, the organization of individual musical parameters into multidimensional formal networks, and the exploration of harmonic structures based on the analysis of timbral phenomena. Relating these techniques to the affordances of contemporaneous IRCAM technologies, including CHANT, FORMES, and Saariaho’s own customized program, “transkaija,” this article adopts a transductive approach to archival research that is responsive to the diverse media artifacts associated with computer-based composition. Received May 2020. Volume 27, Number 3, September 2021. Copyright © 2021 Society for Music Theory. -
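Of the techniques the abstract lists, the interpolation systems are the easiest to miniaturize: every parameter of a start state is stepped linearly toward a goal state, producing the kind of continuous transformation process the analyses describe. The sketch below is an illustration of that general idea, not a reconstruction of "transkaija"; the frequency values are invented and taken from none of the works.

```python
def interpolate_states(start, end, steps):
    """Step every parameter linearly from `start` to `end`, endpoints included."""
    out = []
    for i in range(steps):
        t = i / (steps - 1)
        out.append([a + t * (b - a) for a, b in zip(start, end)])
    return out

# An inharmonic partial set relaxing into a pure harmonic series on A3
# (frequencies in Hz; illustrative values only).
inharmonic = [220.0, 451.0, 699.0, 963.0]
harmonic = [220.0, 440.0, 660.0, 880.0]

for state in interpolate_states(inharmonic, harmonic, 5):
    print(state)
```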
Akrata, for 16 Winds by Iannis Xenakis: Analyses1
AKRATA, FOR 16 WINDS BY IANNIS XENAKIS: ANALYSES. Stéphan Schaub: Ircam / University of Paris IV Sorbonne. In Makis Solomos, Anastasia Georgaki, Giorgos Zervos (ed.), Definitive Proceedings of the “International Symposium Iannis Xenakis” (Athens, May 2005), www.iannis-xenakis.org, October 2006. Paper first published in A. Georgaki, M. Solomos (ed.), International Symposium Iannis Xenakis. Conference Proceedings, Athens, May 2005, p. 138-149. This paper was selected for the Definitive Proceedings by the scientific committee of the symposium: Anne-Sylvie Barthel-Calvet (France), Agostino Di Scipio (Italy), Anastasia Georgaki (Greece), Benoît Gibson (Portugal), James Harley (Canada), Peter Hoffmann (Germany), Mihu Iliescu (France), Sharon Kanach (France), Makis Solomos (France), Ronald Squibbs (USA), Georgos Zervos (Greece). “… a sketch in which I make use of the theory of groups”

ABSTRACT: In the following paper the author presents a reconstruction of the formal system developed by Iannis Xenakis for the composition of Akrata, for 16 winds (1964-1965). The reconstruction is based on a study of both the score and the composer’s preparatory sketches for his composition. Akrata chronologically follows Herma for piano (1961) and precedes Nomos Alpha for cello (1965-1966). All three works fall within the “symbolic music” category. As such, they were composed under the constraint of having a temporal function unfold “algorithmically” the work’s “outside time” architecture into the “in time” domain. The system underlying Akrata undergoes several mutations over the course of the music’s unfolding. In its last manifestation, it bears strong resemblances to the better-known system later applied to the composition of Nomos Alpha. -
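The "theory of groups" of the epigraph can be glimpsed in miniature: a permutation applied repeatedly to an ordered set of sonic entities traverses a cycle and eventually returns to the identity, which is one way an "outside-time" structure (the group) can be unfolded "in time" (the orbit). This is an illustrative toy with made-up labels, not a reconstruction of the system Schaub analyzes in Akrata.

```python
def compose(p, q):
    """Composition of permutations given as index tuples: (p∘q)[i] = p[q[i]]."""
    return tuple(p[i] for i in q)

def orbit(p, start):
    """Apply p to `start` repeatedly until the cycle closes; return all states."""
    states, current = [start], tuple(start[i] for i in p)
    while current != start:
        states.append(current)
        current = tuple(current[i] for i in p)
    return states

cycle4 = (1, 2, 3, 0)                 # a 4-cycle acting on four sound complexes
states = orbit(cycle4, ("A", "B", "C", "D"))
print(len(states))                    # the orbit closes after 4 applications
identity = (0, 1, 2, 3)
assert compose(cycle4, compose(cycle4, compose(cycle4, cycle4))) == identity
```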
Implementing Stochastic Synthesis for Supercollider and Iphone
Implementing stochastic synthesis for SuperCollider and iPhone Nick Collins Department of Informatics, University of Sussex, UK N [dot] Collins ]at[ sussex [dot] ac [dot] uk - http://www.cogs.susx.ac.uk/users/nc81/index.html Proceedings of the Xenakis International Symposium Southbank Centre, London, 1-3 April 2011 - www.gold.ac.uk/ccmc/xenakis-international-symposium This article reflects on Xenakis' contribution to sound synthesis, and explores practical tools for music making touched by his ideas on stochastic waveform generation. Implementations of the GENDYN algorithm for the SuperCollider audio programming language and in an iPhone app will be discussed. Some technical specifics will be reported without overburdening the exposition, including original directions in computer music research inspired by his ideas. The mass exposure of the iGendyn iPhone app in particular has provided a chance to reach a wider audience. Stochastic construction in music can apply at many timescales, and Xenakis was intrigued by the possibility of compositional unification through simultaneous engagement at multiple levels. In General Dynamic Stochastic Synthesis Xenakis found a potent way to extend stochastic music to the sample level in digital sound synthesis (Xenakis 1992, Serra 1993, Roads 1996, Hoffmann 2000, Harley 2004, Brown 2005, Luque 2006, Collins 2008, Luque 2009). In the central algorithm, samples are specified as a result of breakpoint interpolation synthesis (Roads 1996), where breakpoint positions in time and amplitude are subject to probabilistic perturbation. Random walks (up to second order) are followed with respect to various probability distributions for perturbation size. Figure 1 illustrates this for a single breakpoint; a full GENDYN implementation would allow a set of breakpoints, with each breakpoint in the set updated by individual perturbations each cycle. -
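The central mechanism described above, breakpoint amplitudes driven by bounded random walks with reflecting barriers, fits in a few lines. This is a simplified sketch of the idea (amplitudes only, uniform perturbation, fixed breakpoint count), not a faithful GENDYN or iGendyn implementation:

```python
import random

def gendyn_walk(n_points=12, n_cycles=100, max_step=0.1, seed=1):
    """Evolve one waveform period: each cycle, every breakpoint amplitude
    takes a bounded uniform random step and is reflected back into [-1, 1]."""
    rng = random.Random(seed)
    amps = [0.0] * n_points
    history = []
    for _ in range(n_cycles):
        for i in range(n_points):
            amps[i] += rng.uniform(-max_step, max_step)
            if amps[i] > 1.0:        # elastic barrier at +1
                amps[i] = 2.0 - amps[i]
            elif amps[i] < -1.0:     # elastic barrier at -1
                amps[i] = -2.0 - amps[i]
        history.append(list(amps))
    return history

history = gendyn_walk()
assert all(-1.0 <= a <= 1.0 for cycle in history for a in cycle)
```

A full implementation would also random-walk the breakpoint durations and interpolate between breakpoints to produce output samples, as in the SuperCollider Gendy UGens the paper discusses.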
The Viability of the Web Browser as a Computer Music Platform
Lonce Wyse and Srikumar Subramanian, Communications and New Media Department, National University of Singapore, Blk AS6, #03-41, 11 Computing Drive, Singapore 117416, [email protected], [email protected]. The Viability of the Web Browser as a Computer Music Platform.

Abstract: The computer music community has historically pushed the boundaries of technologies for music-making, using and developing cutting-edge computing, communication, and interfaces in a wide variety of creative practices to meet exacting standards of quality. Several separate systems and protocols have been developed to serve this community, such as Max/MSP and Pd for synthesis and teaching, JackTrip for networked audio, MIDI/OSC for communication, as well as Max/MSP and TouchOSC for interface design, to name a few. With the still-nascent Web Audio API standard and related technologies, we are now, more than ever, seeing an increase in these capabilities and their integration in a single ubiquitous platform: the Web browser. In this article, we examine the suitability of the Web browser as a computer music platform in critical aspects of audio synthesis, timing, I/O, and communication. We focus on the new Web Audio API and situate it in the context of associated technologies to understand how well they together can be expected to meet the musical, computational, and development needs of the computer music community. We identify timing and extensibility as two key areas that still need work in order to meet those needs. To date, despite the work of a few intrepid musical explorers, the Web browser platform has not been widely considered as a viable platform for the development of computer music. Why would musicians care about working in the browser, a platform not specifically designed for computer music? Max/MSP is an example of a … -
UCLA Electronic Theses and Dissertations
UCLA. UCLA Electronic Theses and Dissertations. Title: Performing Percussion in an Electronic World: An Exploration of Electroacoustic Music with a Focus on Stockhausen’s Mikrophonie I and Saariaho’s Six Japanese Gardens. Permalink: https://escholarship.org/uc/item/9b10838z. Author: Keelaghan, Nikolaus Adrian. Publication Date: 2016. Peer reviewed|Thesis/dissertation. eScholarship.org, Powered by the California Digital Library, University of California.

UNIVERSITY OF CALIFORNIA, Los Angeles. Performing Percussion in an Electronic World: An Exploration of Electroacoustic Music with a Focus on Stockhausen’s Mikrophonie I and Saariaho’s Six Japanese Gardens. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Musical Arts by Nikolaus Adrian Keelaghan, 2016. © Copyright by Nikolaus Adrian Keelaghan, 2016.

ABSTRACT OF THE DISSERTATION: Performing Percussion in an Electronic World: An Exploration of Electroacoustic Music with a Focus on Stockhausen’s Mikrophonie I and Saariaho’s Six Japanese Gardens, by Nikolaus Adrian Keelaghan, Doctor of Musical Arts, University of California, Los Angeles, 2016, Professor Robert Winter, Chair. The origins of electroacoustic music are rooted in a long-standing tradition of non-human music making, dating back centuries to the inventions of automaton creators. The technological boom during and following the Second World War provided composers with a new wave of electronic devices that put a wealth of new, truly twentieth-century sounds at their disposal. Percussionists, by virtue of their longstanding relationship to new sounds and their ability to decipher complex parts for a bewildering variety of instruments, have been a favored recipient of what has become known as electroacoustic music. -
62 Years and Counting: MUSIC N and the Modular Revolution
62 Years and Counting: MUSIC N and the Modular Revolution By Brian Lindgren MUSC 7660X - History of Electronic and Computer Music Fall 2019 24 December 2019 © Copyright 2020 Brian Lindgren Abstract. MUSIC N by Max Mathews had two profound impacts in the world of music synthesis. The first was the implementation of modularity to ensure a flexibility as a tool for the user; with the introduction of the unit generator, the instrument and the compiler, composers had the building blocks to create an unlimited range of sounds. The second was the impact of this implementation in the modular analog synthesizers developed a few years later. While Jean-Claude Risset, a well known Mathews associate, asserts this, Mathews actually denies it. They both are correct in their perspectives. Introduction Over 76 years have passed since the invention of the first electronic general purpose computer,1 the ENIAC. Today, we carry computers in our pockets that can perform millions of times more calculations per second.2 With the amazing rate of change in computer technology, it's hard to imagine that any development of yesteryear could maintain a semblance of relevance today. However, in the world of music synthesis, the foundations that were laid six decades ago not only spawned a breadth of multifaceted innovation but continue to function as the bedrock of important digital applications used around the world today. Not only did a new modular approach implemented by its creator, Max Mathews, ensure that the MUSIC N lineage would continue to be useful in today’s world (in one of its descendents, Csound) but this approach also likely inspired the analog synthesizer engineers of the day, impacting their designs. -
Bringing Csound to a Modern Production Environment
Mark Jordan-Kamholz and Dr. Boulanger. Bringing Csound to a Modern Production Environment.

BRINGING CSOUND TO A MODERN PRODUCTION ENVIRONMENT WITH CSOUND FOR LIVE. Mark Jordan-Kamholz, mjordankamholz AT berklee.edu; Dr. Richard Boulanger, rboulanger AT berklee.edu. Csound is a powerful and versatile synthesis and signal processing system, and yet it has been far from convenient to use the program in tandem with a modern Digital Audio Workstation (DAW) setup. While it is possible to route MIDI to Csound, and audio from Csound, there has never been a solution that fully integrated Csound into a DAW. Csound for Live attempts to solve this problem by using csound~, Max and Ableton Live. Over the course of this paper, we will discuss how users can use Csound for Live to create Max for Live Devices for their Csound instruments that allow for quick editing via a GUI; tempo-synced and transport-synced operations and automation; the ability to control and receive information from Live via Ableton's API; and how to save and recall presets. In this paper, the reader will learn best practices for designing devices that leverage both Max and Live, and in the presentation, they will see and hear demonstrations of devices used in a full song, as well as how to integrate the powerful features of Max and Live into a production workflow.

1. Using csound~ in Max and setting up Csound for Live: Rendering a unified Csound file with csound~ is the same as playing it with Csound. Sending a start message to csound~ is equivalent to running Csound from the terminal, or pressing play in CsoundQT. -
Sprechen Über Neue Musik
Sprechen über Neue Musik Eine Analyse der Sekundärliteratur und Komponistenkommentare zu Pierre Boulez’ Le Marteau sans maître (1954), Karlheinz Stockhausens Gesang der Jünglinge (1956) und György Ligetis Atmosphères (1961) Dissertation zur Erlangung des Doktorgrades der Philosophie (Dr. phil.) vorgelegt der Philosophischen Fakultät II der Martin-Luther-Universität Halle-Wittenberg, Institut für Musik, Abteilung Musikwissenschaft von Julia Heimerdinger ∗∗∗ Datum der Verteidigung 4. Juli 2013 Gutachter: Prof. Dr. Wolfgang Auhagen Prof. Dr. Wolfgang Hirschmann I Inhalt 1. Einleitung 1 2. Untersuchungsgegenstand und Methode 10 2.1. Textkorpora 10 2.2. Methode 12 2.2.1. Problemstellung und das Programm MAXQDA 12 2.2.2. Die Variablentabelle und die Liste der Codes 15 2.2.3. Auswertung: Analysefunktionen und Visual Tools 32 3. Pierre Boulez: Le Marteau sans maître (1954) 35 3.1. „Das Glück einer irrationalen Dimension“. Pierre Boulez’ Werkkommentare 35 3.2. Die Rätsel des Marteau sans maître 47 3.2.1. Die auffällige Sprache zu Le Marteau sans maître 58 3.2.2. Wahrnehmung und Interpretation 68 4. Karlheinz Stockhausen: Gesang der Jünglinge (elektronische Musik) (1955-1956) 85 4.1. Kontinuum. Stockhausens Werkkommentare 85 4.2. Kontinuum? Gesang der Jünglinge 95 4.2.1. Die auffällige Sprache zum Gesang der Jünglinge 101 4.2.2. Wahrnehmung und Interpretation 109 5. György Ligeti: Atmosphères (1961) 123 5.1. Von der musikalischen Vorstellung zum musikalischen Schein. Ligetis Werkkommentare 123 5.1.2. Ligetis auffällige Sprache 129 5.1.3. Wahrnehmung und Interpretation 134 5.2. Die große Vorstellung. Atmosphères 143 5.2.2. Die auffällige Sprache zu Atmosphères 155 5.2.3.