Science by Ear: An Interdisciplinary Approach to Sonifying Scientific Data
Recommended publications
-
Sonification as a Means to Generative Music (Ian Baxter)
Sonification as a means to generative music. By Ian Baxter. A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy, The University of Sheffield, Faculty of Arts & Humanities, Department of Music, January 2020.
Abstract: This thesis examines the use of sonification (the transformation of non-musical data into sound) as a means of creating generative music (algorithmic music which evolves in real time and is of potentially infinite length). It consists of a portfolio of ten works in which the possibilities of sonification as a strategy for creating generative works are examined. As well as exploring the viability of sonification as a compositional strategy toward infinite work, each work in the portfolio aims to explore how artistic coherency between data and resulting sound is achieved, rejecting the notion that sonification for artistic means leads to the arbitrary linking of data and sound. In the accompanying written commentary the definitions of sonification and generative music are considered, as both are somewhat contested terms requiring operationalisation to correctly contextualise my own work. Having arrived at these definitions, each work in the portfolio is documented: the genesis of the work, the technical composition and operation of the piece (a series of tutorial videos showing each work in operation supplements this section), and finally its position in the portfolio as a whole and its relation to the research question. The body of work is considered as a whole in relation to the notion of artistic coherency, separated into two main themes: the relationship between the underlying nature of the data and the compositional scheme, and the coherency between the data and the soundworld generated by each piece.
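The abstract does not spell out how data is turned into sound, so the following is only a minimal sketch of the kind of parameter mapping that sonification usually implies: an arbitrary data series is rescaled onto a pitch range and each value becomes a note. The data values, pitch range, and mapping function are invented for illustration and are not Baxter's actual method.

```python
# Minimal parameter-mapping sonification sketch (illustrative only):
# rescale an arbitrary data series onto a MIDI pitch range, then convert
# each MIDI note to the frequency a synthesizer would play.

def map_to_midi(values, low_note=48, high_note=84):
    """Linearly rescale data values onto the [low_note, high_note] MIDI range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                       # avoid division by zero for flat data
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

def midi_to_hz(note):
    """Equal-tempered frequency of a MIDI note (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

if __name__ == "__main__":
    data = [0.2, 0.5, 0.9, 0.4, 0.7, 0.1]         # hypothetical data stream
    for value, note in zip(data, map_to_midi(data)):
        print(f"{value:4.1f} -> MIDI {note:3d} ({midi_to_hz(note):7.2f} Hz)")
```

A generative variant of the same idea would simply keep consuming new data indefinitely, which is what gives such pieces their potentially infinite length.
-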
Proceedings of the 15th Brazilian Symposium on Computer Music (SBCM 2015). ISSN 2175-6759, ISBN 978-85-76694-75-5
Edited by Flávio Luiz Schiavoni, Rodrigo Schramm, José Eduardo Fornari Novo Junior, and Leandro Lesqueves Costalonga.
Catalog record (prepared by the Technical Processing Sector of the UFSJ Library Division): Simpósio Brasileiro de Computação Musical (15th: 2015: Campinas, SP). Proceedings [electronic resource] of the 15th Brazilian Symposium on Computer Music (SBCM), 23-25 November 2015, Campinas, SP, edited by Flávio Luiz Schiavoni et al. Campinas: UNICAMP, 2015. Available at: http://compmus.ime.usp.br/sbcm2015/files/proceedings-print.pdf. ISSN 2175-6759, ISBN 978-85-76694-75-5. Subjects: 1. Computer music. 2. Art and technology. 3. Multimedia (art). CDU 78:004.
SBCM 2015 is organized by the University of Campinas (UNICAMP). President: José Tadeu Jorge. Vice President for University Coordination: Álvaro Penteado Crósta. Vice President for Research (PRP): Gláucia Maria Pastore. Coordination of Interdisciplinary Centers (COCEN), Coordinator: Jurandir Zullo Junior. Interdisciplinary Center for Studies on Sound Communication (NICS), Coordinator: Adriana do Nascimento Araújo Mendes. Art Institute, Department of Music, Director: Fernando Augusto de Almeida Hashimoto; Chief of the Department: Leandro Barsalini; Coordinator of Graduate Studies in Music: Alexandre Zamith Almeida; Coordinator of Undergraduate Studies in Music: Paulo J. Siqueira Tiné. Production Center Staff (Ceprod), visual programming: Ivan Avelar. Promotion: Brazilian Computer …
-
ESA: a CLIM Library for Writing Emacs-Style Applications
ESA: A CLIM Library for Writing Emacs-Style Applications. Robert Strandh (LaBRI, Université Bordeaux 1, 351 Cours de la Libération, 33405 Talence Cedex, France), Troels Henriksen (DIKU, University of Copenhagen, Universitetsparken 1, Copenhagen), David Murray (ADMurray Associates, 10 Rue Carrier Belleuse, 75015 Paris), Christophe Rhodes (Goldsmiths, University of London, New Cross Road, London, SE14 6NW).
Abstract: We describe ESA (for Emacs-Style Application), a library for writing applications with an Emacs look-and-feel within the Common Lisp Interface Manager. The ESA library takes advantage of the layered design of CLIM to provide a command loop that uses Emacs-style multi-keystroke command invocation. ESA supplies other functionality for writing such applications such as a minibuffer for invoking extended commands and for supplying command arguments, Emacs-style keyboard macros and numeric arguments, file and buffer management, and more. ESA is currently used in two major CLIM applications: the Climacs text editor (and the Drei text gadget integrated with the McCLIM implementation) …
… style applications. The central distinguishing feature of an Emacs-style application, for our purposes, is that it uses short sequences of keystrokes as the default method of invoking commands, and only occasionally requires the user to type the full name of the command in a minibuffer. A special keystroke (M-x) is used to invoke commands by name. This interaction style is substantially different from the one provided by CLIM by default, and therefore required us to write a different CLIM top level. Fortunately, CLIM was designed to make this not only possible but fairly straightforward, as the implementation of a replacement top level can build on the protocol layers beneath, just as the default top level is built.
-
AirQ Sonification as a Context for Mutual Contribution between Science and Music
Arango, J. J. AirQ Sonification as a context for mutual contribution between Science and Music. Revista Música Hodie, Goiânia, v. 18, n. 1, 2018, p. 92-102. Julián Jaramillo Arango (Caldas University, Manizales, Colombia).
Abstract: This paper addresses a high-level discussion about the links between Science and Music, focusing on my own sonification and auditory display practices around air quality. Grounded on recent insights on interdisciplinarity and sonification practices, the first sections will point out some potential contributions from scientific research to music studies and vice versa. I will propose the concept of mutualism to depict the interdependent status of Music and Science. The final sections will discuss air contamination as a complex contemporary problem, and will report three practice-based-design projects, AirQ jacket, Esmog Data and Breathe!, which outline different directions in facing local environmental awareness. Keywords: Science and Music, Sonification, Environmental Awareness, Sonology.
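The article does not reproduce the mapping functions of the three projects, so the sketch below only illustrates the general shape of an air-quality auditory display: a PM2.5 reading selects a pulse rate and an alert pitch. The category breakpoints, tempos, and pitches are assumptions made for illustration, not the designs of AirQ jacket, Esmog Data, or Breathe!.

```python
# Illustrative air-quality sonification: map a PM2.5 reading (micrograms per
# cubic meter) to a pulse rate and an alert pitch. The breakpoints and sound
# parameters are hypothetical, not taken from Arango's projects.

def pm25_to_sound(pm25):
    categories = [            # (upper bound, label, pulse BPM, MIDI pitch)
        (12.0,  "good",       40, 60),
        (35.4,  "moderate",   60, 64),
        (55.4,  "sensitive",  90, 67),
        (150.4, "unhealthy", 120, 70),
    ]
    for upper, label, bpm, pitch in categories:
        if pm25 <= upper:
            return label, bpm, pitch
    return "hazardous", 180, 74

for reading in (8.0, 22.5, 80.0, 210.0):
    label, bpm, pitch = pm25_to_sound(reading)
    print(f"PM2.5 {reading:6.1f} -> {label:10s} pulse {bpm:3d} BPM, MIDI {pitch}")
```
-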
Connecting Time and Timbre: Computational Methods for Generative Rhythmic Loops in Symbolic and Signal Domains
Connecting Time and Timbre: Computational Methods for Generative Rhythmic Loops in Symbolic and Signal Domains. Cárthach Ó Nuanáin. Doctoral thesis, UPF, 2017. Thesis director: Dr. Sergi Jordà, Music Technology Group, Dept. of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain. Dissertation submitted to the Department of Information and Communication Technologies of Universitat Pompeu Fabra in partial fulfillment of the requirements for the degree of Doctor per la Universitat Pompeu Fabra. Copyright © 2017 by Cárthach Ó Nuanáin. Licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0. Music Technology Group (http://mtg.upf.edu), Department of Information and Communication Technologies (http://www.upf.edu/dtic), Universitat Pompeu Fabra (http://www.upf.edu), Barcelona, Spain.
Dedication: Do mo mháthair, Marian (To my mother, Marian).
This thesis was carried out at the Music Technology Group (MTG) of Universitat Pompeu Fabra in Barcelona, Spain, from Oct. 2013 to Nov. 2017. It was supervised by Dr. Sergi Jordà and Mr. Perfecto Herrera. Work in several parts of this thesis was carried out in collaboration with the GiantSteps team at the Music Technology Group in UPF as well as other members of the project consortium. Our work has been gratefully supported by the Department of Information and Communication Technologies (DTIC) PhD fellowship (2013-17), Universitat Pompeu Fabra, and the European Research Council under the European Union's Seventh Framework Program, as part of the GiantSteps project (FP7-ICT-2013-10, grant agreement no. 610591).
Acknowledgments: First and foremost I wish to thank my advisors and mentors Sergi Jordà and Perfecto Herrera. Thanks to Sergi for meeting me in Belfast many moons ago and bringing me to Barcelona.
-
SONIFICATION with MUSICAL CHARACTERISTICS: a PATH GUIDED by USER ENGAGEMENT Jonathan Middleton1, Jaakko Hakulinen2, Katariina Ti
The 24th International Conference on Auditory Display (ICAD 2018), June 10-15, 2018, Michigan Technological University.
Sonification with Musical Characteristics: A Path Guided by User Engagement. Jonathan Middleton (1), Jaakko Hakulinen (2), Katariina Tiitinen (2), Juho Hella (2), Tuuli Keskinen (2), Pertti Huuskonen (2), Juhani Linna (2), Markku Turunen (2), Mounia Ziat (3) and Roope Raisamo (2). (1) Eastern Washington University, Department of Music, Cheney, WA 99203, USA; (2) University of Tampere, Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere, FI-33014, Finland; (3) Northern Michigan University, Dept. of Psychological Science, Marquette, MI 49855, USA.
Abstract: Sonification with musical characteristics can engage users, and this dynamic carries value as a mediator between data and human perception, analysis, and interpretation. A user engagement study has been designed to measure engagement levels from conditions within primarily melodic, rhythmic, and chordal contexts. This paper reports findings from the melodic portion of the study, and states the challenges of using musical characteristics in sonifications via the perspective of form and function, a long-standing debate in Human-Computer Interaction.
The use of musical elements and characteristics in sonification has been formally explored by members of the International Community for Auditory Display since the mid 1990's [4], [5], [6]. A summary of music-related research and software development in sonification can be obtained from Bearman and Brown [7]. Earlier attempts were made by Pollack and Flicks in 1954 [8], yet according to Paul Vickers [9] and other researchers, such as Walker and Nees [10], the path for music in sonification remains uncertain or unclear. Vickers recently raised the significant question: "how should sonification designers who wish their work to be more …
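As a purely illustrative companion to the idea of sonification with musical characteristics, the sketch below quantizes a plain data-to-pitch mapping to a C major scale so the output reads as a melody rather than as raw frequencies. The scale, range, and data are assumptions and do not reproduce the melodic conditions used in the study.

```python
# Illustrative "musical" sonification: rescale data onto a pitch range, then
# snap each pitch to the nearest note of a C major scale so the result is a
# diatonic melody rather than arbitrary frequencies.

MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]          # semitone offsets of a major scale

def build_scale(root=60, octaves=3):
    """MIDI notes of a major scale starting at `root` (C4 = 60 by default)."""
    return [root + 12 * octave + step
            for octave in range(octaves) for step in MAJOR_STEPS]

def data_to_melody(values, scale):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    lo_note, hi_note = scale[0], scale[-1]
    raw = [lo_note + (v - lo) / span * (hi_note - lo_note) for v in values]
    return [min(scale, key=lambda s: abs(s - r)) for r in raw]

melody = data_to_melody([3.1, 4.7, 2.2, 6.0, 5.4, 3.9], build_scale())
print(melody)                                  # a short diatonic melody derived from the data
```
-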
The Sonification Handbook, Chapter 4: Perception, Cognition and Action in Auditory Displays
The Sonification Handbook. Edited by Thomas Hermann, Andy Hunt, John G. Neuhoff. Logos Publishing House, Berlin, Germany. ISBN 978-3-8325-2819-5, 2011, 586 pages. Online: http://sonification.de/handbook. Order: http://www.logos-verlag.com. Reference: Hermann, T., Hunt, A., Neuhoff, J. G., editors (2011). The Sonification Handbook. Logos Publishing House, Berlin, Germany.
Chapter 4: Perception, Cognition and Action in Auditory Display, by John G. Neuhoff. This chapter covers auditory perception, cognition, and action in the context of auditory display and sonification. Perceptual dimensions such as pitch and loudness can have complex interactions, and cognitive processes such as memory and expectation can influence user interactions with auditory displays. These topics, as well as auditory imagery, embodied cognition, and the effects of musical expertise will be reviewed. Reference: Neuhoff, J. G. (2011). Perception, cognition and action in auditory display. In Hermann, T., Hunt, A., Neuhoff, J. G., editors, The Sonification Handbook, chapter 4, pages 63–85. Logos Publishing House, Berlin, Germany. Media examples: http://sonification.de/handbook/chapters/chapter4
4.1 Introduction. Perception is almost always an automatic and effortless process. Light and sound in the environment seem to be almost magically transformed into a complex array of neural impulses that are interpreted by the brain as the subjective experience of the auditory and visual scenes that surround us. This transformation of physical energy into "meaning" is completed within a fraction of a second. However, the ease and speed with which the perceptual system accomplishes this Herculean task greatly masks the complexity of the underlying processes and often times leads us to greatly underestimate the importance of considering the study of perception and cognition, particularly in applied environments such as auditory display.
-
Signal Processing for Music Analysis (Meinard Müller, Daniel P. W. Ellis, Anssi Klapuri, and Gaël Richard)
IEEE Journal of Selected Topics in Signal Processing, 2011. Signal Processing for Music Analysis. Meinard Müller, Member, IEEE; Daniel P. W. Ellis, Senior Member, IEEE; Anssi Klapuri, Member, IEEE; and Gaël Richard, Senior Member, IEEE.
Abstract: Music signal processing may appear to be the junior relation of the large and mature field of speech signal processing, not least because many techniques and representations originally developed for speech have been applied to music, often with good results. However, music signals possess specific acoustic and structural characteristics that distinguish them from spoken language or other nonmusical signals. This paper provides an overview of some signal analysis techniques that specifically address musical dimensions such as melody, harmony, rhythm, and timbre. We will examine how particular characteristics of music signals impact and determine these techniques, and we highlight a number of novel music analysis and retrieval tasks that such processing makes possible. Our goal is to demonstrate that, to be successful, music audio signal processing techniques must be informed by a deep and thorough insight into the nature of music itself.
… consumption, which is not even to mention their vital role in much of today's music production. This paper concerns the application of signal processing techniques to music signals, in particular to the problems of analyzing an existing music signal (such as a piece in a collection) to extract a wide variety of information and descriptions that may be important for different kinds of applications. We argue that there is a distinct body of techniques and representations that are molded by the particular properties of music audio, such as the pre-eminence of distinct fundamental periodicities (pitches), the preponderance of overlapping sound sources in musical ensembles (polyphony), and the variety of source characteristics (timbre) …
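One musical dimension named above, the pre-eminence of distinct fundamental periodicities (pitch), can be illustrated with a toy autocorrelation-based fundamental-frequency estimator. This is a generic textbook method applied to a synthetic tone, not one of the specific algorithms surveyed in the paper.

```python
# Toy fundamental-frequency estimator: pick the lag that maximizes the
# autocorrelation of a frame, restricted to a plausible pitch range.
# Generic textbook approach, shown here on a synthetic two-partial tone.

import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=1000.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # non-negative lags
    lo, hi = int(sr / fmax), int(sr / fmin)                         # lag search range
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(int(0.05 * sr)) / sr                                  # 50 ms frame
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print(f"estimated f0: {estimate_f0(tone, sr):.1f} Hz")              # close to 220 Hz
```
-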
The Sonification Handbook, Chapter 3: Psychoacoustics
The Sonification Handbook. Edited by Thomas Hermann, Andy Hunt, John G. Neuhoff. Logos Publishing House, Berlin, Germany. ISBN 978-3-8325-2819-5, 2011, 586 pages. Online: http://sonification.de/handbook. Order: http://www.logos-verlag.com. Reference: Hermann, T., Hunt, A., Neuhoff, J. G., editors (2011). The Sonification Handbook. Logos Publishing House, Berlin, Germany.
Chapter 3: Psychoacoustics, by Simon Carlile. Reference: Carlile, S. (2011). Psychoacoustics. In Hermann, T., Hunt, A., Neuhoff, J. G., editors, The Sonification Handbook, chapter 3, pages 41–61. Logos Publishing House, Berlin, Germany. Media examples: http://sonification.de/handbook/chapters/chapter3
3.1 Introduction. Listening in the real world is generally a very complex task, since sounds of interest typically occur on a background of other sounds that overlap in frequency and time. Some of these sounds can represent threats or opportunities while others are simply distracters or maskers. One approach to understanding how the auditory system makes sense of this complex acoustic world is to consider the nature of the sounds that convey high levels of information and how the auditory system has evolved to extract that information. From this evolutionary perspective, humans have largely inherited this biological system, so it makes sense to consider how our auditory systems use these mechanisms to extract information that is meaningful to us and how that knowledge can be applied to best sonify various data. One biologically important feature of a sound is its identity; that is, the spectro-temporal characteristics of the sound that allow us to extract the relevant information represented by the sound.
-
Gravity's Reverb: Listening to Space-Time, or Articulating the Sounds of Gravitational-Wave Detection
Gravity's Reverb: Listening to Space-Time, or Articulating the Sounds of Gravitational-Wave Detection. Stefan Helmreich, Massachusetts Institute of Technology, http://orcid.org/0000-0003-0859-5881. Cultural Anthropology, Vol. 31, Issue 4, pp. 464–492, ISSN 0886-7356, online ISSN 1548-1360. © by the American Anthropological Association. All rights reserved. DOI: 10.14506/ca31.4.02.
I heard gravitational waves before they were detected. I was sitting in a pub in May 2015 with MIT physicists Scott Hughes and David Kaiser, my headphones looped into a laptop. Hughes was readying to play us some interstellar sounds: digital audio files, he explained, that were sonic simulations of what gravitational waves might sound like if, one day, such cosmic undulations arrived at detection devices on Earth. Gravitational waves are tiny but consequential wriggles in space-time, first theorized by Einstein in 1916: vibrations generated, for example, by such colossal events as the collision of black holes or the detonation of supernovae. Listening to Hughes's .wav representations of black holes spiraling into one another, I heard washing-machine whirs, Theremin-like glissandos, and cybernetic chirps, noises reminiscent, I mused, of mid-twentieth century sci-fi movies (see Taylor 2001 on space-age music). These were sounds that, as it turned out, had a family resemblance to the actual detection signal that, come February 2016, would be made public by astronomers at the Laser Interferometer Gravitational-Wave Observatory (LIGO), an MIT-Caltech astronomy collaboration anchored at two massive antennae complexes, one in Hanford, Washington, another in Livingston, Louisiana.
Audio 1. Circular inspiral, spin 35.94% of maximum, orbital plane 0°, 0° viewing angle. Created by Pei-Lan Hsu, using code written by Scott Hughes.
Audio 2. Generic inspiral, eccentricity e = 0.7, inclination i = 25°. Created by Pei-Lan Hsu, using code written by Scott Hughes.
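The chirps described here can be loosely imitated with a few lines of synthesis: a sine tone whose frequency and amplitude climb toward an abrupt cutoff, written to a WAV file. This is a toy sound-alike with arbitrary parameters, not the relativistic waveform models behind Hughes's simulations or the LIGO detection signal.

```python
# Toy "inspiral chirp": a sine sweep whose frequency and amplitude grow toward
# an abrupt cutoff, loosely imitating the audio character of a binary-merger
# chirp. All parameters are arbitrary and chosen only for the sound.

import numpy as np
import wave

sr, dur = 44100, 3.0
t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
f0, f1 = 40.0, 500.0                        # sweep from 40 Hz up to 500 Hz
freq = f0 + (f1 - f0) * (t / dur) ** 3      # frequency rises ever faster
phase = 2 * np.pi * np.cumsum(freq) / sr    # integrate frequency to get phase
amp = (t / dur) ** 2                        # amplitude also grows toward the end
samples = (0.8 * amp * np.sin(phase) * 32767).astype(np.int16)

with wave.open("chirp.wav", "wb") as w:
    w.setnchannels(1)                       # mono
    w.setsampwidth(2)                       # 16-bit PCM
    w.setframerate(sr)
    w.writeframes(samples.tobytes())
print("wrote chirp.wav")
```
-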
Comparison and Evaluation of Sonification Strategies for Guidance Tasks (Gaëtan Parseihian, Charles Gondre, Mitsuko Aramaki, Sølvi Ystad, Richard Kronland-Martinet)
Gaëtan Parseihian, Charles Gondre, Mitsuko Aramaki, Sølvi Ystad, Richard Kronland-Martinet. Comparison and Evaluation of Sonification Strategies for Guidance Tasks. IEEE Transactions on Multimedia, Institute of Electrical and Electronics Engineers, 2016, 18 (4), pp. 674-686. DOI: 10.1109/TMM.2016.2531978. HAL Id: hal-01306618, https://hal.archives-ouvertes.fr/hal-01306618, submitted on 25 Apr 2016. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Abstract: This article aims to reveal the efficiency of sonification strategies in terms of rapidity, precision and overshooting in the case of a one-dimensional guidance task. The sonification strategies are based on the four main perceptual attributes of a sound (i.e. pitch, loudness, duration/tempo and timbre) and classified with respect to the presence or not of one or several auditory references.
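As a concrete reading of that taxonomy, the sketch below implements one hypothetical pitch-based strategy with an auditory reference: the target is a fixed reference tone and the current position is a moving tone whose interval to the reference shrinks to unison at the target. The mapping and parameter values are assumptions, not the strategies evaluated in the article.

```python
# Illustrative pitch-based guidance with an auditory reference: the target is
# a fixed 440 Hz tone and the distance to the target controls the frequency
# ratio of a second, moving tone; the two coincide when the target is reached.

def guidance_frequencies(distance, max_distance=1.0,
                         reference_hz=440.0, max_interval=2.0):
    """Return (reference tone, moving tone) frequencies for a 1-D distance."""
    d = min(max(distance, 0.0), max_distance) / max_distance    # clamp to [0, 1]
    moving_hz = reference_hz * (max_interval ** d)               # one octave away at d = 1
    return reference_hz, moving_hz

for d in (1.0, 0.5, 0.25, 0.0):
    ref, mov = guidance_frequencies(d)
    print(f"distance {d:4.2f} -> reference {ref:.0f} Hz, cursor {mov:6.1f} Hz")
```

In an actual guidance display the moving tone would be resynthesized continuously as the user acts, so that the closing interval itself becomes the feedback signal.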