Bell3D: An Audio-based Astronomy Education System for Visually-impaired Students


Jamie Ferguson
University of Aberdeen and University of Glasgow, Scotland
[email protected]

Keywords: Sonification, human-computer interaction, multimodal, visually impaired, software

This article presents a software system that allows astronomical data to be represented and analysed purely through sound. The goal is to use this for communicating astronomy to visually impaired students, as well as providing a new modality of astronomy experience for those who are sighted. The programme — known as Bell3D and developed at the University of Aberdeen — allows various stars to be selected from a database and then maps a variety of their astronomical parameters to audio parameters so that they can be analysed, compared and understood solely through the medium of sound.

Introduction and background

The ability to experience the surrounding Universe is inherent and taken for granted by those who are sighted, but for those with a visual impairment the night sky remains almost entirely impenetrable. Utilising sonification — the use of non-speech audio to convey information (Kramer et al., 2010) — allows those with a visual impairment to engage with data that would previously have been accessible only via visual modalities, and therefore available exclusively to sighted people. Sonification allows traditionally visually-oriented and data-driven fields such as astronomy to be explored and experienced by visually-impaired people, some of whom may never have had the opportunity to enjoy and experience these fields before. Furthermore, this technology offers a new, complementary modality to visual experience for sighted people, with potential applications across data science, science communication, art and education.

A simple example of sonification would be a kettle whistling when the water within it reaches around a hundred degrees centigrade: the data in this example is the temperature of the water, and the information extracted from analysing the whistling is that the water has reached its boiling point. Another classic example is a Geiger–Müller radiation detector, which creates an audible click when particles associated with radioactivity strike the detector. Sonification is not, however, to be confused with audification. Audification is the direct playback of data samples (Kramer, 1993), whereas sonification is the "transformation of data relations to perceived relations in an acoustic signal" (Kramer et al., 2010).
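To make the distinction concrete, the following sketch contrasts the two techniques on a toy one-dimensional data series. It is an illustrative sketch only, not code from Bell3D; the numpy dependency, sample rate, frequency band and function names are assumptions chosen for the example. Audification plays the data values back directly as an audio waveform, while sonification synthesises sound whose parameters (here, pitch) are driven by the data.

```python
import numpy as np

SAMPLE_RATE = 44100  # output sample rate in Hz (an assumption for this sketch)


def audify(data):
    """Audification: treat the data samples themselves as the waveform.
    The values are centred, normalised to [-1, 1] and played back directly."""
    data = np.asarray(data, dtype=float)
    centred = data - data.mean()
    peak = float(np.max(np.abs(centred))) or 1.0
    return centred / peak  # one audio sample per data point


def sonify(data, tone_seconds=0.2, low_hz=220.0, high_hz=880.0):
    """Sonification: map each data value onto a perceptual parameter of a
    synthesised sound -- here, the pitch of a short sine tone."""
    data = np.asarray(data, dtype=float)
    lo, hi = float(data.min()), float(data.max())
    span = (hi - lo) or 1.0
    t = np.linspace(0.0, tone_seconds, int(SAMPLE_RATE * tone_seconds), endpoint=False)
    tones = [np.sin(2 * np.pi * (low_hz + (v - lo) / span * (high_hz - low_hz)) * t)
             for v in data]
    return np.concatenate(tones)


if __name__ == "__main__":
    # A toy series standing in for, e.g., a stellar light curve.
    series = np.sin(np.linspace(0, 20, 2000)) + 0.1 * np.random.randn(2000)
    direct = audify(series)         # the waveform is the (normalised) data itself
    mapped = sonify(series[::100])  # the waveform is synthesised from the data
    print(direct.shape, mapped.shape)
```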
Sonification is, in many situations, better suited than visual modalities to analysing large datasets for patterns and anomalies (Kramer et al., 2010). For example, a significantly large dataset such as a genomic sequence, which would traditionally have taken a great deal of time and resources to analyse visually for patterns and anomalies, may be compacted and sped up using sound, allowing it to be analysed in the order of minutes.

Both sonification and audification have, until recently, only been applied sporadically throughout the history of astrophysical and astronomical research. The first use of these technologies in the literature was during the Voyager 2 mission in the late 1970s (Scarf, 1982). These technologies have more recently been used to analyse radio plasma waves around Saturn (Gurnett, 2005), as well as to examine large collections of solar wind data (Wicks et al., 2016). Furthermore, recent work on a sonification prototype for analysing satellite and radio telescope data (Candey et al., 2006; Diaz-Merced, 2013) has shown that researchers within the astronomy and astrophysics communities are hopeful about the potential applications of sonification at the leading edge of these sciences.

The astronomy and astrophysics communication communities have experienced a surge in the use of sonification and audification in the last decade. Recent works include the European Space Agency's use of sonification as a method of public engagement with the Rosetta mission (European Space Agency, 2014), NASA's Earth+ system1 and the Harvard–Smithsonian Centre for Astrophysics' Star Songs: From X-rays to Music project2. Furthermore, during the press coverage of the LIGO experiment's recent detection of gravitational waves (Abbott et al., 2016), the audification of the detection was used ubiquitously by the media and has become synonymous with the detection3. These technologies have only recently begun to be implemented as astronomy communication tools, but computing and audio technologies are now reaching a level of low cost and high performance that gives sonification the potential to be used throughout astronomy communication and education for the visually impaired.

Figure 1. The LIGO gravitational wave detection waveform (strain versus time), which was audified as a successful means of communicating the discovery in the media. Credit: LIGO Scientific Collaboration

How the system works

The Bell3D system4 was developed after realising the potential of sonification and audio-based data displays within astronomy communication and education, not only as an essential resource for visually-impaired people, but also as a new, engaging medium of astronomy experience for those who are sighted. The programme is in its infancy and requires more work, but has so far proven to be an important step towards ubiquitous access to astronomy experience and education.

The software involves the user selecting a constellation and then specific stars within that constellation to be sonified, meaning that data relating to the selected stars is transposed into audio signals. When stars are selected, their astronomical parameters — such as magnitude, location, size and distance from Earth, obtained from the SIMBAD database5 (Wenger, 2000) — are mapped onto audio parameters such as volume and pitch. The audio is spatialised, meaning that the listener perceives the sounds as located in three-dimensional space, coming from particular areas around their head; the star's equatorial coordinates are used for this. This provides an engaging experience and gives the user an idea of where the star(s) reside in relation to Earth.

The sounds produced are simple and easy to analyse. For each star there are two sounds: a repeating "ping" sound and a continuous tone. Both can be turned on and off independently. The former encodes three data points within one sound — a star's distance from Earth, its location with reference to the Earth and its apparent magnitude — and the latter encodes one attribute — a star's colour index.

The ping represents the distance, location and apparent magnitude values in the following manner: the star's distance from Earth is represented by how loud the ping is, with the volume of this sound being inversely proportional to the star's distance from Earth (the closer the star, the louder the sound); the star's location, based on its equatorial coordinates, is represented by the sound's perceived location in the three-dimensional sound space; and, finally, the star's apparent magnitude, or brightness, is represented by the pitch of the ping, the pitch being proportional to the star's apparent magnitude.

The second sound, the continuous tone, represents a star's colour index through its pitch: the higher the pitch of the tone, the further towards the red end of the spectrum the star is; the lower the pitch, the further towards the blue end.
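The article describes these mappings qualitatively but does not give Bell3D's actual implementation, so the sketch below simply restates them in Python under stated assumptions: ping loudness falls off inversely with distance, ping pitch rises linearly with apparent magnitude, the continuous tone's pitch rises with colour index (towards the red), and the equatorial coordinates are turned into a unit direction vector that a 3D audio engine could use for spatialisation. The Star fields, function names, frequency bands and rescaling ranges are illustrative assumptions rather than the programme's real parameters, and the two example stars use approximate catalogue values quoted only to exercise the mapping.

```python
import math
from dataclasses import dataclass


@dataclass
class Star:
    """Minimal star record; the field names are assumptions, not SIMBAD's schema."""
    name: str
    distance_pc: float   # distance from Earth in parsecs
    app_mag: float       # apparent magnitude (smaller = brighter)
    colour_index: float  # B-V colour index (larger = redder)
    ra_deg: float        # right ascension in degrees
    dec_deg: float       # declination in degrees


def _rescale(value, value_range, low_hz=200.0, high_hz=2000.0):
    """Linearly rescale a value onto an audible frequency band, clamped to the band."""
    lo, hi = value_range
    frac = (value - lo) / (hi - lo)
    return low_hz + max(0.0, min(1.0, frac)) * (high_hz - low_hz)


def ping_volume(star, ref_distance_pc=1.0):
    """Ping loudness is inversely proportional to distance (closer = louder),
    clamped to [0, 1] for the audio engine."""
    return max(0.0, min(1.0, ref_distance_pc / star.distance_pc))


def ping_pitch_hz(star, mag_range=(-1.5, 6.5)):
    """Ping pitch is proportional to apparent magnitude."""
    return _rescale(star.app_mag, mag_range)


def tone_pitch_hz(star, bv_range=(-0.4, 2.0)):
    """Continuous-tone pitch rises with colour index: higher pitch = redder star."""
    return _rescale(star.colour_index, bv_range)


def direction_vector(star):
    """Convert equatorial coordinates into a unit vector that a spatial audio
    engine could use to place the star's sounds around the listener's head."""
    ra, dec = math.radians(star.ra_deg), math.radians(star.dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))


if __name__ == "__main__":
    # Approximate values for two well-known stars (illustrative only).
    betelgeuse = Star("Betelgeuse", 168.0, 0.5, 1.85, 88.79, 7.41)
    sirius = Star("Sirius", 2.64, -1.46, 0.00, 101.29, -16.72)
    for s in (betelgeuse, sirius):
        print(s.name, round(ping_volume(s), 3), round(ping_pitch_hz(s)),
              round(tone_pitch_hz(s)), direction_vector(s))
```

Run as-is, this reproduces the kind of comparison drawn in the next paragraph: Betelgeuse's continuous tone sits near the top of the frequency band because of its large colour index, while Sirius, being far closer to Earth, yields a much louder ping.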
This rather simple mapping of parameters allows the user to make basic inferences about stars based on what they hear. For example, one can infer that Betelgeuse is a red star, because the pitch of its tone is high, or that Proxima Centauri is closer to Earth than Sirius, because its "ping" sound is louder.

Additional applications of the system

The obvious, prevalent use for the system lies within astronomy education and communication, but uses within the arts have been discussed as potential further applications of the Bell3D software. Sonification and audification have been employed successfully on many occasions within artistic endeavours that explore ideas combining science and art (Mast et al., 2015; Polli, 2005), and have been the basis of many sound artists' practice, including that of Robert Alexander6, Mélodie Fenez7, Ryoji Ikeda8 and Rory Viner9. Bell3D has the potential to be used in such a way in future, either as a composition or a performance tool for artists.

The system was used as an astronomy-teaching resource by Dr Wanda Diaz-Merced of the Harvard–Smithsonian Centre for Astrophysics during a recent series of lectures and workshops at South Africa's national science festival, SciFest Africa10.

… Astronomical Union's Office of Astronomy for Development (OAD) in Cape Town, South Africa. During this project the software will benefit from input by researchers, educators and outreach specialists at the South African Astronomical Observatory, as well as being used in OAD-led astronomy lessons with students at the Athlone School for the Blind. This will allow for preliminary evaluation of visually impaired …

… severe visual impairment can use the system without a screen via the use of haptic technologies and speech recognition. This further research will focus much more thoroughly on investigating user perception and response to sonification within astronomy in both visually-impaired and sighted groups. The prototype described here is a useful preliminary investigation into the software technology needed to facil…
Recommended publications
  • Sonification as a Means to Generative Music, Ian Baxter (PhD thesis, University of Sheffield, January 2020)
  • AirQ Sonification as a Context for Mutual Contribution between Science and Music, Julián Jaramillo Arango (Revista Música Hodie, 18(1), 2018, pp. 92–102)
  • Sonification with Musical Characteristics: A Path Guided by User Engagement, Jonathan Middleton et al. (Proceedings of ICAD 2018)
  • The Sonification Handbook, Chapter 4: Perception, Cognition and Action in Auditory Display, John G. Neuhoff (Logos Publishing House, 2011)
  • The Sonification Handbook, Chapter 3: Psychoacoustics, Simon Carlile (Logos Publishing House, 2011)
  • Gravity's Reverb: Listening to Space-Time, or Articulating the Sounds of Gravitational-Wave Detection, Stefan Helmreich (Cultural Anthropology, 31(4), 2016, pp. 464–492)
  • Comparison and Evaluation of Sonification Strategies for Guidance Tasks, Gaëtan Parseihian, Charles Gondre, Mitsuko Aramaki, Sølvi Ystad and Richard Kronland-Martinet (IEEE Transactions on Multimedia, 18(4), 2016, pp. 674–686)
  • Sonification Strategies for the Film Rhythms of the Universe, Mark Ballora (Proceedings of ICAD 2014)
  • Sonification: A Prehistory, David Worrall (Proceedings of ICAD 2018)
  • Communicating Air: Alternative Pathways to Environmental Knowing through Computational Ecomedia, Andrea Polli (PhD thesis, University of Plymouth, 2011)
  • Interactive Sonification of Network Traffic to Support Cyber Security Situational Awareness, Mohamed Debashi (PhD thesis, Northumbria University, 2018)
  • Sonic Interaction Design, edited by Karmen Franinović and Stefania Serafin (The MIT Press, 2013)