The 24th International Conference on Auditory Display (ICAD 2018), June 10-15, 2018, Michigan Technological University

SONIFICATION WITH MUSICAL CHARACTERISTICS: A PATH GUIDED BY USER ENGAGEMENT

Jonathan Middleton1, Jaakko Hakulinen2, Katariina Tiitinen2, Juho Hella2, Tuuli Keskinen2, Pertti Huuskonen2, Juhani Linna2, Markku Turunen2, Mounia Ziat3 and Roope Raisamo2

1 Eastern Washington University, Department of Music, Cheney, WA 99203 USA, [email protected]
2 University of Tampere, Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere, FI-33014 Finland, {firstname.lastname}@uta.fi
3 Northern Michigan University, Dept. of Psychological Science, Marquette, MI 49855 USA, [email protected]

This work is licensed under the Creative Commons Attribution – Non Commercial 4.0 International License. The full terms of the License are available at http://creativecommons.org/licenses/by-nc/4.0/

ABSTRACT

Sonification with musical characteristics can engage users, and this dynamic carries value as a mediator between data and human perception, analysis, and interpretation. A user engagement study was designed to measure engagement levels under primarily melodic, rhythmic, and chordal conditions. This paper reports findings from the melodic portion of the study and discusses the challenges of using musical characteristics in sonification from the perspective of form and function – a long-standing debate in Human-Computer Interaction. These results can guide the design of more complex sonifications of multivariable data suitable for real-life use.

1. INTRODUCTION

Sonifications that rely on musical elements such as rhythm, pitch, and harmony are often reserved for creative purposes, yet musical characteristics can offer important contributions to analysis [1]. When a variety of musical elements is integrated into auditory display for analytical contexts, such as process monitoring, researchers can capture a variety of features from multivariate data [2]. One concern, however, is that musical representations can create distractions and that music adds more than is needed for an analysis. In this sense, music is seen as a more favorable component of form than of function – an issue tied to the long-standing Human-Computer Interaction (HCI) debate over which comes first [3].
This paper offers the view that musical elements can facilitate analytical objectives – improved data perception, data analysis, and data interpretation – through user engagement and enhanced experiences.

The use of musical elements and characteristics in sonification has been formally explored by members of the International Community for Auditory Display since the mid-1990s [4], [5], [6]. A summary of music-related research and software development in sonification can be found in Bearman and Brown [7]. Earlier attempts were made by Pollack and Ficks in 1954 [8], yet according to Paul Vickers [9] and other researchers, such as Walker and Nees [10], the path for music in sonification remains unclear. Vickers recently raised the significant question of "how should sonification designers who wish their work to be more 'musical' approach the task?" and asserts that "the path we ought to take is currently unanswered." Walker and Nees [10] note that "questions of aesthetics and musicality remain open in the field of sonification." The interest in incorporating musical characteristics in sonification is captured well by Brown et al. [11]: "the use of musical sounds has been recommended because of the ease with which musical sounds are perceived."

The use of musical characteristics in sonification can serve multiple purposes. To start, musical expressions that relate to aesthetics can enhance the user experience, and this can, in turn, enhance the perception of data. Positive user experiences can translate into more time spent with the data analysis and improved interpretations. Finally, the mapping possibilities to musical traits are numerous, which brings opportunities to associate multivariate data with different types of musical features and expressions.

The work reported in this paper was carried out in a project called Data-to-Music, which focused on the development of software to map data to musical characteristics. The data came mostly from monitoring the conditions of buildings, machines, weather, and athletic activities. What made the project unique was its focus on the user experience with music.

In order to test the possibilities of musical characteristics in data representation and display, the research team designed and programmed the software called D2M (Figure 1). It enables users to assign variables of time-series data sets to "streams" which control different aspects of the generated audio. Each stream can be associated with a unique or shared instrument timbre, and the data values can be set to control pitch, the complexity of rhythmic patterns, accent chords, articulation, or the loudness of the instrument. The user can determine the duration of the generated music, and the data are scaled accordingly. The data streams specify the parameters of each generated note based on the given settings; the global structure of the generated music is the result of the data content. Additional tools include the possibility to set thresholds, based both on data values and on time ranges, to filter out some of the data. There are also preprocessing tools for linear and logarithmic scaling, as well as ways to invert data values. The high degree of flexibility of the tool enabled the team to create auditory displays that sound composed, even though the results were produced by an algorithm.

Figure 1: Weather mapping to MIDI notes with the D2M software.
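To make the mapping concrete, the following minimal Python sketch reproduces the kind of pipeline just described: threshold filtering, optional logarithmic scaling and inversion, and linear mapping of data values into a MIDI pitch range quantized to a scale. Only the C major scale and the MIDI range 43-60 are taken from the settings visible in Figure 1; the paper does not publish D2M's implementation, so the function names and example data here are hypothetical.

```python
import math

# Pitch classes of the C major scale ("Scale c-major" in Figure 1).
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def preprocess(values, lo=None, hi=None, log=False, invert=False):
    """Threshold filtering plus the logarithmic-scaling and value-inversion
    preprocessing described for D2M (hypothetical re-implementation)."""
    kept = [v for v in values
            if (lo is None or v >= lo) and (hi is None or v <= hi)]
    if log:
        kept = [math.log1p(v) for v in kept]  # assumes non-negative data
    if invert:
        top, bottom = max(kept), min(kept)
        kept = [top + bottom - v for v in kept]
    return kept

def snap_to_scale(note):
    """Lower a MIDI note number until it lands on a C major scale tone."""
    while note % 12 not in C_MAJOR:
        note -= 1
    return note

def map_to_pitches(values, low=43, high=60):
    """Rescale data linearly into a MIDI note range (43-60, as in the
    Figure 1 settings) and quantize each note to the chosen scale."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # avoid division by zero on flat data
    return [snap_to_scale(round(low + (v - vmin) / span * (high - low)))
            for v in values]

# Example: one "stream" of cloud-cover readings (0-8 scale, invented values)
# becomes a melodic line of MIDI note numbers.
cloud_cover = [0, 2, 6, 8, 7, 3, 1, 0]
print(map_to_pitches(preprocess(cloud_cover, log=True)))
```

In D2M, each stream runs through such a pipeline independently with its own settings, which is how a single variable such as cloud cover can drive separate pitch and rhythm streams (cf. Figure 2).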
In the process of interpreting data, sound characteristics with musical qualities can contribute to enhanced levels of engagement or distract from the analysis [12]. The challenge is to ensure that the aesthetic qualities of the auditory display do not compromise coherence. We submit that data-to-music sonification can not only engage users, but that levels of engagement with coherence can be increased by careful combinations of musical characteristics.

A significant goal in HCI is the user experience, which is often tied to aesthetics. Dillon [13] traces the importance of aesthetics, as a basic motivator, back to A. Maslow (1954). Tractinsky et al. argue that individuals' aesthetic perceptions are highly correlated with interface usage; therefore, whether we intend to incorporate aesthetics into usage by design or not, aesthetics are most likely already tied to usage [3], [14].

In 2006, Andrew Dillon [13] introduced the term mediation to describe an essential mechanism whose role is to translate information between data and human. In describing this mechanism, he states that information technology is a mediator, among others, which "carries, stores, retrieves, and presents data"; and from these functions, information technology "can provide a physical instantiation of data or the mechanism for making visible or audible the data of interest."

In this paper, we are guided by the research question: can musical characteristics contribute to meaningful data perception, analysis, and interpretation? In addressing this challenge, we reference two studies that focus on the significance of the audible experience mentioned by Dillon, and in particular on the instantiation of data called sonification. We will then seek to show that interpretive experiences aided by musical characteristics in the auditory display can offer analytical advantages.

In this article, we first outline our sonification approach and then describe a user survey that seeks to measure to what extent the approach leads to greater levels of user engagement. The D2M software was used to turn weather data into the musical pieces used in the survey.

Figure 2: Cloud cover data being mapped into two independent streams of pitch and rhythmic values.

2. FORM AND FUNCTION

The form and function debate in HCI is similar to the one found in sonification with music. In Human-Computer Interaction, auditory display has maintained an increasing role in offering new […] time peripheral monitoring with sounds. Their main goal was to create a continuous soundscape, based on sonification, that does not distract users from their main task(s). In their study, forest sounds were chosen for continuous soundscapes during peripheral process monitoring. The decision to use sounds from nature came after consulting the work of Gaver et al.