Visualization Research Center (VISUS)

Bachelorarbeit

Real-Time and Post-Hoc Visualizations of Guitar Performances as a Support for Music Education

Jakub Krawczuk

Course of Study: B.Sc. Informatik

Examiner: Jun. Prof. Michael Sedlmair

Supervisor: Frank Heyen, M.Sc.

Commenced: May 29, 2020

Completed: September 30, 2020

Abstract

Learning to play the guitar is a long and tedious process, and many would-be players struggle with it. Specialized software applications can help music students and self-taught musicians by providing additional insights into the errors made during the learning process. This bachelor thesis describes the development of a prototypical visualization application for use in music education. It provides an overview of the theoretical background and technical requirements and describes the conception, implementation, and evaluation of our prototype. We use MIDI data, which we receive from a polyphonic pickup that we mounted to a guitar. Our prototype consists of three visualizations that allow users to analyze live input of this MIDI data, perform error detection, and see general patterns in a summary heat map of multiple MIDI recordings. As part of a user-centered design, we first collected requirements from a group of guitar players and a guitar teacher, and later evaluated our prototype with them. This evaluation revealed several limitations, but also showed that our concept can be useful to support beginners in learning to play guitar.

Kurzfassung

Gitarre spielen zu lernen ist ein langer und aufwändiger Prozess, der vielen angehenden Spielern schwerfällt. Eine spezialisierte Softwareanwendung kann Musikschülern und eigenständig Lernenden helfen, indem sie zusätzliche Einblicke in die Fehler gewährt, die beim Lernprozess gemacht werden. Diese Bachelorarbeit beschreibt die Entwicklung des Prototypen eines Visualisierungssystems für die Nutzung in der Musikausbildung. Sie beinhaltet einen Überblick über die theoretischen Hintergründe und technischen Voraussetzungen unseres Systems und beschreibt die Konzeption, Implementierung und Evaluierung unseres Prototypen. Wir verwenden MIDI-Daten, welche wir von einem mehrkanaligen, an der Gitarre befestigten Tonabnehmer empfangen. Der Prototyp besteht aus drei Visualisierungen, welche es erlauben, die MIDI-Daten live zu analysieren, eine Fehlererkennung durchzuführen und allgemeine Muster in einer zusammenfassenden Heatmap aus mehreren MIDI-Aufnahmen zu erkennen. Als Bestandteil eines nutzerorientierten Designs haben wir zuerst Anforderungen durch Befragung einer Gruppe von Gitarrenspielern und eines Gitarrenlehrers gesammelt und unseren Prototypen später mit denselben evaluiert. Diese Evaluierung ließ einige Einschränkungen erkennen, zeigte aber auch, dass unser Konzept hilfreich bei der Unterstützung von Anfängern sein kann.


Contents

1 Introduction
2 Background
2.1 MIDI
2.2 Hardware
2.3 Musical Notation for Guitar
2.4 Limits of Perception – Psychoacoustics
3 Related Work
3.1 Music Visualization for Guitar
3.2 Statistical Process Control
3.3 Stacked Graphs for Time Series Data
4 Requirements and Design
4.1 Requirements Analysis
4.2 Tasks and Design Decisions
4.3 Technical Implementation
5 Evaluation
5.1 Limitations
5.2 Qualitative User Study
6 Conclusion and Outlook
Bibliography


List of Figures

1.1 Visualization of the automatically identified errors in a selected recording.
2.1 Fishman Triple Play mounted onto a guitar.
2.2 TuxGuitar showing the guitar tablature for “Breaking The Law”.
3.1 DAW Piano Roll (Public Domain)
4.1 Our main visualization approach that combines ideas from guitar tabs and piano rolls and serves as the basis for most of our visualizations. The six guitar strings are ordered vertically as seen by a guitar player, time is mapped to the horizontal axis, and notes are drawn as rectangles on the string they are played on, labelled with the number of the fingered fret. An overview at the top allows navigation using the focus-and-context technique.
4.2 Visualization of a recording (blue) overlaid on top of the ground truth data (grey). This allows a direct comparison of each played note to its correct version.
4.3 A heat map of all recordings (red), visualizing the times where a note was played. By looking at all recordings at once, we can see general patterns in the player's performance.
4.4 Our stacked area chart showing the changes in error rates for 12 recordings of “Breaking The Law”.
5.1 Recording of “Smoke on the Water” by Deep Purple; the participant complained that the notes' onsets were always too late.
5.2 The same recording as in Figure 5.1, but all notes are moved 90 ms towards the start.
5.3 Visualization of a user's performance during the study, which they found “discouraging”, despite making progress.


Listings

2.1 Guitar tablature for “Breaking The Law” by Judas Priest

4.1 Internal representation of a note played on the open A string.
4.2 Part of the algorithm to find overlapping notes on a single string; dividing and sorting the notes into strings is omitted.
4.3 Functions used to select the best matching pair of notes.
4.4 Algorithm for timing error classification, using tolerances to simulate human perception.


1 Introduction

When learning or teaching an instrument, most students and teachers rely on their hearing alone to identify errors. Many students, especially those without previous musical education, struggle at this task and often feel discouraged. Since vision is the primary human sensory system, with much higher resolution and parallelism than hearing, additional visual feedback could improve music education by revealing details about errors and performance patterns. A visualization of the student's performance can help to analyze where, how often, and what kinds of errors are made, and thereby provide feedback in addition to what can be heard, even in the presence of a teacher. Visualization can furthermore provide insight into how the quality of performance develops over time, which in turn can serve as a motivation similar to achievements in a game. This work aims at implementing a prototype of an interactive visualization system that supports students and teachers in music education by providing visual feedback on the student's playing performance. The design procedure was inspired by design study methodology [SMM12] and structured as follows: First, requirements were collected by conducting a formative survey among guitarists of different skill levels as well as a guitar teacher. For a final evaluation, they were later asked to try the finished prototype and give feedback on usability and features.

Figure 1.1: Visualization of the automatically identified errors in a selected recording.


2 Background

This chapter explains background on hardware, software, and musical notation.

2.1 MIDI

The Musical Instrument Digital Interface (MIDI) is a technical standard agreed upon in 1982 for digital communication between electronic music instruments. At a time when each vendor had its own proprietary interface, the Japanese company Roland proposed to create a common standard. MIDI became the first vendor-independent standard and has enabled the rapid growth of electronic music devices since the 1980s. The standard describes the communication protocol, the hardware, and the electrical specifications; the latter two are outside the scope of this thesis.

MIDI is a message-based, asynchronous protocol. Each message is one to three bytes long and describes musical or controller input. The first byte is the status byte and its most significant bit (MSB) is always 1. The status byte can be followed by up to two data bytes. The upper four bits of the status byte encode the MIDI command and the lower four bits the channel number. Consequently, there are 8 different MIDI commands (the MSB of the status byte is fixed), and up to 16 independent channels can be sent over a single MIDI connection. The data bytes' MSB is always 0, hence up to 14 bits of data can be sent within a single message. For reliability and simplicity, we restrict our prototype to the 'note on' and 'note off' commands exclusively. The 'note on' command marks the beginning of a note we should play and the 'note off' command signals that we should stop playing it. Both specify the note (pitch) in the first data byte (the second byte of the message), together with the note's velocity (attack or release, respectively) in the last byte [Rot92]. For example, the three-byte MIDI message “0x92 0x28 0x64” contains the following data:

• Command 'note on' on channel 2
• Note number 40 (the note E2)
• Velocity 100

Drawbacks of the MIDI protocol include its asynchronicity and its small data bandwidth, which lead to timing issues and jitter. Only one message can be sent at any given time, which means multiple channels cannot send simultaneously. The first specification of MIDI only included transmission over a 5-pin DIN connector cable. Over the years the standard has been extended and additional means of transport over USB, Ethernet, FireWire, and more have been standardized. We use the USB-MIDI standard, created shortly after the first release of USB. Nowadays every PC has a USB port and all major operating systems ship with default drivers supporting USB-MIDI devices.
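To make the byte layout described above concrete, the following sketch decodes a raw three-byte MIDI message into its command, channel, note, and velocity fields. The helper and its names are illustrative and not part of our prototype; the channel is reported as the raw lower nibble, matching the example above.

// Hypothetical helper illustrating the MIDI byte layout described above.
interface DecodedMidiMessage {
  command: number;  // upper four bits of the status byte (e.g. 0x9 = 'note on')
  channel: number;  // lower four bits of the status byte
  note: number;     // first data byte: pitch as a MIDI note number
  velocity: number; // second data byte: attack or release velocity
}

function decodeMidiMessage(bytes: Uint8Array): DecodedMidiMessage {
  const [status, data1 = 0, data2 = 0] = bytes;
  return {
    command: (status & 0xf0) >> 4,
    channel: status & 0x0f,
    note: data1 & 0x7f,      // data bytes always have their MSB set to 0
    velocity: data2 & 0x7f,
  };
}

// The example message from the text: 'note on', channel 2, note 40 (E2), velocity 100
console.log(decodeMidiMessage(new Uint8Array([0x92, 0x28, 0x64])));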


The Web MIDI API (WebMIDI) is a proposed W3C standard which aims to enable direct access to local MIDI devices from web applications via a JavaScript API. The 2015 working draft is the technical basis for our prototype [15]. At the time of this writing, WebMIDI is supported natively by browsers based on the Chromium code base; for Firefox and Safari, extensions implementing the API exist. WebMIDI is powerful but also very low-level, often requiring developers to handle the binary format of MIDI messages themselves.
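As a minimal illustration of how low-level WebMIDI is, the following sketch requests MIDI access and logs incoming messages by inspecting the raw status byte; our prototype instead uses the webmidi.js wrapper described in Section 4.3. The snippet assumes a Chromium-based browser with WebMIDI support.

// Minimal sketch of raw Web MIDI access (for illustration only).
navigator.requestMIDIAccess().then((access) => {
  for (const input of access.inputs.values()) {
    // Each message arrives as a Uint8Array of one to three bytes
    input.onmidimessage = (event) => {
      const data = event.data;
      if (!data) return;
      const [status, note, velocity] = data;
      const isNoteOn = (status & 0xf0) === 0x90 && velocity > 0;
      console.log(`channel ${status & 0x0f}`, isNoteOn ? "note on" : "other", note, velocity);
    };
  }
});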

2.2 Hardware

The hardware used for this thesis includes an electric guitar and a MIDI-capable hexaphonic pickup attached to it.

2.2.1 Guitar

The guitar is a fretted string instrument and one of the most popular music instruments of the 20th century [McS95]. Its most common variant has six strings, each independent of the others, allowing multiple notes to be played simultaneously. Pressing a string down on a fret of the fretboard raises the pitch of the note played on that string by one semitone per fret. Electric guitars usually have more than 20 frets, and standard guitar tuning (EADGbe) spaces the strings' pitches five semitones apart, or in one case four. As a consequence, the pitch of a played note is not sufficient to determine which string was plucked and which fret was fingered, since the exact same note can be played at multiple fretboard positions.
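The relation between string, fret, and pitch can be illustrated with a small sketch: in standard tuning the open strings correspond to the MIDI note numbers 40, 45, 50, 55, 59, and 64 (E2 to E4), and each fret adds one semitone. The function below only illustrates the ambiguity described above and is not code from our prototype.

// Open-string MIDI note numbers for standard tuning, from low E (E2) to high e (E4)
const openStringPitches = [40, 45, 50, 55, 59, 64];

// Each fret raises the open string's pitch by one semitone
function pitchOf(stringIndex: number, fret: number): number {
  return openStringPitches[stringIndex] + fret;
}

// The same pitch can be played at several positions, e.g. E3 (MIDI note 52):
console.log(pitchOf(0, 12), pitchOf(1, 7), pitchOf(2, 2)); // 52 52 52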

2.2.2 Fishman Triple Play

The Fishman Triple Play (https://www.fishman.com/tripleplay/) is a MIDI controller for guitars which can be attached to any six-string steel-string guitar, effectively turning it into a MIDI controller. It consists of a wireless controller box with a hexaphonic pickup and sends messages through a USB receiver dongle to the user's computer. All parts are shown in Figure 2.1. The pickup converts the pitch of the played notes into MIDI messages for each string separately, and the controller uses one MIDI channel per string when sending the data to the USB receiver. Proprietary drivers and a software suite exist for Windows and macOS, but as the USB receiver implements the USB-MIDI standard, it can be used with other operating systems and any standard-compliant MIDI device, for example digital audio workstation (DAW) software or software and hardware synthesizers. We chose the Fishman Triple Play for its hexaphonic pickup and low latency. As we want to record the fret and string of played notes in addition to their pitches, we need a polyphonic controller, and the Fishman Triple Play outperformed its competitors in latency experiments published on a Polish web forum for digital guitar enthusiasts [Paj14].


Figure 2.1: Fishman Triple Play mounted onto a guitar.

2.3 Musical Notation for Guitar

This section describes the use of guitar tablature for music notation, as well as software and file formats that can be used to process digital versions of tablature.

2.3.1 Guitar Tablature (Tabs)

Guitar tablature, commonly referred to as tabs, is the most common representation of the fingering required to play a song. In its simplest form, a guitar tab is a plain ASCII text file, but graphical versions exist as well. The strings are represented by horizontal dashed lines and the required fret position by its number, with 0 representing the open string without any fretting. The strings are placed below each other, ordered by descending pitch, so the high e string is at the top. This matches the perspective of guitarists as they hold the guitar while playing. An example of an ASCII tab is provided in Listing 2.1. Tabs are read from left to right, and the order of the notes is given by their horizontal position. Chords are represented by writing the frets at the same horizontal position (for instance at the end of the example tab).


Figure 2.2: TuxGuitar showing the guitar tablature for “Breaking The Law”.

e|---------------------------|-----------------------------|---------|
B|---------------------------|-----------------------------|---------|
G|---------------------------|-----------------------------|--2------|
D|---------------------------|-----------------------------|--2------|
A|--0--2--3--0-2--3--0-2--3--|--------0--------0-----3--2--|--0------|
E|---------------------------|--1--3-----1--3-----3--------|---------|

Listing 2.1: Guitar tablature for “Breaking The Law” by Judas Priest

2.3.2 Tab and Score Editors

Two crucial pieces of information lacking from simple tabs are the rhythm and the duration of the notes. Many guitarists therefore use specialized writing software that shows the tab and common Western notation side by side. Two popular programs are GuitarPro [20a] and its open-source counterpart TuxGuitar [20d] (Figure 2.2). These editors focus on displaying and editing guitar music and often include some form of synthesizer and metronome to play along with. They do not support recording, however, and therefore cannot provide feedback. We made extensive use of TuxGuitar for this thesis, as it is free and can export scores as MusicXML files. We used TuxGuitar to create our own tabs and scores and to convert existing user-made guitar tabs into MusicXML.


2.3.3 MusicXML

MusicXML is an open, patent- and royalty-free standard for music notation [Goo+01]. Inspired by the success of MIDI in the 1980s and its shortcomings as a notation format, an XML-based exchange format was created. It was first released in 2004, is under continuous development, and is now maintained by the W3C Music Notation Community Group. Widespread adoption followed soon after the release and it is currently supported by all major score writing software. Due to its XML basis, MusicXML is easy to read and parse, for example with the DOMParser, a built-in XML parser available in all major web browsers.
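As an illustration of how little code such parsing requires, the following sketch uses the DOMParser to extract string, fret, and duration values from the technical elements of a MusicXML document. The exact elements our prototype reads and the conversion of durations to milliseconds are omitted here; the helper name is our own.

// Sketch of extracting tab information from a MusicXML document with the browser's
// built-in DOMParser; error handling and timing conversion are omitted.
function parseTab(xmlText: string) {
  const doc = new DOMParser().parseFromString(xmlText, "application/xml");
  const notes: { guitarString: number; fret: number; duration: number }[] = [];
  for (const noteEl of Array.from(doc.querySelectorAll("note"))) {
    const technical = noteEl.querySelector("notations > technical");
    if (!technical) continue; // e.g. rests or notes without tab information
    notes.push({
      guitarString: Number(technical.querySelector("string")?.textContent),
      fret: Number(technical.querySelector("fret")?.textContent),
      duration: Number(noteEl.querySelector("duration")?.textContent),
    });
  }
  return notes;
}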

2.4 Limits of Perception – Psychoacoustics

Human perception is limited: both seeing and hearing have latency and bounded resolution. Discrimination experiments with clicking sounds showed an absolute maximum temporal resolution for hearing of 2 to 5 milliseconds (ms) [Kun08]. Experiencing simultaneity is relative: distinct events must be at least about 30 ms apart to be perceived as successive, and for a clear distinction of beats they should be at least 100 to 120 ms apart [Lon02]. When asked to synchronize to the tick of a metronome, humans anticipate the tick; studies show a negative synchronization error of 20 to 80 ms [Asc00]. When synchronizing to complex patterns such as music, the error becomes smaller or even disappears. Additionally, training can reduce the error: trained experts like drummers show no negative synchronization error [Fis09]. For musical performances in a group, research has observed that a certain asynchronicity is always present [Ras88]. The onsets of the tones are referred to as “quasi-simultaneous” and the mean asynchronicity amounts to around 30 to 50 ms.


3 Related Work

3.1 Music Visualization for Guitar

3.1.1 Rocksmith

Rocksmith [20c] is a video game by Ubisoft, which is advertised as software that supports learning to play the guitar. It features a 3D-animated version of the songs' tablature, an amplifier and effect simulator, as well as mini-games for technique training. Its main feature is the use of the player's own guitar as the controller. A special adapter connecting the guitar's output jack to the computer's USB port is required. As the guitar has only a single mono audio output, the resulting signal is not separated into channels; Rocksmith has to guess the correctness of what is played during the analysis of the performed song. It is unable to differentiate between tones consisting of one played note and tones composed of multiple notes. A main advantage of Rocksmith is that it keeps its players motivated by using gamification features like high scores, achievements, and a difficulty that is either set by the user or adaptively computed on the basis of correctly and wrongly played notes.

3.1.2 Piano Rolls

Piano rolls are a visualization technique based on the historical piano rolls used as a music storage medium. Most digital audio workstation (DAW) programs support a “piano roll” visualization like the one shown in Figure 3.1. The vertical axis represents the pitch of the note; for visual orientation, the corresponding piano keys are often drawn next to it. The horizontal axis depicts time, increasing from left to right. Measure lines and subdivisions for single beats are also present. Notes are represented by colored blocks. Stephen Malinowski created a piano-roll-like visualization with his Music Animation Machine [20b]. It makes strong use of colors and shapes to represent pieces performed by whole ensembles. The Music Animation Machine has mainly been used to create videos that have been watched by millions of viewers on YouTube, but not for visual analysis.


Figure 3.1: DAW Piano Roll (Public Domain)

3.2 Statistical Process Control

Statistical process control (SPC) was pioneered by Walter Shewhart in the 1920s to control manufacturing processes with statistical methods [She24]. SPC has since been expanded and can be used to monitor any process whose output conformity can be measured. Control charts are a visualization used for analysis: they plot measurements sampled at different points in time for an inspector to review. We could use control charts to visualize the performance history of our player by considering each recording to be a sample of our process and the number of errors (or the error rate) to be the measurement. A limitation is that control charts assume a process that has been extensively studied and is in statistical control, meaning that the variation in the process is attributed to natural causes. During education we expect continuous improvement, which renders many of the employed statistical methods useless.


3.3 Stacked Graphs for Time Series Data

Stacked graphs are among the oldest and most commonly used visualization techniques for time-series data [WWS+16]. They are a shared-space technique employed to display multiple categories of data on a single common timeline. The use of filled areas simplifies identification but hinders comparison across series [CM84]; one should therefore limit the number of time series. One major drawback is that a single time series cannot use the whole vertical space. This drawback is nullified when plotting correlated time series that always add up to the same value at each point in time, for example percentages of a whole.


4 Requirements and Design

This chapter describes the development process of our visualization system: the requirements analysis, the tasks identified from it, and the design decisions made to implement those tasks.

4.1 Requirements Analysis

Before starting to implement our system, we performed a requirements analysis to identify the crucial points our visualization should address. Our intended target users are guitar students and teachers. We created a questionnaire and asked six guitar students and one professional guitar teacher what they would expect from such a software. None of them had a background or much experience in computer science or visualization. We also asked questions about their playing and learning habits and about their familiarity with musical notation and existing software. The participants of this survey were asked to later participate in our final evaluation, to see whether our prototype meets their requirements.

4.1.1 Teacher

The surveyed guitar teacher, who has been teaching guitar for over 25 years, was opposed to the use of software as an assistance for education. Score and tab editors were acceptable in his opinion, as they are merely a replacement for printed sheets. On the other hand, the teacher was very sceptical of visualizations, stating they are not needed and considering them “highly distracting”. He advocated for the rapid development of the students' hearing without visual help, arguing that “music is perceived by hearing and not looking at it”. We still believe that visual cues can be helpful, especially for students who have not yet developed these important hearing skills, and might even help to learn them.

4.1.2 Students

The surveyed students can be split into two groups based on their self-reported playing time: two were beginners (playing for less than 2 years) and four were intermediates (playing for more than 5 years). All surveyed students were familiar with guitar tabs and chord notation, and except for one beginner, who had just started playing, they reported familiarity with common Western notation. Everyone agreed that a visualization may help them during their education and considered gamification helpful for motivation. The intermediate students proposed the following features they would like to see in a guitar learning software:


• Tab-like visualization of songs • Play interesting songs • Detection of played notes with visual control and a scoring system • Adjustable tempo • Step-by-step exercises with weekly goals

4.2 Tasks and Design Decisions

Based on the features requested by the students and our own thoughts, we identified the following tasks and components:

• Import tabs and saved recordings • Record user data and save it • Compare recording with ground truth data • Visualizations for – Ground truth data – Live performance – Single recording and the errors made – Multiple recordings – The performance history and progress

Step-by-step exercises requested by the students were not implemented, as they were outside the scope of this thesis.

4.2.1 Import Tabs and Saved Recordings

One requested feature was the possibility to “play interesting songs”. Since this notion is highly subjective, we chose to allow users to load arbitrary songs into our prototype by supporting MusicXML files as a possible source for our ground truth data. We created components providing the functionality needed for this task, especially file handling, which includes storage, display, retrieval, and parsing of MusicXML files. During parsing, we simplified the representation of notes by stripping information that was redundant or unnecessary for this work. An example of a note object is shown in Listing 4.1. It describes a note played on the open A string between 1308 and 1710 ms after the song started; the velocity property encodes the volume. We use this data model internally for all data, and saved recordings are simply arrays of such note objects stored as JSON, which makes loading a recording straightforward.


{ "note":{ "guitarString": 4, "fret": 0, "velocity": 82 }, "start": 1308, "end": 1710 }

Listing 4.1: Internal representation of a note played on the open A string.
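The later listings refer to a TimedNote type and a NoteState enumeration. A plausible set of TypeScript definitions matching Listing 4.1 and the code in Listings 4.2 to 4.4 could look as follows; the prototype's actual definitions may differ in detail.

// Plausible definitions for the note model used in the listings (illustrative).
enum NoteState {
  SAME = "SAME",
  SHORT = "SHORT",
  LONG = "LONG",
  EARLY = "EARLY",
  LATE = "LATE",
  DIFFERENT = "DIFFERENT",
  MISSED = "MISSED",
  EXTRA = "EXTRA",
}

interface PlayedNote {
  guitarString: number; // string index; 4 denotes the A string in Listing 4.1
  fret: number;         // 0 for an open string
  velocity: number;     // MIDI velocity, encodes the volume
}

interface TimedNote {
  note: PlayedNote;
  start: number; // milliseconds since the start of the song or recording
  end: number;
  state?: NoteState; // set by the error classification
}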

4.2.2 Recording and Saving

A key feature for software used in education is recording user performances and storing them for later analysis. For this feature, we implemented a recorder with a metronome. When the user hits the start recording button, we begin listening for incoming MIDI 'note on' and the corresponding 'note off' messages. We keep track of the note currently played on each string and update its end time until a new 'note off' or 'note on' message arrives for this channel; we then update the channel state to represent the new note. Updating the end time of the currently sounding note allows for an aesthetic animation effect in the live visualization. After finishing the recording, we clean the recorded notes by removing notes which were probably recorded by mistake. For this, we use a very simple heuristic: notes that are shorter than 50 ms (approx. 1200 bpm) and notes with a velocity value of less than 20 are filtered from the recording. These thresholds should still allow for most songs that beginners will play. When saving the recording, we send the notes, the name of the song, and the current date to the backend server and save them as a JSON file on the user's computer. When a song is loaded by the backend, the recordings linked to that song appear in the recordings section; if no song was selected, the user can choose to save the recording as a new song's ground truth.
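The following sketch summarizes the recording logic and the cleaning heuristic described above. The function names and the way the fret is passed in are illustrative; in the prototype, the fret is derived from the received pitch and the string assigned to the MIDI channel. The sketch uses the TimedNote type from the sketch above.

// Simplified sketch of the recording logic described above (names are illustrative).
const active = new Map<number, TimedNote>(); // currently sounding note per MIDI channel
const recorded: TimedNote[] = [];

function onNoteOn(channel: number, fret: number, velocity: number, time: number) {
  finishNote(channel, time); // a new note on this channel ends the previous one
  active.set(channel, {
    note: { guitarString: channel, fret, velocity },
    start: time,
    end: time,
  });
}

function onNoteOff(channel: number, time: number) {
  finishNote(channel, time);
}

function finishNote(channel: number, time: number) {
  const note = active.get(channel);
  if (!note) return;
  note.end = time;
  recorded.push(note);
  active.delete(channel);
}

// Cleaning heuristic from above: drop very short notes (< 50 ms) and very quiet ones
function cleanRecording(notes: TimedNote[]): TimedNote[] {
  return notes.filter((n) => n.end - n.start >= 50 && n.note.velocity >= 20);
}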

4.2.3 Error Classes

To provide meaningful insight into the user's performance and their learning progress, we need an algorithm to compare the recording to the ground truth data. To understand the process, one needs to know which errors can possibly be made while playing. We identified three classes of errors that can occur:

• Note errors
• Timing errors
• Envelope errors

Note errors occur when the note's pitch, or in our case the string-fret position of the note, does not match the one in the ground truth data. If one considers rests to be notes as well, then missing a note and playing notes that are not part of the score can also be considered note errors. We consider note errors to be the worst error class, as it is crucial for a player to actually play the notes they are supposed to play. Moreover, during learning, most students first try to hit all the notes before practicing other aspects.


Timing errors, or rhythm errors, occur when playing the correct note but missing its timing. These errors can be subdivided into two categories:

• Onset errors: missing the onset (start) of a note, playing it either too early or too late
• Note length errors: playing a note too short or too long

Timing errors are often not corrected by the user, subsequently leading to additional errors. They are the most common errors in our system and the most difficult to eliminate.

Envelope errors are mistakes in a note's volume and its changes over time. For the scope of this thesis, we consider notes to have the same volume over their whole duration. We make this simplification as we lack the necessary information in our ground truth data. Even if our data included dynamics information, the envelope recorded by our pickup generally does not match the envelope of the sound. This mismatch occurs because electric guitars are played through amplifiers and effect devices, which change the envelope of the sound, especially with common effects such as distortion.

4.2.4 Automatic Error Classification

For this thesis, we designed an algorithm that classifies the recorded notes into error types by comparing them to the ground truth data. Our first approach was to compare the recordings at each time subdivision. We converted the data into a time series, with each point in time being associated with the current state (note played or no note). We then computed the symmetric difference between the time series of the ground truth and that of the current recording. This allowed for a quick detection of correct time intervals, yet additional processing of the notes was needed. We later re-used parts of this approach to create the comparative visualization that shows multiple recordings at once.

We then tried a direct comparison of the notes. Here, the main goal was to mimic human visual comparison behaviour in our algorithm and to match each recorded note to a single ground truth note for pitch and timing analysis. This approach looked promising and we abandoned our first one to focus on it instead. The algorithm can be split into four main parts: finding notes that are missing from the recording, finding notes that are not part of the score and were played additionally, comparing the notes that overlap each other, and merging those parts back into one list of errors.

Finding Overlapping Notes

We found that handling each guitar string independently was an effective approach, as the notes played on a single string cannot overlap and, after sorting, the start and end times are monotonic: for all i, note_i.end <= note_{i+1}.start. Our approach (Listing 4.2) takes each recorded note and marks all ground truth notes that ended before the recorded note started as missed. If the notes overlap, the pair is saved to an overlap array for further analysis. We then check whether the following ground truth notes still overlap the recorded note. The last overlapping note and its index into the ground truth array are then saved.


When we find the first non-overlapping ground truth note, we advance to the next recorded note and resume our search at the saved index. When we encounter a miss, we have to check whether the ground truth note was already matched and, if so, skip marking it as missed. If no overlap is found for a given recorded note, it is marked as an additional, or surplus, note.

// Variables to save the indexing position of the groundTruth array
let resumeSearch_pos = 0;
// The last successfully matched note
let skipMiss: TimedNote | null = null;

// Iterate over each recorded note in the array
stringArray.forEach((recordedNote) => {
  let j = resumeSearch_pos; // Notes before this position are already handled
  let isOverlapping = false; // Before checking, the recorded note does not overlap
  // Find overlapping notes
  while (j < gt.length) {
    const gtNote = gt[j];
    // All groundTruth notes that end before the recorded note starts are missed
    if (gtNote.end < recordedNote.start) {
      // Some ground truth notes are checked more than once,
      // assert they were not previously matched
      if (gtNote !== skipMiss) {
        missed.add({ ...gtNote, state: NoteState.MISSED });
        skipMiss = null;
      }
      j++;
    } else if (
      gtNote.start <= recordedNote.end &&
      gtNote.end >= recordedNote.start
    ) {
      // Notes overlap
      skipMiss = gtNote; // Skip this ground truth note in the missing check of future passes
      isOverlapping = true; // Mark the recorded note as overlapping
      overlapping.push({ gt: gtNote, rec: recordedNote });
      resumeSearch_pos = j; // Save the current index into the ground truth array
      j++;
    } else {
      // gtNote.start > recordedNote.end, move on to the next recordedNote
      break;
    }
  }
  // We did not find a single overlapping ground truth note
  if (!isOverlapping) {
    extra.push({ ...recordedNote, state: NoteState.EXTRA });
  }
});

// If our recording ends before the last ground truth note, all following ones are missing
for (let j = resumeSearch_pos; j < gt.length; j++) {
  if (skipMiss) {
    skipMiss = null;
    continue;
  }
  missed.add({ ...gt[j], state: NoteState.MISSED });
}

Listing 4.2: Part of the algorithm to find overlapping notes on a single string; dividing and sorting the notes into strings is omitted.

Handling of Overlapping Notes

After the first pass through the data, we have divided the notes into three distinct sets: ground truth notes that were missed, recorded notes that are surplus, and an array of pairs of overlapping notes. Handling the overlapping notes in an intuitive way was the hardest part of error detection, since one recorded note can overlap multiple ground truth notes and one ground truth note can overlap multiple recorded notes, while our classification needs a one-to-one comparison. We employed an iterative approach, beginning with the trivial method of using the first matching pair as our best fit. We then carefully evaluated each iteration of our algorithm, adding more complex methods in each step, resulting in the following algorithm:

1. Create two maps: one where the recorded notes are the keys and the values are the sets of overlapping ground truth notes, and one with the reverse mapping.
2. We first iterate over all entries in the first map. This map has a recorded note as key and an array of ground truth notes as value; the order does not matter. For each entry of this map, we perform the following steps:
   a) For the recorded note, we choose the best matching candidate ground truth note (Listing 4.3). The best match is determined by the closest start time difference among the notes with the matching fret.
   b) We run the same best matching algorithm on the candidate note. The result is either the recorded note from the previous step or a better match, which we then use as the actual best match.
   c) We add this matched pair to the result array and delete the notes from all possible candidate sets.
   d) Last, we mark the remaining notes from the two candidate sets which were not chosen as possibly missed (in case of ground truth notes) or possibly extra notes.
3. We repeat this process for the second map; afterwards, all recorded and ground truth notes have been processed at least once.
4. Next, we clean up the notes which were marked as possibly missed or extra but were matched during a later pass over the data, and mark the rest accordingly.
5. Last, we run the error classification algorithm (Section 4.2.4) on the matched pairs.


function findBestMatch(
  baseNote: TimedNote,
  candidates: Set<TimedNote>
): TimedNote | undefined {
  if (candidates.size === 0) {
    return;
  }
  const sameNotes: TimedNote[] = [];
  const otherNotes: TimedNote[] = [];
  // First check if it is the same note
  for (const note of candidates) {
    if (baseNote.note.fret == note.note.fret) {
      sameNotes.push(note);
    } else {
      otherNotes.push(note);
    }
  }
  // Prefer candidates on the same fret; otherwise fall back to the remaining ones
  const notesToConsider = sameNotes.length ? sameNotes : otherNotes;
  return findBestMatchBasedOnTime(baseNote, notesToConsider);
}

function findBestMatchBasedOnTime(
  baseNote: TimedNote,
  candidates: TimedNote[]
): TimedNote {
  const deltas = candidates.map((value, i) => ({
    i,
    startOffset: Math.abs(baseNote.start - value.start),
    endOffset: Math.abs(baseNote.end - value.end),
  }));

  // The candidate with the smallest onset difference is the best match
  let minimum = Infinity;
  let bestMatch = 0;
  deltas.forEach((value) => {
    if (value.startOffset < minimum) {
      bestMatch = value.i;
      minimum = value.startOffset;
    }
  });
  return candidates[bestMatch];
}

Listing 4.3: Functions used to select the best matching pair of notes.

Error Classification Algorithm

As described in Section 2.4, human perception is limited; we therefore need to consider these effects when trying to automatically classify the timing errors between the recording and the score. The algorithm is shown in Listing 4.4. We decided to use the upper bound of the “quasi-simultaneous” time interval as the tolerance for simultaneous perception.


The rhythm of a piece is determined by the onsets of the notes; the start times of the played notes are therefore more important for a song to feel right than the durations of the notes. An additional challenge arises from the fact that two consecutive notes in music notation have zero pause between them, whereas on a physical instrument the player needs to perform time-consuming movements to play the next note. As the start of a note is more important than its duration, the actually played duration of a given note has to be shorter than its nominal duration.

export function equalNotes(expectedNote: TimedNote, actualNote: TimedNote): NoteState {
  if (
    expectedNote.note.fret !== actualNote.note.fret ||
    expectedNote.note.guitarString !== actualNote.note.guitarString
  ) {
    // No timing analysis needed, the pitch doesn't match
    return NoteState.DIFFERENT;
  }

  // A delta of 50 ms can be considered quasi-simultaneous for human hearing
  const psAcousticSimultanous = 50;
  const startDelta = expectedNote.start - actualNote.start;

  // It is more important to hit the start than the end of the note for rhythm perception
  if (Math.abs(startDelta) < psAcousticSimultanous) {
    const endDelta = expectedNote.end - actualNote.end;

    if (Math.abs(endDelta) < psAcousticSimultanous) {
      // Start and end are quasi-simultaneous
      return NoteState.SAME;
    }
    if (endDelta < 0) {
      return NoteState.LONG;
    }

    // We expect notes to be shorter than their nominal duration from the score,
    // as the player needs time to play the next one perfectly.
    // 100 ms should be enough to change to the next note
    if (endDelta > 100) {
      return NoteState.SHORT;
    } else {
      return NoteState.SAME;
    }
  }
  // The start time is outside the quasi-simultaneous range
  if (startDelta < 0) {
    return NoteState.LATE;
  } else {
    return NoteState.EARLY;
  }
}

Listing 4.4: Algorithm for timing error classification, using tolerances to simulate human perception.


Figure 4.1: Our main visualization approach, which combines ideas from guitar tabs and piano rolls and serves as the basis for most of our visualizations. The six guitar strings are ordered vertically as seen by a guitar player, time is mapped to the horizontal axis, and notes are drawn as rectangles on the string they are played on, labelled with the number of the fingered fret. An overview at the top allows navigation using the focus-and-context technique.

4.2.5 Visualizations

For all visual analysis tasks, except for the historical progress data, which uses its own visualization, we created a multi-granular visualization (Figure 4.1). It is based on the piano roll approach, but adapted to display notes in a tab-like view. During our requirements analysis, we noticed that our target users preferred a tab-like visualization due to its familiarity. We therefore chose to combine the layout of guitar tabs with the block-like encoding of notes used in piano rolls. Vertically, we arranged the strings in the same way as they are ordered in a tab; this way, each string corresponds to a note pitch or piano key in a standard piano roll. The horizontal axis represents time, and we use a linear mapping, since we want the user to recognize the note lengths and the rhythm by simply looking at our visualization. When the ground truth data contains information about the measures, we draw lines to indicate them at their corresponding positions. We evaluated the use of beats instead of seconds as the unit of the horizontal axis, but rejected the idea, as it would provide little benefit to our visualization and would require an algorithm that converts the timestamps of the MIDI messages to beats and their subdivisions. The notes are displayed as rectangular blocks drawn on top of the strings, with rounded corners to make them easier to distinguish when two are directly adjacent, as well as for aesthetic reasons. The note blocks' horizontal position and extent are proportional to their position and note length in the score or recording. A key element is the border we draw around the rectangles, as it prevents adjacent notes from converging into one another, which would make overlapping notes appear as a single, longer note. Unlike a piano, the guitar can play multiple notes on a single string; we visualize this by using the same approach as standard tabs and simply writing the number of the fret into the rectangle.
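To illustrate how a single note block is rendered, the following sketch draws one rectangle with its fret label using the Canvas API. The scale functions, sizes, and colors are placeholders; the prototype additionally rounds the corners as described above.

// Sketch of drawing a single note block with the Canvas API (constants are illustrative).
function drawNote(
  ctx: CanvasRenderingContext2D,
  note: TimedNote,
  msToX: (ms: number) => number,    // linear time scale, e.g. a d3 scaleLinear
  stringToY: (s: number) => number  // vertical position of the note's string
) {
  const x = msToX(note.start);
  const width = msToX(note.end) - x;
  const y = stringToY(note.note.guitarString);
  const height = 20;

  ctx.fillStyle = "#cccccc";   // ground truth notes are drawn in neutral grey
  ctx.strokeStyle = "#ffffff"; // the border keeps adjacent notes visually separated
  ctx.fillRect(x, y, width, height);
  ctx.strokeRect(x, y, width, height);

  // Label the rectangle with the fingered fret, as in a guitar tab
  ctx.fillStyle = "#000000";
  ctx.fillText(String(note.note.fret), x + 4, y + height / 2 + 4);
}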


Multi-Granularity

For our multi-granular approach, we evaluated using mini visualizations of the songs in the ground truth and recordings lists, but we decided against this as they were expensive to compute and impractical when displayed in such a small space. Instead, we focused on visualizing only the currently selected song. We used the focus-and-context approach with two connected visualization levels. At the top, we visualize an overview of the whole song, fitting all notes to the available width of the screen. Below, we show a magnified view of a ten-second-long part of the song, using a transparent rectangle to highlight this part in the overview above. The user can either brush the overview or pan the focus visualization to select or move the displayed part of the song.

Color-Coding for Different Sets of Notes

When using overlays for a comparative visualization of multiple data sets, we need to be able to distinguish which data belongs to which set. For this reason, we use color coding. We have three sets of notes belonging to each song and its selected recording:

• Ground truth data
• Recorded data
• Error representation

For the ground truth data set, all notes are part of the score and there is no difference in meaning between the notes. We therefore decided to color all notes in the same, neutral color, grey, as shown in Figure 4.1. The ground truth data set is part of all comparative visualizations and we always draw it in the background. The recorded data is always displayed on top of the ground truth data, so it has to use a different color to be distinguishable. Since the purpose of this visualization is an exploratory analysis of the recording without suggestions by our system, we draw those notes on top of the ground truth data in a semi-transparent light blue (Figure 4.2). The representation of the automatically identified errors is more complex, as the errors are divided into three classes (Section 4.2.3). We considered giving the two subdivisions of timing errors separate colors, but instead opted to write the exact error type into labels on top of the note rectangles. Since we decided to ignore the envelope category, we were left with two error categories and one category for correct notes. An obvious coloring scheme would be the traffic light colors red, yellow, and green for note errors, timing errors, and correct notes. Many users would intuitively understand it, but we wanted to design our visualization to be more inclusive for persons with color vision deficiency by default. We therefore chose three colors from the cividis color map [NAR18]. The resulting visualization is shown in Figure 1.1. We also experimented with different shades of color for each string, but since the strings are already vertically separated, this additional encoding was unnecessary, as the positional information is enough to recognize their meaning.
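As a sketch of how three such colors could be obtained, the snippet below samples the cividis interpolator provided by d3-scale-chromatic at three points. The exact sample points used in our prototype are not documented here and are chosen purely for illustration.

import { interpolateCividis } from "d3-scale-chromatic";

// Three well-separated colors from the cividis color map for the categories
// "correct", "timing error", and "note error" (sample points chosen for illustration).
const noteColors = {
  correct: interpolateCividis(0.1),
  timingError: interpolateCividis(0.5),
  noteError: interpolateCividis(0.9),
};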


Figure 4.2: Visualization of a recording (blue) overlaid on top of the ground truth data (grey). This allows a direct comparison of each played note to its correct version.

Figure 4.3: A heat map of all recordings (red), visualizing the times where a note was played. By looking at all recordings at once, we can see general patterns in the player's performance.

Comparative Overview of All Recordings

For the comparative visualization of all recordings of a selected song, we use a heat map. The heat map is based on the binning algorithm which was first implemented as a candidate for automatic error detection (see Section 4.2.4). We decided to reduce the dimensionality of the data for this visualization: we do not differentiate which fret was played but instead fall back on a simple binary classification, 1 if a note was played on the string at that time and 0 otherwise, for each time bin of each string of each recording. For each recording, we add the values of the corresponding bins into an aggregation array and then divide all bins by the number of recordings. The heat map is drawn by iterating over every bin and drawing a small rectangle at the time it represents. The value of each bin is used to interpolate a linear monochromatic color gradient. The resulting visualization can be seen in Figure 4.3.
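A simplified sketch of the binning and aggregation for one string is shown below; the bin size and function names are illustrative and the handling of multiple strings is omitted.

// Sketch of the binning and aggregation described above for a single string.
const binSize = 50; // milliseconds per time bin (illustrative)

function binRecording(notes: TimedNote[], numBins: number): number[] {
  const bins = new Array(numBins).fill(0);
  for (const n of notes) {
    const first = Math.floor(n.start / binSize);
    const last = Math.min(numBins - 1, Math.floor(n.end / binSize));
    for (let b = first; b <= last; b++) bins[b] = 1; // binary: a note sounds in this bin
  }
  return bins;
}

// The aggregated bin value is the fraction of recordings in which a note was
// played at that time; it is used to look up the color gradient.
function aggregate(recordings: TimedNote[][], numBins: number): number[] {
  const sum = new Array(numBins).fill(0);
  for (const rec of recordings) {
    binRecording(rec, numBins).forEach((v, i) => (sum[i] += v));
  }
  return sum.map((v) => v / recordings.length);
}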


Figure 4.4: Our stacked area chart showing the changes in error rates for 12 recordings of “Breaking The Law“.

Learning Process Visualization

For the visualization of the learning process, we had planned to use statistical process control, as one of its key methods is the visualization of a process via control charts. After analysing our data, we found that although playing a song perfectly (or consistently badly) can be considered a stable process, the process is not stable enough while the song is still being learned. We further experimented with control chart techniques to monitor the amount (or the ratio) of errors made over time. Since we do not have a binary error classification, we decided to use a percentage stacked area chart to display the three error types. At the bottom, we plot the (almost) perfect notes, as we expect this portion to rise as players get better. In the middle section above the perfect notes, we place the timing errors, and at the top the note errors. We use the same coloring scheme as in the previous visualization, so that users can quickly understand its meaning. An example of this visualization is provided in Figure 4.4.
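A sketch of how the per-recording values for this chart could be computed is shown below. The grouping of note states into the three bands follows the error classes from Section 4.2.3, but the exact mapping used in the prototype is an assumption.

// Sketch of the per-recording error rates plotted in the stacked area chart.
interface ErrorRates {
  correct: number;
  timingErrors: number;
  noteErrors: number;
}

function errorRates(classified: TimedNote[]): ErrorRates {
  const total = classified.length || 1;
  const share = (states: NoteState[]) =>
    classified.filter((n) => n.state && states.includes(n.state)).length / total;
  return {
    // The three shares add up to 1, so the stacked chart always fills the vertical space
    correct: share([NoteState.SAME]),
    timingErrors: share([NoteState.EARLY, NoteState.LATE, NoteState.SHORT, NoteState.LONG]),
    noteErrors: share([NoteState.DIFFERENT, NoteState.MISSED, NoteState.EXTRA]),
  };
}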

4.3 Technical Implementation

We implemented the prototype as a web application consisting of a visualization frontend and a backend server. Both are implemented as JavaScript / TypeScript applications, as our goal was to support as many platforms and system configurations as possible and to use the same programming language and tooling for both components. For storage and retrieval, we built a backend server that handles the retrieval and storage of ground truth tabs and user performance recordings. The tabs and recordings are saved as files on the user's computer and our server provides access via an HTTP API built with the Express library. This server is necessary because web applications are not allowed to access the computer's file system directly. Our frontend was built using React.js (https://reactjs.org/) as the DOM manipulation framework, combined with TypeScript (https://www.typescriptlang.org/) and JavaScript for the application logic. Other notable dependencies include the visualization library D3.js (https://d3js.org/), used for its scale functions and color schemes, and webmidi.js (https://github.com/djipco/webmidi), which provides a high-level abstraction of the Web MIDI API, hiding the handling of raw MIDI messages and providing an easy-to-use API for the developer. The visualizations were created using the Canvas API in order to render the live visualization at 60 frames per second, which would not be possible with scalable vector graphics (SVG).
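As a rough sketch of such a backend, the snippet below defines two Express routes for saving and listing recordings as JSON files. The routes, file layout, and port are illustrative and do not describe the prototype's actual API.

import express from "express";
import { promises as fs } from "fs";

// Minimal sketch of a storage backend (routes and file layout are illustrative).
const app = express();
app.use(express.json());

// Save a recording as a JSON file next to its song
app.post("/recordings/:song", async (req, res) => {
  const file = `data/${req.params.song}-${Date.now()}.json`;
  await fs.writeFile(file, JSON.stringify(req.body));
  res.json({ saved: file });
});

// List the recordings linked to a song
app.get("/recordings/:song", async (req, res) => {
  const files = await fs.readdir("data");
  res.json(files.filter((f) => f.startsWith(req.params.song)));
});

app.listen(3000);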

5 Evaluation

During the development of the prototype, we followed a rapid prototyping approach. Each feature was first implemented as a bare minimum version and directly evaluated in meetings between student and supervisor. We then added missing features as we needed them. This chapter discusses the limitations we found along the way and presents the final user study we conducted to gather additional feedback from guitarists.

5.1 Limitations

The prototype implementation we developed has several limitations, some caused by the hardware we use and some by our design decisions.

5.1.1 Hardware Limitations

Some common guitar techniques that are not properly recognized by the Fishman Triple Play controller are:

• Hammer-ons and pull-offs are not fully supported • Sliding: we receive many wrong notes (noise) in between the actually played ones, as the pickup cannot distinguish between sliding over notes and playing notes • Palm-muting: the Fishman Triple Play cannot distinguish between muted notes and their non-muted counterparts • Dead notes: we get a note on the string with an unreliable fret information, as dead notes are a percussive element and the pickup tries to assign a pitch to them

Further investigating these hardware limitations and testing solutions such as more sophisticated software-side filtering or the combination with other data sources such as audio is left for future work.


5.1.2 Software Limitations

When parsing the MusicXML files, we ignore some annotations, for example “let ring” (L.R.), which describes letting a note fade out naturally; this cannot be distinguished from a note with a long duration. The dynamics of the performances, describing how hard or loud a note is played, are not considered in our design for two reasons: First, many tabs lack information about the volume of each note. Second, most players do not need assistance in matching the volume of their playing. Handling any MIDI messages besides 'note on' and 'note off' as well as a synthesized MIDI playback were deliberately left out of the scope of this thesis, as the effort to implement them would have been too large for the given time frame.

5.2 Qualitative User Study

A quantitative study was out of the scope of this thesis, as it would require a larger group of users and more time to conduct. Instead, we performed a small qualitative study with five participants. All participants were guitarists who had already participated in our requirements analysis. Unfortunately, no guitar teacher was available to participate in this evaluation. The setup of our user study consisted of the guitar with the mounted Fishman Triple Play controller, connected to a laptop running our visualization. We connected the laptop to a projector, as the laptop's small screen would make it too difficult to perceive details while playing the guitar. The evaluation process was as follows:

• Import a tab chosen by the user
• Record multiple performances and review the visualizations
• Present the error progress chart
• Present the heat map comparison
• Afterwards, ask the user to provide feedback

We did not tell the users which limitations our software had, as we wanted them to try out everything and not constrain themselves to only playing what we told them would work. This allowed us to better investigate what users were expecting or missing.

5.2.1 Selecting a Ground Truth Tab and Creating Recordings

The import process was liked by our participants for its simplicity and for the possibility to import tabs written by other software. The visualization of the score was intuitively understood and the participants liked the design choices.


Figure 5.1: Recording of “Smoke on the Water” by Deep Purple; the participant complained that the notes' onsets were always too late.

The recording process was criticized by all participants. They stated that “it doesn't feel right” and complained about the noticeable latency between playing a note and it being displayed. This was caused by the limited computing resources of the computer used during the evaluation, which was not able to handle the high frame rate we used for the live visualization. Before continuing with the evaluation, we therefore restricted the length of our songs and recordings to a time span of around 30 seconds to improve the performance of our system. Still, our participants were not comfortable with the live visualization during the recording. Their main critique was the automatic scrolling to the right, which caused the moving notes to appear blurry. They also disliked the limited look-ahead area. At first, no participant was able to play along with the live visualization, but after we asked them to try it multiple times, they got somewhat accustomed to it. Still, they demanded to “just show the whole tab statically and let the user play”. At our participants' request, for the rest of the evaluation we chose songs they could already play by heart and asked them to synchronize their playing to the metronome tick.

5.2.2 Exploring The Recording

The visualization of a selected recording on top of the ground truth data was well received by our users. They pointed out that they liked the color scheme and the focus-and-context approach, as they could quickly jump to other places in the song. Even without running our error detection algorithm, the users could spot some of their mistakes, especially when hitting more strings than intended or when not muting strings and thereby playing additional noise. When reviewing their performances, the users complained that common techniques like bending, hammer-ons, pull-offs, and especially dead notes were not properly recognized. See Section 5.1 for further details on those limitations.

5.2.3 Automatic Error Detection

The visualization of the errors that users allegedly made revealed a major problem in our prototype implementation. Even though the users synchronized their playing to the built-in metronome, the participants were always shown to be late with the onsets of the notes, as can be seen in Figure 5.1. We could correct this error by aligning the notes by a fixed offset (Figure 5.2). For the rest of the evaluation, we shifted all recorded notes 90 ms towards the start when loading them.


Figure 5.2: The same recording as in Figure 5.1 but all notes are moved 90ms towards the start.

Figure 5.3: Visualization of a user’s performance during the study, which they found “discouraging“, despite making progress.

Another source of confusion was notes that were marked as perfect but did not visually match the ground truth data. We had to explain that our matching algorithm tries to emulate human perception by using reasonable thresholds for what is considered correct. Afterwards, the participants agreed with our error detection and visualization, except for some rare edge cases, and mostly accepted the visualized errors as their actual playing errors.

5.2.4 Learning Progress Chart

After recording at least five performances, we showed the participants our visualization of the learning process. The users first assumed that it was a visual representation of a single song divided into measures; they did not read the description carefully and we needed to explain that it is in fact a visualization of all their recordings. After this explanation, the users reported having no problems understanding the chart, attributing this to the re-use of the same color scheme that we use for the error classes in our error visualization. Although the users could recognize that the amount of perfect notes increased with every recording, they stated it was “discouraging” and “demotivating” to see the number of wrong notes that they supposedly played. Furthermore, they said the difference in error rates between recordings was small and the improvement barely visible, and asked for a more forgiving visualization. An example is provided in Figure 5.3.

5.2.5 Heat Map Visualization of Multiple Recordings

Not a single participant understood what the heat map was supposed to show and we needed to explain this feature in detail. We suspect this was due to the students having no experience in data visualization.


The participants said that in its current form, the heat map is only usable when consecutive notes are not played on the same fret, as the notes blend too much into each other, as can be seen in Figure 4.3. They also disliked the red color and the linear gradient from transparent to opaque. In hindsight, we should have chosen a gamma-corrected gradient and a more neutral color. Still, the participants could recognize which notes they commonly missed and during which parts of the song they played “messy”.

5.2.6 User Feedback

The last part of our evaluation was an informal interview with the participants, where they were asked to provide overall feedback and to list features they considered mandatory for an educational visualization system. Below is a short summary of their answers.

Features they liked and considered well implemented included:

• Loading and exploring previous recordings
• The automatic classification, which was considered to work well (after we fixed the issue with lag caused by a constant latency)
• The visualization of the progress

They missed some features they considered mandatory:

• Latency discovery and automatic adjustment were considered crucial; our participants stated that they “would not use this software at all without this feature”
• Live playback of the ground truth notes
• More support for common guitar techniques
• A more familiar notation during the recordings

Improvements that the participants suggested were the following:

• The participants would like to be able to customize the coloring scheme
• For a better recording experience, they requested backing track playback, in particular drum and bass tracks

Interestingly, no participant commented on the missing dynamics and volume analysis, confirming our assumption that this feature is less important than the ones we implemented.


6 Conclusion and Outlook

We created a guitar performance analysis prototype based on requirements collected from guitarists. Our approach uses MIDI data from a polyphonic MIDI pickup that is mounted to the user’s guitar. This data is then visualized alongside a ground truth tablature to allow users to compare recorded and true notes, detect different types of errors, and see general patterns and progress. In a qualitative user study, we tested whether our visualization approach can help guitar students find and recognize errors in their performances.

During the evaluation, our prototype was criticized mainly for its technical shortcomings and its limited support for different playing techniques, which were left out on purpose. The visualization approaches for the retrospective analysis of single performances and the visualization of the learning progress were well received. Performance issues aside, the users liked recording and analyzing their own performances, but rejected the use of our live visualization approaches for sight-reading. The use of a heat map to visualize the variation in timing was not intuitive, and the merging of adjacent notes in the error visualization was a major drawback. We still believe that this approach could prove useful, possibly by restricting the visualization to a single, selected note, which would counteract the smearing. We have presented an automated error classification algorithm which worked very well despite using simple heuristics. For some edge cases it produced erroneous classifications, but their occurrence was low enough not to attract the attention of our study participants.

Outlook

Generalization to other fretted string instruments, like a four- or five-stringed bass, guitars with more than six strings, or similar instruments such as banjo or mandolin, is straightforward, as is support for different tunings. For fretless instruments like fretless bass and violin, one would need to handle the always imperfect pitch in addition to the notes, for example by mapping it to the nearest MIDI note with an additional pitch bend command. All of those instruments would need to be supported by additional MIDI pickups though, as the Fishman Triple Play only supports six-stringed guitars and four-stringed bass guitars. Plucking, striking, or bowing make no difference, as we currently do not distinguish the envelope of the tone; future work could explore this additional source of errors. The participants of our user study complained that many guitar techniques were not supported by our prototype; implementing all features of the MIDI controller and searching for solutions to the hardware limitations would surely attract more users. Some of the hardware limitations could potentially be overcome by using additional data such as the guitar’s audio output.
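
A possible mapping from a continuous pitch to the nearest MIDI note plus a pitch bend command is sketched below. It assumes the common default pitch-bend range of ±2 semitones and is not part of our prototype; the function name is illustrative.

```typescript
// Sketch: map a measured frequency (e.g., from a fretless instrument) to the
// nearest MIDI note plus a 14-bit pitch bend value. Assumes the default
// pitch-bend range of ±2 semitones.
function frequencyToMidi(frequencyHz: number): { note: number; pitchBend: number } {
  const exact = 69 + 12 * Math.log2(frequencyHz / 440); // fractional MIDI note
  const note = Math.round(exact);
  const deviation = exact - note; // deviation in semitones, within ±0.5
  // 8192 is the pitch bend center; ±8192 covers ±2 semitones by default.
  const pitchBend = Math.round(8192 + (deviation / 2) * 8192);
  return { note, pitchBend: Math.max(0, Math.min(16383, pitchBend)) };
}
```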


Future work should certainly take into account the performance and latency of its visualization system, for example by utilizing common techniques from game development, or by computing an alignment to the ground truth after a recording is finished.
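
One simple form of such a post-hoc alignment is to estimate a constant latency as the median onset offset between matched recorded and ground-truth notes and to subtract it before classification. The sketch below assumes that such note pairs are already available; the function name and input shape are illustrative.

```typescript
// Sketch: estimate a constant recording latency as the median onset offset
// between matched recorded and ground-truth notes (pairing assumed to exist).
function estimateLatency(pairs: { playedOnset: number; truthOnset: number }[]): number {
  if (pairs.length === 0) return 0;
  const offsets = pairs
    .map(p => p.playedOnset - p.truthOnset)
    .sort((a, b) => a - b);
  const mid = Math.floor(offsets.length / 2);
  return offsets.length % 2 === 1
    ? offsets[mid]
    : (offsets[mid - 1] + offsets[mid]) / 2;
}

// The estimated latency can then be subtracted from every recorded onset
// before the error classification runs.
```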

Declaration

I hereby declare that the work presented in this thesis is entirely my own and that I did not use any other sources and references than the listed ones. I have marked all direct or indirect statements from other sources contained therein as quotations. Neither this work nor significant parts of it were part of another examination procedure. I have not published this work in whole or in part before. The electronic copy is consistent with all submitted copies.

place, date, signature