A Time-Travel Debugger for Web Audio Applications
ABSTRACT

Developing real-time audio applications, particularly those with an element of user interaction, can be a challenging task. When things go wrong, it can be difficult to locate the source of a problem when many parts of the program are connected and interacting with one another in real-time.

We present a time-travel debugger for the Flow Web Audio framework that allows developers to record a session interacting with their program, play back that session with the original timing still intact, and step through individual events to inspect the program state at any point in time. In contrast to the browser's native debugging features, audio processing remains active while the time-travel debugger is enabled, allowing developers to listen out for audio bugs or unexpected behaviour.

We describe three example use-cases for such a debugger. The first is error reproduction, using the debugger's JSON import/export capabilities to ensure the developer can replicate problematic sessions. The second is using the debugger as an exploratory aid rather than a tool for error finding. Finally, we consider opportunities for the debugger's technology to be used by end-users as a means of recording, sharing, and remixing ideas.

We conclude with some options for future development, including expanding the debugger's program state inspector to allow for in situ data updates, and a visualisation of the current audio graph similar to existing Web Audio inspectors.

CCS Concepts

• Applied computing → Sound and music computing
• Software and its engineering → Software maintenance tools

Keywords

Time-travel debugging, Reverse debugging, Web Audio API, Declarative programming, Exploratory programming, Interactive audio applications

Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: owner/author(s). Web Audio Conference WAC-2021, July 5–7, 2021, Barcelona, Spain. © 2021 Copyright held by the owner/author(s).

1. INTRODUCTION

Developing real-time audio applications, particularly those with an element of user interaction, can be a challenging task. When things go wrong, it can be difficult to locate the source of a problem when many parts of the program are connected and interacting with one another in real-time.

A traditional debugger allows the programmer to mark a breakpoint that pauses program execution and inspect the program state. The programmer can insert multiple breakpoints and step forward to the next one by running the program up until that point. Reverse debugging builds on this idea by allowing the programmer to step backwards from a breakpoint to uncover the cause of an error. Time-travel debugging provides both of these capabilities, allowing the programmer to step both forwards and backwards in time. These debuggers provide greater context when debugging, allowing the developer to understand how a program got to a particular state and observe changes to that state over time.

Time-travel debugging in JavaScript has been an area of interest for some time now. The Jardis debugger is a low-level time-travelling debugger for the Node.js runtime that works by hooking into the existing event loop and performing regular snapshots of the entire program state [3]. The debugger also supports so-called "postmortem debugging", which allows a full program recording to be captured and reconstructed after the fact. Full-program snapshots are streamed to a remote server where they can later be loaded on another machine.

A similar tool for the browser, Timelapse, was developed by Burg et al. [4]. Like Jardis, Timelapse intercepts low-level events in the runtime's event loop. It differs in its record/replay strategy, choosing to recreate events during playback instead of capturing entire program snapshots.

While Jardis and Timelapse rely on platform-specific extensions to work (Node.js and WebKit respectively), McFly presents a method portable across all spec-compliant browsers [12]. Although implemented as an extension to the Microsoft Edge browser, its authors note that McFly's architecture builds on existing Web standards. Additionally, McFly supports step-backwards commands at interactive speeds, where neither Jardis nor Timelapse can due to its intervallic snapshot process.

For debugging Web Audio applications specifically, the literature is less rich. Adenot provides comprehensive notes on the performance characteristics of various audio nodes and tips on how to improve audio processing performance [2]. Mozilla maintained the now-deprecated Web Audio Editor [9] and Google offers a browser extension for their Chrome browser titled the Web Audio Inspector [6]. Both tools provide a visual graph corresponding to the current Web Audio graph, letting developers get an overview of the signal chain and inspect the parameters of individual nodes. Developers may also edit individual parameters in the Web Audio Editor, and these changes are reflected in real-time. Choi developed Canopy, a testing and verification Web app for Web Audio code [5]. Like the others, this tool provides a graphical view of the audio graph and provides capabilities for audio recording, exporting, and waveform inspection.
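For context, the audio graphs these inspectors visualise are built from connected AudioNode objects. The short snippet below constructs one of the simplest possible graphs (an oscillator routed through a gain node to the speakers); it is a generic, illustrative Web Audio API example, not code taken from any of the tools above.

    // A minimal Web Audio graph: oscillator -> gain -> destination.
    const context = new AudioContext();
    const osc = context.createOscillator();
    const amp = context.createGain();

    osc.frequency.value = 440; // A4
    amp.gain.value = 0.5;      // reduce the output level

    osc.connect(amp);
    amp.connect(context.destination);
    osc.start();

Even a small interactive application quickly grows such a graph into many interconnected nodes, which is why a visual overview of the signal chain is useful.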
2. TIME-TRAVEL DEBUGGING

The time-travel debugging systems we have mentioned share some common shortcomings, particularly when considering their use in audio application development. These systems only support one browser, either using a browser-specific extension API as seen in McFly, or by building a new browser entirely as in Timelapse. It is important to note that different browsers often have different implementations of the same API, including the Web Audio API [8], so cross-browser testing is critical in ensuring the application will work for all users. Crucially, none of these systems explicitly support the Web Audio API.

Figure 1: The debugger's simple view. It provides controls to enable/disable the debugger, step to the next or previous action, play/pause timed playback, expand the window to show the current application model, and export/import the recording as JSON.

Turning our attention to existing debugging tools for the Web Audio API, we see these tools primarily focus on visualising the audio graph or improving performance. While Canopy can be a useful tool to verify signal processing code's correctness, it does little to determine how that code might be incorrect. These are all useful tools, but they do nothing to determine how a program ended up in an invalid state.

We present a time-travel debugger for Web Audio applications that attempts to address these issues. Some of the features of this debugger are listed below:

• Audio context remains running while the debugger is active. This means the audio output is updated in real-time when stepping between actions in the debugger.

• Timing information is preserved between actions, bringing support for accurate timed playback of a sequence of actions.

• JSON import/export for session recordings allows for postmortem debugging.

• Cross-platform support: the debugger is available in any browser that supports the Web Audio API.

We have implemented our time-travel debugger as part of the Flow framework: a declarative framework for interactive Web Audio applications based on the Model-View-Update architecture. Applications written in Flow have a single source of application state, known as the model. Changes to this model are handled by a single update function that receives the current model and an action describing some event (e.g. the user clicked a button or a tick from a timer) and returns a new model. That model is then passed to separate functions for rendering the HTML and producing a Web Audio graph. More in-depth information on the Flow framework can be found in a previous paper [1].
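As a rough illustration of this architecture (not Flow's actual API, for which see [1]), a Model-View-Update program can be pictured as an initial model plus three pure functions; all names in the sketch below are hypothetical.

    // Hypothetical sketch of a Model-View-Update audio app; not Flow's real API.
    const init = { frequency: 440, playing: false };

    // update: (model, action) -> new model. The only place state changes.
    function update(model, action) {
      switch (action.type) {
        case 'Toggle':       return { ...model, playing: !model.playing };
        case 'SetFrequency': return { ...model, frequency: action.value };
        default:             return model;
      }
    }

    // view: model -> description of the HTML to render.
    function view(model) {
      return { tag: 'button', text: model.playing ? 'Stop' : 'Play' };
    }

    // audio: model -> description of the Web Audio graph to render.
    function audio(model) {
      return model.playing
        ? { node: 'oscillator', frequency: model.frequency, to: 'speakers' }
        : null;
    }

Because all three functions are pure, re-running view and audio on a previously recorded model is enough to reproduce what the user saw and heard at that point.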
This architecture requires applications to be written in a pure functional style. Side effects are performed by Flow's runtime, with the result of those effects being passed to the user-defined update function as an action, in the same way that interaction events are handled. Flow's time-travel debugger works by hooking into the runtime and capturing incoming actions and their resultant models. Additionally, these snapshots are paired with an accurate timestamp relative to the program's start.

Because producing actions is the only way to respond to events or return from side effects, capturing these actions provides us with a simple yet powerful method of recording a program.

Developers may then enable the debugger. Doing so unregisters all event listeners in the application (including timing and audio-related events, and pending asynchronous operations such as HTTP requests) and allows the developer to step backwards and forwards in time. This is achieved through a debugging window that appears whenever an application is running in debug mode (figure 1) and that can be expanded to show the current application model, as in figure 2.

When stepping to any action, the model associated with that action is passed to the user-defined audio and view functions and then rendered. This approach differs significantly from existing time-travel debugging solutions, which typically capture as much information as possible to recreate the program state themselves. The debugger can work this way because the model is always the single source of truth for audio and visual output; side effects are managed by the runtime and must make their way through the model-view-update architecture if their result is to be observed.

Notably, the audio context remains running while the debugger is enabled. This means the runtime faithfully renders audio graphs produced by stepping through actions. From the runtime's perspective, there is little difference between the actions created while the program is running and those produced while the debugger is active.
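The sketch below shows one way such action recording, stepping, and timed playback could be structured. It is our own illustrative JavaScript under the assumptions described above, not the Flow runtime's actual implementation, and every name in it is hypothetical.

    // Hypothetical sketch of action recording, stepping, and timed playback.
    const recording = [];
    const startTime = performance.now();

    // Called whenever an action has produced a new model: store both,
    // along with a timestamp relative to the program's start.
    function record(action, model) {
      recording.push({ action, model, time: performance.now() - startTime });
    }

    // Stepping to an action re-renders the view and audio graph from the
    // model captured alongside it.
    function stepTo(index, renderView, renderAudio) {
      const { model } = recording[index];
      renderView(model);
      renderAudio(model);
    }

    // Timed playback replays the snapshots with their original spacing.
    function play(renderView, renderAudio) {
      recording.forEach((snapshot, i) => {
        setTimeout(() => stepTo(i, renderView, renderAudio), snapshot.time);
      });
    }

    // Export as JSON for postmortem debugging, assuming actions and models
    // are plain, serialisable data.
    const session = JSON.stringify(recording);

A session exported this way can be imported on another machine and stepped through exactly as if it had just been recorded, which is the basis of the error-reproduction use-case described in the abstract.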