Data2Vis: Automatic Generation of Data Visualizations Using Sequence to Sequence Recurrent Neural Networks
Victor Dibia (IBM Research), Çağatay Demiralp (IBM Research)

ABSTRACT

Rapidly creating effective visualizations using expressive grammars is challenging for users who have limited time and limited skills in statistics and data visualization. Even high-level, dedicated visualization tools often require users to manually select among data attributes, decide which transformations to apply, and specify mappings between visual encoding variables and raw or transformed attributes. In this paper we introduce Data2Vis, a neural translation model for automatically generating visualizations from given datasets. We formulate visualization generation as a sequence to sequence translation problem where data specifications are mapped to visualization specifications in a declarative language (Vega-Lite). To this end, we train a multilayered attention-based recurrent neural network (RNN) with long short-term memory (LSTM) units on a corpus of visualization specifications. Qualitative results show that our model learns the vocabulary and syntax for a valid visualization specification, appropriate transformations (count, bins, mean) and how to use common data selection patterns that occur within data visualizations. Data2Vis generates visualizations that are comparable to manually-created visualizations in a fraction of the time, with potential to learn more complex visualization strategies at scale.

Author Keywords
Automated visualization design; neural machine translation; sequence to sequence models; deep learning; LSTM; Vega-Lite.

INTRODUCTION

Users create data visualizations using a range of tools with a range of characteristics (Figure 1). Some of these tools are more expressive, giving expert users more control, while others are easier to learn and faster to create visualizations, appealing to general audiences. For instance, imperative APIs such as OpenGL and HTML Canvas provide greater expressivity and flexibility but require significant programming skills and effort. On the other hand, dedicated visual analysis tools and spreadsheet applications (e.g., Microsoft Excel, Google Spreadsheets) provide ease of use and speed in creating standard charts based on templates but offer limited expressivity and customization.

Figure 1. Axis of visualization specification. Data visualizations are created with a spectrum of tools, ranging from interactive chart templates (e.g., Excel, Google Charts, Tableau) through declarative visual analysis and visualization grammars (e.g., Vega-Lite, ggplot2, VizQL, VizML, Brunel; Vega, D3, Protovis) to imperative component architectures and graphics APIs (e.g., Prefuse, Processing, Java2D, HTML Canvas; OpenGL, DirectX). Some of these tools are faster to create visualizations, others are more expressive [28].

Declarative specification grammars such as ggplot2 [61], D3 [10], Vega [50], and Vega-Lite [49] provide a trade-off between speed and expressivity. However, these grammars also come with steep learning curves, can be tedious to specify depending on the syntax and abstraction level adopted, and can suffer from reusability issues. In fact, little is known about the user experience with visualization grammars beyond the degree to which they are used. For example, ggplot2 can be difficult for users who are not familiar with R. Vega, which is based on a JSON schema, can be tedious even for users who are familiar with JSON. Even tools with higher-level abstractions, such as the ones based on chart templates, often require the user to manually select among data attributes, decide which statistical computations to apply, and specify mappings between visual encoding variables and either the raw data or the computational summaries. This task can be daunting with complex datasets, especially for typical users who have limited time and limited skills in statistics and data visualization.

To address these challenges, researchers have proposed techniques and tools to automate designing effective visualizations [14, 20, 36, 43, 37, 48] and guide users in visual data exploration [2, 19, 23, 44, 48, 52, 54, 59, 65, 66, 67].

Prior techniques and tools for automated visualization design and visualization recommendation are based on rules and heuristics. The need to explicitly enumerate rules or heuristics limits the scalability of these approaches and does not take advantage of the expertise codified within existing visualizations. Automated and guided visualization design and exploration can significantly benefit from implicitly learning these rules from examples (i.e., data), effectively incorporating both data and visualization design context.

In this work, we formulate visualization design as a sequence to sequence translation problem. To operationalize our formulation, we train an LSTM-based neural translation model (Data2Vis) on a corpus [46] of Vega-Lite visualization specifications, taking advantage of Vega-Lite's (and similar grammars') design motivation to support programmatic generation. We demonstrate the model's use in automatically generating visualizations, with applications in democratizing the visualization authoring process for novice users and helping more experienced users jump-start visualization design.
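The sequence to sequence framing above treats a serialized data record as the source sequence and a serialized Vega-Lite specification as the target sequence. A minimal sketch of this framing is shown below; the character-level tokenization, field names, and example pair are illustrative assumptions, not the paper's actual preprocessing pipeline.

```python
import json


def to_source_sequence(record):
    """Serialize a data record into a character token sequence
    (the source side of the translation problem)."""
    return list(json.dumps(record, sort_keys=True))


def to_target_sequence(spec):
    """Serialize a Vega-Lite specification into a character token
    sequence (the target side of the translation problem)."""
    return list(json.dumps(spec, sort_keys=True))


# Hypothetical training pair: one data record and a Vega-Lite spec
# a human author might pair with data of this shape.
record = {"price": 320, "sqft": 1400}
spec = {
    "mark": "point",
    "encoding": {
        "x": {"field": "sqft", "type": "quantitative"},
        "y": {"field": "price", "type": "quantitative"},
    },
}

source = to_source_sequence(record)  # e.g. ['{', '"', 'p', ...]
target = to_target_sequence(spec)
print(len(source), len(target))
```

A seq2seq model then learns the conditional distribution of target tokens given source tokens, emitting the specification one token at a time.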
Our contributions include 1) formulating visualization design as a sequence to sequence translation problem, 2) demonstrating its viability by training a sequence to sequence model, Data2Vis, on a relatively small training dataset and then effectively generating visualizations of test data, and 3) integrating Data2Vis into a web-based application that has been made publicly available at http://hci.stanford.edu/~cagatay/data2vis. Our work is the first in applying deep neural translation to visualization generation and has important implications for future work, opening the way to implicitly learn visualization design and visual analysis rules from examples at scale.

In what follows, we first summarize related work, followed by details of the Data2Vis model and its training process. We then present our results, providing several visualization examples automatically generated using the trained model. Next we discuss the potential impact of Data2Vis and its current limitations and provide an agenda for future work. We conclude by summarizing our contributions and insights.

Figure 2. A Vega-Lite specification (left) and the generated visualization (right). Users can succinctly specify complex selections, transformations and interactions using the Vega-Lite grammar formatted in JSON [49].

RELATED WORK

Our work is related to earlier efforts in effective visualization specification, automated visualization design, and deep neural networks (DNNs) for synthesis and machine translation.

Declarative Visualization Specification

Earlier data visualization work proposes grammars and algebraic operators over data as well as visual encoding and design variables to specify visualizations (Figure 1). Wilkinson's seminal work [62] introduces a grammar of graphics and its implementation (VizML), greatly shaping the subsequent research on visualization specification. Polaris [55] (now called Tableau) uses a table algebra drawn from Wilkinson's grammar of graphics. The table algebra of Polaris later evolved to VizQL [27], forming the underlying representation of Tableau visualizations. Wickham introduces ggplot2 [61], a widely popular package in the R statistical language, based on Wilkinson's grammar. Similarly, Protovis [9], D3 [10], Vega [50], Brunel [64], and Vega-Lite [49] all provide grammars to declaratively specify visualizations. Some of them have limited support for customization. Conversely, these grammars facilitate expressivity by enabling a combinatorial composition of low-level building blocks such as graphical marks, scales, visual encoding variables, and guides. However, increased expressivity often decreases the speed with which visualizations can be created and makes the learning more difficult, limiting the number of users who can effectively use the specification method. One of our aims with Data2Vis is to bridge this gap between speed and expressivity in specifying visualizations.

Automated Visualization

Prior work proposes desiderata and tools (e.g., [14, 20, 37, 43, 48]) to automatically design effective visualizations, building on Bertin's study [7] of visual encoding variables and earlier graphical perception research, e.g., [1, 5, 18, 38, 43, 53]. Earlier research also develops interactive systems and recommendation schemes [11, 25, 44, 52, 54, 57, 58, 59, 63, 65, 66, 67] to guide users in exploratory data analysis and visualization design. PRIM-9 [23], GrandTour [2], SeeDB [59], Zenvisage [54], ShowMe [44], Voyager [66], Voyager 2 [67], SAGE [48] and VizDeck [36] prioritize charts according to one or more evaluation measures such as data saliency, data coverage, perceptual effectiveness, user task, and user preferences. Similarly, Rank-by-Feature [52], AutoVis [65], and Foresight [19] use statistical criteria over data attributes and instances in recommending and ranking visualizations. Data2Vis departs from the rule-based approaches of prior work in both its conceptual formulation and its technical approach: it learns how to create visualization specifications from examples without resorting to enumeration of rules or heuristics, complementing earlier work.
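As a concrete instance of the declarative specifications these grammars provide, the sketch below constructs a minimal Vega-Lite histogram spec that pairs a binned quantitative field with a count aggregate, the kinds of transformations (bin, count, mean) the abstract notes Data2Vis learns to apply. The field name and helper function are illustrative, not part of the Vega-Lite API.

```python
import json


def histogram_spec(field):
    """Build a minimal Vega-Lite histogram specification: bin a
    quantitative field on x and count records on y."""
    return {
        "mark": "bar",
        "encoding": {
            "x": {"field": field, "type": "quantitative", "bin": True},
            "y": {"aggregate": "count", "type": "quantitative"},
        },
    }


spec = histogram_spec("horsepower")
print(json.dumps(spec, indent=2))
```

Because such specs are plain JSON with a small, regular vocabulary of keys and values, they are well suited to token-by-token generation by a trained sequence model.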
