Cavs: An Efficient Runtime System for Dynamic Neural Networks


Shizhen Xu, Carnegie Mellon University, Tsinghua University; Hao Zhang, Graham Neubig, and Wei Dai, Carnegie Mellon University, Petuum Inc.; Jin Kyu Kim, Carnegie Mellon University; Zhijie Deng, Tsinghua University; Qirong Ho, Petuum Inc.; Guangwen Yang, Tsinghua University; Eric P. Xing, Petuum Inc. (Shizhen Xu and Hao Zhang contributed equally.)

In Proceedings of the 2018 USENIX Annual Technical Conference (USENIX ATC '18), July 11–13, 2018, Boston, MA, USA. ISBN 978-1-939133-02-1. https://www.usenix.org/conference/atc18/presentation/xu-shizen

Abstract

Recent deep learning (DL) models are moving more and more to dynamic neural network (NN) architectures, where the NN structure changes for every data sample. However, existing DL programming models are inefficient in handling dynamic network architectures because of: (1) substantial overhead caused by repeating dataflow graph construction and processing every example; (2) difficulties in batched execution of multiple samples; (3) inability to incorporate graph optimization techniques such as those used in static graphs. In this paper, we present Cavs, a runtime system that overcomes these bottlenecks and achieves efficient training and inference of dynamic NNs. Cavs represents a dynamic NN as a static vertex function F and a dynamic instance-specific graph G. It avoids the overhead of repeated graph construction by only declaring and constructing F once, and allows for the use of static graph optimization techniques on pre-defined operations in F. Cavs performs training and inference by scheduling the execution of F following the dependencies in G, hence naturally exposing batched execution opportunities over different samples. Experiments comparing Cavs to state-of-the-art frameworks for dynamic NNs (TensorFlow Fold, PyTorch and DyNet) demonstrate the efficacy of our approach: Cavs achieves a near one order of magnitude speedup on training of dynamic NN architectures, and ablations verify the effectiveness of our proposed design and optimizations.

1 Introduction

Deep learning (DL), which refers to a class of neural networks (NNs) with deep architectures, is now a workhorse powering state-of-the-art results on a wide spectrum of tasks [53, 54, 30]. One reason for its widespread adoption is the variety and quality of software toolkits, such as Caffe [23], TensorFlow [1], PyTorch [36] and DyNet [33, 34], which ease programming of DL models, and speed computation by harnessing modern computing hardware (e.g. GPUs), software libraries (e.g. CUDA, cuDNN [6]), and compute clusters [56, 57, 7].

One dominant programming paradigm, adopted by DL toolkits such as Caffe and TensorFlow, is to represent a neural network as a static dataflow graph [32, 1], where computation functions in the NN are associated with nodes in the graph, and the inputs and outputs of the computation map to edges. It requires DL programmers to define the network architecture (i.e. the dataflow graph) using symbolic expressions, once before beginning execution. Then, for a given graph and data samples, the software toolkits can automatically derive the correct algorithm for training or inference, following backpropagation [21] and auto-differentiation rules. With proper optimization, the execution of these static dataflow graphs can be highly efficient; as the dataflow graph is fixed for all data, the evaluation of multiple samples through one graph can be naturally batched, leveraging the improved parallelization capability of modern hardware (e.g. GPUs). Moreover, by separating model declaration and execution, it makes it possible for the graph to be optimized once at declaration time [1], with these optimizations benefiting the efficiency of processing arbitrary input data batches at execution time.
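To make the declare-once, execute-many workflow of static declaration concrete, the following sketch mimics it in miniature. It is illustrative only: the Graph class and its add/run methods are invented for this example and do not correspond to any particular toolkit's API.

```python
# A minimal sketch of the static-declaration paradigm described above.
# The Graph/add/run names are hypothetical, not a real toolkit API.

import numpy as np

class Graph:
    """Toy symbolic graph: records ops once, replays them on any batch."""
    def __init__(self):
        self.ops = []                # (name, fn) pairs recorded at declaration time
    def add(self, name, fn):
        self.ops.append((name, fn))
        return name
    def run(self, batch):
        value = batch
        for _, fn in self.ops:       # the same fixed graph for every batch
            value = fn(value)
        return value

# Declared once, before any data is seen.
W = np.random.randn(128, 128).astype(np.float32)
g = Graph()
g.add("affine", lambda x: x @ W)
g.add("relu",   lambda x: np.maximum(x, 0.0))

# Executed many times; samples are trivially batched along axis 0
# because every sample flows through the identical graph.
for _ in range(3):
    batch = np.random.randn(64, 128).astype(np.float32)  # 64 samples at once
    out = g.run(batch)
```

Because the graph never changes, the toolkit can optimize it once at declaration time and amortize that cost over every batch it later executes.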
While the dataflow graph has major efficiency advantages, its applicability relies heavily on a key assumption: the graph (i.e. the NN architecture) is fixed throughout the runtime. This assumption however breaks for dynamic NNs, where the network architecture conditionally changes with every input sample, such as NNs that compute over sequences of variable lengths [22, 43], trees [45], and graphs [26].

Due to the growing interest in these sorts of dynamic models, recent years have seen an increase in the popularity of frameworks based on dynamic declaration [49, 33, 11], which declare a different dataflow graph per sample. While dynamic declaration is convenient to developers, as it removes the restriction that computation be completely specified before training begins, it exhibits a few limitations. First, constructing a graph for every sample results in substantial overhead, which grows linearly with the number of input instances; in fact, we find that graph construction takes longer than the computation itself in some frameworks (see §5.2). It also prevents the application of complex static graph optimization techniques (see §3.4). Moreover, since each sample owns a dataflow graph specifying its unique computational pattern, batching together similarly shaped computations across instances is non-trivial. Without batching, the computation is inefficient due to its inability to exploit modern computational hardware. While some progress has been made in recent research [34, 27], how to automatically batch the computational operations from different graphs remains a difficult problem.

To address these challenges, we present Cavs, an efficient runtime system for dynamic NNs that exploits the recurrent and recursive nature of dynamic NNs. Instead of declaring a dataflow graph per sample, it decomposes a dynamic NN into two components: a static vertex function F that is only declared (by the user) and optimized once before execution, and an input-specific graph G obtained via I/O at runtime. Cavs inherits the flexibility of symbolic programming [1, 12, 33] for DL; it requires users to define F by writing symbolic expressions in the same way as in static declaration. With F and G, the workflow of training or testing a dynamic NN is cast as scheduling the execution of F following the structure of the input graph G. Cavs will perform auto-differentiation, schedule the execution following dependencies in G, and guarantee efficiency and correctness.
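The following sketch illustrates the F/G decomposition in a highly simplified form; it is not the Cavs API. The vertex_fn and run_over_graph helpers are hypothetical, and a real runtime would execute the whole ready frontier as one fused, batched call to F on the GPU rather than looping over vertices.

```python
# A conceptual sketch (not the actual Cavs API) of the F/G decomposition:
# a vertex function F is declared once, and the runtime walks an
# input-specific graph G, processing vertices whose children are finished.

import numpy as np

def vertex_fn(child_states, x):
    """Hypothetical vertex function F: combine child states with the
    vertex's own input. Declared once; shapes are fixed per vertex."""
    h = x + sum(child_states) if child_states else x
    return np.tanh(h)

def run_over_graph(children, inputs):
    """Evaluate F over one instance-specific graph G.
    `children[v]` lists the child vertices of v; leaves have no children."""
    states, done = {}, set()
    while len(done) < len(children):
        # Vertices whose children are all finished form the ready frontier;
        # a batching runtime would execute this frontier in a single call to F.
        ready = [v for v in children
                 if v not in done and all(c in done for c in children[v])]
        for v in ready:                      # shown sequentially for clarity
            states[v] = vertex_fn([states[c] for c in children[v]], inputs[v])
            done.add(v)
    return states

# Example: a 3-vertex tree (vertex 2 is the root with children 0 and 1).
children = {0: [], 1: [], 2: [0, 1]}
inputs = {v: np.random.randn(8).astype(np.float32) for v in children}
root_state = run_over_graph(children, inputs)[2]
```

Note that only G changes from sample to sample here; F itself is fixed, which is what lets it be optimized once and batched across vertices drawn from many different input graphs.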
In- plement the Cavs runtime system (§3.1); (2) We propose stead of declaring a dataflow graph per sample, it decom- several novel strategies in Cavs for efficient training and poses a dynamic NN into two components: a static ver- inference of dynamic NNs: the batching policy (§3.2), a tex function F that is only declared (by the user) and memory management mechanism to guarantee the mem- optimized once before execution, and an input-specific ory coalescing (§3.3), and multiple graph execution op- graph G obtained via I/O at runtime. Cavs inherits the timizations (§3.4); (3) We compare Cavs to state-of-the- flexibility of symbolic programming [1, 12, 33] for DL; art systems for dynamic NNs (§5). We reveal the prob- it requires users to define F by writing symbolic expres- lems of existing systems, and report near 10x speedup sions in the same way as in static declaration. With F for Cavs on various experimental settings. We also ver- and G, the workflow of training or testing a dynamic NN ify the effectiveness of our proposed design strategies, is cast as scheduling the execution of F following the and quantize their contributions to the final performance. structure of the input graph G. Cavs will perform auto- 2 Background differentiation, schedule the execution following depen- dencies in G, and guarantee efficiency and correctness. 2.1 Dynamic Neural Networks Cavs’ design allows for highly efficient computation Successful NN models generally exhibit suitable archi- in dynamic graphs for a number of reasons. First, it tectures that capture the structures of the input data. allows the vertex function only to be defined and con- For example, convolutional neural networks [24, 53], structed once for any type of structured data, hence which apply fixed-structured operations to fixed-sized avoiding the overhead of repeated dataflow graph con- images, are highly effective precisely because they cap- struction. Second, as the dataflow graph encoded by the ture the spatial invariance common in computer vision vertex function is static throughout the runtime, it can domains [39, 44]. However, apart from images, many benefit from various static graph optimizations [1, 5, 12, forms of data are structurally complex and can not be 18](§3.4), which is not the case in the scenario of dy- readily captured by fixed-structured NNs. Appropriately namic declaration (§2.2). Moreover, it naturally exposes reflecting these structures in the NN design has shown ef- opportunities for batched computation, i.e. we are able fective in sentiment analysis [45], semantic similarity be- to parallelize the execution of F over multiple vertices tween sentence pairs [40], and image segmentation [26]. from different input graphs (§3.2) with the support of our To see this, we will take the constituency parsing prob- proposed memory management strategy (§3.3). lem as an example. Sentences in natural languages are To evaluate Cavs’ performance, we compare it to sev- often represented by their constituency parse tree [45, eral state-of-the-art systems supporting dynamic NNs. 31], whose structure varies depending on the content of We focus our experiments on GPU training, and verify the sentence itself (Fig. 1(a)).
