Chainer: A Deep Learning Framework for Accelerating the Research Cycle

Seiya Tokui, Ryosuke Okuta, Takuya Akiba, Yusuke Niitani, Toru Ogawa, Shunta Saito, Shuji Suzuki, Kota Uenishi, Brian Vogel, Hiroyuki Yamazaki Vincent
[email protected]
Preferred Networks, Inc., Tokyo, Japan

ABSTRACT

Software frameworks for neural networks play a key role in the development and application of deep learning methods. In this paper, we introduce the Chainer framework, which aims to provide a flexible, intuitive, and high-performance means of implementing the full range of deep learning models needed by researchers and practitioners. Chainer provides acceleration using Graphics Processing Units with a familiar NumPy-like API through CuPy, supports general and dynamic models in Python through Define-by-Run, and also provides add-on packages for state-of-the-art computer vision models as well as distributed training.

CCS CONCEPTS

• Computer systems organization → Neural networks.

KEYWORDS

deep learning frameworks, GPU computing, distributed training, computer vision

ACM Reference Format:
Seiya Tokui, Ryosuke Okuta, Takuya Akiba, Yusuke Niitani, Toru Ogawa, Shunta Saito, Shuji Suzuki, Kota Uenishi, Brian Vogel, and Hiroyuki Yamazaki Vincent. 2019. Chainer: A Deep Learning Framework for Accelerating the Research Cycle. In The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '19), August 4–8, 2019, Anchorage, AK, USA. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3292500.3330756

1 INTRODUCTION

Deep learning is driving the third wave of artificial intelligence research [29]. Recent investigations indicate that deep learning is moving beyond its early successes in pattern recognition and toward new applications in diverse domains and industries. To implement these research ideas, a software framework for deep learning is required.

Implementing neural networks (NNs) requires a set of specialized building blocks, including multidimensional arrays, activation functions, and automatic differentiation. To avoid duplicating these tools, many developers used open-source deep learning frameworks such as Caffe [25] or Torch [11]. Because deep learning was first used successfully in computer vision and speech recognition, early deep learning frameworks were designed primarily for feed-forward networks such as convolutional neural networks (CNNs), which are effective for analyzing fixed-length data such as images.

More recently, additional types of deep learning models have become a major research topic. Following the impressive results in game playing [36], deep reinforcement learning has become a promising research area. In addition, after recurrent neural networks (RNNs) demonstrated promising results on variable-length data such as text, the use of these models has increased. RNNs with Long Short-Term Memory (LSTM) are currently being used with success for machine translation [47] and conversation models [49].

However, as most of the existing deep learning frameworks are designed for image processing using CNNs, they lack support for abstracting data structures and training models to implement more general deep learning models. In addition, many existing frameworks use a domain-specific language for representing deep learning models, along with an interpreter to translate them into a data structure stored in memory. Therefore, developers using these frameworks cannot use standard programming language debuggers, a significant problem because debugging is a major aspect of developing and tuning deep learning models.

We herein introduce Chainer, an open-source framework for deep learning that provides simple and efficient support for implementing complex algorithms, training models, and tuning model parameters. The remainder of the paper is organized as follows. Section 2 describes the standard architecture of existing deep learning frameworks. Section 3 introduces the architecture of Chainer. Section 4 describes performance techniques such as memory usage optimization and double backpropagation. Section 5 presents CuPy as a backend library for Graphics Processing Units (GPUs). Section 6 describes the distributed training capability. Section 7 introduces ChainerCV, an add-on package for computer vision. Section 8 presents related work. Finally, Section 9 provides a summary and directions for future work.

2 BACKGROUND AND MOTIVATION

In typical NN frameworks, models are built in two phases, in a paradigm that we name Define-and-Run (Figure 1a). In the Define phase, the computational graph of the model is first defined and constructed. This phase corresponds to the instantiation of a neural network object based on a model definition that specifies the data flow graph of inter-layer connections, initial weights, and activation functions. Automatic differentiation is typically used to define the computations for both the forward and backward passes, with optional graph optimizations being performed as well. In the Run phase, the actual forward and backward calculation of the graph is performed. Provided a set of training examples, the model is trained in this phase by minimizing the loss function using optimization algorithms such as stochastic gradient descent.

Under the Define-and-Run paradigm, static NN models such as CNNs can be implemented easily. The model definition may be written in a specific markup language such as Protobuf or YAML [18]. The deep learning framework serves as an interpreter that executes the model definition, which can be regarded as an independent NN program. The NN program receives the inputs (data examples), processes them (forward/backward computation), changes the model's internal state (updating), and outputs the results (predictions).
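To make the two-phase workflow concrete, the following is a minimal sketch of a Define-and-Run style program written with the TensorFlow 1.x graph API (the example is ours, not taken from the paper): the entire graph is declared symbolically in the Define phase, and concrete arrays only flow through it in the Run phase, inside a session.

    import numpy as np
    import tensorflow as tf  # TensorFlow 1.x graph-mode API

    # Define phase: build a static computational graph symbolically.
    x = tf.placeholder(tf.float32, shape=(None, 4))   # symbolic input
    t = tf.placeholder(tf.float32, shape=(None, 1))   # symbolic target
    w = tf.Variable(tf.random_normal((4, 1)))         # model parameter
    y = tf.matmul(x, w)                               # forward definition
    loss = tf.reduce_mean(tf.square(y - t))           # loss definition
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

    # Run phase: execute the fixed graph on concrete training examples.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(100):
            xs = np.random.randn(32, 4).astype(np.float32)
            ts = xs.sum(axis=1, keepdims=True)
            sess.run(train_op, feed_dict={x: xs, t: ts})

Once built, the graph is fixed; the Run phase only feeds different arrays through the same structure, which is what enables whole-graph optimizations but also what makes the model opaque at run time.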
The Define-and-Run paradigm operates well for static models such as CNNs because having the full computational graph available enables potential graph optimizations that improve memory efficiency and/or runtime performance. However, for implementing other types of NN models, two major problems arise.

The first is that it can be cumbersome to support general dynamic graphs, i.e., neural networks with control flow. In frameworks such as TensorFlow [6], control flow decisions are defined in the data flow graph using special operators such as Switch and Merge rather than using the control flow syntax of the host language.
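As an illustration (ours, not the paper's), branching on a runtime value inside a static TensorFlow 1.x graph has to go through an operator such as tf.cond, which is built on primitives like Switch and Merge; both branches must be declared as graph fragments up front rather than written as an ordinary Python if statement.

    import tensorflow as tf  # TensorFlow 1.x graph-mode API

    x = tf.placeholder(tf.float32, shape=())

    # Control flow embedded in the data flow graph: the decision is made
    # by a special operator, not by the host language's `if`.
    y = tf.cond(x > 0, lambda: x * 2.0, lambda: -x)

    with tf.Session() as sess:
        print(sess.run(y, feed_dict={x: -3.0}))  # prints 3.0

    # In a Define-by-Run framework, the same decision would be an ordinary
    # host-language branch executed during the forward pass.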
The second problem is that under the Define-and-Run paradigm, the inner mechanism of the neural network is not accessible to the user. This presents several difficulties in the creation of an effective model. For example, to debug and tune a model effectively, a user must be able to observe what is occurring inside the model. However, as a large ...

Figure 1: Relationship between computational graph construction and training. (a) Define-and-Run: the existing approach. (b) Define-by-Run: the new approach.

3.1 On Demand Graph Construction

Backpropagation is executed by bookkeeping the history of operations applied to the input arrays and backtracking this history. In the Define-by-Run paradigm, the history of operations is recorded simultaneously with the forward computation applied to concrete input arrays. This is achieved by creating a node in the computational graph for each variable and operation. The computational graph only defines how to backtrack the operations applied to the input; it does not define the forward computation. We define the computational graph with two types of nodes: variable nodes, which represent the variables involved in the computation, and function nodes, which represent the operations applied to the variables. After applying a function f to the input variable x and obtaining an output variable y, the function node n_f contains a reference to the input node n_x, and the output node n_y contains a reference to the function node n_f. These references are used to backtrack the graph.

The program that defines the forward computation is similar to any standard numerical computation that does not compute gradients. The only difference is that each differentiable function stores its computational history into the graph in addition to computing its output. Because the graph construction relies only on the execution trace of the program, it can be combined with arbitrary syntactic constructs of the host language, e.g., conditional branches and loops. Such a program generates a graph with a different topology and size at each invocation, while maintaining correct gradient computation. The power of the host language that we can use is not limited to such primitive language constructs; we can also leverage high-level tools such as debuggers and profilers.
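The following minimal sketch (ours, not from the paper) shows this style in Chainer itself: the forward computation is ordinary Python containing a loop and a branch, the history is recorded as the code executes, and backward() backtracks the recorded references (for example, y.creator points to the function node that produced y).

    import numpy as np
    import chainer
    import chainer.functions as F

    def forward(x, n_layers):
        # The graph is recorded while this ordinary Python code runs;
        # the loop and the branch decide the topology at each invocation.
        h = x
        for i in range(n_layers):
            h = F.relu(h) if i % 2 == 0 else F.tanh(h)
        return F.sum(h)

    x = chainer.Variable(np.random.randn(3, 4).astype(np.float32))
    y = forward(x, n_layers=3)

    print(y.creator)     # the function node that produced y
    y.backward()         # backtrack the recorded graph
    print(x.grad.shape)  # (3, 4): gradient with respect to the input

Because the graph is simply a record of what actually ran, standard Python debuggers and profilers can be applied directly to the forward code, as noted above.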
