HPC and AI Initiatives for Supercomputer Fugaku and Future Prospects

Atsushi Nukariya, Kazutoshi Akao, Jin Takahashi, Naoto Fukumoto, Kentaro Kawakami, Akiyoshi Kuroda, Kazuo Minami, Kento Sato, Satoshi Matsuoka

The development of AI technology is indispensable for the realization of Society 5.0, which has been announced as part of Japan's science and technology policy. As such development proceeds, the computational resources required for AI learning continue to increase. Through the development of the K computer and the supercomputer Fugaku (hereafter, Fugaku), Fujitsu has been providing high performance computing (HPC) systems with abundant computing resources. Now, in order to utilize the abundant computational resources of HPC systems for AI learning, we are working with RIKEN to develop an AI infrastructure on Fugaku. This article describes the current status of our joint project with RIKEN to test and evaluate the performance of AI-related software on Fugaku and our future work on HPC and AI.

1. Introduction

In 2016, Society 5.0 was proposed as part of Japan's science and technology policy [1]. In the society realized by Society 5.0, one aim is to add value to industry through the analysis of big data using AI.

The development of AI technology is essential for the utilization of AI in society and for leading the world in the realization of Society 5.0. As such development proceeds, the computational resources (e.g. processors, memory) required for AI learning and inference continue to increase. Against this background, the use of AI on high performance computing (HPC) systems, which are rich in computational resources, is being promoted.

This article describes the relationship between HPC and AI and the AI-related work being performed using the supercomputer Fugaku (hereafter, Fugaku). It goes on to describe the future outlook of this initiative.

2. HPC and AI

This section describes the relationship between HPC and AI from the perspective of the use of AI in HPC.

2.1 Fugaku as an AI platform

Three factors are considered to be key contributors to the development of AI technology, which is essential for the realization of Society 5.0 [2]:
• Large amounts of data required for AI learning
• A high-performance AI platform
• Talented AI personnel

We aim for Fugaku to become the core infrastructure of Society 5.0 by providing a high-performance AI platform that can process large amounts of data at high speed. Further, by providing such a platform, we expect to attract talented AI personnel from around the world, which will promote the development of AI technology.

2.2 Utilization of AI in HPC

This subsection describes the use of AI in HPC, including the acceleration of deep learning and examples of the application of AI to HPC applications.

1) Acceleration of deep learning
The main reason for the rapid development of AI in recent years is the improvement of deep learning technology. Deep learning can learn more features than conventional machine learning, but it requires a large number of computations. Moreover, since a large amount of data is used for learning, a mechanism to process that data at high speed is necessary.

On the other hand, in recent years, the abundance of computational resources has made learning in a realistic time possible. As a result, the use of deep learning is expanding.
The CPUs of the K computer and Fugaku have high computing performance, high parallel performance, and wide memory bandwidth, which makes them well suited to deep learning. Going forward, AI applications that use deep learning are expected to be used increasingly on HPC systems rich in computational resources, further accelerating deep learning.

2) Application of AI technology to HPC applications
Initiatives have also been launched to realize Society 5.0 through the application of AI technology to HPC applications. For example, deep learning is expected to be useful in coping with large-scale natural disasters by improving the resolution of satellite images and removing cloud cover to provide a more detailed picture of the ground surface [3]. AI is also being applied to system software tuning using deep learning for the enhancement of HPC [4]. In addition, AI is expected to be used to accelerate data augmentation, a method for improving the accuracy of deep learning itself [5].

The development of an AI platform for HPC is therefore also important for performing AI-related computations as part of HPC applications.

3. Deep learning initiatives for Fugaku

This section describes the DL4Fugaku project and other deep learning initiatives for Fugaku.

3.1 Launch of the DL4Fugaku project

The DL4Fugaku project was launched with the aim of running AI software at high speed in order to provide Fugaku as an AI platform [6]. The project aims to develop AI software through tuning and operation verification, and RIKEN and Fujitsu have signed a memorandum of understanding under which they are conducting joint research and development [7]. Research and development of frameworks and applications that enable efficient distributed learning is also underway.

The software developed by the project will be released on an ongoing basis as open-source software (OSS) [8].

3.2 Development of a deep learning framework

This subsection gives an overview of the deep learning framework work produced by the DL4Fugaku project and its performance evaluation.

1) Trends in and structure of deep learning frameworks
PyTorch, TensorFlow, Chainer, and MXNet, among other deep learning frameworks, have been released as OSS. PyTorch and TensorFlow adopt a design in which the application programming interface (API) part, provided for various languages such as Python, is separated from the operation part, implemented in C++ (Figure 1). The operation part is implemented in C++ because most of the learning and inference processing that requires high-speed execution takes place there.

The main processing content of the operation part in deep learning is matrix operations. Learning and inference are therefore sped up by calling arithmetic libraries optimized for Intel CPUs [9], such as the oneAPI Deep Neural Network Library (oneDNN) and the Basic Linear Algebra Subprograms (BLAS), from the operation part.

Figure 1: Internal structure of deep learning frameworks. Applications (in Python etc.) call the framework's API part; the operation part underneath calls oneDNN and BLAS on Intel CPUs, and dnnl_aarch64 and the TCS mathematical library on Fugaku.
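To make concrete why matrix operations dominate the operation part, the short NumPy sketch below (our illustration, not code from any of the frameworks or libraries named above) lowers a small convolution to a single GEMM using the classic im2col transform. This is the same lowering that lets a framework's operation part hand convolutions to optimized GEMM routines such as those in oneDNN or BLAS, and it is the approach taken for the Chainer kernels described in the next subsection.

```python
# Minimal im2col + GEMM convolution sketch (illustrative only).
import numpy as np

def conv2d_via_gemm(x, w):
    """x: input (C_in, H, W); w: filters (C_out, C_in, KH, KW).
    Returns (C_out, H_out, W_out); stride 1, no padding."""
    c_in, h, w_in = x.shape
    c_out, _, kh, kw = w.shape
    h_out, w_out = h - kh + 1, w_in - kw + 1

    # im2col: unfold each receptive field into one column of a matrix.
    cols = np.empty((c_in * kh * kw, h_out * w_out))
    for i in range(h_out):
        for j in range(w_out):
            cols[:, i * w_out + j] = x[:, i:i + kh, j:j + kw].ravel()

    # The entire convolution is now a single GEMM; this is the call an
    # optimized library (oneDNN, BLAS, dnnl_aarch64, ...) would execute.
    out = w.reshape(c_out, -1) @ cols
    return out.reshape(c_out, h_out, w_out)

rng = np.random.default_rng(0)
y = conv2d_via_gemm(rng.standard_normal((3, 8, 8)),
                    rng.standard_normal((4, 3, 3, 3)))
print(y.shape)  # (4, 6, 6)
```

Because the arithmetic is concentrated in one large matrix multiply, tuning a single GEMM kernel for a CPU pays off across many layers and models, which is why swapping the backend library is enough to accelerate a whole framework.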
2) Demonstration of the deep learning performance of Fugaku with Chainer
To verify the feasibility of improving the performance of AI software on Fugaku, we performed performance tuning and evaluated massively parallel execution using Chainer, which is widely used in Japan.

Although there are several algorithms for the convolution operations used heavily in deep learning, in Chainer they are often implemented using GEMM (General Matrix Multiply) operations. Therefore, with a view to combining performance tuning and evaluation with the deep learning library dnnl_aarch64 and the mathematical library of the Technical Computing Suite (TCS), the system software of the K computer, we created a prototype of a general-purpose high-speed kernel based on GEMM operations that could be used on the CPUs of both the K computer and Fugaku, and we tuned it on the K computer. The tuning consisted of thread parallelization of convolution operations with the TCS mathematical library and acceleration of the Adam optimization algorithm. This made the learning operation on the K computer 36.4 times faster.

On Fugaku, we performed further parallel tuning, such as accelerating I/O processing by pseudo-staging with the newly designed file system Lightweight Layered IO Accelerator (LLIO) and accelerating the import processing of Python packages. Through this tuning, we achieved learning processing performance of 410,726 ips (images per second) on 72 racks (27,648 nodes) using the ResNet-50 v1.5 image recognition neural network. As shown in Figure 2, the performance improved in proportion to the number of racks, and considerable further improvement can be expected in larger configurations.

Figure 2: Performance of distributed learning using Chainer (ResNet-50); images processed per second versus number of racks.

Chainer moved into a maintenance phase in 2019, and migration to PyTorch is now being encouraged [10]. Therefore, future performance evaluation of massively parallel execution will be performed using PyTorch.

3) Acceleration initiatives for Fugaku
Fujitsu is developing the dnnl_aarch64 and TCS math libraries optimized for the Fugaku CPU (A64FX), which uses the ARM architecture. Replacing oneDNN and BLAS, which are optimized for Intel CPUs, with dnnl_aarch64 and the TCS math library, respectively, is expected to accelerate deep learning frameworks on Fugaku.

4) ResNet-50 evaluation with PyTorch/TensorFlow
The dnnl_aarch64 and TCS mathematical libraries were incorporated into PyTorch and TensorFlow, and ResNet-50 v1.5 performance was evaluated on the A64FX. With PyTorch, we achieved 98.8 ips for learning and 383.7 ips for inference on a single node (red figures in Table 1). We also evaluated the performance of Horovod, a distributed learning library for deep learning…

Table 1: Single-node performance with PyTorch/TensorFlow.
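The Horovod measurements follow the standard data-parallel pattern, which the sketch below illustrates; it is a minimal example of ours, not the evaluation code used in this project, and the model, batch size, and learning-rate scaling are placeholders.

```python
# Minimal Horovod + PyTorch data-parallel training sketch (illustrative).
import torch
import torch.nn.functional as F
import horovod.torch as hvd

hvd.init()  # one process per node/CPU, launched via horovodrun or mpirun

model = torch.nn.Linear(1000, 10)          # placeholder for e.g. ResNet-50
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.01 * hvd.size())  # common LR scaling rule

# Wrap the optimizer so gradients are averaged across all ranks (allreduce).
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# Start every rank from identical parameters.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

for step in range(10):
    x = torch.randn(32, 1000)              # stand-in for a real minibatch
    t = torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), t)
    loss.backward()
    optimizer.step()                       # allreduce, then parameter update
```

Launched with, for example, horovodrun -np 4 python train.py, the same script runs on one node or thousands; scaling then depends on how efficiently the allreduce maps onto the system's interconnect.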